
Agent_simulator response to get_config request #1266

Closed
Selutario opened this issue Apr 30, 2021 · 14 comments · Fixed by #4895 or wazuh/qa-integration-framework#139
Labels: level/task, qa_known, tool/agent-simulator, type/enhancement

Comments

@Selutario
Contributor

Description

Hi team!

During the testing of certain endpoints, such as PUT /active-response, the framework first asks the agent for its active-response configuration in order to make sure it is enabled. However, the agent_simulator tool does not respond correctly to requests aimed at getting an agent's active configuration.

Currently, the framework sends the following (byte-encoded) command to the socket /var/ossec/queue/sockets/request:

<agent_id> com getconfig active-response

and should get this response:

ok {"active-response":{"disabled":"no","ca_store":["wpk_root.pem"],"ca_verification":"yes"}}

The exact method that handles this communication is get_active_configuration(<agent_id>, component="com", configuration="active-response").
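
For reference, below is a minimal sketch of what the simulator-side handler could look like. The handler name and the way the payload reaches it are assumptions; the command and response formats are the ones quoted above.

    import json

    # Active-response configuration matching the expected reply quoted above.
    ACTIVE_RESPONSE_CONFIG = {
        "active-response": {
            "disabled": "no",
            "ca_store": ["wpk_root.pem"],
            "ca_verification": "yes"
        }
    }

    def handle_getconfig(payload: bytes) -> bytes:
        """Answer a '<agent_id> com getconfig <configuration>' request (hypothetical handler)."""
        tokens = payload.decode().split()
        # Expected tokens: ['<agent_id>', 'com', 'getconfig', 'active-response']
        if tokens[1:] == ["com", "getconfig", "active-response"]:
            # Compact JSON so the reply matches the quoted 'ok {...}' format byte for byte.
            return b"ok " + json.dumps(ACTIVE_RESPONSE_CONFIG, separators=(",", ":")).encode()
        # Error reply format is an assumption based on other Wazuh socket responses.
        return b"err Could not get requested section"

For example, handle_getconfig(b"001 com getconfig active-response") returns exactly the byte string shown above.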

Regards,
Selu.

@jmv74211 jmv74211 added the tool/agent-simulator label May 11, 2022
@nico-stefani
Member

nico-stefani commented Jul 24, 2024

Reopened due to the failure encountered in wazuh/wazuh#24894

@nico-stefani nico-stefani reopened this Jul 24, 2024
@wazuhci wazuhci moved this from Done to Triage in Release 4.9.0 Jul 24, 2024
@wazuhci wazuhci moved this from Triage to Backlog in Release 4.9.0 Jul 24, 2024
@wazuhci wazuhci moved this from Backlog to In progress in Release 4.9.0 Jul 25, 2024
@RamosFe
Member

RamosFe commented Jul 25, 2024

Update - Workload benchmarks metrics

The error in wazuh/wazuh#24894 is partially related to this Issue but not caused by it. If we analyze the error, we can see that it is a Bad Request:

tests/performance/test_api/test_api_endpoints_performance.py:50: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

test_case = {'body': {'command': 'custom', 'custom': True}, 'endpoint': '/active-response', 'method': 'put', 'parameters': {}, ...}
set_api_test_environment = None, api_healthcheck = None

    @pytest.mark.parametrize('test_case', test_data['test_cases'], ids=case_ids)
    def test_api_endpoints(test_case, set_api_test_environment, api_healthcheck):
        """Make an API request for each `test_case`.
    
        Args:
            test_case (dict): Dictionary with the endpoint to be tested and the necessary parameters for the test.
            set_api_test_environment (fixture): Fixture that modifies the API security options.
            api_healthcheck (fixture): Fixture used to check that the API is ready to respond requests.
        """
        base_url = api_details['base_url']
        headers = api_details['auth_headers']
        response = None
    
        try:
            response = getattr(requests, test_case['method'])(f"{base_url}{test_case['endpoint']}", headers=headers,
                                                              params=test_case['parameters'], json=test_case['body'],
                                                              verify=False)
>           assert response.status_code == 200
E           assert 400 == 200
E            +  where 400 = <Response [400]>.status_code

tests/performance/test_api/test_api_endpoints_performance.py:41: AssertionError
----------------------------- Captured stdout call -----------------------------
Request elapsed time: 0.054s

Status code: 400

Full response: 
{
  "title": "Bad Request",
  "detail": "Invalid field found {'custom'}"
}

The custom field was present in 4.7:

[screenshot: the 4.7 API specification, where the custom field is present]

But it was deleted in 4.8:

[screenshot: the 4.8 API specification, with the custom field removed]

The performance test configuration was never updated, and no errors appeared during the newer Workload benchmark tests because the test was marked as xfail until this issue was merged into 4.9.0.

We should create a new Issue to remove the custom field from the performance/test_api configuration file:

- endpoint: /active-response
  method: put
  parameters: {}
  body:
    command: custom
    custom: True
    restart: False
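
Once that Issue removes the field, the entry would presumably be reduced to something like this (a sketch, assuming only custom is dropped):

    - endpoint: /active-response
      method: put
      parameters: {}
      body:
        command: custom
        restart: False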

@RamosFe
Member

RamosFe commented Jul 25, 2024

Update - New Issue

The following Issue was created to address this error: #5611.

@wazuhci wazuhci moved this from In progress to Pending review in Release 4.9.0 Jul 25, 2024
@GGP1
Member

GGP1 commented Jul 26, 2024

Review

The error was caused by the removal of one of the endpoint's parameters; that error is not related to this one and will be fixed in #5611. LGTM.

@wazuhci wazuhci moved this from Pending review to Pending final review in Release 4.9.0 Jul 26, 2024
@wazuhci wazuhci moved this from Pending final review to In final review in Release 4.9.0 Jul 26, 2024
@fdalmaup
Member

LGTM!

@wazuhci wazuhci moved this from In final review to Done in Release 4.9.0 Jul 26, 2024
@GGP1
Member

GGP1 commented Aug 5, 2024

Reopening

The test is still failing because of the active-response endpoint in Beta 1. The error seems to be related to the changes introduced here:

"message": "The command used is not defined in the configuration."
Full error
___________________ test_api_endpoints[put_/active-response] ___________________

test_case = {'body': {'command': 'custom'}, 'endpoint': '/active-response', 'method': 'put', 'parameters': {}, ...}
set_api_test_environment = None, api_healthcheck = None

    @pytest.mark.parametrize('test_case', test_data['test_cases'], ids=case_ids)
    def test_api_endpoints(test_case, set_api_test_environment, api_healthcheck):
        """Make an API request for each `test_case`.
    
        Args:
            test_case (dict): Dictionary with the endpoint to be tested and the necessary parameters for the test.
            set_api_test_environment (fixture): Fixture that modifies the API security options.
            api_healthcheck (fixture): Fixture used to check that the API is ready to respond requests.
        """
        base_url = api_details['base_url']
        headers = api_details['auth_headers']
        response = None
    
        try:
            response = getattr(requests, test_case['method'])(f"{base_url}{test_case['endpoint']}", headers=headers,
                                                              params=test_case['parameters'], json=test_case['body'],
                                                              verify=False)
            assert response.status_code == 200
            assert response.json()['error'] == 0
    
        except AssertionError as e:
            # If the assertion fails, and is marked as xfail
            if test_case['endpoint'] in xfailed_items.keys() and \
                    test_case['method'] == xfailed_items[test_case['endpoint']]['method']:
                pytest.xfail(xfailed_items[test_case['endpoint']]['message'])
    
>           raise e

tests/performance/test_api/test_api_endpoints_performance.py:50: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

test_case = {'body': {'command': 'custom'}, 'endpoint': '/active-response', 'method': 'put', 'parameters': {}, ...}
set_api_test_environment = None, api_healthcheck = None

    @pytest.mark.parametrize('test_case', test_data['test_cases'], ids=case_ids)
    def test_api_endpoints(test_case, set_api_test_environment, api_healthcheck):
        """Make an API request for each `test_case`.
    
        Args:
            test_case (dict): Dictionary with the endpoint to be tested and the necessary parameters for the test.
            set_api_test_environment (fixture): Fixture that modifies the API security options.
            api_healthcheck (fixture): Fixture used to check that the API is ready to respond requests.
        """
        base_url = api_details['base_url']
        headers = api_details['auth_headers']
        response = None
    
        try:
            response = getattr(requests, test_case['method'])(f"{base_url}{test_case['endpoint']}", headers=headers,
                                                              params=test_case['parameters'], json=test_case['body'],
                                                              verify=False)
            assert response.status_code == 200
>           assert response.json()['error'] == 0
E           assert 1 == 0

tests/performance/test_api/test_api_endpoints_performance.py:42: AssertionError
----------------------------- Captured stdout call -----------------------------
Request elapsed time: 0.193s

Status code: 200

Full response: 
{
  "data": {
    "affected_items": [],
    "total_affected_items": 0,
    "total_failed_items": 10,
    "failed_items": [
      {
        "error": {
          "code": 1652,
          "message": "The command used is not defined in the configuration.",
          "remediation": "Please, visit the official documentation (https://documentation.wazuh.com/4.9/user-manual/capabilities/active-response/how-to-configure.html)to get more information"
        },
        "id": [
          "001",
          "002",
          "003",
          "004",
          "005",
          "006",
          "007",
          "008",
          "009",
          "010"
        ]
      }
    ]
  },
  "message": "AR command was not sent to any agent",
  "error": 1
}
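
For context, error 1652 is raised because the manager only runs AR commands that are declared in its configuration; a sketch of the kind of ossec.conf <command> block that would declare one follows. The executable name is an assumption, and the authoritative options are in the documentation linked in the remediation:

    <command>
      <name>custom</name>
      <!-- Hypothetical script placed under the agent's active-response/bin directory -->
      <executable>custom.sh</executable>
      <timeout_allowed>no</timeout_allowed>
    </command>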

@fdalmaup
Member

fdalmaup commented Aug 6, 2024

This issue requires running the related Workload pipeline and verifying that the test passes. The parameters should be set to the minimum (e.g., 10 agents and 2 workers) since we just want to check that the error does not occur.

@wazuhci wazuhci moved this from Triage to Backlog in Release 4.9.0 Aug 6, 2024
@GGP1
Member

GGP1 commented Aug 6, 2024

Closing

Closed in favor of #5639.

@GGP1 GGP1 closed this as completed Aug 6, 2024
@wazuhci wazuhci moved this from Backlog to Done in Release 4.9.0 Aug 6, 2024