Fix PUT /active-response API performance test #5639

Closed · GGP1 opened this issue Aug 6, 2024 · 2 comments · Fixed by #5660
GGP1 commented Aug 6, 2024

Description

During the last Workload benchmarks metrics release testing, we noticed that the PUT /active-response endpoint test is failing because it uses a command that is not defined in the active-response configuration.

"message": "The command used is not defined in the configuration."

The error seems to be related to the changes introduced in #1266. Here is the full trace:

Trace
___________________ test_api_endpoints[put_/active-response] ___________________

test_case = {'body': {'command': 'custom'}, 'endpoint': '/active-response', 'method': 'put', 'parameters': {}, ...}
set_api_test_environment = None, api_healthcheck = None

    @pytest.mark.parametrize('test_case', test_data['test_cases'], ids=case_ids)
    def test_api_endpoints(test_case, set_api_test_environment, api_healthcheck):
        """Make an API request for each `test_case`.
    
        Args:
            test_case (dict): Dictionary with the endpoint to be tested and the necessary parameters for the test.
            set_api_test_environment (fixture): Fixture that modifies the API security options.
            api_healthcheck (fixture): Fixture used to check that the API is ready to respond requests.
        """
        base_url = api_details['base_url']
        headers = api_details['auth_headers']
        response = None
    
        try:
            response = getattr(requests, test_case['method'])(f"{base_url}{test_case['endpoint']}", headers=headers,
                                                              params=test_case['parameters'], json=test_case['body'],
                                                              verify=False)
            assert response.status_code == 200
            assert response.json()['error'] == 0
    
        except AssertionError as e:
            # If the assertion fails, and is marked as xfail
            if test_case['endpoint'] in xfailed_items.keys() and \
                    test_case['method'] == xfailed_items[test_case['endpoint']]['method']:
                pytest.xfail(xfailed_items[test_case['endpoint']]['message'])
    
>           raise e

tests/performance/test_api/test_api_endpoints_performance.py:50: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

test_case = {'body': {'command': 'custom'}, 'endpoint': '/active-response', 'method': 'put', 'parameters': {}, ...}
set_api_test_environment = None, api_healthcheck = None

    @pytest.mark.parametrize('test_case', test_data['test_cases'], ids=case_ids)
    def test_api_endpoints(test_case, set_api_test_environment, api_healthcheck):
        """Make an API request for each `test_case`.
    
        Args:
            test_case (dict): Dictionary with the endpoint to be tested and the necessary parameters for the test.
            set_api_test_environment (fixture): Fixture that modifies the API security options.
            api_healthcheck (fixture): Fixture used to check that the API is ready to respond requests.
        """
        base_url = api_details['base_url']
        headers = api_details['auth_headers']
        response = None
    
        try:
            response = getattr(requests, test_case['method'])(f"{base_url}{test_case['endpoint']}", headers=headers,
                                                              params=test_case['parameters'], json=test_case['body'],
                                                              verify=False)
            assert response.status_code == 200
>           assert response.json()['error'] == 0
E           assert 1 == 0

tests/performance/test_api/test_api_endpoints_performance.py:42: AssertionError
----------------------------- Captured stdout call -----------------------------
Request elapsed time: 0.193s

Status code: 200

Full response: 
{
  "data": {
    "affected_items": [],
    "total_affected_items": 0,
    "total_failed_items": 10,
    "failed_items": [
      {
        "error": {
          "code": 1652,
          "message": "The command used is not defined in the configuration.",
          "remediation": "Please, visit the official documentation (https://documentation.wazuh.com/4.9/user-manual/capabilities/active-response/how-to-configure.html)to get more information"
        },
        "id": [
          "001",
          "002",
          "003",
          "004",
          "005",
          "006",
          "007",
          "008",
          "009",
          "010"
        ]
      }
    ]
  },
  "message": "AR command was not sent to any agent",
  "error": 1
}

We should fix the test and make sure all API performance tests pass by executing the Workload benchmarks metrics pipeline with 10 agents and 2 workers.
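For anyone reproducing this outside the suite, the failing request boils down to the following minimal sketch. The base URL and token are placeholder assumptions; the actual test takes both from api_details.

import requests

# Placeholder values; the performance suite reads these from api_details.
base_url = "https://localhost:55000"
headers = {"Authorization": "Bearer <token>"}

response = requests.put(
    f"{base_url}/active-response",
    headers=headers,
    json={"command": "custom"},
    verify=False,
)

# The API returns HTTP 200 with error == 1 when no agent accepts the
# command, matching the trace above (code 1652 for every agent).
print(response.status_code)           # 200
print(response.json().get("error"))   # 1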


RamosFe commented Aug 6, 2024

Update

This error is caused by the body of the PUT /active-response request: the custom command does not exist, so no command is found when the API request runs. I changed the command to one of the defaults that ship with the agent, which doesn't affect the rest of the tests:

- endpoint: /active-response
  method: put
  parameters: {}
  body:
    command: wazuh-restart
    restart: True

I started running the Workload Benchmark with the new changes (Run ID: #612).

Changes to the command

The command was changed from host-deny to wazuh-restart due to the following error:

Test
test_case = {'body': {'alert': {'data': {'dstuser': 'root', 'srcip': '192.168.33.44', 'srcport': '51104'}}, 'arguments': ['add'], 'command': 'host-deny'}, 'endpoint': '/active-response', 'method': 'put', 'parameters': {}, ...}
set_api_test_environment = None, api_healthcheck = None

    @pytest.mark.parametrize('test_case', test_data['test_cases'], ids=case_ids)
    def test_api_endpoints(test_case, set_api_test_environment, api_healthcheck):
        """Make an API request for each `test_case`.
    
        Args:
            test_case (dict): Dictionary with the endpoint to be tested and the necessary parameters for the test.
            set_api_test_environment (fixture): Fixture that modifies the API security options.
            api_healthcheck (fixture): Fixture used to check that the API is ready to respond requests.
        """
        base_url = api_details['base_url']
        headers = api_details['auth_headers']
        response = None
    
        try:
            response = getattr(requests, test_case['method'])(f"{base_url}{test_case['endpoint']}", headers=headers,
                                                              params=test_case['parameters'], json=test_case['body'],
                                                              verify=False)
            assert response.status_code == 200
>           assert response.json()['error'] == 0
E           assert 1 == 0

tests/performance/test_api/test_api_endpoints_performance.py:42: AssertionError
----------------------------- Captured stdout call -----------------------------
Request elapsed time: 0.130s

Status code: 200

Full response: 
{
  "data": {
    "affected_items": [],
    "total_affected_items": 0,
    "total_failed_items": 10,
    "failed_items": [
      {
        "error": {
          "code": 1652,
          "message": "The command used is not defined in the configuration.",
          "remediation": "Please, visit the official documentation (https://documentation.wazuh.com/4.9/user-manual/capabilities/active-response/how-to-configure.html)to get more information"
        },
        "id": [
          "001",
          "002",
          "003",
          "004",
          "005",
          "006",
          "007",
          "008",
          "009",
          "010"
        ]
      }
    ]
  },
  "message": "AR command was not sent to any agent",
  "error": 1
}
- generated html file: file:///mnt/efs/CLUSTER-Workload_benchmarks_metrics/B_609/api_performance_tests/result.html -
=========================== short test summary info ============================
FAILED tests/performance/test_api/test_api_endpoints_performance.py::test_api_endpoints[put_/active-response]
============ 1 failed, 54 passed, 56 warnings in 228.72s (0:03:48) =============


RamosFe commented Aug 7, 2024

Update

After consulting with the team and testing multiple commands, the same error appears even when using the default commands.

Wazuh Restart test
=================================== FAILURES ===================================
___________________ test_api_endpoints[put_/active-response] ___________________

test_case = {'body': {'command': 'wazuh-restart'}, 'endpoint': '/active-response', 'method': 'put', 'parameters': {}, ...}
set_api_test_environment = None, api_healthcheck = None

    @pytest.mark.parametrize('test_case', test_data['test_cases'], ids=case_ids)
    def test_api_endpoints(test_case, set_api_test_environment, api_healthcheck):
        """Make an API request for each `test_case`.
    
        Args:
            test_case (dict): Dictionary with the endpoint to be tested and the necessary parameters for the test.
            set_api_test_environment (fixture): Fixture that modifies the API security options.
            api_healthcheck (fixture): Fixture used to check that the API is ready to respond requests.
        """
        base_url = api_details['base_url']
        headers = api_details['auth_headers']
        response = None
    
        try:
            response = getattr(requests, test_case['method'])(f"{base_url}{test_case['endpoint']}", headers=headers,
                                                              params=test_case['parameters'], json=test_case['body'],
                                                              verify=False)
            assert response.status_code == 200
            assert response.json()['error'] == 0
    
        except AssertionError as e:
            # If the assertion fails, and is marked as xfail
            if test_case['endpoint'] in xfailed_items.keys() and \
                    test_case['method'] == xfailed_items[test_case['endpoint']]['method']:
                pytest.xfail(xfailed_items[test_case['endpoint']]['message'])
    
>           raise e

tests/performance/test_api/test_api_endpoints_performance.py:50: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

test_case = {'body': {'command': 'wazuh-restart'}, 'endpoint': '/active-response', 'method': 'put', 'parameters': {}, ...}
set_api_test_environment = None, api_healthcheck = None

    @pytest.mark.parametrize('test_case', test_data['test_cases'], ids=case_ids)
    def test_api_endpoints(test_case, set_api_test_environment, api_healthcheck):
        """Make an API request for each `test_case`.
    
        Args:
            test_case (dict): Dictionary with the endpoint to be tested and the necessary parameters for the test.
            set_api_test_environment (fixture): Fixture that modifies the API security options.
            api_healthcheck (fixture): Fixture used to check that the API is ready to respond requests.
        """
        base_url = api_details['base_url']
        headers = api_details['auth_headers']
        response = None
    
        try:
            response = getattr(requests, test_case['method'])(f"{base_url}{test_case['endpoint']}", headers=headers,
                                                              params=test_case['parameters'], json=test_case['body'],
                                                              verify=False)
            assert response.status_code == 200
>           assert response.json()['error'] == 0
E           assert 1 == 0

tests/performance/test_api/test_api_endpoints_performance.py:42: AssertionError
----------------------------- Captured stdout call -----------------------------
Request elapsed time: 0.174s

Status code: 200

Full response: 
{
  "data": {
    "affected_items": [],
    "total_affected_items": 0,
    "total_failed_items": 10,
    "failed_items": [
      {
        "error": {
          "code": 1652,
          "message": "The command used is not defined in the configuration.",
          "remediation": "Please, visit the official documentation (https://documentation.wazuh.com/4.9/user-manual/capabilities/active-response/how-to-configure.html)to get more information"
        },
        "id": [
          "001",
          "002",
          "003",
          "004",
          "005",
          "006",
          "007",
          "008",
          "009",
          "010"
        ]
      }
    ]
  },
  "message": "AR command was not sent to any agent",
  "error": 1
}
- generated html file: file:///mnt/efs/CLUSTER-Workload_benchmarks_metrics/B_612/api_performance_tests/result.html -
=========================== short test summary info ============================
FAILED tests/performance/test_api/test_api_endpoints_performance.py::test_api_endpoints[put_/active-response]
============ 1 failed, 54 passed, 57 warnings in 259.29s (0:04:19) =============
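One way to double-check which commands the manager actually has defined is to query its configuration. This is a hedged sketch assuming the GET /manager/configuration endpoint and its section filter; the host and token are placeholders, as above.

import requests

base_url = "https://localhost:55000"           # placeholder
headers = {"Authorization": "Bearer <token>"}  # placeholder

# An active-response request only succeeds when its command name matches
# one of the <command> definitions returned here.
response = requests.get(
    f"{base_url}/manager/configuration",
    headers=headers,
    params={"section": "command"},
    verify=False,
)
print(response.json())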

I created the following issue and passed it to the QA Team. In the meantime, the PUT /active-response endpoint is marked as xfail.
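For reference, the xfail handling relies on a mapping like the following. This is a hypothetical sketch of the shape implied by the test code in the traces above; the actual data lives in the suite.

# Hypothetical shape of the xfailed_items mapping the test consults when an
# assertion fails (inferred from the except AssertionError block above).
xfailed_items = {
    '/active-response': {
        'method': 'put',
        'message': 'AR command is not defined in the configuration (error 1652)',
    },
}

# In the test's AssertionError handler, a matching endpoint and method turns
# the failure into an expected failure instead of a hard fail:
#   pytest.xfail(xfailed_items[test_case['endpoint']]['message'])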

@wazuhci wazuhci moved this from In progress to On hold in Release 4.9.0 Aug 7, 2024
@wazuhci wazuhci moved this from On hold to Pending review in Release 4.9.0 Aug 8, 2024
@wazuhci wazuhci moved this from Pending review to Pending final review in Release 4.9.0 Aug 8, 2024
@RamosFe RamosFe linked a pull request Aug 8, 2024 that will close this issue
@wazuhci wazuhci moved this from Pending final review to Done in Release 4.9.0 Aug 9, 2024