Fix PUT /active-response API performance test #5639
Update: This error is generated by the arguments defined in wazuh-qa/tests/performance/test_api/data/wazuh_api_endpoints_performance.yaml, lines 113 to 118 at commit 4a44934.
Update: I started running the Workload Benchmark with the new changes (Run ID: …).

Changes to the command: the command was changed from …

Test:

test_case = {'body': {'alert': {'data': {'dstuser': 'root', 'srcip': '192.168.33.44', 'srcport': '51104'}}, 'arguments': ['add'], 'command': 'host-deny'}, 'endpoint': '/active-response', 'method': 'put', 'parameters': {}, ...}
set_api_test_environment = None, api_healthcheck = None
@pytest.mark.parametrize('test_case', test_data['test_cases'], ids=case_ids)
def test_api_endpoints(test_case, set_api_test_environment, api_healthcheck):
"""Make an API request for each `test_case`.
Args:
test_case (dict): Dictionary with the endpoint to be tested and the necessary parameters for the test.
set_api_test_environment (fixture): Fixture that modifies the API security options.
api_healthcheck (fixture): Fixture used to check that the API is ready to respond requests.
"""
base_url = api_details['base_url']
headers = api_details['auth_headers']
response = None
try:
response = getattr(requests, test_case['method'])(f"{base_url}{test_case['endpoint']}", headers=headers,
params=test_case['parameters'], json=test_case['body'],
verify=False)
assert response.status_code == 200
> assert response.json()['error'] == 0
E assert 1 == 0
tests/performance/test_api/test_api_endpoints_performance.py:42: AssertionError
----------------------------- Captured stdout call -----------------------------
Request elapsed time: 0.130s
Status code: 200
Full response:
{
"data": {
"affected_items": [],
"total_affected_items": 0,
"total_failed_items": 10,
"failed_items": [
{
"error": {
"code": 1652,
"message": "The command used is not defined in the configuration.",
"remediation": "Please, visit the official documentation (https://documentation.wazuh.com/4.9/user-manual/capabilities/active-response/how-to-configure.html)to get more information"
},
"id": [
"001",
"002",
"003",
"004",
"005",
"006",
"007",
"008",
"009",
"010"
]
}
]
},
"message": "AR command was not sent to any agent",
"error": 1
}
- generated html file: file:///mnt/efs/CLUSTER-Workload_benchmarks_metrics/B_609/api_performance_tests/result.html -
=========================== short test summary info ============================
FAILED tests/performance/test_api/test_api_endpoints_performance.py::test_api_endpoints[put_/active-response]
============ 1 failed, 54 passed, 56 warnings in 228.72s (0:03:48) =============
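For reference, the failing request can be reproduced outside the test harness with a minimal script such as the sketch below. The manager address and credentials are assumptions for a local test environment; only the request body comes from the trace above.

```python
# Minimal sketch to reproduce the failing request outside pytest.
# base_url and the credentials are assumptions for a local environment.
import requests
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

base_url = 'https://localhost:55000'  # assumed Wazuh manager API address

# Authenticate against the API to obtain a JWT token.
token = requests.post(f'{base_url}/security/user/authenticate',
                      auth=('wazuh', 'wazuh'), verify=False).json()['data']['token']
headers = {'Authorization': f'Bearer {token}'}

# Same body the test sends for PUT /active-response.
body = {'alert': {'data': {'dstuser': 'root', 'srcip': '192.168.33.44', 'srcport': '51104'}},
        'arguments': ['add'], 'command': 'host-deny'}

response = requests.put(f'{base_url}/active-response', headers=headers, json=body, verify=False)
print(response.status_code, response.json().get('error'))
```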
Update: After consulting with the team and testing multiple commands, the same error is found when trying default commands.

Wazuh restart test:

=================================== FAILURES ===================================
___________________ test_api_endpoints[put_/active-response] ___________________
test_case = {'body': {'command': 'wazuh-restart'}, 'endpoint': '/active-response', 'method': 'put', 'parameters': {}, ...}
set_api_test_environment = None, api_healthcheck = None
@pytest.mark.parametrize('test_case', test_data['test_cases'], ids=case_ids)
def test_api_endpoints(test_case, set_api_test_environment, api_healthcheck):
"""Make an API request for each `test_case`.
Args:
test_case (dict): Dictionary with the endpoint to be tested and the necessary parameters for the test.
set_api_test_environment (fixture): Fixture that modifies the API security options.
api_healthcheck (fixture): Fixture used to check that the API is ready to respond requests.
"""
base_url = api_details['base_url']
headers = api_details['auth_headers']
response = None
try:
response = getattr(requests, test_case['method'])(f"{base_url}{test_case['endpoint']}", headers=headers,
params=test_case['parameters'], json=test_case['body'],
verify=False)
assert response.status_code == 200
assert response.json()['error'] == 0
except AssertionError as e:
# If the assertion fails, and is marked as xfail
if test_case['endpoint'] in xfailed_items.keys() and \
test_case['method'] == xfailed_items[test_case['endpoint']]['method']:
pytest.xfail(xfailed_items[test_case['endpoint']]['message'])
> raise e
tests/performance/test_api/test_api_endpoints_performance.py:50:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
test_case = {'body': {'command': 'wazuh-restart'}, 'endpoint': '/active-response', 'method': 'put', 'parameters': {}, ...}
set_api_test_environment = None, api_healthcheck = None
@pytest.mark.parametrize('test_case', test_data['test_cases'], ids=case_ids)
def test_api_endpoints(test_case, set_api_test_environment, api_healthcheck):
"""Make an API request for each `test_case`.
Args:
test_case (dict): Dictionary with the endpoint to be tested and the necessary parameters for the test.
set_api_test_environment (fixture): Fixture that modifies the API security options.
api_healthcheck (fixture): Fixture used to check that the API is ready to respond requests.
"""
base_url = api_details['base_url']
headers = api_details['auth_headers']
response = None
try:
response = getattr(requests, test_case['method'])(f"{base_url}{test_case['endpoint']}", headers=headers,
params=test_case['parameters'], json=test_case['body'],
verify=False)
assert response.status_code == 200
> assert response.json()['error'] == 0
E assert 1 == 0
tests/performance/test_api/test_api_endpoints_performance.py:42: AssertionError
----------------------------- Captured stdout call -----------------------------
Request elapsed time: 0.174s
Status code: 200
Full response:
{
"data": {
"affected_items": [],
"total_affected_items": 0,
"total_failed_items": 10,
"failed_items": [
{
"error": {
"code": 1652,
"message": "The command used is not defined in the configuration.",
"remediation": "Please, visit the official documentation (https://documentation.wazuh.com/4.9/user-manual/capabilities/active-response/how-to-configure.html)to get more information"
},
"id": [
"001",
"002",
"003",
"004",
"005",
"006",
"007",
"008",
"009",
"010"
]
}
]
},
"message": "AR command was not sent to any agent",
"error": 1
}
- generated html file: file:///mnt/efs/CLUSTER-Workload_benchmarks_metrics/B_612/api_performance_tests/result.html -
=========================== short test summary info ============================
FAILED tests/performance/test_api/test_api_endpoints_performance.py::test_api_endpoints[put_/active-response]
============ 1 failed, 54 passed, 57 warnings in 259.29s (0:04:19) =============

I created the following issue and passed it to the QA Team. In the meantime, the …
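For context, the xfail mechanism visible in the trace keys expected failures by endpoint and HTTP method. A sketch of what an xfailed_items entry for this case could look like follows; the message text is an assumption, while the dictionary shape matches the lookups in the trace above.

```python
# Hypothetical xfailed_items entry, shaped after the lookups in the trace:
# xfailed_items[endpoint]['method'] and xfailed_items[endpoint]['message'].
xfailed_items = {
    '/active-response': {
        'method': 'put',  # only PUT requests to this endpoint are expected to fail
        'message': 'AR command is not defined in the agent configuration',  # assumed wording
    },
}
```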
Description
During the last Workload benchmarks metrics release testing, we noticed that the test of the endpoint PUT /active-response is failing because it uses a command that is not in the configuration:

"message": "The command used is not defined in the configuration."
The error seems to be related to the changes introduced in #1266; see the full trace above.
We should fix the test and make sure all API performance tests pass by executing the Workload benchmarks metrics pipeline with 10 agents and 2 workers.
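As a sketch of the direction a fix could take, the test case could be updated so the body uses a command that is actually registered on the agents. The snippet below mirrors the test_case dictionary from the trace; the command value is purely an assumption and must match a command defined in the agents' active-response configuration (for agents on version 4.2 or later, a leading "!" refers to a custom script name instead).

```python
# Hypothetical corrected test case (Python mirror of the YAML entry).
# 'firewall-drop0' is an assumption: the value must match a command
# registered in the agents' active-response configuration.
test_case = {
    'endpoint': '/active-response',
    'method': 'put',
    'parameters': {},
    'body': {
        'command': 'firewall-drop0',   # assumed to be defined on the agents
        'arguments': [],
        'alert': {'data': {'srcip': '192.168.33.44'}},
    },
}
```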