Agent_simulator response to get_config request #1266
Comments
Reopened due to the failure encountered in wazuh/wazuh#24894
Update - Workload benchmarks metrics
The error in wazuh/wazuh#24894 is partially related to this Issue but not caused by it. If we analyze the error, we can see that one of the endpoint's parameters was removed, but the test configuration still references it. We should create a new Issue to delete that reference from the test data:
wazuh-qa/tests/performance/test_api/data/wazuh_api_endpoints_performance.yaml (Lines 113 to 119 in 243fb08)
Update - New Issue
The following Issue was created to address this error: #5611.
Review
The error was caused by the removal of one of the endpoint's parameters. That error is not related to this one and will be fixed in #5611. LGTM
LGTM!
Reopening
The test is still failing because of the error "message": "The command used is not defined in the configuration."
Full error:
___________________ test_api_endpoints[put_/active-response] ___________________
test_case = {'body': {'command': 'custom'}, 'endpoint': '/active-response', 'method': 'put', 'parameters': {}, ...}
set_api_test_environment = None, api_healthcheck = None
@pytest.mark.parametrize('test_case', test_data['test_cases'], ids=case_ids)
def test_api_endpoints(test_case, set_api_test_environment, api_healthcheck):
"""Make an API request for each `test_case`.
Args:
test_case (dict): Dictionary with the endpoint to be tested and the necessary parameters for the test.
set_api_test_environment (fixture): Fixture that modifies the API security options.
api_healthcheck (fixture): Fixture used to check that the API is ready to respond requests.
"""
base_url = api_details['base_url']
headers = api_details['auth_headers']
response = None
try:
response = getattr(requests, test_case['method'])(f"{base_url}{test_case['endpoint']}", headers=headers,
params=test_case['parameters'], json=test_case['body'],
verify=False)
assert response.status_code == 200
assert response.json()['error'] == 0
except AssertionError as e:
# If the assertion fails, and is marked as xfail
if test_case['endpoint'] in xfailed_items.keys() and \
test_case['method'] == xfailed_items[test_case['endpoint']]['method']:
pytest.xfail(xfailed_items[test_case['endpoint']]['message'])
> raise e
tests/performance/test_api/test_api_endpoints_performance.py:50:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
test_case = {'body': {'command': 'custom'}, 'endpoint': '/active-response', 'method': 'put', 'parameters': {}, ...}
set_api_test_environment = None, api_healthcheck = None
@pytest.mark.parametrize('test_case', test_data['test_cases'], ids=case_ids)
def test_api_endpoints(test_case, set_api_test_environment, api_healthcheck):
"""Make an API request for each `test_case`.
Args:
test_case (dict): Dictionary with the endpoint to be tested and the necessary parameters for the test.
set_api_test_environment (fixture): Fixture that modifies the API security options.
api_healthcheck (fixture): Fixture used to check that the API is ready to respond requests.
"""
base_url = api_details['base_url']
headers = api_details['auth_headers']
response = None
try:
response = getattr(requests, test_case['method'])(f"{base_url}{test_case['endpoint']}", headers=headers,
params=test_case['parameters'], json=test_case['body'],
verify=False)
assert response.status_code == 200
> assert response.json()['error'] == 0
E assert 1 == 0
tests/performance/test_api/test_api_endpoints_performance.py:42: AssertionError
----------------------------- Captured stdout call -----------------------------
Request elapsed time: 0.193s
Status code: 200
Full response:
{
"data": {
"affected_items": [],
"total_affected_items": 0,
"total_failed_items": 10,
"failed_items": [
{
"error": {
"code": 1652,
"message": "The command used is not defined in the configuration.",
"remediation": "Please, visit the official documentation (https://documentation.wazuh.com/4.9/user-manual/capabilities/active-response/how-to-configure.html)to get more information"
},
"id": [
"001",
"002",
"003",
"004",
"005",
"006",
"007",
"008",
"009",
"010"
]
}
]
},
"message": "AR command was not sent to any agent",
"error": 1
}
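Error code 1652 means the command sent in the request body ("custom" in this test case) is not declared in the manager's configuration. As an illustrative sketch only (the executable name is hypothetical and not taken from this issue; see the official active-response documentation linked in the remediation message), such a command would normally be declared in ossec.conf roughly like this:

```
<ossec_config>
  <!-- Declare the command referenced by PUT /active-response.
       "custom" matches the body of the failing test case;
       the executable name is hypothetical. -->
  <command>
    <name>custom</name>
    <executable>custom.sh</executable>
    <timeout_allowed>no</timeout_allowed>
  </command>

  <!-- Bind the command so agents are allowed to run it. -->
  <active-response>
    <command>custom</command>
    <location>local</location>
  </active-response>
</ossec_config>
```

Without a matching declaration, the manager rejects the request for every agent, which is consistent with the ten entries in the failed_items list above.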
This issue requires running the related Workload pipeline and verifying that the test passes. The parameters should be set to the minimum (e.g., 10 agents and 2 workers) since we just want to check that the error does not occur.
Closing
Closed in favor of #5639.
Description
Hi team!
During the testing of certain endpoints, such as PUT /active-response, the framework first asks the agent for its active response configuration in order to make sure it is enabled. However, the agent_simulator tool does not respond correctly to the requests that fetch the active configuration of an agent. Currently, the framework sends a (byte-encoded) command to the socket /var/ossec/queue/sockets/request and should get the agent's active configuration back in the response.
The exact method that handles this communication is get_active_configuration(<agent_id>, component="com", configuration="active-response").
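The sketch below illustrates what such a request round-trip might look like on the simulator-testing side. The payload layout ("<agent_id> <component> getconfig <configuration>") and the 4-byte little-endian length header are assumptions about Wazuh's internal socket framing, not details confirmed by this issue; the function names are illustrative.

```python
import socket
import struct

# Socket mentioned in the issue description.
REQUEST_SOCKET = '/var/ossec/queue/sockets/request'


def build_getconfig_request(agent_id, component='com',
                            configuration='active-response'):
    """Frame a getconfig request: a 4-byte little-endian length header
    followed by the payload (header and payload layout are assumptions)."""
    payload = f'{agent_id} {component} getconfig {configuration}'.encode()
    return struct.pack('<I', len(payload)) + payload


def get_active_configuration(agent_id, component='com',
                             configuration='active-response'):
    """Send the framed request and return the raw reply (sketch only)."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(REQUEST_SOCKET)
        sock.sendall(build_getconfig_request(agent_id, component,
                                             configuration))
        length = struct.unpack('<I', sock.recv(4))[0]
        return sock.recv(length).decode()
```

A simulator that wants to satisfy this test would need to parse the same framing on the receiving end and answer with the agent's active-response configuration.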
Regards,
Selu.