## Description

After installing a Wazuh manager, the service could not be started due to a configuration error:
```
[root@ip-172-31-32-95 ~]# systemctl status wazuh-manager
● wazuh-manager.service - Wazuh manager
   Loaded: loaded (/usr/lib/systemd/system/wazuh-manager.service; disabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Mon 2024-01-15 14:45:17 UTC; 9min ago
  Process: 2317 ExecStart=/usr/bin/env /var/ossec/bin/wazuh-control start (code=exited, status=1/FAILURE)

Jan 15 14:45:16 ip-172-31-32-95.ec2.internal systemd[1]: Starting Wazuh manager...
Jan 15 14:45:17 ip-172-31-32-95.ec2.internal env[2317]: 2024/01/15 14:45:17 wazuh-csyslogd: ERROR: (1226): Error reading XML file 'etc/ossec.conf': (line 0).
Jan 15 14:45:17 ip-172-31-32-95.ec2.internal env[2317]: wazuh-csyslogd: Configuration error. Exiting
Jan 15 14:45:17 ip-172-31-32-95.ec2.internal systemd[1]: wazuh-manager.service: control process exited, code=exited status=1
Jan 15 14:45:17 ip-172-31-32-95.ec2.internal systemd[1]: Failed to start Wazuh manager.
Jan 15 14:45:17 ip-172-31-32-95.ec2.internal systemd[1]: Unit wazuh-manager.service entered failed state.
Jan 15 14:45:17 ip-172-31-32-95.ec2.internal systemd[1]: wazuh-manager.service failed.
```

Reviewing the Wazuh server configuration (`/var/ossec/etc/ossec.conf`), I found this block:

```xml
<vulnerability-detection>
  <enabled>yes</enabled>
  <index-status>yes</index-status>
  <feed-update-interval>yes</vulnerability_detection_feed_update_interval>
</vulnerability-detection>
```

The closing tag `</vulnerability_detection_feed_update_interval>` does not match the opening tag `<feed-update-interval>`, so the file is not well-formed XML and the manager cannot parse it. This is the `(line 0)` error reported by `wazuh-csyslogd` above.
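For reference, a well-formed version of the block would close the element with the matching tag. The sketch below is illustrative only: `60m` is an assumed placeholder for the interval, since the `yes` value in the generated file does not look like a valid interval either.

```xml
<vulnerability-detection>
  <enabled>yes</enabled>
  <index-status>yes</index-status>
  <!-- closing tag must match the opening tag; 60m is a placeholder value -->
  <feed-update-interval>60m</feed-update-interval>
</vulnerability-detection>
```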
## Steps to reproduce

1. Create the file `/etc/puppetlabs/code/environments/production/manifests/stack.pp` on the Puppet server with the following content:

```ruby
$puppetmaster = '172-31-44-21'
$indexerhost = '172.31.33.220'
$serverhost = '172.31.32.95'
$dashboardhost = '172.31.43.88'
$indexer_node1_name = 'node1'
$master_name = 'master'
$indexer_cluster_size = '1'
$indexer_discovery_hosts = [$indexerhost]
$indexer_cluster_initial_master_nodes = [$indexerhost]
$indexer_cluster_CN = [$indexer_node1_name]

# Define stages for ordered execution
stage { 'certificates': }
stage { 'repo': }
stage { 'indexerdeploy': }
stage { 'securityadmin': }
stage { 'dashboard': }
stage { 'manager': }

Stage[certificates] -> Stage[repo] -> Stage[indexerdeploy] -> Stage[securityadmin] -> Stage[manager] -> Stage[dashboard]

Exec {
  timeout => 0,
}

node "ip-172-31-44-21.ec2.internal" {
  class { 'wazuh::certificates':
    indexer_certs        => [["$indexer_node1_name", "$indexerhost"]],
    manager_master_certs => [["$master_name", "$serverhost"]],
    dashboard_certs      => ["$dashboardhost"],
    stage                => certificates,
  }
  class { 'wazuh::repo':
    stage => repo,
  }
}

node "ip-172-31-33-220.ec2.internal" {
  class { 'wazuh::repo':
    stage => repo,
  }
  class { 'wazuh::indexer':
    indexer_node_name                    => "$indexer_node1_name",
    indexer_network_host                 => "$indexerhost",
    indexer_node_max_local_storage_nodes => "$indexer_cluster_size",
    indexer_discovery_hosts              => $indexer_discovery_hosts,
    indexer_cluster_initial_master_nodes => $indexer_cluster_initial_master_nodes,
    indexer_cluster_CN                   => $indexer_cluster_CN,
    stage                                => indexerdeploy,
  }
  class { 'wazuh::securityadmin':
    indexer_network_host => "$indexerhost",
    stage                => securityadmin,
  }
}

node "ip-172-31-32-95.ec2.internal" {
  class { 'wazuh::repo':
    stage => repo,
  }
  class { 'wazuh::manager':
    ossec_cluster_name      => 'wazuh-cluster',
    ossec_cluster_node_name => 'wazuh-master',
    ossec_cluster_node_type => 'master',
    ossec_cluster_key       => '01234567890123456789012345678912',
    ossec_cluster_bind_addr => "$serverhost",
    ossec_cluster_nodes     => ["$serverhost"],
    ossec_cluster_disabled  => 'no',
    stage                   => manager,
  }
  class { 'wazuh::filebeat_oss':
    filebeat_oss_indexer_ip => "$indexerhost",
    wazuh_node_name         => "$master_name",
    stage                   => manager,
  }
}

node "ip-172-31-43-88.ec2.internal" {
  class { 'wazuh::repo':
    stage => repo,
  }
  class { 'wazuh::dashboard':
    indexer_server_ip => "$indexerhost",
    manager_api_host  => "$serverhost",
    stage             => dashboard,
  }
}

node "ip-172-31-39-213.ec2.internal" {
  class { 'wazuh::agent':
    wazuh_register_endpoint  => "$serverhost",
    wazuh_reporting_endpoint => "$serverhost",
  }
}
```

2. Run the Puppet agent on the Wazuh server instance and check the output:

```
# puppet agent -t
```

3. Check the `wazuh-manager` status:

```
systemctl status wazuh-manager
```

## Evidence

The error appears in the output of the following command:

```
# puppet agent -t
```

```
...
Jan 15 14:45:16 ip-172-31-32-95.ec2.internal systemd[1]: Starting Wazuh manager...
Jan 15 14:45:17 ip-172-31-32-95.ec2.internal env[2317]: 2024/01/15 14:45:17 wazuh-csyslogd: ERROR: (1226): Error reading XML file 'etc/ossec.conf': (line 0).
Jan 15 14:45:17 ip-172-31-32-95.ec2.internal env[2317]: wazuh-csyslogd: Configuration error. Exiting
Jan 15 14:45:17 ip-172-31-32-95.ec2.internal systemd[1]: wazuh-manager.service: control process exited, code=exited status=1
Jan 15 14:45:17 ip-172-31-32-95.ec2.internal systemd[1]: Failed to start Wazuh manager.
Jan 15 14:45:17 ip-172-31-32-95.ec2.internal systemd[1]: Unit wazuh-manager.service entered failed state.
Jan 15 14:45:17 ip-172-31-32-95.ec2.internal systemd[1]: wazuh-manager.service failed.
...
```

## Expected results

The `wazuh-manager` service starts automatically after installation.
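As a side note, the malformed file can be confirmed independently of the service with a generic XML parser. A minimal check, assuming `xmllint` (from libxml2) is installed on the manager host; note that `xmllint` may also flag constructs Wazuh deliberately accepts, such as multiple root elements in `ossec.conf`:

```
# xmllint --noout /var/ossec/etc/ossec.conf
```

For the block above, this should report the `feed-update-interval` / `vulnerability_detection_feed_update_interval` tag mismatch with an exact line number, rather than the generic `(line 0)` error printed by `wazuh-csyslogd`.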