Hello, Wazuh newbie here. I have done a fresh Wazuh installation on bare-metal Kubernetes, following the steps from the official documentation:
https://documentation.wazuh.com/current/deployment-options/deploying-with-kubernetes/index.html
I created self-signed certificates and did the local-env Kustomize install. I see all the objects successfully created and running:
kubectl get pods
kubectl get svc
I created an ingress for the dashboard service but cannot access the dashboard. Instead, I get the following error:
I have tried getting the dashboard webpage from within the dashboard pod with the following attempts and responses:
accessing the dashboard service from within the indexer pod:
Log summary from all pods:
wazuh-dashboard-8664dcb64d-kwr4d
wazuh-indexer-0
[2024-11-05T23:35:45,008][ERROR][o.o.s.a.s.SinkProvider ] [wazuh-indexer-0] Default endpoint could not be created, auditlog will not work properly.
[2024-11-05T23:36:08,815][WARN ][o.o.s.SecurityAnalyticsPlugin] [wazuh-indexer-0] Failed to initialize LogType config index and builtin log types
[2024-11-05T23:36:17,207][ERROR][o.o.s.a.BackendRegistry ] [wazuh-indexer-0] Not yet initialized (you may need to run securityadmin)
[2024-11-06T09:51:06,303][INFO ][o.o.j.s.JobSweeper ] [wazuh-indexer-0] Running full sweep
[2024-11-06T09:51:07,428][INFO ][o.o.s.s.c.FlintStreamingJobHouseKeeperTask] [wazuh-indexer-0] Starting housekeeping task for auto refresh streaming jobs.
[2024-11-06T09:51:07,429][INFO ][o.o.s.s.c.FlintStreamingJobHouseKeeperTask] [wazuh-indexer-0] Finished housekeeping task for auto refresh streaming jobs.
[2024-11-06T09:56:06,305][INFO ][o.o.j.s.JobSweeper ] [wazuh-indexer-0] Running full sweep
[2024-11-06T10:00:15,011][INFO ][o.o.c.m.MetadataUpdateSettingsService] [wazuh-indexer-0] updating number_of_replicas to [0] for indices [wazuh-monitoring-2024.45w]
[2024-11-06T10:01:06,306][INFO ][o.o.j.s.JobSweeper ] [wazuh-indexer-0] Running full sweep
wazuh-manager-master-0
2024-11-05T21:41:19.845Z INFO [publisher_pipeline_output] pipeline/output.go:151 Connection to backoff(elasticsearch(https://indexer:9200)) established
2024/11/06 05:00:29 wazuh-modulesd:vulnerability-scanner: ERROR: Error updating feed: [json.exception.parse_error.101] parse error at line 1, column 15259834: syntax error while parsing object key - unexpected end of input; expected string literal, trying to re-download the feed.
2024/11/06 09:46:35 rootcheck: INFO: Ending rootcheck scan.
2024-11-06T09:49:22.949Z INFO log/harvester.go:333 File is inactive: /var/ossec/logs/alerts/alerts.json. Closing because close_inactive of 5m0s reached.
2024/11/06 10:42:06 wazuh-modulesd:syscollector: INFO: Starting evaluation.
2024/11/06 10:42:10 wazuh-modulesd:syscollector: INFO: Evaluation finished.
wazuh-manager-worker-0
[cont-init.d] done.
[services.d] starting services
starting Filebeat
2024/11/06 03:09:28 wazuh-modulesd: WARNING: Could not connect to socket 'queue/cluster/c-internal.sock': Connection refused (111).
2024/11/06 03:09:35 wazuh-modulesd: ERROR: Could not send message through the cluster after '10' attempts.
2024/11/06 03:09:35 wazuh-modulesd:agent-upgrade: ERROR: (8123): There has been an error executing the request in the tasks manager.
[services.d] done.
2024-11-06T03:09:40.979Z INFO instance/beat.go:455 filebeat start running.
2024-11-06T03:09:41.079Z INFO memlog/store.go:119 Loading data file of '/var/lib/filebeat/registry/filebeat' succeeded. Active transaction id=0
2024/11/06 04:09:24 wazuh-modulesd:syscollector: INFO: Starting evaluation.
2024/11/06 04:09:30 wazuh-modulesd:syscollector: INFO: Evaluation finished.
2024/11/06 04:13:57 wazuh-modulesd:vulnerability-scanner: ERROR: Error updating feed: Invalid line. file: queue/vd_updater/tmp/contents/vd_1.0.0_vd_4.8.0_1052170_1730719775.json, trying to re-download the feed.
2024/11/06 04:13:57 wazuh-modulesd:vulnerability-scanner: INFO: Initiating update feed process.
2024/11/06 04:13:58 wazuh-modulesd:vulnerability-scanner: ERROR: Error updating feed: Unable to find resource., trying to re-download the feed.
From most of the logs, it looks like each pod had some hiccups but eventually started properly. For the worker pod, however, the issues appear to be with the vulnerability scanner, and I suspect they are unrelated to the worker's startup or to the dashboard problem.
As part of my troubleshooting, I noticed the following when trying to reach the indexer from the dashboard pod:
on the dashboard pod:
sh-5.2$ curl https://indexer:9200
curl: (60) SSL certificate problem: unable to get local issuer certificate
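In case it helps narrow this down: the curl above uses the system CA bundle, so the error only says that the indexer's certificate is not signed by a CA curl already trusts, which is expected with self-signed certificates. A minimal check, assuming the deployment's root CA is mounted in the dashboard pod (the path below is a guess and may differ in this setup), would be:

sh-5.2$ curl --cacert /usr/share/wazuh-dashboard/certs/root-ca.pem https://indexer:9200   # verify against the deployment's own root CA
sh-5.2$ curl -k https://indexer:9200   # skip verification entirely; diagnostics only

If TLS itself is healthy, both should reach the indexer and typically come back with an HTTP 401/Unauthorized from the security plugin rather than an SSL error.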
logs on indexer pod:
[2024-11-06T12:27:29,550][ERROR][o.o.s.s.h.n.SecuritySSLNettyHttpServerTransport] [wazuh-indexer-0] Exception during establishing a SSL connection: javax.net.ssl.SSLHandshakeException: Insufficient buffer remaining for AEAD cipher fragment (2). Needs to be more than tag size (16)
javax.net.ssl.SSLHandshakeException: Insufficient buffer remaining for AEAD cipher fragment (2). Needs to be more than tag size (16)
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:130) ~[?:?]
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:378) ~[?:?]
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:321) ~[?:?]
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:316) ~[?:?]
at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:134) ~[?:?]
at java.base/sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:736) ~[?:?]
at java.base/sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:691) ~[?:?]
at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:506) ~[?:?]
at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:482) ~[?:?]
at java.base/javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:679) ~[?:?]
at io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:310) ~[netty-handler-4.1.107.Final.jar:4.1.107.Final]
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1445) ~[netty-handler-4.1.107.Final.jar:4.1.107.Final]
at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1338) ~[netty-handler-4.1.107.Final.jar:4.1.107.Final]
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1387) ~[netty-handler-4.1.107.Final.jar:4.1.107.Final]
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529) ~[netty-codec-4.1.107.Final.jar:4.1.107.Final]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468) ~[netty-codec-4.1.107.Final.jar:4.1.107.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290) ~[netty-codec-4.1.107.Final.jar:4.1.107.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) [netty-transport-4.1.107.Final.jar:4.1.107.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [netty-transport-4.1.107.Final.jar:4.1.107.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) [netty-transport-4.1.107.Final.jar:4.1.107.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.107.Final.jar:4.1.107.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) [netty-transport-4.1.107.Final.jar:4.1.107.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [netty-transport-4.1.107.Final.jar:4.1.107.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.107.Final.jar:4.1.107.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [netty-transport-4.1.107.Final.jar:4.1.107.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) [netty-transport-4.1.107.Final.jar:4.1.107.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:689) [netty-transport-4.1.107.Final.jar:4.1.107.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:652) [netty-transport-4.1.107.Final.jar:4.1.107.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) [netty-transport-4.1.107.Final.jar:4.1.107.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) [netty-common-4.1.107.Final.jar:4.1.107.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.107.Final.jar:4.1.107.Final]
at java.base/java.lang.Thread.run(Thread.java:1583) [?:?]
Caused by: javax.crypto.BadPaddingException: Insufficient buffer remaining for AEAD cipher fragment (2). Needs to be more than tag size (16)
at java.base/sun.security.ssl.SSLCipher$T13GcmReadCipherGenerator$GcmReadCipher.decrypt(SSLCipher.java:1864) ~[?:?]
at java.base/sun.security.ssl.SSLEngineInputRecord.decodeInputRecord(SSLEngineInputRecord.java:239) ~[?:?]
at java.base/sun.security.ssl.SSLEngineInputRecord.decode(SSLEngineInputRecord.java:196) ~[?:?]
at java.base/sun.security.ssl.SSLEngineInputRecord.decode(SSLEngineInputRecord.java:159) ~[?:?]
at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:111) ~[?:?]
... 27 more
[2024-11-06T12:27:29,558][WARN ][o.o.h.AbstractHttpServerTransport] [wazuh-indexer-0] caught exception while handling client http traffic, closing connection Netty4HttpChannel{localAddress=/20.129.0.244:9200, remoteAddress=/20.129.0.245:43586}
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: Insufficient buffer remaining for AEAD cipher fragment (2). Needs to be more than tag size (16)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:499) ~[netty-codec-4.1.107.Final.jar:4.1.107.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290) ~[netty-codec-4.1.107.Final.jar:4.1.107.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) [netty-transport-4.1.107.Final.jar:4.1.107.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [netty-transport-4.1.107.Final.jar:4.1.107.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) [netty-transport-4.1.107.Final.jar:4.1.107.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.107.Final.jar:4.1.107.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) [netty-transport-4.1.107.Final.jar:4.1.107.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [netty-transport-4.1.107.Final.jar:4.1.107.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.107.Final.jar:4.1.107.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [netty-transport-4.1.107.Final.jar:4.1.107.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) [netty-transport-4.1.107.Final.jar:4.1.107.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:689) [netty-transport-4.1.107.Final.jar:4.1.107.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:652) [netty-transport-4.1.107.Final.jar:4.1.107.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) [netty-transport-4.1.107.Final.jar:4.1.107.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) [netty-common-4.1.107.Final.jar:4.1.107.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.107.Final.jar:4.1.107.Final]
at java.base/java.lang.Thread.run(Thread.java:1583) [?:?]
Caused by: javax.net.ssl.SSLHandshakeException: Insufficient buffer remaining for AEAD cipher fragment (2). Needs to be more than tag size (16)
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:130) ~[?:?]
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:378) ~[?:?]
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:321) ~[?:?]
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:316) ~[?:?]
at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:134) ~[?:?]
at java.base/sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:736) ~[?:?]
at java.base/sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:691) ~[?:?]
at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:506) ~[?:?]
at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:482) ~[?:?]
at java.base/javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:679) ~[?:?]
at io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:310) ~[netty-handler-4.1.107.Final.jar:4.1.107.Final]
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1445) ~[netty-handler-4.1.107.Final.jar:4.1.107.Final]
at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1338) ~[netty-handler-4.1.107.Final.jar:4.1.107.Final]
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1387) ~[netty-handler-4.1.107.Final.jar:4.1.107.Final]
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529) ~[netty-codec-4.1.107.Final.jar:4.1.107.Final]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468) ~[netty-codec-4.1.107.Final.jar:4.1.107.Final]
... 16 more
Caused by: javax.crypto.BadPaddingException: Insufficient buffer remaining for AEAD cipher fragment (2). Needs to be more than tag size (16)
at java.base/sun.security.ssl.SSLCipher$T13GcmReadCipherGenerator$GcmReadCipher.decrypt(SSLCipher.java:1864) ~[?:?]
at java.base/sun.security.ssl.SSLEngineInputRecord.decodeInputRecord(SSLEngineInputRecord.java:239) ~[?:?]
at java.base/sun.security.ssl.SSLEngineInputRecord.decode(SSLEngineInputRecord.java:196) ~[?:?]
at java.base/sun.security.ssl.SSLEngineInputRecord.decode(SSLEngineInputRecord.java:159) ~[?:?]
at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:111) ~[?:?]
at java.base/sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:736) ~[?:?]
at java.base/sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:691) ~[?:?]
at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:506) ~[?:?]
at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:482) ~[?:?]
at java.base/javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:679) ~[?:?]
at io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:310) ~[netty-handler-4.1.107.Final.jar:4.1.107.Final]
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1445) ~[netty-handler-4.1.107.Final.jar:4.1.107.Final]
at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1338) ~[netty-handler-4.1.107.Final.jar:4.1.107.Final]
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1387) ~[netty-handler-4.1.107.Final.jar:4.1.107.Final]
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529) ~[netty-codec-4.1.107.Final.jar:4.1.107.Final]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468) ~[netty-codec-4.1.107.Final.jar:4.1.107.Final]
... 16 more
Could any of these be hints as to what the problem is? Please let me know if I should provide further information. Thank you.
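In case it is useful, a minimal way to inspect the handshake directly from the dashboard pod, assuming openssl is available there and that the root CA path below is correct (both are assumptions), would be:

sh-5.2$ openssl s_client -connect indexer:9200 -CAfile /usr/share/wazuh-dashboard/certs/root-ca.pem </dev/null
# "Verify return code: 0 (ok)" near the end means the chain verifies against that CA;
# a verify error points back at the self-signed certificates generated for the install.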
@Sagarsinghinfra360 thank you for the fix. Unfortunately, it appears to make Wazuh insecure and unsuitable for production environments, so I would like to find a different solution.