Filebeat: container logs not skipped if json decoding fails #30191
Pinging @elastic/integrations (Team:Integrations)
Hi @Poweranimal, as far as I can see it is not an error in parsing the JSON log entry, but rather the fact that the JSON contains a field with an empty key. As a workaround for your specific scenario you can add the following in the processors list:
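The snippet referenced above is not preserved in this capture. As a purely hypothetical sketch (the target name `empty_key`, and the assumption that the script processor's `Rename` accepts an empty string as a source key, are illustrative, not from the thread), a workaround could rename the empty-keyed field before the event is shipped:

```yaml
processors:
  - script:
      lang: javascript
      source: >
        function process(event) {
          // Assumption: Rename accepts "" as a source key. Moves the
          // empty-keyed field to "empty_key" so Elasticsearch no longer
          // rejects the document.
          event.Rename("", "empty_key");
        }
```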
Pinging @elastic/elastic-agent-data-plane (Team:Elastic-Agent-Data-Plane)
Hi @aspacca, things like this can happen, and currently this results in a situation in which Filebeat doesn't ship any logs to Elasticsearch and creates tons of trash logs that can lead to high logging costs. Something very similar has already happened to us in another way in the past. I think the default behavior shouldn't be:
@Poweranimal could maybe
Hi, we are experiencing the same issue, but with JSON documents sent via the TCP input and decoded via the decode_json_fields processor. It would be great if this option could also be added to the processor itself.
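For context, a minimal sketch of that setup could look like the following (the listen address is a placeholder). `decode_json_fields` does offer `add_error_key` to tag decode failures, but no option to skip or drop the event itself, which is what this comment asks for:

```yaml
filebeat.inputs:
  - type: tcp
    host: "0.0.0.0:9000"    # placeholder listen address

processors:
  - decode_json_fields:
      fields: ["message"]
      target: ""            # decode into the event root
      add_error_key: true   # tags the event with error.message on failure
```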
Hi! We're labeling this issue as Stale.
Hi,
Apparently Filebeat doesn't skip logs collected from containers if parsing the JSON log fails.
Instead, it retries parsing the log line many times per second, indefinitely, and ends up spamming a large amount of logs.
This blocks streaming logs to Elasticsearch entirely, and it can cause huge logging costs if one uses e.g. AWS CloudWatch or a similar service to collect Filebeat's logs.
Filebeat Version:
7.16.2
Filebeat Log
Filebeat Config
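The attached configuration is not preserved in this capture. For illustration only, a container input with inline JSON decoding of the kind described in this report might be shaped like this (the path and option values are assumptions, not the reporter's actual file):

```yaml
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log   # placeholder path
    json.keys_under_root: true      # lift decoded JSON fields to the event root
    json.add_error_key: true        # tag events whose message fails to decode
    # json.ignore_decoding_error: true would silence the repeated decode
    # errors; the issue is about the default behavior when it is unset
```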