Filebeat spamming errors when json.keys_under_root: true and message is not json #6045
Are these messages coming from the same log file? Could you share some log output?
If your container logs anything that is not valid JSON, Filebeat will log this error. Our use case is that we log around 10k/s valid JSON lines and another 10k/s invalid ones, and for each of these invalid lines logged across multiple applications and containers, Filebeat logs an error like that. Imagine 10k Filebeat errors/s... it's very annoying. Take the Logstash json filter, for example: when the message you are trying to parse is not valid JSON, it only tags the record with a "cannot parse" tag.
@felipejfc I definitely see the problem if JSON and non-JSON logs are mixed in a single log file. Unfortunately this can be the case for Docker, as a Docker image can output logs from two different services in one stream. Is that your use case? But if the logs are in files, different prospectors with and without JSON decoding could be specified, for example based on image name in autodiscovery. I think I understand the problem and I like your proposal to make it a config option; I just want to make sure we don't add this "feature" if there is another way to solve it.
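For the file-based case, a sketch of the autodiscover approach described above might look like this (the image name, provider layout, and condition are illustrative assumptions, not taken from this thread):

```yaml
# Hypothetical Filebeat 6.x autodiscover setup: enable JSON decoding only
# for containers whose image matches; read everything else as plain text.
filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            contains:
              docker.container.image: my-json-app   # assumed image name
          config:
            - type: docker
              containers.ids:
                - "${data.docker.container.id}"
              json.keys_under_root: true
              json.add_error_key: true
        - condition:
            not:
              contains:
                docker.container.image: my-json-app
          config:
            - type: docker
              containers.ids:
                - "${data.docker.container.id}"
```

This keeps the decode-error problem confined to prospectors that actually expect JSON, at the cost of maintaining per-image conditions.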
Example: a Spring Boot Java application running in Docker with the json-file log driver, logging JSON to stdout. This means that all output from the process ends up in /var/lib/docker/containers//-json.log. We have the JAVA_TOOL_OPTIONS env var set, which means the JVM picks it up and logs that (plain text) when starting. After that, Spring Boot sends JSON logs. All these logs are printed on stdout and end up in the same file... the first two rows:
Running filebeat with config:
Expected behaviour would be to try JSON extraction and log the message as-is if it fails... imho. Changing the prospector from docker to log and using the decode_json_fields processor doesn't work, since the when condition isn't evaluated before it tries to parse the line as JSON... it seems.
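For reference, the processor-based approach mentioned above looks roughly like this (a sketch; the field names are assumptions). As the comment notes, a `when` condition gates the processor itself, but cannot prevent JSON decoding configured via the input's own `json.*` settings:

```yaml
# Sketch: decode the message field as JSON in a processor instead of
# via the input's json.* options, so non-JSON lines pass through untouched.
processors:
  - decode_json_fields:
      fields: ["message"]
      target: ""          # merge decoded keys into the event root
      overwrite_keys: true
```

The trade-off is that `decode_json_fields` runs after the event is built, so input-level options like `json.keys_under_root` cannot be replicated exactly.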
Sounds like we should add a config option |
Sounds great. I also get many of these:
But that's the same thing I guess, and will be handled by
@magnusheino Looks like almost the same error. Do you get them in two different places? |
@ruflin ++ on adding a
@gmoskovicz You mean
So this is a regression from 5.x, where this error wasn't thrown. Not a nice surprise when upgrading Filebeat. We have a single prospector deployed as a DaemonSet (one Filebeat per machine picking up all logs for that machine) on a Kubernetes cluster, picking up all logs and preprocessing multiline for both JSON and non-JSON logs:
This is a direct replacement for fluentd, and allows nice logs to be persisted to Kafka for Logstash to process later. We cannot really use autodiscovery, as we would have to document for all components (100+) whether they log JSON, and IMHO that's up to the containers, some of which are system components we don't control. The only possibility for us is to completely turn off logging in Filebeat 😞 @ruflin some way to disable JSON errors, e.g.
I opened #6547 which introduces
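Based on the option name used later in this thread, enabling the suppression from that PR would presumably look like this (a sketch, not taken from the PR itself; the path is illustrative):

```yaml
# Sketch: decode JSON where possible, but keep decode failures out of
# Filebeat's own log (json.ignore_decoding_error, added by the linked PR).
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app/*.log        # illustrative path
    json.keys_under_root: true
    json.ignore_decoding_error: true
```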
Nice. Thanks for the heads up. Great addition! |
🎉 Thanks for the quick turnaround! |
@ruflin will that option be released in 6.2.5 or 6.3 or 7.x? |
@Tapppi 6.3 and 7.x |
@ruflin Is there any workaround in 6.2.4? These logs are killing my filebeat instances. I'm using the
I'm not even sure why these errors are being generated because all of my logs are JSON (using the json-files log driver):
If I use the

Are Docker images for non-production releases (like 6.3) available?
@joshuabaird There is unfortunately no workaround for 6.2 here. I'm wondering why you get the error you see. Could you open a topic on discuss for this, share what you shared above and add some example log lines? https://discuss.elastic.co/c/beats/filebeat |
Hi @ruflin. I'm running version 6.4 of Filebeat and have added

Should the code in the link below be an if-else statement, rather than two separate 'if' statements, to cater for the above scenario? https://github.com/amomchilov/Filebeat/blob/master/harvester/reader/json.go#L32
Hi, |
Any fix or update? |
I have tested it and cannot reproduce with the following configuration and input:

```yaml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - test.log
  json.add_error_key: false
  json.ignore_decoding_error: true
output.console:
  enabled: true
  codec.json:
    pretty: true
```

test.json:
The code is also correct. My guess is that the problem is that
@kvch Thank you for investigating. Our setup is slightly different; both
This still seems to be an issue with 6.6.1:
This is produced with the following config:
Is the consensus here that |
@joshuabaird You have set

On the other hand, I don't think you need the
@n-forbes-SB No, it is a valid setup. That's why we have two configuration options. Some people prefer to see errors in the event, some would like to see it in the logs. |
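As a sketch of the distinction described here, using the two options named earlier in the thread (the path is an illustrative assumption):

```yaml
# Sketch: the two options control *where* a decode failure surfaces.
filebeat.inputs:
  - type: log
    paths:
      - /var/log/mixed.log            # illustrative path
    json.keys_under_root: true
    json.add_error_key: true          # attach the decode error to the event itself
    json.ignore_decoding_error: true  # keep the failure out of Filebeat's own log
```

Setting both, as above, reports the failure on the event while keeping Filebeat's log quiet; flipping either flag moves or duplicates where the error appears.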
I have a situation where I'd like JSON logs to be parsed and expanded, but non-JSON logs also to be logged. fluentd does that, but with Filebeat it keeps spamming stdout with an error:
Our Kubernetes cluster logs a lot, with all types of logging, JSON and non-JSON messages... how do we solve this issue?
thanks