Fluentd ignores the buffer queue_length_limit configuration value #1545
Does this happen with the latest fluentd v0.14?
Error "queue_length_limit" ignore happens even with latest fluent v0.14.20. It happens [warn]: parameter 'queue_length_limit' in |
I already commented at fluent-plugins-nursery/fluent-plugin-bigquery#142.
Yes, joker1007 is correct: https://docs.fluentd.org/v0.14/articles/buffer-plugin-overview#configuration-parameters
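For reference (this is not the reporter's original config, which is not shown in this thread), here is a minimal sketch of a v0.14-style `<buffer>` section, assuming the parameter name `queue_limit_length` as listed in the linked buffer plugin docs; the match pattern, paths, and sizes are made up for illustration:

```
<match app.**>
  @type file
  path /var/log/fluentd/app           # hypothetical output path
  <buffer>
    @type file
    path /var/log/fluentd/buffer/app  # hypothetical buffer path
    chunk_limit_size 8m
    # v0.14 parameter name; a key spelled 'queue_length_limit' is not
    # recognized and only triggers a startup warning.
    queue_limit_length 64
  </buffer>
</match>
```

With the misspelled key, fluentd still starts and only logs a warning like the one quoted above.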
Shouldn't fluentd refuse to start if the config is wrong, then? In which version did this name change? All the documentation I used to build our config used the name 'queue_length_limit', and since fluentd started I assumed that was the correct name. Normally config errors mean fluentd won't even start.
It seems that it was resolved by #1545 (comment)
fluentd 0.14.9
CentOS 7.3
Ruby 2.3
Config:
Recently ran into an interesting issue with fluentd.
So something strange happened: our SELinux policies got misconfigured on a customer's machine, and fluentd was denied access whenever it tried to read from or write to its buffer files. As a result, fluentd endlessly tried to create new buffer files to read from and write to. This occurred on a customer machine that is not frequently used, so we did not notice the problem for over a week, at which point fluentd had created almost 5 million completely empty buffer files.

Clearly this is a massive violation of the queue_length_limit configuration option. While the total buffer size limit (500 GB) was not reached, fluentd should still stop creating buffer files once the number of files equals queue_length_limit. This matters especially because fluentd tries to open ALL of the buffer files at once; if I had let it attempt to open all 5 million buffer files, the system would not have had any file descriptors left for other tasks.
I'm sorry I don't have the log file right now to show you the exact error message, but it was a permissions error when opening the "current" buffer file for reading/writing; fluentd would then create a new buffer file and try again with that one. This repeated until we noticed the problem, fixed SELinux, deleted all the empty buffer files, and restarted fluentd.