Support a max bytes per second on beats protocol #662
Comments
+1
+1 this, would be quite handy
For more discussion, also see https://github.com/elastic/filebeat/issues/227
+1 for this, it would be nice to have QoS.
+1
It would be useful to limit the number of events per second that Filebeat can send. For example, if an application server monitored by Filebeat starts generating millions of errors, the resulting spike in traffic could overload Logstash and Elasticsearch. I have seen this exact scenario cause issues while at a client.
@alexander-marquardt Yep, I had the exact same issue. If the connection was ever interrupted, a backlog built up, which was then dumped as fast as Filebeat/Elasticsearch could negotiate to go. That was far faster than our network infrastructure could take (over 30 Gb/s, IIRC), which overloaded our load balancers. Sad times.
I created a new meta issue to track this. Closing this one in favor of https://github.com/elastic/beats/issues/17716
@mostlyjason "This issue has been moved to a repository you don't have access to." |
@amomchilov posted here #17775 |
Currently, Filebeat seems to push data to Logstash at whatever rate it can tail a file. For users whose rate of log generation is variable, this can result in high bandwidth utilization, sometimes unacceptably high.
This ticket proposes introducing a max_bytes_per_second setting in the beats protocol to Logstash. If the log files being followed are written faster than this setting allows, Filebeat would throttle the rate at which it tails the log files, potentially "falling behind". In that case, periods of lower activity would be used to catch up.
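If such a setting were added, it might be configured roughly like this in filebeat.yml. This is a hypothetical sketch: `max_bytes_per_second` is the name proposed in this issue, not a shipped option, and the host is a placeholder.

```yaml
# Hypothetical sketch only -- max_bytes_per_second does not exist in Filebeat;
# the name comes from this issue's proposal.
output.logstash:
  hosts: ["logstash.example.com:5044"]
  # Throttle outgoing traffic on the beats protocol to ~1 MiB/s.
  max_bytes_per_second: 1048576
```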
Discussion required as to how this would impact other beats. It may be more appropriate for this to be a Filebeat specific setting e.g. max_read_bytes_per_second.