Add ability to queue/spool to disk #575
Comments
It is great that you created this ticket; I wanted to do so really soon. It will also be useful in case of a D/DoS attack: during an attack, Packetbeat might stop outputting to avoid adding to the traffic congestion, while still capturing. Once the attack is over, Packetbeat would output the recorded data to LS or ES. If we can define this feature more precisely, I would gladly work on it (maybe with someone else, if that's achievable?). But anyway, I won't be able to begin working on this potential feature right now, as I have DNS over TCP to finish first and other projects to work on.
+1 for this feature, I think it makes a lot of sense to have it for all Beats. It also came up in a discussion about building a Docker log driver that doesn't lose lines based on libbeat: https://github.com/elastic/libbeat/issues/37
This is one of the best features nxlog has. It gives operations a lot of flexibility regarding the availability of the central infrastructure. +1 for Beats getting an internal queue.
+1 for the feature. @McStork did you have a chance to look into it yet? Wondering if I could help. How would you check whether Logstash is available? The only thing I found in this regard is https://discuss.elastic.co/t/what-is-a-recommended-healthcheck-to-use-for-logstash/27691
@blubbi321 Hi. Well, I looked at ways to implement it.
There are Go libraries that are backed by queue providers (Redis, ...), but that doesn't suit the Beats' lightweight expectations.
Instead of going through the hassle of writing a library, some chose to implement queueing directly in the event processing pipeline. That's what the developers of Heka, another data collector/shipper written in Go, did using Protobuf. So those are the two main ways to go about it.
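For illustration, a minimal sketch of that second approach: framing events into an append-only spool file with a length prefix, the payload being opaque bytes (protobuf-encoded in Heka's case). This is not Heka's or libbeat's actual code, just the general idea:

```go
// Package spool sketches length-prefixed framing of events in a spool file.
package spool

import (
	"encoding/binary"
	"io"
)

// WriteFrame appends one length-prefixed event to w. The payload is opaque
// here; in Heka's case it would be a protobuf-serialized message.
func WriteFrame(w io.Writer, payload []byte) error {
	var hdr [4]byte
	binary.BigEndian.PutUint32(hdr[:], uint32(len(payload)))
	if _, err := w.Write(hdr[:]); err != nil {
		return err
	}
	_, err := w.Write(payload)
	return err
}

// ReadFrame reads the next length-prefixed event from r.
func ReadFrame(r io.Reader) ([]byte, error) {
	var hdr [4]byte
	if _, err := io.ReadFull(r, hdr[:]); err != nil {
		return nil, err
	}
	payload := make([]byte, binary.BigEndian.Uint32(hdr[:]))
	_, err := io.ReadFull(r, payload)
	return payload, err
}
```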
++ this feature
NSQ's disk queue seems like a good implementation; I'd like to use it directly:
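A hedged usage sketch of that package, assuming the `New` signature currently in the nsqio/go-diskqueue repo (check the repo before relying on this):

```go
package main

import (
	"fmt"
	"os"
	"time"

	diskqueue "github.com/nsqio/go-diskqueue"
)

func main() {
	_ = os.MkdirAll("/tmp/spool", 0o755) // dataPath must exist

	// go-diskqueue reports through a caller-supplied log function.
	logf := func(lvl diskqueue.LogLevel, f string, args ...interface{}) {
		fmt.Printf(f+"\n", args...)
	}

	// name, dataPath, maxBytesPerFile, minMsgSize, maxMsgSize,
	// syncEvery (writes between fsyncs), syncTimeout, logf
	dq := diskqueue.New("events", "/tmp/spool", 64*1024*1024,
		1, 1<<20, 2500, 2*time.Second, logf)
	defer dq.Close()

	_ = dq.Put([]byte(`{"message":"hello"}`)) // enqueue one event

	msg := <-dq.ReadChan() // dequeue; blocks until data is available
	fmt.Println(string(msg))
}
```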
Using a local nsqd service could also be an option, so having a native nsq output in all Beats would be the only thing needed.
@elvarb nsq could be a dedicated output option, just like the kafka output we have now, but here the disk queue would be used internally as a safe local buffer. Directly using a local nsqd would be too heavy, I think.
@medcl I have used nsqd as a local queue in metric collection with good success. It uses very few resources and gives me the option of encrypting transfers and of using one of the many nsq utilities (nsq_to_nsq, nsq_to_file, nsq_to_http, for example), though it does depend on the volume of data the host is gathering. Regarding the nsq go-diskqueue package, I'm glad to see that it is in its own repo now; there were a few requests to separate it from the main program because of its usefulness. Other interesting solutions I have found that implement a local disk queue in Go:
For some use cases we should also consider user modification of spooled data as a potentially bad thing. Preventing it might not be possible, but perhaps we can at least monitor for potential modification and report back a chain of custody with the data. We would need to be exactly right whenever we conclude that data was or was not modified. We could certainly have an 'unsure' verdict for the many situations where the Beat may have been off and we can't be sure what happened.
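One sketch-level way to do that monitoring: seal each spooled segment with an HMAC at write time and verify it at read time, reporting 'unsure' when no seal exists. Key management (keeping the key out of reach of whoever can edit the spool) is the hard part and is assumed away here; all names are illustrative:

```go
// Package spoolaudit sketches tamper detection for spooled segments.
package spoolaudit

import (
	"crypto/hmac"
	"crypto/sha256"
	"os"
)

// SealSegment stores an HMAC-SHA256 of the segment next to it.
func SealSegment(key []byte, seg string) error {
	data, err := os.ReadFile(seg)
	if err != nil {
		return err
	}
	m := hmac.New(sha256.New, key)
	m.Write(data)
	return os.WriteFile(seg+".mac", m.Sum(nil), 0o600)
}

// VerifySegment returns "intact", "modified", or "unsure" (no seal present,
// e.g. the Beat was stopped before sealing, as anticipated above).
func VerifySegment(key []byte, seg string) string {
	want, err := os.ReadFile(seg + ".mac")
	if err != nil {
		return "unsure"
	}
	data, err := os.ReadFile(seg)
	if err != nil {
		return "unsure"
	}
	m := hmac.New(sha256.New, key)
	m.Write(data)
	if hmac.Equal(m.Sum(nil), want) {
		return "intact"
	}
	return "modified"
}
```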
This feature would be really nice for those who send log or event data directly to Beats and would like the service to be more resilient. With an on-disk queue, it would be possible to flush memory to disk and restart even while back-end services are unavailable or running too slowly (much like rsyslog's memory and disk queue mechanism).
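For reference, this is roughly what rsyslog's disk-assisted queue looks like (hostname and limits are illustrative, not a recommendation):

```
# forward with a disk-assisted in-memory queue (rsyslog "action()" syntax)
action(type="omfwd" target="collector.example.com" port="514" protocol="tcp"
       queue.type="LinkedList"        # in-memory linked-list queue
       queue.filename="fwd_spool"     # setting a filename enables spill-to-disk
       queue.maxDiskSpace="100m"      # cap the on-disk spool size
       queue.saveOnShutdown="on"      # persist queued messages across restarts
       action.resumeRetryCount="-1")  # retry forever instead of dropping
```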
@bfgoodrich , |
I'm closing this issue in favor of the ongoing meta issue. All Beats now support a configurable queue; for example, see the Filebeat docs: https://www.elastic.co/guide/en/beats/filebeat/current/configuring-internal-queue.html
Spooling to disk meta issue: #6859
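For example, the memory queue can be tuned in filebeat.yml like this (values are illustrative; see the linked docs for the current defaults):

```yaml
queue.mem:
  events: 4096           # max events held in memory
  flush.min_events: 512  # publish in batches of at least this many...
  flush.timeout: 5s      # ...or after this long, whichever comes first
```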
Currently it appears that with Packetbeat, if the output destination (Logstash, Elasticsearch, etc.) is unavailable, we will retry, but we do not queue/store the data locally so that recovery can happen in the case of a network or service outage.
Currently there is a `max_retries` setting that appears to be standard across the output plugins. My suggestion would be to add functionality and appropriate settings for local queueing. For example:

- Queue to memory, dropping events once `max_retries` is met.
- Queue to disk once `max_retries` is met and the optionally configured memory buffer is full. The user can configure the max amount of disk space that will be used before the oldest events are dropped; perhaps default to 100MB, for example.

This functionality is crucial for shipping and really makes for a flexible deployment topology.
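To make the suggestion concrete, here is a minimal sketch of that policy: memory buffer first, spill to size-capped segment files on disk, drop the oldest segments past the budget. Names and limits are illustrative, not actual Beats APIs, and a real implementation would batch many events per segment file:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"sort"
)

// Spool buffers events in memory and spills overflow to segment files.
type Spool struct {
	dir      string
	maxBytes int64 // disk budget, e.g. 100 MB as suggested above
	mem      chan []byte
	seq      int
}

func NewSpool(dir string, memEvents int, maxBytes int64) *Spool {
	_ = os.MkdirAll(dir, 0o755)
	return &Spool{dir: dir, maxBytes: maxBytes, mem: make(chan []byte, memEvents)}
}

// Publish tries the in-memory buffer first and falls back to disk.
func (s *Spool) Publish(event []byte) error {
	select {
	case s.mem <- event:
		return nil
	default:
		return s.writeSegment(event)
	}
}

func (s *Spool) writeSegment(event []byte) error {
	s.seq++
	name := filepath.Join(s.dir, fmt.Sprintf("%012d.seg", s.seq))
	if err := os.WriteFile(name, event, 0o644); err != nil {
		return err
	}
	return s.enforceBudget()
}

// enforceBudget deletes the oldest segments until total size <= maxBytes.
func (s *Spool) enforceBudget() error {
	segs, err := filepath.Glob(filepath.Join(s.dir, "*.seg"))
	if err != nil {
		return err
	}
	sort.Strings(segs) // zero-padded names sort oldest-first
	var total int64
	sizes := make(map[string]int64, len(segs))
	for _, p := range segs {
		if fi, statErr := os.Stat(p); statErr == nil {
			sizes[p] = fi.Size()
			total += fi.Size()
		}
	}
	for _, p := range segs {
		if total <= s.maxBytes {
			break
		}
		total -= sizes[p]
		_ = os.Remove(p) // drop oldest events first
	}
	return nil
}

func main() {
	sp := NewSpool("/tmp/beat-spool", 1024, 100<<20)
	_ = sp.Publish([]byte(`{"message":"queued event"}`))
}
```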