fix: remove deadlock possibility by adding resend_queue queue #73
Conversation
Would love to see some tests for it :)
Instead of doing this, couldn't we just make the requeue non-blocking and drop data if there's no space left in the queue? Do we know what the right thing to do in general is for a Logstash output if it can't send data fast enough? Should it block, return an error, or something else?
AFAIK, if the queue is full it will apply backpressure to Logstash, so no new events will be put into the queue until there is space again.
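For illustration, a minimal sketch (not the plugin's actual code) of how a bounded queue produces that backpressure: the producer's `push` blocks as soon as the queue is full and only resumes once the consumer frees a slot. The queue size and event handling here are made up for the example.

```ruby
require "thread"

# A small bounded queue: push blocks once it holds 2 items.
queue = SizedQueue.new(2)

producer = Thread.new do
  5.times do |i|
    queue.push(i)                 # blocks here while the queue is full
    puts "queued event #{i}"
  end
end

consumer = Thread.new do
  5.times do
    event = queue.pop             # frees a slot, unblocking the producer
    sleep 0.1                     # simulate a slow send
    puts "sent event #{event}"
  end
end

[producer, consumer].each(&:join)
```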
I want to avoid changing the architecture or behavior if there is no need to do so.
I'm merging this PR as we need the bugfix urgently. We can continue the discussion here and add further changes in separate PRs.
In order to send data simultaneously, one of two approaches can be used here:
This PR keeps the async approach and prevents the deadlock by adding an additional queue (sized at twice the number of workers).
I wanted to add a minimal set of changes in order to fix #71.
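As a rough illustration of that idea (hypothetical names, not the plugin's real classes or API): failed batches go to a dedicated bounded resend queue sized at twice the worker count, so a worker never blocks indefinitely while requeueing onto the same queue it consumes from.

```ruby
require "thread"

WORKERS = 4

# Bounded retry queue sized at twice the number of workers, as described
# above; all names here are hypothetical.
resend_queue = SizedQueue.new(WORKERS * 2)
main_queue   = Queue.new

# Placeholder for the real HTTP send: pretend roughly a third of attempts fail.
def send_batch(batch)
  rand > 0.3
end

workers = Array.new(WORKERS) do
  Thread.new do
    loop do
      batch = begin
        resend_queue.pop(true)   # non-blocking: take a retry if one is waiting
      rescue ThreadError
        main_queue.pop           # otherwise block for new data
      end
      break if batch == :shutdown
      unless send_batch(batch)
        # Each worker parks at most one failed batch at a time, so with a
        # capacity of 2 * WORKERS this push never blocks the worker.
        resend_queue.push(batch)
      end
    end
  end
end

10.times { |i| main_queue.push("batch-#{i}") }
WORKERS.times { main_queue.push(:shutdown) }
workers.each(&:join)
```

In this sketch, sizing the resend queue at twice the worker count gives every worker room to park a failed batch (with headroom to spare), so requeueing can never fill the queue on its own and the original deadlock scenario does not arise.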