panic: runtime error: send on closed channel #151
Comments
I don't know why your cluster is not electing a new leader for that partition; as you can see from the logs, once the broker goes down we do fetch updated metadata (twice!), which should include the new leader. Perhaps it's just a timing issue (the new leader is not elected yet, so we have to drop messages while we wait, to avoid overflowing)? I'm not sure. The panic is due to a subtle race condition; if the […]
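For context, here is a minimal sketch of the general failure mode behind a "send on closed channel" panic in Go — this is illustrative only, not Sarama's actual code: one goroutine keeps sending while another tears the channel down without coordination.

```go
package main

import (
	"sync"
	"time"
)

func main() {
	msgs := make(chan string, 8)
	var wg sync.WaitGroup

	// Sender: keeps queueing messages, unaware the channel may be closed.
	wg.Add(1)
	go func() {
		defer wg.Done()
		for i := 0; i < 100; i++ {
			msgs <- "message" // panics if the channel is closed concurrently
			time.Sleep(time.Millisecond)
		}
	}()

	// "Broker died" path: closes the channel without coordinating with
	// the sender above -- this is the race.
	go func() {
		time.Sleep(10 * time.Millisecond)
		close(msgs)
	}()

	wg.Wait()
}
```

Running this reliably panics with `send on closed channel`, because in Go a send on a closed channel always panics, even if the sender was already blocked when the close happened.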
FWIW, the alternate producer design I played with earlier (#132) shouldn't have this bug; I'm thinking I should pursue that a little more seriously now.
+1, I think we should adopt the new design, after doing some distributed […]
Not at the moment; I think that's a relatively easy fix on either design, I just haven't found the time to write it, sorry :/
I think it's "ready" in the sense that all the tests pass, and it should work in normal cases. The open questions about it are: […]
I definitely wouldn't use it for production right now, but if you want to load it locally to try it out (I assume you have some sort of local harness, based on all the "localhost" addresses in your logs) then it should work. The API is slightly different, but not excessively so.
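For reference, here is a hedged sketch of how the redesigned async producer is driven in later Sarama releases (the details may differ from the #132 branch as it existed at the time): the caller feeds `Input()` and drains `Errors()`, so a dead broker surfaces as `ProducerError` values rather than a panic or a silent drop. The broker address and topic name are placeholders.

```go
package main

import (
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	config := sarama.NewConfig()
	config.Producer.Return.Errors = true // deliver failures on Errors()

	producer, err := sarama.NewAsyncProducer([]string{"localhost:9092"}, config)
	if err != nil {
		log.Fatalln(err)
	}
	defer producer.AsyncClose()

	// Drain errors so a dead broker shows up as ProducerErrors,
	// not as a panic or an unexplained drop count.
	go func() {
		for perr := range producer.Errors() {
			log.Printf("failed to produce: %v", perr.Err)
		}
	}()

	producer.Input() <- &sarama.ProducerMessage{
		Topic: "events",
		Value: sarama.StringEncoder("hello"),
	}
}
```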
@eapache I'm trying to understand why the current producer drops messages when a broker dies... Is it because […]? In large clusters with a lot of partitions, election can take some time. I wonder how to avoid losing those messages and wait for the new leaders... Is #132 better at handling that case?
Both producers can wait for leader election if so configured (that's in the client config, see `WaitForElection`). The default value for `WaitForElection` is […]
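To make the idea concrete, here is a hypothetical helper (not Sarama's actual implementation) illustrating what a wait-for-election setting amounts to: poll metadata for up to a configured window, sleeping between attempts, instead of dropping messages the moment the leader disappears. The function name and signature are assumptions for illustration.

```go
package kafka

import "time"

// waitForLeader polls the given metadata-fetching callback until it reports
// that a leader is known, or until the configured window elapses.
func waitForLeader(fetchMetadata func() bool, window, step time.Duration) bool {
	deadline := time.Now().Add(window)
	for time.Now().Before(deadline) {
		if fetchMetadata() { // true once the new leader appears in metadata
			return true
		}
		time.Sleep(step) // back off before asking the cluster again
	}
	return false // election outlasted the window; the caller may drop messages
}
```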
This might help with part of #151 by triggering the `WaitForElection` timeout if we're super-eager and request metadata before the cluster even realizes the broker is gone.
The new producer design has been merged, so this should be fixed.
Hi,
I get `panic: runtime error: send on closed channel` when I kill a broker in a Kafka cluster. It looks like `backPressureThreshold` is exceeded and you want to flush messages, but something bad is happening. I also wonder why I get `kafka: Dropped 13803 messages`... What is the reason? I know that one of the brokers is dead, but what about new leaders for the partitions? I use an asynchronous producer.
My configuration: […]
Logs: […]
Panic: […]
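As a rough illustration of the drop behavior asked about above, here is a hypothetical sketch (not Sarama's actual code) of backpressure-based shedding: once the number of buffered-but-unsendable messages crosses a threshold, newer messages are dropped and counted, which is one way a "Dropped N messages" log line can arise while there is no leader to flush to. The threshold value, type, and method names are assumptions.

```go
package kafka

import "log"

const backPressureThreshold = 10000 // assumed value, for illustration only

type buffer struct {
	pending []string
	dropped int
}

// enqueue sheds load once the backlog crosses the threshold, rather than
// letting the in-memory buffer grow without bound while no leader exists.
func (b *buffer) enqueue(msg string) {
	if len(b.pending) >= backPressureThreshold {
		b.dropped++
		return
	}
	b.pending = append(b.pending, msg)
}

// reportDrops logs the shed count, similar in spirit to the line the
// reporter saw.
func (b *buffer) reportDrops() {
	if b.dropped > 0 {
		log.Printf("kafka: Dropped %d messages", b.dropped)
		b.dropped = 0
	}
}
```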