librdkafka 0.8 can not produce messages when the leader of the topic partition failed #14
Comments
I will push a fix for this soon.
Hi, Magnus.
You are right.
Config property: "topic.metadata.refresh.fast.cnt" (def 10)
Config property: "topic.metadata.refresh.fast.interval.ms" (def 250ms)
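These are global librdkafka configuration properties that control how aggressively metadata is re-fetched after a failure. A minimal sketch of setting them through the C API, assuming a freshly created configuration object (the values shown are just the defaults quoted above):

```c
#include <librdkafka/rdkafka.h>
#include <stdio.h>

/* Sketch: tune the fast metadata refresh used after a leader failure.
 * "10" and "250" are the defaults mentioned in the comment above. */
static rd_kafka_conf_t *make_conf(void) {
        char errstr[512];
        rd_kafka_conf_t *conf = rd_kafka_conf_new();

        if (rd_kafka_conf_set(conf, "topic.metadata.refresh.fast.cnt",
                              "10", errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK ||
            rd_kafka_conf_set(conf, "topic.metadata.refresh.fast.interval.ms",
                              "250", errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK)
                fprintf(stderr, "config error: %s\n", errstr);

        return conf;
}
```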
Give it another shot now on the master branch.
Great! I tested this issue again and found that it is solved.
Hi All, "https://github.com/edenhill/librdkafka/releases/tag/0.8.6" Regards,
The release notes indicate what has been fixed, yes. Or what are you asking?
I'm using "librdkafka_2.10-0.8.2.1" and im facing the same problem. Cannot find info on this bug in release notes. I want to know that, if I use "0.8.6" version, then will this solve problem. Below are few logs generated by stub code. geo redundent kafka process (zookeeper + Broker) Machine 1 : advertised.host.name=sysctrl1.vsepx.broker.com % Sent 3 bytes to topic topic1 partition -1, outQlen[94] |
There is no such librdkafka version as "librdkafka_2.10-0.8.2.1". That looks more like a kafka version. It fails to TCP connect to broker sysctrl1.vsepx.broker.com with address 166.45.146.65 on port 9092. Check:
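As a general note, transport-level failures such as a refused TCP connection are reported through librdkafka's error callback rather than per message. A minimal sketch of registering one, assuming a `conf` object created with `rd_kafka_conf_new()` before the producer handle is created:

```c
#include <librdkafka/rdkafka.h>
#include <stdio.h>

/* Sketch: surface broker connection errors (e.g. failing to connect to
 * sysctrl1.vsepx.broker.com:9092) as they happen. */
static void error_cb(rd_kafka_t *rk, int err, const char *reason, void *opaque) {
        (void)opaque;
        fprintf(stderr, "ERROR %s: %s: %s\n", rd_kafka_name(rk),
                rd_kafka_err2str((rd_kafka_resp_err_t)err), reason);
}

/* Before rd_kafka_new():
 *     rd_kafka_conf_set_error_cb(conf, error_cb);
 */
```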
I'm using "kafka_2.10-0.8.2.1.tgz" for my C Project which has is needed to support geo redundant kafka process (zookeeper + Broker). Machine 1 : Producer1 : Broker IP"sysctrl1.vsepx.broker.com:9092,sysctrl2.vsepx.broker.com:9092" Machine 2 : Broker1 : advertised.host.name=sysctrl1.vsepx.broker.com Scenario: Now, If I kill and restart Producer1, then all the new messages are succesfully sent to Broker2. |
./rdkafka_example
Usage: ./rdkafka_example -C|-P|-L -t <topic> [-p <partition>] [-b host1:port1,host2:port2,..]
librdkafka version 0.8.6 (0x00080600)
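For reference, a minimal sketch of a producer set up along the lines described above, using the 0.8-era C API. The broker hostnames and topic name are taken from this thread; this is an illustration, not the actual stub code:

```c
#include <librdkafka/rdkafka.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void) {
        char errstr[512];
        rd_kafka_conf_t *conf = rd_kafka_conf_new();
        rd_kafka_t *rk;
        rd_kafka_topic_t *rkt;
        const char *payload = "msg";

        /* Both brokers are listed so metadata can still be fetched if one fails. */
        if (rd_kafka_conf_set(conf, "metadata.broker.list",
                              "sysctrl1.vsepx.broker.com:9092,"
                              "sysctrl2.vsepx.broker.com:9092",
                              errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
                fprintf(stderr, "%s\n", errstr);
                return 1;
        }

        rk = rd_kafka_new(RD_KAFKA_PRODUCER, conf, errstr, sizeof(errstr));
        if (!rk) {
                fprintf(stderr, "%s\n", errstr);
                return 1;
        }

        rkt = rd_kafka_topic_new(rk, "topic1", NULL);

        /* RD_KAFKA_PARTITION_UA lets the partitioner choose (the "partition -1"
         * seen in the logs above). */
        if (rd_kafka_produce(rkt, RD_KAFKA_PARTITION_UA, RD_KAFKA_MSG_F_COPY,
                             (void *)payload, strlen(payload),
                             NULL, 0, NULL) == -1)
                fprintf(stderr, "produce failed: %s\n",
                        rd_kafka_err2str(rd_kafka_errno2err(errno)));

        rd_kafka_poll(rk, 1000);   /* serve delivery reports and logs */
        rd_kafka_topic_destroy(rkt);
        rd_kafka_destroy(rk);
        return 0;
}
```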
Broker sysctrl2 is returning 0 brokers, 0 topics, in the metadata reply.
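A hedged sketch of how a metadata reply like that can be inspected from the client side with `rd_kafka_metadata()`, assuming an existing producer handle `rk`:

```c
#include <librdkafka/rdkafka.h>
#include <stdio.h>

/* Sketch: request cluster metadata and print the broker/topic counts that
 * the comment above refers to. */
static void dump_metadata_counts(rd_kafka_t *rk) {
        const struct rd_kafka_metadata *md;
        rd_kafka_resp_err_t err =
                rd_kafka_metadata(rk, 1 /* all topics */, NULL, &md, 5000);

        if (err != RD_KAFKA_RESP_ERR_NO_ERROR) {
                fprintf(stderr, "metadata request failed: %s\n",
                        rd_kafka_err2str(err));
                return;
        }

        printf("%d brokers, %d topics in metadata reply\n",
               md->broker_cnt, md->topic_cnt);
        rd_kafka_metadata_destroy(md);
}
```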
Broker2 Logs :
[2015-12-03 09:28:23,842] INFO 0 successfully elected as leader (kafka.server.ZookeeperLeaderElector)
There's your error:
librdkafka logs (repeat 2):
% Sent 3 bytes to topic topic1 partition -1, outQlen[127]
broker logs cont'd:
[2015-12-03 09:28:24,573] INFO New leader is 0 (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
From producer1 logs:
log_cb : rdkafka#producer-0: [7] [METADATA] [sysctrl2.vsepx.broker.com:9092/bootstrap: Topic topic1 partition 11 Leader 0]
Here, "sysctrl2.." is the leader, but in the next log, "sysctrl1.." is shown as leader.
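Those `log_cb` lines come from a log callback. A minimal sketch of how one could be wired up, together with `debug=metadata` so that METADATA lines like the one above are emitted (an illustration, not the original stub code):

```c
#include <librdkafka/rdkafka.h>
#include <stdio.h>

/* Sketch: print librdkafka log lines, including METADATA debug output. */
static void log_cb(const rd_kafka_t *rk, int level,
                   const char *fac, const char *buf) {
        fprintf(stderr, "log_cb : %s: [%d] [%s] %s\n",
                rd_kafka_name(rk), level, fac, buf);
}

/* Before rd_kafka_new():
 *     rd_kafka_conf_set_log_cb(conf, log_cb);
 *     rd_kafka_conf_set(conf, "debug", "metadata", errstr, sizeof(errstr));
 */
```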
The last log says that broker 0 is leader for partition 11.
broker.id=0
In your example you have the same hostname for both steps 1 and 4; did you mean sysctrl2 in step 4?
Yes sir, sorry for the mistake.
Okay, this is quite an odd thing to do and I don't understand why you are doing it. But what this looks like to the client is:
librdkafka 0.8.6 does not support broker hostname updates, but the master branch does (at least somewhat). Which version of librdkafka are you using? (if you checked out from git please specify the exact git sha by
I am using only one broker (no cluster). Here, as the kafka consumer is dependent on offsets, we need the same "log.dirs" files to restore all the data. I have downloaded librdkafka from the below location. Please let me know if there is an alternate way to achieve this model.
You should try using the latest master; it should support broker name changes. Regarding your geo setup, that is a much bigger discussion which is not really related to librdkafka, so I can't help you there.
Thank you Magnus :) I will get the latest master and give it a try.
Hi, Magnus.
I find that librdkafka 0.8 cannot produce messages to the kafka brokers when the leader of the topic partition fails. Even after I check that a new leader has been elected, the problem is not solved.
I use rdkafka_example.c from the examples folder and create a topic with 3 replicas and 1 partition.
After I restart the producer (rdkafka_example.c, which I use as a producer), the problem is solved.
I am not sure whether this is a bug in librdkafka 0.8.
Thanks.
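As context for the report above, here is a minimal, hedged sketch of a delivery report callback (rdkafka_example.c registers a callback of this kind). With something like this in place, messages produced while the partition has no reachable leader show up as delivery errors instead of silently waiting in the out queue:

```c
#include <librdkafka/rdkafka.h>
#include <stdio.h>

/* Sketch: make it visible whether messages produced around the leader
 * failover were actually delivered or failed. */
static void dr_cb(rd_kafka_t *rk, void *payload, size_t len,
                  rd_kafka_resp_err_t err, void *opaque, void *msg_opaque) {
        (void)rk; (void)payload; (void)opaque; (void)msg_opaque;
        if (err)
                fprintf(stderr, "delivery failed: %s\n", rd_kafka_err2str(err));
        else
                fprintf(stderr, "delivered %zu bytes\n", len);
}

/* Before rd_kafka_new():
 *     rd_kafka_conf_set_dr_cb(conf, dr_cb);
 * and call rd_kafka_poll(rk, ...) regularly to serve the callbacks. */
```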