Broke Kafka replication when requesting ack != -1 and a message larger than 1000000 bytes #48
Make sure to configure `message.max.bytes`. In code you would use `rd_kafka_conf_set(rk_conf, "message.max.bytes", "NNNNNNNN", ...)`.
Ok, but why would all subsequent runs of the binary fail thereafter?
It sounds to me that one or more of your brokers, but not all, have failed in some way. Did you change `message.max.bytes` on ALL brokers in your cluster and restart them?
I'll have my colleague check the broker config. However, since new topics …
Any news on this?
At this point I'd say just close it. Either I did something wrong with the …
I'm not sure if this is a librdkafka issue or a general Kafka one. I did the following steps:
Any attempt to run the rdkafka_performance binary (regardless of message size) with any ack setting other than 1 then fails with a 5000 ms timeout. Replication appears to be broken from that point on.
I was able to reproduce this at will, using a new topic each time. I wasn't able to reproduce it when using the cluster's default maximum message size of 1000000 bytes. So I suspect there's some sort of issue when the message is larger than 1000000 bytes but smaller than the configured maximum.
Was wondering if you could reproduce this and, if so, had any thoughts?