Cannot produce message with size nearing message.max.bytes #993
Comments
Thanks for your report and sorry for the slow response time.
Hi, thanks.
Any update? Thanks.
Sorry, not yet. PRs are welcome!
Hi, we hit the same problem on version v0.11.5.
The problem is that the final size of the message can't be fully known at the time of produce(), which is where the message length is first checked. The workaround for now is to increase message.max.bytes on the producer to allow for slightly larger messages.
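For reference, here is a minimal sketch of that workaround for a plain C producer; the value 1200000 is only an example headroom, not a recommendation from this thread:

```c
#include <stdio.h>
#include <librdkafka/rdkafka.h>

/* Sketch of the workaround: raise message.max.bytes on the producer so the
 * produce()-time length check leaves headroom for protocol framing overhead. */
static rd_kafka_t *create_producer(void) {
        char errstr[512];
        rd_kafka_conf_t *conf = rd_kafka_conf_new();

        /* Example value only: slightly above the 1,000,000 default. */
        if (rd_kafka_conf_set(conf, "message.max.bytes", "1200000",
                              errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
                fprintf(stderr, "%% message.max.bytes: %s\n", errstr);
                rd_kafka_conf_destroy(conf);
                return NULL;
        }

        /* rd_kafka_new() takes ownership of conf on success. */
        return rd_kafka_new(RD_KAFKA_PRODUCER, conf, errstr, sizeof(errstr));
}
```

Note that the broker still enforces its own message.max.bytes, so broker-side rejection is still possible; raising the producer limit only gives the client-side check some slack.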
So can we just reject the message at the second check, when the MessageSet is built? Return an error (for example MSG_SIZE_TOO_LARGE) to dr_cb to avoid the spin, instead of waiting for the timeout.
Yeah, that sounds like it could work.
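If that approach were taken, the application side could look roughly like this; a hedged sketch assuming the failure is surfaced through the standard delivery report callback (RD_KAFKA_RESP_ERR_MSG_SIZE_TOO_LARGE is the existing librdkafka error code for oversized messages):

```c
#include <stdio.h>
#include <librdkafka/rdkafka.h>

/* Delivery report callback: if the oversized message were failed at
 * MessageSet-build time instead of spinning, the application would see the
 * error here quickly rather than waiting for message.timeout.ms. */
static void dr_msg_cb(rd_kafka_t *rk, const rd_kafka_message_t *rkmessage,
                      void *opaque) {
        if (rkmessage->err == RD_KAFKA_RESP_ERR_MSG_SIZE_TOO_LARGE)
                fprintf(stderr, "%% Message of %zu bytes too large: %s\n",
                        rkmessage->len, rd_kafka_err2str(rkmessage->err));
        else if (rkmessage->err)
                fprintf(stderr, "%% Delivery failed: %s\n",
                        rd_kafka_err2str(rkmessage->err));
}

/* Register on the producer configuration before rd_kafka_new():
 *     rd_kafka_conf_set_dr_msg_cb(conf, dr_msg_cb);
 */
```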
Can you fix the spin loop / 100% CPU usage bug (poll(x, x, 0) with a 0 timeout looping)?
It's important to note that this isn't a workaround for the most serious part of this bug. If your message gets very close to the limit we enter an infinite loop. This is very severe. Raising the limit may allow messages to go through, but doesn't protect us from the possibility of an infinite loop.
Since the final request size can't be known at produce() time we allow ProduceRequests larger than message.max.bytes (overshot by at most one message) and instead rely on the broker enforcing the MessageSet size.
Feel free to try out the fix on the issue993 branch. It will go into v1.3.0 (Q4).
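To make the semantics of that change concrete, here is a rough illustrative sketch; the types and helpers below (queued_msg, wire_size_of, fill_request, the assumed framing overhead) are hypothetical and not librdkafka internals:

```c
#include <stddef.h>

/* Hypothetical queued-message type, for illustration only. */
typedef struct queued_msg {
        size_t payload_len;
        struct queued_msg *next;
} queued_msg;

/* Assumed per-message framing overhead; the real overhead depends on the
 * MessageSet/record format and is not known exactly at produce() time. */
enum { ASSUMED_FRAMING_OVERHEAD = 70 };

static size_t wire_size_of(const queued_msg *m) {
        return m->payload_len + ASSUMED_FRAMING_OVERHEAD;
}

/* Fill one ProduceRequest: keep appending queued messages until the limit is
 * reached. The request may overshoot message_max_bytes by at most the single
 * message that crossed the boundary; the broker's own message.max.bytes check
 * remains the final authority. Returns the first message not consumed. */
static queued_msg *fill_request(queued_msg *head, size_t message_max_bytes) {
        size_t req_size = 0;
        while (head != NULL) {
                req_size += wire_size_of(head);  /* "append" head to request */
                head = head->next;
                if (req_size >= message_max_bytes)
                        break;                   /* overshoot <= one message */
        }
        return head;
}
```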
Description
Using the default message.max.bytes (1,000,000), trying to produce a message with a size ranging from 999,918 to 1,000,000 gets into a loop.
Here is a sample output of rdkafka_example.exe:
Does message.max.bytes refer to the user's message, or to the low-level message which contains additional headers? Producing a message with size 1,000,001 or larger returns the expected error from the broker:
Broker: Message size too large
Thanks!
How to reproduce
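The reporter's original steps are not shown here; the following is a minimal sketch reconstructed from the Description above, with the bootstrap server, topic name, and exact payload size as assumptions and message.max.bytes left at its default:

```c
#include <stdio.h>
#include <stdlib.h>
#include <librdkafka/rdkafka.h>

int main(void) {
        char errstr[512];
        rd_kafka_conf_t *conf = rd_kafka_conf_new();

        /* Assumed broker address; message.max.bytes stays at its default. */
        rd_kafka_conf_set(conf, "bootstrap.servers", "localhost:9092",
                          errstr, sizeof(errstr));

        rd_kafka_t *rk = rd_kafka_new(RD_KAFKA_PRODUCER, conf,
                                      errstr, sizeof(errstr));
        if (!rk) {
                fprintf(stderr, "%% Failed to create producer: %s\n", errstr);
                return 1;
        }

        /* Payload size in the problematic range just below the 1,000,000
         * default: large enough that payload plus framing exceeds the limit. */
        size_t len = 999950;
        char *payload = calloc(1, len);

        rd_kafka_topic_t *rkt = rd_kafka_topic_new(rk, "test", NULL);
        if (rd_kafka_produce(rkt, RD_KAFKA_PARTITION_UA, RD_KAFKA_MSG_F_COPY,
                             payload, len, NULL, 0, NULL) == -1)
                fprintf(stderr, "%% produce() failed: %s\n",
                        rd_kafka_err2str(rd_kafka_last_error()));

        /* On affected versions this is where the producer spins instead of
         * delivering the message or reporting an error. */
        rd_kafka_flush(rk, 10000);

        free(payload);
        rd_kafka_topic_destroy(rkt);
        rd_kafka_destroy(rk);
        return 0;
}
```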
Checklist
Please provide the following information:
Provide logs (with debug=.. as necessary) from librdkafka