
Cannot produce message with size nearing message.max.bytes #993

Closed
ozac opened this issue Jan 10, 2017 · 11 comments


ozac commented Jan 10, 2017

Description

Using the default message.max.bytes (1,000,000).
Producing a message whose size ranges from 999,918 to 1,000,000 bytes gets the producer into a loop.

Here is a sample output of rdkafka_example.exe:

Waiting for 1
LOG-7-TOPPAR: mybroker:9092/0: mytopic [0] 0+1 msgs
LOG-7-TOPPAR: mybroker:9092/0: mytopic [0] 0+1 msgs
LOG-7-TOPPAR: mybroker:9092/0: mytopic [0] 0+1 msgs
LOG-7-TOPPAR: mybroker:9092/0: mytopic [0] 0+1 msgs
LOG-7-TOPPAR: mybroker:9092/0: mytopic [0] 0+1 msgs
LOG-7-TOPPAR: mybroker:9092/0: mytopic [0] 0+1 msgs
LOG-7-TOPPAR: mybroker:9092/0: mytopic [0] 0+1 msgs
LOG-7-TOPPAR: mybroker:9092/0: mytopic [0] 0+1 msgs
LOG-7-PRODUCE: mybroker:9092/0: No more space in current message (0 messages)
Waiting for 1
LOG-7-TOPPAR: mybroker:9092/0: mytopic [0] 0+1 msgs
LOG-7-TOPPAR: mybroker:9092/0: mytopic [0] 0+1 msgs
LOG-7-TOPPAR: mybroker:9092/0: mytopic [0] 0+1 msgs
LOG-7-TOPPAR: mybroker:9092/0: mytopic [0] 0+1 msgs
LOG-7-TOPPAR: mybroker:9092/0: mytopic [0] 0+1 msgs
LOG-7-TOPPAR: mybroker:9092/0: mytopic [0] 0+1 msgs
LOG-7-TOPPAR: mybroker:9092/0: mytopic [0] 0+1 msgs
LOG-7-TOPPAR: mybroker:9092/0: mytopic [0] 0+1 msgs
LOG-7-TOPPAR: mybroker:9092/0: mytopic [0] 0+1 msgs
LOG-7-TOPPAR: mybroker:9092/0: mytopic [0] 0+1 msgs
Waiting for 1
LOG-7-PRODUCE: mybroker:9092/0: No more space in current message (0 messages)
LOG-7-TOPPAR: mybroker:9092/0: mytopic [0] 0+1 msgs
LOG-7-TOPPAR: mybroker:9092/0: mytopic [0] 0+1 msgs
LOG-7-TOPPAR: mybroker:9092/0: mytopic [0] 0+1 msgs
LOG-7-TOPPAR: mybroker:9092/0: mytopic [0] 0+1 msgs
LOG-7-TOPPAR: mybroker:9092/0: mytopic [0] 0+1 msgs
LOG-7-TOPPAR: mybroker:9092/0: mytopic [0] 0+1 msgs
LOG-7-TOPPAR: mybroker:9092/0: mytopic [0] 0+1 msgs
LOG-7-TOPPAR: mybroker:9092/0: mytopic [0] 0+1 msgs
Waiting for 1
LOG-7-TOPPAR: mybroker:9092/0: mytopic [0] 0+1 msgs
...

Does message.max.bytes refer to the user's message, or to the low-level message, which contains additional headers?

Producing a message with size 1,000,001 or larger returns the expected error from the broker:
Broker: Message size too large

Thanks!


@edenhill (Contributor)

Thanks for your report and sorry for the slow response time.
I'll make a test-case for this to find out what is going on.

@edenhill edenhill added this to the 0.9.5 milestone Mar 7, 2017

ozac commented May 25, 2017

Hi,
I see that 0.9.5 is ready. Was this issue fixed?

Thanks

@edenhill edenhill modified the milestone: next feature Jun 29, 2017

ozac commented Nov 9, 2017

Any update?

Thanks

@edenhill (Contributor)

Sorry, not yet. PRs are welcome!

@xzxxzx401

Hi, we hit the same problem on v0.11.5.
We set message.max.bytes to 1024, then sent a message with length 1023 (the user payload, not including Kafka headers).
We then see the No more space in current MessageSet (0 message(s), 111 bytes) log.
What's more, in this situation the broker thread seems to spin, driving CPU usage to 100% per thread.
It seems this bug has not been fixed?

@edenhill (Contributor)

The problem is that the final size of the message can't be fully known at the time of produce() (where the message length is first checked):
message headers and framing variations, depending on the Kafka message version, may make the wire message larger than the configured maximum.

The workaround for now is to increase message.max.bytes on the producer to allow for slightly larger messages.

@xzxxzx401

So could we fail the message at the second check, when building the MessageSets? Just return an error (for example, MSG_SIZE_TOO_LARGE) to dr_cb to avoid the spin, instead of waiting for the timeout.

@edenhill (Contributor)

Yeah, that sounds like it could work.

@yulifengli

Can you fix the spin-loop bug (100% CPU usage, poll(x, x, 0) looping with a zero timeout)?


fmstephe commented Sep 2, 2019

The workaround for now is to increase message.max.bytes on the producer to allow for slightly larger messages.

It's important to note that this isn't a workaround for the most serious part of this bug: if a message gets very close to the limit, the producer enters an infinite loop.

This is very severe.

Raising the limit may allow messages to go through, but doesn't protect us from the possibility of an infinite loop.

edenhill added a commit that referenced this issue Sep 7, 2019
Since the final request size can't be known at produce() time
we allow ProduceRequests larger than message.max.bytes (overshot by at
most one message) and instead rely on the broker enforcing the
MessageSet size.
@edenhill (Contributor)

Feel free to try out the fix on the issue993 branch

It will go into v1.3.0 (Q4).

edenhill added a commit that referenced this issue Oct 14, 2019
edenhill added a commit that referenced this issue Oct 16, 2019