rdkafka#consumer Invalid response size -2147483555 (0..1000000000) #100
Comments
Here is the complete ConfigMap; the left side shows the override value where one exists, otherwise the default is used:
Do note that response sizes are sent as a signed 32-bit integer; when the value becomes negative it means it overflowed (which I would say is a broker issue). There is not really any reason to have such a high receive.message.max.bytes.
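For illustration, here is a minimal Go sketch (not part of the thread) of how a fetch response larger than the int32 maximum wraps around to exactly the negative size in this issue's error; the raw on-wire value is hypothetical:

```go
package main

import "fmt"

func main() {
	// Kafka encodes the response size as a signed 32-bit integer.
	// A hypothetical on-wire size just past the int32 maximum
	// (2147483647) wraps around when reinterpreted as int32.
	raw := uint32(2147483741) // ~2 GiB response size
	fmt.Println(int32(raw))   // prints -2147483555, the value in this issue
}
```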
Thank you for the prompt response!
Hi @edenhill, it seems that whatever receive.message.max.bytes we set, there is always some overhead beyond what we configure. For one of our topics we use the key to carry the message header, since the current version doesn't support headers yet. My Java client didn't throw any exceptions when consuming from the same cluster. (Maybe my messages all have a TTL.)
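As an aside, that key-as-header workaround might look roughly like the sketch below; the helper name and metadata layout are hypothetical, not from this thread:

```go
package example

import "github.com/confluentinc/confluent-kafka-go/kafka"

// produceWithKeyMetadata is a hypothetical helper: with no header support in
// this client version, application metadata travels in the message key.
func produceWithKeyMetadata(p *kafka.Producer, topic string, meta, payload []byte) error {
	return p.Produce(&kafka.Message{
		TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
		Key:            meta,    // serialized "header" metadata
		Value:          payload, // actual message body
	}, nil)
}
```

One caveat with this approach: with Partition set to PartitionAny, the key also drives the default hash partitioner, so the metadata determines which partition a message lands on.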
Hi @edenhill -- Any updates?
I've tried to reproduce this to no avail.
This is our producer config; we use the defaults for all remaining fields if there is no override:
Do you know the typical message size for that topic?
From 1 KB up to 2-3 MB (varies by topic).
Okay, so I think I know what the issue is. The workaround for now is to set fetch.message.max.bytes to a reasonable value given the number of partitions you consume, and then make sure that receive.message.max.bytes is at least fetch.message.max.bytes * numPartitions + 5%.
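To make that sizing concrete, here is a minimal sketch assuming hypothetical values of 1 MB per-partition fetches and 48 consumed partitions; the broker address and group ID are placeholders:

```go
package main

import (
	"fmt"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	const fetchMax = 1000000 // fetch.message.max.bytes, assumed 1 MB
	const numPartitions = 48 // assumed number of partitions being consumed
	// receive.message.max.bytes must cover a fetch response carrying
	// fetch.message.max.bytes for every partition, plus ~5% protocol overhead.
	receiveMax := int(float64(fetchMax*numPartitions) * 1.05)

	consumer, err := kafka.NewConsumer(&kafka.ConfigMap{
		"bootstrap.servers":         "localhost:9092", // placeholder
		"group.id":                  "example-group",  // placeholder
		"fetch.message.max.bytes":   fetchMax,
		"receive.message.max.bytes": receiveMax,
	})
	if err != nil {
		panic(err)
	}
	defer consumer.Close()
	fmt.Printf("receive.message.max.bytes = %d\n", receiveMax)
}
```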
I see, thank you so much for the prompt reply, let me try it!
@edenhill -- we are testing the config changes per your suggestion.
Thanks!
Hi @edenhill,
Fixed on librdkafka master.
Great charts, how did you create them?
Description
I have a Kafka 0.10.2.0 server with the setting message.max.bytes=1500012 (1.5 MB), and a Go client: librdkafka-dev_0.11.0 on Ubuntu 14.04 (Debian package), Go version 1.9.0.
I got two kinds of weird errors on the Go client side:
%3|1506735207.752|FAIL|rdkafka#consumer-6| [thrd:ps6655.prn.parsec.abc.com:9092/2]: ps6655.prn.parsec.abc.com:9092/2: Receive failed: Invalid response size 1000000040 (0..1000000000): increase receive.message.max.bytes
ERROR|rdkafka#consumer-6| Receive failed: Invalid response size -2147483555 (0..1000000000):
Why is the response size over my server's allowed 1.5 MB receive threshold?
Why is the response size a negative number?
Thanks,
~Jing
How to reproduce
Run consumer_channel_example.go with multiple goroutines.
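For reference, a rough sketch of that reproduction, assuming a placeholder broker and an arbitrary goroutine count (the actual consumer_channel_example.go in this repo is more complete):

```go
package main

import (
	"fmt"
	"sync"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	c, err := kafka.NewConsumer(&kafka.ConfigMap{
		"bootstrap.servers":        "localhost:9092", // placeholder
		"group.id":                 "JetTestConsumergroup",
		"go.events.channel.enable": true,
		"go.events.channel.size":   100000,
	})
	if err != nil {
		panic(err)
	}
	defer c.Close()

	if err := c.SubscribeTopics([]string{"activity"}, nil); err != nil {
		panic(err)
	}

	// Several goroutines drain the shared events channel; runs until killed.
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for ev := range c.Events() {
				switch e := ev.(type) {
				case *kafka.Message:
					fmt.Printf("goroutine %d: message on %s\n", id, e.TopicPartition)
				case kafka.Error:
					fmt.Printf("goroutine %d: error: %v\n", id, e)
				}
			}
		}(i)
	}
	wg.Wait()
}
```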
Checklist
Please provide the following information:
- Library version (LibraryVersion()): librdkafka-dev_0.11.0
- Apache Kafka broker version: 0.10.2.0
- Client configuration (ConfigMap{...}):
"consumers": {
"consumerProfiles": [
{
"bootStrapServers": "kafka-zkb0001.lab.parsec.abc.com:9092,kafka-zkb0002.lab.parsec.abc.com:9092,kafka-zkb0003.lab.parsec.abc.com:9092",
"enableAutoCommit": false,
"autoOffsetReset": "earliest",
"topic": "activity",
"consumerGroupID": "JetTestConsumergroup",
"go.application.rebalance.enable": true,
"go.events.channel.enable": true,
"go.events.channel.size": 100000
}
]
}
- Operating system: Ubuntu 14.04
"debug": ".."
as necessary)The text was updated successfully, but these errors were encountered: