Consumption underflow error #2090
Comments
Thanks for a great report! These two lines are very interesting:
Underflows (e.g., short responses) are expected due to the way the broker uses sendfile. Both of these lines occur while parsing the Message Key. Could you try enabling CRC checks? Do you know what client implementation (e.g., Kafka's official Java client) and version was used to produce these messages? What is the broker …
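(Not from the original thread: a minimal sketch of enabling librdkafka's CRC verification from kafkacat; the broker, topic, partition, and offset below are placeholders, not values from this report.)

```shell
# Hypothetical consume run with CRC verification turned on via librdkafka's
# check.crcs property; broker/topic/partition/offset are placeholders.
kafkacat -C -b mybroker:9092 -t mytopic -p 3 -o 234611 \
  -X check.crcs=true
```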
Adding …
I can paste the entire log if needed; I'm just being lazy about scrubbing all the internal data from it. The messages were produced using KStreams v0.11.0.3, which I believe uses the same version of the Kafka producer under the covers. We are currently using …
I suspect this is an issue with KStreams producing transaction control messages to a 0.10.1-format topic.
Ok. Do you think I should report it over there instead? For posterity, I did manage to work around this in my code by setting …
What is your KStreams config? Specifically, what is …
We do not set …
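(Side note, not from the thread: the truncated question above presumably concerns the Kafka Streams processing guarantee. Transaction control records are only written when exactly-once processing is enabled, which is not the default; a sketch of that setting, assuming this is the config in question:)

```properties
# Kafka Streams setting that turns on transactional produces (assumption: this
# is the config the truncated question refers to). The default is at_least_once.
processing.guarantee=exactly_once
```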
Okay, then it is probably not an issue with transactional messages. Are these messages, which from the client's perspective are badly formatted, a one-time thing, or do they keep occurring in these partitions?
Unfortunately I can't give you a packet capture - our corporate security people would have my head. Is there anything less detailed that would help?
What I need is a binary dump of the FetchResponse to understand how it is encoded.
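(Illustrative only: one common way to obtain such a dump is a full packet capture on the consumer host, for example with tcpdump as sketched below; port 9092 is an assumption about the broker listener, not taken from this report.)

```shell
# Capture full packets (snaplen 0) to a pcap file so the raw FetchResponse
# bytes can be inspected later; 9092 is assumed to be the broker listener port.
tcpdump -i any -s 0 -w kafka-fetch.pcap port 9092
```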
We have a support contract with Confluent - they saw this and reached out on a separate channel. Since we have a contract with them, I was able to give them a full tcpdump. I believe they were going to coordinate with you on this, but if not, please ping me or this thread again and I'll try to help.
Perfect, we'll continue the discussion there instead.
Read the FAQ first: https://github.com/edenhill/librdkafka/wiki/FAQ
Description
I have run into two partitions on a particular server that only consume up to a certain point and then stop. Other partitions on the same topic consume fine. The problem manifests in kafkacat v1.3.1 (using librdkafka v0.11.4) and also in the Confluent .NET consumer v0.11.6. It does not happen in Java consumers, which is why I am reporting it here.

How to reproduce
The problem is reproducible on two particular partitions of a topic in one environment, but nowhere else that I've found. I'm not sure how those partitions got into that state.
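(For reference, a sketch of the kind of kafkacat invocation used for the runs described here; the broker, topic, and partition names are placeholders, and the starting offset is the one mentioned further down.)

```shell
# Hypothetical reproduction command: consume the affected partition starting at
# the offset given in the description, with librdkafka debug output enabled.
kafkacat -C -b mybroker:9092 -t mytopic -p 12 -o 234611 -X debug=fetch,msg,protocol
```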
The attached logs were produced via kafkacat because I could get full debug logging from it more easily. Full logs are in kcat.log.zip, but here is an interesting excerpt:
kcat.offsets.log shows the offsets of consumed messages starting from 234611, as specified in the initial command, and ending at 243804, which is presumably where we hit the underflow.

Checklist
IMPORTANT: We will close issues where the checklist has not been completed.
Please provide the following information:
- librdkafka version (release number or git tag): 0.11.4 and 0.11.6
- Apache Kafka version: 4.1.2 => Kafka 1.1.1
- librdkafka client configuration: <REPLACE with e.g., message.timeout.ms=123, auto.reset.offset=earliest, ..>
- Provide logs (with debug=.. as necessary) from librdkafka (via kafkacat)