sarama committed offsets don't match java consumer offsets #705
Comments
What are you using to commit offsets? Sarama does not commit offsets by default.
Using PartitionOffsetManager, committing manually.
Huh, dug into this a bit more and found https://github.com/apache/kafka/blob/fa32545442ef6724aa9fb5f4e0e269b0c873288f/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java#L287-L288. So in Java-land, you are expected to commit last-consumed+1. I'm not sure what the correct solution is. We probably shouldn't be adding one in sarama.
@eapache I think we should follow what the Java consumer does. That would potentially allow mixing and interchanging consumers, and Kafka's built-in reporting tools would look correct as well.
Upstream requires you mark last-consumed+1 and then returns that value directly. We were requiring you mark last-consumed and then adding one to the returned value. Match upstream's behaviour so that our offset tracking is interoperable. Fixes #705.
Nevermind, this is already discussed in #713.
When we consume everything from a Kafka topic, Kafka's tool (bin/kafka-consumer-groups.sh) still shows a lag of 1 message. If we run a Java consumer with the same group, it consumes the 1 remaining message (which was already consumed by the sarama consumer).
Does sarama use its own offset scheme (one less than Java's)?
Versions
Sarama Version: master
Kafka Version: 0.10
Go Version: 1.6.2