-
Hello team, I am mainly interested in the key-level parallelism of the Parallel Consumer. However, our application is data critical, so I would like to ask a simple question about the scenario below and how it is handled to ensure no event loss in the Parallel Consumer: event1 and event2 are produced on partition1, in that order
-
Sorry for the delay in answering your question! It's a great question and really gets at the heart of what this library provides. However, that's not how offset committing works :) Have you had a chance to look at the readme?
Hopefully that clears things up for you? Let me know if you have any further questions, and let me know how you get on... You can actually try this out: send some messages, Thread.sleep on message 30, complete message 40, ask the PC to commit, then kill the process. Start up again, and observe that message 40 will be skipped, while message 30 will be reprocessed.
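The experiment above can be sketched as a tiny simulation. This is not the library's actual internals or wire format; the rules `offsetToCommit` and `metadataPayload` are just my paraphrase of the behaviour described in this thread (commit sits just below the earliest incomplete offset, and the metadata payload flags the out-of-order completions past it):

```java
import java.util.*;

// Minimal simulation (assumed logic, not the library's real implementation):
// offset 29 is committed, message 30 is still in flight (Thread.sleep'd),
// message 40 has completed. What does a commit look like?
public class CommitExperiment {

    /** Offset to commit: just below the earliest incomplete offset, or
     *  one past the highest completed offset when nothing is in flight. */
    static long offsetToCommit(SortedSet<Long> incomplete, SortedSet<Long> completed) {
        if (!incomplete.isEmpty()) return incomplete.first() - 1;
        return completed.last() + 1;
    }

    /** Metadata payload: a completion flag for every offset past the commit point. */
    static Map<Long, Boolean> metadataPayload(SortedSet<Long> incomplete, SortedSet<Long> completed) {
        Map<Long, Boolean> meta = new TreeMap<>();
        incomplete.forEach(o -> meta.put(o, false));
        completed.forEach(o -> meta.put(o, true));
        return meta;
    }

    public static void main(String[] args) {
        SortedSet<Long> incomplete = new TreeSet<>(Set.of(30L)); // stuck in Thread.sleep
        SortedSet<Long> completed = new TreeSet<>(Set.of(40L));  // finished out of order

        System.out.println(offsetToCommit(incomplete, completed));   // 29
        System.out.println(metadataPayload(incomplete, completed));  // {30=false, 40=true}

        // After the restart, message 30 is reprocessed and finally completes:
        completed.add(incomplete.first());
        incomplete.clear();
        System.out.println(offsetToCommit(incomplete, completed));   // 41
    }
}
```

So killing the process after that commit loses nothing: the committed offset is still 29, and the payload records exactly which later offsets may be skipped on restart.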
-
Thanks Antony for the detailed explanation.
Is there an integration of the Parallel Consumer API with spring-kafka? Or is
there any example of how it can be used if existing code uses the spring-kafka
API and we want to use the Parallel Consumer in that code?
…On Mon, Jul 26, 2021 at 3:21 PM Antony Stubbs ***@***.***> wrote:
Sorry for the delay in answering your question! It's a great question and
really gets at the heart of what this library provides.
However, that's not how offset committing works :) Have you had a chance
to look at the readme?
1. Let's say event key1 has offset 29, which was previously
completed and committed.
2. Offset 40 will not be committed, as there are offsets previous to
it, which have not completed (offset 30). What will happen is, offset 29
will be committed again, and in its commit message there is a metadata
payload - that payload will carry the information that offset 30 is not
complete, while offset 40 is complete.
3. instance goes down, power cut
4. partition is reassigned to another consumer
5. consumer with new assignment, downloads the offset data from broker
6. consumer unpacks the offset metadata payload, and decodes the
information: offset is 29, payload says: 30 not complete, 40 complete
7. consumer polls partition, downloads offset 30 and 40
8. consumer knows 40 is previously completed, so skips processing
message with offset 40
9. consumer processes message 30 concurrently as it has a different key
10. message 30 completes
11. consumer begins an offset committing process in background
12. consumer sees that 30 and 40 are now complete, commits offset 41
with empty metadata
Hopefully that clears things up for you? Let me know if you have any
further questions! Let me know how you get on...
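Steps 5–8 above can be sketched in a few lines. Again, this is a hypothetical illustration rather than the library's real decoding code: the new consumer reads the committed offset plus the decoded metadata payload, then filters the records it polls, skipping previously completed offsets:

```java
import java.util.*;

// Hypothetical sketch of steps 5-8: after reassignment, the new consumer
// filters polled records using the decoded metadata (offset -> completed).
public class RestartDecode {

    /** Offsets that still need processing: anything not flagged complete. */
    static List<Long> offsetsToProcess(List<Long> polled, Map<Long, Boolean> metadata) {
        List<Long> toProcess = new ArrayList<>();
        for (long offset : polled) {
            if (!metadata.getOrDefault(offset, false)) {
                toProcess.add(offset); // never seen, or seen but incomplete
            }
        }
        return toProcess;
    }

    public static void main(String[] args) {
        Map<Long, Boolean> metadata = Map.of(30L, false, 40L, true); // from commit payload
        List<Long> polled = List.of(30L, 40L);                       // fetched after offset 29
        System.out.println(offsetsToProcess(polled, metadata));      // [30]
    }
}
```

Only offset 30 is handed to the user function; offset 40 is acknowledged as already done, so no event is processed twice at the application level and none is lost.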