My parallel-consumer configuration:

- Created a `new JStreamVertxParallelEoSStreamProcessor<>(vertx, wc, options)` so I can call `vertxHttpWebClientStream` to send an HTTP POST request to the endpoint
- Vert.x as the HTTP engine
- Commit mode set to `CommitMode.PERIODIC_CONSUMER_SYNC`, with `setTimeBetweenCommits` set to 5 seconds
- Ordering by `ProcessingOrder.KEY`
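For reference, the setup above roughly corresponds to the following sketch. This is a configuration fragment, not a runnable program: `consumerProps()` is a placeholder for my Kafka consumer properties, and exact builder/option names may vary between parallel-consumer versions.

```java
import java.time.Duration;

import io.confluent.parallelconsumer.ParallelConsumerOptions;
import io.confluent.parallelconsumer.ParallelConsumerOptions.CommitMode;
import io.confluent.parallelconsumer.ParallelConsumerOptions.ProcessingOrder;
import io.confluent.parallelconsumer.vertx.JStreamVertxParallelEoSStreamProcessor;
import io.vertx.core.Vertx;
import io.vertx.ext.web.client.WebClient;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Vert.x instance and WebClient used as the HTTP engine
Vertx vertx = Vertx.vertx();
WebClient wc = WebClient.create(vertx);

// consumerProps() is a placeholder for the usual bootstrap/group/deserializer config
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps());

ParallelConsumerOptions<String, String> options = ParallelConsumerOptions.<String, String>builder()
        .ordering(ProcessingOrder.KEY)                  // key-ordered processing
        .commitMode(CommitMode.PERIODIC_CONSUMER_SYNC)  // synchronous periodic commits
        .consumer(consumer)
        .build();

var processor = new JStreamVertxParallelEoSStreamProcessor<>(vertx, wc, options);
processor.setTimeBetweenCommits(Duration.ofSeconds(5)); // commit every 5 seconds
```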
I created an integration test that pushes events in a specific sequence. While those events are being consumed and sent to a test endpoint, a rebalance is simulated by simply connecting a second Kafka consumer with the same `groupId` and, after a while, disconnecting and closing it.

When I then query my test endpoint, I observe a lot of duplicated data. I would have expected a persistence strategy in the `onPartitionsRevoked` callback that commits the current offset map, but it looks like the commit only happens at the scheduled interval (5 seconds in my case). Could you please confirm this behaviour? Is it possible to reach exactly-once semantics at the HTTP-sender level, for example by waiting until all in-flight events have been received by Vert.x and then committing the offsets inside the `onPartitionsRevoked` callback?
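To make the expected behaviour concrete, here is a sketch of the commit-on-revoke pattern using the plain `KafkaConsumer` API (not parallel-consumer internals). `completedOffsets` is a hypothetical map that the processing loop would update only after a record has been fully delivered; everything else uses the standard `ConsumerRebalanceListener` contract. It is a fragment for illustration and needs a running broker to execute.

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// Hypothetical map of offsets whose records have been fully sent over HTTP;
// the poll/processing loop (omitted) would populate it.
Map<TopicPartition, OffsetAndMetadata> completedOffsets = new HashMap<>();

KafkaConsumer<String, String> consumer = /* built with enable.auto.commit=false */ null;

consumer.subscribe(List.of("my-topic"), new ConsumerRebalanceListener() {
    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // Flush offsets of fully processed records before the partitions are
        // handed to another consumer, so the new owner does not re-deliver
        // events that were already sent to the endpoint.
        consumer.commitSync(completedOffsets);
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        // nothing needed for this illustration
    }
});
```

This is the behaviour I was hoping the library performs internally on rebalance; the question is whether it does, or whether commits are strictly tied to the periodic schedule.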
Discussed in #360
Originally posted by Alessandrovito July 20, 2022