Using fission with keda, I have a MessageQueueTrigger defined.

Scenario: publish 200 messages on the input topic and check keda scaling.
Actual results: when the lag exceeded 100, a new consumer pod was started, but some of the messages were processed by both consumers (screenshot attached).
Expected: the consumer should handle Kafka rebalancing so that each message is processed only once.
Found this issue reported against the sarama library, IBM/sarama#1516; one of the comments mentions:
> The above is a symptom of not looping on Consume(). Consume() will exit without error when a rebalancing occurs and it is up to the user to call it again when this occurs.
>
> Under the hood it seems like when a re-balance occurs all sessions are torn down completely (briefly no members exist and therefore no partitions are handled by anyone) and when you re-call Consume() a new session is brought up which should get its share of the partitions.
I'm not familiar with Go. Please check the code and assess whether there is room for improvement in the consumer handling.
Additionally, maybe update the sarama library to the latest version.
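For reference, the looping pattern that comment describes looks roughly like the sketch below. It is a minimal example against sarama's ConsumerGroup API, not fission's actual consumer code; the broker address, group ID, topic name, and handler body are placeholders.

```go
package main

import (
	"context"
	"log"

	"github.com/IBM/sarama"
)

// handler implements sarama.ConsumerGroupHandler. The processing logic is a
// placeholder, not fission's real consumer.
type handler struct{}

func (handler) Setup(_ sarama.ConsumerGroupSession) error   { return nil }
func (handler) Cleanup(_ sarama.ConsumerGroupSession) error { return nil }

func (handler) ConsumeClaim(sess sarama.ConsumerGroupSession, claim sarama.ConsumerGroupClaim) error {
	for msg := range claim.Messages() {
		// Process the message, then mark it so its offset is committed and it
		// is not handed to another group member after a rebalance.
		log.Printf("got message on %s[%d]@%d", msg.Topic, msg.Partition, msg.Offset)
		sess.MarkMessage(msg, "")
	}
	return nil
}

func main() {
	cfg := sarama.NewConfig()
	cfg.Version = sarama.V2_0_0_0

	group, err := sarama.NewConsumerGroup([]string{"localhost:9092"}, "example-group", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer group.Close()

	ctx := context.Background()
	for {
		// Consume returns without error whenever a rebalance occurs; it must be
		// called again so this member rejoins the group and receives its new
		// share of the partitions.
		if err := group.Consume(ctx, []string{"example-topic"}, handler{}); err != nil {
			log.Printf("consume error: %v", err)
		}
		if ctx.Err() != nil {
			return
		}
	}
}
```

Messages that were processed but whose offsets had not yet been committed before a rebalance are redelivered to whichever member takes over the partition, which could explain the duplicates observed during scale-up.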