[EventHubs] Buffered Producer back-pressure handling strategy #23909
Labels: Client, Event Hubs, Messaging, Messaging crew
Issue: Back-pressure
How should the EventHubProducerClient in buffered mode behave when send_event/send_batch is called and the buffer is full or does not have enough room for the incoming events? (Originally from comment in PR)
Anna's comment:
Sample case: max_buffer_length is 100, there are 50 events in the buffer, and 60 more are coming.
Scope
Current implementation/behavior:
When EventHubProducerClient works in buffered mode and send_event/send_batch is called with a timeout, the timeout controls the enqueue operation, not the send operation.
If there are fewer free slots in the buffer than incoming events, we flush first with the given timeout, then check whether the timeout has elapsed; if it has, we raise an error, otherwise we put the events into the buffer.
pseudocode:
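A minimal sketch of the flush-then-enqueue flow described above. The class and helper names (`BufferedProducerSketch`, `_flush`, `enqueue_events`, `OperationTimeoutError`) are illustrative assumptions, not the actual SDK internals:

```python
import queue
import time


class OperationTimeoutError(Exception):
    """Raised when events cannot be enqueued within the given timeout."""


class BufferedProducerSketch:
    """Toy model of the buffered enqueue path; not the real SDK internals."""

    def __init__(self, max_buffer_length=100):
        self._buffer = queue.Queue(maxsize=max_buffer_length)

    def _flush(self, timeout=None):
        # Stand-in for sending everything currently buffered to Event Hubs.
        deadline = None if timeout is None else time.monotonic() + timeout
        while not self._buffer.empty():
            if deadline is not None and time.monotonic() >= deadline:
                return  # out of time; the caller re-checks the timeout
            self._buffer.get_nowait()

    def enqueue_events(self, events, timeout=None):
        start = time.monotonic()
        free_slots = self._buffer.maxsize - self._buffer.qsize()
        if free_slots < len(events):
            # Not enough room: flush first with the given timeout...
            self._flush(timeout=timeout)
            # ...then check the timeout; raise if it has already elapsed.
            if timeout is not None and time.monotonic() - start >= timeout:
                raise OperationTimeoutError(
                    "Could not make room in the buffer within the timeout."
                )
        for event in events:
            self._buffer.put(event)
```

With Anna's sample case (buffer of 100 holding 50 events, 60 incoming), the 50 free slots are insufficient, so the sketch flushes first and then enqueues all 60.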
Other options
Besides having EventHubProducerClient.send_event/send_batch flush the buffer queue itself via a proactive call, we could use a flag/condition to decouple send_event/send_batch from the flush operation:
1. Add another flag and reuse the check_max_wait_time_future thread for flushing. This has almost the same effect as the current implementation, but decouples flush from enqueue_events. However, the timing is not perfectly controlled, since check_max_wait_time_future sleeps periodically, unless we also shorten its sleep period.
2. Run another background task that monitors the load of the buffer and flushes once it is, e.g., 70% full.
Option 2 still doesn't handle corner cases well: if the buffer is 65% full but the incoming events would take more than 35% of the capacity, we still need a flush.
Summary
The proactive flush is a good starting point: it is easy to explain and the implementation is straightforward.
However, as customer requirements evolve, we may want to revisit the other options.