Allow to configure the max size for unbounded queue #2093
Conversation
IMO adding a bound to the unbounded queue seems wrong. What might be more reasonable is adding a flag to the bounded queue to make it return …
Alternatively there could be a …
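(For reference, the existing bounded channel already has a non-parking path that reports fullness as an error. This is a minimal sketch against today's `futures::channel::mpsc` API, independent of whatever flag is being suggested above:)

```rust
use futures::channel::mpsc;

fn main() {
    // Bounded channel: `buffer` slots plus one guaranteed slot per sender.
    let (mut tx, _rx) = mpsc::channel::<u32>(2);

    // `try_send` never parks the caller: once the buffer is full it returns
    // an error instead of applying back-pressure.
    for i in 0..8 {
        match tx.try_send(i) {
            Ok(()) => println!("queued {}", i),
            Err(e) if e.is_full() => {
                println!("channel full, rejected {}", i);
                break;
            }
            Err(_) => {
                println!("receiver dropped");
                break;
            }
        }
    }
}
```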
I see what you mean. This PR comes mostly from the way the two are being used in practice. Unbounded allows for a fire-and-forget pattern and can therefore be used from outside future-driven code, while the bounded channel requires you to deal with back-pressure and effectively forces you to also run a futures poll. Arguably the latter is the better pattern. But with the code not being ready to do that, the choice is between something that can potentially eat up all your memory and something that stalls a thread because the chosen number is too small – and neither case is easy to detect. The main difference between the …

Edit: Looking at this from a slightly higher level, I could see other usage patterns as well – like dropping the first or the new item when full, offering ring- or dropping-buffers as overflow patterns alongside the currently default back-pressure pattern, with the behaviour of `send` depending on that strategy. In this scenario the current …
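A rough, standalone sketch of what such overflow strategies could look like (plain `std` types and hypothetical names – this is not the futures API, just an illustration of the patterns described above):

```rust
use std::collections::VecDeque;

/// Hypothetical overflow strategies for a capped queue (illustration only).
enum OverflowStrategy {
    /// Reject the new item and hand it back to the caller (error / back-pressure).
    RejectNew,
    /// Drop the oldest queued item to make room (ring-buffer behaviour).
    DropOldest,
    /// Silently drop the new item (dropping-buffer behaviour).
    DropNew,
}

struct CappedQueue<T> {
    items: VecDeque<T>,
    cap: usize,
    strategy: OverflowStrategy,
}

impl<T> CappedQueue<T> {
    fn push(&mut self, item: T) -> Result<(), T> {
        if self.items.len() < self.cap {
            self.items.push_back(item);
            return Ok(());
        }
        match self.strategy {
            // Caller decides what to do with the rejected item.
            OverflowStrategy::RejectNew => Err(item),
            // Ring-buffer: evict the head, keep the tail fresh.
            OverflowStrategy::DropOldest => {
                self.items.pop_front();
                self.items.push_back(item);
                Ok(())
            }
            // Dropping-buffer: keep the old items, discard the new one.
            OverflowStrategy::DropNew => Ok(()),
        }
    }
}
```

In a channel, the `RejectNew` case would map to a `Full` error on `send`, while the other two strategies would succeed from the sender's point of view and only differ in which item gets lost.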
This now takes a newer, different approach, as I had outlined before. For full-on backwards compatibility it still exposes the … This removes a lot of (almost duplicate) code and combines the entire system into one common type. Unfortunately the exact strategy checking is a bit divided up across a few places – multiple attempts at combining it in one place failed, because many tests expect task parking and wake-up to happen in that very specific order (running from the same tasks), or they stall forever. While I could have changed the tests, I expect that this might be something other people rely upon, and rather than break anything downstream I went for slightly more complicated code instead. If this is an approach you consider worthwhile, I am happy to: …
What do you think?
We are using the unbounded queue for its nice non-blocking behavior, but the fact that it is a potentially endless sink for data, and thus for memory leaks if the other end isn't polled properly, makes the code harder to reason about.

This adds a `buffer` parameter, analogous to the bounded channel, to allow the user of the API to ensure the queue never grows beyond a predefined number of items. Right now this simply errors with `Full` if a `send` is attempted after the maximum number is reached. I am thinking about allowing other strategies to be defined, like dropping the new entry or dropping the first entry to create ring-queues – depending on the usage scenario, these might be good coping strategies.
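A call-site sketch of what this could mean in practice. The first function uses today's unbounded API; the commented-out part shows the proposed behaviour from the description above, with a made-up constructor name for illustration – the PR itself only specifies a `buffer` parameter and a `Full` error:

```rust
use futures::channel::mpsc;

// Today: `unbounded_send` only fails when the receiver is gone, so a stalled
// consumer silently turns the queue into an unbounded memory sink.
fn produce(tx: &mpsc::UnboundedSender<String>) {
    let _ = tx.unbounded_send("event".to_owned());
}

// With this PR the call site could instead look roughly like this
// (hypothetical constructor name, per the description above):
//
//     let (tx, rx) = mpsc::unbounded_with_buffer::<String>(1024);
//     match tx.unbounded_send(item) {
//         Ok(()) => {}
//         Err(e) if e.is_full() => { /* cap reached: drop, log, or back off */ }
//         Err(_) => { /* receiver dropped */ }
//     }
```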