Introduce backpressure to webserver #3108
Conversation
Force-pushed from 78d31ee to ae0c91f
@danielkec We should probably create another issue to run the perf tests after this change.
Signed-off-by: Daniel Kec <daniel.kec@oracle.com>
Force-pushed from 0e2b684 to 4a62ab4
Signed-off-by: Daniel Kec <daniel.kec@oracle.com>
LGTM
I am not too sure what this change achieves, i.e. how backpressure is introduced here. The old version effectively said: "push as much data as you want, we'll push it to the channel as soon as we can". The proposed version effectively says: "push at most two items, and we'll tell you when to push more, and we do that as soon as we have pushed it to the channel, as soon as we could".
I would understand if `subscription.request(1)` were invoked after `channel.write` completed; that definitely allows more data to arrive only after the previous data has been committed to the channel buffers. Presumably, Netty won't complete the related `Future` if the write buffers are full.
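That write-completion-driven idea can be sketched with the JDK Flow API; here a `CompletableFuture` stands in for Netty's write `Future`, and all class and method names are hypothetical, not from the PR:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;

// Hypothetical sketch: the subscriber requests the next chunk only after
// the previous asynchronous write completes, so at most one chunk is in
// flight at a time. A CompletableFuture simulates Netty's write Future.
class WriteDrivenSubscriber implements Flow.Subscriber<String> {
    private Flow.Subscription subscription;
    private CompletableFuture<Void> lastWrite = CompletableFuture.completedFuture(null);
    final StringBuilder written = new StringBuilder();
    final CountDownLatch done = new CountDownLatch(1);

    // Simulated channel.write(): appends asynchronously, like an event loop would.
    private CompletableFuture<Void> write(String chunk) {
        return CompletableFuture.runAsync(() -> written.append(chunk));
    }

    @Override public void onSubscribe(Flow.Subscription s) {
        subscription = s;
        s.request(1);                                     // prime with a single item
    }

    @Override public void onNext(String chunk) {
        lastWrite = write(chunk);
        lastWrite.thenRun(() -> subscription.request(1)); // the backpressure point
    }

    @Override public void onError(Throwable t) { done.countDown(); }

    @Override public void onComplete() {
        lastWrite.thenRun(done::countDown);               // finish after the last write
    }
}
```

With this shape, new data can arrive only as fast as the channel confirms writes, which is the guarantee being asked about.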
Another thing that makes sense: count the bytes that go through here, and allow up to a given amount to be buffered. E.g. `onNext` counts how many bytes were in the buffers it has seen, and requests one more chunk if it has seen fewer than the watermark. The `channel.write` completion `Future` then subtracts the bytes it has just written; if this crosses the watermark, it requests 1 (because `onNext` has stopped requesting more).
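The watermark bookkeeping described here could look roughly like the following single class; the `WATERMARK` value and the method names are assumptions for illustration, not from the PR:

```java
// Hypothetical watermark sketch: onChunk (called from onNext) counts bytes
// seen and keeps requesting while below WATERMARK; onWritten (called from
// the channel.write completion future) subtracts written bytes and resumes
// requesting once we drop back under the watermark.
class WatermarkState {
    static final long WATERMARK = 64 * 1024;   // assumed limit, not from the PR
    private long outstanding;                  // bytes accepted but not yet written
    private boolean paused;                    // true once onNext stopped requesting

    /** Returns true if onNext should immediately request one more item. */
    synchronized boolean onChunk(long bytes) {
        outstanding += bytes;
        paused = outstanding >= WATERMARK;
        return !paused;
    }

    /** Returns true if the write-completion callback should request(1),
     *  i.e. we just crossed back under the watermark while paused. */
    synchronized boolean onWritten(long bytes) {
        boolean wasPaused = paused;
        outstanding -= bytes;
        if (wasPaused && outstanding < WATERMARK) {
            paused = false;
            return true;                       // restart the flow of requests
        }
        return false;
    }
}
```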
The watermarking approach is more predictable. Just deferring to `channel.write` completion would work poorly if, in the common case, socket write completion does not happen during `onNext`; but it will work better than watermarking if there is such a guarantee (i.e. that `onNext` calls `channel.write` through a few intermediate routines, and that Netty completes the future before `onNext` returns, if the write fully fits in the socket send buffers).
Resolved review threads (outdated) on webserver/webserver/src/main/java/io/helidon/webserver/BareResponseImpl.java
Signed-off-by: Daniel Kec <daniel.kec@oracle.com>
`try...finally` is a safer pattern for "always request in `onNext`, unless explicitly delegated to `orderedWrite`", but OK, we can live with this implementation.
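For illustration, the `try...finally` shape being suggested might look like this self-contained sketch; `orderedWrite` here is only a stub standing in for the real write path, not the PR's actual code:

```java
import java.util.concurrent.Flow;

// Hypothetical sketch: onNext always requests one more item on the way
// out, unless the request was explicitly delegated to the write path.
// The finally block guarantees a request even if the body throws early.
class RequestingSubscriber implements Flow.Subscriber<byte[]> {
    private Flow.Subscription subscription;

    @Override public void onSubscribe(Flow.Subscription s) {
        subscription = s;
        s.request(1);
    }

    @Override public void onNext(byte[] chunk) {
        boolean delegated = false;
        try {
            if (chunk.length > 0) {
                orderedWrite(chunk);           // takes over calling request(1)
                delegated = true;
            }
        } finally {
            if (!delegated) {
                subscription.request(1);       // nothing written: request here
            }
        }
    }

    private void orderedWrite(byte[] chunk) {
        // In real code this would queue the write and call
        // subscription.request(1) from the write-completion callback;
        // the stub requests immediately to stay self-contained.
        subscription.request(1);
    }

    @Override public void onError(Throwable t) { }
    @Override public void onComplete() { }
}
```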
Thank you for taking care of my comments. Let's implement better backpressure in the separate ticket that you created.
There are basically two ways to solve the problem of the upstream being faster than the Netty event loop; both introduce back-pressure.
This PR fixes #3092 by requesting items one by one.
Introducing symmetrical back-pressure is expected to bring a small performance penalty:
JMH comparison
The penalty can be compensated by buffering on the publisher's side, which could be achieved by introducing a buffering reactive operator.
Signed-off-by: Daniel Kec daniel.kec@oracle.com
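The buffering compensation mentioned in the description could be sketched as a prefetching operator that keeps a small stock of items from a fast upstream while the downstream writer still consumes them one by one. This is a single-threaded illustration under assumed names (`PrefetchBuffer`, `CAPACITY`); a real reactive operator would also have to handle concurrency and completion:

```java
import java.util.ArrayDeque;
import java.util.concurrent.Flow;

// Hypothetical sketch of a buffering operator: it prefetches up to
// CAPACITY items from upstream, so the one-at-a-time requests of the
// downstream writer do not stall a fast publisher.
class PrefetchBuffer<T> {
    static final int CAPACITY = 32;            // assumed size, not from the PR
    private final ArrayDeque<T> buffer = new ArrayDeque<>();
    private final Flow.Subscription upstream;

    PrefetchBuffer(Flow.Subscription upstream) {
        this.upstream = upstream;
        upstream.request(CAPACITY);            // fill the buffer eagerly
    }

    /** Called from the upstream onNext with a newly produced item. */
    void offer(T item) {
        buffer.add(item);
    }

    /** Called when the downstream writer is ready for one more item. */
    T poll() {
        T item = buffer.poll();
        if (item != null) {
            upstream.request(1);               // refill the slot we just freed
        }
        return item;
    }
}
```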