Improve HTTP/2 performance by using Channels #30235
Thanks for contacting us.
Copied from #30829
Note: the HTTP/2 flow-control logic relies on being synchronized by the _writeLock (aspnetcore/src/Servers/Kestrel/Core/src/Internal/Http2/Http2FrameWriter.cs, lines 701 to 715 in c925f99). We can instead move this logic into … I don't think this causes a real problem for this approach; it's just a slight complication. I really like using Channels for this.
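A minimal sketch of how the flow-control accounting could live inside the single consumer of a write channel instead of under _writeLock; the type and member names below are hypothetical, not Kestrel's actual internals:

```csharp
using System.Threading.Channels;
using System.Threading.Tasks;

// Hypothetical frame description; the real Http2FrameWriter types are more involved.
record PendingWrite(byte[] Payload, bool IsData);

class FlowControlledWriteLoop
{
    private readonly ChannelReader<PendingWrite> _reader;
    private long _connectionWindow;

    public FlowControlledWriteLoop(ChannelReader<PendingWrite> reader, long initialWindow)
    {
        _reader = reader;
        _connectionWindow = initialWindow;
    }

    public async Task RunAsync()
    {
        // Only this loop mutates the window, so no _writeLock is needed;
        // the producing streams never touch the counter directly.
        await foreach (var write in _reader.ReadAllAsync())
        {
            if (write.IsData)
            {
                _connectionWindow -= write.Payload.Length;
            }

            // ... write the frame to the connection pipe here ...
        }
    }
}
```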
We've moved this issue to the Backlog milestone. This means that it is not going to be worked on for the coming release. We will reassess the backlog following the current release and consider this item at that time. To learn more about our issue management process and to set better expectations regarding different types of issues, you can read our Triage Process.
Evidence of this issue from gRPC benchmarks. [Benchmark chart: h2 RPS is the yellow dot at the bottom.]
Closing this as fixed.
Today Kestrel locks and copies buffers into the connection pipe when multiplexing output from multiple streams. We could improve this by using a channel to queue writes to the pipe instead of locking (see aspnetcore/src/Servers/Kestrel/Core/src/Internal/Http2/Http2FrameWriter.cs, line 265 in 9a87636).
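Roughly, the idea is that each stream enqueues its frame into a single-reader channel and one loop drains the channel into the connection pipe. A minimal sketch with hypothetical types (not the actual Http2FrameWriter):

```csharp
using System;
using System.Buffers;
using System.IO.Pipelines;
using System.Threading.Channels;
using System.Threading.Tasks;

public sealed class ChannelFrameWriter
{
    private readonly Channel<ReadOnlyMemory<byte>> _writes =
        Channel.CreateUnbounded<ReadOnlyMemory<byte>>(
            new UnboundedChannelOptions { SingleReader = true });
    private readonly PipeWriter _output;

    public ChannelFrameWriter(PipeWriter output) => _output = output;

    // Called concurrently by many streams; no lock is taken here,
    // the frame is just queued on the channel.
    public bool TryScheduleWrite(ReadOnlyMemory<byte> frame) =>
        _writes.Writer.TryWrite(frame);

    // Single consumer: the only code that touches the connection pipe.
    public async Task WriteLoopAsync()
    {
        await foreach (var frame in _writes.Reader.ReadAllAsync())
        {
            _output.Write(frame.Span);
            await _output.FlushAsync();
        }
        await _output.CompleteAsync();
    }

    public void Complete() => _writes.Writer.Complete();
}
```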
This sample shows the difference between a SemaphoreSlim and a Channel<byte[]>, and the performance difference is almost 3x with 0 allocations: https://gist.github.com/davidfowl/8af7b6fa21df0fe1f150fb5cfeafa8f7. There was still lock contention in Channels itself, but it was pretty optimized.
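A rough sketch of the kind of comparison the gist makes (the gist is the source of the ~3x and zero-allocation numbers; the code below is only an illustrative micro-benchmark, not the gist itself): many producers hand payloads to a single consumer, either by serializing on a SemaphoreSlim or by writing into a Channel<byte[]> with a single reader.

```csharp
using System;
using System.Diagnostics;
using System.Linq;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;

class Program
{
    const int Producers = 8;
    const int WritesPerProducer = 100_000;

    static async Task Main()
    {
        var payload = new byte[64];

        // Option 1: every producer acquires a SemaphoreSlim before "writing".
        var gate = new SemaphoreSlim(1, 1);
        var sw = Stopwatch.StartNew();
        await Task.WhenAll(Enumerable.Range(0, Producers).Select(_ => Task.Run(async () =>
        {
            for (int i = 0; i < WritesPerProducer; i++)
            {
                await gate.WaitAsync();
                try { /* pretend to copy the payload into the pipe */ }
                finally { gate.Release(); }
            }
        })));
        Console.WriteLine($"SemaphoreSlim:   {sw.ElapsedMilliseconds} ms");

        // Option 2: producers just TryWrite; one reader drains the queue.
        var channel = Channel.CreateUnbounded<byte[]>(
            new UnboundedChannelOptions { SingleReader = true });
        var reader = Task.Run(async () =>
        {
            await foreach (var _ in channel.Reader.ReadAllAsync()) { /* drain */ }
        });

        sw.Restart();
        await Task.WhenAll(Enumerable.Range(0, Producers).Select(_ => Task.Run(() =>
        {
            for (int i = 0; i < WritesPerProducer; i++)
            {
                channel.Writer.TryWrite(payload);
            }
        })));
        channel.Writer.Complete();
        await reader;
        Console.WriteLine($"Channel<byte[]>: {sw.ElapsedMilliseconds} ms");
    }
}
```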
We should prototype and measure this change.
cc @JamesNK @halter73 @Tratcher