[QUIC] Consider implementing send buffering on .NET side #73691
Tagging subscribers to this area: @dotnet/ncl
While discussing this improvement and the issues discovered in #72696, one thought was to tie send buffer memory release to …
Closing based on the almost 50% perf regression on the basic send buffering implementation from this PR: #88320. IMHO, it'll be really hard to beat the already optimized msquic buffering, especially with all … We can always re-open if we decide to give this another spin.
We currently utilize send buffering on the msquic side: MsQuic copies the app buffer and returns as soon as the copy is done.
We had to (temporarily) introduce an additional copy in #72746 to overcome a too-early user-buffer-release problem on abort. If we cannot solve this problem in a different way, it doesn't make sense to keep the 2x copying, so we should consider switching to app-level buffering.
Using our own send buffering would also allow us to optimize the buffering for our use cases better than the general-case msquic buffering can. We could also benefit from the copying happening in parallel on managed threads, as opposed to sequentially on the msquic thread.
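To make the proposal concrete, here is a minimal, language-agnostic sketch of the app-level buffering described above: user data is copied into a pooled buffer so the caller's buffer can be released immediately, and the copy is recycled only once the peer ACKs it. All names here (`SendBufferPool`, `queue_send`, `on_ack`) are hypothetical and not part of the .NET or MsQuic API.

```python
from collections import deque

class SendBufferPool:
    """Hypothetical app-level send buffer pool (illustration only)."""

    def __init__(self, buffer_size: int = 16 * 1024, max_buffers: int = 8):
        self.buffer_size = buffer_size
        # Pre-allocated buffers we copy user data into.
        self.free = deque(bytearray(buffer_size) for _ in range(max_buffers))
        self.in_flight = {}  # send_id -> buffer held until ACKed
        self.next_id = 0

    def queue_send(self, user_data: bytes) -> int:
        """Copy user data into a pooled buffer; the caller may reuse its own
        buffer as soon as this returns (mirrors msquic-side buffering)."""
        if not self.free:
            raise BufferError("no free send buffers; caller must wait")
        if len(user_data) > self.buffer_size:
            raise ValueError("data larger than pooled buffer")
        buf = self.free.popleft()
        buf[: len(user_data)] = user_data
        send_id = self.next_id
        self.next_id += 1
        self.in_flight[send_id] = buf
        return send_id

    def on_ack(self, send_id: int) -> None:
        """The peer ACKed the data: the copy is no longer needed, recycle it."""
        self.free.append(self.in_flight.pop(send_id))
```

A real implementation would additionally back-pressure the caller asynchronously when the pool is exhausted rather than throwing, but the ownership transfer (copy on send, release on ACK) is the point being illustrated.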
Note: with app-level buffering, msquic holds onto the app buffer and returns it only after the ACK is received.
Note 2, from the msquic docs: "To fill the pipe in this mode, the app is responsible for keeping enough sends pending at all times to ensure the connection doesn't go idle. MsQuic indicates the amount of data the app should keep pending in the QUIC_STREAM_EVENT_IDEAL_SEND_BUFFER_SIZE event. The app should always have at least two sends pending at a time. If only a single send is used, the connection can go idle for the time between that send is completed and the new send is queued."
Some related discussions:
microsoft/msquic#1602 (comment)
#44782 (comment)