Experiencing lower throughput the higher the latency #1320
Comments
Thank you for the issue! I am able to reproduce this locally.
Thank you very much for looking into it :)
Sorry for the delayed response; it's been a busy couple of weeks. I actually think this behavior is to be expected. As RTT increases, the required amount of buffering on the sender also increases. We have mechanisms in place to prevent the sender from buffering too much and running out of memory, and they appear to be working as intended. In the presence of buffering limits, the relationship between throughput and RTT ends up fitting an inverse (1/RTT) curve: every time the RTT doubles, throughput is halved. It's also good to keep in mind that all of the delay queues on the network's path need to grow in order to sustain throughput at higher RTT values.
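To make that relationship concrete, here is a minimal sketch (my own illustration, not from this thread; the 4 MB buffer is an arbitrary assumption, not s2n-quic's default) of the window-limited throughput bound, throughput ≤ buffer / RTT:

```rust
/// Achievable throughput (bits/s) when the sender can keep at most
/// `buffer_bytes` of data in flight and the path round-trip time is `rtt_s`.
/// This is the classic window-limited bound: throughput <= window / RTT.
fn buffer_limited_throughput_bps(buffer_bytes: f64, rtt_s: f64) -> f64 {
    buffer_bytes * 8.0 / rtt_s
}

fn main() {
    // Hypothetical 4 MB total send buffer (NOT the actual s2n-quic default).
    let buffer = 4.0 * 1024.0 * 1024.0;
    for rtt_ms in [20.0, 50.0, 100.0, 200.0, 250.0] {
        let bps = buffer_limited_throughput_bps(buffer, rtt_ms / 1000.0);
        println!("RTT {:>5.0} ms -> at most {:>7.1} Mbit/s", rtt_ms, bps / 1e6);
    }
}
```

With the buffer held fixed, each doubling of RTT halves the bound, which is the 1/RTT shape described above.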
In #1345, I added a perf client implementation to s2n-quic-qns with the option of specifying these buffer limits (see `quic/s2n-quic-qns/src/perf.rs`, lines 101 to 109 at d6d1446).
These values can be increased, which should remove the buffering limits as the bottleneck in your scenario.
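As a rough sizing aid (my own sketch, not part of this thread), the limits need to cover at least the path's bandwidth-delay product; for the 200Mbit/s / 250ms scenario from this issue (treating 250ms as the full RTT) that works out to roughly 6.25 MB:

```rust
/// Minimum number of bytes that must be buffered/in flight to sustain
/// `target_bps` (bits per second) over a path with round-trip time `rtt_s`:
/// the bandwidth-delay product.
fn bandwidth_delay_product_bytes(target_bps: f64, rtt_s: f64) -> f64 {
    target_bps / 8.0 * rtt_s
}

fn main() {
    // 200 Mbit/s link with ~250 ms RTT, as in the testbed described below.
    let bdp = bandwidth_delay_product_bytes(200e6, 0.250);
    println!("required window/buffer: ~{:.2} MB", bdp / 1e6);
    // Prints ~6.25 MB; any send buffer or flow-control window smaller than
    // this (on either endpoint) will cap throughput below the link rate.
}
```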
Thanks for the detailed reply, it's been pretty busy here as well. I'll probably run the experiments again next week with your new client and report back :)
Sorry for the delay, I was busy with other areas of my thesis before getting back to experiments. I have now re-run the experiments with the new `s2n-quic-qns` perf client. At an RTT of 250ms, throughput has improved over the previous experiment results, but it still doesn't reach the level of msquic (~110Mbit/s vs ~175Mbit/s), even with the limits set far higher than should be necessary. The queue limits are only reached in the msquic case, so I assume they don't limit the performance of s2n-quic. The following data is from a run with the aforementioned oversized limit values.

(Collapsed attachments not reproduced here: graphs for s2n-quic-qns, graphs for secnetperf (msquic), and the sender/receiver commands and outputs for both s2n-quic-qns and secnetperf.)
It looks like the limits weren't being applied to the remote value in the qns configuration. It should be a lot better after #1447 is merged. |
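For anyone following along, here is a simplified way to see why a limit that is only raised on one side still caps throughput: a QUIC sender can keep in flight no more than the smallest of several independent limits, including the flow-control credit advertised by the remote peer. This is a generic model of QUIC flow control (RFC 9000), not s2n-quic's actual internals, and all numbers are hypothetical:

```rust
/// The data a QUIC sender can keep in flight is bounded by the smallest of
/// several independent limits; raising only some of them leaves the others
/// as the bottleneck. (Simplified model, not s2n-quic's implementation.)
fn effective_in_flight_limit(
    congestion_window: u64,      // from the congestion controller (e.g. CUBIC)
    local_send_buffer: u64,      // sender-side buffering limit
    peer_connection_window: u64, // peer's advertised MAX_DATA credit
    peer_stream_window: u64,     // peer's advertised MAX_STREAM_DATA credit
) -> u64 {
    congestion_window
        .min(local_send_buffer)
        .min(peer_connection_window)
        .min(peer_stream_window)
}

fn main() {
    // Hypothetical numbers: everything raised except the remote window.
    let limit = effective_in_flight_limit(8_000_000, 8_000_000, 8_000_000, 1_000_000);
    assert_eq!(limit, 1_000_000); // the un-raised remote window caps throughput
}
```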
#1447 has been merged, let us know how things look! |
Problem:
As part of my thesis, I'm using a self-written `perf` client (available here), based on `s2n-quic`, with `s2n-quic-qns` as server, in a testbed setup with a configurable bandwidth limit and link delay at a router. I experience a reduction in throughput the higher the configured delay, whereas `MsQuic`'s `secnetperf` tool doesn't suffer this reduction.

While limited to `200Mbit/s`, at `20ms` delay I'm able to achieve `~190Mbit/s` throughput; at `250ms` delay the throughput falls below `20Mbit/s` on average.

I'm also logging `RecoveryMetrics`, and the CWND behavior looks odd to me: it ramps up to an upper limit and then doesn't seem to exhibit the CUBIC behavior I would have expected.
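For reference, the CUBIC growth I would have expected follows the standard window function from RFC 8312, W(t) = C·(t−K)³ + W_max. The sketch below is my own illustration with made-up parameters, not s2n-quic's congestion controller:

```rust
/// CUBIC window growth after a congestion event (RFC 8312):
/// W_cubic(t) = C * (t - K)^3 + W_max, with K = cbrt(W_max * (1 - beta) / C).
/// `t` is seconds since the last window reduction; windows are in packets.
fn cubic_window(t: f64, w_max: f64) -> f64 {
    const C: f64 = 0.4; // RFC 8312 scaling constant
    const BETA: f64 = 0.7; // multiplicative decrease factor
    let k = (w_max * (1.0 - BETA) / C).cbrt();
    C * (t - k).powi(3) + w_max
}

fn main() {
    // Expected shape: concave growth back toward w_max, a plateau around it,
    // then convex probing beyond it -- rather than ramping to a cap and staying flat.
    let w_max = 100.0; // hypothetical window (in packets) before the loss event
    for t in 0..10 {
        println!("t = {}s -> cwnd ~ {:.1} packets", t, cubic_window(t as f64, w_max));
    }
}
```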
Testbed
Note that the bandwidth limit is imposed on the path to the receiver, while the delay is applied in the direction from receiver to sender.
Visualization 20ms
Please disregard the empty graphs in some of the figures; they are only populated in other diagram configurations when data was available.
My client:
MsQuic:
Visualization 50ms
My client:
MsQuic:
Visualization 250ms
My client:
MsQuic:
Solution:
I don't have a solution for the issue yet. I'm looking for feedback on whether this is known / expected behavior, whether this is user error on my part, and if and how I can help troubleshoot it.
To rule out an error in my client, an official `perf` client implementation might be helpful.

Does this change what s2n-quic sends over the wire? Probably not
Does this change any public APIs? Probably not
Requirements / Acceptance Criteria: