Sometimes the application knows about network path changes that Quinn may not detect immediately. If the network has changed significantly, it is often faster to reach optimal throughput and latency again by resetting state such as the congestion controller, the RTT estimator, and MTU detection to their initial values.
This was explored a bit in #1842, but that issue was ultimately closed. The topic was recently brought up again in another issue, and there was an indication that something like this could belong in Quinn. So this is an attempt to find out whether we can get closer to that.
It seems the biggest difference in thinking is whether this should be per Connection or for the entire Endpoint.
The motivation for per-connection is that this state is stored per connection, since each connection has its own network path.
The argument for the entire endpoint is that an endpoint has only a single socket, and a change to that socket's network is probably the most common way for an application to learn that the path changed.
Thanks for the summary, and for digging up the old discussion! The proposal sounds good to me. Even if we only wanted to expose this on the endpoint, quinn would still need to iterate over all connections and reset those states, which is exactly what the proposed interface allows.
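To illustrate that point, here is a minimal sketch of an endpoint-level entry point that fans out to per-connection resets. All names here (`Endpoint`, `Connection`, `network_changed`, `path_changed`) are hypothetical stand-ins, not the real Quinn API:

```rust
// Hypothetical stand-ins for illustration only; not the real quinn types.
struct Connection {
    reset_count: u32,
}

impl Connection {
    /// Per-connection reset. In a real implementation this would reset
    /// congestion control, RTT estimation, and MTU discovery to their
    /// initial states.
    fn path_changed(&mut self) {
        self.reset_count += 1;
    }
}

struct Endpoint {
    connections: Vec<Connection>,
}

impl Endpoint {
    /// The endpoint-level API is just a thin loop over the
    /// per-connection one, as described above.
    fn network_changed(&mut self) {
        for conn in &mut self.connections {
            conn.path_changed();
        }
    }
}
```

This is why the two granularities are not mutually exclusive: the endpoint-level API can be layered directly on top of the per-connection one.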
whether this should be per Connection or for the entire Endpoint.
I don't think these are mutually exclusive. An endpoint bound to a wildcard address can have connections with different source addresses, and an endpoint bound to a specific address can be rebound. It's also not an obviously complex interface or implementation, so I'm happy to have both, or whichever is of most interest to you.
Connection-granularity active migration may require a little extra testing and debugging since we don't currently use explicit source addresses for outgoing connections, but doing so should fit gracefully into the existing architecture. It's not necessary to implement that if all you actually want is the state reset, though.
It seems the biggest difference in thinking is whether this should be per Connection or for the entire Endpoint.

Perhaps it is easier to agree on what the API and implementation in quinn-proto should be, because there an API on the Connection seems better suited. Currently we have https://github.com/n0-computer/quinn/blob/02f3b33de039951bfaf4cc383d9f0ec6cdf8dcb1/quinn-proto/src/connection/mod.rs#L1317 as the entry point to this in the fork for iroh. The implementation currently does three things:

- Resets the congestion controller to its initial state.
- Resets the RttEstimator from the original TransportConfig settings.
- Resets MTU detection from the TransportConfig settings.

What is the opinion on adding this in some form to quinn-proto?
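The three-step reset described above can be sketched as follows. This is a simplified stand-in, assuming hypothetical `TransportConfig` fields (`initial_rtt`, `initial_window`, `initial_mtu`) and flattened connection state; the real quinn-proto types are more involved:

```rust
use std::time::Duration;

// Hypothetical, simplified stand-ins for quinn-proto internals.
struct TransportConfig {
    initial_rtt: Duration,
    initial_window: u64,
    initial_mtu: u16,
}

struct Connection {
    config: TransportConfig,
    // Per-path state that drifts as the connection runs.
    congestion_window: u64,
    smoothed_rtt: Duration,
    current_mtu: u16,
}

impl Connection {
    fn new(config: TransportConfig) -> Self {
        let congestion_window = config.initial_window;
        let smoothed_rtt = config.initial_rtt;
        let current_mtu = config.initial_mtu;
        Self { config, congestion_window, smoothed_rtt, current_mtu }
    }

    /// Signal that the network path changed: discard learned path state
    /// (congestion window, RTT estimate, discovered MTU) and start over
    /// from the configured initial values.
    fn path_changed(&mut self) {
        self.congestion_window = self.config.initial_window;
        self.smoothed_rtt = self.config.initial_rtt;
        self.current_mtu = self.config.initial_mtu;
    }
}
```

The key design point is that the reset re-derives everything from the original TransportConfig, so the connection behaves as if the new path were brand new rather than carrying over estimates learned on the old path.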