swarm: better backoff logic #1554
Can we expose
Fair enough. Also, it looks like our backoff isn't actually exponential...
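To make the distinction concrete, here is a minimal Go sketch contrasting the two growth curves, assuming the swarm's delay grows as `BackoffBase + BackoffCoef*tries^2` (as the snippet later in the thread suggests). The constant values are illustrative only, not the swarm's actual settings:

```go
package main

import (
	"fmt"
	"time"
)

// Illustrative constants; the real swarm values may differ.
const (
	backoffBase = 5 * time.Second
	backoffCoef = time.Second
)

// quadratic is the schedule the thread describes: base + coef*tries^2.
// It grows polynomially, which is why it is "not actually exponential".
func quadratic(tries int) time.Duration {
	return backoffBase + backoffCoef*time.Duration(tries*tries)
}

// exponential doubles the wait on every consecutive failure: base * 2^tries.
func exponential(tries int) time.Duration {
	return backoffBase * time.Duration(1<<uint(tries))
}

func main() {
	for tries := 0; tries <= 5; tries++ {
		fmt.Printf("tries=%d quadratic=%v exponential=%v\n",
			tries, quadratic(tries), exponential(tries))
	}
}
```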
This will be fixed in a large refactor/simplification that's coming down the pipe.
Note to self: refund backoff "tries" after a period of time. Currently, if we back off to the maximum, wait an hour, and then fail a single dial, we'll wait the maximum backoff again. We should, instead, notice that an hour has passed and forget all the previous failures. Code:

```go
now := time.Now()
// Require more than the base backoff to have elapsed, so the
// sqrt argument below stays non-negative.
if sinceLast := now.Sub(bp.until); sinceLast > BackoffBase {
	// Refund backoff time at the same rate it accrues: invert
	// delay = BackoffBase + BackoffCoef*tries^2 to recover tries.
	refund := int(math.Sqrt(float64((sinceLast - BackoffBase) / BackoffCoef)))
	if refund < bp.tries {
		bp.tries -= refund
	} else {
		bp.tries = 0
	}
}
```

Not going to do this now because we have so many other changes in the pipeline and we may want to discuss this.
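To see the refund idea end to end, here is a self-contained, runnable sketch. The `backoffPeer` type, its fields and methods, and the constant values are hypothetical stand-ins for the swarm state referenced as `bp` above; only the refund arithmetic is taken from the snippet:

```go
package main

import (
	"fmt"
	"math"
	"time"
)

// Hypothetical constants mirroring the names in the snippet above.
const (
	BackoffBase = 5 * time.Second
	BackoffCoef = time.Second
)

// backoffPeer is a stand-in for the swarm's per-peer backoff record.
type backoffPeer struct {
	tries int       // consecutive failed dials
	until time.Time // don't redial before this instant
}

// delay computes the next wait: BackoffBase + BackoffCoef*tries^2.
func (bp *backoffPeer) delay() time.Duration {
	return BackoffBase + BackoffCoef*time.Duration(bp.tries*bp.tries)
}

// fail records a failed dial, refunding idle time before adding a try.
func (bp *backoffPeer) fail(now time.Time) {
	if sinceLast := now.Sub(bp.until); sinceLast > BackoffBase {
		// Invert delay = BackoffBase + BackoffCoef*tries^2 to see how
		// many "tries" worth of waiting have already elapsed.
		refund := int(math.Sqrt(float64((sinceLast - BackoffBase) / BackoffCoef)))
		if refund < bp.tries {
			bp.tries -= refund
		} else {
			bp.tries = 0
		}
	}
	bp.tries++
	bp.until = now.Add(bp.delay())
}

func main() {
	bp := &backoffPeer{}
	now := time.Now()
	for i := 0; i < 5; i++ { // five quick failures ramp up the delay
		bp.fail(now)
	}
	fmt.Println("after 5 failures:", bp.tries, bp.delay())
	bp.fail(now.Add(time.Hour)) // an hour later: tries are refunded
	fmt.Println("after idle hour:", bp.tries, bp.delay())
}
```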
Sounds good, thanks.
Working through all the different backoff cases:
This tries to provide a simple-to-reason-about solution to the list of problems in https://github.com/libp2p/go-libp2p-swarm/issues/37
Status: While @petar's patches are likely the right way to go in the future, they introduce quite a few new interfaces that will need to be discussed. In the interest of getting a fast fix in, @willscott is implementing (#191) a dumb version that just backs off full addresses inside the swarm itself, without changing core libp2p interfaces. That gives us some breathing room.
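For illustration, a minimal sketch of what address-level backoff inside the swarm could look like; the `addrBackoff` type, its methods, and the flat backoff window are hypothetical and not the actual implementation from #191:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// addrBackoff backs off individual dial addresses, with no new
// core interfaces: callers just ask before dialing.
type addrBackoff struct {
	mu      sync.Mutex
	until   map[string]time.Time // multiaddr string -> retry-after
	backoff time.Duration        // flat per-address backoff window
}

func newAddrBackoff(window time.Duration) *addrBackoff {
	return &addrBackoff{until: make(map[string]time.Time), backoff: window}
}

// Backoff records a failed dial to addr.
func (ab *addrBackoff) Backoff(addr string) {
	ab.mu.Lock()
	defer ab.mu.Unlock()
	ab.until[addr] = time.Now().Add(ab.backoff)
}

// Backing reports whether addr should currently be skipped.
func (ab *addrBackoff) Backing(addr string) bool {
	ab.mu.Lock()
	defer ab.mu.Unlock()
	return time.Now().Before(ab.until[addr])
}

func main() {
	ab := newAddrBackoff(5 * time.Minute)
	ab.Backoff("/ip4/203.0.113.7/tcp/4001")
	fmt.Println(ab.Backing("/ip4/203.0.113.7/tcp/4001")) // true
	fmt.Println(ab.Backing("/ip4/203.0.113.8/tcp/4001")) // false
}
```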
Came up in: libp2p/go-libp2p-kad-dht#96