Retry backoff not reset on successful re-connection #43
Comments
What you're seeing is technically "correct" behavior, but I agree it's worth modifying the code to account for this case. In short, the connection management considers anything other than an explicit request to shut down the connection as a failed attempt. That explicit request only happens in three cases:
The current code has a very broad definition of failure. Even though your devices eventually reestablish a healthy connection, they are still disconnecting in an "unhealthy" way. As a result, the failure count keeps incrementing, and the backoff delay increases to its maximum.

The reason for this logic is that a successful dial alone does not indicate a healthy state. It's possible we connect to something at the given address and port, but it's not actually an LLRP Reader. Even if it is an LLRP Reader, it still very well may not be a healthy connection. For instance, your logs show several semi-successful connection attempts which ultimately failed because the Reader reported it was already connected to a different client. You could implement the logic "reset the backoff after a successful dial," but for the reasons above, a successful dial by itself doesn't prove the connection is healthy.

Here's a more detailed explanation of the code, along with a few other suggestions for potential improvements, especially if you want to change this logic to handle other cases. The relevant code starts here and continues to the end of the closure. The code uses three nested loops, each for a different purpose:
As an aside, nesting it like that isn't great, but the locking/state management going on there is a bit tricky, and this makes it harder to break (because most of the state is not accessible elsewhere). Looking back, I think we could clean this up by simply assigning the functions explicitly, e.g.:

```go
connectToReader := func(ctx context.Context) (bool, error) {
	// lines 128-174 here...
}

notifyIfDisabled := func(ctx context.Context) (bool, error) {
	err := retry.Quick.RetryWithCtx(ctx, maxConnAttempts, connectToReader)
	// lines 177-203
}

// Until the service shuts down or the device is removed,
// attempt to maintain a connection to the reader;
// when not possible, notify EdgeX that the device is Disconnected.
for ctx.Err() == nil {
	_ = retry.Slow.RetryWithCtx(ctx, retry.Forever, notifyIfDisabled)
}
```

Hopefully this helps explain the issue and potential remedies.
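For illustration only, here is a minimal, self-contained sketch of the kind of change being discussed: reset the backoff only after an attempt that was confirmed healthy (for example, the Reader accepted the client and answered a request), rather than after a bare successful dial. This is not the service's code or its internal retry package; `maintainConnection`, `connectFunc`, and all delay values are hypothetical.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// connectFunc dials, verifies the Reader, and then serves the connection
// until it drops. It reports whether a verified-healthy state was reached.
type connectFunc func(ctx context.Context) (wasHealthy bool, err error)

// maintainConnection retries with capped exponential backoff, but resets
// the backoff only after an attempt that was confirmed healthy --
// not after a bare successful dial.
func maintainConnection(ctx context.Context, connect connectFunc) {
	const (
		baseDelay = time.Second
		maxDelay  = 5 * time.Minute
	)
	delay := baseDelay

	for ctx.Err() == nil {
		wasHealthy, err := connect(ctx)
		if wasHealthy {
			// The Reader accepted us and responded before the connection
			// ended, so the next round of retries starts from the base delay.
			delay = baseDelay
		}
		if err == nil {
			// Explicit, requested shutdown of the connection: stop retrying.
			return
		}

		// Unhealthy exit: wait before the next attempt, then grow the delay.
		select {
		case <-ctx.Done():
			return
		case <-time.After(delay):
		}
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	attempts := 0
	maintainConnection(ctx, func(ctx context.Context) (bool, error) {
		attempts++
		fmt.Println("connection attempt", attempts)
		// Pretend every third attempt reaches a healthy state before dropping.
		if attempts%3 == 0 {
			return true, errors.New("connection dropped")
		}
		return false, errors.New("dial failed")
	})
}
```

In the actual service, the analogous change would presumably live inside the nested closures shown above, i.e. counting an attempt as "successful" for backoff purposes only once the Reader negotiation completes; the sketch just shows the reset-on-verified-health idea in isolation.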
@ajcasagrande, is this a critical issue that should be resolved for the Jakarta release?
@lenny-intel not a critical issue. It really only affects the simulator (when disconnecting and reconnecting a lot) and devices which are constantly disconnecting. In the case of the latter, the re-connection timeout is probably the least of your concerns (why is it disconnecting in the first place? network issues? power failures? etc.). I wouldn't necessarily close the issue, but it's probably of low priority.
@ajcasagrande, thanks!
I started noticing that the time to retry reader connections becomes increasingly long the more connect/disconnect cycles the readers go through without restarting the device service.
I added extra printouts to the following code in order to track it down a little better.
Snippet of interest which shows what I am referring to:
Explanation:

Notice that 2 minutes go by waiting for attempt 5 (retry.Slow), when it is able to successfully reconnect. A little while later the connection is dropped and, immediately, retry.Quick attempts to reconnect and fails (retry attempt: 1, wait: 0s). After this, retry.Slow waits for attempt 6 for over 3 minutes. retry.Slow should be reset back to attempt 1, or at least the back-off time should be reset.

Full log here
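To make the symptom concrete, here is a small, hypothetical illustration of how a capped exponential backoff grows when the attempt counter is never reset. The base delay, growth factor, and cap below are made-up values, not the actual schedule used by retry.Slow; the point is only the shape of the growth.

```go
package main

import (
	"fmt"
	"time"
)

// backoff returns a hypothetical capped-exponential wait for a given attempt.
// These constants are illustrative only, not the service's real values.
func backoff(attempt int) time.Duration {
	const (
		base    = 15 * time.Second
		maxWait = 10 * time.Minute
	)
	d := base
	for i := 1; i < attempt; i++ {
		d *= 2
		if d > maxWait {
			return maxWait
		}
	}
	return d
}

func main() {
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: wait %v\n", attempt, backoff(attempt))
	}
	// Without a reset, attempts 5-6 already wait minutes; resetting the
	// counter after a healthy re-connection would start back at the base.
}
```

Resetting the attempt counter (or the derived delay) after a verified healthy re-connection would make a later drop start over from the base wait instead of continuing from minutes-long delays.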