Retry backoff not reset on successful re-connection #43

Open
ajcasagrande opened this issue Jun 30, 2021 · 5 comments
Labels
1-low priority (denoting isolated changes), hold (Intended for PRs we want to flag for ongoing review)

@ajcasagrande
Contributor

I started noticing that the time to retry reader connections becomes increasingly long the more times the readers go through connect/disconnect cycles without restarting the device service.

I added extra printouts to the following code in order to track it down a little better.

Here is the snippet of interest, showing what I am referring to:

2021/06/29 16:53:42 retry attempt: 5, wait: 2m5s
level=DEBUG ts=2021-06-29T23:55:47.365787007Z app=edgex-device-rfid-llrp source=device.go:151 device=SpeedwayR-11-25-D6 msg="Attempting LLRP Client connection."
level=INFO ts=2021-06-29T23:55:47.366315604Z app=edgex-device-rfid-llrp source=device.go:470 device=SpeedwayR-11-25-D6 msg="Device connection restored."
...
level=ERROR ts=2021-06-29T23:56:45.720013566Z app=edgex-device-rfid-llrp source=device.go:166 error="failed to get next message: failed to read header: EOF" device=SpeedwayR-11-25-D6 msg="Client disconnected unexpectedly."
2021/06/29 16:56:45 retry attempt: 1, wait: 0s
level=DEBUG ts=2021-06-29T23:56:45.720256459Z app=edgex-device-rfid-llrp source=device.go:139 address=10.0.0.53:5084 device=SpeedwayR-11-25-D6 msg="Attempting to dial Reader."
level=ERROR ts=2021-06-29T23:56:45.720555502Z app=edgex-device-rfid-llrp source=device.go:144 error="dial tcp 10.0.0.53:5084: connect: connection refused" address=10.0.0.53:5084 device=SpeedwayR-11-25-D6 msg="Failed to dial Reader."
2021/06/29 16:56:45 retry attempt: 6, wait: 3m15s

Explanation:

  • Notice that about 2 minutes go by waiting for attempt 5 (retry.Slow), after which it is able to successfully reconnect.

  • A little while later the connection is dropped, and retry.Quick immediately attempts to reconnect and fails (retry attempt: 1, wait: 0s).

  • After this, retry.Slow waits over 3 minutes for attempt 6.

    • Because the connection was restored after attempt 5, retry.Slow should be reset back to attempt 1, or at least its back-off time should be reset.

Full log here
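
For illustration only, here's a toy Go sketch of the pattern I'm describing; the type, the names, and the wait formula are made up and are not the device service's actual retry code:

package main

import (
    "fmt"
    "time"
)

// toyBackoff mimics a retry helper whose failure counter is never reset.
type toyBackoff struct {
    failures int
    base     time.Duration
}

// next returns the delay before the next attempt and records another failure.
func (b *toyBackoff) next() time.Duration {
    wait := time.Duration(b.failures) * b.base
    b.failures++
    return wait
}

// reset is the piece this issue asks for after "Device connection restored."
func (b *toyBackoff) reset() { b.failures = 0 }

func main() {
    b := &toyBackoff{base: 25 * time.Second}
    for attempt := 1; attempt <= 6; attempt++ {
        // Without calling b.reset() after a successful re-connection,
        // the counter keeps climbing, so each new outage starts with a
        // longer and longer wait.
        fmt.Printf("retry attempt: %d, wait: %s\n", attempt, b.next())
    }
}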

@ajcasagrande
Contributor Author

@saites

@saites
Contributor

saites commented Jul 1, 2021

What you're seeing is technically "correct" behavior, but I agree it's worth modifying the code to account for this case.

In short, the connection management considers anything other than an explicit request to shut down the connection as a failed attempt. That explicit request only happens in three cases:

  • If the device address changes
  • If the device is removed from the device service (in EdgeX, that is)
  • If the device service shuts down

The current code has a very broad definition of failure. Even though your devices eventually reestablish a healthy connection, they are still disconnecting in an "unhealthy" way. As a result, the failure count keeps incrementing, and the backoff delay increases to its maximum.

The reason for this logic is that a successful dial alone does not indicate a healthy state. It's possible we connect to something at the given address and port, but it's not actually an LLRP Reader. Even if it is an LLRP Reader, the connection may still not be healthy. For instance, your logs show several semi-successful connection attempts that ultimately failed because the Reader reported it was already connected to a different client.

You could implement the logic "reset the Slow retry loop if the connection was reestablished" by adding return true, nil in the same block that notifies EdgeX when a device moves from enabled -> disabled (that property is reset if the connection is successfully restored). Technically, you'd be telling the Retry code that this attempt was successful (when it was really the one immediately before it), but the behavior will be the same: it resets the failure count and attempts to reconnect right away. If that immediate attempt fails, enabled will still be false, so it returns true, err, signaling that the failure count should be incremented, the next attempt delayed using exponential back-off, and the connection reattempted.
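
A rough sketch of that change, using the closure names from the refactor shown further down; enabled, wasEnabled, and notifyEdgeXDisconnected are stand-ins for the service's real state tracking and notification code, not actual identifiers:

notifyIfDisabled := func(ctx context.Context) (bool, error) {
    // retry.Quick drives the short, immediate dial attempts.
    err := retry.Quick.RetryWithCtx(ctx, maxConnAttempts, connectToReader)

    if wasEnabled && !enabled {
        // The device had a healthy connection and just lost it: notify EdgeX
        // of the enabled -> disabled transition, then report "success" so
        // retry.Slow resets its failure count and redials right away instead
        // of continuing the old back-off.
        notifyEdgeXDisconnected()
        return true, nil
    }

    // Never (re)connected: return the error so retry.Slow increments its
    // failure count and waits with exponential back-off before the next try.
    return true, err
}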


Here's a more detailed explanation of the code, along with a few other suggestions for potential improvements, especially if you want to change this logic to handle other cases.

The relevant code starts here and continues to the end of the closure. The code uses three nested loops, each for a different purpose:

As an aside, nesting it like that isn't great, but the locking/state management going on there is a bit tricky, and the nesting makes that state harder to break (because most of it is not accessible elsewhere). Looking back, I think we could clean this up by simply assigning the functions explicitly, e.g.

connectToReader := func(ctx context.Context) (bool, error) {
    // lines 128-174 here...
}

notifyIfDisabled := func(ctx context.Context) (bool, error) {
    err := retry.Quick.RetryWithCtx(ctx, maxConnAttempts, connectToReader)
    // lines 177-203
}

// Until the service shuts down or the device is removed,
// attempt to maintain a connection to the reader;
// when not possible, notify EdgeX that the device is Disconnected.
for ctx.Err() == nil {
    _ = retry.Slow.RetryWithCtx(ctx, retry.Forever, notifyIfDisabled)
}
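
For reference, the return-value contract those closures rely on is the one described above: returning true, nil tells the retry helper the attempt succeeded, so it resets its failure count and runs the callback again right away, while returning true, err counts as another failure, so the next run is delayed by the (growing) back-off.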

Hopefully this helps explain the issue and potential remedies.

@lenny-goodell
Member

@ajcasagrande, is this a critical issue that should be resolved for the Jakarta release?

@ajcasagrande
Contributor Author

@lenny-intel Not a critical issue. It really only affects the simulator (when disconnecting and reconnecting a lot) and devices that are constantly disconnecting. In the latter case, the re-connection timeout is probably the least of your concerns (why is it disconnecting in the first place? network issues? power failures? etc.).

I wouldn't necessarily close the issue, but it's probably of low priority.

lenny-goodell added the 1-low priority label on Feb 17, 2022
@lenny-goodell
Copy link
Member

@ajcasagrande, thanks!

lenny-goodell added the hold label on Apr 11, 2022
github-project-automation bot moved this to New Issues in Technical WG on Jul 30, 2024
jumpingliu moved this from New Issues to Icebox in Technical WG on Jul 30, 2024