Describe the bug
We were using the Vault LifetimeWatcher from the api package in an internal project and noticed an issue with the backoff behavior of token renewal that was causing a bunch of our tests to fail when we upgraded to a new version of Vault.
`sleepDuration` appears to be the `time.Duration` the watcher waits before re-running the renewal loop. When `errBackoff` is nil, a simple backoff duration is calculated via the call to `calculateSleepDuration`. When `errBackoff` is not nil, however, `sleepDuration` is never set, so the timeout in the following `select` block fires again immediately.
In our testing environment this was caught because our mock Vault server was returning an invalid response, so the renew operation kept failing and we saw an inordinate number of immediate retries.
The fix is simply refactoring the block above to capture the `errBackoff.NextBackoff()` value as `sleepDuration`. I'll open a PR shortly.
The bug is in vault/api/lifetime_watcher.go, lines 348 to 354 (commit 1274f2d).