Regression of 429 response handling in 3.6.0 #1805
matthias-bach-by added a commit to matthias-bach-by/jira that referenced this issue on Feb 26, 2024:
The time Jira sends in the Retry-After header is the minimum time Jira wants us to wait before retrying our request. However, the former implementation used it as a maximum waiting time for the next request. As a result, there was a chance that we reached three retries without ever waiting as long as Jira expected, and our request would fail. This change also affects the other retry cases: while previously we jittered our backoff between 0 and the target backoff, we now jitter only between 50% and 100% of the target backoff. This should still protect us from thundering herds and saves us from introducing a new minimum backoff variable for the retry-after case. This solves one of the issues reported in pycontribs#1805.
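A minimal sketch of the jitter change described in that commit message, assuming a small helper that computes the sleep time for a retry; the function name and surrounding retry machinery are illustrative, not jira-python's actual code:

```python
import random


def jittered_backoff(target_backoff: float) -> float:
    """Illustrative only: jitter between 50% and 100% of the target backoff.

    The previous behaviour, random.uniform(0, target_backoff), could produce
    a wait shorter than the Retry-After value Jira asked for.
    """
    return random.uniform(0.5 * target_backoff, target_backoff)
```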
matthias-bach-by added a commit to matthias-bach-by/jira that referenced this issue on Feb 26, 2024:
When rejecting a request with a 429 response, Jira sometimes sends a Retry-After header asking for a backoff of 0 seconds. With the existing retry logic this marks the request as non-retryable and thus fails the request. With this change, such requests are treated as if Jira had sent a Retry-After value of 1 second. This solves one of the issues reported in pycontribs#1805.
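A sketch of how a zero Retry-After value could be handled, using a hypothetical parsing helper rather than the library's actual API:

```python
def effective_retry_after(header_value: str | None) -> float | None:
    """Illustrative only: treat a Retry-After of 0 seconds as 1 second."""
    if header_value is None:
        return None
    suggested_delay = float(header_value)
    # Jira sometimes asks for a 0-second backoff; still wait (and retry after) 1 second.
    return max(suggested_delay, 1.0)
```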
Bug summary
While the handling of the Retry-After header introduced in 3.6.0 via #1713 is an important improvement, it sadly brought two regressions in the behaviour when handling 429 responses.
With the previous plain backoff you had a good chance of backing off long enough to evade the rate limiting, as long as you weren't running parallel requests. The new behaviour is a lot more aggressive, and even with a single client we easily run into the case of not retrying at all.
Is there an existing issue for this?
Jira Instance type
Jira Server or Data Center (Self-hosted)
Jira instance version
9.12.2
jira-python version
3.6.0
Python Interpreter version
3.12
Which operating systems have you used?
Reproduction steps
Stack trace
Expected behaviour
I'd expect the value of the Retry-After header to be interpreted as a lower bound for the backoff, i.e., doing something along the lines of the following:
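(The original snippet from the report is not preserved here; the following is a hedged sketch of the intended logic. suggested_delay corresponds to the parsed Retry-After value, while default_backoff, the function name, and the exact exponential formula are assumptions.)

```python
import random


def next_delay(suggested_delay: float, default_backoff: float, retry_number: int) -> float:
    """Illustrative sketch: never wait less than the server-suggested delay."""
    backoff = default_backoff * (2 ** retry_number)
    jittered = random.uniform(0.5 * backoff, backoff)
    # Retry-After is a lower bound, not an upper bound, for the wait time.
    return max(suggested_delay, jittered)
```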
Furthermore, we should also retry requests that have a suggested_delay of 0. If that would indicate an error for 503 requests, we might need to make the decision depend on the return code, too.
Additional Context
No response