Honor the rate limit headers in "hub api" #2317
Conversation
Hi, thank you for your contribution!

This is a very interesting feature. Since the rate limit reset window is 1 hour, this functionality could theoretically make the `hub api` execution last for up to an hour if the rate limit was exhausted immediately at the beginning of the hour. I'm not sure this should be on by default, because I am sure that most users, particularly when running `hub api` manually in their own terminal, would perceive this delay as hub getting "frozen" and would terminate the process.

However, I do see value in this being optionally enabled per-invocation. How would you imagine the flag for `hub api` to control this should be called?
commands/api.go
Outdated
@@ -222,7 +223,7 @@ func apiCommand(cmd *Command, args *Args) {
 		fi, err := os.Open(fn)
 		utils.Check(err)
 		body = fi
-		defer fi.Close()
+		defer utils.Check(fi.Close())
If we couldn't close a file for whatever reason, I think that it would be a much better idea to allow the program to continue (it's not a big deal, after all) rather than to halt it.
commands/api.go
Outdated
if rateLimitLeft <= 1 {
	rollover := time.Unix(int64(rateLimitResetMs)+1, 0)
	_, _ = fmt.Fprintf(ui.Stderr, "Pausing until %v for Rate Limit Reset ...\n", rollover)
	time.Sleep(time.Until(rollover))
Nice, elegant implementation of this feature! 👍
That wasn't my experience, but I was also using Anyway, with my token set, the
FWIW, the change does include a message to stderr about why it is sleeping and for how long, but I hear you. I stopped short of adding a flag for
The diff looks way cleaner, thank you! Some questions/ideas that I have:
1. Considering that this might be opt-in via flag (I will keep thinking about what the appropriate CLI API for this could be), would it make sense for this behavior to also handle API error responses that are the direct result of the rate limit? E.g. if the very first `hub api` request results in an HTTP error because of the rate limit being hit, we could handle that error by waiting until reset. How difficult would it be to extend your current approach to cover that?
2. If this is opt-in, should we also apply the overall behavior to all `hub api` requests, not just in pagination mode?
I'm also wondering whether the appropriate place to implement the wait-until-ratelimit-reset behavior would be in a RoundTripper: https://github.com/github/hub/blob/cd81a7bada3d2986f213db424acaa570523a9c9f/github/http.go#L51 This behavior could be off-by-default, but activated on-demand via command-line flag or environment variable. Just thinking out loud.
Just to follow up so you don't think I have forgotten about this: I haven't yet made the time to try out the suggestions (about whether rate limiting is zero or one based, trying to catch the actual rate-limit status code, and trying out the
By coincidence, I actually experienced rate-limit exhaustion today, so I can report back some specifics: Sadly, the API responds with I also discovered that it is zero indexed, and it does actually deliver
Awesome, thanks for the updates!
I think that it won't be too hard to detect the HTTP 403 that happened as a result of rate limiting. You can use ErrorInfo to inspect the message returned from the API: https://github.com/github/hub/blob/03533a167e09c53323b892c38e3257bdfdb47a6d/github/http.go#L490
if resp.StatusCode == 403 {
if info, err := resp.ErrorInfo(); err == nil {
if strings.Contains(info.Message, "rate limit exceeded") { ... }
}
}
I think that `hub api --rate-limit` should obey rate limits even if `--pagination` wasn't used, and even if the first request already failed due to rate limiting.
It's safe to repeat GET requests. I would strictly avoid repeating any requests other than GET or HEAD.
commands/api.go
Outdated
var atoiErr error
if xRateLimitLeft := response.Header.Get(rateLimitRemainingHeader); len(xRateLimitLeft) != 0 {
	rateLimitLeft, atoiErr = strconv.Atoi(xRateLimitLeft)
	utils.Check(atoiErr)
I would rather that we ignore errors instead of aborting the process on malformed response header values
I am strongly against the very idea of "ignoring errors," as one can tell from my previous attempt to actually trap them before being asked to roll that back. If the project really feels strongly that `hub` should swallow errors, I'll capitulate to get this PR in, but I can assure you that, as an end user of software, I have been bitten more times by swallowed errors than I have ever thought to myself, "wow, this software talks about errors too much."
Also, this feature could use a test! See We try to avoid slow tests, so if a scenario is triggering the
Great, now I need to do ruby :-( That The
and it also erroneously assumes the project is in a After straightening all that out,
So, as I asked previously: is that extra work mandatory to land this, or a nice-to-have? Because this started out as scratching an itch of mine, and finding how to trap rate limiting in all of those scenarios without inadvertently introducing weird bugs feels like Real Work™.
I'm sorry you had a bad experience running tests. The Vagrantfile is misleading; it was for a different purpose than running tests and is no longer accurate. Don't worry about the rest of the functionality that I proposed. I can take over from here and get it over the finish line. Thank you for the work you've put into this.
That's awesome that there is now a docker container; thank you for getting that in. And my thanks for all your continued patience with this. I'm sorry I don't have the mental bandwidth right now to chase this issue all the way down to its ideal solution.
Previously, calling `hub api --paginate` with any substantial list of results would exhaust the rate limit allocation, since `hub api` did not pause before requesting the next page. With this change, `hub api` will check the rate limit headers and sleep until "Reset" if calling for the next page would exceed the limit.
@mdaniel Thank you for your work so far. I've added some changes:
Please review! 🙇