This repository has been archived by the owner on Dec 10, 2024. It is now read-only.

Does not handle retries or backoff for Gitlab.com API endpoints #630

Closed
bleggett opened this issue May 28, 2019 · 7 comments · Fixed by #800

Comments

@bleggett

bleggett commented May 28, 2019

For those of us using Gitlab SaaS, we're quite frequently hitting the Gitlab rate limits.

Gitlab returns the following "standard" rate limit retry headers in 429 Too Many Requests responses:

< RateLimit-Limit: 600
< RateLimit-Observed: 29
< RateLimit-Remaining: 569
< RateLimit-Reset: 1558720990

It would be ideal if go-gitlab properly interpreted these and implemented standard backoff/retry logic for Gitlab requests, as node-gitlab does.

Obviously it can be reimplemented every time in the surrounding code that invokes go-gitlab, but I think logic this basic belongs in this library.

I'm going to start work on a PR for this; however, if there is disagreement about whether this functionality belongs in go-gitlab upstream, I'd like to hear it.
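
For illustration, interpreting those headers in Go could look roughly like this (the type and function names are placeholders, not anything that exists in go-gitlab); RateLimit-Reset is a Unix timestamp:

package gitlabratelimit

import (
	"net/http"
	"strconv"
	"time"
)

// RateLimit holds the values GitLab reports in its RateLimit-* headers.
type RateLimit struct {
	Limit     int       // RateLimit-Limit: requests allowed per window
	Observed  int       // RateLimit-Observed: requests made so far
	Remaining int       // RateLimit-Remaining: requests left in the window
	Reset     time.Time // RateLimit-Reset: when the window resets (Unix seconds)
}

// ParseRateLimit extracts the rate-limit headers from a GitLab API response.
func ParseRateLimit(resp *http.Response) RateLimit {
	atoi := func(name string) int {
		n, _ := strconv.Atoi(resp.Header.Get(name))
		return n
	}
	reset, _ := strconv.ParseInt(resp.Header.Get("RateLimit-Reset"), 10, 64)
	return RateLimit{
		Limit:     atoi("RateLimit-Limit"),
		Observed:  atoi("RateLimit-Observed"),
		Remaining: atoi("RateLimit-Remaining"),
		Reset:     time.Unix(reset, 0),
	}
}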

@svanharmelen
Member

@bleggett I'm not against adding this, but I think it would be good to first describe the approach you want to take before spending too much time on it.

I recently implemented something similar for another Go client, so I have some thoughts about how I would like to add support for this.

@bleggett
Author

bleggett commented May 29, 2019

@svanharmelen Sure.

Since the response headers for 429 Too Many Requests aren't really standardized (the ones above are Gitlab-specific), I was just going to do something simple like the sketch below:

  • add some logic (probably in gitlab.go/Do()) that checks whether RateLimit-Remaining <= 5
  • and, if so, sleep for RateLimit-Reset - currentTime + minorRandomPositiveJitter
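
A rough sketch of that logic (the threshold and the one-second jitter cap are arbitrary placeholders, not what the library does):

package gitlabratelimit

import (
	"math/rand"
	"time"
)

// waitIfNearLimit sleeps until the rate-limit window resets (plus a bit of
// jitter) when fewer than threshold requests remain. remaining and reset
// would come from the RateLimit-Remaining and RateLimit-Reset headers.
func waitIfNearLimit(remaining int, reset time.Time, threshold int) {
	if remaining > threshold {
		return
	}
	wait := time.Until(reset)
	if wait < 0 {
		wait = 0
	}
	// Add up to one second of random jitter so that many clients waiting on
	// the same reset don't all wake up at exactly the same instant.
	jitter := time.Duration(rand.Int63n(int64(time.Second)))
	time.Sleep(wait + jitter)
}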

If you've got something planned or can just port over something smarter, that would be preferable.

@vmarkovtsev

Please note that some usage patterns require limiting the sleep time, e.g. the user specifies a hard maximum amount of time the call may take. We had similar problems with the GitHub API, and we wrapped the rate limit error by operating on a Context which can be cancel()-ed. So this feature should probably be implemented outside the low-level machinery, in the form of an extra user-side wrapper.

@vmarkovtsev

This is what I mean:

// Inside a retry loop: on a rate-limit error, wait until the limit
// resets (or until the context is cancelled), then retry the call.
result, _, err := m.client.Search.Users(ctx, email+" in:email", searchOpts)
if err != nil {
	if rl, ok := err.(*github.RateLimitError); ok {
		logrus.Warnf("rate limit was hit, waiting until %s", rl.Rate.Reset)
		select {
		case <-time.After(rl.Rate.Reset.Sub(time.Now().UTC())):
			// Limit has reset; fall through and retry.
		case <-ctx.Done():
			return "", "", context.Canceled
		}

		continue
	}
	return "", "", err
}

This works perfectly for GitHub. IDK much about GitLab, but I guess it works similarly.
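
For GitLab, a sketch of the same idea might look like this (the helper name is made up; on a 429 the caller would read the RateLimit-Reset header, a Unix timestamp, from the raw response):

package gitlabratelimit

import (
	"context"
	"net/http"
	"strconv"
	"time"
)

// waitForReset blocks until the time given by the RateLimit-Reset header,
// unless the context is cancelled first.
func waitForReset(ctx context.Context, resp *http.Response) error {
	secs, err := strconv.ParseInt(resp.Header.Get("RateLimit-Reset"), 10, 64)
	if err != nil {
		return err
	}
	select {
	// A negative duration makes time.After fire immediately, so a reset
	// time in the past simply means "retry now".
	case <-time.After(time.Until(time.Unix(secs, 0))):
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}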

@svanharmelen
Member

I have a solution for this that I can port from another project I've built recently. I just need to find a little bit of time to do so, but hopefully I can add this in the coming week or so.

@totobest

totobest commented Mar 8, 2020

I'm getting 429s while using the Terraform GitLab provider, which uses go-gitlab.
Could this be implemented using https://github.com/hashicorp/go-retryablehttp, to handle 429s and maybe other temporary errors at the same time?
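
For example, something roughly like this (untested; how the resulting *http.Client would be handed to go-gitlab depends on the client constructor and options in use):

package gitlabratelimit

import (
	"context"
	"net/http"
	"time"

	"github.com/hashicorp/go-retryablehttp"
)

// newRetryingHTTPClient builds an *http.Client that retries 429s (and the
// usual transient failures) with exponential backoff. The returned client
// could then be used as the underlying HTTP client for go-gitlab.
func newRetryingHTTPClient() *http.Client {
	rc := retryablehttp.NewClient()
	rc.RetryMax = 5
	rc.RetryWaitMin = 1 * time.Second
	rc.RetryWaitMax = 30 * time.Second
	rc.CheckRetry = func(ctx context.Context, resp *http.Response, err error) (bool, error) {
		// Retry explicitly on 429, then defer to the default policy for
		// network errors and 5xx responses.
		if err == nil && resp != nil && resp.StatusCode == http.StatusTooManyRequests {
			return true, nil
		}
		return retryablehttp.DefaultRetryPolicy(ctx, resp, err)
	}
	return rc.StandardClient()
}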

@svanharmelen
Member

@bleggett and @vmarkovtsev I just created PR #800 in order to solve this. It would be great if you could confirm that the PR doesn't break anything and fixes the issue... Thanks!
