This is awesome!
@veverkap it would be super nice if you could verify this solution. I have limited access to a GitLab instance and don't actually use GitLab much these days, so I'm going to wait to merge this one until 2 individuals have confirmed that this change doesn't break anything for them and that the rate limiting and retry logic work as expected. So if you can give the code in this PR a go, that would be very much appreciated!
This adds a rate limiter in order to try to prevent hitting the rate limit altogether, and in addition it adds some retry logic.
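For readers unfamiliar with the pattern, the idea is roughly the following: a token-bucket limiter gates every outgoing request so the API limit is ideally never hit, and responses that still indicate throttling or a server-side error are retried with a backoff. A minimal sketch using golang.org/x/time/rate is shown below; the helper name is made up and this is not the actual implementation in this PR:

```go
package client

import (
	"context"
	"fmt"
	"net/http"
	"time"

	"golang.org/x/time/rate"
)

// doWithRateLimitAndRetry waits for a token from the limiter before every
// attempt, and retries requests that come back as 429 or 5xx. The request
// must be safe to re-send (e.g. a GET without a body).
func doWithRateLimitAndRetry(ctx context.Context, c *http.Client, limiter *rate.Limiter, req *http.Request, maxRetries int) (*http.Response, error) {
	for attempt := 0; attempt <= maxRetries; attempt++ {
		// Block until the limiter hands out a token (or the context is cancelled).
		if err := limiter.Wait(ctx); err != nil {
			return nil, err
		}

		resp, err := c.Do(req.WithContext(ctx))
		if err != nil {
			return nil, err
		}

		// Anything other than "too many requests" or a server error is final.
		if resp.StatusCode != http.StatusTooManyRequests && resp.StatusCode < 500 {
			return resp, nil
		}
		resp.Body.Close()

		// Naive linear backoff; a real implementation might honour the
		// RateLimit-Reset header instead.
		time.Sleep(time.Duration(attempt+1) * time.Second)
	}
	return nil, fmt.Errorf("giving up after %d retries", maxRetries)
}
```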
Went ahead and tried this locally and couldn't break it. 👍
Thanks for testing it @veverkap! If anyone else can find some time to also test it, then I will merge it right after that... Thanks again!
Tested with upstream project
Cool... In that case I am going to merge this one today. Thanks for testing it!
I reckon it would be nice to be able to configure this behaviour; the currently hard-coded solution takes ~5 minutes before it returns a non-2xx response. This might not be ideal for everyone.
The 0.29.0 version adds some auto-retry capabilities which make some of our tests take ~5min 🤦 xanzy/go-gitlab#800 (comment)
Care to share some more details about that comment @mvisonneau? That would be a bug in my opinion, so I'm wondering what you observed...
This is for instance what I experience in this test: https://github.com/mvisonneau/gitlab-ci-pipelines-exporter/blob/master/cmd/gitlab_test.go#L180 I am not convinced that I always want to be stuck in a retry loop for 5m if the GitLab API returns me either a
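To make the multi-minute figure concrete: against a mock endpoint that is permanently rate limited, an exponential backoff that caps out at 30 seconds quickly adds up to minutes before the caller gives up. A rough, self-contained illustration using plain net/http with a hand-rolled retry loop (not go-gitlab's actual retry policy):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"time"
)

func main() {
	// A mock API that always reports rate limiting.
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusTooManyRequests)
	}))
	defer server.Close()

	start := time.Now()
	wait := time.Second

	// 10 attempts with exponential backoff capped at 30s means roughly
	// 1+2+4+8+16+30+30+30+30 ≈ 150s of pure waiting before giving up.
	for attempt := 1; attempt <= 10; attempt++ {
		resp, err := http.Get(server.URL)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode != http.StatusTooManyRequests {
				break // a real response, no retry needed
			}
		}
		if attempt == 10 {
			break // retry budget exhausted, no point sleeping again
		}
		time.Sleep(wait)
		wait *= 2
		if wait > 30*time.Second {
			wait = 30 * time.Second
		}
	}

	fmt.Printf("gave up after %s\n", time.Since(start))
}
```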
It should be simple enough to put in a global switch to disable the retry logic. Would that solve your worries? Hmm... Actually I have a better idea to make this configurable using options when creating the client. Let me whip up a PR (next week probably) and see if we can find a nice solution.
Yeah I really like the idea of having this retry capability as well and thanks for the implementation btw 🙇. I believe that being able to customize the behaviour at the client config level would be great indeed 👍
So... This turned out to be a backwards-incompatible change, but considering most (if not all) people only create 1 client in their code, it's a super small change to make your code work with this new logic. The big advantage is that now that we've introduced client options in this way, we should be pretty future-proof and flexible going forward. The PR (#813) is pretty big, but the changes that actually matter for callers are small.
In your code this should be a working solution:

```go
func getMockedGitlabClient() (*http.ServeMux, *httptest.Server, *Client) {
	mux := http.NewServeMux()
	server := httptest.NewServer(mux)

	c := &Client{
		Client:      gitlab.NewClient(&http.Client{}, ""),
		RateLimiter: ratelimit.New(100),
	}

	if err := gitlab.WithBaseURL(server.URL)(c); err != nil {
		panic(err) // Or adapt the helper function to return an error
	}

	if err := gitlab.WithoutRetries()(c); err != nil {
		panic(err) // Or adapt the helper function to return an error
	}

	return mux, server, c
}
```

Would this be a workable solution for you @mvisonneau?
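For comparison, outside of tests, creating a client with the new options looks roughly like the sketch below. It assumes the client options API as it eventually shipped in go-gitlab (NewClient taking a token plus ClientOptionFunc values and returning an error), so treat the exact signatures as illustrative rather than a quote from this PR:

```go
package main

import (
	"log"

	"github.com/xanzy/go-gitlab"
)

func main() {
	// With the released options API, NewClient takes a token plus options
	// and returns an error.
	client, err := gitlab.NewClient(
		"your-token",
		gitlab.WithBaseURL("https://gitlab.example.com"),
		gitlab.WithoutRetries(), // opt out of the retry behaviour if it gets in the way
	)
	if err != nil {
		log.Fatalf("failed to create GitLab client: %v", err)
	}

	// The service methods are used the same way as before.
	projects, _, err := client.Projects.ListProjects(&gitlab.ListProjectsOptions{})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("found %d projects", len(projects))
}
```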
That would be perfectly fine for me, thanks a lot @svanharmelen! 🙇 🙏
This adds a rate limiter in order to try to prevent hitting the rate limit altogether, and in addition it adds some retry logic.
Fixes #630