
Handling of HTTP error response codes in fetcher #746

Closed
kdhageman opened this issue Dec 14, 2020 · 5 comments

@kdhageman

When using the fetcher, all failed HTTP requests are dumped to glog, with no opportunity for the developer to act on those responses. For instance, we are running into large numbers of 429 response codes that pollute the logs, and there is no way for us to build a backoff policy ourselves.

The current code base contains a TODO for a backoff policy, but it might be useful to let developers implement their own handling of HTTP response codes, for the sake of flexibility.

mhutchinson added a commit to mhutchinson/certificate-transparency-go that referenced this issue Dec 14, 2020
This is a huge improvement over retrying without any backoff. However, this could still benefit from being tunable, supporting hard failing if requests can't be completed within the backoff configuration, and resetting the backoff if requests start succeeding faster.
@pav-kv
Contributor

pav-kv commented Dec 14, 2020

Hello. Could you describe your use case, please, to help us find the best solution?

Re custom solutions for HTTP response codes and retry policies. Note that Fetcher accepts a client.LogClient struct. We could make it an interface, which would allow you to implement a decorator with your policies. You would then need to pass this decorated LogClient to the Fetcher, instead of the current undecorated one.
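
As a rough sketch of what such a decorator could look like (the LogFetcher interface, its method, and the retry policy below are only illustrative, not the actual client.LogClient API):

```go
package retryclient

import (
	"context"
	"time"
)

// LogFetcher is a hypothetical interface standing in for whatever the Fetcher
// would accept once client.LogClient is hidden behind an interface.
type LogFetcher interface {
	GetEntries(ctx context.Context, start, end int64) ([][]byte, error)
}

// retryingFetcher decorates another LogFetcher with a fixed-delay retry policy.
type retryingFetcher struct {
	wrapped  LogFetcher
	attempts int
	delay    time.Duration
}

func (r *retryingFetcher) GetEntries(ctx context.Context, start, end int64) ([][]byte, error) {
	var lastErr error
	for i := 0; i < r.attempts; i++ {
		entries, err := r.wrapped.GetEntries(ctx, start, end)
		if err == nil {
			return entries, nil
		}
		lastErr = err
		// Wait before the next attempt, unless the context is cancelled first.
		select {
		case <-ctx.Done():
			return nil, ctx.Err()
		case <-time.After(r.delay):
		}
	}
	return nil, lastErr
}
```

The Fetcher would then be constructed with the decorated client, so the retry policy lives entirely in application code.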

mhutchinson added a commit that referenced this issue Dec 14, 2020
Added backoff to fetcher worker to partially address #746.

This is a huge improvement over retrying without any backoff. However, this could still benefit from:
 * being tunable
 * supporting hard failing if requests can't be completed within the backoff configuration
 * resetting the backoff if requests start succeeding faster
@mhutchinson
Contributor

I've submitted #747 to at least make this perform backoff on each retry attempt. This cuts down client-side log spam and also protects servers from client abuse.

@pav-kv
Contributor

pav-kv commented Dec 14, 2020

I'm also preparing PR #748, which allows decorating requests. @kdhageman Will it help you handle retries / errors in the way you want?

@kdhageman
Author

@pavelkalinnikov I think PR #748 will work fine, thanks!

To elaborate on the use case: right now we are running into 429 Too Many Requests responses, for which a backoff mechanism would probably be a good solution. There might be a future case, though (for instance a CT log returning 5xx response codes), for which a different action is required (e.g. shutting down fetching altogether).
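
Roughly, the kind of per-status-code dispatch I have in mind (the Action type and its values are made-up names for illustration, not anything in the fetcher today):

```go
package retryclient

import "net/http"

// Action is a hypothetical reaction to a failed fetch.
type Action int

const (
	BackoffAndRetry Action = iota // e.g. 429 Too Many Requests
	StopFetching                  // e.g. the log itself is unhealthy
	FailFast                      // everything else
)

// actionForStatus maps an HTTP status code to one of the reactions above:
// back off on 429, stop fetching entirely on 5xx, fail fast otherwise.
func actionForStatus(status int) Action {
	switch {
	case status == http.StatusTooManyRequests:
		return BackoffAndRetry
	case status >= 500:
		return StopFetching
	default:
		return FailFast
	}
}
```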

@pav-kv
Contributor

pav-kv commented Dec 15, 2020

You might find Trillian's client/backoff package useful (we use it to implement retry / backoff strategies in this repo, for example in #747).
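
Roughly, a wrapper around a fetch call could look like this (assuming the Backoff fields Min/Max/Factor/Jitter and its Duration() method; the exact API may differ, so check the package itself):

```go
package retryclient

import (
	"context"
	"time"

	"github.com/google/trillian/client/backoff"
)

// fetchWithBackoff retries an arbitrary fetch function with exponentially
// growing delays until it succeeds or the context is cancelled.
func fetchWithBackoff(ctx context.Context, fetch func() error) error {
	bo := &backoff.Backoff{
		Min:    1 * time.Second,  // first delay
		Max:    30 * time.Second, // cap on the delay
		Factor: 2,                // each delay is roughly twice the previous one
		Jitter: true,             // randomize delays to avoid synchronized retries
	}
	for {
		if err := fetch(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(bo.Duration()): // sleep for the next backoff interval
		}
	}
}
```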

I'm closing this issue for now. Feel free to reopen if it requires any more action.
