Retries #1141
Comments
Thanks very much for considering this again. Sorry I've been such a squeaky wheel about it. 🙃 I'm definitely a 👍 on this MVP, which will bring us closer to parity with:

```python
class urllib3.util.retry.Retry(
    total=10,
    connect=None,
    read=None,
    redirect=None,
    status=None,
    method_whitelist=frozenset(["HEAD", "TRACE", "GET", "PUT", "OPTIONS", "DELETE"]),
    status_forcelist=None,
    backoff_factor=0,
    raise_on_redirect=True,
    raise_on_status=True,
    history=None,
    respect_retry_after_header=True,
    remove_headers_on_redirect=frozenset(["Authorization"]),
)
```

OR documenting use of the Middleware implementation that @florimondmanca has done the initial legwork on in #1134.
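For reference on the `backoff_factor` semantics in that signature: urllib3 documents the sleep between retries as `backoff_factor * (2 ** (retry_count - 1))`, with no sleep before the first retry and each individual sleep capped (urllib3 uses a 120-second cap). A pure-Python sketch of that formula — the helper name and constant are illustrative, not urllib3 API:

```python
# Sketch of urllib3's documented backoff computation:
# sleep = backoff_factor * 2**(consecutive_errors - 1), no sleep before the
# first retry, each sleep capped at a maximum (urllib3's Retry.BACKOFF_MAX).

BACKOFF_MAX = 120.0  # cap on any single sleep, in seconds

def backoff_time(backoff_factor: float, consecutive_errors: int) -> float:
    """Seconds to sleep before the next retry attempt."""
    if consecutive_errors <= 1:
        return 0.0  # the first retry happens immediately
    return min(BACKOFF_MAX, backoff_factor * (2 ** (consecutive_errors - 1)))

# With backoff_factor=0.5, successive sleeps are 0, 1, 2, 4, 8 seconds...
sleeps = [backoff_time(0.5, n) for n in range(1, 6)]
```

So a `backoff_factor` of 0.5 doubles the wait on each consecutive failure after the first, which is the behaviour being discussed for the MVP.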
Ah, good reminder there - I've missed off doing any sleep / back off in that example, which it probably should include.
Reminder of the proposed API from #784 (which I think pretty much fits the bill for what we're trying to do, actually, even implementation-wise) - items in bold not present in the issue description here:

```python
client = httpx.Client(retries=<int>)
client = httpx.Client(retries=httpx.Retries(<int>[, backoff_factor=<float>]))
```

Notes:

(P.S.: @StephenBrown2 Note that I'm not planning we merge the middleware idea just yet. :-))
We also had some discussion around ... I don't think we should plan for this in core (as noted in the comment, there are some tricky design considerations to prevent users from shooting themselves in the foot), but that's only my current impression.
As long as further configuration can be added to the ...
Yes, we can imagine making a main method for "decide whether we should retry and when" public at some point to allow for extension, a bit similar to our ...
Oh, I didn't mean user-extensibility. I meant adding more well-thought-out knobs to the core ...
My 2c having used requests and various API client libraries: a couple of things I'd like to see considered, but which aren't necessarily have-to-haves: ...
@JayH5 There's a really interesting observation there, in particular wrt. timeouts and retries. The implication that follows is that for connection retries what we really want is retries managed at the Transport API level:

```python
attempts = 0
start_time = time.time()
elapsed = 0.0
next_timeout = timeout

while attempts <= retries and next_timeout > 0.0:
    try:
        # Only ever spend the remaining time budget on each attempt.
        connect(..., timeout=next_timeout)
        break  # connected successfully
    except ConnectFailed:
        elapsed = time.time() - start_time
        next_timeout = timeout - elapsed
        attempts += 1
```

That's quite nice because we're always strictly observing the connection timeout, while also adding retry support within the available time window.

The jitter idea is interesting. I guess it's not always going to be as relevant in the HTTP context, because most resources you're accessing will have a whole range of different clients connecting, and you're getting a much messier skew of connection times, retries, etc. than e.g. in the database case, where you've got a bunch of connections all doing exactly the same thing, and it's easier to see how jitter could help there. However, the same can be true in the HTTP context sometimes. (Say, for inter-service communications.)
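For what it's worth, the jitter idea is commonly implemented as "full jitter": instead of sleeping the full exponential delay, sleep a uniformly random amount between zero and that delay, which de-correlates clients retrying at the same moment. A hedged sketch — `full_jitter_delay` is a hypothetical helper, not anything in httpx or urllib3:

```python
import random

# "Full jitter" backoff: the retry delay is drawn uniformly from
# [0, exponential_delay], so simultaneous clients spread out their retries.

def full_jitter_delay(base: float, attempt: int, cap: float = 30.0) -> float:
    """Delay before retry number `attempt` (1-based), with full jitter."""
    exponential = min(cap, base * (2 ** (attempt - 1)))
    return random.uniform(0.0, exponential)

# With base=0.5, delays are bounded by 0.5, 1.0, 2.0, 4.0 seconds...
delays = [full_jitter_delay(0.5, n) for n in range(1, 5)]
```

The trade-off is that the average delay halves, so jittered schedules are usually paired with a slightly larger base or cap.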
Couple of data points from big open source Python services... PyPI - uses a mounted Session with ...
At this point in time I'd be leaning towards ... We can also still expose more complex retry configuration further down the road.
@tomchristie Are you meaning - add a ...? I've played with this idea a bit and it's a bit awkward. E.g. for testing we'd like to mock the network calls, but since the retry behavior would be tightly coupled with the "open a socket" operation, it seems pretty hard to test, which seems like a big smell. Instead I'd be okay with a ...
Are there any plans to support read/write retries too?
It seems to me that retry-on-failure is not supported. Maybe I misunderstood, but I think that the current mechanism only adds connect retries. I would like to combine timeouts with retries on failure, so that a request is retried up to `max_retries` times with a backoff factor, where every request has a defined timeout. Would that be possible? As a reference, the behaviour would be similar to what can be achieved in requests, as described in this ... Any help on this will be welcome.
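The "retries with backoff, each attempt with its own timeout" behaviour asked for above can also be wrapped around any request callable in user code. A hedged sketch — `retry_request` and `send` are hypothetical names, not httpx API; `send` stands in for something like `lambda timeout: client.get(url, timeout=timeout)`:

```python
import time

def retry_request(send, max_retries=3, backoff_factor=0.5, timeout=5.0):
    """Call `send(timeout=...)` up to max_retries+1 times with backoff.

    Each attempt gets its own full timeout; sleeps grow exponentially
    between failed attempts. Re-raises the last error once exhausted.
    """
    for attempt in range(max_retries + 1):
        try:
            return send(timeout=timeout)
        except Exception:
            if attempt == max_retries:
                raise  # retry budget exhausted
            time.sleep(backoff_factor * (2 ** attempt))

# Demonstration with a stand-in "request" that fails twice, then succeeds.
calls = []

def flaky(timeout):
    calls.append(timeout)
    if len(calls) < 3:
        raise ConnectionError("boom")
    return "ok"

result = retry_request(flaky, max_retries=3, backoff_factor=0.0)
```

Catching bare `Exception` is too broad for real use; in practice you'd catch only the transport errors you consider retryable.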
Our yardstick for 1.0 has generally been to achieve feature parity with requests, plus the following additional support...

Something we've so far omitted is `retries=...`, which is not included in either the requests QuickStart guide, or in the Advanced Usage section. However requests does have some retry support, which can be enabled by mounting a custom adapter... https://requests.readthedocs.io/en/master/api/#requests.adapters.HTTPAdapter

I'd suggest that if we do want to add retry support we should start off by doing so in a limited fashion, and only provide a simple `retries=<int>` argument on the client, which matches the same "only retry on connection failures" behaviour that requests defaults to for integer retry arguments.

That doesn't necessarily preclude that we could consider more complex retry behaviour at some point in the future, but I'd prefer we push back on that for as long as possible.
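For reference, the mounted-adapter approach linked above looks roughly like this — a sketch assuming requests and urllib3 are installed, with illustrative parameter values:

```python
# Mount an HTTPAdapter configured with urllib3's Retry onto a requests
# Session, so all http(s) requests through it get retry behaviour.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

retry = Retry(
    total=5,                             # overall retry budget
    backoff_factor=0.5,                  # exponential sleep between retries
    status_forcelist=[502, 503, 504],    # also retry these response statuses
)
adapter = HTTPAdapter(max_retries=retry)

session = requests.Session()
session.mount("https://", adapter)
session.mount("http://", adapter)
# session.get(...) now retries connection failures and the listed statuses.
```

Passing a plain integer as `max_retries` instead of a `Retry` object gives the "only retry on connection failures" default mentioned above.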
Having more dials for configuration is something we should generally avoid where possible. Also, we've gone to a great deal of effort into nicely spec'ing our Transport API, and it's perfectly feasible for developers to build against that to deal with any more complex behaviours that they'd like to see.
With all that in mind, I'd suggest something we might consider would be...
With the implementation handled inside our `_send_single_request` method. Something like...

I'm still not 100% sure that we want this, but perhaps this is a decent low-impact feature.
Any thoughts?