Connection timeout (not Read, Wall or Total) is consistently taking twice as long #5773
When using `urllib3` directly, the behavior is the same:

```python
>>> import urllib3, time
>>> http = urllib3.PoolManager()
>>> try:
...     start = time.time()
...     http.request('GET', 'http://google.com:81', timeout=1)
... except urllib3.exceptions.MaxRetryError:
...     print(time.time() - start)
...
8.030268907546997
>>> try:
...     start = time.time()
...     http.request('GET', 'http://google.com:81', timeout=2)
... except urllib3.exceptions.MaxRetryError:
...     print(time.time() - start)
...
16.021981477737427
>>> try:
...     start = time.time()
...     http.request('GET', 'http://google.com:81', timeout=3)
... except urllib3.exceptions.MaxRetryError:
...     print(time.time() - start)
...
24.041129112243652
```
urllib3's documentation on retrying requests says that 3 retries = 4 connection attempts. Taking that into account still leads to the same issue: each attempt takes twice as long to time out.
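To make the arithmetic explicit, here is a quick sketch; the retry count of 3 is urllib3's documented default, assumed here:

```python
# Expected vs. measured times for the session above, assuming urllib3's
# default of 3 retries (so 4 connection attempts per request).
attempts = 1 + 3  # initial attempt + retries
for timeout in (1, 2, 3):
    expected = attempts * timeout      # what the docs imply: 4, 8, 12 seconds
    measured = attempts * timeout * 2  # observed above: 8, 16, 24 seconds
    print(f"timeout={timeout}s -> expected ~{expected}s, measured ~{measured}s")
```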
And, regarding the last comment on duplicate issue #5760 by @sigmavirus24:

> I'm not asking for a wall-clock timeout, nor a total request timeout. I'm asking for a fix in the connection timeout. That doubled time is a bug.

True, and my tests show that connection timeouts are not behaving as the documentation says they should. Where in the documentation is this doubled-time behavior explained?
I think I've found the culprit: IPv6! It seems requests/urllib3 automatically tries to connect using both IPv4 and IPv6, and that accounts for the doubled time. I'll do some more tests to properly isolate the problem, as it seems requests tries IPv6 even when it's not available, raising a …
I'm not sure that issue is related to this one. I'm not experiencing connections that are slower/faster depending on IP family; I'm experiencing an exactly doubled timeout due to (trying to) connect using both IPv4 and IPv6, and failing in both families. Is this "if IPv4 fails, retry using IPv6" a known/expected behavior? Is it documented anywhere?
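For context, this is roughly the mechanism in the standard library: `socket.create_connection()` loops over every address record returned by `getaddrinfo()` and gives each candidate the full timeout. A minimal sketch of that loop, using the host and port from the tests above:

```python
import socket
import time

# Standalone demo; host/port taken from the tests above.
host, port, timeout = 'google.com', 81, 2

for family, _, _, _, sockaddr in socket.getaddrinfo(host, port, type=socket.SOCK_STREAM):
    start = time.time()
    sock = socket.socket(family, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    try:
        sock.connect(sockaddr)  # blocks for up to `timeout` seconds per address
        break                   # success: stop trying further records
    except OSError:
        print('{}: gave up after {:.1f}s'.format(sockaddr[0], time.time() - start))
    finally:
        sock.close()
# With one IPv6 and one IPv4 record returned, the total wait is ~2x the timeout.
```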
After more tests, the issue really is the dual IPv4/IPv6 connection attempts. Using the workaround proposed in a Stack Overflow answer to force either IPv4 or IPv6 only, the timeout behaves as expected:

```python
# Monkey-patch urllib3 to force IPv4 connections only.
# Adapted from https://stackoverflow.com/a/46972341/624066
import socket
import urllib3.util.connection

def allowed_gai_family():
    return socket.AF_INET

import urllib3
import requests
import time, os, sys

# Using a known URL to test connection timeout
URL = 'http://google.com:81'
http = urllib3.PoolManager()

def test_urllib3_timeout(timeout, url=URL):
    start = time.time()
    try:
        http.request('GET', url, timeout=timeout, retries=0)
        print("OK!")
    except urllib3.exceptions.MaxRetryError:
        print('{}: {:.1f}'.format(timeout, time.time() - start))

def test_requests_timeout(timeout, url=URL):
    start = time.time()
    try:
        requests.get(url, timeout=timeout)
        print("OK!")  # will never reach this...
    except requests.ConnectTimeout:  # any other exception will bubble out
        print('{}: {:.1f}'.format(timeout, time.time() - start))

def test_timeouts():
    print("\nUrllib3")
    for i in range(1, 6):
        test_urllib3_timeout(i)
    print("\nRequests")
    for i in range(1, 6):
        test_requests_timeout((i, 1))

print("BEFORE PATCH:")
test_timeouts()
urllib3.util.connection.allowed_gai_family = allowed_gai_family
print("\nAFTER PATCH:")
test_timeouts()
```

Results:
Still, I believe at least
@MestreLion I would accept a PR adding an explanation of what can go wrong with timeouts like this.
Just popping in here to say thanks for digging into this. This would explain my double-timeout question on the other issue I opened, too. I just tested by forcing IPv4 and the timeout is no longer doubled.
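For anyone who wants the workaround without a permanent global patch, here is a scoped variant (my own sketch, using the same `urllib3.util.connection.allowed_gai_family` hook the monkey-patch above replaces):

```python
import contextlib
import socket

import urllib3.util.connection as u3c

@contextlib.contextmanager
def ipv4_only():
    # Temporarily replace the hook urllib3 consults to pick address families,
    # restoring the original even if the body raises.
    original = u3c.allowed_gai_family
    u3c.allowed_gai_family = lambda: socket.AF_INET
    try:
        yield
    finally:
        u3c.allowed_gai_family = original

# Usage: requests made inside the block resolve IPv4 addresses only.
# with ipv4_only():
#     requests.get('http://google.com:81', timeout=(4, 1))
```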
Referenced commit …t-note: Add note on connection timeout being larger than specified. Fix #5773
I'm aware that several issues related to timeouts were opened (and closed) before, so I'm trying to narrow this report down to a very specific scope: connection timeout is behaving in a consistently wrong way: it times out at precisely twice the requested time.

The results below are so consistent that we must acknowledge something is going on here. I beg you not to dismiss this report before taking a look at it!
What this report is not about:

Total/wall timeout: That would be a nice feature, but I'm fully aware it is currently outside the scope of Requests. I'm focusing on connection timeout only.

Read timeout: All my tests use http://google.com:81, which fails to even connect. There's no read involved: the server exists but never responds, not even to refuse the connection. No data is ever transmitted, and no HTTP connection is ever established. This is not about `ReadTimeoutError`, this is about `ConnectTimeoutError`.

Accurate timings / network fluctuations: I'm not asking for millisecond precision. I don't even care about whole-second imprecision. But, surprisingly, `requests` is being incredibly accurate... to twice the time.
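To illustrate the distinction, the tuple timeout form in `requests` sets the two phases independently, and each phase raises its own exception (a sketch using the values from this report):

```python
import requests

try:
    requests.get('http://google.com:81', timeout=(4, 1))  # (connect, read)
except requests.ConnectTimeout:
    print('connect phase timed out')  # the exception this report is about
except requests.ReadTimeout:
    print('read phase timed out')     # never raised here: no data ever flows
```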
Expected Result

`requests.get('http://google.com:81', timeout=(4, 1))` should take approximately 4 seconds to time out.

Actual Result

It consistently takes about 8.0 seconds to raise `requests.ConnectTimeout`. It always takes twice the time, for timeouts ranging from 1 to 100. The exception message clearly says at the end: "Connection to google.com timed out. (connect timeout=4)", a very distinct message from read timeouts.

Reproduction Steps
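A minimal script along these lines reproduces the numbers (my reconstruction, based on the call shown under Expected Result):

```python
import time

import requests

start = time.time()
try:
    requests.get('http://google.com:81', timeout=(4, 1))
except requests.ConnectTimeout:
    print('ConnectTimeout after {:.1f}s'.format(time.time() - start))
# On a dual-stack system this prints ~8.0s, twice the 4-second connect timeout.
```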
Results:
System Information
It seems there is a single, "hidden" connection retry, performed by either `requests` or `urllib3`, somewhere along the line. It has been reported by other users on other platforms too.
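One quick way to check whether the "hidden retry" is really a second address family rather than an actual retry (a sketch):

```python
import socket

# For dual-stack hosts, getaddrinfo() typically returns one IPv6 and one
# IPv4 record; the connection logic then tries them sequentially, granting
# each the full connect timeout. Two records means doubled time, not a retry.
records = socket.getaddrinfo('google.com', 81, type=socket.SOCK_STREAM)
print([rec[4][0] for rec in records])
```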