Stall when connector hits limit from several connections to same bad hostname #835
Comments
I don't think it is only a bad hostname that leads to this situation. With the following code:
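(The snippet itself was not captured in this scrape; below is a minimal sketch of the kind of script described, written against the modern aiohttp API. The URL list, timeout value, and function names are assumptions based on the discussion.)

```python
# Sketch of the described script (a reconstruction, not the original code).
# URLs, timeout, and names are assumptions inferred from the thread.
import asyncio
import aiohttp

URLS = ["https://yahoo.com:443/", "https://cnn.com:443/"]

async def fetch(session, url):
    try:
        # compress=True is what triggers the behavior discussed below
        async with session.get(url, compress=True,
                               timeout=aiohttp.ClientTimeout(total=10)) as resp:
            print(url, resp.status)
    except asyncio.TimeoutError:
        print(url, "timed out")

async def main():
    async with aiohttp.ClientSession() as session:
        await asyncio.gather(*(fetch(session, u) for u in URLS))

asyncio.run(main())
```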
I see that the request to yahoo receives a timeout exception (caught by the try block), but in the tcpdump trace I see an actual response from yahoo (it responds with a 301).
When I remove cnn from the URL list and also remove the :443 port, I see:
Running curl -vv https://yahoo.com:443/ or curl -vv https://yahoo.com returns a 301 to www.yahoo.com.
@unixsurfer Try removing compress=True from the request since you are not sending a body.
@danielnelson, compress=False did the trick; with compress=True, python ./aiosyncissue835.py still times out. Do you have any explanation for this behavior?
Setting compress=True adds the Content-Encoding and Transfer-Encoding headers, and yahoo takes a long time to respond when these are set; when it does respond, it seems to close the connection. The equivalent curl command would be:
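(The command itself is missing from this scrape; a plausible equivalent, assuming the deflate encoding that aiohttp uses for compress=True, would be something like:)

```
curl -vv https://yahoo.com \
  -H "Transfer-Encoding: chunked" \
  -H "Content-Encoding: deflate"
```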
Without those headers the response comes back promptly; yahoo doesn't like chunked Transfer-Encoding.
Do we know why aiohttp uses chunked Transfer-Encoding when compression is enabled? As far as I know, when both of these headers are set, the web server has to compress the whole payload before it sends the compressed response out in chunks.
The reason for chunking is exactly this: we can start compressing and sending compressed chunks immediately; we do not need to compress the whole blob first.
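To illustrate the streaming idea (a minimal sketch using zlib, not aiohttp's actual implementation):

```python
# Why chunked transfer pairs naturally with on-the-fly compression:
# each compressed piece can be sent as soon as it is produced.
import zlib

def compressed_chunks(payload_parts):
    """Compress an iterable of byte strings piece by piece, yielding
    compressed chunks as soon as they become available."""
    compressor = zlib.compressobj()
    for part in payload_parts:
        chunk = compressor.compress(part)
        if chunk:  # the compressor may buffer small inputs
            yield chunk
    yield compressor.flush()  # emit whatever is still buffered

# Each yielded chunk can be written as one HTTP chunked-encoding frame,
# so transmission starts before the whole payload has been compressed.
for chunk in compressed_chunks([b"hello " * 1000, b"world " * 1000]):
    print(len(chunk))
```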
I think this issue should be closed since the original report is addressed. If we want to turn off chunked transfer with compression, that should be a separate issue.
Confirming that my test case works with the current release (1.1.5). Thanks!
Long story short
When a session's connector has a connection limit, multiple requests for a URL with a bad hostname never complete.
Expected behaviour
Script should run to completion
Actual behaviour
The second connection neither completes nor throws an error, and the script hangs.
Steps to reproduce
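(The reproduction snippet was not captured in this scrape; the following is a sketch of the described scenario against the modern aiohttp API. The hostname, the limit value, and the helper names are assumptions.)

```python
# Sketch of the reported scenario: a connection-limited connector plus
# several concurrent requests to the same unresolvable hostname.
import asyncio
import aiohttp

async def fetch(session, url):
    try:
        async with session.get(url) as resp:
            print(resp.status)
    except aiohttp.ClientConnectorError as exc:
        print("connection failed:", exc)

async def main():
    connector = aiohttp.TCPConnector(limit=1)  # connection limit is key to the stall
    async with aiohttp.ClientSession(connector=connector) as session:
        url = "http://nosuchhost.invalid/"
        # With the bug present, the first request fails but the second one
        # never completes, so gather() hangs here instead of returning.
        await asyncio.gather(fetch(session, url), fetch(session, url))

asyncio.run(main())
```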
Your environment
python 3.4.2 on Debian jessie
aiohttp 0.21.4