-
Hi all,

While investigating a global performance issue in my aiohttp app (Python 3.10, aiohttp 3.9.3), I noticed that a significant amount of time is spent in the client session `request()` call when concurrent requests are made. This was confirmed by enabling tracing on some requests.

So I developed a little Docker app that performs concurrent requests against an Nginx server. It is available here: https://github.com/hlecnt/aiohttp-load-test-2/tree/master

By default it generates 2 iterations of 500 concurrent requests to the Nginx server. I observed that the first request is only actually sent once all connections to the upstream server have been established, and those connections are established sequentially (confirmed by tcpdump). I have tested on a Google VM and on a WSL instance with the same result. I am wondering whether this sequential creation of TCP connections is expected behavior?

The second point is the total time spent processing the requests: while each request takes less than 1 ms on the Nginx side, it takes about 500 ms on my WSL instance to complete the 500 requests.

Any comments will be appreciated.

Best regards,

Please find attached an archive with 2 tcpdump captures (50 requests sent concurrently versus sequentially).
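For reference, the measurement pattern can be sketched with the standard library alone: a throwaway asyncio echo server stands in for Nginx, and each of the N tasks launched via `asyncio.gather` opens its own TCP connection, mimicking a cold connection pool. The server, the request bytes, and the function names here are illustrative, not taken from the linked repository.

```python
import asyncio
import time

async def handle(reader, writer):
    # Minimal stand-in for the Nginx upstream: read the request line,
    # send a canned response, then close the connection.
    await reader.readline()
    writer.write(b"HTTP/1.0 200 OK\r\n\r\n")
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def one_request(host, port):
    # Each task opens its own TCP connection, as with a cold pool.
    reader, writer = await asyncio.open_connection(host, port)
    writer.write(b"GET / HTTP/1.0\r\n\r\n")
    await writer.drain()
    body = await reader.read()  # read until the server closes
    writer.close()
    await writer.wait_closed()
    return body

async def run_load(n=50):
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    start = time.perf_counter()
    results = await asyncio.gather(
        *(one_request("127.0.0.1", port) for _ in range(n))
    )
    elapsed = time.perf_counter() - start
    server.close()
    await server.wait_closed()
    return results, elapsed

if __name__ == "__main__":
    results, elapsed = asyncio.run(run_load(50))
    print(f"{len(results)} requests in {elapsed * 1000:.1f} ms")
```

Timing the `gather` as a whole, rather than individual requests, is what surfaces the connection-establishment overhead described above.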
-
When you do a `gather`, there is a race to make connections to the remote host. When the server supports keep-alive, performance is frequently better if you either await the requests in sequence (which has less task overhead for small requests) or use a semaphore to reduce task concurrency, since connection reuse becomes more likely.
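The semaphore approach can be sketched as follows. This is a minimal stdlib-only illustration: `bounded_gather` and `fake_fetch` are made-up names, and in a real aiohttp app the awaited coroutine would be a call like `session.get(...)` on a shared `ClientSession`.

```python
import asyncio

async def bounded_gather(coros, limit=20):
    # A semaphore caps how many coroutines run at once, so earlier
    # requests can finish and their keep-alive connections get reused
    # instead of every task racing to open a new connection.
    sem = asyncio.Semaphore(limit)

    async def bounded(coro):
        async with sem:
            return await coro

    return await asyncio.gather(*(bounded(c) for c in coros))

async def fake_fetch(i):
    # Stand-in for an aiohttp request such as: await session.get(url)
    await asyncio.sleep(0.001)
    return i

if __name__ == "__main__":
    results = asyncio.run(
        bounded_gather([fake_fetch(i) for i in range(100)], limit=10)
    )
    print(len(results))
```

`asyncio.gather` preserves input order, so `results` comes back in the same order as the submitted coroutines regardless of completion order.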
Hi,
Just a word before closing the discussion: moving to Python 3.11, plus some optimizations in my code, has fixed the performance issues.
Thanks for your help @bdraco.