I have the following piece of code where I initialize 100 workers and a connection pool of size 10. Each worker loops and tries to acquire a connection and hold it for 200ms.
import asyncio

import asyncpg


async def worker(pool: asyncpg.Pool):
    while True:
        async with pool.acquire(timeout=10):
            # hold the connection for 200ms
            await asyncio.sleep(0.2)


async def main():
    async with asyncpg.pool.create_pool(max_size=10) as pool:
        async with asyncio.TaskGroup() as tg:
            for _ in range(100):
                tg.create_task(worker(pool))


asyncio.run(main())
When running on my laptop, this code reliably crashes with a TimeoutError after a few minutes, meaning that at least one worker failed to acquire a connection within 10 seconds. Given that there are 10 workers per connection and each worker holds a connection for 200ms, I would expect every worker to be able to acquire a connection roughly every 2 seconds.
This looks like starvation to me. Does the pool make any fairness guarantees about the order in which waiters on acquire() are served? Is there anything I could do other than just increasing the max pool size?
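For reference, one workaround I've been considering is gating acquisition behind an asyncio.Queue of tickets, one ticket per pooled connection. This doesn't rely on any ordering guarantee from asyncpg itself; it only relies on asyncio.Queue waking waiting getters in FIFO order. Untested sketch (names like fair_worker are mine):

import asyncio

import asyncpg


async def fair_worker(pool: asyncpg.Pool, tickets: asyncio.Queue):
    while True:
        # Block on the ticket queue first; asyncio.Queue services
        # waiting getters in FIFO order, so workers line up fairly.
        await tickets.get()
        try:
            # Holding a ticket means a pooled connection should be
            # free, so this acquire should return almost immediately.
            async with pool.acquire(timeout=10):
                # hold the connection for 200ms
                await asyncio.sleep(0.2)
        finally:
            # return the ticket once the connection is back in the pool
            tickets.put_nowait(None)


async def main():
    async with asyncpg.pool.create_pool(max_size=10) as pool:
        tickets: asyncio.Queue = asyncio.Queue()
        for _ in range(10):  # one ticket per connection in the pool
            tickets.put_nowait(None)
        async with asyncio.TaskGroup() as tg:
            for _ in range(100):
                tg.create_task(fair_worker(pool, tickets))


asyncio.run(main())

That said, this duplicates the pool's own bookkeeping, so I'd prefer a built-in fairness guarantee if one exists.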