Garbage collector cleans up pool during creation in ASGI server #878
Comments
@MatthewScholefield This is something I've run into as well; it was resolved by acquiring explicit connections from the pool during the app's lifetime, then explicitly closing the pool on app shutdown.
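A minimal sketch of that lifecycle. `Pool` here is a hypothetical stand-in for illustration; a real app would use aioredis's pool and its `close()`/`wait_closed()` methods:

```python
import asyncio

class Pool:
    """Stand-in for an aioredis connection pool (hypothetical, for illustration)."""
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True
    async def wait_closed(self):
        await asyncio.sleep(0)

async def app_lifetime():
    pool = Pool()       # created on startup; keep a strong reference for the app's lifetime
    try:
        pass            # ... serve requests, acquiring connections from the pool ...
    finally:
        pool.close()    # explicit shutdown instead of leaving it to the garbage collector
        await pool.wait_closed()
    return pool.closed

print(asyncio.run(app_lifetime()))  # True
```

The point is that the pool's lifetime is tied to the app's, not to whatever the GC happens to do.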
@MatthewScholefield We also got an exception in the Sanic backend.
@seandstewart Is there any example code showing how you resolved it?
I'm getting something similar to this error using FastAPI with gunicorn and uvicorn workers (which use uvloop). It's raised at random.
@waketzheng Try creating a pool instead and acquiring connections from the pool. You can use the …

@ndavydovdev I think that could just be because you're not explicitly closing the pool on shutdown.
@Andrew-Chen-Wang My app keeps working while these errors are raised, and my pool is a singleton object that is only closed when my GAE instance goes down, which can happen 30-50 times per day. But now I'm getting about 1500 errors like that per day.
@ndavydovdev Are you using a framework? By "shutdown" I actually meant one request/response cycle. Even though it's a singleton object, I know some Python frameworks running under workers like gunicorn will close a thread at the end of a request. That means your singleton object is being destroyed but your connection isn't. Try explicitly closing your connection before the end of each request and see if the errors stop. Singleton objects are useful when you need to use a pool multiple times or even concurrently, but something like Django destroys all its Python objects at the end of a request. (You can take a look at Django's cache handler if you're interested; they have a signal that closes all connections at the end of a request.)
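One way to implement "close the connection before the end of each request", sketched here with a hypothetical stand-in pool (a real app would acquire and release aioredis connections instead):

```python
import asyncio
from contextlib import asynccontextmanager

class Pool:
    """Stand-in for a connection pool (hypothetical, for illustration)."""
    def __init__(self):
        self.in_use = 0
    async def acquire(self):
        self.in_use += 1
        return object()
    async def release(self, conn):
        self.in_use -= 1

@asynccontextmanager
async def request_scope(pool):
    """Acquire a connection for one request and always release it before the request ends."""
    conn = await pool.acquire()
    try:
        yield conn
    finally:
        await pool.release(conn)  # released even if the handler raises

async def handle_request(pool):
    async with request_scope(pool) as conn:
        return "handled"

pool = Pool()
print(asyncio.run(handle_request(pool)), pool.in_use)  # handled 0
```

The pool singleton survives across requests, but each request's connection is scoped to that request.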
@Andrew-Chen-Wang Thanks for your advice! I'm calling …
@ndavydovdev Hi, did you actually get it to work? I'm closing all my connections doing the …
@leoswaldo Hi, no, I didn't, but I see that my asyncio tasks which were scheduled by …
@leoswaldo Also, I'm sure that my FastAPI app under a gunicorn server with uvicorn workers isn't restarting at the same time the errors appear in the logs.
I've just realized this is actually a bug in uvicorn and have created a PR as shown above to fix it. The problematic pattern was:

```python
def on_startup():
    asyncio.create_task(some_function())
```

Since the task wasn't assigned to anything, the entire partially executed coroutine could be garbage collected. So, TL;DR:
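The usual fix for this class of bug is to keep a strong reference to every task you create. A sketch of that pattern (`some_function` is a placeholder, not code from this thread):

```python
import asyncio

# Module-level set holds strong references so pending tasks can't be collected.
background_tasks = set()

async def some_function():
    await asyncio.sleep(0)
    return "done"

def on_startup():
    task = asyncio.create_task(some_function())
    background_tasks.add(task)                        # strong reference: GC can't reclaim it
    task.add_done_callback(background_tasks.discard)  # drop the reference once it finishes
    return task

async def main():
    return await on_startup()

print(asyncio.run(main()))  # done
```

Without the `background_tasks.add(...)`, the event loop only holds a weak reference to the task, so a garbage-collection pass at the wrong moment can destroy it mid-flight.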
Would you be willing to test with the latest master? #891 was just merged and is a complete overhaul of the library. I'd be curious to see if the new implementation helps with this issue... |
Actually, the original issue was determined to be a problem with other libraries, so I'll close this. If anyone else here finds that their issue is not related to external code, feel free to create a new issue. Note that for anyone still experiencing this problem and trying to track down the root cause, the method described in this PR is a reliable way to trigger it.
I've been tracking down an exception in my FastAPI backend that occurred roughly once every 20 startups. This led me down a huge rabbit hole of debugging that ended up uncovering the following error when using aioredis with an ASGI server:

`example.py`
When running this with `uvicorn example:app`, it seems like everything works (the app starts up correctly), but if we force garbage collection on a specific line within the event loop, we consistently encounter the following error. While a bit hacky, we can force this garbage collection in the event loop as follows:

1. Open `/usr/lib/python3.*/asyncio/base_events.py` and, within `def _run_once`, near the bottom immediately inside the `for i in range(ntodo):` loop, add `import gc; gc.collect()`.
2. Make uvicorn use the `asyncio` event loop (so that it runs this modified code) by starting it with `uvicorn example:app --loop asyncio`.
After doing this, we see the above error every single startup.
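For anyone who wants a similar repro without editing the stdlib, a rough approximation is to reschedule `gc.collect()` on every loop iteration from inside the program itself. This is an approximation of, not identical to, collecting inside `_run_once`:

```python
import asyncio
import gc

def aggressive_gc(loop):
    gc.collect()                         # collect on this loop iteration...
    loop.call_soon(aggressive_gc, loop)  # ...and reschedule ourselves for the next one

async def main():
    loop = asyncio.get_running_loop()
    loop.call_soon(aggressive_gc, loop)
    await asyncio.sleep(0.05)            # give startup code time to run under GC pressure
    return "finished under aggressive gc"

print(asyncio.run(main()))
```

Running your startup code under this kind of pressure makes "object with no strong reference gets collected mid-await" bugs fire far more reliably.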
Notes

- The error also occurs without the `gc.collect()` modification if we write `await create_redis_pool(...)` instead of `loop = await create_redis_pool(...)`. I think this might be expected, though, because we are awaiting an rvalue.
- The error doesn't seem to occur under hypercorn. I'm hesitant to say it's an error with hypercorn, however, because hypercorn isn't doing anything but calling the handlers. Perhaps hypercorn holds on to some extra references, which is why the error doesn't happen there?
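The "awaiting an rvalue" point is ordinary reference semantics: if nothing holds a strong reference to an object, Python is free to reclaim it. A minimal, self-contained demonstration, using a plain class as a hypothetical stand-in for the pool:

```python
import gc
import weakref

class Pool:
    """Stand-in for the object returned by create_redis_pool (hypothetical)."""
    pass

p = Pool()               # strong reference keeps the object alive
alive = weakref.ref(p)
assert alive() is not None

del p                    # drop the only strong reference
gc.collect()             # in CPython, refcounting alone would already have freed it
print(alive() is None)   # True: the object was reclaimed
```

A bare `await create_redis_pool(...)` leaves the pool in exactly the `del p` situation once the statement completes, so a collection at the wrong moment can tear it down mid-creation.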
Does anyone have any insight on this (specifically, `RedisConnection._read_data() running at .../site-packages/aioredis/connection.py:186...`)? According to this SO post, there can be some weirdness when awaiting futures without hard references. If it seems to be an issue with Uvicorn, I can close this and create it there.