asyncio.Server.wait_closed() appears to hang indefinitely in 3.12.0a7 #104344
Why aren't you closing the writer in the handler?
For Python 3.6 through 3.11 you don't have to close the writer to stop the server: you can accept a single peer, stop the server, and continue communicating with the peer.
So something like this?

```python
#!/usr/bin/env python3

import asyncio


async def cause_problems():
    accepted = asyncio.Queue()

    async def _incoming(reader, writer):
        print("entered _incoming")
        print("signaling outer context; we accepted a client")
        await accepted.put((reader, writer))
        print("left _incoming")

    print("starting server")
    server = await asyncio.start_unix_server(
        _incoming,
        path="problems.sock",
        backlog=1,
    )
    print("created server; waiting on a client")
    reader, writer = await accepted.get()
    print("got a client")

    print("closing server")
    server.close()
    print("issued close()")
    await server.wait_closed()
    writer.write(b"finished waiting on server to close\n")


if __name__ == '__main__':
    asyncio.run(cause_problems())
```

Indeed, the docs don't say that you have to.
cc @gvanrossum: bisect points to 5d09d11
Looks like this was done on purpose for GH-79033, though.
So I think indeed that the better semantics are for `wait_closed()` to also wait for the connections to be closed.

I'm a little surprised that the request handler (`_incoming` in the example) never closes its writer.

In a production version of @jnsnow's example, wouldn't it be leaking sockets if the handler never closed its writer?
(Unless there's a better argument why GH-79033 is wrong, we should just document things better.)
In the production code, we save the reader/writer for communicating with the peer. The intent of the code as it stands in the actual library is: "Accept precisely one peer, stop accepting new peers, then communicate via reader/writer." (My example in the report was just the shortest reproducer I could distill.) Presumably we don't leak a socket because we close the reader/writer on disconnect.
I agree that using `wait_closed()` this way is unusual. On the other hand, the usage is exactly what you would expect when reading the documentation for `wait_closed()`. What about deprecating it?
Maybe, but looking at the implementation of `wait_closed()`…
I understand that, but as users of Python are we supposed to read the documentation or the implementation? The documentation for `wait_closed()` says "Wait until the `close()` method completes." IMHO the only possible reading of the documentation is "you need `wait_closed()` to be sure the server has actually closed." It was indeed a bug that `wait_closed()` returned immediately.
No, I think you misread the documentation.
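The semantics under discussion can be sketched in a small, self-contained example (my own code, not from the thread; the names `demo` and `handler` are mine, and a TCP socket is used instead of the thread's Unix socket for portability). On 3.12+, `wait_closed()` returns only after `close()` has taken effect *and* every client connection has finished, so the handler must close its writer for the wait to complete:

```python
import asyncio


async def demo() -> str:
    done = asyncio.Event()

    async def handler(reader, writer):
        # Hold the connection open until shutdown is requested,
        # then close the writer so wait_closed() can finish.
        await done.wait()
        writer.close()
        await writer.wait_closed()

    server = await asyncio.start_server(handler, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]

    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    await asyncio.sleep(0.1)  # let the server accept the connection

    server.close()  # stop accepting new connections (synchronous)
    done.set()      # let the handler run to completion

    # On 3.12+ this also waits for the handler's connection to close;
    # on <= 3.11 it effectively returned at once.
    await asyncio.wait_for(server.wait_closed(), timeout=5)

    writer.close()
    return "closed"
```

If the handler never closed its writer (as in the original reproducer), the `wait_for` above would time out on 3.12+.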
I see. OK - that is a bit surprising given how the documentation reads (and what has historically happened) - but if that's the intent, then:
...and I'll just update the lib not to do that anymore. (...While I have your attention, could I ask for someone to take a peek at #90871? It's dubiously related to my efforts to try and accept precisely and only one client using asyncio. Sorry for the hijack, I'm new here. 😩)
Yes, we'll update the docs and "what's new in 3.12". If you want it to happen soon, you could volunteer a docs PR.
Yes, that should be 100% backwards compatible. :-)

PS. I think there is a scenario possible where in 3.11 or before … (Oh, and it looks like the waiters are woken up using …)
(Me)
To the contrary, I forgot about cancel semantics. This was just explained on Discourse.
Python 3.12 has changed its behavior[1] such that wait_closed() doesn't return before all open connections are closed -- and we used to block on that before we even attempted to shut down the individual connections.

[1]: python/cpython#104344
This patch is a backport from https://gitlab.com/qemu-project/python-qemu-qmp/-/commit/e03a3334b6a477beb09b293708632f2c06fe9f61

According to Guido in python/cpython#104344, this call was never meant to wait for the server to shut down - that is handled synchronously - but instead, this waits for all connections to close. Or, it would have, if it wasn't broken since it was introduced. 3.12 fixes the bug, which now causes a hang in our code. The fix is just to remove the wait.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Message-id: 20231006195243.3131140-3-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
(The same patch, as picked up for the stable branch:)

(cherry picked from commit acf8738)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
There is a Python bug with .wait_closed(); just remove this invocation.

* python/cpython#104344
* python/cpython#109538
Semantics for server.wait_closed changed in 3.12, and it was always a no-op in our usage in prior versions. See python/cpython#104344
Next step is for someone interested in docs to make Guido's suggested change.
This is needed for wait_closed to terminate starting from 3.12. Before, wait_closed was a no-op and did not wait for all clients to be closed. See python/cpython#104344
This effectively re-implements asyncio.Server::close_clients, which is only available in Python 3.13. In Python 3.12, asyncio.Server::wait_closed will hang unless these sockets are closed: python/cpython#104344
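One way to emulate `Server.close_clients()` on 3.12 is to track each client writer as it arrives and close them all before awaiting `wait_closed()`. This is a minimal sketch of that idea, not the referenced project's actual code; the `TrackingServer` name and structure are mine:

```python
import asyncio


class TrackingServer:
    """Wraps an asyncio server, remembering each client writer so they can
    all be closed at shutdown (emulating 3.13's Server.close_clients())."""

    def __init__(self):
        self._writers = set()
        self._server = None

    async def start(self, host="127.0.0.1", port=0):
        self._server = await asyncio.start_server(self._on_client, host, port)
        return self._server.sockets[0].getsockname()[1]

    async def _on_client(self, reader, writer):
        self._writers.add(writer)
        try:
            await reader.read()  # serve until the peer disconnects
        finally:
            self._writers.discard(writer)
            writer.close()

    async def shutdown(self):
        self._server.close()
        # close_clients() equivalent: without this, wait_closed() would
        # hang on 3.12 while any client connection remained open.
        for w in list(self._writers):
            w.close()
        await self._server.wait_closed()
```

With this wrapper, `await srv.shutdown()` returns promptly on 3.12 even with clients still connected.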
Bug report
I have some code in the qemu.qmp package that appears to work correctly in Python 3.6 (RIP~) through to Python 3.11. I've started testing against 3.12, and I'm finding that waiting on `asyncio.Server.wait_closed()` is creating a deadlock/hang. Here's a minimal-ish reproducer:
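The fenced reproducer did not survive extraction here; it is the same `cause_problems()` script shown in the discussion above (the `__main__` guard is dropped so the snippet stays importable):

```python
import asyncio


async def cause_problems():
    accepted = asyncio.Queue()

    async def _incoming(reader, writer):
        print("entered _incoming")
        print("signaling outer context; we accepted a client")
        await accepted.put((reader, writer))
        print("left _incoming")

    print("starting server")
    server = await asyncio.start_unix_server(
        _incoming,
        path="problems.sock",
        backlog=1,
    )
    print("created server; waiting on a client")
    reader, writer = await accepted.get()
    print("got a client")

    print("closing server")
    server.close()
    print("issued close()")
    # Hangs on 3.12 while the client connection stays open,
    # because the handler never closes its writer.
    await server.wait_closed()
    writer.write(b"finished waiting on server to close\n")

# Run with: asyncio.run(cause_problems())
```

The exact `socat` command mentioned below also did not survive extraction; a plausible invocation, assuming the `problems.sock` path above, is `socat - UNIX-CONNECT:problems.sock` run from the same directory.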
You can test a simple connection using whatever you'd like; `socat` works to trigger the accept code, e.g. from the same directory that you run the above code, try:

Here's what happens in Python 3.11.3:
And here's 3.12.0a7:
When I issue ^C, we get this traceback:
Your environment
Linux 6.2.12-200.fc37.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Apr 20 23:38:29 UTC 2023