I don't understand this at all: the test opens a `SocketListener`, then uses `open_stream_to_socket_listener` to connect to it. Somehow the connection attempt times out (at the OS level -- notice that pytest says the test suite took almost 90 seconds). This should not be possible: for the connection attempt to fail, either the socket wasn't bound yet, or it was already closed.

It can't be that the socket isn't bound yet, because we have an address to connect to (`('::1', 49734, 0, 0)` in that log), and we get that from the open socket. I guess it could be bound but not listening? But `open_tcp_listeners` always does both before creating a `SocketListener` object, so that doesn't make sense.

Could the socket have been closed already? The test spawns a call to `serve_tcp` in the nursery fixture, and doesn't do anything to cancel it. So it should still be running, unless there was some kind of error, or the fixture was cancelled for some reason. But I can't think of any reason it would be cancelled except for an error – we do call `nursery.cancel_scope.cancel()`, but not until after the test function has finished, so that doesn't apply here. And if there were an error, then it should be showing up in the test log!
So... I'm baffled.
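For reference, the pattern under discussion looks roughly like this. This is a hedged sketch, not the actual test: the handler and test names are illustrative, and only the trio/pytest-trio APIs (`serve_tcp`, `open_stream_to_socket_listener`, the `nursery` fixture) come from the report above.

```python
import functools

import pytest
import trio
from trio.testing import open_stream_to_socket_listener

async def do_nothing_handler(stream):
    # Accept the connection and return; enough for a connect test.
    pass

@pytest.mark.trio
async def test_connect(nursery):
    # nursery.start() only returns once serve_tcp has bound the socket
    # and called listen(), handing back the list of SocketListeners.
    listeners = await nursery.start(
        functools.partial(trio.serve_tcp, do_nothing_handler, 0)
    )
    # So by this point the listener should accept connections
    # immediately -- a timeout here is what makes the failure baffling.
    stream = await open_stream_to_socket_listener(listeners[0])
    await stream.aclose()
```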
It didn't appear when I ran the tests a second time.
This failure is worrisome, because this test does exactly what we recommend users do in order to avoid this kind of failure :-(.
Perhaps there is a bug in async fixtures that causes the nursery never to be cancelled when the test scope exits?
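For context, the `nursery` fixture is expected to behave roughly like this (a minimal sketch, not pytest-trio's actual source):

```python
import pytest
import trio

@pytest.fixture
async def nursery():
    async with trio.open_nursery() as nursery:
        # The nursery stays open for the whole test...
        yield nursery
        # ...and is only cancelled in teardown, after the test body
        # returns. If teardown never runs (the hypothesized bug),
        # background tasks like serve_tcp linger forever.
        nursery.cancel_scope.cancel()
```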
I've been observing mysterious hangs in tests, and I happen to be on OS X.

The one I'm banging my head against now involves a custom async fixture plus the `nursery` fixture. The code in the test function runs fine and the end of the function is reached, yet pytest hangs on the test forever. If I unroll all the async fixtures into equivalent code within the test function, everything is fine.
I've poked at it and observed further oddness (the setup is sketched below):

- adding `await sleep(.1)` at the end of the test allows it to pass (but `sleep(0)` still produces the hang)
- adding `assert False` at the end of the test still hangs! I can only assume the exception is being deferred by the pytest-trio machinery, and the hang happens before the exception can surface.
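A minimal sketch of the shape of the failing setup, under the assumption that the custom fixture just parks background work in the nursery (the names and the background task are hypothetical, not the reporter's code):

```python
import pytest
import trio

@pytest.fixture
async def server(nursery):
    # Custom async fixture layered on the pytest-trio nursery fixture:
    # it starts a background task and yields to the test.
    nursery.start_soon(trio.sleep_forever)
    yield "server"

@pytest.mark.trio
async def test_hangs(server):
    # The body runs to completion, yet pytest hangs on this test.
    assert server == "server"
    # Observed oddities from the list above:
    #   await trio.sleep(0.1)  # adding this makes the test pass
    #   await trio.sleep(0)    # this still hangs
    #   assert False           # even a failure hangs before surfacing
```

Unrolling the fixtures into equivalent code inside the test body (an `async with trio.open_nursery()` around the same logic) reportedly avoids the hang.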
trio-websocket also has an open mystery involving hanging tests; in that case it's specific to Python 3.5 / Linux / Travis. It would be interesting to check whether it's the same failure mode.
See here: https://travis-ci.org/python-trio/pytest-trio/jobs/406738301