Graceful handling of sockets (or whatever) getting closed while in use #36
I fixed some of these issues while working on other things -- not sure if there's any more to do here or not. Definitely still need tests added.

On further thought, behaving better on Unixes is probably doable; something like, having a [...]

If/when this is implemented, we should add checks to the generic stream tests that make sure that if you call a close method on a stream that's blocked inside [...]

Oh look, apparently there is some obscure private API in [...] However, I'm not sure: maybe it only allows keeping the socket object open until a [...]
Still todo:
- full test coverage
- updating the stream layer to match
- is InterruptedByCloseError the best name? Should it inherit OSError?
- Which layers should use which exception?

Fixes python-triogh-36, python-triogh-459
Suppose we have one task happily doing its thing:
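(A minimal sketch of what that might look like, assuming a connected trio socket bound to `sock`; the exact call and buffer size are illustrative:)

```python
async def task1(sock):
    # Blocks in the I/O layer until the peer sends something.
    data = await sock.recv(1024)
    print("got", data)
```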
and simultaneously, another task is a jerk:
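(Again just a sketch with illustrative names -- the other task simply closes the same socket out from under the first one:)

```python
async def task2(sock):
    # Closes the socket while task1 is still blocked in recv().
    sock.close()
```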
It would be nice to handle this gracefully.
How graceful can we get? There is a limit here, which is that (a) it's Python, so we can't actually stop people from closing things if they insist, and (b) the OS APIs we depend on don't necessarily handle this in a helpful way. Specifically, I believe that for epoll and kqueue, if a file descriptor they're watching gets closed, they just silently stop watching it, which in the situation above would mean `task1` blocks forever or until cancelled. (Windows `select` -- or at least the `select.select` wrapper on Windows -- seems to return immediately with the given socket object marked as readable.)

As an extra complication, there are really two cases here: the one where the object gets closed just before we hand it to the IO layer, and the one where it gets closed while in possession of the IO layer.
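The epoll behaviour mentioned above is easy to reproduce with a throwaway stdlib-only snippet (Linux only; `socketpair` and the raw `select.epoll` wrapper stand in for what the I/O layer does -- none of this is trio code):

```python
import select
import socket

# Linux-only illustration: once a registered fd is closed, epoll silently
# forgets it, so nothing waiting on it ever gets woken up.
r, w = socket.socketpair()
ep = select.epoll()
ep.register(r.fileno(), select.EPOLLIN)

w.send(b"x")            # r now has pending data, so it is readable
print(ep.poll(0.1))     # reports (fd, EPOLLIN) while r is still open

r.close()               # the kernel drops r from the interest set
print(ep.poll(0.1))     # [] -- no event, no error; a real task parked here
                        # would block forever or until cancelled

ep.close()
w.close()
```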
And for sockets, one more wrinkle: when a stdlib `socket.socket` object is closed, then its `fileno()` starts returning -1. This is actually kinda convenient, because at least we can't accidentally pass in a valid fd/handle that has since been assigned to a different object.
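That's easy to confirm from a REPL:

```python
import socket

s = socket.socket()
print(s.fileno())   # a real descriptor number, e.g. 3
s.close()
print(s.fileno())   # -1 once the stdlib object has been closed
```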
Some things we could do:

- In our `close` methods, first check with the `IOManager` whether the object is in use, and if so cancel those uses first. (On Windows we can't necessarily cancel immediately, but I guess that's OK b/c on Windows it looks like closing the handle will essentially trigger a cancellation already; it's the other platforms where we have to emulate this.) A rough sketch follows this list.
- In `IOManager` methods that take an object-with-`fileno()`-or-fd-or-handle, make sure to validate the fd/handle while still in the caller's context. I think on epoll/kqueue we're OK right now because the `wait_*` methods immediately register the fd, and on Windows the `register_for_iocp` method is similar. But for Windows `select`, the socket could be invalid and we won't notice until it gets `select`ed on in the select thread. Or it could become invalid on its way to the select thread, or in between calls to `select`... right now I think this will just cause the select loop to blow up.
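A very rough sketch of how the two ideas above could fit together -- `io_manager`, `wait_readable`, and `cancel_waiters` are made-up names for illustration, not trio's actual internal API, and the exception-naming question from the commit message above is left open:

```python
import errno


class ClosedWhileInUse(OSError):
    # Stand-in for whatever the interrupted waiters should see
    # (the InterruptedByCloseError-vs-OSError question is still open).
    pass


class GracefulSocketWrapper:
    """Hypothetical wrapper: close() wakes anyone blocked on the fd first."""

    def __init__(self, stdlib_sock, io_manager):
        self._sock = stdlib_sock
        # io_manager is assumed to expose wait_readable() and cancel_waiters().
        self._io_manager = io_manager

    def _checked_fd(self):
        # Idea 2 in miniature: validate the fd in the caller's context,
        # so a stale -1 never makes it to the select thread.
        fd = self._sock.fileno()
        if fd == -1:
            raise OSError(errno.EBADF, "socket was already closed")
        return fd

    async def wait_readable(self):
        await self._io_manager.wait_readable(self._checked_fd())

    def close(self):
        fd = self._sock.fileno()
        if fd != -1:
            # Idea 1: before the fd goes away, wake every task currently
            # blocked on it so they raise ClosedWhileInUse instead of
            # hanging (epoll/kqueue) or crashing the select loop (Windows).
            self._io_manager.cancel_waiters(fd, ClosedWhileInUse())
        self._sock.close()
```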