Unhandled BrokenPipeError in asyncio.streams #104340
Ugh. I traced the code for a bit. Maybe someone else wants to take over from here? The question is what to do about it. Maybe this is expected behavior after all? There's a comment in the code:

# Raise connection closing error if any,
# ConnectionResetError otherwise

It would seem that "connection closing error" here might include BrokenPipeError.
Good Lord. Thanks for looking into this. If it makes the job any easier, I'm personally fine with the BrokenPipeError itself; the main reason I raised the issue was this message: "Future exception was never retrieved".

Maybe the best solution would be to retrieve the exception from the unawaited future.
:-)
Yeah. The confusing thing is that it appeared during my debugging explorations that the future was being awaited. But I noticed in passing that the same exception is also being set on a different future.

Let's first determine which of the two futures isn't being awaited. (Do you feel like digging into this yourself a bit? I didn't use anything more sophisticated than print().)
To be honest, I'm very new to asyncio (just started using it last week), so it seems strange to me that the exception wouldn't include information on where it was thrown. Is this really the case?

EDIT: awaiting proc.stdin.wait_closed() handles the error:

```python
async def main():
    proc = await asyncio.create_subprocess_exec("sleep", "999", stdin=asyncio.subprocess.PIPE)
    try:
        for _ in range(10000):
            i = b"www.blacklanternsecurity.com\n"
            proc.stdin.write(i)
        proc.kill()
        await proc.stdin.wait_closed()  # <----
        await proc.stdin.drain()
    except BrokenPipeError:
        print(f"Handled error: {traceback.format_exc()}")
```

The question now is where would be the best place to await that future. My first instinct would be to do something like this in main():

```python
async def main():
    proc = await asyncio.create_subprocess_exec("sleep", "999", stdin=asyncio.subprocess.PIPE)
    for _ in range(10000):
        i = b"www.blacklanternsecurity.com\n"
        proc.stdin.write(i)
    proc.kill()
    exceptions = []  # initialized before the loop so collected errors aren't discarded
    for t in asyncio.as_completed([proc.stdin.wait_closed(), proc.stdin.drain()]):
        try:
            await t
        except Exception as e:
            exceptions.append(e)
    if exceptions:
        raise next(iter(exceptions))
```

This feels a bit icky to me, but then again I'm new to error handling in asyncio. Do you see a better solution?
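A less ad-hoc way to consume every pending awaitable is asyncio.gather() with return_exceptions=True. The sketch below demonstrates the pattern with two plain illustrative coroutines (ok and fails are made-up names, not part of the thread's code), rather than the subprocess pipe:

```python
import asyncio

async def ok():
    return "done"

async def fails():
    raise BrokenPipeError("pipe closed")

async def main():
    # return_exceptions=True turns raised exceptions into result values,
    # so every awaitable is consumed and no exception is left sitting
    # unretrieved on a task or future.
    return await asyncio.gather(ok(), fails(), return_exceptions=True)

results = asyncio.run(main())
print(results)
```

The caller can then inspect the results list and re-raise whichever exception it cares about.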
For this you can blame asyncio's technique for making sure that exceptions are generally raised where the user expects them. So if you create a task and the task raises, and then you await the task, the awaiter should get the exception. But at the point where the exception happens, the awaiter isn't even on the stack -- the event loop is. So the exception is caught and stored as an attribute on the Task object, and the logic invoked by await re-raises it in the awaiter.

The general problem with this technique is that if a task or future is never awaited, the exception is just sitting there until the object is GC'ed; at that point a finalizer runs that logs the exception preceded by an extra message ("Future exception was never retrieved"), which hopefully gives the user a hint about some task or future they should have awaited but didn't. (It's kind of annoying that you don't get such a warning when the future or task completes normally and is GC'ed without anyone awaiting it -- if we did it that way, these things would probably be more easily debugged, because you'd get the warning for any task that you don't await, rather than only for those tasks that fail. But we didn't choose to do this and it's probably too late to change course -- users would hate us for those warnings about background tasks.)

The problem in our case is that we have two Futures that both get the same exception attached to them, on the assumption that someone is going to wait for each of them. That's not entirely unreasonable, but it means you can't just await one of them; the other still holds an unretrieved copy of the exception.

The problem is then: how on earth would you know to wait for both?

Thoughts? @kumaraditya303 Unless you have an objection against this solution, I'll implement it.
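The store-then-retrieve behavior described above can be seen with a bare Future; this is a minimal sketch of the mechanism, not the stream machinery itself:

```python
import asyncio

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    # The exception is not raised here; it is stored on the future.
    # At this point the event loop is on the stack, not the awaiter.
    fut.set_exception(BrokenPipeError("pipe closed"))
    try:
        await fut  # retrieval: the exception is finally raised in the awaiter
    except BrokenPipeError as exc:
        return repr(exc)
    # If nothing ever awaited fut, its finalizer would instead log
    # "Future exception was never retrieved" when the object is GC'ed.

result = asyncio.run(main())
print(result)
```

Dropping the `await fut` line (and the try/except around it) is exactly the situation this issue is about: the exception sits on the future until garbage collection, and the user only sees the late log message.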
Oh, and thanks for the research, @TheTechromancer!
…d pipe stdin (pythonGH-104586) (cherry picked from commit 7fc8e2d) Co-authored-by: Guido van Rossum <guido@python.org>
Closing -- the 3.11 backport will take care of itself.
Bug report
Kind of a weird one here; I've been running into it for a while but just recently figured out how to reproduce it reliably.

Basically, if an async process is killed while a large amount of data remains to be written to its stdin, it fails to throw a ConnectionResetError and instead experiences a BrokenPipeError inside the _drain_helper() method. Because the exception happens inside an internal task, it evades handling by the user.

Minimal reproducible example:
Tested on CPython 3.10.10 on Arch Linux, x86_64
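The reporter's example itself did not survive extraction. As a placeholder, here is a reconstruction based on the code quoted in the comments above (the payload string and the sleep command come from those comments); it is a sketch, not the original script, and assumes a POSIX system with a sleep binary:

```python
import asyncio
import traceback

async def main():
    proc = await asyncio.create_subprocess_exec(
        "sleep", "999", stdin=asyncio.subprocess.PIPE)
    try:
        # Queue far more data than the OS pipe buffer can hold, then
        # kill the reader so the pending writes fail.
        for _ in range(10000):
            proc.stdin.write(b"www.blacklanternsecurity.com\n")
        proc.kill()
        await proc.stdin.drain()
    except (BrokenPipeError, ConnectionResetError):
        print(f"Handled error: {traceback.format_exc()}")
    return await proc.wait()

rc = asyncio.run(main())
```

On an affected interpreter, an additional BrokenPipeError is logged from the internal _drain_helper() task even though the except clause above runs.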
Linked PRs