Using one process's output as another's input creates challenges with non-blocking status #1707
Huh, that's an interesting case! I'm really not sure what the correct behavior here is, and what makes it tricky is that ...
One option: have a special case where if you pass in an FdStream, trio sets it back to blocking mode and turns the stream invalid. This feels... odd, but also convenient.
...That might also let us simplify the fd creation code a bit.
I'm not sure how best to resolve this. As a workaround, you could use PIPE for both processes and spawn a Trio task to shovel data from one to the other. That's somewhat clumsy, though. Thoughts on a real solution...
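A minimal sketch of that PIPE-plus-shovel workaround, assuming trio.open_process() and placeholder commands ("producer", "consumer") that are not part of the original report:

```python
# Sketch of the workaround: give both processes subprocess.PIPE and
# let a Trio task copy bytes from one to the other.
import subprocess
import trio

async def shovel(src, dst):
    # Copy from src to dst until EOF, then close dst so the
    # downstream process sees end-of-input.
    async with dst:
        while True:
            chunk = await src.receive_some(16384)
            if not chunk:
                break
            await dst.send_all(chunk)

async def main():
    producer = await trio.open_process(["producer"], stdout=subprocess.PIPE)
    consumer = await trio.open_process(["consumer"], stdin=subprocess.PIPE)
    async with trio.open_nursery() as nursery:
        nursery.start_soon(shovel, producer.stdout, consumer.stdin)
        await producer.wait()
        await consumer.wait()

trio.run(main)
```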
@njsmith, thoughts?
Code so others don't have to unpack the zip file — trio_launch_pipe.py:
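A plausible reconstruction rather than the original script, assuming it hands the first process's stdout pipe fd straight to the second process as stdin; the producer command and the fileno() hand-off are assumptions:

```python
# Plausible reconstruction: launch a producer with a trio-managed
# stdout pipe, then hand that pipe's fd to a second process as stdin.
import subprocess
import sys
import trio

async def main():
    producer = await trio.open_process(
        ["seq", "1", "3"], stdout=subprocess.PIPE)
    # producer.stdout wraps the read end of the pipe; trio has set
    # O_NONBLOCK on it, and that flag lives on the open file
    # description, so the child below inherits it on its stdin.
    consumer = await trio.open_process(
        [sys.executable, "desc_stdin.py"],
        stdin=producer.stdout.fileno())
    await producer.wait()
    await consumer.wait()

trio.run(main)
```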
desc_stdin.py:
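Similarly, a plausible sketch of desc_stdin.py, assuming it simply reports whether its stdin arrived with O_NONBLOCK set:

```python
# Plausible sketch: print whether this process's stdin has the
# O_NONBLOCK flag set.
import fcntl
import os
import sys

flags = fcntl.fcntl(sys.stdin.fileno(), fcntl.F_GETFL)
print("stdin O_NONBLOCK:", bool(flags & os.O_NONBLOCK))
```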
Huh, it seems like we independently converged on the same solution. :-) Probably special handling for FdStream arguments is the way to go then!
Portability is an interesting point... on Windows this problem doesn't happen at all, and on Linux, as you note, there's a simple workaround (modulo any exotic environments).
It's not documented but it does seem to work:
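Judging from the follow-up, the idea is handing one end of a socket.socketpair() to the child (the subprocess machinery only needs an object with a fileno()); a hedged sketch, with a placeholder command:

```python
# Hedged sketch of the socketpair idea: the child gets one end of the
# pair (still in its default blocking mode) as stdin, and trio drives
# the other end. "consumer" is a placeholder command.
import socket
import trio

async def main():
    child_end, our_end = socket.socketpair()
    # subprocess only calls fileno() on the object, so a socket works
    # as stdin; child_end is never switched to non-blocking mode.
    proc = await trio.open_process(["consumer"], stdin=child_end)
    child_end.close()  # the child holds its own duplicate of the fd
    stream = trio.SocketStream(trio.socket.from_stdlib_socket(our_end))
    async with stream:
        await stream.send_all(b"some input\n")
    await proc.wait()

trio.run(main)
```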
Following your discussion: using socket.socketpair() instead of a pipe may be a problem for some specific applications (the splice system call may not work, for example).
Right, that's a good workaround for right now, but (1) it's awkward to force users to do that, and (2) if you then try to access ...
I guess one hacky but workable option would be to put the fd back into blocking mode. Some other cases worth at least thinking about: ...
...and we could probably use the same code for #174, now that I think about it.
Does it even make sense to keep the stdin or stdout open in the Python process once it has been used to spawn another process?
We ran into this at $work today. In our situation, explicitly setting the pipe back to blocking mode helped, as would the "trio magically sets the stream back to blocking mode and turns it invalid" solution converged on above. So my vote would be for that approach. Another possible idea would be to add some special constant to ...
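For reference, the "explicitly set the pipe back to blocking mode" part is essentially a pair of fcntl calls on the fd the child will inherit; a sketch, with the fd variable as a placeholder:

```python
# Sketch of the "set the pipe back to blocking mode" workaround: clear
# O_NONBLOCK on the fd the child will inherit. Note that O_NONBLOCK
# lives on the shared open file description, so this also affects
# trio's own use of that fd -- which is why it's a workaround, not a fix.
import fcntl
import os

def make_blocking(fd: int) -> None:
    flags = fcntl.fcntl(fd, fcntl.F_GETFL)
    fcntl.fcntl(fd, fcntl.F_SETFL, flags & ~os.O_NONBLOCK)
```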
When launching piped subprocesses, trio provides the stdin of the second process (at least; I have not checked whether other descriptors are affected) with O_NONBLOCK set, while the shell or Python's subprocess.Popen() does not (on a Linux platform).
See attachment pipe_nonblock.zip for the scripts used.
I was expecting that, by default, all I/O for a new process would be blocking. As a workaround I will have to explicitly clear the O_NONBLOCK flag in my spawned exe, but I believe this can create problems for a variety of applications (which expect blocking I/O by default).
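For comparison, the equivalent stdlib pipeline, which leaves the second process's stdin in the default blocking mode (placeholder producer command; desc_stdin.py as sketched above):

```python
# Same pipeline via subprocess.Popen: the consumer's stdin stays in
# blocking mode, unlike the trio version.
import subprocess
import sys

producer = subprocess.Popen(["seq", "1", "3"], stdout=subprocess.PIPE)
consumer = subprocess.Popen([sys.executable, "desc_stdin.py"],
                            stdin=producer.stdout)
producer.stdout.close()  # hand the read end over to the consumer
consumer.wait()
producer.wait()
```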