asyncio: support multiprocessing (support fork) #66285
Comments
On non-Windows platforms, if a user attempts to use asyncio.get_event_loop() in a child process created by multiprocessing.Process using the fork context, and an asyncio event loop is also being used in the main process, the same _UnixSelectorEventLoop object will be used by both processes. This, of course, won't work properly; the child will raise a "RuntimeError: Event loop is running" exception as soon as it tries using the loop object. However, this may or may not actually make it back to the parent: if the parent is expecting to get items from a queue that the child publishes to, rather than yielding from the child immediately, the program will deadlock. Even if the child is yielded from, it may not be immediately obvious why "Event loop is running" was raised, and the behavior is inconsistent with what happens when a method other than os.fork is used to create the child process, since the child gets a new event loop in that case.

So, it'd be better if _UnixDefaultEventLoopPolicy detected that get_event_loop was being called in a child process, and either 1) created a new event loop for the child, or 2) raised an error stating that a new event loop must be set in the child.
I've attached a test script that demonstrates the difference between forked and spawned processes, and a patch that implements option 1 above.
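The attached script isn't reproduced here; the following is a minimal sketch of the behavior it demonstrates, assuming a Python version that predates the fix (the `child` function name and print format are illustrative, not from the original script):

```python
import asyncio
import multiprocessing
import os

def child():
    # Before the fix, get_event_loop() in a fork-context child returns the
    # loop object inherited from the parent rather than creating a new one.
    loop = asyncio.get_event_loop()
    print("child  pid=%d loop=%#x" % (os.getpid(), id(loop)))

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    print("parent pid=%d loop=%#x" % (os.getpid(), id(loop)))
    for method in ("fork", "spawn"):
        p = multiprocessing.get_context(method).Process(target=child)
        p.start()
        p.join()
    # The "fork" child reports the same loop id as the parent, while the
    # "spawn" child gets a brand new loop.
```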
Good point. Asyncio definitely should not share event loops across forked processes. However, I don't like the dependency on multiprocessing (even though it's in the stdlib) -- can't the policy just use os.getpid()? Also, I've got a feeling that maybe the pid should be part of the policy state instead of the loop state? The policy could just reset self._local when the pid doesn't match.
Yep, agreed on both points. The latter suggestion also has the benefit of not requiring any test changes. Here's an updated patch.
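The updated patch itself isn't shown above; a minimal sketch of the pid-check idea being discussed might look like this (the class name is hypothetical, and `_local`/`_Local` are private details of the stdlib base policy, so this is illustrative rather than a drop-in fix):

```python
import asyncio
import os

class ForkAwareEventLoopPolicy(asyncio.DefaultEventLoopPolicy):
    """Hypothetical policy that resets its state in a forked child."""

    def __init__(self):
        super().__init__()
        self._pid = os.getpid()

    def get_event_loop(self):
        if os.getpid() != self._pid:
            # We are in a forked child: discard the thread-local state
            # inherited from the parent so a fresh loop gets created.
            self._local = self._Local()
            self._pid = os.getpid()
        return super().get_event_loop()

# Usage sketch:
# asyncio.set_event_loop_policy(ForkAwareEventLoopPolicy())
```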
I think there should still be a new unittest -- we're adding a behavior we're promising, so we should test it.
See also issue bpo-21998: "asyncio: a new self-pipe should be created in the child process after fork".
I've added a unit test that spawns a new forked process via multiprocessing and verifies that the loop returned by get_event_loop is not the same as the one we have in the parent.
A simple pid check in the policy should be enough.
Hmm, I'm not sure what you mean. What check in the policy would prevent the issue you described in bpo-21998?

```python
import asyncio, os

loop = asyncio.get_event_loop()
pid = os.fork()
if pid:
    print("parent", loop._csock.fileno(), loop._ssock.fileno())
else:
    # The child inherits the parent's self-pipe sockets across the fork,
    # so both processes report the same descriptor numbers.
    print("child", loop._csock.fileno(), loop._ssock.fileno())
```

Output:
Are any other changes needed here? I'm still not completely clear on what Victor meant with his last comment.
This issue looks to be a duplicate of bpo-21998. handle-mp_unix2.patch looks more like a workaround than a real fix. When I write asyncio code, I prefer to pass the loop explicitly, so get_event_loop() should never be called. IMO the methods of the event loop should detect the fork and handle it directly.
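For illustration, the explicit-loop style described here looks something like the sketch below (the function name is made up for the example; note that many `loop=` parameters in asyncio APIs have since been deprecated or removed):

```python
import asyncio

def run_job(loop: asyncio.AbstractEventLoop) -> None:
    # The loop is passed in explicitly; get_event_loop() is never called.
    loop.run_until_complete(asyncio.sleep(0.01))

loop = asyncio.new_event_loop()
try:
    run_job(loop)
finally:
    loop.close()
```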
See also the https://pypi.python.org/pypi/mpworker project
I believe this is now worse due to python/asyncio#452: before, I was able to simply create a new run loop from sub-processes, but now you get the error "Cannot run the event loop while another loop is running". The state of the run loop should not be preserved in sub-processes either.
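The "create a new run loop" workaround this comment refers to presumably looked something like the following sketch (the function name and workload are illustrative):

```python
import asyncio

def child_main() -> None:
    # Replace the loop state inherited from the parent with a fresh loop.
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    try:
        loop.run_until_complete(asyncio.sleep(0.01))
    finally:
        loop.close()
```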
The approach in Dan's patches looks sane to me and I agree the issue needs fixing. Dan, would you like to submit a PR for this? We're using GitHub for patch submissions now.
Mind that the patch is very outdated and we already have these checks in get_event_loop.
It seems there's a check in the top-level get_event_loop() function but not in the DefaultEventLoopPolicy.get_event_loop() method. (also: wow, that stuff is complicated)
Yes, as users aren't supposed to work with policies directly.
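To make the distinction concrete, here is a small sketch of the two code paths being discussed (exact behavior and deprecation warnings vary across Python versions, so treat this as illustrative):

```python
import asyncio

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)

# The module-level helper, which carries the extra checks...
a = asyncio.get_event_loop()

# ...and the policy method, the lower-level path discussed above.
b = asyncio.get_event_loop_policy().get_event_loop()

assert a is b is loop
loop.close()
```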
Yep. Please assign me to review if you submit a PR.
another related issue: https://bugs.python.org/issue33688
Even though I committed a version of Dan's patch to 3.7 and 3.8, I've finally decided to revert it and do it properly instead. We should strive to implement a proper solution, not commit some half-working code. A concrete plan (for Python 3.8 probably):
See #99539. The PR does two things with a fork handler: it stops the child from sharing the parent's event loop, and it resets the child's signal wakeup fd.
I have added two tests: the first tests that the event loop is not shared across forked processes, and the second tests that the wakeup fd is reset.
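The handler itself isn't shown in this thread; a rough sketch of the fork-handler approach (the function name is hypothetical, and `_local`/`_Local` are private stdlib details, so this is an approximation rather than the PR's actual code):

```python
import asyncio
import os
import signal

def _after_fork_in_child() -> None:
    # 1) Drop the event loop state inherited from the parent so that
    #    get_event_loop() creates a fresh loop in the child.
    policy = asyncio.get_event_loop_policy()
    policy._local = type(policy)._Local()
    # 2) Clear the signal wakeup fd set up by the parent's loop.
    signal.set_wakeup_fd(-1)

if hasattr(os, "register_at_fork"):
    os.register_at_fork(after_in_child=_after_fork_in_child)
```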
Is the intention of this PR that after the fork() call the child can use all asyncio functionality and everything should work? Maybe we should discuss what exactly we guarantee after this PR (and perhaps what guarantees already hold -- I presume if the forked child creates a brand new loop everything did work already except for signals?) |
In particular the PR's title seems to claim that everything related to forking and asyncio will be fixed. How much do you yourself believe that claim? |
Everything should work in a forked process just as it normally does without fork; I have added a test for this in the latest commit.
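A sketch of the kind of test being described (names and timings are illustrative; it assumes a platform that supports the fork start method):

```python
import asyncio
import multiprocessing

async def ping() -> str:
    await asyncio.sleep(0.01)
    return "ok"

def child() -> None:
    # After the fix, a forked child can run asyncio normally.
    assert asyncio.run(ping()) == "ok"

if __name__ == "__main__":
    asyncio.run(ping())  # exercise asyncio in the parent first
    p = multiprocessing.get_context("fork").Process(target=child)
    p.start()
    p.join()
    assert p.exitcode == 0
```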
Let's keep this open until the buildbot results come in. Lukasz is bisecting them as we speak.
Linked commits:
- Revert "… md5 hash (python#99745)". This reverts commit 679d963.
- This reverts commit 0c1fbc1.
- (python#99745) Such buildbots (at the time of writing, only "AMD64 RHEL8 FIPS Only Blake2 Builtin Hash 3.x") cannot use multiprocessing with a fork server, so just skip the test there.
Linked PRs
- asyncio … #99539
- … time.sleep from test_fork_signal_handling #99963