multiprocessing.Pool(64) throws on Windows #89240
Comments
This is a similar issue to the previous bpo-26903.
The argument-less instantiation also fails, which is worse.
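For context, a minimal reproduction sketch (behavior as described in this report; the exact failure mode depends on the machine's core count and Python version):

```python
# Minimal reproduction sketch based on this report: on Windows, asking for 64
# workers exceeds the WaitForMultipleObjects handle budget, and on machines
# with more than ~61 logical cores even the default, argument-less Pool()
# hits the same limit.
import multiprocessing

if __name__ == "__main__":
    multiprocessing.Pool(64)    # throws on Windows (this issue)
    # multiprocessing.Pool()    # likewise fails when os.cpu_count() is large
```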
See bpo-26903 for a similar problem in concurrent.futures.ProcessPoolExecutor. It was resolved by adding a limit constant, _MAX_WINDOWS_WORKERS == 61. WaitForMultipleObjects() can wait on up to 64 object handles, but in this case 3 slots are already taken. The pool wait includes two events for its output and change-notifier queues (named pipes), plus the _winapi module always reserves a slot for the SIGINT event, even though this event is only used by waits on the main thread.

To avoid the need to limit the pool size, connection._exhaustive_wait() could be modified to combine simultaneous waits on up to 63 threads, for which each thread exhaustively populates a list of up to 64 signaled objects. I wouldn't want to modify _winapi.WaitForMultipleObjects, but the exhaustive wait should still be implemented in C, probably in the _multiprocessing extension module. A benefit of implementing _exhaustive_wait() in C is lightweight thread creation, directly with CreateThread() and a relatively small stack commit size.
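A rough illustration of the batched-wait idea described above, written in Python rather than C. This is a hedged sketch, not the CPython patch: `batched_exhaustive_wait`, `_wait_chunk`, and `_CHUNK` are made-up names, and a real implementation would also have to handle WAIT_ABANDONED_0 results and the reserved SIGINT slot.

```python
# Split the handle list into chunks that stay under the 64-handle limit of
# WaitForMultipleObjects, wait on each chunk from its own thread, and merge
# whatever each chunk reports as signaled.
import threading
import _winapi

_CHUNK = 63  # stay below the 64-handle limit of WaitForMultipleObjects

def _wait_chunk(handles, timeout_ms, out, lock):
    # Exhaustively collect the signaled handles within one chunk: after the
    # first wait returns, keep polling (timeout 0) the remaining handles.
    remaining = list(handles)
    while remaining:
        res = _winapi.WaitForMultipleObjects(remaining, False, timeout_ms)
        if res == _winapi.WAIT_TIMEOUT:
            break
        idx = res - _winapi.WAIT_OBJECT_0
        with lock:
            out.append(remaining[idx])
        remaining = remaining[idx + 1:]
        timeout_ms = 0

def batched_exhaustive_wait(handles, timeout_ms):
    """Return all currently signaled handles, even when len(handles) > 63."""
    handles = list(handles)
    out, lock, threads = [], threading.Lock(), []
    for i in range(0, len(handles), _CHUNK):
        t = threading.Thread(target=_wait_chunk,
                             args=(handles[i:i + _CHUNK], timeout_ms, out, lock))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
    return out
```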
…x parallel test suite on Windows.
Note that the bug title is not accurate: on Windows, calling multiprocessing.Pool with too many workers can hang rather than throw. E.g. the script

    import multiprocessing
    a = multiprocessing.Pool(62)

never returns back to the OS on Windows when run. What happens on macOS and Linux if one attempts to create a multiprocessing pool with 100, 1000 or 10000 processes? I.e. what is the failure behavior on non-Windows platforms when too many workers are requested? I wonder if, instead of throwing, it would make sense to silently cap the number of workers in a pool to the max limit. Otherwise, how will users know what the limit currently is so that creating a Pool will not throw? E.g. should there be a function call to ask for it?
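Until this is addressed in CPython, a user-side workaround along the lines suggested above is to cap the pool size on Windows. A small sketch: `safe_pool_size` is a hypothetical helper, not a stdlib API, and 61 mirrors the _MAX_WINDOWS_WORKERS constant mentioned earlier.

```python
# Clamp the requested pool size to 61 on Windows (64 wait slots minus the
# 3 reserved handles described above) instead of letting Pool fail or hang.
import os
import sys
import multiprocessing

def safe_pool_size(requested=None):
    requested = requested or os.cpu_count() or 1
    if sys.platform == "win32":
        return min(requested, 61)  # hypothetical cap, not a documented limit API
    return requested

if __name__ == "__main__":
    with multiprocessing.Pool(safe_pool_size()) as pool:
        print(pool.map(abs, range(-5, 5)))
```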
Due to python/cpython#89240, any use of mp.Pool with a number of workers greater than 60 fails. This means that, by using cpu_count(), any system with more than 60 logical cores will crash when attempting to run. Solve this by adding a flag that lets users with that many cores limit the number of workers.
python/cpython#89240: Any use of mp.Pool with a number of workers greater than 61 fails on Windows machines. A twofold solution is introduced:
- add an option for users to explicitly change the number of workers through -w/--workers, similar to the solution used in black
- add a limitation in the code for Windows machines
Co-Authored-By: Blank Spruce <32396809+BlankSpruce@users.noreply.github.com>
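An illustrative sketch of the -w/--workers approach described in that PR; the names, the default, and the cap of 60 here are assumptions, not code taken from gersemi or black.

```python
# Expose the worker count as a CLI option and clamp it on Windows before the
# pool is created.
import argparse
import multiprocessing
import os
import sys

WINDOWS_WORKER_CAP = 60  # limit referenced in the PR description above

def work(x):
    return x * x

def main(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "-w", "--workers", type=int, default=os.cpu_count() or 1,
        help="number of worker processes",
    )
    args = parser.parse_args(argv)
    workers = args.workers
    if sys.platform == "win32":
        workers = min(workers, WINDOWS_WORKER_CAP)
    with multiprocessing.Pool(workers) as pool:
        print(pool.map(work, range(8)))

if __name__ == "__main__":
    main()
```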
GH-107873) We add _winapi.BatchedWaitForMultipleObjects to wait for larger numbers of handles. This is an internal module, hence undocumented, and should be used with caution. Check the docstring for info before using BatchedWaitForMultipleObjects.