Cooperative Parallelism #10443
Comments
Thanks for creating this issue! Since @jmmv specifically asked for it to be created, I am removing the untriaged label and giving it an initial P2 priority.
Just one more addition, since I just closed #11275: if we do this and explicitly tell an action that it should use X threads, we also have to go the other way and ensure the action doesn't use more than X threads (1 in the general case!) when told so.
I don't see this as a requirement, at least not until someone provides a use case. We have a local patch that lets us set concurrency, and we have cases where an action may briefly peak at 4 threads but empirically has a steady state of 2 threads, for example. A multi-threaded process does not usually consume cores in 100% core increments.
Consider: I would like to tell Bazel that "this rule uses N-1 workers, where N is the number of available cores"; I would still like to leave 1-2 slots free, in case they are I/O bound.
Note for comparison: GNU make supports this via its "jobserver": https://www.gnu.org/software/make/manual/html_node/Job-Slots.html
GNU make recently added support for jobservers via named pipes (previously they had to be passed around via file descriptors). Could we have Bazel create named pipes for its own jobserver, and provide some kind of mechanism to expose those pipes to actions (variable substitution?)? Someone made a workaround that literally just runs a jobserver service in the background for rules_foreign_cc's make, but it would be nice if this could work with fully self-contained builds.
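For reference, the jobserver protocol itself is tiny: a client reads one token byte from the pipe for each extra concurrent job and writes it back when the job finishes. A minimal plain-Python sketch of a client, assuming the `fifo:` style auth string that GNU make 4.4+ places in `MAKEFLAGS` (error handling omitted):

```python
import os
import re
import subprocess

def open_jobserver():
    # GNU make 4.4+ advertises the named pipe as
    # "--jobserver-auth=fifo:/path" inside $MAKEFLAGS.
    m = re.search(r"--jobserver-auth=fifo:(\S+)", os.environ.get("MAKEFLAGS", ""))
    return os.open(m.group(1), os.O_RDWR) if m else None

def run_with_slot(fd, cmd):
    # Each client owns one implicit slot; every additional concurrent
    # job must hold a token read from the pipe, returned when done.
    token = os.read(fd, 1)   # blocks until a slot frees up
    try:
        subprocess.run(cmd, check = True)
    finally:
        os.write(fd, token)  # give the slot back
```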
One thing you can do now is provide better estimates of the number of CPUs your jobs will use. @wilwell submitted d7f0724, which allows specifying the expected amount of CPU/RAM depending on the number of inputs. And I have work in progress to use cgroups for sandboxes, which would allow more flexible limits. Neither of those is as powerful as negotiating with the processes, but I want to see a strong need for that power before complicating matters even more.
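As a hedged sketch of what that looks like in a rule: per the Bazel docs, the `resource_set` parameter of `ctx.actions.run` accepts a callback that receives the OS name and the number of inputs. The numbers and the `srcs`/`_tool` attributes below are illustrative, not tuned:

```python
# Illustrative input-scaled resource estimate; the callback shape
# (os string, input count int) follows Bazel's documented
# `resource_set` parameter, but the formula is made up.
def _compile_resource_set(os, inputs_size):
    return {
        "cpu": min(4, 1 + inputs_size // 25),  # roughly one core per 25 inputs
        "memory": 256 + 8 * inputs_size,       # MB; a crude linear guess
    }

def _my_rule_impl(ctx):
    out = ctx.actions.declare_file(ctx.label.name + ".out")
    ctx.actions.run(
        executable = ctx.executable._tool,
        inputs = ctx.files.srcs,
        outputs = [out],
        arguments = [f.path for f in ctx.files.srcs] + [out.path],
        resource_set = _compile_resource_set,
    )
    return [DefaultInfo(files = depset([out]))]
```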
We have a very heavy compute build action that would like to use as many cores as it can, and we have no option to break it down into smaller chunks. Cooperative parallelism would go a long way for us, as right now there is no obvious way to obtain the number of jobs Bazel has access to within a rule context such that we can return it via the resource estimate. We could use a workaround with a repository rule that shells out to detect the core count. In the short term it would be useful if the resource callback could express something like the following:
```python
# This breaks the current API, but it demonstrates what we would
# like to be able to do.
def _resources(default, limit, platform):
    default["cpu"] = _jobs(max = limit["cpu"], min = 1, diff = -2)
    return default
```

The idea for us is that on a machine with 56 cores we could reserve some cores for other, smaller actions to trickle through. Alternatively, a tag could mark all actions of a certain type as exclusive, similar to how tests can be marked with the `exclusive` tag.
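The `_jobs` helper above is hypothetical; under the interpretation that `diff` offsets the available job count while `min` clamps the result, it could be as simple as:

```python
def _jobs(max, min, diff):
    # Hypothetical helper: take all available jobs plus `diff`
    # (e.g. 56 + -2 = 54 on a 56-core machine) and clamp so that
    # at least `min` cores are always requested.
    requested = max + diff
    return min if requested < min else requested
```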
With the resource callback you can specify an expected CPU amount up front, but this doesn't allow any kind of negotiation with the action while it runs.
Description of the problem / feature request:
This is an umbrella issue of problems that arise from using build tools that have their own internal parallelism.
In this Google Groups thread, @jmmv asked to file an issue about this:
A little context: `swiftc` is the Swift compiler driver. It's a non-traditional compiler: it doesn't build one source file at a time, it builds one module of N source files at a time. `swiftc` spawns "swift frontend" invocations, and the number of spawned processes is very often >1.

There are two related problems:

1. An action can use more parallelism than Bazel accounts for (each `swiftc` action quietly runs several frontend sub-processes).
2. An action could productively use more parallelism than it is given.
In the first case, it would be good if the action API could express to Bazel how much parallelism an action uses. This avoids the problem of N Bazel actions each running M sub-actions.
In the second case, it would be good if the action API could express a range of parallelism an action is capable of using. This would really help the performance of bottleneck actions on the critical path. For example, Bazel could see that it's not using its full jobs allotment and donate the extra parallelism to the bottleneck action. We see this as particularly useful at the tail end of builds, where there are fewer targets left to build. The problem shows up even more in incremental builds, where the action graph is often much flatter, even linear.
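Hypothetically, such a range could ride on the same callback shape as today's resource estimates. The `cpu_min`/`cpu_max` keys below are invented, not a real Bazel API:

```python
# Invented keys: the action declares it can run with anywhere from
# 1 to 8 cores, and Bazel would pick a value in that range based on
# how busy the rest of the build is.
def _flexible_resources(os, inputs_size):
    return {
        "cpu_min": 1,
        "cpu_max": 8,
        "memory": 512,
    }
```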
As @allevato pointed out in the Google Groups thread, this would require some way for actions to pass arguments that are known not to affect output, such as a `-j<N>` flag. This would also need to preserve the action cache keys.
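Purely as a hypothetical sketch (no such parameter exists in Bazel today), a rule might mark the parallelism flag as excluded from the cache key, with Bazel substituting the granted CPU count at execution time:

```python
def _swift_module_impl(ctx):
    out = ctx.actions.declare_file(ctx.label.name + ".swiftmodule")
    ctx.actions.run(
        executable = ctx.executable._swiftc,
        inputs = ctx.files.srcs,
        outputs = [out],
        arguments = [f.path for f in ctx.files.srcs] + ["-o", out.path],
        # Hypothetical parameter: appended at execution time with the
        # scheduler-granted value and kept out of the action cache key,
        # so runs with different -j values still share cache hits.
        uncacheable_arguments = ["-j%{assigned_cpus}"],
    )
    return [DefaultInfo(files = depset([out]))]
```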
Feature requests: what underlying problem are you trying to solve with this feature?
This feature allows us to avoid two current problems:
The first issue can happen with any swift module over 25 files. The default batching logic creates one swift frontend for each group of 25 files. A swift module with 100 files will spawn 4 sub-actions, unbeknownst to Bazel.
As mentioned, the second case is something that causes slowdowns for incremental development builds.
Bugs: what's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
If needed, I can make a rules_swift project that demonstrates the issue.
We see the problem in our build by looking at `--experimental_generate_json_trace_profile` output, and by comparing to Xcode's builds, which can sometimes be faster due to its seemingly hard-coded use of `-j8`.

What operating system are you running Bazel on?
macOS
What's the output of `bazel info release`?

release 1.2.0
Have you found anything relevant by searching the web?
As mentioned above, a small amount of discussion happened on Google Groups. I've also posted a general (non-Bazel) question to the Swift Forums.