Triage in #13779 showed that materializing the inputs for nailgunned JVM processes represented up to a 300ms constant factor.
But because those inputs currently include both the `use_nailgun: Digest` and the `input_files: Digest` fields (the `use_nailgun` digest must be a subset of the `input_files` digest), a lot of that work is completely redundant. On top of that, because we have materialized more (unnecessary) stuff into the sandbox, we have more to clean up afterwards.
This hits source analysis processes for Java/Scala particularly hard: in some cases, it represented ~500ms of total overhead on ~150ms processes.
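As a toy illustration of the redundancy (this is not engine code: the file names and the set-based modelling below are purely hypothetical), if the per-run input digest is required to be a superset of the nailgun digest, then every file in the nailgun digest gets written into every sandbox and deleted again afterwards, even though it never changes between runs:

```rust
use std::collections::HashSet;

fn main() {
    // Hypothetical contents of the nailgun server digest (`use_nailgun`).
    let use_nailgun: HashSet<&str> =
        HashSet::from(["jdk/bin/java", "coursier/scala-compiler.jar"]);

    // The per-run digest (`input_files`) must be a superset of `use_nailgun`,
    // so it carries the server files alongside the actual sources.
    let input_files: HashSet<&str> = use_nailgun
        .iter()
        .copied()
        .chain(["src/Foo.scala", "src/Bar.scala"])
        .collect();

    // Everything in the intersection is materialized into the sandbox on
    // every run, and cleaned up again afterwards, despite never changing.
    let redundant: Vec<&&str> = input_files.intersection(&use_nailgun).collect();
    println!("re-materialized per run: {:?}", redundant);
}
```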
As discussed in #13787, the `input_files` digest must currently include the `use_nailgun` digest, and that means that the input files for the server are materialized for every run.
* Add an `InputDigests` struct to encapsulate managing the collection of input digests for a `Process` (which will soon also include the immutable digests from #12716), and split the `use_nailgun` digest from the `input_files` digest (see the sketch after this list).
* Move nailgun spawning (which uses `std::process` and `std::fs`, and so is synchronous) onto the `Executor` (sketched at the end of this description).
* Adjust the (still hardcoded) nailgun pool size to keep idle servers beyond the number that are currently active (e.g. to allow for idle `javac` processes while `scalac` processes are active).
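As a rough sketch of the first bullet (not the engine's real types: `Digest` and `merge` below are placeholders standing in for the actual fs/store APIs), the idea is to keep the per-run and nailgun-server digests as separate fields, while still exposing the merged digest for callers that need the complete input set (e.g. for cache keys or non-nailgun execution):

```rust
/// Placeholder for the engine's content-addressed directory digest.
#[derive(Clone, Debug, PartialEq, Eq, Hash, Default)]
pub struct Digest(pub String);

/// Stand-in for the store's digest-merge operation.
fn merge(digests: &[Digest]) -> Digest {
    Digest(
        digests
            .iter()
            .map(|d| d.0.as_str())
            .collect::<Vec<_>>()
            .join("+"),
    )
}

/// Encapsulates all of the input digests for a `Process`.
#[derive(Clone, Debug, Default)]
pub struct InputDigests {
    /// The merged digest of all inputs: what a non-nailgun run materializes,
    /// and what cache keys are computed from.
    pub complete: Digest,
    /// Only the per-run files: materialized into each sandbox when a
    /// nailgun server is being reused.
    pub input_files: Digest,
    /// Only the nailgun server's files: materialized once, when the
    /// long-lived server process is started.
    pub use_nailgun: Digest,
}

impl InputDigests {
    pub fn new(input_files: Digest, use_nailgun: Digest) -> InputDigests {
        InputDigests {
            complete: merge(&[input_files.clone(), use_nailgun.clone()]),
            input_files,
            use_nailgun,
        }
    }
}
```

With this shape, nailgun execution only needs to materialize `input_files` per run, while the `use_nailgun` digest is materialized once when the server starts.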
Collectively, these changes make compilation 40% faster.
Fixes #13787.
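For the second bullet, a hedged sketch of what spawning on the `Executor` can look like, assuming a tokio runtime (which the engine's `Executor` wraps); the command, jar name, and workdir below are illustrative only, not the real nailgun setup:

```rust
use std::fs;
use std::path::PathBuf;
use std::process::{Child, Command};

async fn spawn_nailgun_server(workdir: PathBuf) -> std::io::Result<Child> {
    // `std::fs` and `std::process` block the calling thread, so run them on
    // the blocking pool rather than on an async executor thread.
    tokio::task::spawn_blocking(move || {
        fs::create_dir_all(&workdir)?;
        Command::new("java")
            .arg("-jar")
            .arg("nailgun-server.jar") // illustrative only
            .current_dir(&workdir)
            .spawn()
    })
    .await
    .expect("spawn_blocking task panicked")
}

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let child = spawn_nailgun_server(PathBuf::from("/tmp/nailgun-example")).await?;
    println!("started nailgun server with pid {}", child.id());
    Ok(())
}
```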