
Commit

Merge branch 'master' into pr/cross-record
mergify[bot] authored Apr 27, 2021
2 parents d9d0ce1 + 6c99963 commit ff7ed3f
Showing 2 changed files with 28 additions and 0 deletions.
14 changes: 14 additions & 0 deletions patches/README.md
@@ -27,3 +27,17 @@ $ time node_modules/.bin/lerna exec pwd
lerna success exec Executed command in 219 packages: "pwd"
1.11 real 0.99 user 0.82 sys
```

## jsii-rosetta+1.28.0.patch

jsii-rosetta uses multiple worker threads to speed up sample extraction, spawning
one worker for every two cores by default. On extremely powerful build machines
(e.g., CodeBuild X2_LARGE compute with 72 vCPUs), the resulting 36 workers cause
thrashing and high memory usage due to duplicate loads of source files. This was
causing the v2 builds to fail with:
"FATAL ERROR: NewSpace::Rebalance Allocation failed - JavaScript heap out of memory"

The patch caps the number of worker threads at an arbitrarily chosen maximum of 16.
We could disable the worker threads entirely, but that makes extraction much slower:
single-threaded, rosetta takes ~35 minutes to extract samples from the CDK, while
with 16 workers it takes ~3.5 minutes.
14 changes: 14 additions & 0 deletions patches/jsii-rosetta+1.28.0.patch
@@ -0,0 +1,14 @@
diff --git a/node_modules/jsii-rosetta/lib/commands/extract.js b/node_modules/jsii-rosetta/lib/commands/extract.js
index e695ea9..539038e 100644
--- a/node_modules/jsii-rosetta/lib/commands/extract.js
+++ b/node_modules/jsii-rosetta/lib/commands/extract.js
@@ -104,7 +104,8 @@ exports.singleThreadedTranslateAll = singleThreadedTranslateAll;
async function workerBasedTranslateAll(worker, snippets, includeCompilerDiagnostics) {
// Use about half the advertised cores because hyperthreading doesn't seem to help that
// much (on my machine, using more than half the cores actually makes it slower).
- const N = Math.max(1, Math.ceil(os.cpus().length / 2));
+ // Cap to a reasonable top-level limit to prevent thrash on machines with many, many cores.
+ const N = Math.min(16, Math.max(1, Math.ceil(os.cpus().length / 2)));
const snippetArr = Array.from(snippets);
const groups = util_1.divideEvenly(N, snippetArr);
logging.info(`Translating ${snippetArr.length} snippets using ${groups.length} workers`);
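
For context on the numbers above, here is a minimal standalone sketch (not part of the patch; the `workerCount` helper name is invented for illustration) showing how the capped formula behaves on machines of different sizes:

```js
// Illustration only: evaluates the same arithmetic as the patched line in
// extract.js -- half the advertised cores, at least 1, capped at 16 workers.
const os = require('os');

function workerCount(cores) {
  return Math.min(16, Math.max(1, Math.ceil(cores / 2)));
}

console.log(workerCount(4));   // 2  -- a small laptop
console.log(workerCount(72));  // 16 -- CodeBuild X2_LARGE: capped down from 36
console.log(workerCount(os.cpus().length)); // the current machine
```

On the 72-vCPU machine described in the README above, the cap brings the worker count down from 36 to 16, which avoids the out-of-memory failure while keeping extraction at roughly 3.5 minutes.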
