Jest leaks memory from required modules with closures over imports #6814
Comments
@mjesun I think that's it. I ran into the same issue when updating Jest internally at Facebook.
Wow, that's an awesome report, @pk-nb! @aaronabramov can we use
I'm not sure I follow this repro. You're doing this:

It seems expected to have a leak, since you patch `https.request`. When I look with Jasmine, the memory does go up too, just much slower, for two reasons:

I also tried your code on 23.0.0, and it seems to also reproduce the behavior, suggesting it always worked this way. Am I missing something?
Hi @arcanis, thanks for taking the time to look at this. I will certainly admit the example given is not great code, but it is an example of dependencies found in the wild that aim to mock or extend, including graceful-fs, node-agent, and supertest. I assume there are quite a few packages like this in the wild. We are just porting to Jest now and not comparing this to previous versions of Jest; I'm not sure what version @aaronabramov is updating from.

What's interesting is that I could not repro with the buffer suggestion you posted. Even though the buffer is in the closure, it is being released in a sawtooth:

```js
const https = require('https');

let buffer = Buffer.alloc(1000000000);

https.request = (options, cb) => {
  console.log(buffer.byteOffset, buffer.length);
  return cb(buffer.byteOffset);
};
```

I updated suites 01 and 03 in this repro branch if you want to check: https://github.com/pk-nb/jest-memory-leak-repro/tree/external-buffer. I'm not sure why this would happen; maybe V8 can be more intelligent about optimizing out the call or something.

Ignoring the buffer: with respect, I think the Jest require system should aim to support all of the node require ecosystem, including these "singleton" packages that assume a require cache. I imagine this is tricky with the separate Jest environments, but hopefully it could be supported by cleaning / flushing the requires in Jest. Otherwise the above node packages end up unusable in Jest.
We tried getting rid of the leak caused by modifying global objects in #4970, but that caused massive failures. Node has a very particular boot-up process, which includes a stateful ping-pong between the JS and the native side, and that made this impossible. That said, as per @aaronabramov's comment, we know we've recently introduced a leak somewhere else too.
Thanks for the context @mjesun, I didn't realize this had been attempted before. For anyone checking on the issue, here's the revert PR: #5009.

Maybe an obvious / bad idea, but could we simply replace the core module objects following each test to release any global modifications? Something like:

```js
coreModules.forEach((key) => delete require.cache[key]);
```

This, in addition to setting all custom requires to `null`.
Because in this new code you're always replacing the previous hook:

```js
https.request = (options, cb) => {
  console.log(buffer.byteOffset, buffer.length);
  return cb(buffer.byteOffset);
};
```

Since you overwrite `https.request` each time, the previous hook (and the buffer its closure captured) becomes collectable. To actually keep the reference alive, you'd need something like:

```js
let originalHttpsRequest = https.request;
let buffer = Buffer.alloc(1024 ** 3);

https.request = (...args) => {
  // Keeps both the previous implementation and the buffer reachable.
  return originalHttpsRequest.call(https, ...args, buffer);
};
```
Wanted to leave an update: we found the accidental capture on our side. I can close this if you'd like, as this was a user issue (I hope this is a good Google reference for others who made the same mistake we did).

It's still a bummer that libraries that don't have a mechanism for releasing global modifications will be a blocker for others. I'm not sure what is actionable by Jest outside of moving to an environment reset per test suite (maybe by resetting / reimporting all built-in modules?) or running in separate threads so that leaks can't continue.
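For anyone else who lands here with the same mistake, a minimal sketch of the release pattern (assuming a test-scoped patch of `https.request` like the one in the repro; the buffer just stands in for whatever state the closure captures):

```js
const https = require('https');

const originalRequest = https.request;
const buffer = Buffer.alloc(1000000); // stands in for any captured state

beforeAll(() => {
  https.request = (options, cb) => cb(buffer.byteOffset);
});

afterAll(() => {
  // Restoring the original releases the stub's closure, so the buffer
  // (and everything else the closure reaches) can be garbage collected.
  https.request = originalRequest;
});
```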
We have the same issue with graceful-fs not releasing fs. It would indeed be awesome to somehow force a drop, or at least flag it when stuff like this happens.
@pk-nb do you have a workaround for that?
Facing the same issue.
Seeing that graceful-fs fixed this in 4.1.12, is it somehow possible to roll out a hotfix for Jest < 24.x? This is preventing my company from transitioning our test suite from Jasmine to Jest.
We use
BTW, can anybody confirm if this indeed fixes the issue and we can close it?
@thymikee Thank you for the quick answer. Sadly, it appears not to fix the issue. After ~1000 tests I start maxing out my remaining RAM, which is about 10-12 GB. The trend of ever-increasing memory usage is strictly linear, which should be further proof that this is not a memory leak on the tests' end.
I'm pretty sure Jest itself doesn't leak. Do you have some setup/teardown you run for every single test? Or are you able to put together a reproduction?
It is an Angular project, so in most of the test files I use the TestBed to configure my testing module. Other than that, we don't use any setup/teardown. Am I potentially not respecting any special rules for Angular unit tests with Jest? Most (70%) of our tests look somewhat like this.
@markonyango any non-native node module which decorates a native module will leak in Jest.
@lev-kazakov Sadly, implementing the jest-leak-fixer according to your README did not solve the problem. I would be happy to run your tests against our dependency graph if you could send me instructions on how to do it.
Possibly related to this: nock/nock#1448
Any updates? We still face memory leaks from time to time, especially in a memory-constrained CI environment (leaks add up and crash at some point).
This issue has also been surfacing in GitHub Actions in this project: https://github.com/lifeiscontent/realworld/pull/229/checks?check_run_id=826716910
In my team we have the same issue, causing our tests to fail with an 'out of memory' error.
We run our tests using plain old Unix tools to avoid running into these memory leak issues. In case it helps someone, it's a combination of `find`, `sort`, and `xargs`:

```sh
find ./tests -type f \( -name \*.test.ts -o -name \*.test.tsx \) | sort -R | xargs -L 5 -P 4 jest --runInBand --logHeapUsage
```

(`-L 5` hands five test files to each Jest invocation and `-P 4` runs four invocations in parallel, so no single Jest process lives long enough to exhaust memory.)
*** EDIT *** Ignore the below, the issue is back :(

TLDR: the culprit seemed to be a custom test environment.

My two cents on this: I was using a custom test environment (https://jestjs.io/docs/en/configuration#testenvironment-string). In there I was setting up the database, exposing it through globals, dropping it, etc. before each of my tests ran. In the end I ditched this in favour of a helper class which I just instantiate in the setup of each of my test files. This instance gets garbage collected without any issues. I realise this probably isn't the most explicit comment, but sadly I can't post any code.
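Since no code could be posted, here is a hedged sketch of the pattern being described; every name in it is hypothetical:

```js
// db-helper.js (hypothetical names throughout): instantiated inside
// each test file instead of living in a long-lived custom testEnvironment.
class DbHelper {
  constructor(connect) {
    this.connect = connect; // e.g. your database client's connect function
  }

  async setup() {
    this.db = await this.connect();
  }

  async teardown() {
    await this.db.close();
    this.db = null; // drop the reference so the instance can be collected
  }
}

module.exports = { DbHelper };
```

Because the instance lives in the test file's own scope (set up in `beforeAll`, torn down in `afterAll`) rather than on a shared environment object, nothing outlives the test file.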
OK, so I've got a workaround which I hope may be useful to others. You can use this solution to run your tests in batches, one Jest process per batch. The processes are run sequentially, so it should keep the memory usage down. Not ideal, but functional; I needed a quick fix, as this issue was preventing us from merging any PRs.

```ts
// run-tests-ci.ts
import {execSync} from "child_process";
import glob from "glob";

const shell = (cmd: string) => execSync(cmd, {stdio: "inherit"});

const run = async () => {
  console.info(
    "Running tests in batches to avoid memory leak issue... this means you will see jest start up multiple times.",
  );
  const files = glob.sync("./test/**/*.test.ts");
  const batchSize = 20;
  let batch: string[] = [];
  const runBatch = () => {
    if (batch.length) {
      shell(`jest ${batch.join(" ")}`);
      batch = [];
    }
  };
  for (const file of files) {
    batch.push(file);
    if (batch.length === batchSize) {
      runBatch();
    }
  }
  // Run the final, possibly partial batch.
  runBatch();
};

run().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

Then add a script for it to package.json and run with that.
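The package.json snippet from this comment didn't survive the copy; the wiring presumably looked something like this (script name and ts-node usage are my assumptions, not the original):

```json
{
  "scripts": {
    "test:ci": "ts-node run-tests-ci.ts"
  }
}
```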
@stuart-clark-45 how do you combine the output from multiple `--coverage` runs?
This issue is stale because it has been open for 1 year with no activity. Remove stale label or comment or this will be closed in 14 days.
I think, as mentioned a few times, this is not a bug in Jest per se, but the way Jest works (its own module cache, different globals per test file, etc.) can sometimes violate assumptions made by libraries when they patch/override/add to globals (either core modules or `global`).

Not sure what we can do in Jest to detect this and warn, instead of just leaking. One thing is #8331 for modules, and potentially trying to add a similar check for globals.
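To illustrate the class of library behavior described here, a sketch (not taken from any specific library) of a module that decorates a core module the way graceful-fs decorates `fs`:

```js
// patcher.js: written assuming one shared require cache,
// i.e. "patch once, live forever".
const fs = require('fs');

const realReadFile = fs.readFile;

fs.readFile = function patchedReadFile(...args) {
  return realReadFile.apply(fs, args);
};
```

Under plain Node this module evaluates once. Under Jest, every test file re-evaluates it in a fresh module registry, so each file stacks another wrapper onto the shared `fs` object, and every older wrapper keeps its whole registry reachable; that is the leak.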
@SimenB If it's innate to the implementation of Jest that a lot of larger code bases might run into these problems, then I think it would be very helpful to have more details in the docs on why this happens and common ways of debugging it (beyond the existing CLI args). As it currently stands, there's not a lot of detail on how the module loading works in the docs. That's understandable in the sense that users "shouldn't have to know how it works". However, if having a general understanding of how Jest works can help users write more performant tests and understand why memory leaks can easily become an issue, then it's perhaps worth reconsidering.
This issue is stale because it has been open for 1 year with no activity. Remove stale label or comment or this will be closed in 30 days.
Oh no, you don't.
For the agent-base leak, you can mock out its core patch in a setup file:

```json
// package.json
"setupFilesAfterEnv": [
  "./test/setup-after-env.ts"
],
```

```ts
// test/setup-after-env.ts
jest.mock('agent-base/patch-core', () => ({}));
```
🐛 Bug Report
We are running into an issue in Jest 23.4.2 where Jest leaks and runs out of memory when running a moderately hefty test suite (~500 to 1000 test files). I believe I have isolated this to the `require` system in Jest, and that it is not the fault of other packages. Even with the most minimal recreation it leaks around 2-6 MB per test.

This is very similar to #6399, but I opted to make a new issue as I think it's not specific to packages or the node environment. I think it's also the source of, or related to, the following issues, but I didn't want to sidetrack potentially different conversations:
#6738
#6751
#5837
#2179
This is my first time digging into memory issues, so please forgive me if I am focusing on the wrong things!
Link to repl or repo
I have created a very minimal reproduction here: https://github.com/pk-nb/jest-memory-leak-repro. You should be able to run it, see the heap grow, and also debug it with the Chrome node devtools. With the reproduction, we can see this happens in both `JSDOM` and `node` environments in Jest.

To Reproduce
Simply run a test suite with tests that require in a file that creates a closure over an imported variable:
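The code block here didn't survive the copy; judging from the snippets quoted elsewhere in the thread, the required file looks something like this (file name and buffer size are illustrative):

```js
// leaky.js: patches a core module with a closure over module state.
const https = require('https');

const buffer = Buffer.alloc(1000000); // stands in for any module state

https.request = (options, cb) => {
  // The closure keeps `buffer` (and this module's entire scope) alive
  // for as long as the patched `https.request` is reachable.
  return cb(buffer.byteOffset);
};
```

Each test file then simply does `require('./leaky')`.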
While every worker leaks memory and will eventually run out, it is easiest to see with `--runInBand`.

Note that we are not doing anything with `require` to force a reimport; this is a vanilla `require` in each test.

When run with `jasmine`, we can see the issue go away, as there is no custom `require` implementation for mocking code. We also see the issue disappear if we release the variable reference for GC by setting it to `null`, as sketched below.
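As a sketch of what "setting to `null`" means here (again assuming the repro's patch; variable names are illustrative):

```js
const https = require('https');

let buffer = Buffer.alloc(1000000);

https.request = (options, cb) => cb(buffer && buffer.byteOffset);

afterAll(() => {
  // The patched function survives in Jest's module registry, but with
  // this reference dropped the buffer itself can be garbage collected.
  buffer = null;
});
```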
I believe the closure is capturing the entire test context (which also includes other imports like `jest-snapshots`), which quickly adds up.

Expected behavior
Hoping to fix this so there is no leak. This unfortunately is preventing us from moving to Jest, as we cannot run the suite on our memory-bound CI (even with multiple workers to try to spread the leak).
I'm hoping the reproduction is useful. I spent some time trying to fix it with some basic guesses at closures, but ultimately I am in over my head with the codebase.
You can see the huge closure in the memory analysis, so I'm inclined to think it's some closure capture over the `require` implementation and/or the jasmine async function (promise).

Some leak suspects:

- Could something be capturing `next`, preventing it from releasing? https://github.com/facebook/jest/blob/master/packages/jest-runner/src/run_test.js#L218-L220

These are educated guesses, but there are quite a few closures within the runtime / runner / jasmine packages, so it's very difficult (at least for me, being new to the codebase) to pinpoint where the capture lies. I'm hoping that there's a specific point and that each closure in the runtime would not present the same issue.
Our suite
I have ensured the issue stems from Jest and not our suite—I ran the old suite (mocha) and saw a healthy sawtooth usage of heap.
Run `npx envinfo --preset jest`
Please let me know if I can help in any way! I'd really love to get our company on Jest and am happy to help where I can. Thanks @lev-kazakov for the original isolation repro.