Make it easier to write reliable cleanup tasks #928
Comments
Agreed that […]
Another vote for […]
I prefer […]. Can we make it clean up even on […]?
I lean toward […]. The only reason is that I think it makes it more obvious what is going on without reading the AVA docs. The notion that cleanup is going to happen both before and after is going to surprise people. They are going to write […]. My concern with […]
We can try. There is no guarantee it would work. The only way to guarantee cleanup works would be to kill the first process immediately, then relaunch and run just the cleanup method. That falls apart if you use something like […]
Hmm, wasn't […]?
@vdemedes I would prefer that too, but I remember there being some good points about why […]
Not just that, but now we're introducing combinatory methods, making the test syntax harder to understand. It's starting to get a bit too DSL-y.
That was our assumption, yes. The question though is how […]. After an uncaught exception there are no guarantees as to what code may still be able to run. Here too we end up forcibly exiting the process. Note that in both cases we first do IPC with the main process before actually exiting. I'm not quite sure how much code still has a chance to run. It's possible […].

We should decide what guarantees we want from […].

Also, our understanding of "test cleanup" has evolved to the point where we now see the best strategy for ensuring a clean test environment is to clean up before you run your test, and then clean up after to avoid leaving unnecessary files and test databases lying around. Hence the proposal for a […].

Note that […]
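To make that clean-before-and-after strategy concrete, here is a minimal sketch using AVA's public hooks; the temp-dir path and the cleanTempDir helper are illustrative, not from the thread:

```js
import fs from 'node:fs';
import os from 'node:os';
import path from 'node:path';
import test from 'ava';

const tempDir = path.join(os.tmpdir(), 'my-test-fixtures');

// Illustrative helper: wipe whatever a previous (possibly crashed) run left behind.
const cleanTempDir = () => {
	fs.rmSync(tempDir, {recursive: true, force: true});
};

// Clean up before each test, so a dirty environment can't break this run...
test.beforeEach(() => {
	cleanTempDir();
	fs.mkdirSync(tempDir, {recursive: true});
});

// ...and after each test (even a failing one), so nothing is left lying around.
test.afterEach.always(() => {
	cleanTempDir();
});
```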
Maybe we could rerun the process and only run the […]?

See #928 (comment)
Just my two cents, since I was welcomed to comment. I have only used AVA for one project so far, but I immediately ran into this issue and it confused me very much as a user and avid code tester. My expectation, when splitting my code up into a […].

Also, I see some bad advice and rhetoric happening here and in #918, suggesting that cleanup should happen before and after, allowing you to re-run tests even in a dirty environment. While I don't disagree that it is a good idea to check that your environment is exactly as desired before a test, there has to be a stake in the ground saying that unit tests must not permanently alter the environment, even if there are other tools in the toolchain (such as […]).

With that said, I would support the option of a […]
A crash due to a bug in AVA would be something we'd fix, yes. But what if the crash is caused by the code being tested? There is no 100% reliable way to recover from that and undo changes to the environment. Similarly the […].

In other words, unless your tests always pass, there will be moments where a test run leaves the environment in a different state. AVA can't know that's the case since it doesn't understand […]
We should clearly document the intent of the hook. But AVA is not afraid of being unorthodox 😉
I have never seen another test framework do this, and I have used quite a few. I have had code throw errors, both intentional and unintentional, both synchronously and asynchronously, have typos, and flat out be running invalid JavaScript, and the test framework catches that and still runs the […].

AVA may not understand the […]
I have actually seen this cleanup problem in test suites written by highly active contributors to AVA. 😉 Being unorthodox is great as long as it adds value. In this case, it looks like it's actually taking away value.
@catdad It's only […]
This bit me today. A test failed and temporary files were left all over simply because I had […]
Yikes. That seems like a major mixing of concerns. This feature is named […]
@sholladay I like your suggestion of having a […]
I think […]
If it were totally up to me, I'd consider removing […]
We previously discussed this in #474 (comment). The concern was that the […]
Yeah... Looks like there's no win-win solution at the moment for this.
I would ask "why", but it's going off on a bit of a tangent. I haven't personally needed something like that. But it does exist elsewhere and cleanly solves the "leave the environment in the state in which the failure occurred" story, even though I think that is a bad idea 95% of the time. If it needs to be a thing, it can be an option. I don't think it should be the default.
This sounds hypothetical to me. I would bet money that the vast majority of cleanup hooks are doing the equivalent of […].

I know AVA likes to pave its own path, and to good effect. But I think Intern really gets this right. It makes strong guarantees that if a test runs, its hooks run.
Yea, that makes sense to me too. #840 discusses providing the test status to an […]
I think this is related, actually. We have issues with bringing test execution to a halt when […].

I like how this simplifies the mental model for most use cases.
Is there any chance this could make it in for 1.0.0? Needing to remember […]
I'd like to focus on outstanding Babel interoperability issues. But don't worry, we won't shy away from making breaking changes when necessary, and we can support a deprecation path.
Okay, that makes sense. The Babel interop is clearly pretty complex and a lot to keep track of. FWIW, though, it's been working pretty well for my own use cases. 😃 I'll see if I or someone on my team can contribute to this when you think the time is right. Seems like a good first step is to expose the test pass/fail/error status to the hooks? That doesn't even have to be a breaking change necessarily.
That is great to hear. 🎹 👂
@issuehuntfest has funded $80.00 to this issue. See it on IssueHunt |
@rororofff has funded $2.00 to this issue.
Meanwhile, has somebody found a workaround for this missing feature? Like doing a global […]? It's very annoying to do manual cleanup 😊
Tests start running asynchronously, so that won't work, unfortunately.
I wonder if […]
You'd have to assign all those promises to an array and await them at the end, though, since AVA requires you to declare all tests at once. It also means errors are attributed to "well, the process had an exception" rather than a specific cleanup task.
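One reading of that workaround, sketched below with a stand-in startDatabase resource; note it still goes through after.always, so it inherits that hook's caveats (it won't run on a crash or with --fail-fast):

```js
import test from 'ava';

// Stand-in for a real resource that needs explicit shutdown.
const startDatabase = async () => ({
	stop: async () => {},
});

// Cleanup callbacks collected as tests create resources.
const cleanups = [];

test('stores a record', async t => {
	const db = await startDatabase();
	cleanups.push(() => db.stop());
	t.truthy(db);
});

// Await every registered cleanup once all tests have finished.
test.after.always(async () => {
	await Promise.all(cleanups.map(fn => fn()));
});
```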
@novemberborn Due to the age of this issue and the introduction of […]
I've been thinking about this for a while now. I think as part of #2435 I'd like to have a "set-up" lifecycle thing which can return a teardown function. It's a different use case from "run this before" and "run this after". I've assigned this to myself for now.
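Purely to illustrate the shape of that idea: the test.setup API below is hypothetical and was never shipped (AVA's real t.teardown() covers a similar need within a single test):

```js
import test from 'ava';

// Hypothetical API sketch, not part of AVA:
test.setup(async t => {
	const server = await createServer(); // hypothetical resource
	t.context.server = server;

	// Returning a function registers it as the matching teardown, so the
	// resource's creation and release live side by side.
	return async () => {
		await server.close();
	};
});
```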
@novemberborn Is this still relevant? Curious why it was removed from priorities. I'm considering doing a PR with a […].

I'm still struggling to have cleanup code run consistently. I would like cleanup to be run on test success, test failure, uncaught exceptions, timeout, and SIGINT (ctrl-c) - and ideally all the other kill signals. I have tried after.always, afterEach.always and t.teardown. My use case is closing webdriverio clients (that eat lots of RAM) when tests aren't running. I've considered doing a temporary workaround by catching the node process exit signal, but how would I get access to the AVA context from there, which has the webdriverio client instances?
I think this is unnecessary as it could be done trivially in user-space with an if statement in the […]
I don't know @mikob, that was 3 years ago! 😄
I think this is the tricky bit. At some point the worker process/thread needs to exit, especially if there's been a fatal error. Within the API that AVA provides there'll always be ways in which cleanup cannot occur.
I wonder if AVA 4's shared workers feature could be used for this. It can track the comings and goings of test workers and runs separate from the tests. The tricky thing may be to expose the clients to the tests.
Could you hook that up when you create the clients?
Haha, fair enough!
I don't think we need a 100% guarantee. It's more a convenience and responsibility thing. I feel it's fairly common for a test to throw unhandled exceptions; after all, that's what tests are for. And SIGINT happens when we quit a test run prematurely.
It would make more sense to have the setup/cleanup handled by AVA APIs symmetrically, IMO. Also, I'm doing one client per test. Since I create a webdriverio client in beforeEach, a cleanup or afterEach should tear it down.
I'm talking about process.on("SIGINT", …), and the clients are created in beforeEach (I need one per test). I'm not sure how to access all the workers' contexts from within the process.on callback.
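One way to make the clients reachable from a signal handler is a module-level registry, sketched below. webdriverio's remote() and deleteSession() are real calls; the rest is illustrative, and none of it helps if the worker receives SIGKILL or the event loop is locked up:

```js
import test from 'ava';
import {remote} from 'webdriverio';

// Module-level registry so a process-wide handler can reach every live client.
const liveClients = new Set();

test.beforeEach(async t => {
	const client = await remote({capabilities: {browserName: 'chrome'}});
	liveClients.add(client);
	t.context.client = client;
});

test.afterEach.always(async t => {
	liveClients.delete(t.context.client);
	await t.context.client.deleteSession();
});

// Best-effort cleanup if the worker is interrupted mid-run.
process.on('SIGINT', () => {
	Promise.allSettled([...liveClients].map(client => client.deleteSession()))
		.finally(() => process.exit(1));
});
```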
@mikob re-reading the conversation here, I think #928 (comment) sums up a direction we can take this. But I don't think that would solve your problem. Do you get PIDs for the clients? You could send those to a shared worker that makes sure they get shut down, yet still instantiate them within the test worker itself.
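A rough sketch of that direction, assuming AVA 4's shared worker plugin API (registerSharedWorker from 'ava/plugin' and the 'ava-4' protocol); the details here are from memory, so consult the shared workers recipe for exact signatures:

```js
// pid-registry.mjs: the shared worker, which outlives individual test workers.
export default async ({negotiateProtocol}) => {
	const main = negotiateProtocol(['ava-4']).ready();

	for await (const testWorker of main.testWorkers()) {
		const pids = new Set();

		// Record every PID this test worker reports.
		(async () => {
			for await (const message of testWorker.subscribe()) {
				pids.add(message.data.pid);
			}
		})();

		// When the test worker exits (pass, fail, or crash), reap its clients.
		testWorker.teardown(() => {
			for (const pid of pids) {
				try {
					process.kill(pid, 'SIGKILL');
				} catch {
					// Process already gone.
				}
			}
		});
	}
};
```

On the test side you would register the worker once per file, wait for it to become available, and publish each client's PID as you create it.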
To add another case here: these end-to-end tests spawn multiple instances of the tested software to test how they interact. There's significant setup, and tests are all serial. If a test times out, the test process ends but leaves these spawned processes running (which might continue logging to files, holding ports open, and otherwise interfering unpredictably with future test runs) – this happens even though Execa should be cleaning them up – I think because AVA is killing the timed-out tests with […]. Even disabling […]:

```js
test.after.always(async () => {
	// Kill any instances the tests spawned, even after a failure.
	const processes = [p1, p2, p3];
	processes.forEach((p) => {
		if (p !== undefined) {
			p.kill('SIGKILL');
		}
	});
});
```

The current behavior is pretty surprising – I've thought these were unrelated issues with the tested software for a while now, and I only just now realized it's all related to this AVA issue. A more predictable behavior here would be much appreciated 🙏
Child processes are killed with […]. Clean-ups when tests time out are pretty tricky, since the worker thread itself may be locked up.
See: #918 (comment)

`after.always` has some issues when used as a cleanup task. Specifically, it won't run if:

- `--fail-fast` is used.

I've advocated using `.before` or `.beforeEach` to ensure state is clean before running, but that means state is left on disk after the test run. It's easy enough to get around that:

[…]

Still, it might be nicer if we had a modifier that allowed you to do it a little cleaner:

[…]

Or maybe we introduce a `.and` modifier:

[…]

I think the second gives you a little more flexibility and is clearer without reading the docs. The first is probably simpler to implement (though I don't think the second would be very hard).
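The code samples from the original issue didn't survive extraction. As a best guess at their shape, the workaround and the two proposed modifiers might have looked roughly like this; both modifiers are hypothetical proposals, not shipped AVA API:

```js
import test from 'ava';

// Shared cleanup routine: remove temp files, drop test databases, etc.
const cleanup = async () => {};

// The workaround: run the same cleanup before the run and after it, always.
test.before(cleanup);
test.after.always(cleanup);

// Proposal 1, a dedicated modifier that runs both before and after (hypothetical):
// test.cleanup(cleanup);

// Proposal 2, an `.and` modifier that combines hooks (hypothetical):
// test.before.and.after.always(cleanup);
```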