🐛 Bug: Hook failures change the shape of the test suite #1955
Comments
Any thoughts on this? I'm cc'ing @travisjeffery, @boneskull, and @danielstjules since you all seem to be the most active in the issues. As mentioned, I'd be willing to take a stab at implementing this if I thought the PR were likely to be accepted. Cheers!
If the code to implement the fix isn't too crazy, I'd likely accept it.
I'm also experiencing this. I work around it by running Mocha programmatically and manually marking any tests that match my current grep but have no state (i.e., that never ran) as failed.
@cletusw can you please post an example of how you do that?
I use my own workaround, roughly:

var tests = [];
// rootSuite is from Mocha
if (rootSuite.suites) {
  // grep is provided by the user and also passed to Mocha
  var userGrep = new RegExp(grep);
  tests = flatten(rootSuite).filter(function (test) {
    // Remove tests that didn't run because of a grep
    return userGrep.test(test.fullTitle);
  });
  tests.forEach(function (test) {
    // When a beforeEach fails, mocha skips the remaining tests in the
    // suite but doesn't mark them as failed, which makes the PASS/FAIL
    // numbers weird. If a non-pending, grep-matching test didn't run, we
    // just mark it as failed.
    test.pending || test.state || (test.state = 'failed');
  });
}
// Flatten the suite tree into a single array of tests, tagging each test
// with its full title and its top-level suite's title along the way.
function flatten(suite) {
  // Accumulate the parent's fullTitle so nested suites carry the whole path
  suite.fullTitle = ((suite.parent && suite.parent.fullTitle) ? (suite.parent.fullTitle + ' ') : '') + suite.title;
  suite.topSuiteTitle = (suite.parent && suite.parent.topSuiteTitle) ? suite.parent.topSuiteTitle : suite.title;
  if (suite.tests) {
    suite.tests.forEach(function (test) {
      test.fullTitle = suite.fullTitle + ' ' + test.title;
      test.topSuiteTitle = suite.topSuiteTitle;
    });
  }
  return suite.suites.map(flatten).reduce(function (a, b) {
    return a.concat(b);
  }, suite.tests || []);
}
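For context, the surrounding programmatic run might be wired up something like this (a sketch under assumed setup; markUnrunTestsAsFailed is a hypothetical wrapper around the snippet above, and the grep and test file are placeholders):

var Mocha = require('mocha');

// Hypothetical wrapper around the marking logic shown above
function markUnrunTestsAsFailed(rootSuite, grep) { /* ... */ }

var grep = '.*';                        // assumed user-supplied pattern
var mocha = new Mocha({ grep: grep });  // same pattern is passed to Mocha
mocha.addFile('test/example.spec.js');  // assumed test file

var runner = mocha.run(function (failures) {
  process.exitCode = failures ? 1 : 0;
});

runner.on('end', function () {
  // mocha.suite is the root suite
  markUnrunTestsAsFailed(mocha.suite, grep);
});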
@cowboyd Are you working on this by any chance? This is a huge problem for us and I'd really like to remove the hack I noted above.
Hi all, I'd second the priority of this as well.
@cletusw No, I'm not working on this. It's in that category of papercuts that we're willing to cope with because the solution seems like a lot of work. I'd be willing to collaborate, pair program, etc., but I can't take the lead on implementing this directly.
Haven't gotten a chance to look at this yet. It doesn't sound too bad, unless the reporters wouldn't understand it.
Even if this required changes to reporters, it would be worthwhile.
Any update on this issue? I'm also facing a similar problem.
We are also having the same issue. For us it would be logical to see the test cases reported as failed if the before or beforeEach hook fails; right now they just disappear.
Not sure if semver major or minor 🤔 Edit: ...just to be sure, let's go with semver major. |
If a beforeEach hook fails, then all subsequent tests in the suite and in all sub-suites are not run, and the run reports only a single testcase even though there are three defined. For example:
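A minimal sketch of such a suite (illustrative; not the original example from this issue):

describe('widgets', function () {
  beforeEach(function () {
    // Simulate broken setup that every test in the suite depends on
    throw new Error('setup failed');
  });

  it('renders', function () {});
  it('updates', function () {});
  it('destroys', function () {});
});

With the default spec reporter, this currently shows up roughly as follows (exact wording varies by Mocha version):

  widgets
    1) "before each" hook for "renders"

  0 passing
  1 failing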
As outlined in #1043, this is the intended behavior for beforeEach as well as for the other hooks. This makes sense from an efficiency perspective; after all, there is no point in actually running the testcases when it is assured that they are going to fail.

The problem is that when you're refactoring a large codebase, or doing something like upgrading an underlying library, or rehabilitating a codebase that has allowed its test suite to get out of sync (work that might take days or weeks), this behavior alters how many total testcases are reported. So as you make changes, the reporting varies widely. We've seen the pass/total numbers of test cases jump from 95/220 to 4/8, then down to 0/1 in the case of a global hook failure, then back up to 35/160, all in a matter of minutes.

This can be very disorienting, and it obscures your overall progress towards the goal of a completely green suite where all tests are passing. The fact that a test is not run does not mean that it doesn't exist, or that it is not important.
Rather than excluding a test completely from the output, it makes more sense to not run it but still report it as a failure. That way, the shape of the test suite remains constant: if I have 225 testcases, then that's what I have, even if only 26 of them are passing. I at least know that I'm 12% of the way there and which test suites share the same failure, and I can track total progress.
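Applied to the sketch above, the desired report would look something like this (illustrative):

  widgets
    1) renders
    2) updates
    3) destroys

  0 passing
  3 failing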
If this makes sense, it would be something I'd be happy to help implement.