Support for --tags to include/exclude tests based on some optional tagging #1445
Conversation
As you wrote, "So if this is going to get merged, and someone is going to maintain it, it has to be extremely concise." I like the idea of tagging my tests, and keeping that meta-info separate from the title. Please see my notes in 953f018c6de6cdf3b3443eec143a8f58ca7701fa.
Yes, this is definitely aimed at meta-tagging for selective automated testing. For example, use cases we've always wished we could handle are things like running only integration tests, skipping slow tests, or targeting specific browsers.
So far we've kind of managed to handle these with naming conventions and `--grep`.
Thanks a lot for all the feedback @boneskull. I simplified the PR to support the following:
I've also removed any third-party dependencies, and added tests.
Hi @boneskull, I have another proposal. I re-implemented it with a slightly different API, which I believe makes the implementation a lot cleaner (see "Files changed" now - I kept both commits but will squash them later).

To use tags:

```js
describe('my thing', function() {
  this.tag('integration', 'slow');
  describe('some tests', function() {
    this.tag('ie8');
    it('...', function() {});
  });
  it('...', function() {});
});
```

To filter at runtime:

```
mocha --tags "browser,integration" --skip-tags "slow"
```

I believe that's much more in line with the rest of Mocha, and easier to understand... thoughts?
I prefer the new implementation:

```js
describe('my thing', function() {
  this.tag('integration', 'slow');
  describe('some tests', function() {
    this.tag('ie8');
    it('...', function() {});
  });
  it('...', function() {});
});
```

It's a bit more restrictive in that you can only add it to suites. But that can be worked around by nesting tests in suites if the tagging needs to be different, which is the same workaround you need to do currently for timeouts.
Yes, I like the new implementation.
```diff
@@ -36,6 +36,11 @@ exports.create = function(parent, title){
   return suite;
 };

+Suite.prototype.tag = function(tags) {
+  var tags = Array.prototype.slice.apply(arguments);
+  this.ctx._tags = this.parent.ctx._tags.concat(tags);
```
these two lines can be rewritten as

```js
this.ctx._tags = Array.prototype.concat.apply(this.parent.ctx._tags, arguments);
```
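(For illustration only, not part of the PR: a standalone sketch of why the one-liner is equivalent. The `mergeTags` name is made up here.)

```js
// Array.prototype.concat.apply copies the first array and appends every
// remaining argument, without mutating the original -- the same result as
// slicing `arguments` and then calling concat.
function mergeTags(parentTags /* , ...tags */) {
  var args = Array.prototype.slice.call(arguments, 1);
  return Array.prototype.concat.apply(parentTags, args);
}

console.log(mergeTags(['integration'], 'slow', 'ie8')); // ['integration', 'slow', 'ie8']
```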
I'd like to merge this when my comments are resolved, unless @travisjeffery declines.
Thanks! I made changes according to your comments, and will add unit tests around the new filtering logic.

For now, there is a new `Filter` object used by the `Runner`:

```js
function Runner(suite) {
  this._filter = new Filter();
  // depending on command line args we can set
  // this._filter.grep = /myregex/
  // this._filter.invertGrep = false
  // this._filter.tags = ['integration']
  // this._filter.skipTags = ['slow']
  this.total = this._filter.total(suite);
}

Runner.prototype.runSuite = function(...) {
  // check how many tests would be run given the current filter
  var total = this._filter.total(suite);
  if (!total) return fn();
}

Runner.prototype.runTests = function(...) {
  // if this test shouldn't run, skip it
  if (!this._filter.run(test)) return fn();
}
```
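A rough sketch of what such a `Filter` could look like, based only on the properties and methods mentioned above (`grep`, `invertGrep`, `tags`, `skipTags`, `run`, `total`). This is an assumption for illustration, not the code in the PR:

```js
function Filter() {
  this.grep = null;        // RegExp or null
  this.invertGrep = false;
  this.tags = [];          // tags that must be present
  this.skipTags = [];      // tags that must be absent
}

// Decide whether a single test should run under the current filter.
Filter.prototype.run = function (test) {
  var grepMatch = !this.grep || (this.grep.test(test.fullTitle()) !== this.invertGrep);
  var testTags = (test.ctx && test.ctx._tags) || [];
  var include = !this.tags.length || matchTags(testTags, this.tags);
  var exclude = this.skipTags.length && matchTags(testTags, this.skipTags);
  return grepMatch && include && !exclude;
};

// Count how many tests in a suite (recursively) would run.
Filter.prototype.total = function (suite) {
  var self = this;
  var count = suite.tests.filter(function (test) { return self.run(test); }).length;
  return suite.suites.reduce(function (sum, child) { return sum + self.total(child); }, count);
};

// True if any of the filter tags appears in the test's tags.
function matchTags(actualTags, against) {
  return against.some(function (tag) { return actualTags.indexOf(tag) !== -1; });
}
```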
A nice side effect of making the filter a separate object is that you can just set its properties directly at runtime. I just pushed a working / unit-tested version as a new commit.
@rprieto Can you try rebasing and squashing please?
Done!
Are you happy to merge it @boneskull? I'm not sure how the docs get updated though, since they're not part of this repo.
Docs are in a separate branch.
Should I raise a separate PR for the docs?
I think it's more consistent to add the feature, the tests and the docs in the same PR/commit.
@dasilvacontin I agree. But how can I raise a PR that spans two branches?
@rprieto, ouch, good point. No, a PR can't target two branches. Separate PR then.
Hmm, I can't find it. Was it on diff comments? Maybe I'm just blind. 😄
I agree. Simpler tends to be easier to understand and to use, and it's less code to maintain.
Indeed, it can be solved with more custom tags. Unless you had 3 or more tags and now you have to write all combinations, which is a pain. We can just sit back and wait for people to request the feature, though. There's not much point in extending it if no one's going to use it.
You're right, it must have been on a commit that got squashed. The only thing I'd keep in mind is the cost of moving from one API to the other later. If we do decide to implement it, I'm happy either way.
+1, getting this in would be a huge benefit to my testing suite!
+1
Do we have a way in this configuration to run all the tests without a tag and all the tests with a particular tag? I could run multiple passes of Mocha, but that could potentially change the test behavior (it shouldn't, but that's very different from "it won't").
If you mean tests without a particular tag, then yes, you can do that with `--skip-tags`.
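For illustration (assuming the flag syntax proposed earlier in this PR):

```
mocha --skip-tags "integration"   # run every test NOT tagged "integration"
mocha --tags "integration"        # run only tests tagged "integration"
```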
Besides things like 'slow', 'fast', 'phantomjs-broken', the next logical place to need tags is for feature toggles. Feature toggles are not particularly interesting until you have at least one of them, and to have one toggle you need to mark your tests as unaffected by the toggle, only working when it's disabled, or only working when it's enabled. To run a full test suite you need to run all of the unaffected tests and the affected tests that apply to the current configuration. That would be group A and B, or A and C, depending on which way it's configured at present.

Now add a second flag, and you see the --skip-tags flag doesn't scale very well without a catchall. In order to run all of the unaffected tests you have to update your --skip-tags arguments to include all feature toggles in the project (see the illustration below). If there were a special case for 'none' (empty string perhaps?) then you could run mocha twice to get any combination of tests to run. That may be good enough, but as hard as we try, tests do tend to behave differently when executed in a different order.
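A hypothetical illustration of the scaling issue, assuming two toggle tags `featA` and `featB` and the comma-separated flag syntax used earlier in the thread:

```
# "unaffected" tests only: every toggle tag in the project must be listed explicitly
mocha --skip-tags "featA,featB"

# a full run with featA enabled still takes two passes
mocha --skip-tags "featA,featB"
mocha --tags "featA"
```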
I'd love to see this. FWIW, no one has addressed this in a while.
@MatthewRalston please use the thumbs up emoji reaction rather than spamming people with a message like that.
Hi @jcrben, any thoughts about this PR? Or more specifically, what do you think is missing so there can be a merge/close decision?
@rprieto I'm not a team member. It looks like @dasilvacontin, who is a member, gave me a reaction, tho, so maybe we should ask him 😉 Looks like he was waiting on word from @boneskull, who never got around to it. @travisjeffery wanted to stick with grep, but grep really doesn't handle this in a truly clean way.
I haven't tested this, but I have some comments about the code. Good job, ty!
```js
var include = (!this.tags.length) || matchTags(test.ctx._tags, this.tags);
var exclude = this.skipTags.length && matchTags(test.ctx._tags, this.skipTags);
// final decision
return grepMatch && include && !exclude;
```
We could return early once we calculate a condition that fails, so that we skip the unnecessary computation.
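For illustration, an early-return variant of the same check might look like this. It's a sketch, not the code in the PR; `Filter`, `matchTags`, and the grep/invert handling are assumed from earlier in the thread:

```js
// Hypothetical early-return version of the check in the diff above.
Filter.prototype.run = function (test) {
  var grepMatch = !this.grep || (this.grep.test(test.fullTitle()) !== this.invertGrep);
  if (!grepMatch) return false;                                                        // grep mismatch
  if (this.tags.length && !matchTags(test.ctx._tags, this.tags)) return false;         // missing a required tag
  if (this.skipTags.length && matchTags(test.ctx._tags, this.skipTags)) return false;  // carries an excluded tag
  return true;
};
```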
That's true. I thought `grepMatch && include && !exclude` was simple and easy to understand, and chose that over performance. Do you think performance matters a lot here?
From my testing, the performance gain is negligible, and this reads easier. LGTM, good call.
(like 12ms per 100 000 tests)
Sharing so that someone points out my test is bad or wrong:

```js
const filterTags = ['ci']
const testTags = ['slow', 'vendor', 'thing', 'ci']

function matchTags (actualTags, against) {
  return against.some((tag) => {
    return actualTags.indexOf(tag) !== -1
  })
}

console.time('shouldRun')
let tests = 100 * 1000
while (--tests) matchTags(testTags, filterTags)
console.timeEnd('shouldRun')
```
```js
Runner.prototype.skipTags = function(tags){
  debug('skip tags: %s', tags);
  this._filter.skipTags = tags;
  this.total = this._filter.count(this.suite);
```
Seeing this line quite a few times – might be worth refactoring into `Runner#updateTotal`. `total` is not very descriptive imo, but we can leave renaming that to an additional issue/PR and move on.
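For illustration, the suggested extraction could be as small as this (a sketch; the `updateTotal` name comes from the comment above, everything else is assumed):

```js
// Recompute the total number of tests that would run under the current filter.
Runner.prototype.updateTotal = function () {
  this.total = this._filter.count(this.suite);
  return this.total;
};
```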
Good point, thanks. I'll extract that into a helper and go for a more descriptive name.
```js
  }
}

return retLines;
```
Don't build `mocha.js`. That's done for releases – it's confusing and it clutters the PR / commit. No worries though!
Please rebase since we added a linter to CI recently. 👌
Also, since we are now under the JS Foundation, you'll need to sign the JS Foundation CLA before this gets merged. Just pointing it out, in case it's missed among the other CI checks. :)
Hey folks, I'm just a regular Mocha user, and I don't feel strongly one way or the other about this feature. I just wanted to understand: it seems all the mentions of tagging so far could already be handled by putting tags in the test titles and filtering with `--grep`?

E.g. examples from our real-world code:

```js
// This test suite relies on hitting third-party services,
// so skip it if we're running offline.
describe('Foo bar #skipoffline', function () {
  // This individual test can be pretty slow,
  // because we have to wait for yada yada,
  // so skip it if we're running fast tests only.
  it('lorem ipsum #skipfast', ...);
});
```

Then we can filter on those markers with `--grep` (and `--invert`).

Is the difference between `--tags` and this approach mainly that the meta-info stays out of the test titles?

Thanks!
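For illustration, the title-tag approach above could be driven with Mocha's existing `--grep`/`--invert` flags (hypothetical invocations, not quoted from the comment):

```
mocha --grep "#skipoffline" --invert   # skip the suites that need third-party services
mocha --grep "#skipfast" --invert      # skip the slow tests
```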
Funny story: I added a feature today where I found it much easier to use a conditional everywhere than to split things into separately-run test groups. Specifically, I could have done it much more cleanly if tag filtering existed. So +1 to this feature now. =) Thanks!
I wonder if there is an opportunity to do something more along the lines of RSpec's "User-defined metadata" tooling: https://www.relishapp.com/rspec/rspec-core/docs/metadata/user-defined-metadata. It seems like tags provide a fair amount of power, but have some limitations.
that could be useful, but rspec doesn't seem to have any filtering functionality with the metadata (though i only skimmed). "tags" are absolutely for filtering, and expanding that to user-defined metadata isn't too bad -- except we'd need to get clever on the CLI end.
wrong button. anyway, this is on hold until we hammer down the roadmap. we will have a forum for discussion about what goes in the roadmap.
+1
This would be a really nice feature to have. We need to run tests based on inverted form-factor tags like '@not_mobile' and '@not_tablet' for tests that don't support all form factors, and also other tags like slow/fast, etc. I have tried to use grep but can't find any good solution to achieve what I want to do above, so this feature would sort out that problem.
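For illustration (assuming the `--tags`/`--skip-tags` flags proposed in this PR and the tag names from the comment above):

```
mocha --skip-tags "not_mobile"   # run only the tests that support mobile
mocha --skip-tags "not_tablet"   # run only the tests that support tablet
```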
I am a bot that watches issues for inactivity. |
the suggested tagging API has changed since the initial proposal, see new comments further down
As discussed in #928, this is a tentative PR for tagging support in the `bdd` interface. I hope this is not against the guidelines (PR for a new feature), but it will be easier to discuss what's feasible with some visibility on the code changes.

The idea is that you can tag `describe` and `it` blocks, and use `--tags` to run a subset of the tests:

- `mocha` runs everything
- `mocha --tags "not:integration"` runs test `1`
- `mocha --tags "is:integration"` runs tests `2` and `3`
- `mocha --tags "is:integration not:slow"` runs test `2`

You can also programmatically update the filter.

If this looks good, happy to look into how to unit test & document it.

Cheers
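The example tests `1`–`3` referenced above didn't survive in this copy. Purely as an illustration, a layout consistent with the filters above might look like this, written against the later `this.tag()` API proposed in the comments rather than the original syntax (so the shape of the suites here is an assumption):

```js
describe('my feature', function () {
  // test 1: no tags
  it('test 1', function () {});

  describe('integration tests', function () {
    this.tag('integration');

    // test 2: tagged "integration" only
    it('test 2', function () {});

    describe('slow integration tests', function () {
      this.tag('slow');

      // test 3: tagged "integration" and "slow"
      it('test 3', function () {});
    });
  });
});
```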