Test modifiers #9
What is … So the test never runs? Or am I missing something here?
@jenslind it won't output its title and exits immediately when it's called.
@jenslind It's supposed to skip the assertion, but right now it tries to skip the whole test, which is clearly wrong. Going forward: …
What's the advantage of using …?
Yeah, I'm not totally sold on its merits either, but here's the reasoning I've heard:
As for skipped assertions: they're better than commenting out, as you won't have to change the … Happy to hear arguments either way.
I suppose I'm more of an orthodox "no useless crap in Git" kind of person. If you take it out, it's clear what happens in the diff, and you can put it back in whenever you need to, thanks to a thing called Git. It's the same reason why I don't commit commented-out code to begin with. That's just me, though.

However, the big reason behind my conclusion on this is Travis. If you have … This is kind of the point of tests: consistently looking for literally the exact same thing across all systems, every time, to spot inconsistencies with the tested code.

I'd say the argument for any sort of skip functionality is weak, at best. I'm more or less decided on …
> can add it back correctly when #9 is resolved
Despite your strong argument @Qix-, I think there still is a case for … I do agree, though, that using …
Idk, I still feel strongly against it. That's what TODOs are for. Though I see your point: if there is a means by which you convey to a team "Hey, here is a test that should work, but doesn't currently - TDD that thing up!", then …

Unfortunately, there isn't a great way to check whether or not …

Though just making it clear: I think the whole skip-a-test thing is begging for fragmentation, inconsistencies across systems, and just general confusion.
> can add it back correctly when #9 is resolved
@Qix- I mostly agree with you on the points you mentioned, but I think there should be at least … AVA should still display the skipped test, but make it loud and clear that this test was skipped.

Actually, your conversation gave me an idea on how we could implement one more unusual feature in AVA. A code example is worth a thousand words, so here it is:

```js
var test = require('ava');

test('regular test', function (t) {
	t.end();
});

test.warning('text of the warning', 'failing test', function (t) {
	t.true(false);
	t.end();
});

test.skip('skipped test', function (t) {
	t.end();
});

test.todo('test to implement soon', function (t) {
	t.end();
});
```

Output: (see screenshot)

I call that thing a test modifier. It modifies when a test is executed and whether it should be executed at all. I have these modifiers in mind:

**skip**

Skip a test completely:

```js
test.skip('some test', fn);
```

**warning**

Execute a test, but also display a custom warning message on the side:

```js
test.warning('this test has some weird shit going on', 'some test', fn);
```

**when**

Execute a test when `testFn` returns a truthy value:

```js
test.when(testFn, 'some test', fn);
```

**browser**

Execute a test only in a browser environment. Useful for libraries that support both node and browser, but also need to test some specific cases only in browsers:

```js
test.browser('some test', fn);
```

**node**

Opposite of `.browser()`:

```js
test.node('some test', fn);
```

**todo**

Mention that this test needs to be implemented. Useful when you come up with some condition you need to test, but have no time for it right now. The test is not executed, but its title is displayed in a "TODO" section at the end of AVA's output (see screenshot above):

```js
test.todo('test to be implemented', fn);
```

Let me know what you guys think!
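For illustration, here is a minimal sketch of how such modifiers could be layered on top of a shared test registry. This is not AVA's actual implementation; `registry` and `run` are hypothetical names, and the sketch only shows the bookkeeping, not reporting or concurrency:

```js
// Hypothetical sketch: each modifier is a thin wrapper that tags a
// test with a mode before pushing it into a shared registry.
var registry = [];

function test(title, fn) {
	registry.push({title: title, fn: fn, mode: 'run'});
}

test.skip = function (title, fn) {
	registry.push({title: title, fn: fn, mode: 'skip'});
};

test.todo = function (title) {
	registry.push({title: title, mode: 'todo'});
};

// Run everything that isn't skipped, and bucket titles by outcome.
function run() {
	var results = {passed: [], skipped: [], todo: []};
	registry.forEach(function (t) {
		if (t.mode === 'skip') {
			results.skipped.push(t.title);
		} else if (t.mode === 'todo') {
			results.todo.push(t.title);
		} else {
			t.fn();
			results.passed.push(t.title);
		}
	});
	return results;
}

test('regular test', function () {});
test.skip('skipped test', function () {});
test.todo('test to implement soon');

console.log(JSON.stringify(run()));
// prints {"passed":["regular test"],"skipped":["skipped test"],"todo":["test to implement soon"]}
```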
I know this is something very new and unusual, but so is AVA! It needs to be different, otherwise it will end up with no major advantages over tap/tape (aside from concurrent execution). I am not pushing it, let's discuss! I am thinking that this will make tests more verbose, and as a result, more clear and understandable.
@vdemedes it looks great! I was also thinking about something like this a few months ago, but I think the warning fits better as a method on `t`:

```js
test('regular test', function (t) {
	t.warning('custom warning message')
	t.is(true, true)
	t.end()
})
```

This will allow it to be used in any type of test: regular, todo, node, browser.

```js
test.todo('test to implement soon', function (t) {
	t.warning('but hey, be careful')
	t.end()
})
```
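A minimal sketch of what an assertion-level warning collector could look like; the `createT` factory and its shape are hypothetical, not AVA's real API, and only the warning bookkeeping is shown:

```js
// Hypothetical sketch of a per-test `t` object whose warning()
// records messages without ever failing the test.
function createT() {
	var warnings = [];
	return {
		warnings: warnings,
		warning: function (message) {
			warnings.push(message); // recorded, never throws
		},
		is: function (actual, expected) {
			if (actual !== expected) {
				throw new Error('Expected ' + expected + ', got ' + actual);
			}
		},
		end: function () {}
	};
}

var t = createT();
t.warning('custom warning message');
t.is(true, true);
t.end();

console.log(JSON.stringify(t.warnings));
// prints ["custom warning message"]
```

A reporter could then print `t.warnings` next to the test title after the run, which is the "display on the side" behavior discussed above.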
@tunnckoCore the idea behind test modifiers is to modify something related to a test, like when it executes or if it executes at all. If those modifiers are inside the test function, they can no longer control whether or when it runs.
Yes, I can agree with you, but I don't think that signature, `test.warning('this test has some weird shit going on', 'some test', fn);`, is the right one.
@vdemedes 👍 That looks great! Two possible additional modifiers:

**unless**

Opposite of `when`.

**critical**

If this test fails, do not proceed with other tests. This could be a kind of modified …
I like that, except for …
@MadcapJake +1 for `.unless()`. What you described about `.critical()` is exactly the same as `.before()` (it's already implemented).

@Qix- `.when()` is the same as `.node()` or `.browser()`, with a predefined `testFn`. The last two are basically presets. What do you think about `ava.test.skip()`? If you liked `.todo()`, then you also agree with `.skip()` (they are identical).
I agree with the idea that if you're going to have …

I'm still skeptical about conditional testing, though. Browser vs. Node I can understand, but when it comes down to test vs. test, I could see faulty PRs mistakenly being merged because, let's face it, maintainers rarely actually check the output of each test. Why? Because all test frameworks adhere to the idea that tests are consistent across all platforms, always, and that if a test fails then the whole process will fail. This kind of introduces the need to check each test run on a CI platform to ensure all the tests you really need to pass, passed. Having conditional tests will make it hard to see if the code being submitted actually passed all the necessary tests.

An argument against …

tl;dr I like the idea of todo tests/having verbose "This is why it's skipped" messages, but any conditional enabling of tests is sure to cause problems.
@vdemedes oh, I didn't realize that a …
I like … Not sold on …

We should rather provide a fail-fast flag for people that want to stop on the first failure (please discuss in #48). You can use …
@MadcapJake I think …

@Qix- @sindresorhus Ok, let's skip conditional tests for a while. If someone shows a real example of when they're needed, we'll review them again, but with an actual project.

@sindresorhus I see the point of …

So, let's proceed with …
Ok! I see that now! I think I was a bit thrown because it's written as …
@MadcapJake No no, the tests are atomic. But the … Here's a real-world …
Right. I understand what it does now, I just meant that it's a bit confusingly laid out in the docs, and the API is so similar to tests that it can be a bit deceiving.
I'm going to improve the …
I don't think … Personally, I prefer feature detection over environment detection:

```js
test.when(() => !!Promise, 'handle a resolved promise', t => {});
```

But I think the following would be even more flexible:

```js
test[Promise ? 'serial' : 'skip']('handle a resolved promise', t => {});

// Sadly there is no test.concurrent:
test.concurrent = test;
test[Promise ? 'concurrent' : 'skip']('handle a rejected promise', t => {});
```

Use cases are libraries like async-done: if one wants to test the library in environments without e.g. Promise support, some of the tests have to be skipped.
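A tiny helper can make the bracket-notation pattern above easier to read; `testIf` is a hypothetical name, and `test` here is a local stand-in stub rather than AVA itself:

```js
// Hypothetical wrapper around the feature-detection pattern above.
// The stub `test` only records which modifier was chosen.
var test = function (title, fn) { return {mode: 'run', title: title}; };
test.skip = function (title, fn) { return {mode: 'skip', title: title}; };

// Pick the modifier once, based on a feature check.
function testIf(condition) {
	return condition ? test : test.skip;
}

var withPromise = testIf(typeof Promise !== 'undefined')('handle a resolved promise', function (t) {});
var withoutFeature = testIf(false)('needs a missing feature', function (t) {});

console.log(withPromise.mode, withoutFeature.mode);
```

The feature check runs at registration time, so every environment still sees the same list of test titles; only the run/skip decision differs.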
Status update: …
Add ability to skip individual assertions. Fixes #9.
`this._skip` is still `false` after running `.skip()`. It needs `.bind(this)` to work. This fails with `i` being `1`.