
Test modifiers #9

Closed
kevva opened this issue Nov 25, 2014 · 26 comments

@kevva
Contributor

kevva commented Nov 25, 2014

this._skip is still false after running .skip(). It needs .bind(this) to work.

var i = 0;

ava(function (a) {
    a.skip(); // a.skip.bind(this) works
    i++;
    a.end();
}).run(function () {
    t.is(i, 0); // `t` comes from the enclosing test
    t.end();
});

This fails with i being 1.
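For context, here's a minimal sketch of the unbound-method problem being described; the constructor and property names are illustrative, not AVA's actual internals:

function Test() {
    this._skip = false;
}

Test.prototype.skip = function () {
    this._skip = true; // `this` is whatever the call site supplies
};

var t = new Test();
var skip = t.skip;    // detached reference, as when exposed to user code
skip();               // `this` is not `t` here, so t._skip stays false
console.log(t._skip); // => false

var bound = t.skip.bind(t);
bound();
console.log(t._skip); // => true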

@kevva added the bug and priority labels on Nov 25, 2014
@kevva removed the priority label on Dec 4, 2014
@jenslind

jenslind commented Sep 7, 2015

What is .skip() supposed to do? Skip the whole test? Wouldn't it be easier to have something like:

ava.skip(function () {
...
})

So the test never runs? Or am I missing something here?

@tunnckoCore

@jenslind it won't output its title and exits immediately when it's called.
It currently can't be used with t.end() (possibly a bug). Just tried it yesterday.

@sindresorhus
Member

@jenslind It's supposed to skip the assertion, but right now it tries to skip the whole test, which is clearly wrong.

Going forward:

@Qix-
Contributor

Qix- commented Sep 8, 2015

What's the advantage of using ava.skip? Why not just comment it out? I suppose I'm missing the point.

@sindresorhus
Member

Yeah, I'm not totally sold on its merits either, but here's the reasoning I've heard:

As for skipped assertions: they're better than commenting out, since you won't have to change the t.plan() count when you want to reactivate the assertion.
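For illustration, a sketch of that reasoning; parse is a stand-in for code under test, and the assertion-level skip syntax shown is an assumption, not a settled API:

var test = require('ava');

function parse(s) {
    return s.toUpperCase(); // stand-in for the code under test
}

test('parser handles both inputs', function (t) {
    t.plan(2);
    t.is(parse('a'), 'A');
    // A skipped assertion still counts toward the plan of 2, so
    // re-enabling it later needs no edit to t.plan(). Commenting it
    // out instead would mean changing the plan count twice.
    t.skip.is(parse('b'), 'B'); // assumed syntax for an assertion-level skip
    t.end();
});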

Happy to hear arguments either way.

@Qix-
Contributor

Qix- commented Sep 8, 2015

I suppose I'm more of an orthodox "no useless crap in Git" kind of person. If you take it out, it's clear what happened in the diff, and you can put it back in whenever you need it, thanks to a thing called Git. It's the same reason I don't commit commented-out code in the first place.

That's just me, though.


However, t.skip() or something to that effect, in the event a test doesn't need to be run (e.g. an optional dependency not being present), would make more sense - but still no sense.

The big reason behind my conclusion on this is Travis. If you have t.skip() in any sort of conditional, that introduces the chance that your test and Travis' test are going to be checking for inconsistent things.

This is kind of the point of tests - consistently looking for literally the exact same thing across all systems, every time, to spot inconsistencies with the tested code.
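A hypothetical example of that anti-pattern; tryRequire and the addon module are stand-ins, not real APIs:

var test = require('ava');

function tryRequire(name) {
    try { return require(name); } catch (err) { return null; }
}

var addon = tryRequire('my-native-addon'); // hypothetical optional dependency

test('uses the optional native addon', function (t) {
    if (!addon) {
        // Silently diverges: your machine runs the assertion below,
        // Travis (without the addon) skips it, and both runs are green
        // while checking different things.
        t.skip();
        return t.end();
    }
    t.is(addon.double(2), 4);
    t.end();
});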


I'd say the argument for any sort of skip functionality is weak, at best. I'm more or less decided on ava.skip, less on t.skip, so I'd love to see use cases for the latter (or former, too, I suppose).

sindresorhus added a commit that referenced this issue Sep 8, 2015
can add it back correctly when #9 is resolved
@MadcapJake

Despite your strong argument, @Qix-, I think there is still a case for t.skip(), one that was mentioned in the referenced tape issue: test-driven development. Writing skipped tests and sharing them with your team lets other developers see how you intend the functionality to work, then remove the skip when they implement the code needed for the test to pass, or change the test to fit your team's current API.

I do agree, though, that using t.skip() in any kind of conditional way is definitely a big no-no. And more broadly, you'd probably want to avoid t.skip() by the time your project uses any kind of CI service.

@Qix-
Contributor

Qix- commented Sep 8, 2015

Idk, I still feel strongly against it. That's what TODOs are for. Though I see your point.

If there is a means by which you convey to a team "Hey, here is a test that should work, but doesn't currently - TDD that thing up!", then skip is a misnomer. It should be indicated, clearly, that that is the intent behind the test being, for all intents and purposes, commented out.

Unfortunately, there isn't a great way to check whether or not .skip() is being called in an appropriate manner (i.e. at the beginning of the test), other than to remove t.skip() and only keep ava.skip(). That would be the most sensible common-ground solution.

Though just to make it clear, I think the whole skip-a-test thing is begging for fragmentation, inconsistencies across systems, and general confusion.

@vadimdemedes
Contributor

@Qix- I mostly agree with the points you mentioned, but I think there should at least be an ava.skip() method, for one simple reason: to avoid the constant comment/uncomment loop when you need some test to temporarily not be executed. The .skip() method should not be seen as a way to make a failing test suite succeed; it should be seen only as a convenience method.

AVA should still display the skipped test, but make it loud and clear that the test was skipped.

Actually, your conversation gave me an idea for one more unusual feature we could implement in AVA. A code example is worth a thousand words, so here it is:

var test = require('ava');

test('regular test', function (t) {
  t.end();
});

test.warning('text of the warning', 'failing test', function (t) {
  t.true(false);
  t.end();
});

test.skip('skipped test', function (t) {
  t.end();
});

test.todo('test to implement soon', function (t) {
  t.end();
});

Output:

(screenshot of AVA's output showing the regular, warning, skipped, and todo tests)

I call that thing a test modifier. It modifies when a test is executed and whether it should be executed at all. I have these modifiers in mind:

skip

Skip a test completely:

test.skip('some test', fn);

warning

Execute a test, but also display a custom warning message on the side:

test.warning('this test has some weird shit going on', 'some test', fn);

when

Execute a test, when testFn returns true:

test.when(testFn, 'some test', fn);

browser

Execute a test only in a browser environment. Useful for libraries that support both Node and the browser, but need to test some cases only in browsers.

test.browser('some test', fn);

node

Opposite of .browser():

test.node('some test', fn);

todo

Mention that this test needs to be implemented. Useful when you come up with some condition you need to test but have no time for it right now. The test is not executed, but its title is displayed in a "TODO" section at the end of AVA's output (see screenshot above).

test.todo('test to be implemented', fn);

Let me know what you guys think!

@vadimdemedes
Contributor

I know this is something very new and unusual, but so is AVA! It needs to be different, otherwise it will end up with no major advantages over tap/tape (aside from concurrent execution). I am not pushing it, let's discuss!

I think this will make tests more verbose and, as a result, clearer and more understandable.

@tunnckoCore

@vdemedes it looks great! I was also thinking about something like this a few months ago, but I think .warning would look better as t.warning:

test('regular test', function (t) {
  t.warning('custom warning message')
  t.is(true, true)
  t.end()
})

This would allow it to be used in any type of test - regular, todo, node, browser.

test.todo('test to implement soon', function (t) {
  t.warning('but hey, be careful')
  t.end()
})

@vadimdemedes
Contributor

@tunnckoCore the idea behind test modifiers is to modify something about the test itself, like when it executes or whether it executes at all. If those modifiers lived inside t, we wouldn't be able to influence test execution. It would also introduce inconsistency into the API: what's inside a test function should relate only to the test body, not its description.

@tunnckoCore

Yes, I can agree with you, but I don't think the signature of test.warning() is okay:

test.warning('this test has some weird shit going on', 'some test', fn);

@MadcapJake

@vdemedes 👍 That looks great!

Two possible additional modifiers:

unless

Opposite of test.when, i.e. it only executes when testFn returns false.

critical

If this test fails, do not proceed with other tests. This could be a kind of modified serial test, so that it's executed first (or at least along with the other serial tests). Hypothetical usage of both is sketched below.
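Neither modifier exists; onCI and fn are stand-ins, in the style of the proposal above:

// Runs only when the predicate returns false:
test.unless(onCI, 'opens a real browser window', fn);

// Stops the rest of the suite if this test fails:
test.critical('database schema is reachable', fn);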

@Qix-
Contributor

Qix- commented Sep 8, 2015

I like that, except for .when() and .unless() for the same reasons as conditional t.skip().

@vadimdemedes
Contributor

@MadcapJake +1 for .unless(). What you described about .critical() is exactly the same as .before() (it's already implemented).

@Qix- .when() is the same as .node() or .browser() with a predefined testFn; the last two are basically presets. What do you think about ava.test.skip()? If you like .todo(), then you should also agree with .skip() (they are identical).

@Qix-
Contributor

Qix- commented Sep 8, 2015

I agree with the idea that if you're going to have .skip(), then there is a great case for .todo() and other types of alerts for documenting why this test is being skipped. All or nothing, I suppose.

I'm still skeptical about conditional testing, though. Browser vs. Node I can understand, but when it comes down to test vs. test, I could see faulty PRs mistakenly being merged because, let's face it, maintainers rarely actually check the output of each test. Why? Because all test frameworks adhere to the idea that tests are consistent across all platforms, always, and that if a test fails then the whole process will fail.

This kind of introduces the need to check each test run on a CI platform to ensure that all the tests you really need to pass actually passed. Having conditional tests will make it hard to see whether the code being submitted passed all the necessary tests.

An argument against .browser(): which CI platform supports that? Travis doesn't support browser testing. I see the need for it, but it's going to make CI needlessly complicated, and the whole conditional-test thing introduces a very real risk of headaches and faulty or insecure code being introduced into a system - so much so that I wouldn't trust tests written with AVA, personally.


tl;dr I like the idea of todo tests/having verbose "This is why it's skipped" messages, but any conditional enabling of tests is sure to cause problems.

@MadcapJake

@vdemedes oh, I didn't realize that a failed .before() test would block the rest of the tests from proceeding. Do you think it would make sense to have .before() tests that don't block? I could see the usefulness of both.

@sindresorhus
Member

I like .skip(), .warning(), .todo(). I think warning should just be an option to the normal test methods, though, so it would work with both test() and test.serial().
Example: test('foo', {warning: 'haalp'}, t => {});

Not sold on .when(), .browser(), .node(). I feel those would be better as plain if statements around the test. I can't think of a situation where I would have needed those. Browser/Node is also premature, as we don't even support browser usage yet. A consistent test suite is also a good argument against these.

.critical()

We should instead provide a fail-fast flag for people who want to stop on the first failure (please discuss in #48). You can use test.serial and put the critical tests first, as sketched below.
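A sketch of that workaround; connectDb and the test bodies are illustrative, and it assumes serial tests run before concurrent ones:

var test = require('ava');

function connectDb() {
    return true; // stand-in for a real connectivity check
}

// Must-pass checks go first as serial tests; combined with a
// fail-fast flag (see #48), a failure here would stop the rest.
test.serial('critical: database is reachable', function (t) {
    t.true(connectDb());
    t.end();
});

test('regular test, runs concurrently afterwards', function (t) {
    t.pass();
    t.end();
});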

What you described about .critical(), is exactly the same as .before() (it's already implemented).

.before is meant for initialization, not for actual testing. See above.

@vadimdemedes
Contributor

@MadcapJake I think .before() tests that don't block should just be "regular" tests :D .before() is meant for preparation, so if preparation fails, the test suite fails too.

@Qix- @sindresorhus Ok, let's skip conditional tests for a while. If someone shows a real example where they're needed, we'll review them again, but with an actual project.

@sindresorhus I see your point about .critical(); agreed, that could be useful.

So, let's proceed with .skip(), .warning() and .todo().

@MadcapJake

Ok! I see that now! I think I was a bit thrown because it's written as test.before, which evokes it being a loophole for writing non-atomic tests 😝. It's documented and written the same as tests, so maybe it would be a good idea to separate it out and add a little warning note.

@vadimdemedes
Contributor

@MadcapJake No no, the tests are atomic. But the .before() preparation does not have anything to do with testing functionality; it is just a preparation step.

Here's a real-world .before() use case. I was writing an etcd client, and there's also an etcd server; to test my client, the server must be running. So I use .before() to start a Docker container with an etcd server inside, and .after() to stop that container.

@MadcapJake

Right. I understand what it does now; I just meant that it's a bit confusingly laid out in the docs, and the API is so similar to tests that it can be a bit deceiving.

@sindresorhus
Member

I'm going to improve the before/after docs soon.

@schnittstabil

I don't think browser and node are very useful - what will be next? firefox, ie, rhino, ie8, node>=0.10 …?

Personally, I prefer feature detection over environment detection:

test.when(() => typeof Promise !== 'undefined', 'handle a resolved promise', t => {}); // avoids a ReferenceError when Promise is missing

But I think the following would be even more flexible:

var hasPromise = typeof Promise !== 'undefined'; // avoids a ReferenceError
test[hasPromise ? 'serial' : 'skip']('handle a resolved promise', t => {});

// Sadly there is no test.concurrent:
test.concurrent = test;
test[hasPromise ? 'concurrent' : 'skip']('handle a rejected promise', t => {});

Use cases are libraries like async-done:

Handles completion and errors for callbacks, promises, observables, child processes and streams

If one wants to test the library in environments without, e.g., Promise support, some of the tests have to be skipped.

@sindresorhus modified the milestone: 0.6.0 on Nov 11, 2015
@sindresorhus changed the title from "Skip test doesn't work" to "Test modifiers" on Nov 11, 2015
@sindresorhus added the enhancement label and removed the bug label on Nov 11, 2015
@sindresorhus
Member

Status update: test.skip() was just implemented. You can follow the test.only() status in #132. We'll defer the other modifiers until later, as we have more important things to focus on at the moment. Contributions are of course always welcome if you want something done sooner ;)
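For reference, the shape of the implemented modifier; the title and body here are made up:

var test = require('ava');

// Reported as skipped in the output, but the body never runs:
test.skip('re-enable after the parser rewrite', function (t) {
    t.is(parse('a'), 'A'); // never executed
    t.end();
});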

jamestalmage added a commit to jamestalmage/ava that referenced this issue Nov 22, 2015
jamestalmage added a commit to jamestalmage/ava that referenced this issue Nov 23, 2015
@sindresorhus modified the milestones: 0.6.0, 0.7.0 on Nov 24, 2015
jamestalmage added a commit to jamestalmage/ava that referenced this issue Dec 3, 2015
jamestalmage added a commit to jamestalmage/ava that referenced this issue Dec 3, 2015
jamestalmage added a commit that referenced this issue Dec 3, 2015
Add ability to skip individual assertions. Fixes #9.
@sindresorhus reopened this on Dec 3, 2015