
Expect test to fail #8317

Closed
rmehlinger opened this issue Apr 12, 2019 · 8 comments

Comments

@rmehlinger

🚀 Feature Proposal

Hi folks, I would like there to be a way to write a test that is expected to fail. Essentially, this would flip the outcome of the test from success to failure and vice versa.

Motivation

This came up when I was writing some code to auto-fail tests that call console.error. I want to verify that tests that call console.error fail, but there is no way to do so--the test is supposed to fail, but failed tests would in turn break my build. Note that unfortunately there is no way to write this using an expect and condition--I am programmatically failing the tests using expect.assertions(Infinity), so there's nothing I could mock out.
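For context, a rough sketch of the kind of setup described above (this is an assumption about the shape of the code, not the exact implementation):

// Rough sketch (assumed, not the exact setup): fail any test that calls
// console.error by forcing an assertion count that can never be satisfied.
beforeEach(() => {
  jest.spyOn(console, 'error').mockImplementation(() => {
    expect.assertions(Infinity);
  });
});

afterEach(() => {
  console.error.mockRestore();
});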

Example

describe("Ensure tests fail when they're supposed to", () => {
  itShouldFail('because 1 is not 0', () => {
    expect(1).toBe(0);
  });
});

Pitch

One of the scariest things as an engineer is a test that passes when it should fail. This is unfortunately pretty common when dealing with asynchronous JavaScript. As such, it's occasionally a good idea to ensure that a certain kind of behavior always generates a test failure. While the above example may seem trivial, it's actually useful to ensure that tests that are supposed to fail actually fail.

This would also be useful for testing the internal behavior of Jest.

@scotthovestadt
Contributor

I strongly agree with your pitch: you need to test for failing tests, too. However, I'm not sure I agree with your suggested approach.

Internally, what Jest does to test failing tests is actually run an e2e-style test where an entirely new Jest process is spun up and the failing test is run for real. Then we read the output from that and make sure the test failed in the correct way.

For individual assertions, of course you can verify an error was thrown or a promise was rejected easily within a test.

I'm not sold on the value of a passes-if-fails style test function in Jest. How is it any different from expecting tests to succeed? It could be bugged too!

I feel like if you want to verify a test fails, you should do it for real: make a command that runs the subset of tests you expect to fail and verify that the output includes their failures.
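For illustration, a minimal sketch of that "do it for real" approach (the file name and npx invocation are assumptions):

// Minimal sketch (file name is an assumption): run the known-failing test in
// a separate Jest process and verify the run actually reported a failure.
const { spawnSync } = require('child_process');

const result = spawnSync('npx', ['jest', 'should-fail.test.js'], {
  encoding: 'utf8',
});

// Jest exits non-zero when any test fails, so a zero exit code here means the
// test that was expected to fail unexpectedly passed.
if (result.status === 0) {
  throw new Error('Expected should-fail.test.js to fail, but it passed');
}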

If you want to verify that a function throws, that it isn't called, that a promise is rejected, etc.: use the existing assertions for that and make the test pass.

Separate concerns.

@rmehlinger
Author

I feel like if you want to verify a test fails, you should do it for real: make a command that runs the subset of tests you expect to fail and verify that the output includes their failures.

This makes sense. Perhaps it would be possible to add a flag for more structured test output, such as JSON, to make it easier to parse the output and validate what's passing and what's failing?

@scotthovestadt
Contributor

scotthovestadt commented Apr 12, 2019

Yup, that's how we process test results at Facebook: https://jestjs.io/docs/en/cli#json

You can enable it with --json.

Currently that allows you to access all of the data about results when they finish running, but I'm also working on a realtime version that will allow you to stream the results as JSON lines:
#8242
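For completeness, a small sketch of consuming the --json output (the field names follow Jest's documented JSON result format; the file path and npx invocation are assumptions):

// Sketch: run Jest with --json and inspect the aggregate result counters.
const { spawnSync } = require('child_process');

const run = spawnSync('npx', ['jest', 'should-fail.test.js', '--json'], {
  encoding: 'utf8',
});

// With --json, Jest writes the results to stdout even when the run fails.
const report = JSON.parse(run.stdout);
console.log(`passed: ${report.numPassedTests}, failed: ${report.numFailedTests}`);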

@alistairnewpark

XFAIL is often used to mark a bug that is unfixed. The test is in place, but the fix is not done yet. I imagine something like:

test.xfail(…);

Similar to the test.todo(…) placeholder and test.skip(…).

test.xfail(…) would show as a separate row in the test results, although it is treated as a pass for the purpose of continuing the test run. When an XFAIL test passes, it is marked as XPASS in the test results, but counted as a fail for the test run. This highlights that the test is now working and should be changed from test.xfail(…) back to test(…).
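As a rough illustration of those semantics, a hypothetical userland helper could look like the following (xfail here is just a sketch, not an existing Jest API, and it cannot report a distinct XFAIL/XPASS status to the reporter):

// Hypothetical userland approximation of test.xfail: pass while the bug is
// still present, fail loudly (XPASS) once the test body starts succeeding.
const xfail = (name, fn) =>
  test(`[xfail] ${name}`, async () => {
    try {
      await fn();
    } catch (error) {
      return; // expected failure while the bug is unfixed
    }
    throw new Error('XPASS: this test now passes; change it back to test()');
  });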

For TAP output an XFAIL is shown as:

not ok 10 # TODO See issue #123456789

and an XPASS as:

ok 10 # TODO See issue #123456789

For reference see https://testanything.org/tap-version-13-specification.html#todo-tests

This would be a good feature to have in Jest.

This issue gives a negative test as an example and I agree that should not be a reason for using XFAIL.

@timmwagener

XFAIL is often used to mark a bug that is unfixed. The test is in place, but the fix is not done yet.

Test runners for other languages, like pytest, have this feature. I have used it there for exactly this scenario (and others), and find it a rather useful option that is currently missing in Jest.

@vith

vith commented Dec 30, 2019

ava also has such a feature:

// See: github.com/user/repo/issues/1234
test.failing('demonstrate some bug', t => {
	t.fail(); // Test will count as passed
});

Quite useful. I must have checked the Jest docs for it on 4 or 5 occasions now :(

The ability to parse the test results with some external tool is not really a replacement for the use case of contributing a known-failing test as a way of reporting a bug.

https://github.com/avajs/ava/blob/master/docs/01-writing-tests.md#failing-tests
https://github.com/avajs/ava

@willstott101

I've opened a new issue specifically about the common xfail workflow: #10030

@github-actions

This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
Please note this issue tracker is not a help forum. We recommend using StackOverflow or our discord channel for questions.

github-actions bot locked as resolved and limited conversation to collaborators May 11, 2021