Expect test to fail #8317
I strongly agree with your pitch: you need to test for failing tests, too. However, I'm not sure I agree with your suggested approach.

Internally, what Jest does to test failing tests is actually run an e2e-style test where an entirely new Jest process is spun up and the failing test is run for real. Then we read the output from that and make sure the test failed in the correct way. For individual assertions, of course, you can easily verify within a test that an error was thrown or a promise was rejected.

I'm not sold on the value of a passes-if-fails style. I feel like if you want to verify a test fails, you should do it for real: make a command that runs the subset of tests you expect to fail and verify that the output includes their failures. If you want to verify that a function throws, isn't called, a promise is rejected, etc.: use the existing assertions for that and make the test pass. Separate concerns.
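A minimal sketch of that "run it for real" approach, assuming the known-failing tests live under an `expected-failures/` directory (that path and the file name are made up for this example): run them in a child Jest process and assert that the child run fails.

```js
// expected-failures.e2e.test.js -- hypothetical file, a sketch only.
const { execFileSync } = require('child_process');

test('the known-failing suite really does fail', () => {
  let childRunFailed = false;
  try {
    // Only run tests whose paths match `expected-failures/`, so the child
    // process does not pick up this test file and recurse.
    execFileSync('npx', ['jest', 'expected-failures/', '--ci'], { stdio: 'pipe' });
  } catch (error) {
    // Jest exits with a non-zero status code when any test fails,
    // which execFileSync surfaces as a thrown error.
    childRunFailed = true;
  }
  expect(childRunFailed).toBe(true);
});
```

For this to work, the main Jest run would also need to skip `expected-failures/` (for example via `testPathIgnorePatterns`), so the intentionally failing tests don't break the normal build.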
This makes sense. Perhaps it would be possible to add a flag for more structured test output, such as JSON, to make it easier to parse the output and validate what's passing and what's failing?
Yup, that's how we process test results at Facebook: https://jestjs.io/docs/en/cli#json

You can enable it with the `--json` flag. Currently that allows you to access all of the data about results when they finish running, but I'm also working on a realtime version that will allow you to stream the results as JSON lines.
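Building on that, a small script could consume the JSON report and flag failing tests. This is a hedged sketch: the field names (`testResults`, `assertionResults`, `numFailedTests`, and so on) follow Jest's documented `--json` output, but verify them against the Jest version you use.

```js
// check-results.js (hypothetical name)
// Usage (assumed): jest --json --outputFile=results.json && node check-results.js
const fs = require('fs');

const results = JSON.parse(fs.readFileSync('results.json', 'utf8'));

// Print every failing assertion together with the file it lives in.
for (const suite of results.testResults) {
  for (const assertion of suite.assertionResults) {
    if (assertion.status === 'failed') {
      console.log(`FAILED: ${assertion.fullName} (${suite.name})`);
    }
  }
}

console.log(`${results.numFailedTests} failed out of ${results.numTotalTests} tests`);
process.exit(results.numFailedTests > 0 ? 1 : 0);
```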
XFAIL is often used to mark a bug that is unfixed. The test is in place, but the fix is not done yet. I imagine something like:

Similar to the

For TAP output, an XFAIL is shown as:

This would be a good feature to have in Jest. This issue gives a negative test as an example, and I agree that should not be a reason for using XFAIL.
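For readers unfamiliar with the convention: in TAP, an expected failure is typically marked with the TODO directive, so the harness reports it without counting it against the run. A made-up example line:

```
not ok 3 - parses nested config # TODO known bug, see issue #1234
```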
Test runners for other languages, like pytest, have this feature. I have used it there for exactly this scenario (and others...), and find it a rather useful option that is currently missing in Jest.
ava also has such a feature:

```js
// See: github.com/user/repo/issues/1234
test.failing('demonstrate some bug', t => {
  t.fail(); // Test will count as passed
});
```

Quite useful. I must have checked the Jest docs for it on 4 or 5 occasions now :( The ability to parse the test results with some external tool is not really a replacement for the use case of contributing a known-failing test as a way of reporting a bug.

https://github.com/avajs/ava/blob/master/docs/01-writing-tests.md#failing-tests
I've opened a new issue specifically about the common xfail workflow: #10030
This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs. |
🚀 Feature Proposal
Hi folks, I would like there to be a way to write a test that is expected to fail. Essentially, this would flip the outcome of the test from success to failure and vice versa.
Motivation
This came up when I was writing some code to auto-fail tests that call `console.error`. I want to verify that tests that call `console.error` fail, but there is no way to do so--the test is supposed to fail, but failed tests would in turn break my build. Note that unfortunately there is no way to write this using an `expect` and condition--I am programmatically failing the tests using `expect.assertions(Infinity)`, so there's nothing I could mock out.

Example
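A hypothetical sketch of what the proposed API could look like for the motivating case (the `test.failing` name is borrowed from ava and is not an existing Jest API):

```js
// Hypothetical only: `test.failing` does not exist in Jest.
// The runner would flip the outcome: the test passes because its body fails,
// and it would be reported as failed if the body unexpectedly passed.
test.failing('a test that calls console.error is auto-failed', () => {
  // The suite is set up to fail any test that calls console.error,
  // so this body is expected to fail -- which is exactly what we want to verify.
  console.error('this should make the inner test fail');
});
```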
Pitch
One of the scariest things as an engineer is a test that passes when it should fail. This is unfortunately pretty common when dealing with asynchronous JavaScript. As such, it's occasionally a good idea to ensure that a certain kind of behavior always generates a test failure. While the above example may seem trivial, it's actually useful to ensure that tests that are supposed to fail actually fail.
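To make that failure mode concrete, here is a hedged illustration (not taken from the issue) of an asynchronous test that can pass even though its expectation fails, because the promise is never returned or awaited:

```js
// Hedged illustration: depending on the Jest version and configuration, this
// test can pass even though the expectation inside .then() fails, because the
// promise is neither returned nor awaited.
test('checks the response status (but silently does not)', () => {
  fetchStatus().then(status => {
    expect(status).toBe(200); // evaluated after the test has already finished
  });
  // The missing `return`/`await` here is exactly the kind of bug an
  // expected-to-fail test would help guard against.
});

// Stand-in for some async operation; assumed for the example.
function fetchStatus() {
  return Promise.resolve(500);
}
```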
This would also be useful for testing the internal behavior of Jest.