Allow asynchronously skipping tests #8604
Comments
I struggle to see the value of this feature; you essentially want support for a sort of pseudo false-positive state when the service is unavailable. By skipping in a CI environment you will receive no notification of this state without explicitly going in and checking, so what is the difference between just leaving the test as a pass when the service is unavailable? |
My 2 cents on this matter (quite an interesting thing to think about, btw): @mattphillips I guess there might be no practical difference, but the semantic difference is huge. This matters if you want to use the information produced by Jest to determine what happened during the build.

A practical example: consider a test which depends on an external service. I push new code which would make that test fail and introduce a regression. When CI runs the checks on the code I've pushed, the service was not available, so the test is marked as skipped; the report then tells me that the test did not run, rather than (incorrectly) that my code is fine. The false positive state, as you have described, would be to have a test which is actually not running but being marked as passed.

Now, I can understand the appeal for this feature as it has a real-world correlation, but one might argue that introducing it in Jest would encourage — or at least acknowledge that it is common to write — tests that aren't deterministic (i.e. tests which rely on something you do not have control over). On the other hand, there doesn't seem to be any reasonable manner to implement this outside of the test framework itself.

To sum it up: I'd say this is a valid feature, even though it's encouraged only for a minority of cases. Since in those cases there doesn't seem to be a workaround which is "good enough" (or precise enough), I'd say it would be ok to have it in. |
What about some cascading? If I wrote a test for 3rd-party service availability and it failed, the tests that depend on that service could be skipped automatically. Feels much better than seeing hundreds of failing tests. |
Thanks @lucasfcosta - that's a great summary of the problem. I would also add that the "conditional skipping" can be environmentally aware. For example, in my environment, there are certain tests which are skipped only if they are running in a development environment (ex, tests which require a certain amount of local setup), but required in a CI environment:

describe('some complicated service', () => {
  beforeAll(async () => {
    // Skip only in local development, and only when the service is not reachable.
    const shouldSkip = (
      process.env.NODE_ENV === 'development' &&
      !(await checkServiceAvailability())
    )
    if (shouldSkip)
      // Hypothetical API: Jest has no asynchronous skip today.
      return this.skip(`Skipping tests because ComplicatedService is not running (hint: run it with ./complicated-service)`)
  })

  // … rest of the tests …
}) |
@mattphillips What would your solution be for platform-specific tests? It would be wildly inappropriate to falsely mark tests as passed or failed when it doesn't even make sense to run the test. The questions we want our test frameworks to answer are: Can I run this test? Did the test pass? A boolean pass/fail cannot answer both of those questions. |
Jest can mark tests as skipped. I'm using this hack, but it only works synchronously:

const haveDb = !!process.env.DB_CONNECTION_STRING;
const testDb = haveDb ? test : test.skip; |
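Used in a suite, the alias looks roughly like this (a sketch; getDbClient is a hypothetical helper, not part of the snippet above):

// Tests are reported as skipped when DB_CONNECTION_STRING is unset.
const haveDb = !!process.env.DB_CONNECTION_STRING;
const testDb = haveDb ? test : test.skip;

testDb('reads from the database', async () => {
  const db = await getDbClient(process.env.DB_CONNECTION_STRING); // hypothetical helper
  await expect(db.ping()).resolves.toBe(true); // hypothetical client API
  await db.close();
});

The limitation is that haveDb must be computable synchronously at module load time, which is exactly what this issue is about.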
Any update on this? |
To add to @langpavel's idea, you could have an asynchronous check in a custom test environment:

const { connect } = require("amqplib");
const { EventEmitter } = require("events");
const NodeEnvironment = require("jest-environment-node");
const { logger } = require("../src/logging");
class TestEnvironment extends NodeEnvironment {
  constructor(config) {
    super(config);
  }

  async setup() {
    await super.setup();
    // RabbitMQ: record on the test globals whether AMQP is reachable.
    try {
      this.global.amqpConnection = await connect({
        username: "admin",
        password: "admin",
      });
      this.global.channel = await this.global.amqpConnection.createChannel();
      this.global.hasAmqp = true;
    } catch (err) {
      logger.warn("AMQP is not available. Skipping relevant tests.");
      this.global.hasAmqp = false;
    }
  }

  async teardown() {
    if (this.global.amqpConnection) {
      await this.global.amqpConnection.close();
    }
    await super.teardown();
  }
}

module.exports = TestEnvironment;

Then, in a test file:

import type { Channel } from "amqplib";

describe("example", () => {
  const hasAmqp: boolean = (global as any).hasAmqp;
  const channel: Channel = (global as any).channel;
  // Shadow the global `it` so the tests below are reported as skipped when AMQP is unavailable.
  const it = hasAmqp ? test : test.skip;

  it("should send/receive", async () => {
    // ...
  });
}); |
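One way to wire such an environment up, assuming the class above is saved as ./test-environment.js, is either per test file via a docblock or globally in the Jest config:

/**
 * @jest-environment ./test-environment.js
 */

// or, for the whole project, in jest.config.js:
module.exports = {
  testEnvironment: "<rootDir>/test-environment.js",
};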
I want to add a use case for programmatically skipping tests: |
I found this issue because I'm writing a series of integration tests; I'd like to write my tests so that they can run against any deployment of the API, but only run certain tests if that deployment has a feature flag enabled, e.g.:

Works:

describe('feature available in some deployments but not others', () => {
const enabled = getFeatureFlag();
(enabled ? it : it.skip)('runs a test that is only sometimes applicable', () => { /* ... */ });
});

Does not work:

describe('feature available in some deployments but not others', async () => {
const enabled = await getFeatureFlag();
(enabled ? it : it.skip)('runs a test that is only sometimes applicable', () => { /* ... */ });
});

An async describe, an async setupFilesAfterEnv, or a way to skip from inside the test would all address this; the closest workaround today is an early return inside the test, which is reported as a (misleading) pass:

describe('feature available in some deployments but not others', () => {
it('runs a test that is only sometimes applicable', async () => {
const enabled = await getFeatureFlag();
if (!enabled) { return; }
});
}); |
I'd like to add another concrete use case for the asynchronous, skip-from-within-a-test feature: I've created a package for use with Jest and Mocha that lets you create a sequence of tests which are dependent on each other. For example:

let { auth, createPost, createComment, getComments, logOut } = require("../examples/TestApp");
let { chainedTest } = require("@art-suite/chained-test");
// NOTE: This file INTENTIONALLY fails in Mocha to demonstrate how failures are handled.
const aliceEmail = "alice@test.com";
const postBody = "The quick brown fox jumped over the lazy dog.";
const commentBody = "Brilliant!";
// The return-result of this first test will be passed as the second argument
// to all subsequent tests in the chain.
chainedTest("Alice's user story", () => auth(aliceEmail))
// In "then" tests, the test's return value is passed to the next test.
// skipped: if neither this nor any dependent tests are selected by test framework
.thenIt("lets Alice create a post", () =>
createPost(postBody)
)
.softTapIt("softTapIt failures don't skip following tests", () => {
throw new Error("fake-failure in softTapIt");
})
// "tap" tests: ignores the test's return value. Instead it passes lastTestValue through.
// skipped: if neither this nor any dependent tests are selected by test framework
.tapIt("lets Alice create a comment", (post, alice) =>
createComment(post.id, commentBody)
)
.tapIt("tapIt or thenIt failures WILL skip remaining tests", () => {
throw new Error("fake-failure in tapIt");
})
.thenIt("can get the created comment from the post", (post, alice) =>
getComments(post.id)
)
// In "softTap" tests, the test's return value is ignored.
// Instead it passes lastTestValue through to the next test.
// skipped: if not selected by test framework
.softTapIt("should have only one comment by Alice", (comments, alice) => {
expect(comments.length).toEqual(1);
expect(comments[0].userId).toEqual(alice.id);
})
.tapIt("should be able to logOut", logOut);

If one of these tests fails, the rest cannot succeed. Logically, they need to be skipped. However, we don't want to report them as "passed" since they were never run. Likewise, we don't want to report them as "failed" since they didn't actually fail. What we need is a 3rd option: report the test as "skipped" (or "pending", as Mocha does). Note that the first failure will be reported as a failure, so tests will properly fail and CI/CD will fail. However, without "skip" there is no correct way to report what happened in Jest, i.e. there is no correct way to report that the rest of the tests were not run. In Mocha this works beautifully.
Thank you @wolever for championing this feature request. It's important, and I hope it makes it into Jest soon. Cheers! |
I think the chances of this ever being added to jest are extremely low. I've had a PR out for months now and we haven't even gotten any feedback on this from jest maintainers. If you need this I'd suggest you look for jest alternatives. |
This issue is stale because it has been open for 1 year with no activity. Remove stale label or comment or this will be closed in 14 days. |
I believe this is still a desired feature. Commenting to keep it open. |
… release time

When we try to release a new version, it will generate test data for a version that does not yet exist on npm. Forward compatibility testing is based on new test data and attempts to install a version of `@sounisi5011/encrypted-archive` that does not yet exist on npm. This causes the forward compatibility test to fail only at release time. To work around this, the latest version of `@sounisi5011/encrypted-archive` uses packages on local disk instead if installation from npm fails. Note: ideally, we should use the "npm view" command to skip tests for packages that do not exist on npm. However, Jest is not yet capable of skipping tests asynchronously. See jestjs/jest#8604
Another thing that I want to add to the conversation here, particularly in relation to the related request of being able to skip from within a running test:

const thingsToCreate = [ ... ]
const createdThings = []
test.each(thingsToCreate)('create %s', async (thing) => {
await createThing(thing) // Might throw error
createdThings.push(thing)
...
})
test('process one thing', async () => {
if (createdThings.length < 1) {
skip() // We can't run this test
}
await processOneThing(createdThings[0])
...
})
test('process two things', async () => {
if (createdThings.length < 2) {
skip() // We can't run this test
}
await processTwoThings(createdThings[0], createdThings[1])
...
})
test('process three things', async () => {
if (createdThings.length < 3) {
skip() // We can't run this test
}
await processThreeThings(createdThings[0], createdThings[1], createdThings[2])
...
})

And of course, this is just one example. There are plenty of cases where you might have a test that needs some combination of previous tests to have succeeded to varying degrees in order for that test to be relevant. As has already been discussed, you can always just pass or fail instead of skip, but failures clutter the results, making it harder to see the actual problem that needs to be resolved, and passing gives the false impression of success. |
Hey, jest maintainers! Any updates or feedback? |
@palmerj3 @mattphillips Any chance it can be progressed? Especially since there's a POC MR, which kind of really works. Personally, I also used that behaviour a lot in Mocha - to match tests with conditions (multiple complex tests, multiple versions of code to be tested, not every test is viable for every version). |
This would be a useful feature, especially for using Jest on integration tests, but after reading the comments from Jest contributors, I'm not confident this will move forward. As Node has recently been putting effort into providing its own built-in testing framework (node:test), I most likely won't use nor recommend Jest for new backend projects. |
@gerardolima Jest has never been a good choice for backend projects. One could argue that Jest has always been the worst choice available for backend projects. |
Btw you can use the undocumented Jest feature pending():

let a: string | undefined = undefined;
it("test1", async () => {
a = await myRequest();
expect(a).toEqual("myString");
});
it("test2", () => {
if (typeof a !== "string") return pending("Skipping test");
expect(a.length).toBe(55);
});

(I know that you don't write tests like this, but there are enough examples in this thread already.) |
This issue is stale because it has been open for 1 year with no activity. Remove stale label or comment or this will be closed in 30 days. |
I'm still interested in this |
For what it's worth, our software teams disallow Jest specifically for this reason. |
🚀 Feature Proposal
Other testing frameworks allow tests to asynchronously decide whether they should skip themselves.
For example, in Mocha:
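A minimal sketch of the Mocha behaviour (isServiceAvailable() is a placeholder for the real check):

// Note: a regular function, not an arrow, so `this` is the Mocha test context.
it('talks to the remote service', async function () {
  if (!(await isServiceAvailable())) {
    this.skip(); // the test aborts and is reported as pending
  }
  // ... assertions against the service ...
});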
Currently, however, it is impossible to asynchronously decide whether a test should be skipped in Jest.
See also the discussion in #7245.
Motivation
Some tests either depend on - or are explicitly testing - remote services which may or may not be available.
Without being able to programmatically and asynchronously decide whether tests can be skipped, there are only three options for writing these sorts of tests:

1. Decide that they will either always pass or always fail if the service is unavailable. In either case the result can be misleading (i.e. because in many cases "failure" indicates "the service is wrong", not merely "the service is unavailable", and "passing" suggests that everything is okay, which is also not necessarily true).
2. Keep them in a separate suite, one per remote service, which can be run with (for example) npm run test:service-a.
3. Use a regular expression (or similar) to include / exclude these tests from a test run (see the sketch after this list).
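A rough sketch of option 3, assuming the service-dependent specs live under a directory named for the service:

// jest.config.js: exclude service-dependent specs by default
// (the directory layout here is an assumption for illustration)
module.exports = {
  testPathIgnorePatterns: ['/node_modules/', '<rootDir>/test/service-a/'],
};

Running those specs then requires a second config, or overriding --testPathIgnorePatterns on the command line, which is exactly the kind of bookkeeping this proposal is meant to avoid.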
Example
A complete, real-world (but anonymized) example from a Mocha-based test suite:
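In rough outline (checkServiceA() and serviceAUrl stand in for the real availability probe and endpoint):

const serviceAUrl = process.env.SERVICE_A_URL || 'http://localhost:8080';

describe('service-a integration', function () {
  let available;

  before(async function () {
    available = await checkServiceA(serviceAUrl); // placeholder availability probe
  });

  beforeEach(function () {
    if (!available) {
      this.skip(); // each test is reported as pending rather than passed or failed
    }
  });

  it('accepts a well-formed request', async function () {
    // ... exercise service-a ...
  });

  it('rejects a malformed request', async function () {
    // ... exercise service-a ...
  });
});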
Pitch
This belongs in Jest core because it cannot reasonably be implemented outside of the test runner itself (compare Mocha's this.skip()).

FAQ
Why can't you use an if-statement?
A common suggestion in #7245 is to use an if-statement (or similar) to skip tests:
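Typically something along these lines (a sketch; serviceIsUp and checkServiceSync are placeholders):

// Only works if the condition can be computed synchronously at declaration time.
const serviceIsUp = checkServiceSync();
(serviceIsUp ? it : it.skip)('talks to the service', async () => {
  // ...
});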
However, this will not work when the check is asynchronous: tests must be declared synchronously, but the "is a remote service active?" check is necessarily asynchronous.
Wouldn't it be better if the tests failed/succeeded/retried/did something else?
There are situations when this is true, but (as evidenced by discussion on #7245) there are also situations where "skip tests when a remote service is not available" is a reasonable business decision (ex: #7245 (comment))