
Allow asynchronously skipping tests #8604

Open
wolever opened this issue Jun 24, 2019 · 24 comments

@wolever

wolever commented Jun 24, 2019

🚀 Feature Proposal

Other testing frameworks allow tests to asynchronously decide whether they should skip themselves.

For example, in Mocha:

it('tests a remote service', async function() {
  if (!(await remoteService.isActive())) {
    return this.skip()
  }
  // ... test the remote service
})

Currently, however, it is impossible to asynchronously decide whether a test should be skipped in Jest.

See also the related discussion in #7245.

Motivation

Some tests either depend on - or are explicitly testing - remote services which may or may not be available.

Without the ability to programmatically and asynchronously decide whether tests should be skipped, there are only three options for writing these sorts of tests:

  1. Decide that they will either always pass or always fail if the service is unavailable. Either way the result can be misleading (i.e., in many cases "failure" means "the service is wrong", not merely "the service is unavailable", while "passing" suggests that everything is okay, which is also not necessarily true).

  2. Keep them in a separate suite, one per remote service, which can be run with (for example) npm run test:service-a.

  3. Use a regular expression (or similar) to include or exclude these tests from a test run (a config sketch follows below).
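
For illustration, option 3 can be approximated with Jest's path filtering. A minimal sketch, assuming the remote-service tests live under a hypothetical remote/ directory (the directory name and environment variable are illustrative, not from this issue):

// jest.config.js — exclude remote-service tests unless RUN_REMOTE_TESTS is set
module.exports = {
  testPathIgnorePatterns: process.env.RUN_REMOTE_TESTS
    ? ['/node_modules/']
    : ['/node_modules/', '/remote/'],
};

Even then, the decision is made once per run, not per test or per suite.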

Example

A complete, real-world (but anonymized) example from a Mocha-based test suite:

describe('example.com', () => {
  beforeAll(async function() {
    try {
      await fetch('https://example.com/api/whoami')
    } catch (e) {
      return this.skip(`Skipping tests for example.com: ${e}`)
    }
  })

  it('returns the current user', async () => { /* ... */ })
  it('does the other thing', async () => { /* ... */ })
})

Pitch

This belongs in Jest core because:

  1. It's currently supported by Mocha: https://mochajs.org/ (search for this.skip)
  2. It's impossible to implement outside of core (see elaboration below)
  3. The related discussion in "Allow to skip tests programmatically" (#7245) suggests that it's important to a number of people (see, e.g., this comment, which as of this writing has 16 👍: #7245 (comment))

FAQ

Why can't you use an if-statement?

A common suggestion in #7245 is to use an if-statement (or similar) to skip tests:

describe('example.com', () => {
  const isActive = remoteService.isActive()
  if (isActive) it('returns the current user', async () => { /* ... */ })
})

However, this does not work when the check is asynchronous: tests must be declared synchronously, while the "is the remote service active?" check necessarily is not. (In the snippet above, isActive is a Promise, and a Promise is always truthy, so the test would always run.)
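
The closest workable pattern today is to resolve the check before test files are registered, e.g. in an async globalSetup, and expose the result synchronously. A minimal sketch (remoteService and the env var name are hypothetical):

// globalSetup.js — referenced via `globalSetup: './globalSetup.js'` in jest.config.js.
// globalSetup may be async, so the probe can run here, before any test file is evaluated.
const remoteService = require('./remoteService'); // hypothetical module

module.exports = async () => {
  // Environment variables set here are inherited by the test workers.
  process.env.REMOTE_SERVICE_ACTIVE = (await remoteService.isActive()) ? '1' : '';
};

Test files can then choose synchronously with const maybeIt = process.env.REMOTE_SERVICE_ACTIVE ? it : it.skip; but the reason for the skip is lost, which is part of why a first-class API would help.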

Wouldn't it be better if the tests failed/succeeded/retried/did something else?

There are situations when this is true, but (as evidenced by the discussion on #7245) there are also situations where "skip tests when a remote service is not available" is a reasonable business decision (e.g., #7245 (comment)).

@mattphillips
Contributor

I struggle to see the value of this feature: you essentially want support for a sort of pseudo false-positive state. When service A is unavailable, skip test B; what do you want to do with that information?

By skipping in a CI environment you will receive no notification of this state without explicitly going in and checking. What is the difference from just leaving the test as a pass when the service is unavailable?

@lucasfcosta
Contributor

My 2 cents on this matter (quite an interesting thing to think about, btw):

@mattphillips I guess there might be no practical difference, but the semantic difference is huge. This matters if you want to use the information produced by Jest to determine what happened during the build.

A practical example: consider a test which depends on an external service. I push new code which makes that test fail, introducing a regression. When CI runs the checks on the code I've pushed, the service is not available, but the test is marked as passed even though it didn't run. My code then fails in production, and I have no way of determining whether the test was flaky or simply didn't run.

The false-positive state, as you have described it, would be a test which is not actually running but is marked as passed.

Now, I can understand the appeal of this feature, as it has a real-world correlation, but one might argue that introducing it would encourage, or at least acknowledge as common, tests that aren't deterministic (i.e. tests which rely on something you do not have control over). On the other hand, there doesn't seem to be any reasonable way to implement this outside of the core, and it might be a deliberate decision that Jest doesn't want to be prescriptive about what its users should do. In the case described by @wolever, for example, it seems reasonable to have tests skipped asynchronously.

To sum it up: I'd say this is a valid feature, even though it applies only to a minority of cases. Since in those cases there doesn't seem to be a workaround that is "good enough" (or precise enough), I'd say it would be OK to have it in.

@langpavel

langpavel commented Aug 28, 2019

What about some cascading?
If a precondition test fails, only one error would be reported,
along with a count of skipped tests.

If I wrote a test for a third-party service's availability,
only that one would fail, and I could see the real reason the other tests didn't run.

That feels much better than seeing hundreds of failing tests.

@wolever
Author

wolever commented Sep 4, 2019

Thanks @lucasfcosta - that's a great summary of the problem.

I would also add that "conditional skipping" can be environment-aware. For example, in my environment there are certain tests which are skipped only when running in development (e.g., tests which require a certain amount of local setup) but are required in CI:

describe('some complicated service', () => {
  beforeAll(async () => {
    const shouldSkip = (
      process.env.NODE_ENV == 'development' &&
      !(await checkServiceAvailability())
    )
    if (shouldSkip)
      return this.skip(`Skipping tests because ComplicatedService is not running (hint: run it with ./complicated-service)`)
  })

  // ... rest of the tests
})

@ClaytonSmith

@mattphillips What would your solution be for platform-specific tests?

It would be wildly inappropriate to falsely mark tests as passed or failed when it doesn't even make sense to run the test.

The questions we want our test frameworks to answer are: Can I run this test? Did the test pass? A boolean pass/fail cannot answer both of those questions.

@langpavel

langpavel commented Sep 20, 2019

Jest can mark tests as skipped; I'm using this hack, but it only works synchronously:

const haveDb = !!process.env.DB_CONNECTION_STRING;
const testDb = haveDb ? test : test.skip;
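
Usage is then an ordinary, synchronously declared test (sketch):

testDb('reads a row', async () => {
  // ... exercise the database here
});

The limitation is exactly what this issue is about: the condition has to be computable without await.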

@GoMino

GoMino commented Oct 20, 2019

Any update on this?

@jfairley

jfairley commented Oct 29, 2019

To add to @langpavel's idea, you could run the asynchronous check in a custom testEnvironment class:

const { connect } = require("amqplib");
const { EventEmitter } = require("events");
const NodeEnvironment = require("jest-environment-node");
const { logger } = require("../src/logging");

class TestEnvironment extends NodeEnvironment {
  constructor(config) {
    super(config);
  }

  async setup() {
    await super.setup();

    // rabbit
    try {
      this.global.amqpConnection = await connect({
        username: "admin",
        password: "admin",
      });
      this.global.channel = await this.global.amqpConnection.createChannel();
      this.global.hasAmqp = true;
    } catch (err) {
      logger.warn("AMQP is not available. Skipping relevant tests.");
      this.global.hasAmqp = false;
    }
  }

  async teardown() {
    if (this.global.amqpConnection) {
      await this.global.amqpConnection.close();
    }
    await super.teardown();
  }
}

module.exports = TestEnvironment;

// ...and in the test file (TypeScript), pick the test function synchronously
// based on the flag the environment set:
import type { Channel } from "amqplib";

describe("example", () => {
  const hasAmqp: boolean = (global as any).hasAmqp;
  const channel: Channel = (global as any).channel;

  // Shadowing the global `it` is fragile; a local alias is clearer.
  const maybeIt = hasAmqp ? test : test.skip;

  maybeIt("should send/receive", async () => {
    // ...
  });
});
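
For completeness, a sketch of wiring the custom environment into the Jest config, assuming the class above is saved at a hypothetical test/TestEnvironment.js:

// jest.config.js
module.exports = {
  testEnvironment: '<rootDir>/test/TestEnvironment.js',
};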

@nstaromana

I want to add a use case for programmatically skipping tests:
Our product runs on multiple platforms and uses the same Jest tests for each platform. Sometimes a platform implementation is not yet complete; the product reports this dynamically via an API (but only after setup in beforeAll()).
Our tests also use snapshot matching. If a platform doesn't support a feature and the test bails out, Jest complains about the unused snapshot, resulting in a test failure.
Our current workaround is to use inline snapshots, but this is exceedingly cumbersome.
A programmatic skip() API would allow our tests to skip when a feature is not available on a certain platform, and would also stop the Jest engine from complaining about unused snapshots.
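
A minimal sketch of the problem (the feature-detection API is hypothetical):

let features;

beforeAll(async () => {
  features = await platform.getSupportedFeatures(); // hypothetical API, known only after setup
});

test('renders the widget', () => {
  if (!features.widget) return; // bail out: reported as "passed"
  expect(renderWidget()).toMatchSnapshot();
  // When the test bails out, the stored snapshot is never consumed
  // and Jest reports it as obsolete.
});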

@MrLoh

MrLoh commented May 1, 2020

I created a proof-of-concept PR (#9944) that demonstrates how easy this is to implement in jest-circus. I think it is well worth adding this functionality to jest-circus, as it would let users implement custom bailing behaviors (#6527) or stepped tests (#8387) in user land.

@kara-ryli

I found this issue because I'm writing a series of integration tests; I'd like to write my tests so that they can run against any deployment of the API, but only run certain tests if that deployment has a feature flag enabled, e.g.:

This works:

describe('feature available in some deployments but not others', () => {
  const enabled = getFeatureFlag();
  (enabled ? it : it.skip)('runs a test that is only sometimes applicable', () => { /* ... */ });
});

This does not:

describe('feature available in some deployments but not others', async () => {
  const enabled = await getFeatureFlag();
  (enabled ? it : it.skip)('runs a test that is only sometimes applicable', () => { /* ... */ });
});

An async describe, an async setupFilesAfterEnv, or a jest.skip() method I could call in the context of a test or a before hook would all work. In the meantime, this sort of workaround is doable but ugly, because it leaves the developer wondering whether the test actually ran:

describe('feature available in some deployments but not others', () => {
  it('runs a test that is only sometimes applicable', async () => {
    const enabled = await getFeatureFlag();
    if (!enabled) { return; } // silently "passes" without testing anything
    // ... the actual assertions would go here
  });
});

@shanebdavis

shanebdavis commented Nov 13, 2020

I'd like to add another concrete use case for the asynchronous, skip-from-within-a-test feature:

I've created a package for use with Jest and Mocha that lets you create a sequence of tests which depend on each other. For example:

let { auth, createPost, createComment, getComments, logOut } = require("../examples/TestApp");
let { chainedTest } = require("@art-suite/chained-test");

// NOTE: This file INTENTIONALLY fails in Mocha to demonstrate how failures are handled.

const aliceEmail = "alice@test.com";
const postBody = "The quick brown fox jumped over the lazy dog.";
const commentBody = "Brilliant!";

// The return-result of this first test will be passed as the second argument
// to all subsequent tests in the chain.
chainedTest("Alice's user story", () => auth(aliceEmail))

// In "then" tests, the test's return value is passed to the next test.
// skipped: if neither this nor any dependent tests are selected by test framework
.thenIt("lets Alice create a post", () =>
  createPost(postBody)
)

.softTapIt("softTapIt failures don't skip following tests", () => {
  throw new Error("fake-failure in softTapIt");
})

// "tap" tests: ignores the test's return value. Instead it passes lastTestValue through.
// skipped: if neither this nor any dependent tests are selected by test framework
.tapIt("lets Alice create a comment", (post, alice) =>
  createComment(post.id, commentBody)
)

.tapIt("tapIt or thenIt failures WILL skip remaining tests", () => {
  throw new Error("fake-failure in tapIt");
})

.thenIt("can get the created comment from the post", (post, alice) =>
  getComments(post.id)
)

// In "softTap" tests, the test's return value is ignored.
// Instead it passes lastTestValue through to the next test.
// skipped: if not selected by test framework
.softTapIt("should have only one comment by Alice", (comments, alice) => {
  expect(comments.length).toEqual(1);
  expect(comments[0].userId).toEqual(alice.id);
})

.tapIt("should be able to logOut", logOut)

If one of these tests fails, the rest cannot succeed; logically, they need to be skipped. However, we don't want to report them as "passed", since they never ran. Likewise, we don't want to report them as "failed", since they didn't actually fail.

What we need is a third option: report the test as "skipped" (or "pending", as Mocha calls it). Note that the first failure will still be reported as a failure, so the suite will fail and CI/CD will fail with it. But without "skip" there is no correct way to report in Jest that the rest of the tests were never run.

In Mocha this works beautifully:

✓ Alice's user story
✓ needs to create a post
1) softTapIt failures don't skip following tests
✓ lets Alice create a comment
2) tapIt or thenIt failures WILL skip remaining tests
- can get the created comment from the post
- should have only one comment by Alice
- should be able to logOut

3 passing (11ms)
3 pending
2 failing

1) softTapIt failures don't skip following tests:
    Error: fake-failure in softTapIt
    at ChainedTest.test (test/api.js:23:11)

2) tapIt or thenIt failures WILL skip remaining tests:
    Error: fake-failure in tapIt
    at ChainedTest.test (test/api.js:33:11)

Thank you @wolever for championing this feature request. It's important, and I hope it makes it into Jest soon. Cheers!


@MrLoh

MrLoh commented Nov 15, 2020

I think the chances of this ever being added to Jest are extremely low. I've had a PR out for months now, and we haven't gotten any feedback on it from the Jest maintainers. If you need this, I'd suggest looking for Jest alternatives.

@github-actions

This issue is stale because it has been open for 1 year with no activity. Remove stale label or comment or this will be closed in 14 days.

@github-actions github-actions bot added the Stale label Feb 25, 2022
@liamdocherty10

liamdocherty10 commented Feb 25, 2022

I believe this is still a desired feature. Commenting to keep it open.

@github-actions github-actions bot removed the Stale label Feb 25, 2022
sounisi5011 added a commit to sounisi5011/npm-packages that referenced this issue Jun 3, 2022
… release time

When we try to release a new version, it will generate test data for a version that does not yet exist on npm.
Forward compatibility testing is based on new test data and attempts to install a version of `@sounisi5011/encrypted-archive` that does not yet exist on npm.
This causes the forward compatibility test to fail only at release time.
To work around this, the latest version of `@sounisi5011/encrypted-archive` uses packages on local disk instead if installation from npm fails.

Note: Ideally, we should use the "npm view" command to skip tests for packages that do not exist on npm. However, Jest is not yet capable of skipping tests asynchronously. See jestjs/jest#8604.
sounisi5011 added a commit to sounisi5011/npm-packages that referenced this issue Jun 3, 2022
… release time (#462)

@saltman424

Another thing I want to add to the conversation here, particularly in relation to the related request for test.step (and hopefully test.optional or test.step.optional, #8387): a use case for skipping from within a test that I have encountered a lot is handling more complicated versions of steps that need more ad hoc logic. For example:

const thingsToCreate = [ ... ]
const createdThings = []

test.each(thingsToCreate)('create %s', async (thing) => {
  await createThing(thing) // Might throw error
  createdThings.push(thing)
  // ...
})

test('process one thing', async () => {
  if (createdThings.length < 1) {
    skip() // We can't run this test
  }
  await processOneThing(createdThings[0])
  // ...
})

test('process two things', async () => {
  if (createdThings.length < 2) {
    skip() // We can't run this test
  }
  await processTwoThings(createdThings[0], createdThings[1])
  // ...
})

test('process three things', async () => {
  if (createdThings.length < 3) {
    skip() // We can't run this test
  }
  await processThreeThings(createdThings[0], createdThings[1], createdThings[2])
  // ...
})

And of course, this is just one example. There are plenty of cases where a test needs some combination of previous tests to have succeeded, to varying degrees, in order to be relevant. While test.step would handle the simple cases, it would be difficult, and in many cases slow, to make it work for all the different variations that come up regularly.

Of course, as has already been discussed, you can always just pass or fail instead of skipping, but failures clutter the results, making it harder to see the actual problem that needs resolving, and passing gives a false impression of success.

@AkoElvis

Hey, jest maintainers! Any updates or feedback?

@meredian

meredian commented Jun 30, 2023

@palmerj3 @mattphillips Any chance this can be progressed? Especially since there's a POC PR that actually works.

Personally, I also used this behaviour a lot in Mocha, to match tests with conditions (multiple complex tests, multiple versions of code to be tested; not every test is viable for every version).

@gerardolima

This would be a useful feature, especially for using Jest in integration tests, but after reading the comments from Jest contributors, I'm not confident it will move forward. As Node has recently put effort into providing its own built-in testing framework (node:test), I most likely won't use or recommend Jest for new backend projects.
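
For reference, node:test already supports this at runtime through the test context; a minimal sketch (Node 18+, remoteService is hypothetical):

const test = require('node:test');
const remoteService = require('./remoteService'); // hypothetical module

test('tests a remote service', async (t) => {
  if (!(await remoteService.isActive())) {
    // t.skip marks the test as skipped but does not abort it, hence the return
    return t.skip('remote service unavailable');
  }
  // ... test the remote service
});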

@albertodiazdorado

@gerardolima Jest has never been a good choice for backend projects. One could argue that Jest has always been the worst choice available for backend projects.

@albertodiazdorado

By the way, you can use the undocumented Jest feature pending if you want to skip tests based on run-time information. Since it's undocumented, who knows what will happen to it in the future:

let a: string | undefined = undefined;
it("test1", async () => {
  a = await myRequest();
  expect(a).toEqual("myString");
});

it("test2", () => {
  if (typeof a !== "string") return pending("Skipping test");
  expect(a.length).toBe(55);
});

(I know that you don't write tests like this, but there are enough examples in this thread already)
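
A caveat worth adding: pending() is a Jasmine global, so it is only available under the jest-jasmine2 runner; under jest-circus (the default since Jest 27) it is, as far as I know, not defined. A sketch of pinning the runner, assuming jest-jasmine2 is installed:

// jest.config.js — pending() comes from Jasmine, not from Jest itself
module.exports = {
  testRunner: 'jest-jasmine2',
};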


github-actions bot commented Dec 4, 2024

This issue is stale because it has been open for 1 year with no activity. Remove stale label or comment or this will be closed in 30 days.

@github-actions github-actions bot added the Stale label Dec 4, 2024
@kibiz0r

kibiz0r commented Dec 5, 2024

I'm still interested in this

@github-actions github-actions bot removed the Stale label Dec 5, 2024
@ClaytonSmith

For what it's worth, our software teams disallow Jest specifically for this reason.
