Conditional testing for platform-specific tests #380
Comments
There seem to be two ways that we could achieve this: ...
Option 1 seems the easiest to implement, whereas option 2 is more flexible.
Potential options we could use:
The problem with using ...
I have a simpler suggestion: just write a conditional test using a runtime platform check.
That's what I suggested you should try first to see if it works; if it does, it's the fastest way to do this.
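For illustration, a minimal sketch of such a conditional test might look like the following (the linux check via process.platform is an assumed example of the condition, not something decided in this thread):
test('linux-specific behaviour', async () => {
  if (process.platform !== 'linux') {
    // Not on the target platform, so there is nothing to assert
    return;
  }
  // Platform-specific assertions would go here
  expect(process.platform).toBe('linux');
});
The downside of an early return like this is that the test still counts as passed rather than skipped, which is part of what the rest of this thread works through.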
If it works, then you can proceed with things like: ... where ...
So...
The thing is, how does ...
Maybe the option you said ...
The issue with that approach is that it means we end up having to change the structure of our test directories, or having to remember which tests are for which platform. On the other hand, this may make it easier to do test load balancing, because otherwise it's possible for a test scheduler to schedule a test that won't run on a particular platform. But this would be a minor problem.
Having it programmatic at the test/describe level seems preferable.
Fair enough - I'll look into that. On the topic of load balancing though, there is jest's shard option.
The shard option requires a custom test sequencer written for jest, which is configured via https://jestjs.io/docs/configuration#testsequencer-string. There's a trade-off here: the ideal load balancer would be a "work-stealing queue", that is, one CI/CD job maintains a queue and the test CI/CD jobs pull tests from it. This ensures that no runner will be stuck waiting for the others to finish. With the sharding system you have to hope that your shards are roughly equally sized; it's possible that one shard gets 20% of the tests which take 80% of the time, while the remaining 80% of the tests take 20% of the time. The sharder/sequencer wouldn't know how long certain tests take, so it's just a guess. It is, however, better than nothing. You should link this issue with MatrixAI/TypeScript-Demo-Lib#58, I had extra details about this there. I think there's a commercial service which does the queuing: https://knapsackpro.com/integrations/javascript/jest/ci-server. But I wonder if jest supports a live queueing system. The Gitlab CI/CD would have to be configured with a job that only finishes when all tests have been pulled off the queue, or when all test jobs complete.
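For illustration only, a custom sequencer might look roughly like the sketch below (the file name and the file-size heuristic are assumptions, not anything decided in this thread). It would be referenced from the jest config via the testSequencer option, and the shards themselves are selected by running jest with --shard=shardIndex/shardCount:
// custom-sequencer.js (hypothetical file name)
const Sequencer = require('@jest/test-sequencer').default;
const fs = require('fs');

class CustomSequencer extends Sequencer {
  // Order test files by size (largest first) as a rough proxy for duration
  sort(tests) {
    return [...tests].sort(
      (a, b) => fs.statSync(b.path).size - fs.statSync(a.path).size,
    );
  }

  // Deterministically split the test files across shards for --shard=i/n
  shard(tests, { shardIndex, shardCount }) {
    return [...tests]
      .sort((a, b) => (a.path > b.path ? 1 : -1))
      .filter((_, index) => index % shardCount === shardIndex - 1);
  }
}

module.exports = CustomSequencer;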
I think for now we will just use the shard option. It should work well enough if we randomise our tests, or if we had some estimate of time to completion. I believe jest has some mechanism for this already: https://jestjs.io/blog/2016/03/11/javascript-unit-testing-performance
They did it based on file size as a proxy for how long it would take. See: https://jestjs.io/blog/2016/03/11/javascript-unit-testing-performance#optimal-scheduling-of-a-test-run
But this problem is a separate issue: MatrixAI/TypeScript-Demo-Lib#58. We can address that in typescript-demo-lib first and then port it to PK.
Using describe.skip and test.skip, the solution is simply to skip the describe/test if the condition isn't met:
function describeIf(condition, name, f) {
if (condition) {
describe(name, f);
} else {
describe.skip(name, f);
}
}
function testIf(condition, name, f, timeout?) {
if (condition) {
test(name, f, timeout);
} else {
test.skip(name, f, timeout);
}
}
This does what we want it to; however, if the top-level describe is skipped then that file won't show up in the test output (unlike skipped tests, which do show up).
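For illustration, usage of these helpers might look like this (the linux condition is just an assumed example of a platform check):
const isLinux = process.platform === 'linux';

describeIf(isLinux, 'linux only suite', () => {
  testIf(isLinux, 'does the linux-specific thing', async () => {
    expect(process.platform).toBe('linux');
  });
});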
Isn't that what we want? Whether it shows up on the console or not is a secondary concern. Most important is that it doesn't error out if a test file doesn't have any active tests. Interesting that ...
Note that: ...
Can you check that? I would have thought that the block doesn't run at all. I was actually thinking of not using describe.skip at all.
I thought that it didn't run at all either, and that the tests are skipped if the describe is skipped, but I can experiment.
So it looks like anything directly inside a describe.skip block still runs, while the hooks and the test itself are skipped:
describe.skip('describe', () => {
console.log('in the describe');
beforeAll(() => {
console.log('in the beforeAll');
});
beforeEach(() => {
console.log('in the beforeEach');
});
afterEach(() => {
console.log('in the afterEach');
});
afterAll(() => {
console.log('in the afterAll');
});
test('test', () => {
console.log('in the test');
});
});
Seems like it would be better to avoid using describe.skip. You just don't do anything at all if the condition doesn't pass.
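As a minimal sketch, that variant of describeIf would simply register nothing when the condition fails (illustrative only; as the comment below notes, a file left with no registered describe/tests brings its own problem):
function describeIf(condition, name, f) {
  // Register the describe only when the condition holds; otherwise do nothing at all
  if (condition) {
    describe(name, f);
  }
}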
As mentioned here in #380 (comment), if a file has no describe then even if you use ...
Ok, we are using ...
With the merging of #381, this issue is completed. Not sure why this wasn't auto-closed.
Specification
With our new CI/CD, which runs tests across three different platforms (Linux, macOS, and Windows), we need a way to ensure that tests designed to work only on specific platforms do not run on the other platforms.
Additional context
This will involve looking at the jest CLI options when running tests so that they can be incorporated into the lines of the CI/CD that run npm test on each platform.
TypeScript-Demo-Lib#41
Tasks