Test count may not be as useful as it could be #43344
Comments
cc @nodejs/test_runner @cjihrig
This could be improved by parsing the standard output of the child processes that run each test file. There is a TODO in the code regarding implementing a TAP parser. IMO that is the best approach, but it also requires the most work. If someone really wanted to, they could implement more lightweight parsing that, for example, only parses the ending summary lines of each child process.
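For illustration, a lightweight version along those lines might look like the sketch below; the `# tests` / `# pass` / `# fail` summary-line names are an assumption about the current output rather than a documented contract.

```ts
// Minimal sketch: scrape only the trailing summary lines of one child
// process's output instead of parsing the full TAP stream.
interface SummaryCounts {
  tests: number;
  pass: number;
  fail: number;
}

function parseSummary(stdout: string): SummaryCounts {
  const counts: SummaryCounts = { tests: 0, pass: 0, fail: 0 };
  for (const line of stdout.split('\n')) {
    const match = line.match(/^# (tests|pass|fail) (\d+)$/);
    if (match !== null) {
      counts[match[1] as keyof SummaryCounts] += Number(match[2]);
    }
  }
  return counts;
}
```

Summing these per-file counts in the parent process would already give a total that reflects individual tests rather than files.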
@cjihrig happy to work on this. Could you point me to the TODO comment?
node/lib/internal/main/test_runner.js Lines 108 to 109 in adaf602
Thank you @aduh95. I'm gonna work on this 👍
I took a little look at this issue, and I was wondering if this can be approached by adding support for additional reporters other than the current TAP output. This approach could also enable passing custom reporters implemented in userland as an option to the root test runner.
I agree with @MoLow!
I am in favor of this as well.
@cjihrig I'd love to hear your feedback regarding the reporters approach.
In my head I always pictured the test runner outputting TAP via a stream and other reporters being implemented as transform streams (this part kind of depends on having the TAP parser in place).
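As a rough illustration of that shape (not the actual implementation), a reporter could be an ordinary `Transform` stream sitting downstream of the TAP output; the `ok` / `not ok` line matching below is only an assumption made for the example.

```ts
import { Transform } from 'node:stream';

// Sketch of a reporter as a Transform stream: TAP text in, a terse
// dot-style report out. Assumes results arrive as `ok` / `not ok` lines,
// which is an assumption for illustration only.
const dotReporter = new Transform({
  transform(chunk, _encoding, callback) {
    const out = chunk
      .toString('utf-8')
      .split('\n')
      .map((line: string) => {
        if (/^not ok /.test(line)) return 'x';
        if (/^ok /.test(line)) return '.';
        return '';
      })
      .join('');
    callback(null, out);
  },
});

// Hypothetical wiring once the runner exposes a TAP stream:
// tapStream.pipe(dotReporter).pipe(process.stdout);
```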
@MoLow I am happy to give it a shot. Could you provide a high-level design so I have a bigger picture of the different pieces?
This would totally be doable once the TAP parser is done. The current AST has all the info to run it through a codegen and spit out virtually any format.
I totally understand why implementing reporters as a transform stream makes sense; my two arguments don't necessarily contradict that:
Test runners in the ecosystem don't output JSON by default - ever, afaik. A number of them output TAP by default, though. TAP is the best choice.
I also think that we need to think in terms of streaming output, which
ok, I've got your point :)
I've been experimenting with the new test runner. Going with TAP as the native format seems to introduce unnecessary friction for consuming tools, and it sounds like the same is true internally within node. In order to make sense of the test results of a file's test run, you need a TAP parser. In order to implement a proper TAP parser, you need a YAML parser, and parsing YAML correctly is very non-trivial. In implementing a TAP parser myself, I've shied away from full YAML parsing and decided to implement a parser for just the subset of YAML that the test runner emits.
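For context, a parser for such a restricted subset can stay very small; the sketch below assumes the diagnostics are flat `key: value` pairs between the `---` and `...` markers, which is an assumption about the emitted subset rather than a guarantee.

```ts
// Minimal sketch: handle only a flat `key: value` subset of YAML of the
// kind assumed to appear between the `---` and `...` markers. It
// deliberately ignores nesting, quoting and multi-line values.
function parseFlatDiagnostics(block: string): Record<string, string> {
  const result: Record<string, string> = {};
  for (const line of block.split('\n')) {
    const trimmed = line.trim();
    if (trimmed === '' || trimmed === '---' || trimmed === '...') continue;
    const separator = trimmed.indexOf(':');
    if (separator === -1) continue;
    result[trimmed.slice(0, separator).trim()] = trimmed.slice(separator + 1).trim();
  }
  return result;
}

// parseFlatDiagnostics('  ---\n  duration_ms: 0.42\n  ...')
// // => { duration_ms: '0.42' }
```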
Sure, streaming JSON, but is that really the problem space here? I would imagine a stream emitting something like newline-delimited JSON, one result object per line, could be parsed pretty trivially with built-in functions (no 10k+ LOC YAML parsing lib dependency) as a stream:

```ts
const testNdJSONStream = run();

export async function* readableStreamLines(
  readableStream: NodeJS.ReadableStream
): AsyncGenerator<string> {
  for await (const tapBytes of readableStream) {
    if (!Buffer.isBuffer(tapBytes)) {
      throw new Error(`Didn't receive bytes from TAPStream as expected`);
    }
    yield tapBytes.toString('utf-8');
  }
}

for await (const resultLine of readableStreamLines(testNdJSONStream)) {
  const testResult = JSON.parse(resultLine);
}
```
Another idea I had when considering all of this: node could potentially respect the TAP 13/14 spec while side-stepping the YAML parsing problem. The TAP 13 spec basically just specifies that the contents between the `---` and `...` markers is an embedded YAML document.

What if node emitted those diagnostic blocks in YAML flow style? That subset of flow style can also be valid JSON, meaning consumers could feed the block straight to `JSON.parse`. The output would still be valid TAP 13, the embedded doc would still be valid YAML, and it would be far friendlier to parse from node.js.
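A minimal sketch of that idea, with a hypothetical diagnostic shape (the field names and the `parseFlowStyleDiagnostics` helper are illustrative, not node's actual output):

```ts
// Hypothetical TAP 13 fragment whose diagnostic block uses YAML flow style.
// Because flow style here is also valid JSON, a consumer can skip a full
// YAML parser entirely.
const tapFragment = [
  'not ok 1 - fizzbuzz returns "fizz" for 3',
  '  ---',
  '  { "duration_ms": 0.42, "failureType": "testCodeFailure" }',
  '  ...',
].join('\n');

// Extract the block between `---` and `...` and parse it as JSON.
function parseFlowStyleDiagnostics(tap: string): unknown {
  const match = tap.match(/^\s*---\n([\s\S]*?)\n\s*\.\.\./m);
  return match === null ? null : JSON.parse(match[1].trim());
}

console.log(parseFlowStyleDiagnostics(tapFragment));
// => { duration_ms: 0.42, failureType: 'testCodeFailure' }
```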
completed via #43525 |
Work in progress PR-URL: nodejs#43525 Refs: nodejs#43344 Reviewed-By: Franziska Hinkelmann <franziska.hinkelmann@gmail.com> Reviewed-By: Colin Ihrig <cjihrig@gmail.com> Reviewed-By: Moshe Atlow <moshe@atlow.co.il>
Work in progress PR-URL: nodejs/node#43525 Refs: nodejs/node#43344 Reviewed-By: Franziska Hinkelmann <franziska.hinkelmann@gmail.com> Reviewed-By: Colin Ihrig <cjihrig@gmail.com> Reviewed-By: Moshe Atlow <moshe@atlow.co.il> (cherry picked from commit f8ce9117b19702487eb600493d941f7876e00e01)
Version
v18.3.0
Platform
Darwin willmunn-2 20.6.0 Darwin Kernel Version 20.6.0: Tue Feb 22 21:10:41 PST 2022; root:xnu-7195.141.26~1/RELEASE_X86_64 x86_64
Subsystem
test_runner
What steps will reproduce the bug?
I wanted to try the new test runner; being a long-term tape user, this was pretty exciting as the APIs are very similar. It seems the test summary at the end of the TAP output is recording test counts as the number of test files. I had a go at a simple fizz buzz implementation:

```js
import test from 'node:test';
import assert from 'assert';
```
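A hypothetical completion of that file along these lines (the fizzbuzz implementation and test names are stand-ins for the original, which isn't shown in full here) is enough to reproduce the behaviour:

```ts
import test from 'node:test';
import assert from 'assert';

// Illustrative stand-in for the truncated example: any file with several
// test() blocks shows the counting behaviour described below.
function fizzbuzz(n: number): string {
  if (n % 15 === 0) return 'fizzbuzz';
  if (n % 3 === 0) return 'fizz';
  if (n % 5 === 0) return 'buzz';
  return String(n);
}

test('multiples of three return fizz', () => {
  assert.strictEqual(fizzbuzz(3), 'fizz');
});

test('multiples of five return buzz', () => {
  assert.strictEqual(fizzbuzz(5), 'buzz');
});

test('multiples of fifteen return fizzbuzz', () => {
  assert.strictEqual(fizzbuzz(15), 'fizzbuzz');
});
```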
`node --test` returns the following output; note that the output suggests that only 1 test has been run. After adding another test file, I noticed that the output is actually counting the number of files, not the number of `test()` blocks or the number of assertions. If I add a `test.skip` to one of the tests, it will still report that there are 0 skipped tests.

How often does it reproduce? Is there a required condition?
No response
What is the expected behavior?
I personally think the best solution would be that the TAP output reports the number of `test()` blocks rather than the number of test files. So for my example:

What do you see instead?
Additional information
Note that my suggestion is actually different from what the tape module does, which reports counts based on the number of assertions. I personally feel that the number of `test` blocks makes the most sense.