Report progress of individual test cases #6616
Sorta related: #6225. But this has an actual use case attached. |
Btw if you want to reproduce, clone |
We don't know the number of tests before executing the test files (and its …). @aaronabramov thoughts? |
I guess my mental model was that each |
yeah, we should be able to know everything about the number of tests we have once we executed the code in the test file (but before running it) |
“Why is the total number of the tests interesting?” I want to clarify that my feature request isn’t about seeing a total number of tests before they run. It’s about seeing the passed count change as each test completes. |
yeah, i agree. we don't need it. |
+1, with this I can finish a port of the ava mini reporter to Jest. Here's the current reporter interface:

```js
export default class BaseReporter {
  onRunStart(results: AggregatedResult, options: ReporterOnStartOptions) {}
  // these are actually for the test suite, not individual test results
  onTestStart(test: Test) {}
  onTestResult(test: Test, testResult: TestResult, results: AggregatedResult) {}
  onRunComplete(contexts: Set<Context>, results: AggregatedResult): ?Promise<void> {}
}
```

The ideal for me would be (note some of the types are either renamed or do not exist yet):

```js
export default class BaseReporter {
  onRunStart(results: AggregatedResult, options: ReporterOnStartOptions) {}
  onTestSuiteStart(suite: TestSuite) {}
  // new: called at the beginning of a .test() or .it() with the test meta info
  onTestStart(test: Test) {}
  // new: called at the end of a .test() or .it() with the test meta info and result info
  onTestResult(suite: TestSuite, test: Test, testResult: TestResult, results: AggregatedResult) {}
  onTestSuiteResult(suite: TestSuite, testSuiteResult: TestSuiteResult, results: AggregatedResult) {}
  onRunComplete(contexts: Set<Context>, results: AggregatedResult): ?Promise<void> {}
}
```

Until a major release we could keep calling the test suite a "test" and call an individual test an "assertion" (which pairs with some of the types we use):

```js
export default class BaseReporter {
  onRunStart(results: AggregatedResult, options: ReporterOnStartOptions) {}
  onTestStart(test: Test) {}
  onAssertionStart(assertion: Assertion) {} // note this type doesn't exist
  onAssertionResult(test: Test, assertion: Assertion, assertionResult: AssertionResult, results: AggregatedResult) {}
  onTestSuiteResult(test: Test, testResult: TestResult, results: AggregatedResult) {}
  onRunComplete(contexts: Set<Context>, results: AggregatedResult): ?Promise<void> {}
}
```
|
i'm not sure about using the word "assertion" this way. |
Yeah, that makes sense. I like "test case" and that doesn't require renaming anything. I only used "assertion" because the type of a test case result is called `AssertionResult`. Here's what we can go with then:

```js
export default class BaseReporter {
  // Called at the beginning of a run
  onRunStart(results: AggregatedResult, options: ReporterOnStartOptions) {}
  // Called at the beginning of every test file
  onTestStart(test: Test) {}
  // Called at the beginning of every .test or .it
  onTestCaseStart(test: Test, testCase: TestCase) {}
  // Called with the result of every .test or .it
  onTestCaseResult(test: Test, testCase: TestCase, testCaseResult: AssertionResult) {}
  // Called with the result of every test file
  onTestResult(test: Test, testResult: TestResult, results: AggregatedResult) {}
  // Called after all tests have completed
  onRunComplete(contexts: Set<Context>, results: AggregatedResult): ?Promise<void> {}
}
```
|
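(For illustration, a custom reporter built on hooks like these could drive a live progress line. Below is a minimal sketch assuming the `onTestCaseResult` hook and `AssertionResult` shape proposed above; it is not Jest's finalized reporter API:)

```js
// A minimal progress-reporter sketch built on the proposed per-test-case hook.
// Assumes onTestCaseResult is invoked with each AssertionResult as test cases
// finish, per the interface proposed in the comment above.
class ProgressReporter {
  constructor() {
    this.passed = 0;
    this.failed = 0;
  }

  onTestCaseResult(test, testCaseResult) {
    if (testCaseResult.status === 'passed') this.passed += 1;
    if (testCaseResult.status === 'failed') this.failed += 1;
    // Rewrite a single status line as each test case completes.
    process.stdout.write(
      `\rpassed: ${this.passed}  failed: ${this.failed}  last: ${testCaseResult.title}`
    );
  }

  onRunComplete() {
    process.stdout.write('\n');
  }
}

module.exports = ProgressReporter;
```

(Such a reporter would be wired up through the existing `reporters` config option, e.g. `reporters: ['default', './ProgressReporter.js']`.)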
Could we call it "spec"? FWIW the Jest docs call a single … |
👍 I think calling the whole file a "test" is a Facebook-ism, I've never seen this anywhere else. I'd love if Jest stopped using this terminology. |
makes sense! if we clean up test_run - all tests/test suites in all runners |
hey.. what if we renamed "test suite" to "test file"? the conversations i hear all the time (mostly outside fb) … |
@captbaritone you might have some thought on this as well |
I'm actually for "test file". I often hear a "test suite" referring to all tests in the project, e.g. "We have 100+ tests in our suite." |
Yes! I mostly agree with @aaronabramov, but I think leaving
The only thing that this leaves is what to call @SimenB I'm not a fan of "spec" since it seems tightly coupled to the Behavior Driven Development methodology encouraged by Jasmine, and I don't think Jest really prescribes/endorses BDD. |
I still think "test" is best because, well, the function is named test(). Not sure what to do about … |
so:
i like that! we already use |
I'm not crazy about leaving "test" as just "test", even though that's the name of the function. There are lots of people, Facebook people included, who have existing definitions for what "test" means to them. Additionally, for those still using … If we do opt to use "test", then maybe we can just call a … Just my $0.02. |
My choice would be "test case" and "test file". |
I started fiddling with this, and an early snag is that Should we do something about its signature at the same time, or just add more args? See https://jestjs.io/docs/en/configuration#runner-string Might make sense to expect runners to be event emitters, then we can just attach listeners instead of passing in callbacks. Could add a property (similar to /cc @rogeliog for thoughts on runner api |
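(To make the event-emitter idea concrete: a runner could extend Node's EventEmitter and emit results as it goes, instead of receiving callbacks. The event names below are invented for this sketch and are not Jest's actual runner contract:)

```js
// Hypothetical sketch: a runner that emits events instead of taking
// onStart/onResult callbacks. Event names are invented for illustration.
const { EventEmitter } = require('events');

class EmittingRunner extends EventEmitter {
  async runTests(tests) {
    for (const test of tests) {
      this.emit('test-file-start', test);
      try {
        const result = await this.runTestFile(test);
        this.emit('test-file-result', test, result);
      } catch (error) {
        this.emit('test-file-failure', test, error);
      }
    }
  }

  // Stub for the sketch; a real runner would execute the file in a worker.
  async runTestFile(test) {
    return { testFilePath: test.path, numPassingTests: 0 };
  }
}

// Jest could then attach listeners instead of passing callbacks:
// runner.on('test-file-result', (test, result) => reporter.onTestResult(test, result));
```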
Jest is really lacking in this area. Is adding custom JasmineReporters really the only option? The documentation on jasmine (in Jest) is really lacking, as Jest is rolling its own jasmine bundle. I propose documenting where users can find valid jasmine documentation for implementing this themselves. |
We're removing the jasmine coupling, so that's no way forward at all |
Looked again at this, and just went with an event emitter as that seemed more sane. However, I'm having an issue with the test running inside a worker with no way of reporting back to jest until it's done. Any ideas? @mjesun thoughts on a worker posting messages to its parent? My in-progress code, if anyone is interested: https://github.com/SimenB/jest/tree/report-more-events |
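(The worker problem above is essentially streaming IPC: the child has to post partial results as it goes instead of returning one blob at the end. Here is a generic Node.js sketch using plain `child_process`; it is not jest-worker's actual protocol:)

```js
// parent.js — fork a worker and receive per-test-case messages as they happen.
// This is the stock Node.js IPC pattern, shown only to illustrate the idea.
const { fork } = require('child_process');

const worker = fork(require.resolve('./worker'));
worker.on('message', message => {
  if (message.type === 'test-case-result') {
    console.log(`${message.status}: ${message.title}`);
  }
});

// worker.js — inside the forked child, process.send() posts back to the
// parent as each test case finishes (reportTestCase is an assumed hook):
//   reportTestCase(result =>
//     process.send({type: 'test-case-result', title: result.title, status: result.status})
//   );
```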
This is finally out in 26.2.0! Note that it only works if you're using jest-circus. |
Use Jest 26.2.0+ and jest-circus. |
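(For anyone landing here: on Jest 26 the circus runner was opt-in via the `testRunner` config option; a minimal example follows. Later Jest versions use it by default.)

```js
// jest.config.js — opt in to jest-circus on Jest 26.
module.exports = {
  testRunner: 'jest-circus/runner',
};
```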
Just tried it, and it didn't work.
Confirmed that package.json shows the jest dependency as …, ran … A particular file I have has ~30 tests, and it showed the same behavior as before. It's not listing the progress of each individual test case. Am I doing something wrong? |
I'm seeing the same problem as @tralston. I'm still seeing all tests output at once when the last one finishes, and nothing before. I've created an MWE at https://github.com/retorquere/jest-circus-test. Would appreciate feedback on what I should be doing differently. |
@retorquere there's no summary printed when running a single test, see #9420 (comment). Just duplicating your one test fixes the issue. Not sure it makes sense to hide the summary anymore now since we are able to report stuff as we go |
Then I don't understand what's different now since 26.2.2. In my experience jest has always reported per-file tests this way, even without jest-circus. My use-case is exactly as in my MWE - I generate cases based on sample files, which I now do by pre-generating a test js file per sample, and then run jest, but I'd love to get rid of that. |
I guess I misunderstood the purpose of this issue. I never really had a problem with the totals line (e.g. "6 passed, 6 total") being updated as tests passed or failed. I wanted to see, as I thought the title of the issue described, the name and pass/fail status of each individual test in each test file as it's executed.

For example, I have a small project with ~50 tests. 40 of them are in a big file, and that's the one we are working on the most. It currently takes ~30 seconds to execute (due to some slow dependencies that eventually I'll mock out). Most of the time I run jest just watching the file I'm working on. It would be really nice to see each test printed out as it's executed, not just the summary updated. Here's an example of what I'm talking about from … (Gradle test runner, but with a Mocha theme).

I realize this may not be the right issue for what I'm asking. If not, are you familiar with one like this? Or should I file a new one? I do understand that running tests asynchronously could present some challenges in displaying live results. |
@tralston Did you get anywhere with individual test reporting while they are being executed? I am looking for the same functionality. |
@dipasqualew Unless something changed (@SimenB would know best) this is still a problem for me. |
This issue was as far as I know always about "it feels like nothing is happening" which a live counter fixes. Getting each individual test to show up in the summary as they complete is a separate feature IMO. Feel free to open up an issue or send a PR. The data used for updating the counter should be enough to update the verbose reporter as results come in as well (although I'm not entirely sure how that will work with tests running in parallel - might need to use something like |
@SimenB However, the live counter only updates when the test file completes, not after each individual test. Do you think that's also out of scope of this issue and part of an extension of the verbose reporter as well? |
Practically all bug reports/feature requests are out of scope in a closed issue. 🙂 It should update after every test, if it does not please open up a separate issue with a reproduction |
This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs. |
🚀 Feature Proposal
It seems like Jest currently only reports progress when a complete suite finishes.
I propose that instead, it should report progress every second for individual test cases.
Otherwise, the progress output is confusing. "Tests: N" being a separate line in the output gives the impression that it's just as granular as "Test Suites: N" and will be incremented immediately when a test case passes.
Motivation
This is confusing for large/slow suites because you don't get any sense of how much time is left and whether tests are stuck for some reason.