Passing tests are not in output of new testrunner #2162

Closed
dotMorten opened this issue Jan 25, 2024 · 18 comments

Comments

@dotMorten
Contributor

dotMorten commented Jan 25, 2024

Summary

The new TestRunner executable doesn't output passing tests, only failing.

Background and Motivation

I spent FOREVER trying to figure out why my unit test run kept hanging. Turns out, it just wasn't printing out passing tests, but only failing ones (of which I had none). This makes it really hard to monitor a test run in the output, and it's very different from how vstest.console would execute. By default, please also output the currently running test and the result of passing tests. This is also important when a test actually does hang/crash, so you can see which test might have caused it.

Even setting the diagnostic verbosity has no effect on this.

Proposed Feature

Match output of vstest.console.

@Evangelink
Member

Evangelink commented Jan 25, 2024

@dotMorten thanks for the input and sorry it caused some trouble. We realized that in big projects printing passed tests has a noticeable perf impact (the console effectively acts as a lock), and from what we observed internally most teams don't really look at the output (at least on CI), so we decided to print only errors by default.

I understand that for long runs it could feel like the execution is stuck.

Maybe some option to disable/enable printing would be best.

@MarcoRossignoli
Contributor

MarcoRossignoli commented Jan 25, 2024

I want to add some more reasoning behind this choice; you can find it reflected in other parts of the runner's design as well.

When we started to design it, we did a deep investigation of the pain points of the current VSTest, and we noticed that most of the time the source of the trouble was that it had been designed with only "interactive" usage in mind. By interactive I mean running tests while we're "there" watching the outcome (I run them in my console, inside VS, or in another IDE). Unfortunately this decision has a big impact on the whole stack when you want to run a whole, big suite rather than the partial set of tests of your interactive session. For big or long-lived projects the number of tests grows so much that it's no longer practical to run everything locally; we rely on "async" CI and do partial runs locally for the part of the code we're focused on.

On the other hand, tests spend the vast majority of their time running as a whole suite in CI.

So we decided to start from the opposite point of view: get the best performance and support (hostability) in CI, where we can save time and money, where we need to support complex contexts (devices where you sometimes cannot start n processes, or where you need Native AOT, etc.), and where we always run the whole suite, with the idea of adding "interactive" features where needed but without sacrificing the places where we can gain performance.

I'd say I agree with the requested feature, and we can add an opt-in way (command line argument + env var) to show passing tests in "our console display implementation". I say "our" because the underlying test platform that powers MSTest completely decouples the concept of "display", so in the future we could open it up and let users plug in their own custom UX, for instance a Spectre, Windows Forms, WPF, or web one: https://github.com/microsoft/testfx/blob/main/src/Platform/Microsoft.Testing.Platform/OutputDevice/IPlatformOutputDevice.cs#L8.

Even setting the diagnostic verbosity has no effect on this.

Diagnostics in the new runner are there for troubleshooting and redirect information only to files, or to the protocol in case of server mode (VS run). The new platform completely separates the "output display" meant for users from the diagnostic information meant for troubleshooting; it's up to the developer of each extension to decide whether to forward to both channels or only one.

@LucaCris

Console and ILogger messages are often used when implementing new things starting from the self-test. So: needed, even if optional.

@bradwilson

When we started to design it we did deep investigation of the pain points of the current VSTest and we noticed that most of the time the source of the troubles were that it was designed taking into account only the "interactive" usage of it.

I would say you have swung the pendulum too far in the other direction. Not only do I find the new dotnet test to be too sparse in interactive usage, I find it too sparse in CI usage as well. The system you have designed basically mandates that my CI test usage saves the log files into some kind of durable storage and/or ensures that I always dump those log files to the console when failure is encountered. That's additional work to make the system usable at all; right now, the behavior of dumping failure information into a log file that's immediately lost after CI is complete seems like a bad design.

That said, I am one of the people who far prefers dotnet vstest --Parallel to dotnet test because it has significantly better behavior with console output. If you move back to something that shows more information, please ensure it's better behaved than the "legacy" dotnet test output behavior, which is basically chaos and randomness right now.

@bradwilson

Console and ILogger messages are often used when implementing new things starting from the self-test. So: needed, even if optional.

The lack of TRX (because of --logger being ignored) is a pretty glaring hole right now. I presume this is slated to be resolved. The log file that's output on failure is nice because it's human formatted, but bad because it's human formatted (that is, I can't transform the .log file into a different report like I can with a TRX file).

@bradwilson

I'd say that I agree with the requested feature and we can add an "opt-in"(command line argument+env var) way to show in "our console display implementation" the passing tests.

Yes, I'd love for xUnit.net users to be able to see xUnit.net's console output here as an opt-in. They kind of get that today with --logger console;verbosity=normal though it's a mix between our output and your output. If we controlled the standard console output when just running via dotnet run, it would be great for that to be what the user could see interactively (as well as that being what ends up in the .log file, which I assume it would since that appears to be just straight console capture).

@MarcoRossignoli
Contributor

MarcoRossignoli commented Mar 2, 2024

The system you have designed basically mandates that my CI test usage saves the log files into some kind of durable storage and/or ensures that I always dump those log files to the console when failure is encountered.

Actually that's a "default" for CI usage, where you usually cannot rely on the console: when you have a lot of projects running in parallel you usually need some other report format, like TRX. For instance, once the suite starts to be "serious", errors in the console become useless, like in our MSTest use case.
[screenshot]

If 3 or 4 projects start to fail, you don't know what to do with output that isn't related to a specific project; organizing the issues in a file, or using a format like TRX or some other report, is the only way to get meaningful information.
The above sample is the UX used for years by MS teams for .NET projects; the implementation we did for the runner is "arcade" style: https://github.com/dotnet/arcade

If you want to see the errors in the console, there's a parameter for it, described here: https://learn.microsoft.com/en-us/dotnet/core/testing/unit-testing-mstest-runner-integrations#show-failure-per-test (dotnet test -p:TestingPlatformShowTestsFailure=true). This is a first version and will be improved for the new terminal logger; the default MSBuild logger is not super clear.
You can also get the default, non-captured output using this parameter: https://learn.microsoft.com/en-us/dotnet/core/testing/unit-testing-mstest-runner-integrations#show-complete-platform-output (dotnet test -p:TestingPlatformCaptureOutput=false).
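
Concretely, the two switches from the docs pages above, side by side:

```shell
dotnet test -p:TestingPlatformShowTestsFailure=true   # show per-test failures in the MSBuild output
dotnet test -p:TestingPlatformCaptureOutput=false     # don't capture the runner's console output, show it as-is
```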

Also, given that the new runner can run tests with dotnet run, with the IDE, and with a simple F5, I don't know whether a developer will move to the console that often and use dotnet test for the "interactive"/development loop (in real projects development is usually pretty focused on part of the code base, and we rely on CI for the global check). Personally I run in the console less than once per day, because the suite is never 10 or 20 seconds long, so I focus my development loop on specific tests; when I run everything I end up in the scenario above where I need some organization.

That said, the current implementation is not mandatory. It's an "extension" like any other that simply plugs into the default dotnet test infrastructure (https://www.nuget.org/packages/Microsoft.Testing.Platform.MSBuild), so it can be substituted with something custom.
We can also discuss the defaults, which at the moment, for the reasons I explained above, are more "CI" oriented.

The lack of TRX (because of --logger being ignored) is a pretty glaring hole right now

TRX is provided as an extension; you only need to add it to the project: https://learn.microsoft.com/en-us/dotnet/core/testing/unit-testing-mstest-runner-extensions#test-reports
The new platform consists of a core library (https://www.nuget.org/packages/Microsoft.Testing.Platform) without any extensions or dependencies. We don't want to end up in a corner where we take dependencies and then have no solution for users. The new platform is designed to run anywhere: any form factor, any device where .NET is supported, today and in the future. It's up to adapter authors and project templates to decide what to add by default.

On this page you can find the currently available extensions: https://learn.microsoft.com/en-us/dotnet/core/testing/unit-testing-mstest-runner-extensions
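
As a concrete example, a rough sketch of enabling TRX (the package and switch names are the ones from those extension docs; MyTests is a placeholder project, so double-check against your version):

```shell
# Add the TRX report extension to the test project, then ask for a TRX report
# when invoking the test executable directly.
dotnet add package Microsoft.Testing.Extensions.TrxReport
dotnet run --project ./MyTests/MyTests.csproj -- --report-trx
```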

Yes, I'd love for xUnit.net users to be able to see xUnit.net's console output here as an opt-in.

As said above, we plan to add something like an --interactive option that tells all the extensions that the user wants interactive usage, so every extension point can decide what to print and how to behave. For instance, the default output device (the console) will print more information, and maybe the underlying adapters can take that parameter and print more information too. Also keep in mind that in the new platform the output is "an extension" and is not fixed. We provide a default generic console output, but if there is enough interest from test framework authors we can open up that extension and give total freedom over the "output device", so you could take complete control of the output. Again, we decided not to open everything by default, only the extension points already available in the current VSTest (in-process and out-of-process extensions). But we have more to offer.

That said, I am one of the people who far prefers dotnet vstest --Parallel to dotnet test because it has significantly better behavior with console output

The output here is "a bit better" because you're not actually using MSBuild at all, so there's a bit less parallelism and a bit more synchronization.

@bradwilson

bradwilson commented Mar 5, 2024

Actually that's a "default" for CI usage where usually you cannot rely on the console because when you have a lot of projects that run in parallel usually you need some other report format like trx.

This is an assumption based on the size of the projects that you're building, I suspect, rather than one driven by what the average behavior might be.

Also the fact that the new runner can run test with the dotnet run with IDE and with a simple F5 I don't know if a developer will move to the console so often and use the dotnet test for the "interactive"/developing loop usage

Lots of projects have build scripts that are run interactively, in addition to whatever the debugging inner loop is. I ask people to run ./build before submitting PRs, because it performs all the operations that a PR build will end up running. If there is a switch that I can opt into like --interactive for that scenario, then great, but the existing (lack of) output would be a showstopper.

Also, there's basically zero chance someone will F5 on a test project rather than use Test Explorer if they're already accustomed to using Test Explorer.

dotnet test -p:TestingPlatformShowTestsFailure=true

The output from this is unacceptable as-is in my opinion. Compared to dotnet vstest (which verges on "bad" but far more acceptable):

[screenshot]

I'm hoping the plans include --interactive having a better output (ideally better than everything that exists today, because I have complaints for both of dotnet test and dotnet vstest). Personally my ideal here would be "let the unit test framework decide what to output" so that users of xUnit.net v3 will get our output and not your output, and be able to use our option switches and not yours.

@MarcoRossignoli
Contributor

This is an assumption based on the size of the projects that you're building, I suspect, rather than one driven by what the average behavior might be.

The spectrum of applications is very wide: we go from libraries like runtime/xunit to services and devices, from small, quick test suites to very big and slow ones.

The idea behind the dual model (interactive vs non-interactive) is to get the best behavior (performance, allocations, locks, etc.) and a useful UX in both scenarios for all kinds of applications, with a way to plug in a custom interface for special needs. Interface here means any UX type; the new platform could be hosted by a WinForms application and use it as the UX, or by a remote web app.

For instance, running dotnet vstest when you have dozens of (quick) modules is not negligible with the current implementation in terms of console synchronization, and if the adapter underneath implements a highly parallel model (for instance running all methods in parallel) it's even worse: the host must run as a server (multi-TFM), all that information has to be sent back through some form of serialization, plus the console calls to show the results. This can be orders of magnitude slower than the single unit test itself, so it makes sense for interactive usage, but why pay for it in CI, where our tests run for the vast majority of the time and as a whole suite?

Also, there's basically zero chance someone will F5 on a test project rather than use Test Explorer if they're already accustomed to using Test Explorer.

F5 is not there to substitute Test Explorer but to give you one more way to run your tests.
For instance, if I don't want to use Test Explorer because I have a really big suite, the "visual space" for showing tests on my monitor is limited and I don't want to scroll, I don't want to take my hands off the keyboard, or I feel it slows my workflow and I don't want to pay for discovering everything, I can simply focus on my tests by adding a filter in the launcher and use F5 for my "focused testing". If the suite is big or slow, most probably I won't run everything locally or inside VS, but in CI.
Debugging performance is better, I can easily change command line parameters and environment variables, or maybe prepend a tool; the launcher is a powerful tool that enables new scenarios.
I invite you to try this experience for a few days; you could be surprised.

Also, F5 enables new modes like the hot reload experience (https://learn.microsoft.com/en-us/dotnet/core/testing/unit-testing-mstest-runner-extensions#hot-reload), which we would like to port to Test Explorer too.

dotnet test -p:TestingPlatformShowTestsFailure=true

Yep, I agree; we're working to improve this, it's the first version. Anyway, if you want the classic view with the new dotnet test (which is not mandatory; you can always run in the old mode, with the old dotnet test running on VSTest), you can use dotnet test -p:TestingPlatformCaptureOutput=false, and this will show everything in the console like in the past.

The problem with the current and future dotnet test is that it's based on MSBuild, because the default gesture is to run it on solutions (after a first build) and not directly on DLLs (like dotnet vstest); it was designed like that for friendly basic usage.
As we know, the MSBuild UX has improved a lot with the new terminal logger and will keep improving; the MSBuild team is doing amazing work there, and in .NET 9, where it's enabled by default, we will see an improvement.
To fix this issue once and for all we should do a big update to the dotnet test part of the SDK. I'll propose something there; I have some ideas.

Personally my ideal here would be "let the unit test framework decide what to output" so that users of xUnit.net v3 will get our output and not your output, and be able to use our option switches and not yours.

We built the new platform with this in mind. Output is already pluggable. I don't know if we can completely exclude all the flags, because for instance logging/tracing lives "inside" the platform itself, but we can think of a way to allow adapters to opt out of all or specific platform options.

@bradwilson

I invite you to try this experience for a few days; you could be surprised.

This is how I've been running v3 for several years now, because it's the only thing that's available. I accept it because I don't have alternatives, but not necessarily because it's the way I prefer to run tests. My preferred way disappeared a few years ago when TestDriven.NET decided to stop updating after VS 2019, to the point where I've been repeatedly tempted to resurrect something very much like it. So you're preaching to the choir here. 😁

@OoLunar

OoLunar commented Mar 7, 2024

Personally I'd like to see a test summary, at minimum, when all tests have completed running. We used to have something like this:

Passed tests: 8. Failed tests: 1. Skipped tests: 0. Total tests: 9

Personally I think this is much more helpful than just a completed build. As someone who recently updated and used MSTest for the first time in a while, I thought dotnet test was just building my project instead of actually running my tests. Having 9 tests pass on their first try, it's not unreasonable to think they were never run to begin with...

I spent FOREVER trying to figure out why my unit test run kept hanging. Turns out, it just wasn't printing out passing tests, but only failing ones

I also think that there should be some form of UI to inform the user that the program is still successfully running.

We realized that in big projects printing passed tests has a noticeable perf impact (the console effectively acts as a lock)

Is... this not something multithreading can fix? I.e., having a single thread manage the console via a singleton class while the other tests update their status via said class? Please forgive my ignorance if I'm missing something incredibly obvious here.
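
For illustration, a minimal sketch of that idea (not testfx code, just the general shape of a single-writer console reporter):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Illustrative only: one dedicated task owns the console; test threads just
// enqueue strings and never contend on the console lock themselves.
sealed class ConsoleReporter : IDisposable
{
    private readonly BlockingCollection<string> _queue = new();
    private readonly Task _writer;

    public ConsoleReporter()
    {
        // Single consumer: only this task ever touches the console.
        _writer = Task.Run(() =>
        {
            foreach (string line in _queue.GetConsumingEnumerable())
                Console.WriteLine(line);
        });
    }

    // Called from any test thread; does not block on Console.WriteLine.
    public void Report(string testName, bool passed)
        => _queue.Add($"{(passed ? "passed" : "failed")} {testName}");

    public void Dispose()
    {
        _queue.CompleteAdding();
        _writer.Wait();
        _queue.Dispose();
    }
}
```

This removes the contention between test threads, though as the next reply points out, the raw cost of writing each line to the console remains.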

@bradwilson

bradwilson commented Mar 7, 2024

Is... this not something multithreading can fix?

I can run thousands of tests in a second or less. If I had to print out succeeding test names for each, regardless of pushing them onto a separate thread, the total runtime would balloon just waiting for the console output alone. The problem here isn't the coordination, it's the fact that console output is inherently slow.

Throw in the additional complication of wanting/needing to funnel your console output through MSBuild and the time goes up even higher.
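
To make that concrete, an illustrative micro-measurement (not from this thread; timings vary a lot by terminal and CI host):

```csharp
// Write one line per "passing test" and time it. On many terminals this alone
// can take a noticeable fraction of a second or more for a few thousand lines,
// i.e. comparable to thousands of fast unit tests. Names here are synthetic.
using System;
using System.Diagnostics;

var sw = Stopwatch.StartNew();
for (int i = 0; i < 5_000; i++)
    Console.WriteLine($"passed MyTestSuite.Test{i}");
sw.Stop();
Console.Error.WriteLine($"5,000 console lines took {sw.ElapsedMilliseconds} ms");
```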

@thomhurst
Contributor

Just would like to put my two cents in here. I understand that the console is slow, and printing out passing tests will slow things down, but a UI or progress message at least would be helpful for those that think their program is hanging.

Say you had 5000 tests. I don't expect you to send a new console message on each and every one that completed.
You could have a Console writer that just writes every x (milli)seconds. The extra stress on the console should be quite minimal then as you're not requesting output very often?

And the summary at the end I think needs to be there. I like to see that xxx tests passed just for my own sanity so I know everything is good.

@elharo

elharo commented Jun 11, 2024

JUnit CLI used to print a single dot for each test. GUI test runners commonly show a growing colored bar, green until the first test fails and then red. Either works. Printing a full line for each passed test just makes the failing tests harder to find.

@MarcoRossignoli
Contributor

@Evangelink I think we could plan this one for next sprint/s.

@nohwnd
Member

nohwnd commented Jun 12, 2024

That would be great. I am all for perf, but also all for a comfortable UI that keeps us informed.

You could have a Console writer that just writes every x (milli)seconds.

We have this in VSTest, it is not enabled, but when it was in place it was not very useful because it was tied to time, and it just always ticked.

I think a better option is to:

  1. Write completed assemblies when they run all their tests successfully.
  2. Write errors when there are any.
  3. Write the test count to a progress reporter when there is no output for x seconds.
    • When there are just 1 (or 3?) tests running, write which tests those are.
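
A minimal sketch of point 3 above (illustrative only, not the actual 1.4 implementation): print failures immediately, and print a summary line only when the run has been silent for a while.

```csharp
using System;
using System.Threading;

// Illustrative only: failures print right away; a timer prints a progress line
// when nothing has been written for longer than the idle threshold.
sealed class IdleProgressReporter : IDisposable
{
    private readonly Timer _timer;
    private long _lastOutputTicks = DateTime.UtcNow.Ticks;
    private int _passed, _failed, _skipped;

    public IdleProgressReporter(TimeSpan idleThreshold)
    {
        _timer = new Timer(_ =>
        {
            var last = new DateTime(Interlocked.Read(ref _lastOutputTicks), DateTimeKind.Utc);
            if (DateTime.UtcNow - last >= idleThreshold)
                Console.WriteLine($"still running... passed: {_passed}, failed: {_failed}, skipped: {_skipped}");
        }, null, idleThreshold, idleThreshold);
    }

    // Tests report outcomes here; only failures produce immediate output.
    public void Record(string testName, string outcome)
    {
        switch (outcome)
        {
            case "passed": Interlocked.Increment(ref _passed); break;
            case "skipped": Interlocked.Increment(ref _skipped); break;
            default:
                Interlocked.Increment(ref _failed);
                Console.WriteLine($"failed {testName}");
                Interlocked.Exchange(ref _lastOutputTicks, DateTime.UtcNow.Ticks);
                break;
        }
    }

    public void Dispose() => _timer.Dispose();
}
```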

Evangelink added this to the 3.6.0 milestone Jun 17, 2024
nohwnd self-assigned this Jul 16, 2024
nohwnd added the sprint label Jul 16, 2024
@Evangelink
Member

Hey there!

We are working on a live reporter for v1.4. @nohwnd can give more information and will surely write a nice article about it, but here is a preview of it:

[screenshot]

@nohwnd
Member

nohwnd commented Aug 30, 2024

The new progress bar and ANSI-based output will ship in 1.4. I split the reporting of in-progress tests into a separate issue so we can fine-tune the experience between single-DLL and multiple-DLL runs.

There is an option to show passing tests, but it is not the default.

dotnet/docs#42328 (this will soon be on learn.microsoft.com)

This PR description has examples of what the reporting looks like: #3292

The views that show the assembly, or a per-assembly summary, are not enabled yet; they are waiting for integration with dotnet test.

Features:

A "progress" bar that shows counts of passed, failed and skipped tests. In ANSI mode the progress is updated live on the bottom of the screen, in non-ansi mode the progress is output repeatedly every 3 seconds.

[screenshot]

Colored stack traces, with links to files, and (optional) relative paths:

[screenshot]

Test run summary:

[screenshot]

Artifacts report:

[screenshot]

Optionally showing passed tests:

[screenshot]

Optionally showing a link to the assembly the test comes from:

[screenshots]

Optional per-assembly summary:
[screenshot]
