Passing tests are not in output of new testrunner #2162
@dotMorten thanks for the input, and sorry it caused some trouble. We have realized that in big projects printing passed tests has a perf impact (the console acts as a kind of lock), and from what was observed internally most teams don't really look at the output (at least on CI), so we decided to print only errors by default. I understand that for long runs it could feel like the execution is stuck. Maybe an option to disable/enable printing would be best.
I want to add some more reasoning behind this choice; you can find it in other parts of the runner's design as well. When we started to design it, we investigated the pain points of the current VSTest in depth, and we noticed that most of the trouble came from it being designed only around "interactive" usage. By interactive I mean running tests while we're "there" watching the outcome (i.e. I run them in my console, inside VS, or in another IDE). Unfortunately that decision had a big impact on the whole stack when you want to run a whole big suite and not only the partial set of tests of your "interactive testing session". For big or long-running projects the number of tests grows so large that it's no longer feasible to run them all locally; we rely on "async" CI and do partial runs locally to test the code we're focused on. On the other hand, tests run for the vast majority of their time "as a whole" in CI.

So we decided to start from the opposite point of view: have the best performance and support (hostability) in CI, where we can save time and money, where we need to support complex contexts (devices where you sometimes cannot start multiple processes, or where you need Native AOT, etc.), and where we always run the whole suite, with the idea of adding "interactive" features where needed but without sacrificing the places where we can gain performance.

I'd say that I agree with the requested feature, and we can add an opt-in (command line argument + env var) way to show the passing tests in "our console display implementation". I say "our" because the underlying test platform that powers MSTest completely decouples the concept of "display", so in the future we could open it up and allow users to plug in their own custom UX, for instance a Spectre, Windows Forms, WPF, or web one: https://github.com/microsoft/testfx/blob/main/src/Platform/Microsoft.Testing.Platform/OutputDevice/IPlatformOutputDevice.cs#L8.
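To make that pluggability concrete, here is a rough sketch of what a custom output device could look like. The member names are inferred from the linked `IPlatformOutputDevice` interface and may not match the shipped API exactly, and `MyUi` is a hypothetical sink standing in for whatever UX you plug in:

```csharp
using System.Threading.Tasks;
using Microsoft.Testing.Platform.Extensions;
using Microsoft.Testing.Platform.OutputDevice;

// Hypothetical sketch: routes the platform's user-facing output to a custom
// UX (Spectre.Console, WinForms, a web page...). Verify the exact member
// shapes against the interface linked above.
internal sealed class MyCustomOutputDevice : IPlatformOutputDevice
{
    public string Uid => nameof(MyCustomOutputDevice);
    public string Version => "1.0.0";
    public string DisplayName => "My custom output device";
    public string Description => "Forwards test output to a custom UX.";

    public Task<bool> IsEnabledAsync() => Task.FromResult(true);

    public Task DisplayBannerAsync(string? bannerMessage) => Task.CompletedTask;

    public Task DisplayBeforeSessionStartAsync() => Task.CompletedTask;

    // All user-facing output flows through here; render it however you like.
    public Task DisplayAsync(IOutputDeviceDataProducer producer, IOutputDeviceData data)
    {
        if (data is TextOutputDeviceData text)
        {
            MyUi.Append(text.Text); // hypothetical UI sink
        }

        return Task.CompletedTask;
    }

    public Task DisplayAfterSessionEndRunAsync() => Task.CompletedTask;
}
```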
Diagnostics in the new runner are there for troubleshooting and redirect information only to files, or to the protocol in server mode (VS runs). The new platform completely separates the "output display" meant for users from the diagnostic information meant for troubleshooting; it's up to the developer of the extension to decide whether to forward to both channels or only one.
Console and ILogger messages are often used when implementing new things, starting with the self-tests. So: needed, even if optional.
I would say you have swung the pendulum too far in the other direction. Not only do I find the new […] That said, I am one of the people who far prefers […]
The lack of TRX (because of […])
Yes, I'd love for xUnit.net users to be able to see xUnit.net's console output here as an opt-in. They kind of get that today with […]
Actually, that's a "default" for CI usage, where you usually cannot rely on the console: when you have a lot of projects running in parallel, you need some other report format, like TRX. Once the suite starts to get "serious", errors in the console are useless, as in our MSTest use case: if 3-4 projects start to fail, you don't know what to do with output that isn't related to a specific project, and organizing issues inside a file, or using a format like TRX or some other report, is the only way to get meaningful information. If you want to see the errors in the console, there's a parameter, as described here: https://learn.microsoft.com/en-us/dotnet/core/testing/unit-testing-mstest-runner-integrations#show-failure-per-test Also the fact that the new runner can run tests with the […] That said, the current implementation is not mandatory; it's an "extension" like any other that simply plugs in the default […]
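For reference, the linked page describes this opt-in as an MSBuild property; a minimal sketch, assuming the property name shown on that page (verify it against the docs for your SDK version):

```xml
<!-- In the test project's .csproj: surface individual test failures in the
     dotnet test console output, per the linked docs page. -->
<PropertyGroup>
  <TestingPlatformShowTestsFailure>true</TestingPlatformShowTestsFailure>
</PropertyGroup>
```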
TRX is provided as an extension; you only need to add the extension to the project: https://learn.microsoft.com/en-us/dotnet/core/testing/unit-testing-mstest-runner-extensions#test-reports This page lists the currently available extensions: https://learn.microsoft.com/en-us/dotnet/core/testing/unit-testing-mstest-runner-extensions
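Following those docs, enabling TRX looks roughly like this. The package name and flags are taken from the linked page (double-check them against the version you're using), and `Contoso.Tests` is a placeholder project name:

```shell
# Add the TRX report extension to the test project.
dotnet add package Microsoft.Testing.Extensions.TrxReport

# Run the self-contained test executable and ask for a TRX report.
./bin/Debug/net8.0/Contoso.Tests --report-trx --report-trx-filename results.trx
```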
As said above, we plan to add something like […]
The output here is "a bit better" because you're not actually using MSBuild at all, so there's a bit less parallelism and a bit more synchronization.
This is an assumption based on the size of the projects that you're building, I suspect, rather than one driven by what the average behavior might be.
Lots of projects have build scripts that are run interactively, in addition to whatever the debugging inner loop is. I ask people to run […] Also, there's basically zero chance someone will F5 a test project rather than use Test Explorer if they're already accustomed to using Test Explorer.
The output from this is unacceptable as-is, in my opinion. Compared to […] I'm hoping the plans include […]
The spectrum of applications is very wide: we go from libraries like runtime/xunit to services and devices, from small and quick test suites to very big and slow ones. The idea behind the dual model (interactive vs non-interactive) is to have the best behavior (performance, allocations, locks, etc.) and a useful UX in both scenarios for every kind of application, with a way to plug in a custom interface for special needs. Interface here means any UX type: the new platform could be hosted by a WinForms application and use it as the UX, or by a remote web app. For instance, run […]
F5 is not there to replace Test Explorer but to give you one more way to run your tests. F5 also enables new modes like the hot reload experience (https://learn.microsoft.com/en-us/dotnet/core/testing/unit-testing-mstest-runner-extensions#hot-reload) that we would like to port to Test Explorer too.
Yep, I agree; we're working to improve this, and it is the first version. Anyway, if you want the classic view with the new […] The problem with the current and future […]
We built the new platform with this in mind. Output is already pluggable. I don't know if we can completely exclude all the flags, because for instance logging/tracing live inside the platform itself, but we can think of a way to allow adapters to opt out of all or specific platform options.
This is how I've been running v3 for several years now, because it's the only thing that's available. I accept it because I don't have alternatives, but not necessarily because it's the way I prefer to run tests. My preferred way disappeared a few years ago when TestDriven.NET decided to stop updating after VS 2019, to the point where I've been repeatedly tempted to resurrect something very much like it. So you're preaching to the choir here. 😁
Personally, I'd like to see a test summary at minimum when all tests have completed running. We used to have something like this: […]
Personally I think this is much more helpful than a complete build. As someone who has recently updated and used MSTest for the first time in a while, I thought […]
I also think that there should be some form of UI to inform the user that the program is still successfully running.
Is... this not something multithreading can fix? I.e., having a single thread manage the console via a singleton class while the other tests update their status via said class? Please forgive my ignorance if I'm missing something incredibly obvious here.
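For what it's worth, here is a minimal sketch of that suggestion, using a hypothetical single-writer queue (as the reply below explains, this serializes the contention but doesn't remove the underlying cost of console writes):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Single-writer console sink: test threads enqueue lines, and one background
// task owns Console.Out, so tests never block on the console lock directly.
internal static class ConsoleSink
{
    private static readonly BlockingCollection<string> Queue = new();

    private static readonly Task Writer = Task.Run(() =>
    {
        foreach (string line in Queue.GetConsumingEnumerable())
        {
            Console.WriteLine(line);
        }
    });

    public static void Report(string testName, bool passed)
        => Queue.Add($"{(passed ? "passed" : "FAILED")} {testName}");

    public static void Complete()
    {
        Queue.CompleteAdding();
        Writer.Wait(); // flush remaining lines before exiting
    }
}
```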
I can run thousands of tests in a second or less. If I had to print out succeeding test names for each one, regardless of pushing them onto a separate thread, the total runtime would balloon just from waiting on the console output alone. The problem here isn't the coordination; it's the fact that console output is inherently slow. Throw in the additional complication of wanting/needing to funnel your console output through MSBuild and the time goes up even higher.
Just would like to put my two cents in here. I understand that the console is slow, and printing out passing tests will slow things down, but a UI or progress message at the least would be helpful for those who think their program is hanging. Say you had 5000 tests; I don't expect a new console message for each and every one that completes. And I think the summary at the end needs to be there. I like to see that xxx tests passed just for my own sanity, so I know everything is good.
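Something in that spirit can be cheap: print aggregate progress on a timer rather than a line per test. A hypothetical sketch with a simulated run:

```csharp
using System;
using System.Threading;

// Heartbeat reporter sketch: prints aggregate progress every few seconds
// instead of per test, so 5000 quick tests don't pay for 5000 console writes.
int finished = 0, failed = 0, total = 5000;

using var heartbeat = new Timer(
    _ => Console.WriteLine(
        $"running... {Volatile.Read(ref finished)}/{total} done, {Volatile.Read(ref failed)} failed"),
    null, TimeSpan.FromSeconds(3), TimeSpan.FromSeconds(3));

// Stand-in for actual test execution.
for (int i = 0; i < total; i++)
{
    Thread.Sleep(2);
    Interlocked.Increment(ref finished);
}

Console.WriteLine($"done: {finished - failed} passed, {failed} failed.");
```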
JUnit CLI used to print a single dot for each test. GUI test runners commonly show a growing colored bar: green until the first test fails, and then red. Either works. Printing a full line for each passed test just makes the failing tests harder to find.
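A dot-style reporter of the kind described above is about as small as it gets; a sketch:

```csharp
using System;

// JUnit-style dot reporter: one character per test keeps progress visible
// without burying the failures under thousands of "passed" lines.
internal static class DotReporter
{
    public static void Report(bool passed) => Console.Write(passed ? '.' : 'F');

    public static void Finish(int total, int failed)
        => Console.WriteLine($"{Environment.NewLine}{total} tests, {failed} failed");
}
```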
@Evangelink I think we could plan this one for the next sprint(s).
That would be great. I am all for perf, but also all for a comfortable UI that keeps us informed.
We have this in VSTest. It is not enabled, but when it was in place it was not very useful, because it was tied to time and just always ticked. I think the better option is to: […]
Hey there! We are working on a live reporter for v1.4. @nohwnd can give more information and will surely write a nice article about it, but here is a preview of it: […]
The new progress bar and output that utilize ANSI will ship in 1.4. I split the reporting of in-progress tests into a separate issue so we can fine-tune the experience between single-dll and multiple-dll runs. There is an option to show passing tests, but it is not the default: dotnet/docs#42328 (this will soon be on learn.microsoft.com). This PR description has examples of what the reporting looks like: #3292. The views that show per-assembly summaries are not enabled yet; they are waiting for integration with dotnet test.

Features:

- A "progress" bar that shows counts of passed, failed, and skipped tests. In ANSI mode the progress is updated live at the bottom of the screen; in non-ANSI mode the progress is output repeatedly every 3 seconds.
- Colored stack traces, with links to files and (optional) relative paths.
- Test run summary.
- Artifacts report.
- Optionally showing passed tests.
- Optionally showing a link to the assembly the test comes from.
Summary
The new TestRunner executable doesn't output passing tests, only failing ones.
Background and Motivation
I spent FOREVER trying to figure out why my unit test run kept hanging. Turns out, it just wasn't printing passing tests, only failing ones (of which I had none). This makes it really hard to monitor a test run in the output, and it's very different from how vstest.console would execute. By default, please also output the currently running test and the results of passing tests. This is also important when a test actually hangs or crashes, so you can see which test might have caused it.
Even setting the diagnostic verbosity has no effect on this.
Proposed Feature
Match output of vstest.console.