How do I report a test that's not run - add ExplicitTestNodeStateProperty #2538

Open
bradwilson opened this issue Mar 10, 2024 · 34 comments
Labels
Area: Testing Platform, Area: xUnit, Type: Discussion, Type: Feature

Comments

@bradwilson

In the Microsoft.Testing.Platform object models under Microsoft.Testing.Platform.Extensions.Messages, I see messages for passed, failed, and skipped tests.

In xUnit.net v3, we also have a concept of a test that isn't run because it's only ever run when explicitly asked for. It's philosophically closest to a skipped test, but it's not exactly the same. Is there a plan for anything like this in the new object model? I'm leaning towards not reporting anything at the moment, because dotnet test does not need to say anything. However, I am wondering whether there's a way to highlight such tests in Test Explorer today (and if not, whether it's a feature under consideration). By not reporting it, it will always just stay in the tree as "not run", which is the most appropriate UX if there's no concept of "won't be run unless you explicitly ask it to be run".
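For reference, here's a minimal sketch of how a framework pushes one of those states today, following the documented ITestFramework pattern (context and request come from the execution request, and the framework type itself is the IDataProducer passed to PublishAsync):

```csharp
using Microsoft.Testing.Platform.Extensions.Messages;

// Inside an ITestFramework.ExecuteRequestAsync implementation.
var testNode = new TestNode
{
    Uid = new TestNodeUid("MyNamespace.MyClass.MyTest"),
    DisplayName = "MyTest",
    // Passed/failed/skipped are the terminal states the object model offers;
    // an explicit-but-not-requested test doesn't fit any of them cleanly.
    Properties = new PropertyBag(new SkippedTestNodeStateProperty("reason")),
};

await context.MessageBus.PublishAsync(
    this, new TestNodeUpdateMessage(request.Session.SessionUid, testNode));
```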

@bradwilson
Author

Note: I did try using CancelledTestNodeStateProperty but that gets surfaced as a test failure, which this clearly is not. A concern I have is that if the test is reported as "in progress" but then nothing ever happens (no results are reported), I'm not 100% sure how the system will handle this (both on the dotnet test side and in the various UI-based runners like Test Explorer).

@MarcoRossignoli MarcoRossignoli added Type: Feature, Area: Testing Platform and removed Needs: Triage 🔍 labels Mar 11, 2024
@MarcoRossignoli
Contributor

MarcoRossignoli commented Mar 11, 2024

A concern I have is that if the test is reported as "in progress" but then nothing ever happens

You should not report this state for a test that is not "acting".
The object model is open to new states, so we could consider adding something like StagedTestNodeStateProperty or PausedTestNodeStateProperty. @Evangelink do you have a better name?

@nohwnd
Member

nohwnd commented Mar 11, 2024

I think the same concept exists in NUnit: https://docs.nunit.org/articles/nunit/writing-tests/attributes/explicit.html. In my view this should map to a skipped test + reason in Test Explorer. That way it is not making the categories of tests too complicated, and it is easy for the user to understand what happened. Definitely easier than seeing the blue "not run" icon, which is often a symptom of an error running tests (at least currently).
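For reference, the NUnit shape of the concept (standard NUnit attribute; the test body is a placeholder):

```csharp
using NUnit.Framework;

public class IntegrationTests
{
    // Not run in a normal pass; NUnit runs it only when it is selected
    // explicitly (by name or filter), and otherwise records the reason.
    [Test, Explicit("Hits a live external service")]
    public void CallsProductionEndpoint()
    {
        // ...
    }
}
```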

@bradwilson
Author

You should not report this state for a test that is not "acting".

Because of extensibility, from an execution pipeline perspective we may not know that a test isn't going to run until we've already started it; which is to say, we may believe we are starting a test that we then discover won't actually be running.

It's philosophically different from skipped. An explicit test can be run if you ask for it, but unless you explicitly ask for it (or ask for all explicit tests to be run), it won't be. There's no way to force-run a test which is skipped.
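In xUnit.net v3 terms, that semantic looks roughly like this (a sketch assuming v3's Explicit flag on [Fact]; treat the exact property name as illustrative):

```csharp
using Xunit;

public class PaymentTests
{
    // Looks like a skipped test at first glance, but it's fundamentally
    // different: this test CAN run, it just requires an explicit request.
    [Fact(Explicit = true)]
    public void ChargesRealCreditCard()
    {
        // ...
    }
}
```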

In my view this should map to a skipped test + reason in Test Explorer

I'm fine reporting it that way, but I do think it might be confusing to try to express both "this skipped test can't be run even if you ask" and "this skipped test can be run if you ask" with a single UI expression. I understand if you'd rather not make that distinction in the UI of Test Explorer, though it would still be nice if there were some additional metadata (if not a different state entirely) that some other test runner could take advantage of.

@MarcoRossignoli
Contributor

I would keep current and future IDEs' representations and behavior out of the discussion. I think the platform should be written without a specific "caller" implementation in mind; I'm more in favor of a custom, tailor-made state that a caller can recognize and use or ignore.

@Evangelink
Member

I also think it's better to have an explicit model that represents these states even if current IDEs are not able to handle it.

I am still thinking that the ideal would be to somehow have some kind of handshake between a client and the framework to align on supported states and the mapping between them. We are still a bit too early in the design of this concept for now.

About this "explicit" state, it seems to me this is some kind of specialized discovery more than a special run state. From the few tests I have done, the current experience is that explicit tests are only run if there are only explicit tests in the list of tests you receive; otherwise they are "skipped" (no result returned for them), which seems to confirm my feeling of specialized discovery. @bradwilson do you think this is a good definition of this state?

@nohwnd
Member

nohwnd commented Mar 11, 2024

I would keep current and future IDEs' representations and behavior out of the discussion. I think the platform should be written without a specific "caller" implementation in mind,

Sorry for the confusion. My remark was not purely about Test Explorer, but about keeping the number of possible results (or possible result "categories") to a minimum, so that implementing a client does not require implementing 100 different flavors of skipped.

In the platform a skipped test result would represent a test that:

  • was found
  • was not ignored
  • was not filtered out
  • ended with skipped result

It would NOT represent a test that is not runnable even if included in a run by a filter. This is a detail of the current test frameworks, and can be conveniently represented as some condition + reporting a skipped result.

This allows the platform to continue working as is, distinguishing only these steps:

- find tests ("discovery", e.g. via reflection)
- ignore tests
- filter found tests
- report tests ("discovery" => reporting to UI)
- run tests
- report results

In the run-tests phase, the particular framework (or user) can represent the skipped flavors as condition + result, as it does now:

- [Explicit] = if in filter, run it as normal, and report the result
- [Fact(Skip = "reason")] = always report skipped right away
- [SkippableFact] = run the test, and use code to figure out the result: Skip.IfNot(Environment.IsWindows);
- or any similar flavor, e.g. [WindowsOnly] or [ActiveTest]

The result would then be SkippedTestNodeStateProperty plus additional metadata that a UI can optionally represent if it knows about it, but that makes no functional difference to the UI (it is still a skipped test).

Or it would be some flavor of *SkippedTestNodeStateProperty, but with a Category (or some other metadata) that would make it map to a Skipped test when the client does not know about the details.
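A sketch of that second option, assuming the platform's key/value trait property TestMetadataProperty (the "explicit" key is made up for illustration):

```csharp
using Microsoft.Testing.Platform.Extensions.Messages;

var testNode = new TestNode
{
    Uid = new TestNodeUid("MyNamespace.MyClass.MyExplicitTest"),
    DisplayName = "MyExplicitTest",
    Properties = new PropertyBag(
        // Clients that only know the base categories see a plain skipped test...
        new SkippedTestNodeStateProperty("Explicit test was not requested"),
        // ...while clients that understand the extra metadata can special-case it.
        new TestMetadataProperty("explicit", "true")),
};
```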


If the current way xUnit works is that [Explicit] tests are not reported at all (ignored), then I would suggest changing that and reporting them as skipped.


This all also nicely aligns with the run&discover command that cannot use previously reported (discovered) tests.

@bradwilson
Author

bradwilson commented Mar 11, 2024

If the current way xUnit works is that [Explicit] tests are not reported at all (ignored), then I would suggest changing that and reporting them as skipped.

If you mean when I run them in my own UI (aka, dotnet run), then they are reported as "not run", which is a separate category from "skipped".

If you mean when I report them via the Testing Platform, I will report them as skipped based on this conversation, with some standardized language about why they were skipped.

I'm going to open a second issue related to metadata.

@bradwilson
Author

Circling back to this, has the decision been made whether "not run" is a category of test that Microsoft.Testing.Platform cares about, as distinct from "skipped"?

Thinking about my implementation, there will be a change in behavior between TPv2 and MTP in terms of what the user sees in Test Explorer. With TPv2, a test which is not run (which is typically because it was marked as Explicit = true) continues to show up in Test Explorer as not run:

[screenshot: Test Explorer under TPv2, with the explicit test shown as not run]

And this is what it looks like with MTP in 17.12 preview 1:

[screenshot: the same tests under MTP in 17.12 Preview 1]

Also of note: the skip reasons aren't being reported into Test Explorer any more.

So I guess I have unresolved questions:

  1. Why aren't my skip reasons showing up in Test Explorer when using MTP?
  2. If I report a test as "starting" to you but never report it as finished, is that going to be a problem? (See the sketch after this list.) If so, then I'll have to delay sending the starting message until I know the test was run, since I have no way to say "not run" consistently in the UI otherwise.
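To make question 2 concrete, here's the sequence in question as a sketch (illustrative only: messageBus, sessionUid, uid, and displayName stand in for the real context objects, and CachedInstance assumes the stateless state properties expose one):

```csharp
// Step 1: published when we believe the test is starting.
await messageBus.PublishAsync(this, new TestNodeUpdateMessage(
    sessionUid,
    new TestNode
    {
        Uid = uid,
        DisplayName = displayName,
        Properties = new PropertyBag(InProgressTestNodeStateProperty.CachedInstance),
    }));

// Step 2 never happens for an explicit-but-not-requested test: no terminal
// update (passed/failed/skipped) ever follows, which is exactly the dangling
// state being asked about.
```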

@bradwilson
Author

bradwilson commented Aug 31, 2024

Is test filtering hooked up in 17.12 Preview 1? When trying to explicitly run just one test, it still runs them all. I am linked against MTP 1.4.0-preview.24430.15.

@MarcoRossignoli
Contributor

Why aren't my skip reasons showing up in Test Explorer when using MTP?

I'll check this using your branch.

If I report a test as "starting" to you but never report it as finished, is that going to be a problem? If so, then I'll have to delay sending the starting message until I know the test was run, since I have no way to say "not run" consistently in the UI otherwise.

I'll check with @drognanar what the VS behavior is in this case.

@MarcoRossignoli
Contributor

Is test filtering hooked up in 17.12 Preview 1? When trying to explicitly run just one test, it still runs them all. I am linked against MTP 1.4.0-preview.24430.15.

If you "Debug" your single tests you should be able to see if the Uid filter is sent or not, if not looks like there's an issue, I'll test that too using my internal preview of VS and your branch.

@bradwilson
Author

bradwilson commented Sep 1, 2024

I think I see my bug on the UID filter. Hold off on that one. (In the process of trying to handle explicit tests, I forgot to filter the list 😂)

Edit: Yep, that was my bug. Whew. 😄

@bradwilson
Author

I'll check with @drognanar what the VS behavior is in this case.

I can see that Test Explorer is doing what I want (leaving the test as not run), and I don't see anything that looks like error messages in the Output > Tests window. I was asking more along the lines of whether Microsoft.Testing.Platform would be upset at me leaving these tests "hanging" without resolution when I call context.Complete().

@drognanar
Member

I tried to build the https://github.com/xunit/xunit/tree/microsoft-testing-platform branch in order to test out how the skipped message is sent back to VS; however, I'm hitting lots of nullability build errors.

@bradwilson what setup do I need to test out xunit's protocol branch?

@bradwilson
Author

I use .NET SDK 8. If there are new nullability rules added in 9 they may be triggering. Try adding a global.json.

@bradwilson
Author

I'm building with 9.0.100-preview.7.24407.12 and I don't see any nullability issues. I do see a bunch of new instances of NETSDK1215 and NU1903.

@bradwilson
Author

Getting past the NuGet warnings, I see a lot of new analyzers that are triggering (1705 error(s) and 3470 warning(s)), so you're going to be best off just using global.json and .NET 8 for now.

@drognanar
Member

I added a global.json file and now I'm able to have the build succeed. Will take a further look into the VS integration later.

@bradwilson
Author

FYI, I have fixed up the compiler issues (at least as much as I'm seeing with 9.0.100-preview.7.24407.12) and pushed up the changes. You should be able to pull and then get rid of your global.json, if you need access to .NET 9.

@drognanar
Member

I checked and the testing platform adds a skipped reason property to a test node, but not to the serialized JSON.
I synced up with @MarcoRossignoli in order to propagate the property to the JSON.

@MarcoRossignoli
Contributor

I synced up with @MarcoRossignoli in order to propagate the property to the JSON.

Fixing in #3754

@MarcoRossignoli
Contributor

I can see that Test Explorer is doing what I want (leaving the test as not run), and I don't see anything that looks like error messages in the Output > Tests window. I was asking more along the lines of whether Microsoft.Testing.Platform would be upset at me leaving these tests "hanging" without resolution when I call context.Complete().

The testing platform won't be upset by it. When you push the info, it will be consumed by IDataConsumers; the testing platform only handles the built-in UX, and as far as I know we don't do anything at the moment about an in-progress that doesn't match a completed. @nohwnd can you confirm that the new built-in UX won't go mad if an in-progress doesn't match a complete?
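For reference, a sketch of the consumer shape being described, against the documented IDataConsumer contract (the class itself is illustrative):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Testing.Platform.Extensions;
using Microsoft.Testing.Platform.Extensions.Messages;
using Microsoft.Testing.Platform.Extensions.TestHost;

internal sealed class NodeUpdateConsumer : IDataConsumer
{
    public Type[] DataTypesConsumed => new[] { typeof(TestNodeUpdateMessage) };

    public string Uid => nameof(NodeUpdateConsumer);
    public string Version => "1.0.0";
    public string DisplayName => "Node update consumer";
    public string Description => "Observes test node state transitions.";

    public Task<bool> IsEnabledAsync() => Task.FromResult(true);

    public Task ConsumeAsync(IDataProducer dataProducer, IData value, CancellationToken cancellationToken)
    {
        if (value is TestNodeUpdateMessage update)
        {
            // Each update carries a node plus whatever state property was pushed.
            // A node left in-progress simply never produces a second message here;
            // nothing in the platform itself complains about that.
        }

        return Task.CompletedTask;
    }
}
```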

@MarcoRossignoli
Contributor

MarcoRossignoli commented Sep 5, 2024

I was wondering if this state should be returned as a new discovery state. The UX (in general, not only VS) needs to know about it up-front to "distinguish" it from the other kinds of test nodes (i.e. show a different icon).
Afterwards, at runtime, you can run it or not, so it's natural that if the UX received that state the test may run... but that doesn't change the nature of the "explicit" trait.

@bradwilson
Author

Obviously it remains to be seen if/how other third party runners (Rider/Resharper/CodeRush/etc.) adopt support for Microsoft.Testing.Platform. The only UX experiences I have right now are:

  1. Test Explorer (the experimental one in 17.12 Preview 1) currently shows our NotRun tests as not run. As mentioned above, we originally reported these as Skipped, but I didn't like how that changed the behavior in Test Explorer (in TPv2 our Explicit tests stay shown as not run), so I've stopped reporting results for them entirely, as this is the only way to keep Test Explorer's behavior in line with TPv2.

    I'm not super thrilled with this, because it causes issues (like the TRX inconsistencies below), but the UX in Test Explorer trumps that IMO. Explicit tests are almost entirely an interactive tool, so being able to recognize when a test hasn't been run and then explicitly run it is crucial. Visually mixing them in with Skipped tests is an unacceptable UX. Command line (and especially CI) builds will choose to always or never run such tests, and I suspect the overwhelming majority of times it'll be "never".

  2. In the MTP command line runners (direct MTP CLI, as well as MSBuild VSTest task workflow), NotRun tests don't show up anywhere: they're not in the test count, and they're not reported in TRX. This is a result of the decision above to optimize the Test Explorer experience at the expense of the CLI experience. The native xUnit.net CLI and our first party runners are all aware of NotRun tests and report them as such.

    This is why I've decided to put --report-xunit-trx into our MTP support command line options, because I believe our TRX file is more complete than yours as a result of this additional knowledge. I would consider removing/hiding it if MTP supported NotRun as a first-class concept and surfaced those tests with an outcome of NotRunnable (which is how we surface them in our TRX report). Test Explorer supports this in the Test Results view, so a user pulling a TRX file from a local or CI build will be able to differentiate these tests visually from Skipped tests.

Owners of other third party runners will obviously be limited by what kinds of execution results you can report. Everybody is going to support Passed/Failed/Skipped, as that's the bare minimum. I don't know what other states they might be able to report if there were associated execution results, but doing a survey might help inform whether there are other states you might consider.

@bradwilson
Author

bradwilson commented Sep 5, 2024

I was wondering if this state should be returned as new discovery state.

FWIW, our ITestCaseDiscovered message does include Explicit in the metadata, so we would be able to support this in the discovery metadata if needed. Edit: It's also worth noting that at discovery time, we have no idea whether the user will ask to run explicit tests. So you can't say "Explicit means it'll never run", because right now you won't know that until you try to run it.

@MarcoRossignoli
Contributor

MarcoRossignoli commented Sep 6, 2024

because I believe our TRX file is more complete than yours as a result of this additional knowledge.

The trx implementation is "new" and we have some inconsistencies and bugs that will be fixed to align it 100% with the past; we will have the same issues for all other users inside AzDo. So it's fine if you provide the xunit one, and in the future, when both implementations are good, you can decide whether or not to drop ours and keep yours. It's fine with me, up to you; the important thing is that users can generate a trx.

FWIW, our ITestCaseDiscovered message does include Explicit in the metadata, so we would be able to support this in the discovery metadata if needed. Edit: It's also worth noting that at discovery time, we have no idea whether the user will ask to run explicit tests. So you can't say "Explicit means it'll never run", because right now you won't know that until you try to run it.

So I think we should add this concept, since it's already in xUnit and NUnit:

ExplicitTestNodeStateProperty: the test is available, but it will run only if explicitly requested; it's a successful execution state (it won't fail the suite).

In the UX it can be described in an arbitrary way, NotRunnable or something else.
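As a sketch, the proposed type (hypothetical, it does not exist in the platform today) would mirror the shape of the existing *TestNodeStateProperty types:

```csharp
// Hypothetical API, mirroring SkippedTestNodeStateProperty; not part of
// Microsoft.Testing.Platform today.
public sealed record ExplicitTestNodeStateProperty(string? Explanation = null)
    : TestNodeStateProperty(Explanation);

// A framework would then report it like any other state property:
// Properties = new PropertyBag(
//     new ExplicitTestNodeStateProperty("Runs only when explicitly requested"))
```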

cc: @thomhurst @Evangelink @nohwnd

@MarcoRossignoli MarcoRossignoli changed the title How do I report a test that's not run? How do I report a test that's not run? Add ExplicitTestNodeStateProperty Sep 6, 2024
@MarcoRossignoli MarcoRossignoli changed the title How do I report a test that's not run? Add ExplicitTestNodeStateProperty How do I report a test that's not run - add ExplicitTestNodeStateProperty Sep 6, 2024
@bradwilson
Author

So this would be an execution result, not a discovery result, correct?

@Evangelink
Member

I have been reading this thread a few times and I cannot really make up my mind...

There is also some overlap with some questions raised by @thomhurst in #3699.

I think that it's interesting to have a separation of discovery "states" and execution "states". We should also strive for flexibility in the states produced by the framework, as it provides more capabilities for extensions; at the same time, we need a way to simplify or merge all these custom states into what a client can understand.

My initial thought was to have some kind of handshake where the client could declare the list of supported states and the framework would then map its states to what is supported, but that seems to defeat most of the goals we would like to achieve. I am wondering if we should have some (extensible) base states corresponding to the main basic states of a workflow: discovered, in-progress, passed/skipped/failed, and have the other common states as children of these states. For example, I tend to agree that inconclusive or explicit are special scenarios of a skipped test, and at the same time I do agree that they are common enough that some clients (e.g. Test Explorer) could special-case them.

@nohwnd
Member

nohwnd commented Sep 11, 2024

@nohwnd can you confirm that the new built-in UX won't go mad if an in-progress doesn't match a complete?

yes, it does not care.

@thomhurst
Contributor

I've got another scenario to complicate things. I've just pushed some TestNodeMessages for my 'hooks', i.e. global/class set-ups/tear-downs. I want this so it's easy to see in the test explorer and such whether they are running/passed/failed.

However, now they're being logged as tests in the final output, and also being produced in the trx report. So I either don't push test node updates and lose visibility of them running, or I deal with them being shown as tests.

@Evangelink
Member

I've got another scenario to complicate things.

It's not really complicating things. This is something we anticipated, but given all the barriers and limitations caused by VSTest and Test Explorer we had to disregard some scenarios. I think we are getting enough maturity to reconsider some extra scenarios, but I want to be clear that, given our collaboration experience with Test Explorer, I don't have much expectation that things will change on their side.

@thomhurst
Contributor

I'll keep my fingers crossed anyway! 🤞

@bradwilson
Author

Just as an extra data point, we'd be happy to push Test Collections as a node in the tree as well, if Test Explorer started supporting arbitrary hierarchy. I think part of that negotiation would be informing Test Explorer what the hierarchy of things is, so that they could not only present them in a tree but also so that the user could choose how they sort things. Many (most?) users might not choose to sort by Test Collection, since the default is Test Collection == Test Class, but for those who are using test collections in specific ways, being able to group them and run all the tests in a given collection would probably be a value-add.
