
Testing in VS Code #107467

Closed
connor4312 opened this issue Sep 25, 2020 · 121 comments
Labels: insiders-released (Patch has been released in VS Code Insiders) · plan-item (VS Code planned item for upcoming milestone) · testing (Built-in testing support) · under-discussion (Issue is under discussion for relevance, priority, approach)

@connor4312 (Member) commented Sep 25, 2020

State of the World

Testing support in VS Code has been a feature request for a long time. The VS Code community has built excellent extensions around testing, such as the Test Explorer UI and Wallaby (both discussed below).

Each implementation of testing presents a different set of features, UI, and idioms. Because there is no sanctioned approach to tests in VS Code, extension developers tend to make bespoke implementations, as we've seen in the Python and Java language extensions. Ideally, like in debugging, a VS Code user would have just about the same experience as they work across projects and languages.

VS Code's Approach

Investigate how VS Code can improve the testing support. Several extensions are already providing testing support, explore what APIs/UIs could be added to improve these testing extensions and the test running experience. -- 2020 Roadmap

The Test Explorer UI presents the best point of inspiration for us, as there are many existing extensions built on its API: it's capable and proven. Regardless of the direction we take in VS Code, we should have a way for its Test Adapters to be upgraded to the new world.

Wallaby is an excellent extension, but it's tailored and purpose-built for JavaScript, and includes functionality which is not readily portable to other languages. While it is a good source of inspiration, we're not aiming to encompass Wallaby's feature set in the extension points we provide, at least not yet.

We're prototyping an API in the extension host, but there are a number of approaches we can take:

| Extension Host ('traditional' VS Code API) | 'Test Protocol' (like DAP/LSP) | Extension (like existing test explorer) |
| --- | --- | --- |
| + Simple to adopt for extension authors | + Encourages keeping expensive work in child processes | + Keeps VS Code core slim |
| + Easier to manage state | + Could theoretically be shared with VS and other editors | + Unclear whether there's significant functionality we'd want that's not already possible in the exthost API |
| + Clear way to build 'official' test extensions | - Additional implementation and maintenance complexity for VS Code | - Additional extension and set of libraries to maintain and version for types and implementation |
| - The 'obvious path' is doing heavy lifting in the extension host process, which is undesirable | - Less friendly, more complex than TS APIs for extension authors | - Less clear there's an official pathway for test extensions |

API Design

The following is a working draft of an API design. It should not be considered final, or anything close to final. This post will be edited as it evolves.

Changes versus the Test Adapter API

As mentioned, the test adapter API and this one provide a similar end-user experience. Here are the notable changes we made (a sketch of the resulting shape follows this list):

  • The test adapter API does not distinguish between watching a workspace and watching a file. In some cases, there is an existing process that reads workspace tests (such as a language server in Java), or it's not much more expensive to get workspace tests than file tests (such as mocha, perhaps). However, in some cases, like Go, tests for a single file can be provided very cheaply and efficiently without needing to involve the workspace.

    In this API we expect the TestProvider to, after activation, always provide tests for the visible text editors, and we only request tests for the entire workspace when required (i.e. when the UI needs to enumerate them).

  • We have modeled the test state more closely after the existing DiagnosticCollection, where the Test Adapter API uses only events to enumerate tests and does not have a central collection.

  • The Test Adapter API makes the distinction between suites and tests; we do not. They have almost identical capabilities, and in at least one scenario the 'suites' are more like tests and the leaf 'tests' cannot be run individually.

  • We use object identity rather than ID for referencing tests. This is in line with other items in the VS Code API, including Diagnostics.
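
To make the collection-based model above concrete, here is a loose TypeScript sketch of the shape these bullets describe. All names here are hypothetical illustrations, not the actual proposal; see the link under "API" below for the real draft:

```ts
import * as vscode from 'vscode';

// Hypothetical names for illustration only; the real draft lives in
// vscode.proposed.d.ts (search for 107467).
interface TestItem {
  label: string;
  // No suite/test distinction: any item may have children.
  children: TestItem[];
  // Items are referenced by object identity, like Diagnostic.
  location?: vscode.Location;
}

interface TestCollection {
  readonly tests: ReadonlyArray<TestItem>;
  // Mirrors DiagnosticCollection: mutate the central collection instead of
  // re-emitting the entire tree through events.
  add(item: TestItem): void;
  delete(item: TestItem): void;
}

interface TestProvider {
  // Cheap: expected for visible text editors right after activation.
  provideDocumentTests(document: vscode.TextDocument, collection: TestCollection): void;
  // Expensive: only requested when the UI needs to enumerate everything.
  provideWorkspaceTests?(workspace: vscode.WorkspaceFolder, collection: TestCollection): void;
}
```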

Ideas and Open Questions

See the testing label for current work, questions, and problems.

API

See the current working proposal in https://github.com/microsoft/vscode/blob/master/src/vs/vscode.proposed.d.ts (ctrl+f for 107467)

@connor4312 added the plan-item and under-discussion labels on Sep 25, 2020
@connor4312 added this to the September 2020 milestone on Sep 25, 2020
@connorshea (Contributor) commented Sep 25, 2020

👋 I'm the maintainer of the Ruby Test Adapter extension and just wanted to drop in and give a bit of context on the pain points I've had with the Test Adapter API. The Adapter API works well for the most part, but the biggest problem I've had with it is getting it to work well with large codebases.

I developed the Ruby adapter with my Rails app vglist in mind, and at the time it had around 300-400 tests in total (it now has 700-800). My adapter worked fine with that; it took a few seconds (maybe 5-10) to reload all the test suite data whenever a change was made to the test code. However, even 5-10 seconds is annoying to wait when you just want to run your tests right after you modify them. And now I work at a company with a Rails app that has over 12,000 RSpec tests, so my extension takes about 45 seconds to reload the tests whenever I change them (and it'd be even longer if I didn't have a pretty well-spec'd laptop). I don't ever really use my test adapter with that codebase because it slows me down far too much.

The problem is that the API doesn't support modifying the existing set of tests, only replacing all of them at once, which requires RSpec to generate information for every test in the codebase on every reload.

I've considered a workaround where I'd cache the information in a tempfile or something, and have rspec only load information about changed files, and then modify the cached tree and send it up to the explorer UI, but that's complex and I haven't gotten around to actually doing it.

There are other problems, which I'm not really sure how to solve, around being unable to run tests while the test suite is loading, because the API doesn't handle tests that pass during loading (the data can theoretically get out of sync if you, say, change a bunch of things between reloads, but >99% of the tests in the codebase will be exactly the same before and after any given change I make to the suite). So it's maybe worth considering that when designing the invalidation logic.

There are also problems with being able to view test logs to figure out issues when tests fail, but that's more of a problem with how the RSpec framework works than with anything the Test Adapter API does.

The current API Sketch looks quite good from what I'm seeing. I especially like the use of a cancellation token, which if I understand correctly seems like a much cleaner solution than the Adapter API where you just blindly attempt to kill any relevant running processes. There are some things missing there (e.g. how do I handle the case where the suite is unloadable because the user has created an invalid test suite via some syntax error?), but overall it looks like it takes a lot of the good decisions from the Adapter extension API.

Anyway, I hope this makes some sense. I understand it's probably not a super cohesive comment since I wrote it on-the-fly, but I've been hoping something like this would be introduced into the core of VS Code eventually, so I wanted to make sure I got my two cents in :)

@connor4312 (Member, Author) commented Sep 25, 2020

Thank you for that feedback, it's super helpful!

Scoping tests and handling large codebases is definitely something we want to tackle from the outset. Like all features we build, we will be dogfooding it in the VS Code repo, so it is a priority to be able to handle that. As you saw in the sketch, I think the collection approach solves the pain points around having to replace all tests.

The idea of running tests during load is interesting. The base case of running all tests is pretty simple from the API perspective -- we would just runTests without any filters and let the extension run as many as it can. So long as it's able to reconcile the collection into the right state while running concurrent discovery and execution, everything should be fine.

One scenario that isn't covered is running a subset of tests as discovery is happening (e.g. if I wanted to run all tests with the word "foo" in them) without starting a new test run. This would take some finessing and will not be supported by all adapters; for instance, Go could never support it without changes to its toolchain, so it's probably not worth designing for...

There are some things missing there (e.g. how do I handle the case where the suite is unloadable because the user has created an invalid test suite via some syntax error?),

I think the right thing to do would be for the test extension to emit a diagnostic for that using our existing API. I've added it to the open questions as well though for discussion.

@connorshea (Contributor) commented

@connor4312 generally speaking I'd want to give the user an error notification so they know something has broken and their suite failed to load, rather than creating a diagnostic. But as long as the extension can choose what to do if that occurs, I suppose the Test API in VS Code doesn't need to care about that.

@connor4312 (Member, Author) commented Sep 25, 2020

When I say diagnostic, I mean the "Diagnostic" in the VS Code API that appears in the Problems view (and can also have an error squiggle in the editor).

The downside there is that if they have a syntax error, having the problem be duplicated by both the language server and test extension will be some ugly clutter.

@connorshea (Contributor) commented Sep 25, 2020

The reason I wouldn't use a diagnostic is that 1) I personally rarely ever look at diagnostics/errors in the diagnostics view because the signal-to-noise ratio in my experience is so low, and 2) there are situations where a diagnostic wouldn't really make sense, for example if the user's tests fail to load because their local Postgres server isn't running or because they forgot to install their Ruby gems so the RSpec gem isn't even available.

@hbenl commented Sep 26, 2020

Some thoughts from my side (I'm the author of the Test Explorer UI extension):

  • the proposed API looks good; in particular, having a central test collection will probably be welcomed by many TestProvider authors. This is obviously necessary if you want to support only detecting tests in the currently opened files (which I had never thought about), but even without that there seem to be some scenarios where TestAdapter authors struggle with this part of the Test Explorer API
  • bringing the existing TestAdapters to the new world could be done with a bridge extension (or built into Test Explorer itself) which registers all the tests coming from TestAdapters with the new TestProvider API
  • concerning the question of whether a TestProvider should run in the Extension Host: I'm not worried about expensive work being done in the Extension Host process here; the most expensive part is usually running the tests, and TestProviders should always do that in a child process anyway
  • perhaps some other TestAdapter authors would also like to share their thoughts: @Raagh @kavod-io @Gwenio @Testy @marcellourbani @numaru @bneumann @Florin-Popescu @matepek @fredericbonnet @drleq @dampsoft @betwo @zcoinofficial @vshaxe @kondratyev-nv @recca0120 @DEVSENSE @swellaby @prash-wghats @Derivitec @Bochlin @maziac

@bneumann commented

Thanks for the mention @hbenl. Looks good to me. I also agree with @connorshea about the notification in case something is broken; I ran into that a lot, and having a way to tell users what went wrong would be good.

One thing that I was missing, and I am not sure if it is possible, is to match my tests with the source files. My adapter runs C code and I can't simply grep all files for the function names. I could do that if the user registered the toolchain somehow, but that would require a lot of setup from the user prior to testing. So it would be nice to somehow get cross-referencing information from the language server and check which files in my project are actually test files.

Not sure if that is meant by:
In a golden scenario, invalidation of tests would be done by a language server which can intelligently determine specific tests that should be invalidated when a file or a file dependency changes. Maybe this is still handled by an event on the TestProvider, but if we take a "Test Protocol" approach then coordination will be harder.

@hbenl commented Sep 26, 2020

One more thought: the central difference between tests and suites in Test Explorer is that the state of a suite is always computed from the states of its children (with one exception: a suite can be marked as errored). TestProvider doesn't distinguish between tests and suites, but it should (optionally) provide the capability to let the state of a test be computed from the states of its children. Perhaps this could be flagged by another TestRunState (e.g. INHERIT). Of course it could be left to TestProvider authors to update the parents, but this can become a slightly hairy issue if you want to avoid potential performance problems, so it would be nice if VS Code provided that.

@matepek commented Sep 27, 2020

Hello, I'm the author of C++ TestMate.

The Test Adapter API makes the distinction between suites and tests; we do not. They have almost identical capabilities, and in at least one scenario the 'suites' are more like tests and the leaf 'tests' cannot be run individually.

Great initiative. The feature will be useful for Catch2 too (related issue).

invalidation and auto-run

I think a language server alone isn't enough. We need the flexibility to retire tests in case of external dependencies and file changes.

A request from one of my users:
I don't see an "output window" concept in the current API, but I assume it is unavoidable. (Some test frameworks can provide useful output which cannot be associated with any tests.)
The person expressed the need to have the output window updated before the test itself finishes. I believe it is a reasonable request. It seems to me that multiple TestStates with "Running" could be a solution, but I don't see how the output window fits into the current API.

About test state propagation:
For my extension, if a test fails, all of its ancestors have to be marked failed too. Even if a node has a "Passed" state, if one of its descendants has a "Failed" or "Errored" state then that has to be propagated. Some precedence also seems necessary, like:
"Running" over "Errored" over "Failed" over "Skipped" over "Unset".

But this raises another question. Scenario: B and C are under A. Test B reports failure and test B is still running. What should be the state of A? According to the previous precedence it should be "Running", but users might find it useful to see that some tests that are part of that "Suite"/node have already failed. I have some ideas about it, but all of them sacrifice the simplicity and clearness of the API.
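
For illustration, a minimal sketch of that precedence rule over a simple tree (names are hypothetical, and the placement of "Passed" is an assumption, since the comment above does not rank it explicitly):

```ts
type TestRunState = 'Running' | 'Errored' | 'Failed' | 'Skipped' | 'Passed' | 'Unset';

// Highest precedence first, as described above. The position of 'Passed' is
// an assumption; the comment above does not rank it explicitly.
const precedence: TestRunState[] = ['Running', 'Errored', 'Failed', 'Skipped', 'Passed', 'Unset'];

interface Node {
  state: TestRunState;
  children: Node[];
}

// A parent's effective state is the highest-precedence state found among
// itself and all of its descendants.
function effectiveState(node: Node): TestRunState {
  let best = node.state;
  for (const child of node.children) {
    const childState = effectiveState(child);
    if (precedence.indexOf(childState) < precedence.indexOf(best)) {
      best = childState;
    }
  }
  return best;
}
```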

Question: what extra features will the integrated test explorer make available to VS Code users? I mean, we have a working system now thanks to @hbenl. What is the strongest motivation for the integration?

@marcellourbani commented

Thanks for the mention @hbenl

In my abapfs extension I only know I have a test after running it; a discover function would take weeks to run.

The proposed API looks OK as long as I can fire onDidChangeTest at any time.

@orta (Contributor) commented Sep 28, 2020

/cc @connectdotz who has been doing great work on vscode-jest
/cc @captbaritone, who I had a conversation about exactly this with a year ago

@connor4312 (Member, Author) commented Sep 28, 2020

Thank you for the feedback and the tags!

Point taken around the error notifications. This week is endgame for our September release, but we'll continue discussion and update this issue + the sketch next week.

TestProvider doesn't distinguish between tests and suites but it should (optionally) provide the capability to let the state of a test be computed from the states of its children

But this raises another question. Scenario: B and C are under A. Test B reports failure and test B is still running. What should be the state of A? According to the previous precedence it should be "Running", but users might find it useful to see that some tests that are part of that "Suite"/node have already failed. I have some ideas about it, but all of them sacrifice the simplicity and clearness of the API.

@hbenl You bring up a good point that, in the current design, state is never inherited. Generally for tests we want the opposite, that state is always inherited, but thinking about this more, it's only applicable within a tree UI. Perhaps if the state is unset, then we say UI implementations should do the right thing automatically. For example, a test explorer would show parents as failed if any of their children failed, or running if any of their children are running. But a status bar item that shows failing/passing tests would only count those that had a non-unset status.

I think this approach would also make @matepek's scenario nice from an API point of view; the UI can be arbitrarily smart.

One thing that I was missing, and I am not sure if it possible, is to match my tests with the source files. My adapter runs C code and I can't simply grep all files for the function names.

@bneumann I think this problem is mostly out of scope. We will provide hooks in VS Code to build tests, but won't add any special new discovery mechanisms.

I think a language server alone isn't enough. We need the flexibility to retire tests in case of external dependencies and file changes.

@matepek Definitely, the language server won't be the only way (we aren't going to require test providers to also integrate their language server) but we should not block that path.

I don't see an "output window" concept in the current API, but I assume it is unavoidable. (Some test frameworks can provide useful output which cannot be associated with any tests.)
The person expressed the need to have the output window updated before the test itself finishes. I believe it is a reasonable request. It seems to me that multiple TestStates with "Running" could be a solution, but I don't see how the output window fits into the current API.

@matepek Yes, test output and diffing is one particular area we want to improve. Thank you for mentioning the streaming scenario, I'll make sure we look into that and will tag you with the outcome/proposal.

What is the strongest motivation for the integration?

We actually are not set yet on making this integrated. Our roadmap outlines the purpose here (which I should add to the original thread for clarity):

Investigate how VS Code can improve the testing support. Several extensions are already providing testing support, explore what APIs/UIs could be added to improve these testing extensions and the test running experience.

Some benefits of being built-in, mentioned briefly in the table in the original issue, are:

  • Clear path to build official test extensions to give a consistent experience
  • Possibility for better diffing support, as matepek touched on
  • Coverage and live testing (under exploration)
  • Keeps APIs and versioning under vscode.d.ts, on which other extensions can be built with compatibility checks (i.e. the package.json engines field) and guarantees

At the moment we're sketching an extension host API to provide an understanding of the 'ideal world' API. This may become a built-in API, it could be a protocol, or we could end up working with Holger to apply some principles to the Test Explorer UI.

@connectdotz commented

hi, I work with @orta on vscode-jest. This is an interesting thread; I have a few questions:

  • First, the high-level question: I am wondering what the main feature is from the end users' perspective. Is it a new built-in sidebar view listing all tests that can be run/debugged via a standard UI? For test extensions, does it boil down to using the built-in "Test Explorer" instead of their own sidebar views?

  • The proposed API only seems to cover Diagnostic (the PROBLEMS tab); our extension also heavily uses Decoration for inline visual indications and assistance in the editor, as well as OUTPUT for detailed test output and notifications for errors/help-tips (as others also mentioned). If a consistent interface is the goal, then maybe all these UI extension points should be included as well?

  • In Jest, a test is identified by its name (label) and the hierarchical context (the parent block). It is hard to replace a single test because its name or line location can change while users are developing their tests, so we always end up reconstructing the test tree for the given file on each run. The proposed API seems to take the "bottom-up" approach, where each TestItem points to its parent. Finding all tests under a given parent will require traversing all the items. This will be quite expensive for our use case, but if we are the only test framework that prefers top-down, we could just build our own internal structure to track them (see the sketch after this list)... is this not an issue for other test frameworks/extensions?

  • Given there is no update operation for TestCollection, I assume TestItem is immutable? But the TestItem interface doesn't have readonly modifiers, so I'm not sure... How does one update a test status, for example? Delete + add? Or mutate the TestItem and fire onDidChangeTest? You are right that it is kind of redundant if you already observe the collection and yet still require the change event...

  • What about multi-root workspaces? Does the explorer display the current folder only? Will users see the full workspace (a summary of each folder)?

  • A stretch goal: in addition to the standard UI, it would be great if this could also address our users' biggest pain point, which is often the test setup, env configuration, etc... how about something like a "test config", similar to a debug config, for which the Provider can provide a snippet or even a wizard to help set it up in launch.json?
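
On the bottom-up traversal concern in the third bullet, a one-pass index can avoid re-scanning the flat collection for every children-of-X query, at the cost of keeping it in sync. A minimal sketch, assuming items that expose a parent reference (hypothetical shape):

```ts
interface TestItem {
  id: string;
  parent?: TestItem;
}

// Build a parent-to-children index in one O(n) pass so that "all tests under
// X" queries do not have to re-scan the flat collection every time.
function buildChildIndex(items: Iterable<TestItem>): Map<TestItem, TestItem[]> {
  const index = new Map<TestItem, TestItem[]>();
  for (const item of items) {
    if (item.parent) {
      const siblings = index.get(item.parent) ?? [];
      siblings.push(item);
      index.set(item.parent, siblings);
    }
  }
  return index;
}
```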

@rossknudsen commented

hey all, I maintain the Jest extension for @hbenl's Test Explorer UI. Here is a bit of a brain-dump, some of which might be useful:

I agree with @connectdotz's comments about the structure of Jest tests being very hierarchical and top-down. I maintain internal state in my extension as a tree and map from the internal structure to the Test UI structure when I need to emit events, as the internal structure is highly correlated with the output. IMHO a mutable list-like test collection and mutable test objects would be more difficult to translate to/from than what I currently do. Now it may be that the Test UI extension can live on as an intermediary extension taking care of this translation on our behalf (amongst other tasks, e.g. multiple workspaces). But I think it would be worth taking a little time to think about how the test information will be consumed by UI extensions, as noted in the todo. Maybe leave this API in draft until the UI API has been fleshed out. It doesn't make sense to flatten the tree, pass it to VS Code, and pass it to a UI extension where it is transformed back into tree form again. Other extension authors should comment on what their preferred format is.

The proposed API describes how the TestProvider can be controlled by VS Code through the runTests and addWorkspaceTests methods. However, at least in the context of the extension I maintain, it doesn't always make sense to obey that command. In the case of the addWorkspaceTests method, the extension may have already discovered all of the tests before that command is invoked, making it a no-op. Also, because my extension is currently monitoring the file system for changes and running tests as it sees fit, what if the user wants to initiate a test run through the runTests method? Should the internal test run be cancelled in favour of the user's choice? Is it OK to plainly disobey the request?

  • Perhaps the TestProvider has a state, e.g. Parsing/Discovering, Running Tests, Idle etc, potentially with a corresponding event emitter for status changes. This could also serve to notify VS Code of events such as failure to load/run tests etc.
  • I think one missing feature would be the ability to open the corresponding source from a TestItem. The Test Explorer UI allows this but I can't see how you can find the source file with the proposed API.

Test Messages:

  • Seems a bit lightweight when compared with the Test Explorer UI API. There you have a message and a tooltip, which would live at the equivalent level of the TestState. However, I don't have any suggestions on how to improve things while remaining UI-agnostic. I use the message to display detailed stack traces; I can't remember if I use the tooltip or not.
  • Would be awesome if we could have colored text.
  • Perhaps we need an equivalent of log levels for the TestMessages if it is possible to provide messages that are not errors (info, warning etc).

@sandy081 modified the milestone: October 2020 → November 2020, on Oct 26, 2020
@matepek commented Oct 30, 2020

Also, maybe a marketplace category for testing could be useful. Currently I'm using Other.

@connorshea (Contributor) commented

@connor4312 big congratulations on finally shipping this! :D Absolutely fantastic work over the last year :) Thank you for taking all our feedback into consideration and working to build the compatibility layer!

@JustinGrote (Contributor) commented

@connor4312 yes absolutely fantastic work and for being so responsive to a bunch of heathens you don't even work for :)

@jdneo (Member) commented Aug 6, 2021

🎉 Congratulations on the great work, @connor4312!

I believe this will definitely make VS Code more powerful for polyglot development. People can get a unified testing experience. Awesome!

@connor4312 (Member, Author) commented

Thanks everyone for your feedback and suggestions to make this happen. Keep filing those GitHub issues if you run into problems or have questions 😉

@connor4312 (Member, Author) commented

FYI for extension authors: there is a new marketplace category, "Testing", which you can publish extensions into for discoverability 🙂

@JustinGrote (Contributor) commented

@connor4312 added!
pester/vscode-adapter@8ed0938

@maziac commented Aug 22, 2021

Although this thread is closed, maybe someone can answer this:
I'm currently migrating to the VS Code testing API and found an issue I don't know how to deal with.
The situation is the following:
In the runHandler I create a test run (run = controller.createTestRun(...)).
Then later on I call run.started, and after that I invoke the test case.

Now I tested the following situation:
Inside the test case I created an infinite loop, i.e. my test case never returns.
Therefore it doesn't call passed or failed, obviously.

Once this has happened, I'm no longer able to run a second test case, since my extension is stuck.

How do I have to deal with such a situation?
It would also be nice if there were some support from the testing API, e.g. a max time until the test case is considered failed.

@connor4312 (Member, Author) commented

max time until the test case is considered failed

The approach we've taken with the API is to let test extensions handle configuration themselves, since in most cases test frameworks are already configured with external test files and we don't want to duplicate things or have multiple points of truth.

If this isn't the case for yours, you're welcome to use a custom configuration file or just workspaceStorage, which you could consider using in concert with the runProfile's configureHandler.

Inside the test case I created an infinite loop, i.e. my test case never returns. Therefore it doesn't call passed or failed, obviously.

Without knowing the specifics of your case: usually the test runner signals that the test case failed after some time, and you can then put the test case into the errored or failed state.
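
For illustration, a minimal sketch of such a timeout guard using the shipped API; the timeout constant and the execute hook are hypothetical, not part of the API:

```ts
import * as vscode from 'vscode';

const TEST_TIMEOUT_MS = 30_000; // illustrative; could come from a setting

async function runWithTimeout(
  run: vscode.TestRun,
  test: vscode.TestItem,
  execute: () => Promise<boolean>, // hypothetical hook that resolves true on pass
): Promise<void> {
  run.started(test);
  const timeout = new Promise<never>((_, reject) =>
    setTimeout(() => reject(new Error('timed out')), TEST_TIMEOUT_MS),
  );
  try {
    const passed = await Promise.race([execute(), timeout]);
    if (passed) {
      run.passed(test);
    } else {
      run.failed(test, new vscode.TestMessage('assertion failed'));
    }
  } catch (e) {
    // Covers both the timeout and unexpected crashes, so the run can finish.
    run.errored(test, new vscode.TestMessage(String(e)));
  }
}
```

Note that this only helps if the test actually executes outside the extension host (e.g. in a child process); a synchronous infinite loop inside the extension host itself cannot be interrupted this way.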

@maziac commented Aug 25, 2021

OK, something different.
Parsing for test cases is, for me, a multi-step process. The example shows how it is done with a file watcher.

I would rather not do this automatically, i.e. I would like to have the user press a button to discover test cases,
like e.g. in the Test Explorer of Holger Benl.
Is it possible to do this, and how?

@JustinGrote (Contributor) commented Aug 25, 2021

@maziac what you could do is basically ignore the resolveHandler and supply your own button in the menu, but this would be pretty unintuitive. Why wouldn't you want to auto-discover test cases? The discovery doesn't start until someone manually chooses to look at the test window (it doesn't happen just on startup every time).

You could also just wire into the runHandler to do your discovery and populate the tree right before running tests, so when someone clicks "run all" the tests just start showing up.

@hbenl commented Aug 25, 2021

I would like to have the user press a button to discover test cases.

This should definitely be supported by the new testing API. While automatic discovery of changes to the tests would be ideal in general, there are many situations where triggering the test discovery manually is necessary (e.g. when loading the tests is very slow so it shouldn't be triggered automatically, or when automatic discovery of changes is not possible or its implementation is buggy).

@JustinGrote (Contributor) commented Aug 25, 2021

What you could do is have your "initialization" resolveHandler (when the resolver is called with an undefined testItem, indicating a fresh start) register a default testItem (suite) in your test controller root, e.g. "MyTests". Then your runHandler on that testItem does the discovery and run at the same time and creates the test items with their results as needed.

@JustinGrote (Contributor) commented

That's basically what my Pester one does: it uses the file watcher to create the test suite "entries" on a per-file basis very quickly, but then discovery of the tests doesn't happen until you expand the item or click run. You'll need to do something special for list view though, because it auto-expands the test entries by default.

https://github.com/pester/vscode-adapter/blob/f7769470a1d2d3490e41c6ed26da72e713c190ad/src/pesterTestController.ts#L89

@connor4312 (Member, Author) commented

Currently there's no "refresh" button, but if we add one it'll likely result in a second call to the resolveHandler with test = undefined to re-request the root items.

Why wouldn't you want to auto discover test cases? The discovery doesn't start until someone manually chooses to look at the test window (it doesn't happen just on startup every time)

👍 listening to the resolveHandler is the recommended approach. We call it conservatively, and trying to do some custom loading without listening to the resolveHandler will break some things, like the "re-run" commands after an editor reload.
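
For illustration, the lazy-discovery flow recommended here looks roughly like this; the glob pattern and the parser are hypothetical placeholders:

```ts
import * as vscode from 'vscode';

const controller = vscode.tests.createTestController('example', 'Example Tests');

controller.resolveHandler = async (item) => {
  if (!item) {
    // Initial call: register one item per file, but defer parsing until the
    // user actually expands an item.
    for (const uri of await vscode.workspace.findFiles('**/*.test.js')) {
      const file = controller.createTestItem(uri.toString(), uri.path.split('/').pop()!, uri);
      file.canResolveChildren = true; // show an expand arrow without parsing yet
      controller.items.add(file);
    }
  } else {
    // Called when a specific file item is expanded, or when VS Code needs to
    // restore items (e.g. re-running tests after an editor reload).
    item.children.replace(await parseTestsInFile(controller, item.uri!));
  }
};

// Hypothetical parser; a real implementation would read and parse the file.
async function parseTestsInFile(
  ctrl: vscode.TestController,
  uri: vscode.Uri,
): Promise<vscode.TestItem[]> {
  return [];
}
```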

@firelizzard18 commented

If you search for "go.test.refresh" in vscode-go, you'll see my implementation of refresh. All you need is:

  • A command to do the refreshing (it should take a test item as the first/only argument)
  • A menu contribution to testing/item/context
  • An appropriate when clause, possibly including use of the setContext command

If you set "group": "inline" in the menu contribution, a button will appear when you mouse over your test items. If you set "icon": "$(refresh)" in the command contribution, that button will show the refresh icon instead of the command name.
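
A hedged sketch of what those two contributions might look like in package.json; the command ID and the when-clause context key are hypothetical (the context key would be set via the setContext command mentioned above):

```jsonc
{
  "contributes": {
    "commands": [
      {
        "command": "myExtension.refreshTests", // hypothetical command ID
        "title": "Refresh Tests",
        "icon": "$(refresh)"
      }
    ],
    "menus": {
      "testing/item/context": [
        {
          "command": "myExtension.refreshTests",
          "group": "inline",
          // Hypothetical context key, set from the extension via
          // vscode.commands.executeCommand('setContext', 'myExtension.canRefresh', true)
          "when": "myExtension.canRefresh"
        }
      ]
    }
  }
}
```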

@maziac commented Aug 25, 2021

Sorry, but all this seems a little hackish to me.
I have a situation where the discovery of test items can be very expensive.
There is a main file that indirectly discovers test cases by parsing other files, and these can be many files.
If the user changes this main file, even if he doesn't touch any of the "child" test files, all of these child test files would have to be re-read.
And that would happen on each keystroke when the user changes something in the editor.
A simple reload button would really help.

@connor4312 (Member, Author) commented

It sounds like adding a long debounce on changes to the main file is sensible in your case.
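
For illustration, a minimal sketch of such a debounce, assuming re-discovery is triggered from document-change events; the delay and the file check are illustrative:

```ts
import * as vscode from 'vscode';

const DISCOVERY_DELAY_MS = 2_000; // illustrative "long debounce"
let pendingDiscovery: NodeJS.Timeout | undefined;

vscode.workspace.onDidChangeTextDocument((e) => {
  // Hypothetical check for the expensive-to-parse main file.
  if (!e.document.uri.path.endsWith('tests.main')) {
    return;
  }
  // Trailing debounce: restart the timer on every keystroke so the expensive
  // re-discovery only runs once typing has paused.
  if (pendingDiscovery) {
    clearTimeout(pendingDiscovery);
  }
  pendingDiscovery = setTimeout(rediscoverTests, DISCOVERY_DELAY_MS);
});

function rediscoverTests(): void {
  // Hypothetical: re-parse the main file and the test files it references.
}
```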

@JustinGrote (Contributor) commented Aug 25, 2021

@maziac

And that would happen on each keystroke when the user changes something in the editor.

That's not necessarily true: if you don't set up an OnDocumentChanged handler, this won't happen. Maybe you just want to set up a file-changed watcher so it only discovers tests when the main file is saved?

@jdneo (Member) commented Aug 26, 2021

@maziac If you want to update the test items when the document changes (OnDocumentChanged), debouncing can somewhat mitigate your problem, and you can even make the debouncing smarter by making it adaptive; see an example from the Java Test Runner. It changes the debounce time according to how long it took to resolve the test items previously (it actually "copies" the logic VS Code uses when requesting the outline).

@connor4312 I'd like to bring this topic up again: do you have any plans to support a cancellation token in the editing scenario, like the CodeLensProvider?

@maziac commented Aug 27, 2021

Another scenario: multi-root workspaces.
How does the testing API behave for multi-root?
E.g. the controller's resolveHandler on initialization: will it be called once per workspace folder or once for all?
And the Test UI in that case: will all tests (of all workspace folders) be shown together in one pane?

@JustinGrote (Contributor) commented Aug 27, 2021

@maziac you can do it however you want. It's called per test controller with an undefined testItem initially, then it's called for the specific test controller that owns the item in question for discovery. This is easy to see by just setting a breakpoint on the resolveHandler and watching it work.

I personally have one test controller that, on an empty resolveHandler call, gets the list of workspace roots and then foreaches through them to create file watchers, with all tests for all workspaces tied to a single controller (see the sketch below). There are advantages and disadvantages to the approach; for me the A's outweigh the D's for Pester.
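
For illustration, a minimal sketch of that single-controller, watcher-per-folder setup; the glob pattern is illustrative:

```ts
import * as vscode from 'vscode';

const controller = vscode.tests.createTestController('multiRootExample', 'Workspace Tests');

controller.resolveHandler = async (item) => {
  if (item) {
    return; // only the initial root-level request sets up the watchers
  }
  for (const folder of vscode.workspace.workspaceFolders ?? []) {
    // One watcher per workspace folder, all feeding the same controller.
    const watcher = vscode.workspace.createFileSystemWatcher(
      new vscode.RelativePattern(folder, '**/*.Tests.ps1'), // illustrative pattern
    );
    watcher.onDidCreate((uri) =>
      controller.items.add(controller.createTestItem(uri.toString(), uri.path, uri)),
    );
    watcher.onDidDelete((uri) => controller.items.delete(uri.toString()));
  }
};
```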

@maziac commented Sep 7, 2021

Another question:
I'm using different profiles for 'run' and 'debug'.

But I would like to suppress the option to choose 'debug' for the test suites and allow it only for test items.
In my case the test suites are test items that only collect other test suites or test cases.

Only for the "real" test cases would I like to allow the 'debug' option.

Is there any way to do that?

@firelizzard18 commented

@maziac You should be able to use tags (#129456) to control which tests can be run by which profiles.
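
For illustration, a minimal sketch of tag-based gating with the stable API; the tag name and handlers are illustrative:

```ts
import * as vscode from 'vscode';

const controller = vscode.tests.createTestController('taggedExample', 'Tagged Tests');
const debuggable = new vscode.TestTag('debuggable'); // illustrative tag name

// The Debug profile only applies to items carrying the tag; the Run profile,
// created without a tag, stays available for everything, including suites.
controller.createRunProfile('Run', vscode.TestRunProfileKind.Run, runHandler, true);
controller.createRunProfile('Debug', vscode.TestRunProfileKind.Debug, debugHandler, false, debuggable);

const leaf = controller.createTestItem('case-1', 'my test case');
leaf.tags = [debuggable]; // only "real" test cases opt in to debugging
controller.items.add(leaf);

async function runHandler(request: vscode.TestRunRequest, token: vscode.CancellationToken) {
  // ...
}

async function debugHandler(request: vscode.TestRunRequest, token: vscode.CancellationToken) {
  // ...
}
```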

github-actions bot locked and limited conversation to collaborators on Sep 10, 2021