
Support VS Code testing API #3601

Closed
Tracked by #86
yoshuawuyts opened this issue Mar 15, 2020 · 43 comments · Fixed by #16662
Labels
A-test-explorer Issues related to the test explorer lsp extension and vscode test api A-vscode vscode plugin issues C-feature Category: feature request E-hard fun A technically challenging issue with high impact S-actionable Someone could pick this issue up and work on it right now

Comments

@yoshuawuyts
Member

yoshuawuyts commented Mar 15, 2020

The Java plugin for VS Code comes with a built-in test explorer panel. This panel shows all tests in the project, and allows running the full suite (link).

Rust Analyzer currently only provides a way to run a single unit test. This opens the console, and runs it there. There is no way to run multiple tests, or browse the tests. Making progress towards this would probably help carry the IDE experience forward.

Example


Java test explorer workflow example (source)

@yoshuawuyts
Member Author

Oh, actually just now realizing there is a collection of plugins available for this:

Heh, that's nice. I wonder if a tighter integration story would be possible / desirable? Maybe there are things on the lang server side that could be picked up?

@matklad
Member

matklad commented Mar 16, 2020

Rust Analyzer currently only provides a way to run a single unit test.

You can also run a module with tests. But yeah, having something like test explorer above would be nice and is definitely in scope. The biggest catch here is that there's no LSP protocol for this at the moment.

@matklad matklad added E-hard fun A technically challenging issue with high impact labels Mar 16, 2020
@godcodehunter
Contributor

OK, this looks fun, I'll do it.

@xsoheilalizadeh

Related microsoft/vscode#107467

@lnicola lnicola added A-vscode vscode plugin issues C-Architecture Big architectural things which we need to figure up-front (or suggestions for rewrites :0) ) S-actionable Someone could pick this issue up and work on it right now labels Jan 22, 2021
@yoshuawuyts
Member Author

The VS Code Test UI has stabilized as part of the July 2021 update! There's a guide available here on how to write extensions for this.

Integrating with this would be fantastic!

Screenshots


@matklad
Member

matklad commented Aug 6, 2021

https://code.visualstudio.com/api/extension-guides/testing

We might want to support that.

The way I understand it, what VS Code wants is a global view of a hierarchy of all the tests in a project. There are two potential sources for such view:

  • statically discovered tests (eg, re-run cargo test -- --list on every change, or use symbol index to list every #[test] function)
  • external sources -- display a result of a test run

I don't think we can support the first case correctly: in the limit, it is a global view of all the tests, and to maintain that we'd have to do O(N) work on every change worst-case. Displaying a hierarchy of tests also feels like it duplicates other navigation functionality.

So I think we should do the following:

  • on the server side, provide an API to fetch the project model, so that VS Code can draw one node per cargo package/target
  • on the client side, change our test runners to optionally pass the --message-format json flag, and populate nodes below the cargo target with test results.
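The client-side half of this proposal could be sketched roughly as follows. This is a hypothetical illustration, not rust-analyzer's actual code; the event shapes assume libtest's unstable JSON output (as produced by `cargo test -- -Z unstable-options --format json`), which is an assumption on my part:

```typescript
// Hypothetical sketch: fold a stream of libtest JSON events (one JSON
// object per line) into per-test results that the extension could use
// to populate tree nodes below a cargo target.
interface TestEvent {
  type: string;  // "suite" | "test"
  event: string; // "started" | "ok" | "failed" | "ignored"
  name?: string; // test path, e.g. "tests::adds"
}

type TestResult = "passed" | "failed" | "ignored";

function collectResults(lines: string[]): Map<string, TestResult> {
  const results = new Map<string, TestResult>();
  for (const line of lines) {
    let ev: TestEvent;
    try {
      ev = JSON.parse(line);
    } catch {
      continue; // skip interleaved non-JSON output (cargo progress etc.)
    }
    if (ev.type !== "test" || ev.name === undefined) continue;
    if (ev.event === "ok") results.set(ev.name, "passed");
    else if (ev.event === "failed") results.set(ev.name, "failed");
    else if (ev.event === "ignored") results.set(ev.name, "ignored");
  }
  return results;
}

const sample = [
  '{ "type": "suite", "event": "started", "test_count": 2 }',
  '{ "type": "test", "event": "started", "name": "tests::adds" }',
  '{ "type": "test", "event": "ok", "name": "tests::adds" }',
  '{ "type": "test", "event": "started", "name": "tests::subs" }',
  '{ "type": "test", "event": "failed", "name": "tests::subs" }',
];
console.log(collectResults(sample).get("tests::adds")); // prints "passed"
```

A real implementation would stream these lines from the spawned `cargo test` process rather than an array, but the folding logic would be the same.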

@yoshuawuyts yoshuawuyts changed the title Feature: test explorer Support VS Code testing API Aug 6, 2021
@godcodehunter
Contributor

godcodehunter commented Aug 6, 2021

Yep, I am working on this. I have now finished the presentation and the API; today I hope to move on to creating the observable test data structure.

https://code.visualstudio.com/api/extension-guides/testing

We might want to support that.

The way I understand it, what VS Code wants is a global view of a hierarchy of all the tests in a project. There are two potential sources for such view:

  • statically discovered tests (eg, re-run cargo test -- --list on every change, or use symbol index to list every #[test] function)
  • external sources -- display a result of a test run

I don't think we can support the first case correctly: in the limit, it is a global view of all the tests, and to maintain that we'd have to do O(N) work on every change worst-case. Displaying a hierarchy of tests also feels like it duplicates other navigation functionality.

So I think we should do the following:

  • on the server side, provide an API to fetch the project model, so that VS Code can draw one node per cargo package/target
  • on the client side, change our test runners to optionally pass the --message-format json flag, and populate nodes below the cargo target with test results.

@godcodehunter
Contributor

Also, about the runnable explorer: that seems to have been a mistaken idea; instead, the extension can provide a task provider. But for project-exploration purposes it may remain a separate command.

@matklad matklad removed the C-Architecture Big architectural things which we need to figure up-front (or suggestions for rewrites :0) ) label Sep 13, 2021
@gilescope
Contributor

gilescope commented Sep 23, 2021

Good news: the lack of this is definitely something that I got pushback on when trying to roll rust-analyzer out to teams. Can't wait to give this a go.

@godcodehunter
Contributor

Good news: the lack of this is definitely something that I got pushback on when trying to roll rust-analyzer out to teams. Can't wait to give this a go.

I am integrating with the salsa database now. I can push code that is partially working; maybe someone can help me in the Zulip chat. I have several silly questions, but I think we can finish it over the weekend if someone who can help is available on Zulip.

@matklad
Member

matklad commented Sep 24, 2021

Looking at the code will be helpful -- I am surprised that this needs to touch salsa layer at all.

@godcodehunter
Contributor

godcodehunter commented Sep 24, 2021

Looking at the code will be helpful -- I am surprised that this needs to touch salsa layer at all.

If we want to have information for the entire workspace, we need storage, and if I understand correctly we use salsa for that purpose. I think it will also be needed for future goals.

@matklad
Member

matklad commented Oct 4, 2021

If we want to have information for the entire workspace

We shouldn't support getting the list of all the tests for the entire workspace. That's an inherently unscalable operation. To recompute this list after O(1) edit, we'd have to do O(N) processing regardless of any salsa tricks we do.

VS Code's API supports that, but that API doesn't work for big workspaces in languages like Rust, where you can't list the tests by looking at isolated files.

As per the architecture invariant in https://github.com/rust-analyzer/rust-analyzer/blob/master/docs/dev/architecture.md#crateside, we shouldn't look at the VS Code APIs when designing ours. We should do what makes the most sense.

@matklad
Member

matklad commented Oct 18, 2021

Hey, I closed #5765 as it doesn't look like a viable approach. Here's the roadmap for what I think would be a better way to solve this:

  • on the server, expose the information about the cargo project structure (which packages and targets are there) (this needs to be a new LSP extension)
  • on the server, expose information about available tests in a specific file (this is already available via runnables lsp)

The VS Code side of things should combine these two sources of information and feed them into the test UI. Additionally, VS Code side should be in charge of actually running cargo test, collecting test results, and displaying which tests were run after the fact.
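To make the division of labour concrete, here is a hedged sketch of how the client could merge the two sources. All names here (`CargoTarget`, `Runnable`, `buildTree`) are invented for illustration and are not the LSP extension rust-analyzer eventually shipped:

```typescript
// Invented wire shapes for the two server-side sources in the roadmap.
interface CargoTarget {
  pkg: string;    // cargo package, e.g. "my-crate"
  target: string; // build target, e.g. "lib" or "tests/integration"
}

interface Runnable {
  label: string;       // e.g. "test tests::smoke"
  target: CargoTarget; // which cargo target the test lives in
}

// Client-side merge: one tree node per cargo target, with the runnables
// reported for open files attached underneath it.
function buildTree(
  targets: CargoTarget[],
  runnables: Runnable[],
): Map<string, string[]> {
  const tree = new Map<string, string[]>();
  for (const t of targets) tree.set(`${t.pkg}/${t.target}`, []);
  for (const r of runnables) {
    tree.get(`${r.target.pkg}/${r.target.target}`)?.push(r.label);
  }
  return tree;
}

const tree = buildTree(
  [{ pkg: "foo", target: "lib" }, { pkg: "foo", target: "tests/it" }],
  [{ label: "test tests::smoke", target: { pkg: "foo", target: "lib" } }],
);
```

The point of the sketch is only that the tree skeleton comes from the project-structure request while the leaves come from per-file runnables, so neither source has to be global.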

If anyone wants to implement this issue, it's yours! If we get several competing impls, that's fine -- we should be able to combine them for the best ideas.

Now, it might be the case that I am very wrong here, and something along the lines of #5765 is what we need. I can be convinced by a small, working prototype where we can concretely discuss the algorithmic complexity implications. I won't have time to dive deep into "let's redesign the world" partial diffs.

@godcodehunter
Contributor

godcodehunter commented Oct 18, 2021

One of the main points of disagreement was that Kladov does not want to implement showing runnables for the entire project. That would, in the future, allow automatically rerunning tests when the test or related code changes, keeping a semantic run history, and so on. Instead, Kladov proposes settling on a solution of the form cargo test --package package-name

@calebcartwright
Member

Thought I'd chime in as the maintainer of the aforementioned existing extension that leverages the now-legacy Test Explorer plugin model 👋

Wanted to say that I'd love to see this incorporated natively in RA! That's partly because I think it makes for a better developer experience, partly because I never had the bandwidth to expand the existing extension beyond its current rudimentary state, and somewhat selfishly 😄, partly because I'd love to not have to convert that one to the new testing API.

I definitely get the concern around performance, especially in large projects, but also wanted to share that in my experience the Explorer/Hierarchy views of tests aren't automatically updated on changes, at least not by default. The tree is defined up front on the activation event, and users would then have to manually click the 'reload' button if they'd added/modified tests and wanted the Explorer/Hierarchy view to reflect those changes.

I've not looked at the new Test API too closely, but I wonder if similar behavior might be possible? I do think it's important that users at least have the option of being able to display the full hierarchy of their tests, even if it's non-default behavior they have to configure. Several of the developers I'm working with are coming from the JetBrains/Visual Studio world with other languages and tell me they find that type of Test visualization in VS Code helpful while learning Rust.

@matklad
Member

matklad commented Nov 4, 2021

Yeah, my thinking here is that after the user runs the test, the hierarchy is populated with all the tests executed (the implementation would leverage the --message-format=json argument to cargo test). The "reload" button then is just the "run" button.

@godcodehunter
Contributor

Thought I'd chime in as the maintainer of the aforementioned existing extension that leverages the now-legacy Test Explorer plugin model 👋

Wanted to say that I'd love to see this incorporated natively in RA! That's partly because I think it makes for a better developer experience, partly because I never had the bandwidth to expand the existing extension beyond its current rudimentary state, and somewhat selfishly 😄, partly because I'd love to not have to convert that one to the new testing API.

I definitely get the concern around performance, especially in large projects, but also wanted to share that in my experience the Explorer/Hierarchy views of tests aren't automatically updated on changes, at least not by default. The tree is defined up front on the activation event, and users would then have to manually click the 'reload' button if they'd added/modified tests and wanted the Explorer/Hierarchy view to reflect those changes.

I've not looked at the new Test API too closely, but I wonder if similar behavior might be possible? I do think it's important that users at least have the option of being able to display the full hierarchy of their tests, even if it's non-default behavior they have to configure. Several of the developers I'm working with are coming from the JetBrains/Visual Studio world with other languages and tell me they find that type of Test visualization in VS Code helpful while learning Rust.

By the way, in my implementation I restored the display functionality in hovers, and it is almost ready; basically only the transfer protocol remains to be completed. Would you be more interested in this?

@calebcartwright
Member

Are you more interested in this?

I'm not 100% sure if this question was targeted to me, but I'll share my two cents anyway. I'm a big believer in first framing these types of discussions in terms of the capabilities, e.g. the "what" before the "how" as I find the former helps keep the user top of mind.

As both a user of test explorer capabilities across a number of programming languages, and a maintainer of the aforementioned test explorer extension for Rust, IMO it's critical a test explorer for Rust provide users with the ability to:

  • (a) View hierarchical representation of individual test cases in the current/open project (note that this does not necessarily need to be automatically updated as the user modifies code. In my experience the test explorer does not automatically update for any language and requires users to manually hit the "reload" button when they want to update this visual representation after adding/removing test code)
  • (b) Execute individual test cases and execute "groups" of related tests (lots of ways to slice what a "group" is for Rust, but I think by module, crate, package, target, type, etc. are some options worth considering)
  • (c) View most recent results for any test executed via the explorer

There's certainly plenty of other features/capabilities that tie into other parts of the editor (e.g. debugging, navigation between source/explorer view, context menu items, etc.), but the three I mentioned above constitute what I view to be the "core" capabilities. My concern is that if the visual/explorer test experience for Rust won't have those features then IMO there'd be a degraded user experience with Rust that sounds like it could differ from what users would be accustomed to in the test explorer with other languages, both in terms of features and behavior.

@godcodehunter
Contributor

godcodehunter commented Jan 12, 2022

Are you more interested in this?

I'm not 100% sure if this question was targeted to me, but I'll share my two cents anyway. I'm a big believer in first framing these types of discussions in terms of the capabilities, e.g. the "what" before the "how" as I find the former helps keep the user top of mind.

As both a user of test explorer capabilities across a number of programming languages, and a maintainer of the aforementioned test explorer extension for Rust, IMO it's critical a test explorer for Rust provide users with the ability to:

  • (a) View hierarchical representation of individual test cases in the current/open project (note that this does not necessarily need to be automatically updated as the user modifies code. In my experience the test explorer does not automatically update for any language and requires users to manually hit the "reload" button when they want to update this visual representation after adding/removing test code)
  • (b) Execute individual test cases and execute "groups" of related tests (lots of ways to slice what a "group" is for Rust, but I think by module, crate, package, target, type, etc. are some options worth considering)
  • (c) View most recent results for any test executed via the explorer

There's certainly plenty of other features/capabilities that tie into other parts of the editor (e.g. debugging, navigation between source/explorer view, context menu items, etc.), but the three I mentioned above constitute what I view to be the "core" capabilities. My concern is that if the visual/explorer test experience for Rust won't have those features then IMO there'd be a degraded user experience with Rust that sounds like it could differ from what users would be accustomed to in the test explorer with other languages, both in terms of features and behavior.

Hello, I have rewritten most of the code and have now restored the functionality that existed before.
The points:
a, c) Almost ready; what remains is forming the delta changes and sending them (the changes themselves and the protocol are already defined). I decided against on-request updates in favor of delta updates while rust-analyzer is running.
b) Not done yet, but I think it will not take much code.


@CAD97
Contributor

CAD97 commented May 21, 2022

Interesting note: it seems that cargo test -- --list --format=json (which, first of all, is unstable)

About that... rust-lang/rust#97242 😅

Anyway, I hacked together a cargo-only testing extension: https://github.com/CAD97/cargo-testing-panel

It's still super rough around the edges and falls apart if you look at it wrong, but it's a proof of concept of what the functionality could look like. Disclaimer: I am not a TypeScript developer, so everything is super sketchy.


I definitely get the concern around performance, especially in large projects, but also wanted to share that in my experience the Explorer/Hierarchy views of tests aren't automatically updated on changes, at least not by default. The tree is defined up front on the activation event, and users would then have to manually click the 'reload' button if they'd added/modified tests and wanted the Explorer/Hierarchy view to reflect those changes. [@calebcartwright]

There doesn't appear to be a reload button in the builtin testing window. I suppose a testing/item/context contribution could be added to provide a reload option? (Of course I can just add an intent directly.) Using item.canResolveChildren might also be interesting; I think r-a already does cargo test --workspace --all-targets --no-run --message-format=json (since I copied the code for dispatching that, anyway), so the root list of test packages/binaries can be populated from that while watching for creation of new targets, and cargo test -q --package=crate --target=target -- --list can then be run on demand by canResolveChildren.

I'll probably evolve my extension to use cargo-nextest if I keep working on it, and let r-a do the std libtest integration.

Or a more integrated solution where r-a does the incremental test discovery itself is also possible, of course.
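The on-demand `cargo test -- --list` step mentioned above could be parsed along these lines. A sketch only: it assumes libtest's current plain-text listing format (`path::to::test: test` lines followed by a summary line), which may change:

```typescript
// Sketch: group the plain-text `cargo test -- --list` output into a
// module -> tests mapping, suitable for populating children of a node
// whose canResolveChildren flag was set.
function parseTestList(output: string): Map<string, string[]> {
  const byModule = new Map<string, string[]>();
  for (const line of output.split("\n")) {
    const m = line.match(/^(.+): test$/);
    if (!m) continue; // skip blank lines and the "N tests, M benchmarks" summary
    const path = m[1].split("::");
    const test = path.pop() ?? m[1];
    const mod = path.join("::") || "(crate root)";
    const bucket = byModule.get(mod) ?? [];
    bucket.push(test);
    byModule.set(mod, bucket);
  }
  return byModule;
}

const listing = [
  "tests::math::adds: test",
  "tests::math::subs: test",
  "smoke: test",
  "",
  "3 tests, 0 benchmarks",
].join("\n");
```

Each module key would then become one intermediate tree node, with its tests as leaves.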

@CAD97
Contributor

CAD97 commented May 21, 2022

Here's the roadmap for what I think would be a better way to solve this: [@matklad]

  • on the server, expose the information about the cargo project structure (which packages and targets are there) (this needs to be a new LSP extension)
  • on the server, expose information about available tests in a specific file (this is already available via runnables lsp)

Having worked with the vscode testing API a bit, this is roughly what I think this could ideally look like (modulo I don't know the LSP naming conventions):

  • server: controller.resolveHandler(undefined)
  • server->client: requestRootTestables
  • client: (equivalent of) cargo test --workspace --all-targets --no-run to discover testable targets
  • client: watch set of testable targets
  • client->server: provideRootTestables
  • server: controller.items.add(controller.createTestItem(...)) for each testable target
  • ...
  • client: sees testable target set change
  • client->server: updateRootTestables
  • server: adjust controller.items as prompted
  • ...
  • server: controller.resolveHandler(testItem)
  • server->client: requestChildTestables(testItem)
  • client: create child testable set for:
    • each module in the target root (or selected module), with "I have children" flag
    • each test in the target root, with "I'm a root runnable" flag
  • client->server: provideChildTestables
  • ...
  • server: runHandler(testRunRequest, cancellationToken)
  • server->client: runTestables(testRunRequest)
  • client: do the bits necessary to run all (children) tests as requested
  • client->server: updateChildTestables with discovered updated testable hierarchy
  • client->server: provideTestResults as test results come in

The most notable thing lacking from the vscode testing API is that while it provides "test succeeded," "test failed," and "test errored" (could not be run) results, there's no "test ignored" result. I've currently implemented ignore as just not providing a result. (However, I do provide an explicit result for the root run request to provide the duration, and that means you can "run" an ignored test directly to get a pass. I can patch this by checking for the "0 passed" case in the suite finished event.)
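The lazy resolve steps in the exchange above can be modelled as a toy round-trip. This uses CAD97's hypothetical message names, not a real protocol, and the hard-coded discovery data is purely illustrative:

```typescript
// Toy model of lazy child resolution: the server only asks the client
// for children when a node marked canResolveChildren is expanded.
interface Testable {
  id: string;
  canResolveChildren: boolean;
  children: Testable[];
}

// Client role: pretend `cargo test -- --list` discovered these tests.
function provideChildTestables(parentId: string): Testable[] {
  const discovered: Record<string, string[]> = {
    "pkg/lib": ["tests::adds", "tests::subs"],
  };
  return (discovered[parentId] ?? []).map((id) => ({
    id,
    canResolveChildren: false,
    children: [],
  }));
}

// Server role: the resolve handler fills in children on demand, standing
// in for the requestChildTestables/provideChildTestables exchange.
function resolve(node: Testable): void {
  if (!node.canResolveChildren || node.children.length > 0) return;
  node.children = provideChildTestables(node.id);
}

const root: Testable = { id: "pkg/lib", canResolveChildren: true, children: [] };
resolve(root);
```

The design point is that no global test list ever exists: children are materialized only for nodes the user actually expands.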

@godcodehunter
Contributor

@godcodehunter Copying and diffing on the LSP layer is IMO not a hack, but likely the right solution. I am actually not sure how you intend to use salsa-rs/salsa#273 for this, but it doesn't seem like the intended use case to me.

I already explained why this is a mistake; I'm not going to explain it again.

@godcodehunter
Contributor

Here's the roadmap for what I think would be a better way to solve this: [@matklad]

  • on the server, expose the information about the cargo project structure (which packages and targets are there) (this needs to be a new LSP extension)
  • on the server, expose information about available tests in a specific file (this is already available via runnables lsp)

Having worked with the vscode testing API a bit, this is roughly what I think this could ideally look like (modulo I don't know the LSP naming conventions):

  • server: controller.resolveHandler(undefined)
  • server->client: requestRootTestables
  • client: (equivalent of) cargo test --workspace --all-targets --no-run to discover testable targets
  • client: watch set of testable targets
  • client->server: provideRootTestables
  • server: controller.items.add(controller.createTestItem(...)) for each testable target
  • ...
  • client: sees testable target set change
  • client->server: updateRootTestables
  • server: adjust controller.items as prompted
  • ...
  • server: controller.resolveHandler(testItem)
  • server->client: requestChildTestables(testItem)
  • client: create child testable set for:
    • each module in the target root (or selected module), with "I have children" flag
    • each test in the target root, with "I'm a root runnable" flag
  • client->server: provideChildTestables
  • ...
  • server: runHandler(testRunRequest, cancellationToken)
  • server->client: runTestables(testRunRequest)
  • client: do the bits necessary to run all (children) tests as requested
  • client->server: updateChildTestables with discovered updated testable hierarchy
  • client->server: provideTestResults as test results come in

The most notable thing lacking from the vscode testing API is that while it provides "test succeeded," "test failed," and "test errored" (could not be run) results, there's no "test ignored" result. I've currently implemented ignore as just not providing a result. (However, I do provide an explicit result for the root run request to provide the duration, and that means you can "run" an ignored test directly to get a pass. I can patch this by checking for the "0 passed" case in the suite finished event.)

Maybe you are interested in my code? It is not that complex and already does what you describe: https://github.com/godcodehunter/rust-analyzer

@godcodehunter
Contributor

About the blocker (salsa-rs/salsa#273): Niko says he will try to merge it by the end of next week.

@laralove143

What's blocking this?

@laralove143

Can anyone look into this? It seems ready to be merged

@robgal519

salsa-rs/salsa#273 has been closed in favor of salsa-rs/salsa#275. What is the current status of this feature? Is it possible to move forward, or is it abandoned?

@godcodehunter
Contributor

godcodehunter commented Oct 2, 2022

No, the code passes all the data to the extension. There is an error where the test id is a number but should be a string. If you want, you can finish it; I'm currently in exile and don't have much time for this.
P.S. I plan to return to working on the code in a week or two, in case you don't want to read through the code yourself.

@connor4312

The most notable thing lacking from the vscode testing API is that while it provides "test succeeded," "test failed," and "test errored" (could not be run) results, there's no "test ignored" result

@CAD97 is this distinct from the "Skipped" state -- could you elaborate here?

@CAD97
Contributor

CAD97 commented Oct 19, 2022

@connor4312 I believe TestRun.skipped did not exist when I first investigated, and the API only provided TestRun.passed, TestRun.failed, and TestRun.errored. (I was unable to find any documentation of which VS Code version added the API, so I can't confirm this.)

@connor4312

Sorry for multiple comments, didn't have time to read the full thread yesterday.

There doesn't appear to be a reload button in the builtin testing window

There is one if you register a TestController.refreshHandler: https://insiders.vscode.dev/github.com/microsoft/vscode/blob/9c76d66e695381ac392144bc3ec5c7e21edbb472/src/vscode-dts/vscode.d.ts#L15880-L15891

@eaglgenes101
Contributor

Is there some way to try the current implementation for this functionality out, however rough around the edges it might be?

@godcodehunter
Contributor

godcodehunter commented Dec 19, 2022

Is there some way to try the current implementation for this functionality out, however rough around the edges it might be?

https://github.com/godcodehunter/rust-analyzer/tree/ohh-god. Just run the analyzer with the extension. The implementation correctly passes the message to the VS Code client, but the client handles it (updating the tree) incorrectly. I stopped at this point.

@ShuiRuTian
Contributor

ShuiRuTian commented Apr 22, 2023

Hi guys, I just implemented another PR based on the thinking in #3601 (comment) (it's still somewhat different, but now it's the client that controls the life-cycle of tests).

There is still a lot of room for improvement, but I think it could be reviewed now to avoid any early design issues.

So, please review it or give it a try if you are interested :)
#14589


@godcodehunter
Contributor

My PR actually needs minor fixes (debugging) and test migration. I don't know if I'll find time for it, simply because, to be honest, I'm freelancing right now. I hope that my code with incremental updates will serve as a starting point for someone, including the large amount of work done on test management. But you can consider it abandoned. I was very sorry that my work did not arouse interest in the community.

@michalfita

The lack of this feature is actually the one thing that makes RustRover attractive for working with code that has plenty of tests. Even if they solved running individual tests in a stupid way (it creates a run configuration for each invocation).

@SomeoneToIgnore
Contributor

If that's the only thing you lack, you can try https://marketplace.visualstudio.com/items?itemName=swellaby.vscode-rust-test-adapter

It seems to work for me, albeit I am not sure how to get the test output in this case.

@gilescope
Contributor

gilescope commented Oct 22, 2023 via email

@calebcartwright
Member

I've left https://marketplace.visualstudio.com/items?itemName=swellaby.vscode-rust-test-adapter on hold for quite a while now as I can't really justify investing time into it given that the capabilities (and more) are hopefully coming to RA 🤞

There's several notable feature gaps and bugs in that extension, so YMMV

@bors bors closed this as completed in 767d5d3 Mar 5, 2024
@HKalbasi HKalbasi added the A-test-explorer Issues related to the test explorer lsp extension and vscode test api label Mar 13, 2024