Support for "related code" in tests #126932
Comments
This is applicable to Go. As far as I know, that naming convention isn't used for test and benchmark functions, but that's probably because those aren't included in documentation. I think it would be reasonable to use the same convention to attach tests and benchmarks to functions/etc. I am the author of the Go Test Explorer extension, and I may be submitting PRs to vscode-go to implement the testing API. |
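For what it's worth, a minimal sketch (TypeScript, as a VS Code extension might do it) of the name-based linking described above; the helper is hypothetical, not taken from any existing extension, and it ignores edge cases such as examples attached to methods:

```ts
// Hypothetical helper: given the name of a Go test, benchmark, or example
// function, derive the name of the function it is presumably attached to by
// the Test*/Benchmark*/Example* naming convention.
function relatedFunctionName(testName: string): string | undefined {
  const match = /^(Test|Benchmark|Example)([A-Z].*)$/.exec(testName);
  return match ? match[2] : undefined;
}

// relatedFunctionName("TestExecuteFeature")    === "ExecuteFeature"
// relatedFunctionName("ExampleExecuteFeature") === "ExecuteFeature"
// relatedFunctionName("helperFunc")            === undefined
```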
@jasonrudolph recently did some work on a significant-other extension that implements something like this. Our friends on the Java team (@jdneo) released similar functionality in November, and @brettcannon wants this in Python too. What would the API for this look like? Would it be something along the lines of |
I don't see why it would need to be more complicated than this, since the extension can also keep updating the location if the code/tests move. An open question is how to handle "one to many" situations in both directions. |
In Java, we leverage the reference-view to deal with the "one to many" case. |
@connor4312 I think
|
(2) is interesting, how would you envision the UI/UX for that? |
Linking to symbols would be useful, since you wouldn't have to constantly update the location for add-on commands, but it could always be a dropdown menu on right-click for the test (maybe even a tree expansion) |
The other simple alternative is a pair of functions such as |
/cc @kimadeline |
@connor4312 IMO what would be most valuable would be to show the test play button in the gutter by the related code. That way, if I write a test for a function, I can edit the function and run the test without changing context. If autorun is enabled, I can configure it to run related tests automatically, so I can edit the function, save, and then see the test results immediately (after the test completes).

I was also thinking of a context menu option for tests. For example, if the user right-clicks on the test item or the play button in the gutter, the context menu has "Go to related code", which either jumps or opens a peek view like "Go to definition"; and "Find all related code", which opens a side bar view like "Find all references".
That should work. I would prefer |
Would you expect a specific test to run, or just the test file that was found and believed to be associated with the code? While the latter would at least be possible in Python (based on file name matching), for instance, there's no way we would be able to associate code to a specific test. |
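As a rough illustration of the file-name-matching idea (not how the Python extension actually discovers tests), an extension could glob for conventionally named test files; the pytest-style naming patterns below are an assumption:

```ts
import * as path from 'path';
import * as vscode from 'vscode';

// Hypothetical sketch: given a source file such as src/foo.py, look for test
// files named test_foo.py or foo_test.py anywhere in the workspace. The naming
// patterns are assumptions; real discovery rules live in the test framework.
async function findTestFilesFor(source: vscode.Uri): Promise<vscode.Uri[]> {
  const base = path.basename(source.fsPath, path.extname(source.fsPath));
  return await vscode.workspace.findFiles(`**/{test_${base},${base}_test}.py`);
}
```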
@brettcannon Here's what I'm thinking:
It is up to the extension to determine how to associate TestItems with DocumentSymbols (or ranges). Assuming Python creates a TestItem for the file, it can associate that TestItem with the corresponding source file. I am the author of the test explorer implementation for Go. Consider:

```go
// ExecuteFeature does something
func ExecuteFeature() { ... }

// ExampleExecuteFeature is an example of how to use ExecuteFeature
func ExampleExecuteFeature() { ... }

// TestExecuteFeature tests ExecuteFeature
func TestExecuteFeature(t *testing.T) { ... }
```

In Go, examples are treated as tests and are linked to functions or methods by matching the name. Therefore, it is reasonable for TestExecuteFeature and ExampleExecuteFeature to be associated with ExecuteFeature. |
Thanks for the feedback, all. Based on it, I think one approach we can take is, like we do for tests, something of a hybrid:
cc @jdneo / @kimadeline / @JustinGrote |
@connor4312 I've been thinking about how I would actually implement this kind of functionality for Go:
So the two |
🤣 this is why we prefer pull instead of push. The workbench is usually better equipped to know when to ask for what, e.g. we should know that files have been opened and we should be asking for info. It should be two steps: statically describe the files for which you can provide something (e.g. a document selector) and later answer our question. That also enables non-obvious scenarios like LS, where we might need to know about tests that aren't open on the machine you are running in. |
Perhaps we could actually go with a "pull-only" model here, and have a user setting to configure the "pull aggression." So by default, we would only pull related code when asked for via a command ("run tests for" or "implementations for"). But maybe there is a setting which does what you describe and pulls implementation ranges for the tests of active editors. I'm not super keen on this since there is complexity around dealing with invalidation and re-pulling, but the path is there. |
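To make the "two steps" concrete, here is a purely hypothetical shape (not an actual or proposed VS Code API, just an illustration of the pull model being described): registration declares a static document selector, and the provider is only called when the workbench decides it needs an answer.

```ts
import * as vscode from 'vscode';

// Hypothetical API shape, for illustration only.
interface RelatedTestsProvider {
  // Step 2: called lazily, e.g. when the user invokes "run tests for this code"
  // or when an editor matching the selector becomes active (depending on how
  // aggressively the workbench chooses to pull).
  provideRelatedTests(
    document: vscode.TextDocument,
    position: vscode.Position,
    token: vscode.CancellationToken,
  ): vscode.ProviderResult<vscode.TestItem[]>;
}

// Step 1: a static description of the files the provider can answer for, so the
// workbench knows when it is worth asking at all.
declare function registerRelatedTestsProvider(
  selector: vscode.DocumentSelector,
  provider: RelatedTestsProvider,
): vscode.Disposable;
```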
@jrieken @connor4312 I see your points and I'm OK with letting VS Code/the user decide when to pull that info. But in the specific case of both files being open, I would very much like to automatically connect the source ranges. If |
I encountered an issue where I need to meet an 80% test coverage requirement. Unfortunately, I can't determine which test cases cover which parts of the code simply by looking at the coverage UI. |
That is a little different, but tracked in #212196 |
Hello, it's been a minute! I've taken this up again and implemented a proposed API in #222252. This is a simpler provider-based approach, where the provider is called on demand. There are no passive gutters as in my previous implementation (perhaps an API point we can add in the future), but it should be a lot easier to implement. In my initial PR this is surfaced in a few ways:
The PR implements this in the selfhost provider we use to run tests in the VS Code repo, using a simple file-level import graph. |
@connor4312 What kind of time frame are you looking for on feedback? Coincidentally, I'm currently working on the Go test controller again, but I'm partway through a full rewrite, so I'm currently focused on feature parity with the existing implementation. Side note: I implemented a TestItemResolver that allows me to expose Go tests via a

For anyone else looking for the type definitions, here they are (from the PR):

```ts
export interface TestController {
	/**
	 * A provider used for associating code location with tests.
	 */
	relatedCodeProvider?: TestRelatedCodeProvider;
}

export interface TestRelatedCodeProvider {
	/**
	 * Returns the tests related to the given code location. This may be called
	 * by the user either explicitly via a "go to test" action, or implicitly
	 * when running tests at a cursor position.
	 *
	 * @param document The document in which the code location is located.
	 * @param position The position in the document.
	 * @param token A cancellation token.
	 * @returns A list of tests related to the position in the code.
	 */
	provideRelatedTests?(document: TextDocument, position: Position, token: CancellationToken): ProviderResult<TestItem[]>;

	/**
	 * Returns the code related to the given test case.
	 *
	 * @param test The test for which to provide related code.
	 * @param token A cancellation token.
	 * @returns A list of locations related to the test.
	 */
	provideRelatedCode?(test: TestItem, token: CancellationToken): ProviderResult<Location[]>;
}
```
|
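Based on those type definitions, a minimal sketch of what an implementation might look like (the proposed API would need to be enabled for the extension; the two lookup helpers are hypothetical placeholders for an extension's own association logic, e.g. symbol matching or an import graph):

```ts
import * as vscode from 'vscode';

const controller = vscode.tests.createTestController('exampleTests', 'Example Tests');

// Hypothetical helpers standing in for the extension's own index.
declare function findTestsCovering(
  uri: vscode.Uri,
  position: vscode.Position,
): Thenable<vscode.TestItem[]>;
declare function findCodeExercisedBy(test: vscode.TestItem): Thenable<vscode.Location[]>;

controller.relatedCodeProvider = {
  // Called e.g. for "go to test" from code, or when running tests at a cursor.
  provideRelatedTests(document, position, token) {
    return findTestsCovering(document.uri, position);
  },
  // Called e.g. for "go to related code" from a test item.
  provideRelatedCode(test, token) {
    return findCodeExercisedBy(test);
  },
};
```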
I would expect that this would not be finalized sooner than September, as we're at the end of the July iteration and will probably want to let it bake in proposed for at least one iteration. |
@connor4312 Here's a bug for your brain that's kind of related. Go doesn't have explicit support for table-driven tests, but there are common patterns such as:

```go
func TestMath(t *testing.T) {
	cases := []struct {
		Name   string
		Expr   string
		Result string
	}{
		{"add", "1+1", "2"},
		{"sub", "1-1", "0"},
		{"mul", "1*1", "1"},
		{"div", "1/1", "1"},
	}

	for _, c := range cases {
		// This creates a sub-test with the given name
		t.Run(c.Name, func(t *testing.T) {
			// Example sub-test implementation
			result, err := script.Eval(c.Expr)
			if err != nil {
				t.Fatal(err)
			}
			if result != c.Result {
				t.Errorf("want %s, got %s", c.Result, result)
			}
		})
	}
}
```

I plan to add static analysis to detect cases like this, but it presents a dilemma: what do I report as the range for the test item? It seems to me that the "Go to test" action should jump to the body. The best solution I can think of (that doesn't involve adding explicit table-driven test support to test items) is some kind of |
Hm, yea, we don't have a very good solution to that right now. I feel like related code might help solve that, but we'd want some more explicit "execute related tests" binding, such that a user could place their cursor in the case's struct and then run that. But it's also not very discoverable without its own 'play' buttons. I think the way to go might be to actually declare the subtest's range as its case in the struct, rather than |
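A hedged sketch (TypeScript) of that suggestion: once the extension's static analysis has found the table entries, each sub-test's TestItem range points at the entry in the struct literal rather than at the `t.Run` call. The `Case` shape is a hypothetical output of that analysis.

```ts
import * as vscode from 'vscode';

// Hypothetical result of static analysis over the table-driven test.
interface Case {
  name: string;        // e.g. "add"
  range: vscode.Range; // range of the `{"add", "1+1", "2"},` entry in the file
}

// Create one child TestItem per table entry, with its range on the entry itself,
// so "Go to test" and the gutter decoration land on the case rather than on t.Run.
function addSubtests(
  controller: vscode.TestController,
  parent: vscode.TestItem, // the TestItem for TestMath
  cases: Case[],
) {
  for (const c of cases) {
    const child = controller.createTestItem(`${parent.id}/${c.name}`, c.name, parent.uri);
    child.range = c.range;
    parent.children.add(child);
  }
}
```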
@connor4312 Is there any consideration of how I would present one or more tests as being related to a code item like a function?

For instance, in Pester you can tag tests with anything. If I allowed a special tag that let you say a test was assigned to a certain function named `Invoke-MyFunction`, I'd want to expose a "play" button right next to the `Invoke-MyFunction` definition to run all tests "tagged" to that function. In this case, though, I don't want it to be tracked with a separate ID; I want it to invoke those individual tests as part of the tag "rollup", which might be scattered across different files. Also tricky is how you would present a failure to the user in this case. Ideally this would work with AutoRun as well, whenever I change the function.
At any rate, this is definitely a "nice to have" as opposed to a "must have" and could come in a later iteration; it's just a thought that would make testing really simple for people and let them keep the context of where they're working in the application. I was inspired by your markdown test adapter example.
With the existing implementation it could just be offered under a "Function Test Rollup" header or something and aggregate the results together somehow, but when run it wouldn't necessarily update the "referenced" tests (though I guess I could do this within the context of the extension), so it's fragmented and not ideal.
Originally posted by @JustinGrote in #107467 (comment)
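For the tag-rollup idea quoted above, a rough sketch against the stable testing API (the `func:<name>` tag scheme is an invented convention for illustration): tests tagged for a function are collected and run as one request, so results still land on the original TestItems rather than on a synthetic rollup item.

```ts
import * as vscode from 'vscode';

// Collect every TestItem tagged for a given function, wherever it lives in the tree.
function collectTestsTaggedFor(
  controller: vscode.TestController,
  functionName: string, // e.g. "Invoke-MyFunction"
): vscode.TestItem[] {
  const tagId = `func:${functionName}`; // assumed tag naming scheme
  const matches: vscode.TestItem[] = [];
  const visit = (item: vscode.TestItem) => {
    if (item.tags.some((tag) => tag.id === tagId)) {
      matches.push(item);
    }
    item.children.forEach(visit);
  };
  controller.items.forEach(visit);
  return matches;
}

// A CodeLens or gutter action next to the function could then hand this request
// to the extension's run handler; results attach to the individual tagged tests.
function rollupRequestFor(tests: vscode.TestItem[]): vscode.TestRunRequest {
  return new vscode.TestRunRequest(tests);
}
```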