Testing: non-error output #129201

Closed · firelizzard18 opened this issue Jul 22, 2021 · 8 comments
Labels: api-finalization, feature-request, insiders-released, testing, verification-needed, verified


firelizzard18 commented Jul 22, 2021

I discussed this with @connor4312 on #122208 (comment), but now I have a concrete need.

Go has benchmarks. Benchmarks (in Go) are defined and run in much the same way as tests: tests are defined with func TestXxx(*testing.T) and benchmarks with func BenchmarkXxx(*testing.B); tests are run with go test -run='^TestXxx$' and benchmarks with go test -bench='^BenchmarkXxx$'. So it is natural for benchmarks to be exposed alongside tests in the testing API.

Given a benchmark:

func BenchmarkFoo(b *testing.B) {
    for i := 0; i < b.N; i++ {
        Foo("bar")
    }
}

Running this benchmark will produce something like the following:

goos: linux
goarch: amd64
pkg: example.com/foo
cpu: Intel(R) Core(TM) i9-10850K CPU @ 3.60GHz
BenchmarkFoo
BenchmarkFoo-20   123456  123.4 ns/op  123 B/op  12 allocs/op
PASS
ok  	example.com/foo	6.313s

For running benchmarks with the testing API to be meaningful, the user needs to see the results of the benchmark (the second BenchmarkFoo line). Previously, I could attach an info message to a test that passed. That capability was removed in the finalized testing API.
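
For illustration, a minimal sketch of how the extension might pick that result line out of the output. The BenchResult shape, the helper name, and the regex are my own assumptions, not part of go test or the VS Code API:

interface BenchResult {
    name: string;       // e.g. "BenchmarkFoo"
    iterations: number; // e.g. 123456
    nsPerOp: number;    // e.g. 123.4
}

// Hypothetical helper: match result lines such as
// "BenchmarkFoo-20   123456  123.4 ns/op  123 B/op  12 allocs/op".
// The "-20" suffix is GOMAXPROCS and is stripped from the name.
function parseBenchLine(line: string): BenchResult | undefined {
    const m = /^(Benchmark\S+?)(?:-\d+)?\s+(\d+)\s+([\d.]+) ns\/op/.exec(line);
    if (!m) {
        return undefined;
    }
    return { name: m[1], iterations: Number(m[2]), nsPerOp: Number(m[3]) };
}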

connor4312 self-assigned this Jul 23, 2021
connor4312 added the feature-request and testing labels Jul 23, 2021
connor4312 added this to the On Deck milestone Jul 23, 2021
connor4312 modified the milestones: On Deck, August 2021 Aug 11, 2021
connor4312 (Member) commented

appendOutput may now take an optional Location and TestItem to associate the output with:

/**
* Appends raw output from the test runner. On the user's request, the
* output will be displayed in a terminal. ANSI escape sequences,
* such as colors and text styles, are supported.
*
* @param output Output text to append.
* @param location Indicate that the output was logged at the given
* location.
* @param test Test item to associate the output with.
*/
appendOutput(output: string, location?: Location, test?: TestItem): void;
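
For illustration, a minimal sketch of calling the new overload. run and item are placeholders for whatever the extension's runner has on hand, and CRLF endings are used because the output is rendered in a terminal:

import * as vscode from 'vscode';

declare const run: vscode.TestRun;   // assumed: the in-progress test run
declare const item: vscode.TestItem; // assumed: the item for BenchmarkFoo

// Append the result line to the raw output, point it at the benchmark's
// location in the source, and associate it with the benchmark's item.
const line = 'BenchmarkFoo-20   123456  123.4 ns/op  123 B/op  12 allocs/op';
run.appendOutput(
    line + '\r\n',
    new vscode.Location(item.uri!, item.range ?? new vscode.Position(0, 0)),
    item,
);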


Tomorrow I'll polish things a bit so that clicking on the output opens the test output terminal and selects the corresponding line.

firelizzard18 (Author) commented

To clarify, calling appendOutput with a test item will associate output with that item in addition to appending it to the raw output? That is in contrast to the old appendMessage, which IIRC associated output with a test item instead of appending it to the raw output.

connor4312 (Member) commented

That's correct @firelizzard18


firelizzard18 commented Aug 12, 2021

I'm trying to think of how best to use appendOutput(output, location, testItem) and failed(testItem, testMessage[s]). Given these APIs, I think it's going to be kind of messy. The extension has to consume 'events' produced by go test -json and produce calls to the testing API. Given this test:

func TestFoo(t *testing.T) {
	// produces output
	t.Log("Hello world")

	// causes the test to fail
	t.Fail()
}

This is what I have to work with:

{"Time":"2021-08-11T18:40:31.483381884-05:00","Action":"run","Package":"foo","Test":"TestFoo"}
{"Time":"2021-08-11T18:40:31.483455845-05:00","Action":"output","Package":"foo","Test":"TestFoo","Output":"=== RUN   TestFoo\n"}
{"Time":"2021-08-11T18:40:31.483460908-05:00","Action":"output","Package":"foo","Test":"TestFoo","Output":"    foo_test.go:9: Hello world\n"}
{"Time":"2021-08-11T18:40:31.483465236-05:00","Action":"output","Package":"foo","Test":"TestFoo","Output":"--- FAIL: TestFoo (0.00s)\n"}
{"Time":"2021-08-11T18:40:31.483467643-05:00","Action":"fail","Package":"foo","Test":"TestFoo","Elapsed":0}
{"Time":"2021-08-11T18:40:31.483471215-05:00","Action":"output","Package":"foo","Output":"FAIL\n"}
{"Time":"2021-08-11T18:40:31.483559559-05:00","Action":"output","Package":"foo","Output":"FAIL\tfoo\t0.001s\n"}
{"Time":"2021-08-11T18:40:31.483597857-05:00","Action":"fail","Package":"foo","Elapsed":0.001}

Events of the form { Action: 'run' | 'pass' | 'fail', Test: 'TestFoo' } become calls to started, passed, or failed, respectively. Previously, { Action: 'output', Test: 'TestFoo' } became a call to appendMessage. In addition, all output events, even those without a Test field, become calls to appendOutput. This ensures that the test output terminal shows, for the most part, the same output as running go test directly.

Please note, the failure event does not give a reason for failure. The underlying Go test infrastructure does not make any association between a test failing and a reason why it failed. There are various methods that produce output and fail the test, such as *testing.T.Fatal, but they simply call *testing.T.Log (or Logf) followed by *testing.T.Fail (or FailNow). Thus, in order to provide a failure message in the call to failed, I have to remember messages as they are produced.

In order to preserve the one-to-one correspondence between the test output terminal and directly running go test in a terminal, I need to call appendOutput for each message, in the order that the messages arrived. In addition, I want the user to see failure messages for tests that failed. I see three ways of doing this:

  1. I can call appendOutput as events come in, passing a test item and location when appropriate. Once the test has completed, if the test failed, I can then call failed with the output for that test (see the sketch after this list).
    • Pros: output shows up in the test output terminal as it is produced.
    • Cons: failed tests will have duplicate output, because each message for that test will be passed to both appendOutput and failed.
  2. I can store all of the output events in an array and only process them once all tests have completed. Then I call failed for each failed test, with the output for that test, and I call appendOutput for each output event, but I do not pass the test item for output events for failed tests.
    • Pros: each output message is only attached to each test once, with failed for tests that failed and appendOutput for tests that did not.
    • Cons: the user sees nothing in the test output terminal until after all tests have completed.
  3. Never call appendOutput with a test item, except for benchmark stats.
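
To make option 1 concrete, here is a minimal sketch of the event handling. run is the current TestRun and itemFor is a hypothetical lookup from go test events to test items; neither name comes from the actual API:

import * as vscode from 'vscode';

declare const run: vscode.TestRun;
declare function itemFor(pkg: string, test?: string): vscode.TestItem | undefined;

interface GoTestEvent {
    Action: 'run' | 'output' | 'pass' | 'fail';
    Package: string;
    Test?: string;
    Output?: string;
    Elapsed?: number; // seconds
}

// Output is buffered per test because the 'fail' event carries no message;
// the failure reason has to be reconstructed from earlier output events.
const buffered = new Map<string, string>();

function handle(ev: GoTestEvent): void {
    const item = ev.Test ? itemFor(ev.Package, ev.Test) : undefined;
    switch (ev.Action) {
        case 'run':
            if (item) run.started(item);
            break;
        case 'output': {
            const out = ev.Output ?? '';
            if (ev.Test) {
                buffered.set(ev.Test, (buffered.get(ev.Test) ?? '') + out);
            }
            // Stream everything so the terminal mirrors `go test`; the
            // terminal wants CRLF, and the item is attached when named.
            run.appendOutput(out.replace(/\n/g, '\r\n'), undefined, item);
            break;
        }
        case 'pass':
            if (item) run.passed(item, (ev.Elapsed ?? 0) * 1000);
            break;
        case 'fail':
            if (item) {
                const msg = new vscode.TestMessage(buffered.get(ev.Test!) ?? '');
                run.failed(item, msg, (ev.Elapsed ?? 0) * 1000);
            }
            break;
    }
}

As written, a failed test's output shows up both in the terminal (via appendOutput) and in the failure message, which is exactly the duplication described in the cons of option 1.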

Adding a severity parameter or isError: boolean to appendOutput would solve the basic issue with output: I would likely append all output as an error and call failed with an empty array. Complicating matters is the fact that I'd like to use TestMessage.diff. github.com/stretchr/testify is a popular Go library for making assertions in tests. It produces output such as:

    foo_test.go:13: 
                Error Trace:    foo_test.go:13
                Error:          Not equal: 
                                expected: 1
                                actual  : 2
                Test:           TestFoo

I'd like the extension to recognize this pattern and create a corresponding TestMessage.diff. However, that necessitates using failed and not appendOutput. Since this output is produced by a failed assertion, and it explicitly says 'Error', I could reasonably assume that the corresponding test had failed. Then, I could omit the test item from appendOutput and later pass the test message diff to failed, thus avoiding double reporting.
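
For illustration, a sketch of that recognition step. The regexes are my own guess at testify's "Not equal" layout, not a specification of it:

import * as vscode from 'vscode';

// Hypothetical helper: if the output of a failed test contains a testify
// "Not equal" block, build a TestMessage carrying the expected and actual
// values so VS Code can render them as a diff.
function testifyDiff(output: string): vscode.TestMessage | undefined {
    const expected = /expected: (.*)/.exec(output);
    const actual = /actual\s*: (.*)/.exec(output);
    if (!expected || !actual) {
        return undefined;
    }
    // TestMessage.diff populates expectedOutput/actualOutput on the message.
    return vscode.TestMessage.diff('Not equal', expected[1], actual[1]);
}

The resulting message would go to failed, while the raw lines would go to appendOutput without the test item, as described above.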

Do you see any better solutions?

connor4312 (Member) commented

I think approach (1) is the recommended way here. I view the new functionality not as an alternative way to append messages to a certain test, but rather as a way to annotate the test runner's standard output, which should usually be streamed in anyway, with metadata about how it was generated. This is in contrast to the failure TestMessage, which is rich, well-structured diagnostic information that the user can dig into.

connor4312 (Member) commented

Verifier: scan through the API and make sure the docs make sense. There have been no functional changes in this issue since the last verification.

rchiodo self-assigned this Sep 28, 2021
rchiodo added the verified label Sep 28, 2021
rchiodo (Contributor) commented Sep 28, 2021

/verified

Docs look good to me.

rchiodo removed their assignment Sep 29, 2021
TarunavBA commented
@connor4312 how do I turn on this feature?

github-actions bot locked and limited conversation to collaborators Oct 31, 2021