cmd/go: 'go test' should dynamically compute best timeout #12446
Comments
One possible timeout is: The 5x increase was chosen since the benchmark iteration counts seem to follow the progression 1x, 2x, 5x, 10x, 20x, 50x, etc. Thus, at worst, the last iteration of a Benchmark may run for 2.5x longer than the targeted time. Also, since the summation of all the iterations of a given Benchmark resembles a geometric series with r=1/2, we get another factor of 2x. Thus, 2 * 2.5 = 5x.
The addition of sub-benchmarks has made this feature harder to implement (if not impractical) since the number of benchmarks is not known ahead of time.
I don't think this bug should be addressed. That is to say, I don't think we should change the go tool, as this would enable people to continue to do the wrong thing. To benchmark accurately you should have a before and an after sample and run them both several times back to back to rule out thermal variation. This is what the -c flag gives you: a binary that you can run with whatever timeout and count flags you need on dedicated benchmarking hardware. Working around this inside go test encourages people to benchmark on shared cloud CI hardware, which is inaccurate.
See also #19394.
One option would be to apply the timeout to only the test portion, and allow the benchmarks to continue running for
That sounds eminently sensible. Unfortunately,
I keep getting bitten by this. Yet another option here is to statically detect when the user is guaranteed to hit a timeout: multiply benchtime * count * leaf-benchmarks, and if it is greater than the timeout, don't even run any benchmarks; exit immediately. This would require running all benchmarks for 0 iterations up front to discover all sub-benchmarks. (There are no guarantees that this would work, but the whole calculation is only best effort anyway.) Another option is to have the real timeout be count * timeout. I thought that had been suggested, but I don't see it here.
I think that this is tantamount (in intent) to having no timeout at all for the benchmark portion, which also seems fine to me.
It doesn't seem to have been mentioned that I would expect the intent of The reason for this is to allow the user to track down the cause of a test which is randomly hanging. Sure you could write a wrapper script to run
Perhaps per-test timeouts (#48157) would be a better fit.
Using go1.5, I discovered that `go test` crashes after the timeout of 600s has passed. I was running all the benchmarks in `compress/flate` and I set `-benchtime 10s`. Since each benchmark will run for about 10s or more, and there are 36 benchmarks in that package, this occupies at least 360s, which is running close to the default of 600s.

In situations like these, the `go test` tool should be able to figure out that a timeout of 600s is cutting things close when running 36 benchmarks each for 10s and choose a timeout that is more appropriate.