[Perf -26%] System.Tests.Perf_Random (3) #47870
Comments
@DrewScoggins, at the same time these were shown to regress slightly, did other Random benchmarks improve? How can I see all of the benchmarks on this benchmark class for the same time period?
We did see improvements in the constructor, as evidenced by the auto-filed issues below. You can also look at these index pages, which have links to every test's full history; I just use find-in-page in the browser to look for the tests I care about. Windows x64 DrewScoggins/performance-2#3846
Hmm. I assume this is due to my changes in 5ee25aa, but I saw these as improvements rather than regressions locally, and I would have expected massive improvements in the benchmarks for NextBytes() and Next(), so something seems a bit off here.
Here are the results from running on my machine. They seem to match what we are seeing in the lab: NextDouble and Ctor got faster, but the other three got worse. I have also grabbed the links to the coreruns that were used and put them as links above the tables. I got this information from the details area of the issue.
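For readers unfamiliar with the shape of these measurements, here is a minimal sketch in the style of the System.Tests.Perf_Random microbenchmarks, assuming BenchmarkDotNet; the class layout, field names, and argument values below are illustrative rather than the repository's exact source:

```csharp
using System;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

// Illustrative only: the real Perf_Random benchmarks live in the
// dotnet/performance repository and may differ in setup and naming.
public class Perf_Random_Sketch
{
    private readonly Random _seeded = new Random(123456789);   // seeded path
    private readonly Random _unseeded = new Random();          // unseeded path
    private readonly byte[] _bytes = new byte[1000];

    [Benchmark]
    public int Next_int() => _seeded.Next(10000);

    [Benchmark]
    public int Next_int_int() => _seeded.Next(100, 10000);

    [Benchmark]
    public double NextDouble() => _seeded.NextDouble();

    [Benchmark]
    public void NextBytes() => _seeded.NextBytes(_bytes);

    [Benchmark]
    public int Next_unseeded() => _unseeded.Next();

    [Benchmark]
    public Random ctor() => new Random();
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<Perf_Random_Sketch>();
}
```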
For the avoidance of doubt, "_unseeded" and "ctor" are the ones using the new algorithm. Below I've rearranged @DrewScoggins's numbers into two tables: the seeded benchmarks in the first table (same algorithm) and the unseeded ones in the second table (new algorithm), each in pairs of rows, old first then new. The seeded ones vary, but there's a possible regression in some, perhaps due to the new indirection.
The unseeded ones, I think, show the improvements we expected.
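To make the seeded/unseeded split concrete: which path a Random instance takes depends only on the constructor used. A minimal sketch, assuming (per the discussion above) that the unseeded constructor picks up the new algorithm while the seeded constructor keeps the compatible legacy one:

```csharp
using System;

class SeededVsUnseeded
{
    static void Main()
    {
        // Unseeded: the path the "_unseeded" and "ctor" benchmarks exercise,
        // and (per the change discussed above) the one expected to pick up
        // the new, faster algorithm.
        var unseeded = new Random();

        // Seeded: stays on the compatible legacy algorithm, so it mostly
        // pays for the extra indirection rather than gaining speed.
        var seeded = new Random(12345);

        Console.WriteLine(unseeded.Next());
        Console.WriteLine(seeded.Next()); // deterministic for a given seed
    }
}
```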
Note we have no perf test for
What did it look like on 32-bit, Drew? That implementation is different.
Ohhh! That makes much more sense then. I'm fine accepting the small regressions in the seeded ones; we knew about those, and I don't think they matter. It's good to have the tests to catch major unexpected regressions, but if someone is using the seeded ctor, they're either (a) just trying to provide better randomness via their own seed, in which case they should stop doing so, or (b) trying to get reproducibility, in which case it's most likely in a test or something where a small regression isn't a big deal.
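As an illustration of case (b), reproducibility means two instances constructed with the same seed produce the same sequence, which is what deterministic tests rely on; a small per-call regression rarely matters there. A minimal sketch:

```csharp
using System;
using System.Linq;

class ReproducibilityCheck
{
    static void Main()
    {
        // Scenario (b): two instances with the same seed produce the same
        // sequence. Per-call throughput is rarely the concern here.
        var a = new Random(2021);
        var b = new Random(2021);

        bool identical = Enumerable.Range(0, 100).All(_ => a.Next() == b.Next());
        Console.WriteLine(identical ? "sequences match" : "sequences differ");
    }
}
```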
OK fine.
Run Information
Regressions in System.Tests.Perf_Random
Historical Data in Reporting System
Repro
Payloads: Baseline, Compare
Histogram
System.Tests.Perf_Random.Next_int_int: Compare Jit Disasm
System.Tests.Perf_Random.Next_int: Baseline Jit Disasm, Compare Jit Disasm
System.Tests.Perf_Random.NextDouble: Baseline Jit Disasm, Compare Jit Disasm
Docs
Profiling workflow for dotnet/runtime repository
Benchmarking workflow for dotnet/runtime repository