Something is wrong with the benchmarks. What I can say right now is that if you reorder your test cases, you get different results. I can't tell yet why this happens; there could be myriad reasons. I want to start with the most extreme case: the first test case executes much faster than the others. I tried it on both my laptop and my PC and got the same results. That's why I think that the "/" route executes much faster than others #225 (that one is still a distinct case, but the difference there is not so dramatic). Looking through the issues/PRs where people posted their benchmarks, I sometimes saw the same picture (#176 (comment)). If you have the chance, please run some tests around the first test case: create multiple copies of the same test and see whether the results differ depending on the test's position, or just swap some tests and see if anything changes.
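The suggested experiment can be sketched as a small standalone script (this is a hypothetical repro, not the project's benchmark suite): run the *same* workload at several positions in the run order and compare the timings. If position matters (e.g. JIT warm-up or caching effects), the first run should stand out from the rest. The `workload` function here is just an assumed stand-in for a route lookup.

```javascript
// Hypothetical repro: identical "test cases" placed at positions 1..5.
// If benchmark results depend on ordering, position 1 should differ.

function workload() {
  // Stand-in for a route lookup; any small hot function works.
  let acc = 0;
  for (let i = 0; i < 1e5; i++) acc += i % 7;
  return acc;
}

function timeRun(fn, iterations) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) fn();
  const end = process.hrtime.bigint();
  return Number(end - start) / 1e6; // elapsed time in milliseconds
}

const results = [];
for (let pos = 1; pos <= 5; pos++) {
  results.push({ position: pos, ms: timeRun(workload, 200) });
}
console.table(results);
```

Since all five entries time the exact same function, any consistent gap between position 1 and the others points at run-order effects rather than the code under test.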
ivan-tymoshenko changed the title from "Benchmark result depends on their order" to "Benchmark results depend on their order" on Jan 4, 2022.
I'm not sure what's happening. It's really hard to separate your hardware/OS from V8, the optimizing compiler, the benchmark library, etc. to find out where the problem is. So first, I want to know whether this problem is specific to my setup or not.