Right now, JIT and lazy initialization may interfere with the results. To avoid this, we could implement the following:

- run the tested code in a loop and stop as soon as the previous result turns out to be faster than the current one (sure, that could be a fluctuation caused by an external reason, but running all tests X times would take forever to execute); see the first sketch after this list;
- extend the test runner communication protocol to support several notifications from within one test, and write only the best result to the results.log file; see the second sketch after this list.

Unfortunately, this requires changes in all tests.
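A minimal sketch of the first option (stopping the loop once a run is slower than the previous one), written in Python purely for illustration; the function and parameter names are hypothetical and not part of the actual test runner:

```python
import time

def run_until_no_improvement(workload, max_runs=10):
    """Repeat the workload, keep the fastest time, and stop as soon as
    an iteration is slower than the one before it."""
    best = None
    previous = None
    for _ in range(max_runs):
        start = time.monotonic()
        workload()
        elapsed = time.monotonic() - start
        if best is None or elapsed < best:
            best = elapsed
        # Stop once the previous run was already faster than this one;
        # a GC pause or other external fluctuation can still distort a
        # single iteration, hence the max_runs safety cap.
        if previous is not None and previous < elapsed:
            break
        previous = elapsed
    return best
```

For example, `run_until_no_improvement(lambda: sum(range(10**6)))` would return the best of at most ten timed runs.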
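And a hypothetical sketch of the runner side of the second option: accepting several result notifications from one test and writing only the best one to results.log. The message format and file layout here are assumptions, not the project's actual protocol:

```python
def record_best_results(notifications, log_path="results.log"):
    """notifications: iterable of (test_name, elapsed_seconds) pairs,
    possibly with several entries per test; only the fastest entry per
    test is written to the log."""
    best = {}
    for name, elapsed in notifications:
        if name not in best or elapsed < best[name]:
            best[name] = elapsed
    with open(log_path, "a") as log:
        for name, elapsed in sorted(best.items()):
            log.write(f"{name}\t{elapsed:.6f}\n")
```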
That sounds good; running the actual benchmark workload 2+ times seems better than a warmup with a different workload.
Another way, as I suggested in #246, is to not warm up and just avoid counting startup. Lazy initialization that takes 0.6s seems excessive and is arguably an issue of the library used.
JIT warm-up has been removed, and the lazy-initialization issues are being addressed (not finished yet, though) by testing that the code works properly before the benchmark; a sketch of such a check is shown below.
The proposed idea has been tested and dismissed. While the 2nd or 3rd iteration can be faster, that is not always the case, since GC can affect the results quite drastically. Fine-tuning would be required, and it is out of scope for this project.
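The correctness check mentioned above could look roughly like the sketch below (Python, hypothetical names; the verification input may well differ from the measured one in the real tests). The untimed check also forces any lazy initialization to happen outside the measured region:

```python
import time

def benchmark(workload, verify_input, verify_expected, bench_input):
    # Untimed correctness check; this also triggers any lazy
    # initialization in the libraries used, so it stays outside the
    # measured region.
    if workload(verify_input) != verify_expected:
        raise RuntimeError("tested code produced a wrong result")
    start = time.monotonic()
    workload(bench_input)
    return time.monotonic() - start
```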