benchmark #4517
base: master
Conversation
Should also cover comment in #4666
Codecov Report

@@           Coverage Diff           @@
##           master    #4517   +/-  ##
=======================================
  Coverage   99.44%   99.44%
=======================================
  Files          73       73
  Lines       14539    14539
=======================================
  Hits        14458    14458
  Misses         81       81

Continue to review full report at Codecov.
cc @Anirban166 this old draft PR has some potential new benchmarking tests for #6078. If we extract the good tests from here, I think we could close this PR too.
After discussing with Matt on Slack, we decided to narrow the scope of #4687, so the new benchmarking feature can be more usable and avoid the extra maintenance burden that tracking historical timings, and the other features initially listed here, would require.
As a starting point I took the system.time tests that had already been taken out of the tests.Rraw file (to reduce the run time of the main test script). Those tests have been moved to a new benchmark() function that is meant to replace the test() function when system.time is needed.
To keep things simple, we don't need a new benchmark.data.table(); we can just call
test.data.table("benchmarks.Rraw")
or
cc("benchmarks.Rraw")
in dev mode. It will already recognize benchmark calls in the test script. This is still very much a starting point, so any feedback is very welcome.
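For illustration, here is a minimal sketch of what such a benchmark() helper could look like. The name benchmark comes from this PR, but the signature, the upper.bound argument, and the PASS/FAIL reporting are assumptions, not the actual implementation:

```r
# Hypothetical sketch: like test(), but compares the elapsed system.time()
# of an expression against an upper bound instead of comparing values.
benchmark = function(num, expr, upper.bound) {
  t = system.time(expr)[["elapsed"]]   # expr is evaluated here, once
  ok = t <= upper.bound
  cat(sprintf("benchmark %s: %.3fs (limit %.3fs) %s\n",
              num, t, upper.bound, if (ok) "PASS" else "FAIL"))
  invisible(ok)
}

# usage, in the style of a .Rraw test script
res = benchmark(1.1, Sys.sleep(0.01), upper.bound = 5)
```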
Ideas for improvement:
- times argument to run the expression multiple times and take the mean/median to compare. This would allow a tighter tolerance (test 1110). Initial proposal at bc8a8be.
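The proposed times argument could be sketched like this: repeat the expression and compare the median elapsed time, which smooths out one-off noise and so permits a tighter bound. The signature and defaults here are assumptions:

```r
# Hypothetical sketch of benchmark() with a `times` argument: capture the
# unevaluated expression, run it `times` times in the caller's environment,
# and test the median elapsed time against the upper bound.
benchmark = function(num, expr, upper.bound, times = 1L) {
  e = substitute(expr)
  env = parent.frame()
  elapsed = vapply(seq_len(times), function(i)
    system.time(eval(e, env))[["elapsed"]], numeric(1))
  t = median(elapsed)
  ok = t <= upper.bound
  cat(sprintf("benchmark %s: median %.3fs over %d runs (limit %.3fs) %s\n",
              num, t, times, upper.bound, if (ok) "PASS" else "FAIL"))
  invisible(ok)
}

res = benchmark(1110, Sys.sleep(0.01), upper.bound = 5, times = 3L)
```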
This PR brings a new set of scripts, and internal functions, to measure the performance of data.table. They are not run in any of our workflows as of now, but rather should be run manually. For now there is no point in merging this branch; I am opening the PR to make it easier to document and refer to amongst GitHub issues.
For example, it addresses "add timing test for many .SD cols #3797", for which scripts are defined in the
benchmarks.Rraw
file. Yet to close #3797 we need to add rules to be checked after all benchmarks, to confirm that optimize=0 is not that much different from optimize=Inf.
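Such a post-benchmark rule might look like the following sketch: time the same many-.SD-cols query under both optimization settings and check their ratio. The data size, the 2x threshold, and the small absolute slack are illustrative assumptions, not values from this PR:

```r
library(data.table)

# A wide table so that many .SD columns are aggregated
DT = as.data.table(replicate(50L, rnorm(1e4), simplify = FALSE))

# Time one .SD aggregation under a given datatable.optimize level,
# restoring the option afterwards.
time_with = function(opt) {
  old = options(datatable.optimize = opt)
  on.exit(options(old))
  system.time(DT[, lapply(.SD, sum)])[["elapsed"]]
}

t0   = time_with(0)    # no internal optimization
tInf = time_with(Inf)  # full optimization (GForce etc.)

# Illustrative rule: optimize=0 should stay within 2x of optimize=Inf,
# plus a small absolute slack because tiny timings are noisy.
ok = t0 <= 2 * tInf + 0.5
cat(sprintf("optimize=0: %.3fs, optimize=Inf: %.3fs, within bound: %s\n",
            t0, tInf, ok))
```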