update criterion settings #276
Conversation
Should we refactor the benchmark configs to a common location? Doing it will be a bit involved, since we'll have to add a new
Yeah, an importable default would be nice for others and might reduce barriers to contributing benchmarks.
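One way to realize that importable default is a tiny shared helper that every bench file pulls its settings from. A minimal sketch, assuming a hypothetical `BenchConfig`/`default_bench_config` (these names are illustrative, not existing linfa APIs):

```rust
use std::time::Duration;

/// Hypothetical shared defaults for all linfa benchmarks, so tuning
/// happens in one place instead of per-crate constants.
pub struct BenchConfig {
    pub sample_size: usize,
    pub measurement_time: Duration,
    pub warm_up_time: Duration,
}

/// The values here are placeholders; real defaults would come from
/// whatever this PR settles on.
pub fn default_bench_config() -> BenchConfig {
    BenchConfig {
        sample_size: 20,
        measurement_time: Duration::from_secs(10),
        warm_up_time: Duration::from_secs(3),
    }
}

fn main() {
    let cfg = default_bench_config();
    // A bench file would then build its Criterion instance from this, e.g.:
    //   Criterion::default()
    //       .sample_size(cfg.sample_size)
    //       .measurement_time(cfg.measurement_time)
    //       .warm_up_time(cfg.warm_up_time)
    println!(
        "sample_size={} measurement_time={}s warm_up={}s",
        cfg.sample_size,
        cfg.measurement_time.as_secs(),
        cfg.warm_up_time.as_secs()
    );
}
```

Keeping the struct free of any `criterion` types means the helper crate stays dependency-light, and each bench decides how to apply the values.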
Also, can you remove the (1000, 5) param for
Codecov Report
Base: 38.59% // Head: 38.44% // Decreases project coverage by -0.16% ⚠️
Additional details and impacted files:

@@            Coverage Diff             @@
##           master     #276      +/-   ##
==========================================
- Coverage   38.59%   38.44%   -0.16%
==========================================
  Files          93       95       +2
  Lines        6223     6227       +4
==========================================
- Hits         2402     2394       -8
- Misses       3821     3833      +12
☔ View full report at Codecov.
When I find more time to work on this, I will do the above and also add a configuration for profiling with pprof, in the spirit of helping out with the investigation-related issues.
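For reference, Criterion supports pluggable profilers, and the `pprof` crate ships a Criterion integration behind its `criterion` feature. A sketch of what such a configuration might look like (not tested against linfa's bench setup; the benchmark body is a placeholder):

```rust
use criterion::{criterion_group, criterion_main, Criterion};
use pprof::criterion::{Output, PProfProfiler};

fn bench_example(c: &mut Criterion) {
    // Placeholder body; a real bench would call into a linfa algorithm.
    c.bench_function("example", |b| b.iter(|| 1 + 1));
}

criterion_group! {
    name = benches;
    // Emit a flamegraph per benchmark, sampling at 100 Hz.
    config = Criterion::default()
        .with_profiler(PProfProfiler::new(100, Output::Flamegraph(None)));
    targets = bench_example
}
criterion_main!(benches);
```

Profiling is then opted into at run time, e.g. `cargo bench --bench example -- --profile-time 10`, so normal benchmark runs are unaffected.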
@YuhanLiin how do you add a feature? I tried unsuccessfully and couldn't find good docs on the subject. |
Docs are here. You need to add the feature to the
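For anyone landing here later: Cargo features are declared in each crate's `Cargo.toml`. A hedged sketch of the general shape (the feature name `benchmarks` and the version number are illustrative, not taken from linfa's actual manifest):

```toml
[features]
# Hypothetical feature gating benchmark-only code and dependencies.
benchmarks = ["dep:criterion"]

[dependencies]
# `optional = true` ties the dependency to the feature above;
# the version is a placeholder.
criterion = { version = "0.4", optional = true }
```

The feature then has to be enabled explicitly when running, e.g. `cargo bench --features benchmarks`.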
Got the feature in, but now there is a peculiar failure in the Ubuntu builds regarding the tests for linfa-nn. Not sure what's prompting this error, as I haven't adjusted the source code. The failure appears on this branch but is not evident when running the same check from WSL on the master branch.
Actually, pushing it anyway; if it isn't desirable, we can always revert.
Try
no dice with
Try
no dice
This PR updates the Criterion settings for our benchmarked algorithms. Over the next day or so I will update this description with the before/after timing stats for each algorithm; alternatively, if it is okay to do it for just a few, I'll do that instead.
| Benchmark | Old | New |
| --- | --- | --- |
| linfa-ica | 1 min 39 s | 3 min 38 s |
| linfa-pls | 4 min 16 s | 9 min 42 s |
| linfa-linear | 1 min 29 s | 3 min 37 s |
| linfa-nn | 3 min 43 s | 8 min 57 s |
| linfa-clustering (k-means) | 4 min 6 s | 7 min 34 s |
| linfa-clustering (gaussian_mixture) | 51 s | 2 min 5 s |
| linfa-clustering (dbscan) | 1 min 14 s | 2 min 50 s |
| linfa-clustering (appx_db_scan) | 47 s | 1 min 27 s |
| linfa-ftrl | 1 min 43 s | 3 min 57 s |
| linfa-trees | 55 s | 2 min 38 s |