One area where we are lacking right now is benchmark coverage. I would like to improve that in the coming weeks.
Infrastructure for benchmarking
Benchmarks are an essential part of linfa. They should give contributors feedback on their implementations and give users confidence that we're doing good work. To automate the process we have to employ a CI system that creates a benchmark report on (a) PRs and (b) commits to the master branch. This is difficult with wall-clock benchmarks (e.g. criterion.rs) but possible with valgrind.
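The noise problem with wall-clock timing can be seen in a minimal std-only sketch (the `dot` workload and helper names are hypothetical, not linfa code): two runs of identical code rarely report the same elapsed time, which is what makes threshold-based CI checks on wall-clock numbers flaky, while instruction counts from valgrind are deterministic.

```rust
use std::time::Instant;

// Hypothetical workload standing in for a linfa algorithm.
fn dot(a: &[f64], b: &[f64]) -> f64 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

// Time a single call in nanoseconds.
fn time_once(a: &[f64], b: &[f64]) -> u128 {
    let start = Instant::now();
    // black_box keeps the compiler from optimizing the call away.
    std::hint::black_box(dot(a, b));
    start.elapsed().as_nanos()
}

fn main() {
    let a = vec![1.0; 100_000];
    let b = vec![2.0; 100_000];
    // Two runs of the exact same code almost never agree on the wall clock,
    // so a CI gate like "no more than 2% slower" triggers spuriously.
    println!("run 1: {} ns", time_once(&a, &b));
    println!("run 2: {} ns", time_once(&a, &b));
}
```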
- add a workflow executing the benchmarks on PRs/commits to master and creating reports in JSON format
- build a script parsing the reports and posting them as comments on the PR (see here)
- add a page to the website which displays the reports in a human-readable way
- (pro) use polynomial regression to find the influence of predictors (e.g. #weights, #features, #samples, etc.) on targets (e.g. L1 cache misses, cycles, etc.) and post the algorithmic complexity as well
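The last item could be prototyped with an ordinary least-squares fit on log-log data: if a counter grows as c·n^k in a single predictor n, then log(counter) is linear in log(n) with slope k. A std-only sketch of that single-predictor simplification (all names hypothetical; the sample data is synthetic, generated from an exact n² law purely for illustration):

```rust
// Estimate the exponent k in counter ≈ c * n^k from (n, counter) samples
// via least squares on log-log transformed data: the fitted slope is k.
fn complexity_exponent(samples: &[(f64, f64)]) -> f64 {
    let m = samples.len() as f64;
    let (mut sx, mut sy, mut sxx, mut sxy) = (0.0, 0.0, 0.0, 0.0);
    for &(n, y) in samples {
        let (lx, ly) = (n.ln(), y.ln());
        sx += lx;
        sy += ly;
        sxx += lx * lx;
        sxy += lx * ly;
    }
    // Standard closed-form slope of a simple linear regression.
    (m * sxy - sx * sy) / (m * sxx - sx * sx)
}

fn main() {
    // Synthetic measurements following an exact 5 * n^2 law, for illustration only.
    let samples: Vec<(f64, f64)> = [100.0, 1_000.0, 10_000.0]
        .iter()
        .map(|&n| (n, 5.0 * n * n))
        .collect();
    let k = complexity_exponent(&samples);
    println!("estimated complexity: O(n^{k:.2})");
}
```

The issue proposes regressing on several predictors at once (#weights, #features, #samples), which would need a multivariate fit; the one-dimensional slope above is just the simplest useful version.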
While Iai is more consistent, it hasn't been updated in almost 2 years and can't exclude setup code from the benchmarks. I'm also not sure how valid instruction-count benchmarks would be for multithreaded code. I believe we should stick with Criterion for future benchmarks. For most changes we can just do benchmark comparisons manually. For CI usage I'd rather set up a dedicated benchmarking machine, like rustc-perf does.