We have two related challenges:

- Some parameters we want to test are only relevant for Arrow. Some, like whether to return an Arrow Table or a data.frame when reading a file, may be specific to a single benchmark. Others, like environment variables that only affect Arrow, should be controllable at the global level. We don't want to run non-Arrow benchmarks parametrized by Arrow-only environment variables: that's wasteful and slow.
- When running benchmarks continuously (on every commit), we only want to run the Arrow benchmarks; there's no point wasting electrons on other people's (unchanged) code.

What if we added a function (`arrow_params()`?) similar to `default_params` that each benchmark can register to indicate which param combinations involve Arrow? We could use that in whatever `run_all()` function we add for conbench, and we could also use it to add variations for Arrow env vars only to the combinations that involve Arrow.
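To make the idea concrete, here is a minimal language-agnostic sketch (written in Python rather than the package's R, purely for illustration). The `is_arrow` predicate plays the role of the proposed `arrow_params()` registration, and `ARROW_USE_THREADS` stands in as a hypothetical Arrow-only environment variable; all names here are assumptions, not existing API.

```python
from itertools import product

def expand_grid(**params):
    """All combinations of the given parameter values, as a list of dicts."""
    keys = list(params)
    return [dict(zip(keys, values)) for values in product(*params.values())]

# default_params-style grid for a hypothetical file-reading benchmark
cases = expand_grid(source=["csv", "parquet"], reader=["arrow", "data.table"])

# arrow_params()-style predicate: which combinations involve Arrow?
def is_arrow(case):
    return case["reader"] == "arrow"

# Arrow-only environment-variable variations (hypothetical variable name)
arrow_env_vars = expand_grid(ARROW_USE_THREADS=["true", "false"])

# Cross only the Arrow-involving cases with the env-var variations;
# non-Arrow cases each run once with no extra env settings.
runs = []
for case in cases:
    if is_arrow(case):
        for env in arrow_env_vars:
            runs.append({**case, "env": env})
    else:
        runs.append({**case, "env": {}})

# 2 Arrow cases x 2 env settings + 2 non-Arrow cases = 6 runs
```

This keeps the run matrix small: the env-var dimension multiplies only the Arrow subset of the grid, which is exactly the filtering a conbench-facing `run_all()` could reuse to select Arrow-only combinations for continuous runs.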