I noticed a few dependabot PRs in this repo, such as this one, that upgrade the dependencies of specific benchmarks. While this is good practice in general, for a benchmark suite I think we'd want to upgrade these dependencies as infrequently as possible, to keep benchmarking results comparable with one another (and to avoid always having to rerun baselines). Occasionally we are forced to upgrade, for example to gain compatibility with a new version of CPython, but that should be deliberate.
(It's possible there is a security counterargument to be made, but I'm not a security expert and I don't know specifically whether that matters or not).
Would it make sense to update the dependabot config to only look at the top-level dependencies of pyperformance itself rather than the dependencies of specific benchmarks?
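One way to express this is to list only the repository root in the Dependabot config, so the per-benchmark requirements files are never scanned. A minimal sketch, assuming a standard `.github/dependabot.yml` layout (the exact existing config, directories, and schedule in this repo may differ):

```yaml
# .github/dependabot.yml — sketch only; adapt to the repo's actual config.
version: 2
updates:
  # Track only the top-level Python dependencies of pyperformance itself
  # (manifests at the repo root), not the pinned requirements files that
  # live alongside the individual benchmarks in subdirectories.
  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "monthly"
```

With no `updates` entry pointing at the benchmark subdirectories, Dependabot stops opening PRs against their pinned requirements, and those pins only change when someone bumps them deliberately.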
> for a benchmark suite, I think we'd want to upgrade these dependencies as infrequently as possible to keep benchmarking results comparable with one another
+1
> It's possible there is a security counterargument to be made
I don't consider the benchmarks to be a security concern, though I haven't done a thorough analysis of all the benchmark code. IIRC, none of the benchmarks access anything other than files they already own or temporary network services they start themselves.
> Would it make sense to update the dependabot config to only look at the top-level dependencies of pyperformance itself