I suppose one of the most basic ideas is to strategically place some benchmarks and compare them between commits, right? There are built-in tools such as benchstat and benchcmp that we can use. We could also:
a) have a Makefile command that compares HEAD vs origin/master, and fails if the regression is over a threshold
b) build a GitHub Actions workflow that does the same thing, and open-source it under the beatlabs organization
c) use some off-the-shelf solution like gobenchdata, gobenchui or cob to keep track of performance over time
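Whichever option we pick, the starting point is the same: a benchmark committed alongside the code. A sketch of what that could look like (the `buildPath` helper below is hypothetical, not actual Patron code; it is wrapped in a `main` so it runs standalone, but in the repo only the `Benchmark*` function would live in a `_test.go` file):

```go
package main

import (
	"fmt"
	"strings"
	"testing"
)

// buildPath is a hypothetical hot-path helper standing in for a real
// Patron component; the benchmark shape is what matters here.
func buildPath(parts []string) string {
	return "/" + strings.Join(parts, "/")
}

// BenchmarkBuildPath is the kind of function we would compare between
// HEAD and origin/master with benchstat.
func BenchmarkBuildPath(b *testing.B) {
	parts := []string{"api", "v1", "orders", "42"}
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		_ = buildPath(parts)
	}
}

func main() {
	// testing.Benchmark runs a benchmark outside of `go test`.
	res := testing.Benchmark(BenchmarkBuildPath)
	fmt.Printf("%d ns/op over %d iterations\n", res.NsPerOp(), res.N)
}
```

Any of the three options above would then diff the `ns/op` (and `allocs/op`) numbers this produces between two commits.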
Some thoughts:
It shouldn't be a hard CI requirement: a benchmark can get slower because we deliberately added features, not because of an accidental performance regression
These benchmarks should be relatively stable, so that we don't end up comparing apples and oranges
Hmm, you're right. The benchstat tool only reports differences when they are statistically significant, which should somewhat normalize this fluctuation. To do so it needs roughly 4-5 runs of each benchmark, before and after, which makes the benchmark step slower though. I'll keep looking into this.
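To illustrate why several runs are needed, here is a rough sketch (with a made-up workload) of collecting the kind of repeated samples that `go test -bench=. -count=5` would emit and that benchstat's significance test consumes:

```go
package main

import (
	"fmt"
	"sort"
	"testing"
)

// work is a throwaway CPU-bound loop standing in for a real benchmark.
func work(b *testing.B) {
	s := 0
	for i := 0; i < b.N; i++ {
		for j := 0; j < 1000; j++ {
			s += j
		}
	}
	_ = s
}

func main() {
	// Collect 5 samples, mirroring `go test -bench=. -count=5`.
	// The run-to-run spread is the noise benchstat has to see past.
	samples := make([]float64, 0, 5)
	for i := 0; i < 5; i++ {
		res := testing.Benchmark(work)
		samples = append(samples, float64(res.NsPerOp()))
	}
	sort.Float64s(samples)
	fmt.Printf("ns/op spread: min=%.0f median=%.0f max=%.0f\n",
		samples[0], samples[2], samples[4])
}
```

On a noisy CI runner the min/max spread can easily exceed the regression threshold we would want to enforce, which is exactly why a single before/after pair isn't trustworthy.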
I do not have a good answer on how to proceed with this. Maybe it is good enough as is, since GitHub Actions provides a specific instance type for CI. If they change the instance type to something else we will see an artificial improvement or regression across the board.
Is your feature request related to a problem? Please describe
There is a need to have performance tests for Patron.
Let's investigate how to do this.