The library contains a benchmark directory with a single benchmark. It would be nice to have a check that the performance using the library is actually better than the performance without it.
I'm not sure how to implement that. People with experience in Catch2 and CMake wanted :)
I don't have experience with performance benchmarking in Catch2, but I've used Catch2 with CMake before. I'll look into what people normally do in these cases.
It looks like comparing performance over time isn't possible with Catch2 alone. It does seem, though, that GitHub Actions has support for Catch2 benchmark output and performance tracking. What if we use that for performance regression testing?
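For reference, a workflow along these lines could feed the Catch2 console output into `benchmark-action/github-action-benchmark` (which has a `catch2` tool mode) and fail on regressions. This is only a sketch: the binary path and the `BUILD_BENCHMARKS` CMake option are placeholders for whatever this repo actually uses.

```yaml
name: benchmark
on: [push]
jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and run benchmarks
        run: |
          # Hypothetical option/path; adjust to this repo's CMake setup.
          cmake -B build -DBUILD_BENCHMARKS=ON
          cmake --build build
          ./build/benchmark/benchmarks > benchmark_result.txt
      - name: Track results and alert on regression
        uses: benchmark-action/github-action-benchmark@v1
        with:
          tool: 'catch2'
          output-file-path: benchmark_result.txt
          fail-on-alert: true
```

The action stores past results (e.g. on a `gh-pages` branch) and raises an alert when a benchmark slows down beyond a configurable threshold.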
I managed to build the tests with the performance test included, but I couldn't quite tell whether we actually verify anything here. I see that when running without `-r junit` (i.e. with the full textual output) we get the times for each of the subsections. Are we verifying those somehow?
When you build the performance test it runs alongside the regular tests, but no check is actually performed. I usually run the binary manually and eyeball the output.
So it's all very brittle today. I like the idea from the link of keeping reference performance numbers and comparing against those!