Confusion over Micronaut benchmarks #7618
Replies: 4 comments 1 reply
-
Turns out someone who is not a Micronaut maintainer or contributor submitted changes, which were merged, that limit the connection pool size to 5: https://github.com/TechEmpower/FrameworkBenchmarks/pull/6864/files#diff-2769de54789a707c3c745d0628fd88523b3caccca52a4c1eed4e7d5a3d7697c0R9 Is there any control over who is allowed to modify the framework benchmarks?
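For context, here is a minimal sketch of the kind of pool-size cap that PR introduced, written against HikariCP directly. The actual benchmark sets the pool through Micronaut configuration, so the class, JDBC URL, and credentials below are illustrative assumptions, not the benchmark's real code.

```java
// Illustrative sketch only: shows a HikariCP pool capped at 5 connections,
// the kind of limit the linked PR applied. Names and values are assumptions.
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolSizeExample {
    public static void main(String[] args) {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://tfb-database:5432/hello_world"); // illustrative URL
        config.setUsername("benchmarkdbuser");   // illustrative credentials
        config.setPassword("benchmarkdbpass");
        // A cap of 5 allows at most 5 concurrent queries; under the benchmark's
        // high-concurrency load, every other request waits for a free connection,
        // which explains the drop in throughput described in this thread.
        config.setMaximumPoolSize(5);
        try (HikariDataSource dataSource = new HikariDataSource(config)) {
            // the framework's data layer would borrow connections from dataSource
        }
    }
}
```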
-
Fix is here: #7622
-
The above PR is from the Micronaut core engineers. Could we ask that, in future, PRs from individuals who are not part of the framework team are not merged? The last PR limited the connection pool to 5, which crushed our benchmark results and makes us look bad when that is not the case.
-
I think that we could create a GH action for that. Currently, the GH action only runs the tests for the frameworks that changed and also labels the PR. We could add an optional file in the framework (and/or language) dir with a list of GH users whose review would be required or enforced (see the sketch below). Update: and not require it if it is a toolset change.
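As a rough illustration of the review-gate idea (not an existing check in the benchmarks repo), here is a sketch of the logic such an action could run. It assumes a hypothetical REVIEWERS file in a framework directory listing required GitHub usernames, one per line; the file name, layout, and example values are all assumptions.

```java
// Hypothetical review-gate check: merge is allowed only if no REVIEWERS file
// exists for the changed framework, or if at least one PR approver is listed in it.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Set;
import java.util.stream.Collectors;

public class ReviewGate {

    static boolean mayMerge(Path frameworkDir, Set<String> approvers) throws IOException {
        Path reviewersFile = frameworkDir.resolve("REVIEWERS"); // hypothetical file
        if (!Files.exists(reviewersFile)) {
            return true; // no required reviewers declared for this framework
        }
        Set<String> required = Files.readAllLines(reviewersFile).stream()
                .map(String::trim)
                .filter(line -> !line.isEmpty() && !line.startsWith("#"))
                .collect(Collectors.toSet());
        return approvers.stream().anyMatch(required::contains);
    }

    public static void main(String[] args) throws IOException {
        Path frameworkDir = Path.of("frameworks/Java/micronaut");
        Set<String> prApprovers = Set.of("some-reviewer"); // approvals would come from the PR
        System.out.println(mayMerge(frameworkDir, prApprovers)
                ? "merge allowed"
                : "framework-team review still required");
    }
}
```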
-
Hi,
The Micronaut framework code hasn't changed in 11 months: https://github.com/TechEmpower/FrameworkBenchmarks/tree/master/frameworks/Java/micronaut
Round 20 results: https://www.techempower.com/benchmarks/#section=data-r20&l=zik0vz-6bj&test=composite
Round 21 results: https://www.techempower.com/benchmarks/#section=data-r21&l=zik0vz-6bj&test=composite
Basically, there is a huge discrepancy and drop-off in performance in Round 21 that doesn't make much sense, and I don't believe these results are valid.
Can we request a re-run of these tests, since we don't understand what has changed?