R2R: Add a CI or perf validation for # of methods jitted #88533
Comments
Note that "hello world" requires System.Console, which is not prejitted in dotnet/runtime, so maybe we can instead just test a completely blank app for simplicity.
Should we measure this in the perf lab, in the same way as we are tracking other microbenchmarks? It would be a good idea to track this for more than just a basic console app.
Sure, it's just that a "hello world" test in dotnet/runtime could block PRs that break R2R early; it would not even allow a contributor to push such a change.
I do not think we would use this as a strict gate for blocking PRs.
Perhaps we should have a simple validation that the majority of R2R methods are being used, and visualize the number-of-jitted-methods metric for existing benchmarks so we can see the trend across previews.
I have a proposal for this problem. First, let's choose which type of app to test (a blank app?); we can always add more apps later as needed. Then, let's run a few experiments to see how many jitted methods to expect. From there, we can write a test that fails if it finds more jitted methods than expected (an exact number, or a range?). However, this test needs to be non-blocking, since a small increase is not a big deal unless it is a huge regression, especially given that the .NET codebase is constantly changing and evolving. As for the perf lab, I think that is a very good idea; we can consider it the next step after getting the number-of-jitted-methods test up and running. I would like to hear everyone's thoughts on this @mangod9 @jkotas @trylek @EgorBo
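To make the proposal concrete, here is a minimal sketch of what such a test app could look like, assuming the harness treats a non-zero exit code as a (non-blocking) failure and that we can rely on System.Runtime.JitInfo.GetCompiledMethodCount (available since .NET 7). The threshold of 50 is a placeholder; the real value (or range) would come from the baseline experiments described above.

```csharp
using System;
using System.Runtime;

class Program
{
    // Placeholder threshold; the real value (or range) would come from
    // baseline runs on the target configuration.
    const long MaxExpectedJittedMethods = 50;

    static int Main()
    {
        // A "blank" app: do nothing beyond what the runtime itself needs,
        // then ask the JIT how many methods it has compiled so far.
        // Read the count before touching Console, since System.Console is
        // not prejitted in dotnet/runtime and would inflate the number.
        long jitted = JitInfo.GetCompiledMethodCount();

        Console.WriteLine($"Jitted methods: {jitted}");
        return jitted <= MaxExpectedJittedMethods ? 0 : 1;
    }
}
```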
A few notes: this has already happened twice, and the first sign was a significant regression in startup time / time to first request in the TechEmpower (TE) benchmarks (we probably want to add a graph for "methods jitted" there).
I do not think there is a way to disable ETW events in CoreCLR today.
Real-world performance is always influenced by many factors that produce too much noise. That is why we run performance benchmarks in isolated environments, so that we can get stable results. It is not the real world, but it is something that we can reason about.
Do you have links to these two instances? We should validate that any proposals here would actually be effective in catching these two regressions early.
What are the benchmarks that we track startup time for in the perf lab? I think that the number of methods JITed should be tracked for the exact same set of benchmarks.
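If the perf lab needs this metric for arbitrary benchmark processes, one possible approach is an out-of-process EventPipe session that counts MethodJittingStarted events from the Microsoft-Windows-DotNETRuntime provider. This is only a rough sketch, assuming the Microsoft.Diagnostics.NETCore.Client and Microsoft.Diagnostics.Tracing.TraceEvent packages, and leaving the harness wiring (how the pid is chosen, when the session is stopped) out:

```csharp
using System;
using Microsoft.Diagnostics.NETCore.Client;   // NuGet: Microsoft.Diagnostics.NETCore.Client
using Microsoft.Diagnostics.Tracing;          // NuGet: Microsoft.Diagnostics.Tracing.TraceEvent
using Microsoft.Diagnostics.Tracing.Parsers;

class JitCounter
{
    // Counts methods jitted in an already-running process, identified by pid
    // (e.g. a benchmark started by the perf harness).
    static int Main(string[] args)
    {
        int pid = int.Parse(args[0]);

        // JIT keyword of the runtime provider; verbose level is required
        // for MethodJittingStarted.
        var provider = new EventPipeProvider(
            "Microsoft-Windows-DotNETRuntime",
            System.Diagnostics.Tracing.EventLevel.Verbose,
            (long)ClrTraceEventParser.Keywords.Jit);

        var client = new DiagnosticsClient(pid);
        using EventPipeSession session = client.StartEventPipeSession(new[] { provider });

        long jitted = 0;
        using var source = new EventPipeEventSource(session.EventStream);
        source.Clr.MethodJittingStarted += _ => jitted++;

        // Blocks until the target process exits or the session is stopped.
        source.Process();

        Console.WriteLine($"Jitted methods in process {pid}: {jitted}");
        return 0;
    }
}
```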
In both cases there was a change that silently broke R2R.
These two changes broke R2R on x86 machines without AVX2. Amd64 and x86 machines with AVX2 were not affected, as far as I know. In order to catch these two breakages, we would need to run this test on a machine without AVX2.
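One option worth checking: instead of dedicated non-AVX2 hardware, the test could be launched with AVX2 disabled via the runtime's ISA configuration knob. A small sketch of that idea follows; DOTNET_EnableAVX2 (COMPlus_EnableAVX2 on older runtimes) is a real knob, but whether it exercises the same R2R acceptance paths as genuinely missing hardware support is an assumption that would need to be verified, and "JitCountTest.dll" is a hypothetical name for the test app sketched earlier.

```csharp
using System.Diagnostics;

class NoAvx2Runner
{
    // Runs the jitted-method-count test in a child process with AVX2 disabled,
    // approximating an x86/x64 machine without AVX2 on ordinary CI hardware.
    static int Main()
    {
        var psi = new ProcessStartInfo("dotnet", "JitCountTest.dll"); // hypothetical test app
        psi.Environment["DOTNET_EnableAVX2"] = "0";

        using Process proc = Process.Start(psi)!;
        proc.WaitForExit();
        return proc.ExitCode;
    }
}
```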
Moving this to .NET 9 since most infrastructure improvements, like this one, shouldn't block releases.
Tagging subscribers to this area: @hoyosjs
Issue Details:
We have recently seen more methods being JITed for a simple "helloworld" application, so getting visibility into this and ensuring we can minimize the proliferation of jitted methods via automation would be helpful. Note that we are continuously working on improving R2R handling of new functionality, but it is always good to track this metric.