[Windows] Regression in a few TechEmpower benchmarks #1668
Comments
I'm pretty sure it's only enabled when you turn profiling on.
What's the stack before EventSource? What method is doing the emitting?
Looks like a runtime issue? The event is emitted from ThreadPoolWorkQueue.cs#L440-L443.
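For context, the enqueue path only pays for ETW logging when a listener has actually enabled the provider at the required level, which is why this normally only shows up while profiling. Below is a simplified sketch of that guard pattern; it is not the actual ThreadPoolWorkQueue code, and the event source name and event are made up for illustration:

```csharp
using System.Diagnostics.Tracing;

// Hypothetical event source standing in for the runtime's FrameworkEventSource.
[EventSource(Name = "Demo-ThreadPoolSketch")]
sealed class ThreadPoolSketchEventSource : EventSource
{
    public static readonly ThreadPoolSketchEventSource Log = new();

    [Event(1, Level = EventLevel.Verbose)]
    public void EnqueueWork(long workId) => WriteEvent(1, workId);
}

static class WorkQueueSketch
{
    public static void Enqueue(object workItem)
    {
        // The IsEnabled check keeps the cost near zero unless a profiler
        // (e.g. PerfView) has turned the verbose events on.
        if (ThreadPoolSketchEventSource.Log.IsEnabled(EventLevel.Verbose, EventKeywords.All))
        {
            ThreadPoolSketchEventSource.Log.EnqueueWork(workItem.GetHashCode());
        }

        // ... the actual enqueue logic would follow here ...
    }
}
```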
I did have an issue with VS triggering machine-wide verbose listening on
@adamsitnik the drop in March you are pointing at is due to the Intel microcode updates for Spectre that were pushed by Windows Update.
Still, it probably shouldn't be logging ThreadPool events at verbose level.
How much micro benchmark coverage do we have for Extensions, especially logging?
From the traces this is specifically EventSource, which looks like people are running benchmarks for dotnet/runtime#52092, though I don't know if they are in the perf repo (it probably shouldn't be logging anything for these events unless it was switched on in PerfView specifically).
@sebastienros how can I disable the events to get a clean trace file?
I would say that it's OK; we have dedicated micro benchmarks for Caching, DI, Http, Logging and Primitives. There are 35 benchmarks: https://github.com/dotnet/performance/tree/main/src/benchmarks/micro/libraries/Microsoft.Extensions.Logging. But here the problematic logging is ETW logging, not Microsoft.Extensions.Logging.
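For reference, those micro benchmarks measure Microsoft.Extensions.Logging itself through BenchmarkDotNet, which is separate from the ETW/EventSource cost discussed here. A minimal sketch of that shape (assumption: this is not the actual dotnet/performance code, and the logger configuration is illustrative) could look like this:

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using Microsoft.Extensions.Logging;

public class LoggingBenchmarks
{
    private ILoggerFactory _factory = null!;
    private ILogger _logger = null!;

    [GlobalSetup]
    public void Setup()
    {
        // No providers are registered, so the benchmark focuses on the logging
        // plumbing rather than any particular sink.
        _factory = LoggerFactory.Create(builder => builder.SetMinimumLevel(LogLevel.Information));
        _logger = _factory.CreateLogger<LoggingBenchmarks>();
    }

    [GlobalCleanup]
    public void Cleanup() => _factory.Dispose();

    [Benchmark]
    public void LogWithArguments() => _logger.LogInformation("Request {Id} handled", 42);

    [Benchmark]
    public bool IsEnabledCheck() => _logger.IsEnabled(LogLevel.Debug);
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<LoggingBenchmarks>();
}
```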
@adamsitnik if this can be set as PerfView arguments, here is the one to change:
So the defaults shouldn't be triggering the verbose ThreadPool events.
cc @brianrob
Yeah, this looks like it's due to the fact that profiling is enabled. You can confirm that something else isn't causing this by explicitly disabling these events in PerfView. @benaadams, there is an incoming change to address the issue that you saw with VS; it will limit the collection to just
Should it be recording every queued ThreadPool item by default (i.e. at verbose level)? That will be millions of events per second.
No, it should not be. I think that this environment uses a cached version of PerfView that used to do this, but the latest version should not.
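If it helps to double-check what the runtime itself emits (independent of which PerfView version is doing the collection), an in-process EventListener can count the verbose ThreadPool/TPL events. This is only a diagnostic sketch under my own assumptions, not part of the benchmark setup:

```csharp
using System;
using System.Diagnostics.Tracing;
using System.Threading;

// Counts events from the providers that carry the ThreadPool enqueue/dequeue
// and TPL events when they are enabled at verbose level.
sealed class VerboseEventCounter : EventListener
{
    private int _count;
    public int Count => _count;

    protected override void OnEventSourceCreated(EventSource source)
    {
        if (source.Name == "System.Diagnostics.Eventing.FrameworkEventSource" ||
            source.Name == "System.Threading.Tasks.TplEventSource")
        {
            EnableEvents(source, EventLevel.Verbose, EventKeywords.All);
        }
    }

    protected override void OnEventWritten(EventWrittenEventArgs eventData)
        => Interlocked.Increment(ref _count);
}

class Program
{
    static void Main()
    {
        using var listener = new VerboseEventCounter();

        for (int i = 0; i < 100_000; i++)
        {
            ThreadPool.QueueUserWorkItem(_ => { });
        }

        Thread.Sleep(500); // let the queued work items drain
        Console.WriteLine($"Verbose events observed: {listener.Count}");
    }
}
```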
I have tried that:

crank --config .\scenarios\platform.benchmarks.yml --scenario caching --profile aspnet-citrine-win --application.collect true --application.collectArguments "BufferSizeMB=1024;CircularMB=4096;TplEvents=None;ClrEvents=None"

But for some reason, it ends up with more than 90% broken stacks. I've tried to extend the rundown time, but it did not help:

--application.collectArguments "BufferSizeMB=1024;CircularMB=4096;TplEvents=None;ClrEvents=None;MinRundownTime=120"

All I get are native call stacks.
@sebastienros could you please update PerfView to the latest version?
Updated from 2.0.66 to 2.0.68.
@adamsitnik, sorry, that's because the command I gave you shut off unwind info publishing. Try
After the update, I get a warning about 90%+ of stacks being BROKEN. After resolving the symbols, I can see that more than 17% of total CPU time is spent in event logging. @brianrob, is this expected?

The versions:

| .NET Core SDK Version | 6.0.100-preview.5.21260.9 |
| ASP.NET Core Version | 6.0.0-preview.5.21261.1+743828a |
| .NET Runtime Version | 6.0.0-preview.5.21260.8+05b646c |
With --application.collectArguments "BufferSizeMB=1024;CircularMB=4096;TplEvents=None;ClrEvents:JitSymbols"
@sebastienros could you please increase the timeout? I wanted to check if it repros on Citrine, but I am getting the following error:
@adamsitnik this is my log; can you share yours so we can compare the timelines?
Tip: use
Tracing is throwing exceptions for some reason? https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/ntddk/nf-ntddk-exraisedatatypemisalignment
I've narrowed down the regression to dotnet/runtime#50778 and opened an issue in the runtime repo: dotnet/runtime#52640. I am not closing this issue yet, as it would be great to clarify why
Closing this issue due to age. Feel free to reopen if this is still a priority. |
I wanted to provide some nice benchmark results for @maryamariyan, who is currently writing the blog post about the Microsoft.Extensions* improvements we did for .NET 6. To my surprise, CachingPlatform-intel-win has regressed from 290k+ to 270k RPS. Plaintext-intel-win has also regressed from 540k to 500k RPS. I've taken a very quick look at the profiles and noticed something that I've not seen before: a LOT of time spent in very expensive event logging.
Repro:
@sebastienros @davidfowl: do we know about these regressions and what caused them? Is the expensive logging always enabled, or only when profiling? How can I disable it?
cc @jeffhandley