Randomized allocation sampling #104955

Merged
merged 5 commits into dotnet:main on Nov 14, 2024

Conversation

noahfalk
Member

@noahfalk noahfalk commented Jul 16, 2024

This feature allows profilers to do allocation profiling based on randomized samples. It has better theoretical and empirically observed accuracy than our current allocation profiling approaches while maintaining low performance overhead. It is designed for use in production profiling scenarios. For more information about usage and implementation, see the included doc at docs/design/features/RandomizedAllocationSampling.md.
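The general idea behind this kind of sampling can be sketched as follows. This is a minimal illustrative model in Python, not the runtime's actual implementation; the mean interval and all names here are hypothetical. The gap (in bytes) until the next sample is drawn from an exponential distribution, so every allocated byte has the same chance of triggering a sample, which avoids the size bias of fixed "every Nth allocation" schemes:

```python
import random

MEAN_SAMPLE_INTERVAL = 100 * 1024  # illustrative: on average, one sample per 100 KiB allocated

def next_sample_interval(rng, mean=MEAN_SAMPLE_INTERVAL):
    # Exponentially distributed gap (in bytes) until the next sample.
    # The memoryless property means each allocated byte is equally
    # likely to be the one that triggers a sample.
    return int(rng.expovariate(1.0 / mean)) + 1

class AllocationSampler:
    def __init__(self, seed=1234):
        self.rng = random.Random(seed)
        self.bytes_until_sample = next_sample_interval(self.rng)
        self.sampled_sizes = []

    def on_allocation(self, size):
        self.bytes_until_sample -= size
        if self.bytes_until_sample <= 0:
            # A real profiler would capture a stack trace here as well.
            self.sampled_sizes.append(size)
            self.bytes_until_sample = next_sample_interval(self.rng)
```

Because the sampling probability per byte is uniform, the total allocated volume can be estimated from the sample count by multiplying by the mean interval.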

[Updated 10/30]
This PR supersedes #100356. Currently it includes 2 commits: the first has the (mostly) rebased changes from July, and the second addresses a few things I missed in the rebase and follows up on issues flagged in July.

Most of the testing happened during July but I did re-run some of the functional tests as a smoke test to ensure nothing broke in the few recent changes. We'll also get updated CI regression testing and I'm coordinating with @chrisnas to have him validate it works in a real profiler.

@dotnet-issue-labeler dotnet-issue-labeler bot added the needs-area-label An area label is needed to ensure this gets routed to the appropriate area owners label Jul 16, 2024
@noahfalk noahfalk self-assigned this Jul 16, 2024
@noahfalk noahfalk added area-VM-coreclr community-contribution Indicates that the PR has been added by a community member area-NativeAOT-coreclr and removed needs-area-label An area label is needed to ensure this gets routed to the appropriate area owners labels Jul 16, 2024
@noahfalk
Member Author

noahfalk commented Jul 20, 2024

Functional testing found and fixed an off-by-one error in the RNG code but otherwise things looked fine. I also resynced this PR on top of the latest changes in #104849 and #104851. The last commit, now number 10, remains the interesting one.

I also did some performance testing using GCPerfSim as an allocation benchmark. My default configuration was 4 threads, workstation mode, and 500GB of allocations entirely with small objects and no survival. It is intended to put maximum stress on the allocation code paths. GCPerfSim command line: `-tc 4 -tagb 500 -tlgb 0.05 -lohar 0 -lohsr 100000-2000000 -sohsi 0 -lohsi 0 -pohsi 0 -sohpi 0 -lohpi 0 -sohfi 0 -lohfi 0 -pohfi 0 -allocType reference -testKind time`. I also ran a few variations that added modest amounts of survival and LOH allocations that are a bit more realistic, though still extremely allocation heavy.

EDIT: Don't rely on these numbers, they are misleading. See #104955 (comment)

**Benchmarks - no tracing enabled**

| Scenario | Baseline time (s) | PR time (s) |
| --- | --- | --- |
| Default | 20.8 | 21.4 |
| Default + sohsi 100 | 40.9 | 41.8 |
| Default + sohsi 100 -lohar 10 | 42.8 | 43.9 |
| Default + sohsi 100 -lohar 10 -lohsi 200 | 43.6 | 44.6 |

**Benchmarks - tracing AllocSampling+GC keywords, verbose level**

| Scenario | Baseline time (s) | PR time (s) |
| --- | --- | --- |
| Default | 22 | 23.1 |
| Default + sohsi 100 | 41.6 | 43.0 |
| Default + sohsi 100 -lohar 10 | 44.7 | 44.5 |
| Default + sohsi 100 -lohar 10 -lohsi 200 | 44.2 | 45.0 |

Overall it looks like around ~0.9 additional seconds for the PR to do 500GB of allocations. On a tight microbenchmark it's noticeable, and as other GC or non-allocation costs increase it becomes relatively less noticeable. I'm investigating to see if it can be improved at all.
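To put that figure in perspective, a back-of-the-envelope amortization using the numbers above shows the added cost per gigabyte allocated is tiny:

```python
extra_seconds = 0.9   # approximate added wall time with the PR applied
allocated_gb = 500    # total allocation volume in the benchmark

# 0.9 s spread over 500 GB of allocations
overhead_ms_per_gb = extra_seconds / allocated_gb * 1000  # = 1.8 ms per GB allocated
```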

@noahfalk
Member Author

Continued perf investigation and testing has cast my previous results into doubt. After a lot of searching for what could have caused the regression, my best explanation is that it actually had nothing to do with the source changes in this PR; instead it is either non-determinism in the build process or some user error on my part. I reached that conclusion by doing the following:

I've had a folder on my machine C:\git\runtime3 that throughout the entire process has been synced here:

commit 42b2b19e883f06af5771b5d85b26af263c62e781 (HEAD)
Author: Matous Kozak <55735845+matouskozak@users.noreply.github.com>
Date:   Fri Jul 12 09:42:55 2024 +0200

This folder has no changes from any of my PRs in it and I've been using the build here for all the baseline measurements. Then I executed the following changes:

move artifacts -> artifacts_backup
build.cmd clr+libs -c release
src\tests\build.cmd generatelayoutonly Release
copy artifacts_backup -> artifacts_backup_2

I can consistently reproduce the same magnitude perf regression using the coreclr built in the artifacts directory, but the regression doesn't appear using the build in the backup or backup_2 directory. I've done many runs on each binary switching between them in a semi-randomized ordering trying to ensure that the results for each binary are repeatable and robust relative to background noise on the machine.

Beyond that I've also got many other builds that include different subsets of the change but there is no clear relationship between the source and the perf results. During one period I progressively added functionality starting from the baseline without triggering the regression to occur, then during another period I was progressively removing functionality from the final PR and the regression would always occur. Even deleting the entirety of the source changes in that folder and syncing it back to the baseline didn't eliminate the perf overhead. Every build was done in a new folder starting without an artifacts folder to remove opportunity for incremental build problems to play a role.

The only explanations that make sense to me are either: (a) non-deterministic builds are giving bi-modal perf results for the same input source code, or (b) I am repeatedly making some other error in my testing methodology.

I'm going to see if I can get another machine to repeat some of the original experiments but at the moment I no longer have any evidence the PR is causing a regression.
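The "semi-randomized ordering" approach described above can be sketched as a small harness. This is an illustrative Python sketch, not anything from the PR; the `benchmarks` callables are placeholders for actually invoking GCPerfSim against each build. Interleaving the runs in a shuffled order and comparing medians helps ensure slow drift in background machine load affects all binaries roughly equally:

```python
import random
import statistics

def interleaved_runs(benchmarks, runs_per_binary=10, seed=42):
    """Run each benchmark `runs_per_binary` times in a randomized
    interleaved order and return the median time per binary.

    `benchmarks` maps a name to a zero-argument callable that performs
    one timed run and returns elapsed seconds.
    """
    rng = random.Random(seed)
    schedule = [name for name in benchmarks for _ in range(runs_per_binary)]
    rng.shuffle(schedule)  # randomize ordering to decorrelate from machine noise
    results = {name: [] for name in benchmarks}
    for name in schedule:
        results[name].append(benchmarks[name]())
    # Median is more robust than mean against occasional noisy outlier runs.
    return {name: statistics.median(times) for name, times in results.items()}
```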

@noahfalk
Member Author

@jkotas @MichalStrehovsky - Functional and perf testing both look good now, all outstanding comments on the PRs have been addressed, and CI is green. From my perspective this is ready to be merged unless any further review is planned?

I could check in #104849 and #104851, then this PR, in sequence, but I'm not sure that gives any advantage over just checking in this PR alone and closing #104849 and #104851 as no longer needed.

@noahfalk
Member Author

@brianrob @chrisnas - I wanted to give a heads up that I removed the HeapIndex parameter on the AllocationSampling event. The value wasn't readily accessible from outside the GC and @Maoni0 recommended that it probably isn't very useful so it is easiest to drop it. If you have any concerns let me know.

@chrisnas - do you want to give this branch a spin with your profiler to test it out? I'm guessing this is the final source or pretty close to it.

@chrisnas
Contributor

> @brianrob @chrisnas - I wanted to give a heads up that I removed the HeapIndex parameter on the AllocationSampling event. The value wasn't readily accessible from outside the GC and @Maoni0 recommended that it probably isn't very useful so it is easiest to drop it. If you have any concerns let me know.

Sounds good to me: I never found a usage for it :^)

> @chrisnas - do you want to give this branch a spin with your profiler to test it out? I'm guessing this is the final source or pretty close to it.

I'll run my tests on Monday; the rest of the week is off for me, sorry.

@noahfalk
Member Author

> I'll run my tests on Monday; the rest of the week is off for me, sorry.

Sounds great! It's not a rush :)

@noahfalk
Member Author

noahfalk commented Nov 4, 2024

/ba-g remaining issues are deadletters for which an issue isn't appropriate

@noahfalk
Member Author

noahfalk commented Nov 4, 2024

@jkotas - Thanks for the earlier feedback. I think that has been covered now but let me know if you have any other feedback. Thanks!

@noahfalk
Member Author

noahfalk commented Nov 7, 2024

/ba-g remaining issues are deadletters for which an issue isn't appropriate

@noahfalk
Member Author

noahfalk commented Nov 7, 2024

@chrisnas - just let me know whenever you've had a chance to test.
@jkotas - all the feedback so far should be addressed, let me know if there is anything else.

Thanks!

@jkotas
Member

jkotas commented Nov 7, 2024

I do not have any additional feedback. As I have mentioned in #104955 (review), this should get a review from somebody familiar with tracing.

@noahfalk
Member Author

noahfalk commented Nov 7, 2024

> As I have mentioned in #104955 (review), this should get a review from somebody familiar with tracing.

@brianrob - would you mind taking a look if you haven't already?

@chrisnas
Contributor

chrisnas commented Nov 8, 2024

> @chrisnas - just let me know whenever you've had a chance to test.

I'm back from a conference in Belgium and I'm starting to work on it.
I've validated the numbers with the "manual" tests from src\tests\tracing\eventpipe\randomizedallocationsampling\manual.
I now need to change our profiler to leverage the new event, and then compare against the actually allocated objects for a more general scenario.

noahfalk and others added 2 commits November 12, 2024 15:22
Co-authored-by: Adeel Mujahid <3840695+am11@users.noreply.github.com>
@noahfalk
Member Author

Thanks all!

As a heads up, I'll be out for an extended period starting the week after next. I'm hoping that review will be finished this week and we can merge this. That gives me all of next week to respond if any unexpected post-merge regression is discovered. I think we already have a reasonable amount of testing that the feature is working correctly, but if you do find something in your additional testing, @chrisnas, we'd still have a long runway to get it fixed before .NET 10 ships next year.

@brianrob brianrob (Member) left a comment


LGTM. Thanks @noahfalk!

@noahfalk
Member Author

/ba-g Still getting deadletter issues on Chrome debugger tests, but it appears to be unrelated and pre-existing

@noahfalk noahfalk merged commit 1c4c009 into dotnet:main Nov 14, 2024
154 of 161 checks passed
@MichalStrehovsky
Member

I'm seeing the newly added test failing in native AOT outerloop in #109842.

/root/helix/work/correlation/nativeaottest.sh /root/helix/work/workitem/e/tracing/eventpipe/randomizedallocationsampling/allocationsampling/ allocationsampling.dll ''
  0.0s: ==TEST STARTING==
  0.0s: Started sending sentinel events...
  0.0s: Connecting to EventPipe...
  0.0s: Creating EventPipeEventSource...
  0.0s: EventPipeEventSource created
  0.0s: Dynamic.All callback registered
  0.0s: Running optional trace validator
  0.0s: Finished running optional trace validator
  0.0s: Starting stream processing...
  0.0s: Saw new provider 'Microsoft-Windows-DotNETRuntime'
  0.1s: Saw sentinel event
  0.1s: Stopped sending sentinel events
  0.1s: Starting event generating action...
  0.5s: Allocated 400000 instances...
  1.0s: Allocated 800000 instances...
  2.0s: Allocated 1200000 instances...
  3.5s: Allocated 1600000 instances...
  4.8s: 2000000 instances allocated
  4.8s: Stopping event generating action
  4.8s: Sending StopTracing command...
  4.9s: Finished StopTracing command
  4.9s: Saw new provider 'Microsoft-DotNETCore-EventPipe'
  4.9s: Stopping stream processing
  4.9s: Dropped 0 events
  4.9s: Reader task finished
  4.9s: Validating optional callback...
  4.9s: AllocationSampled counts validation
  4.9s: Nb events: 2763
  4.9s: Nb object128: 0
  4.9s: ==TEST FINISHED: FAILED!==
Xunit.Sdk.EqualException: Assert.Equal() Failure: Values differ
Expected: 100
Actual:   -1
   at Xunit.Assert.Equal[T](T, T, IEqualityComparer`1) + 0x24b
   at __GeneratedMainWrapper.Main() + 0x4f

@noahfalk
Member Author

> I'm seeing the newly added test failing in native AOT outerloop in #109842.

@jkotas gave me the heads up earlier today and filed #109828. I'm investigating it today.
