Tracking how much memory is committed per object heap #36731
Conversation
Tagging subscribers to this area: @Maoni0
I am wondering about the ultimate purpose of this accounting. Is it for diagnostics, or do we expect users to actually specify separate individual limits per kind of allocation? I.e., the user can allow XX megs for small objects, YY megs for large, ZZ megs for pinned...
Yes. The overall goal of this PR (and more to come) is to provide (i.e. let the user specify) a per-object-heap hard limit.
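The config switches had not been introduced yet at this point; as an illustration only, separate per-object-heap limits might eventually be surfaced as distinct knobs in `runtimeconfig.json` (the property names and byte values below are hypothetical, not something this PR adds):

```json
{
  "configProperties": {
    "System.GC.HeapHardLimitSOH": 104857600,
    "System.GC.HeapHardLimitLOH": 52428800,
    "System.GC.HeapHardLimitPOH": 10485760
  }
}
```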
this verification code seems unnecessarily complicated. I would propose a much simpler way to verify: `committed_by_oh` should simply be the sum of the committed sizes of the segments associated with that object heap. what do you think?
I think that is a very interesting idea; it can kill three birds with one stone:
The verification code was placed under `VERIFY_COMMITTED_BY_OH`. In the interest of time, I think we can proceed with this plan:
I ran the stress tests (for both workstation and server GC, both without …).
(force-pushed from b41e8da to a0c8a9b)
I would suggest not merging the PR as is. It's true that it's under an optional define, but we should not check in code, even under an optional define, when we know we would change it and the change doesn't require that much effort (I think this should be much easier to implement than the one you did). I would estimate that, since you have now looked through the relevant code paths, it wouldn't take more than a few days, right?
I think you might have misunderstood what I just said. I have pushed the change to delete the code under the definition.
I think it shouldn't take more than a week. I was just thinking it is probably a better idea to do the config switches first. Either way works fine for me.
@cshung can you make sure Large Pages is tested in this case? even manually? I'll be able to put it on bing.com as soon as your build shows up in rolling builds.
sorry @cshung, I misread. I'm totally fine with checking in the rest of the code, but I have not looked at it in detail yet; let me do that. I see the description of the PR still mentions the code you deleted, BTW.
(force-pushed from cfcab1f to a68babc)
@mjsabby I am very happy that Bing is going to try it out. This is going to be a very good validation of the work. This PR is indeed trying to solve the 3x commit problem we discovered earlier. However, this PR alone is insufficient, as it only tracks the committed memory per object heap. In particular, it doesn't have the new configuration switches yet. I am currently working on introducing the new switches; with that, I will alter the initial reservation logic and add the per-object-heap commit limit checks. With that, we will solve the 3x commit problem. I will make sure large pages are tested when I introduce the switches.
LGTM!
(force-pushed from ea0f6c7 to 8cd12c8)
(force-pushed from 8cd12c8 to 5fdb407)
(force-pushed from 5fdb407 to f50de72)
The overall goal of this PR (and more to come) is to provide a per-object-heap hard limit.

In particular, this PR introduces a new field named `committed_by_oh` that has 4 entries corresponding to the small object heap, the large object heap, the pinned object heap, and supporting data structures. The values are the amounts of memory committed, in bytes.

The key change here is passing the `oh` parameter to the various `virtual_*` methods. It tells the method which object heap the virtual operation is meant for. The various call sites need to provide that value. Most of the time, we know the call's purpose by looking at the locals or the flags on the associated segment.

The rest of the code is meant to make sure these passed values are correct. In particular, all the code under the `VERIFY_COMMITTED_BY_OH` definition is optional and is meant for testing only. The amount of memory committed for an object heap should match the committed memory calculated from the associated segments of that object heap. However, that is not always true: from the time a `virtual_*` operation starts until the segment objects are updated (or threaded), it does not hold. The `unsaved_virtual_operation_count` makes sure that we do not assert on identical values when we know a `virtual_*` operation has already happened but the data structures are not yet updated. The `begin_virtual_operation` and `complete_virtual_operation` pair updates that variable, and also pairs up these operations via the character argument.

The testing was a bit more ambitious than it has to be. We only need to make sure the passed `oh` parameter values (and hence the `committed_by_oh` values) are correct in the `heap_hard_limit` case, but I expanded the check so that the values hold even if `heap_hard_limit` is not set. This is to avoid accidental misuse of those parameter values in the future. A slight twist is that if `VERIFY_COMMITTED_BY_OH` is not defined, `committed_by_oh` is not maintained at all when `heap_hard_limit` is off; this is for performance, as I wanted to avoid taking the critical section unnecessarily.