
Change the heuristics to use heap size instead of collected memory #50909

Open
wants to merge 6 commits into base: master
Conversation

gbaraldi
Member

This also changes things to do a proper moving average instead of averaging with just the last measurement.
@d-netto this would be what the other interpretation of the paper meant (what they implemented in v8 but not in mmtk)
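For intuition, here is a minimal sketch (hypothetical names, not the PR's actual C code) contrasting the old scheme of averaging with just the last measurement against a proper exponential moving average, where older samples decay geometrically:

```python
def avg_with_last(prev, sample):
    # Old scheme: the previous estimate and the newest sample are
    # weighted equally, so a single outlier moves the estimate by
    # half its distance in one step.
    return (prev + sample) / 2

def moving_average(prev, sample, alpha=0.25):
    # Proper exponential moving average: each new sample contributes a
    # fixed fraction alpha, and older samples decay geometrically.
    return (1 - alpha) * prev + alpha * sample
```

With a transient spike (say a sample of 1000 against a steady rate of 100), `avg_with_last` jumps to 550, while `moving_average` with alpha = 0.25 moves only to 325 and forgets the spike within a few steps.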

Member

@vtjnash left a comment


LGTM, assuming this interpretation of the paper seems better to us.

@gbaraldi
Member Author

I haven't tested it, this is just what it would look like.

@gbaraldi
Member Author

gbaraldi commented Aug 15, 2023

Testing this, I see mostly similar performance but higher heap sizes overall, which makes me think I might have done something wrong: I had the impression that this would make the GC run more often, and that would make the max heap smaller, but that's not what I'm seeing.
Potentially the moving average changes are causing that, however.

Further tests show that the moving average doesn't make that big of a difference; in fact it seems to be a not insignificant regression.

These heuristics however make the linked list benchmark about 30% faster by using a smaller heap.

@d-netto
Member

d-netto commented Aug 15, 2023

asan timed out on this PR.

Restarting the job to see if it's reproducible.

@kpamnany
Contributor

kpamnany commented Sep 6, 2023

@MarisaKirisame, could you please take a look at this and see if these changes make the use of MemBalancer logic correct?

Also, regarding your comment here, do you feel that the target_heap computation is wrong and min_interval should not be part of the computation?

@MarisaKirisame

@kpamnany yes, the change makes more sense.
I also think min_interval should not be relevant in the computation; there are better ways to set the constant.
Another thing: the heap size does not need to be smoothed. We smooth other quantities because they are inexact and our measurements of them are noisy. Heap size is an exact measurement, so there is no need for smoothing.
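A minimal sketch of the shape of the rule being discussed (hypothetical names and constant, not Julia's actual GC code): the noisy rate measurements are smoothed, but the heap size feeds into the MemBalancer-style square-root rule directly, since it is exact.

```python
import math

def smoothed(prev, sample, alpha=0.25):
    # Exponential smoothing, used only for noisy quantities such as the
    # allocation rate and the collection speed.
    return (1 - alpha) * prev + alpha * sample

def target_heap(live_bytes, alloc_rate, gc_speed, c=2000.0):
    # MemBalancer-style rule: headroom beyond the live heap grows like
    # sqrt(c * L * g / s), where L is the (exact, unsmoothed) live heap
    # size, g the smoothed allocation rate, s the smoothed collection
    # speed, and c a tuning factor (the PR borrows one from v8).
    return live_bytes + math.sqrt(c * live_bytes * alloc_rate / gc_speed)
```

Under this rule, a higher allocation rate or a slower collector yields a larger target heap, trading memory for less frequent collections.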

@gbaraldi
Member Author

@d-netto could you run some tests for this? I used the tuning factor from v8, though it might be a bit too aggressive.

@d-netto
Member

d-netto commented Sep 18, 2023

At least two tests are failing with ErrorException("fatal error allocating signal stack: mmap: Cannot allocate memory") which seems concerning.

@gbaraldi
Member Author

Yeah, it seems to be ramping very quickly

@MarisaKirisame

The value you got from the paper is for interactive applications, which might be the cause of the high memory use. It might need to be scaled up by another ~10x. See https://github.com/MarisaKirisame/MemoryBalancer/blob/main/python/eval.py#L24 for the number we use for JS in a noninteractive setting.

vtjnash added a commit that referenced this pull request Dec 15, 2023
This replaces #50909, though
notably does not include the change to use heap size instead of heap
memory.

This adds the smoothing behavior from that prior PR (to better estimate
the long-term rates / ignore transient changes), updates the GC_TIME
printing to reflect the change to use MemBalancer heuristics, and adds
some other guardrails to the decisions so they do not get put off too
far into the future. Since, unlike several other languages that use
MemBalancer, we do not have a time-based trigger for GC to update these
heuristics continuously, we need to make sure each step is reasonably
conservative (against both under- and over-predicting the rate). Finally,
this is stricter about observing limits set by the user, strictly
limiting the exceedance rate to around 10% while avoiding some prior
possible issues with the hard cut-off being disjoint at the cutoff. This
should mean we will go over the threshold slowly if the program
continues to demand more space; if the kernel would eventually OOM-kill
us, we would have died from OOM now on our own anyway.
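A hedged sketch of the guardrail described above (hypothetical names and factor, not the commit's actual code): rather than a hard cutoff that is disjoint at the user limit, growth beyond the limit is capped at roughly 10% per collection, so the threshold is exceeded gradually.

```python
def apply_user_limit(target_heap, current_heap, user_max):
    # Within the user's limit, the heuristic target is used unchanged.
    if target_heap <= user_max:
        return target_heap
    # Beyond the limit, allow at most ~10% growth per collection, so a
    # program that keeps demanding space creeps past the threshold
    # slowly instead of hitting a discontinuous hard cap.
    return min(target_heap, max(current_heap, user_max) * 1.10)
```

The `max(current_heap, user_max)` base keeps the cap continuous once the heap is already above the limit, avoiding a sudden clamp back down.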
d-netto pushed a commit that referenced this pull request Mar 15, 2024
Labels
GC Garbage collector
6 participants