Implement mem balancer #724

Merged: 29 commits, merged Jan 13, 2023. The changes shown below are from 24 of the 29 commits.

Commits
c31de17  Add GCTriggerSelector (qinsoon, Dec 13, 2022)
1d5c692  Add an option for gc_trigger (qinsoon, Dec 13, 2022)
92ce2cf  Refactoring args for creating plans and spaces (qinsoon, Dec 13, 2022)
3f496eb  Remove Plan::poll. Add struct GCTrigger. Rename old trait GCTrigger to (qinsoon, Dec 13, 2022)
e3d98b0  Call on_gc_start/end (qinsoon, Dec 13, 2022)
cbf398b  Rename GCTriggerPolicySelector to GCTriggerSelector. Add info logging (qinsoon, Dec 14, 2022)
05159f8  Fix build/test/style (qinsoon, Dec 14, 2022)
9053110  Fix doc (qinsoon, Dec 14, 2022)
1a4d38d  Some cleanup (qinsoon, Dec 14, 2022)
39c038f  Fix tests on 32 bits (qinsoon, Dec 14, 2022)
d5a9300  Implement complete MemBalancer. Add on_gc_release for GC trigger. (qinsoon, Dec 15, 2022)
9aa0cba  Add can_heap_size_grow for GCTriggerPolicy. We only do emergency GC if (qinsoon, Dec 15, 2022)
a0dd8f1  Use regex for parsing. Separate validation from parsing. Allow T as size (qinsoon, Dec 16, 2022)
1ef4e85  Merge branch 'gc-trigger-option' into complete-mem-balancer (qinsoon, Dec 16, 2022)
0e23ce0  Merge branch 'master' into complete-mem-balancer (qinsoon, Dec 18, 2022)
dd29f39  Minor fix (qinsoon, Dec 19, 2022)
8431b7a  Add Plan.end_of_gc. Generational plans set next_heap_full_size in (qinsoon, Dec 19, 2022)
c34c6b4  Fix typo (qinsoon, Dec 21, 2022)
9eb4dbf  Merge branch 'plan-end-of-gc' into complete-mem-balancer (qinsoon, Dec 21, 2022)
325e412  Update mem balancer for generational plan (qinsoon, Jan 3, 2023)
c3b0d6a  Merge branch 'master' into complete-mem-balancer (qinsoon, Jan 3, 2023)
a55866b  Tidy up (qinsoon, Jan 3, 2023)
fe3879c  Merge branch 'master' into complete-mem-balancer (qinsoon, Jan 5, 2023)
21ac93a  Use AtomicRefCell instead of Atomic for MemBalancerStats (qinsoon, Jan 5, 2023)
5aa7023  Add trait GenerationalPlan (qinsoon, Jan 12, 2023)
b2f83c7  Fix typo in comments (qinsoon, Jan 12, 2023)
1b88d90  Use Option<f64> for MemBalancerStats previous values (qinsoon, Jan 12, 2023)
02d2962  Merge branch 'master' into complete-mem-balancer (qinsoon, Jan 12, 2023)
c72eb5e  Rename GenerationalPlan::gen to common_gen. Rename Gen to CommonGenPlan. (qinsoon, Jan 12, 2023)
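
For background on what these commits build up to: a mem balancer sizes the heap from the live size measured at GC time and from the ratio of the mutator's allocation rate to the collector's collection rate, instead of using a fixed multiple of the live size. The sketch below shows the general shape of that rule only; every identifier in it is assumed for illustration and none is taken from this PR.

    // Illustrative MemBalancer-style heap-limit rule (all names assumed; this
    // is not the code added by this PR). `live` is live bytes after the last
    // GC, `alloc_rate` and `gc_rate` are bytes per second, and `tuning`
    // trades extra memory for less frequent GC.
    fn next_heap_limit(live: f64, alloc_rate: f64, gc_rate: f64, tuning: f64) -> f64 {
        // limit = live + sqrt(tuning * live * alloc_rate / gc_rate)
        live + (tuning * live * alloc_rate / gc_rate).sqrt()
    }

The "Update mem balancer for generational plan" and "Add Plan.end_of_gc" commits suggest the generational variant recomputes this at the end of each GC, using the mature space rather than the whole heap for the live term.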
8 changes: 6 additions & 2 deletions src/plan/generational/copying/global.rs

@@ -151,6 +151,10 @@ impl<VM: VMBinding> Plan for GenCopy<VM> {
         self.tospace().available_physical_pages()
     }
 
+    fn get_mature_reserved_pages(&self) -> usize {
+        self.tospace().reserved_pages()
+    }
+
     fn base(&self) -> &BasePlan<VM> {
         &self.gen.common.base
     }
@@ -159,8 +163,8 @@ impl<VM: VMBinding> Plan for GenCopy<VM> {
         &self.gen.common
     }
 
-    fn generational(&self) -> &Gen<VM> {
-        &self.gen
+    fn generational(&self) -> Option<&Gen<VM>> {
+        Some(&self.gen)
     }
 
     fn is_current_gc_nursery(&self) -> bool {
2 changes: 1 addition & 1 deletion src/plan/generational/gc_work.rs

@@ -24,7 +24,7 @@ impl<VM: VMBinding> ProcessEdgesWork for GenNurseryProcessEdges<VM> {
 
     fn new(edges: Vec<EdgeOf<Self>>, roots: bool, mmtk: &'static MMTK<VM>) -> Self {
         let base = ProcessEdgesBase::new(edges, roots, mmtk);
-        let gen = base.plan().generational();
+        let gen = base.plan().generational().unwrap();
         Self { gen, base }
     }
     #[inline]
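
The `.unwrap()` above is safe because `GenNurseryProcessEdges` is only instantiated by generational plans. Code that may run under any plan can branch on the option instead. A minimal hypothetical helper within mmtk-core (not part of this PR; it assumes the usual `Plan::get_reserved_pages` accessor):

    use crate::plan::Plan;
    use crate::vm::VMBinding;

    // Hypothetical helper: use the new mature-space accessor when the plan
    // is generational, and fall back to total reserved pages otherwise.
    fn mature_or_total_pages<VM: VMBinding>(plan: &dyn Plan<VM = VM>) -> usize {
        match plan.generational() {
            Some(_gen) => plan.get_mature_reserved_pages(),
            None => plan.get_reserved_pages(),
        }
    }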
37 changes: 31 additions & 6 deletions src/plan/generational/global.rs

@@ -5,7 +5,6 @@ use crate::plan::Plan;
 use crate::policy::copyspace::CopySpace;
 use crate::policy::space::Space;
 use crate::scheduler::*;
-use crate::util::conversions;
 use crate::util::copy::CopySemantics;
 use crate::util::heap::VMRequest;
 use crate::util::metadata::side_metadata::SideMetadataSanity;
@@ -42,7 +41,7 @@ impl<VM: VMBinding> Gen<VM> {
             args.get_space_args(
                 "nursery",
                 true,
-                VMRequest::fixed_extent(args.global_args.options.get_max_nursery(), false),
+                VMRequest::fixed_extent(args.global_args.options.get_max_nursery_bytes(), false),
             ),
             true,
         );
@@ -111,8 +110,15 @@ impl<VM: VMBinding> Gen<VM> {
         space_full: bool,
         space: Option<&dyn Space<VM>>,
     ) -> bool {
-        let nursery_full = self.nursery.reserved_pages()
-            >= (conversions::bytes_to_pages_up(self.common.base.options.get_max_nursery()));
+        let cur_nursery = self.nursery.reserved_pages();
+        let max_nursery = self.common.base.options.get_max_nursery_pages();
+        let nursery_full = cur_nursery >= max_nursery;
+        trace!(
+            "nursery_full = {:?} (nursery = {}, max_nursery = {})",
+            nursery_full,
+            cur_nursery,
+            max_nursery,
+        );
 
         if nursery_full {
             return true;
@@ -151,6 +157,7 @@
         // The conditions are complex, and it is easier to read if we put them to separate if blocks.
         #[allow(clippy::if_same_then_else, clippy::needless_bool)]
        let is_full_heap = if crate::plan::generational::FULL_NURSERY_GC {
+            trace!("full heap: forced full heap");
            // For barrier overhead measurements, we always do full gc in nursery collections.
            true
        } else if self
@@ -160,6 +167,7 @@
            .load(Ordering::SeqCst)
            && *self.common.base.options.full_heap_system_gc
        {
+            trace!("full heap: user triggered");
            // User triggered collection, and we force full heap for user triggered collection
            true
        } else if self.next_gc_full_heap.load(Ordering::SeqCst)
@@ -170,9 +178,18 @@
            .load(Ordering::SeqCst)
            > 1
        {
+            trace!(
+                "full heap: next_gc_full_heap = {}, cur_collection_attempts = {}",
+                self.next_gc_full_heap.load(Ordering::SeqCst),
+                self.common
+                    .base
+                    .cur_collection_attempts
+                    .load(Ordering::SeqCst)
+            );
            // Forces full heap collection
            true
        } else if self.virtual_memory_exhausted(plan) {
+            trace!("full heap: virtual memory exhausted");
            true
        } else {
            // We use an Appel-style nursery. The default GC (even for a "heap-full" collection)
@@ -250,8 +267,16 @@
     /// [`get_available_pages`](crate::plan::Plan::get_available_pages)
     /// whose value depends on which spaces have been released.
     pub fn should_next_gc_be_full_heap(plan: &dyn Plan<VM = VM>) -> bool {
-        plan.get_available_pages()
-            < conversions::bytes_to_pages_up(plan.base().options.get_min_nursery())
+        let available = plan.get_available_pages();
+        let min_nursery = plan.base().options.get_min_nursery_pages();
+        let next_gc_full_heap = available < min_nursery;
+        trace!(
+            "next gc will be full heap? {}, available pages = {}, min nursery = {}",
+            next_gc_full_heap,
+            available,
+            min_nursery
+        );
+        next_gc_full_heap
     }
 
     /// Set next_gc_full_heap to the given value.
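
The hunks above replace open-coded `conversions::bytes_to_pages_up(...)` calls with `get_max_nursery_pages()` and `get_min_nursery_pages()`. Their definitions are not part of this diff; the following is a plausible sketch only, assuming they wrap the existing conversion helper (and assuming a `get_min_nursery_bytes` counterpart named by analogy with `get_max_nursery_bytes`):

    use crate::util::conversions;

    // Assumed shape of the new accessors: fold the bytes-to-pages conversion
    // into Options so call sites cannot mix up units.
    impl Options {
        pub fn get_max_nursery_pages(&self) -> usize {
            conversions::bytes_to_pages_up(self.get_max_nursery_bytes())
        }

        pub fn get_min_nursery_pages(&self) -> usize {
            conversions::bytes_to_pages_up(self.get_min_nursery_bytes())
        }
    }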
8 changes: 6 additions & 2 deletions src/plan/generational/immix/global.rs

@@ -193,6 +193,10 @@ impl<VM: VMBinding> Plan for GenImmix<VM> {
         self.immix.available_physical_pages()
     }
 
+    fn get_mature_reserved_pages(&self) -> usize {
+        self.immix.reserved_pages()
+    }
+
     fn base(&self) -> &BasePlan<VM> {
         &self.gen.common.base
     }
@@ -201,8 +205,8 @@ impl<VM: VMBinding> Plan for GenImmix<VM> {
         &self.gen.common
     }
 
-    fn generational(&self) -> &Gen<VM> {
-        &self.gen
+    fn generational(&self) -> Option<&Gen<VM>> {
+        Some(&self.gen)
    }
 
     fn is_current_gc_nursery(&self) -> bool {
10 changes: 8 additions & 2 deletions src/plan/global.rs

@@ -175,8 +175,8 @@ pub trait Plan: 'static + Sync + Downcast {
     fn common(&self) -> &CommonPlan<Self::VM> {
         panic!("Common Plan not handled!")
     }
-    fn generational(&self) -> &Gen<Self::VM> {
-        panic!("This is not a generational plan.")
+    fn generational(&self) -> Option<&Gen<Self::VM>> {
+        None
     }
     fn mmapper(&self) -> &'static Mmapper {
         self.base().mmapper
@@ -282,6 +282,12 @@ pub trait Plan: 'static + Sync + Downcast {
         panic!("This is not a generational plan.")
     }
 
+    /// Return the number of used pages in the mature space. Only
+    /// generational plans have to implement this function.
+    fn get_mature_reserved_pages(&self) -> usize {
+        panic!("This is not a generational plan.")
+    }
+
     /// Get the number of pages that are reserved for collection. By default, we return 0.
     /// For copying plans, they need to override this and calculate required pages to complete
     /// a copying GC.
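
Several commits above ("Call on_gc_start/end", "Add on_gc_release for GC trigger", "Add can_heap_size_grow for GCTriggerPolicy") shape the trigger-policy trait that the mem balancer implements. Only the `on_gc_release(mmtk)` call is visible in the next file's diff, so the following is a sketch; the other signatures and the default bodies are assumptions:

    use crate::vm::VMBinding;
    use crate::MMTK;

    // Sketch of the hook points suggested by the commit messages; only
    // on_gc_release is confirmed by the diff below.
    pub trait GCTriggerPolicy<VM: VMBinding>: Sync + Send {
        /// Called when a GC starts, e.g. to snapshot allocation statistics.
        fn on_gc_start(&self, _mmtk: &'static MMTK<VM>) {}
        /// Called in the Release work packet, while per-GC statistics are fresh.
        fn on_gc_release(&self, _mmtk: &'static MMTK<VM>) {}
        /// Called when a GC ends; a natural point to recompute the heap limit.
        fn on_gc_end(&self, _mmtk: &'static MMTK<VM>) {}
        /// Per the (truncated) commit title, an emergency GC is only done when
        /// the heap cannot grow further.
        fn can_heap_size_grow(&self) -> bool;
    }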
2 changes: 2 additions & 0 deletions src/scheduler/gc_work.rs

@@ -108,6 +108,8 @@ impl<C: GCWorkContext> Release<C> {
 impl<C: GCWorkContext + 'static> GCWork<C::VM> for Release<C> {
     fn do_work(&mut self, worker: &mut GCWorker<C::VM>, mmtk: &'static MMTK<C::VM>) {
         trace!("Release Global");
+        self.plan.base().gc_trigger.policy.on_gc_release(mmtk);
         <C::VM as VMBinding>::VMCollection::vm_release();
         // We assume this is the only running work packet that accesses plan at the point of execution
         #[allow(clippy::cast_ref_to_mut)]

Review comment on the added on_gc_release line, from @wks (Collaborator), Jan 10, 2023:

Not a problem for now. This is added before the VMCollection::vm_release() below, but plan_mut.release happens after VMCollection::vm_release(). It's a bit weird this way, but it doesn't matter if the VM doesn't implement vm_release.

VMCollection::vm_release() was created for VMs to do what our current RefEnqueue does to handle weak references. Existing bindings (except mmtk-openjdk in the lxr branch) do not use it, but with my new weak-ref API, the openjdk binding should use it instead of relying on mmtk-core's RefEnqueue work packet. I am considering moving this <C::VM as VMBinding>::VMCollection::vm_release(); statement to a dedicated work packet, because ref-enqueuing work does not depend on the Plan instance or the release of the Plan, and can therefore be parallelised.

Reply from @qinsoon (Member, Author):

If we can run some code when we open the Release bucket, that would be a good time to run this on_gc_release() method.
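
As a rough illustration of the dedicated work packet wks is considering, a hypothetical sketch follows (the packet name and its scheduling are assumptions, not part of this PR; scheduler trait imports are abbreviated):

    use std::marker::PhantomData;

    // Hypothetical packet: run VMCollection::vm_release() independently of
    // Release, so ref-enqueueing work can be parallelised with plan release.
    pub struct VMRelease<VM: VMBinding>(PhantomData<VM>);

    impl<VM: VMBinding> GCWork<VM> for VMRelease<VM> {
        fn do_work(&mut self, _worker: &mut GCWorker<VM>, _mmtk: &'static MMTK<VM>) {
            <VM as VMBinding>::VMCollection::vm_release();
        }
    }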