Tidy up mutator scan API #875

Merged Aug 9, 2023 (7 commits; diff shown from 5 commits)
5 changes: 1 addition & 4 deletions src/plan/markcompact/gc_work.rs
@@ -43,12 +43,9 @@ impl<VM: VMBinding> GCWork<VM> for UpdateReferences<VM> {
         #[cfg(feature = "extreme_assertions")]
         mmtk.edge_logger.reset();
 
-        // TODO investigate why the following will create duplicate edges
-        // scheduler.work_buckets[WorkBucketStage::RefForwarding]
-        //     .add(ScanStackRoots::<ForwardingProcessEdges<VM>>::new());
         for mutator in VM::VMActivePlan::mutators() {
             mmtk.scheduler.work_buckets[WorkBucketStage::SecondRoots]
-                .add(ScanStackRoot::<ForwardingProcessEdges<VM>>(mutator));
+                .add(ScanMutatorRoots::<ForwardingProcessEdges<VM>>(mutator));
         }
 
         mmtk.scheduler.work_buckets[WorkBucketStage::SecondRoots]
58 changes: 7 additions & 51 deletions src/scheduler/gc_work.rs
@@ -165,8 +165,6 @@ impl<VM: VMBinding> GCWork<VM> for ReleaseCollector {
 
 /// Stop all mutators
 ///
-/// Schedule a `ScanStackRoots` immediately after a mutator is paused
-///
 /// TODO: Smaller work granularity
 #[derive(Default)]
 pub struct StopMutators<ScanEdges: ProcessEdgesWork>(PhantomData<ScanEdges>);
@@ -182,33 +180,13 @@ impl<E: ProcessEdgesWork> GCWork<E::VM> for StopMutators<E> {
         trace!("stop_all_mutators start");
         mmtk.plan.base().prepare_for_stack_scanning();
         <E::VM as VMBinding>::VMCollection::stop_all_mutators(worker.tls, |mutator| {
-            mmtk.scheduler.work_buckets[WorkBucketStage::Prepare].add(ScanStackRoot::<E>(mutator));
+            // TODO: The stack scanning work won't start immediately, as the `Prepare` bucket is not opened yet (the bucket is opened in notify_mutators_paused).
+            // Should we push to Unconstrained instead?
Comment on lines +195 to +196:

Collaborator: Someone may want some work to be done before scanning any stacks, but currently there is no such thing. For Ruby, "potential pinning parents" (PPPs) need to be handled before the transitive closure, but that can be done at the same time as stack scanning.

We also had some issues letting OpenJDK report the yielding of individual threads; see mmtk/openjdk#9. There used to be something called report_java_thread_yield, but we removed that. So the current status is that OpenJDK can only report that all threads have come to a stop.

I think we can leave the code as is for now. After we figure out how to report the yielding of each thread (for OpenJDK 11, 17 and 22), we can make another attempt and move the ScanStackRoot work packet to Unconstrained or another bucket.

Collaborator: There is the prepare_mutator function for VM-specific behaviour. But this PR would remove that.

qinsoon (Member, author) replied on Aug 7, 2023:

> Someone may want some work to be done before scanning any stacks, but currently there is no such thing. [...]

The binding can do whatever work is needed before it tells MMTk that the thread is ready. For the Ruby case, the binding can just schedule the PPP work to a bucket prior to the Closure bucket.
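To make this concrete, here is a minimal sketch of the suggested Ruby-side approach. `HandlePPPs` is a hypothetical packet name; `memory_manager::add_work_packet`, `GCWork`, and `WorkBucketStage` are existing mmtk-core items, though the exact signatures should be checked against the current code:

```rust
use mmtk::memory_manager;
use mmtk::scheduler::{GCWork, GCWorker, WorkBucketStage};
use mmtk::vm::VMBinding;
use mmtk::MMTK;

/// Hypothetical packet that handles Ruby's "potential pinning parents" (PPPs).
struct HandlePPPs;

impl<VM: VMBinding> GCWork<VM> for HandlePPPs {
    fn do_work(&mut self, _worker: &mut GCWorker<VM>, _mmtk: &'static MMTK<VM>) {
        // Walk the PPP list and pin their children here, before tracing starts.
    }
}

/// The binding schedules the packet when a GC is initiated.
fn schedule_ppp_handling<VM: VMBinding>(mmtk: &'static MMTK<VM>) {
    // Prepare opens before Closure, so this packet is guaranteed to run before
    // the transitive closure, alongside the ScanMutatorRoots packets.
    memory_manager::add_work_packet(mmtk, WorkBucketStage::Prepare, HandlePPPs);
}
```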

qinsoon (Member, author) replied:

> There is the prepare_mutator function for VM-specific behaviour. But this PR would remove that.

Yeah. I think it is confusing when we should invoke it. With the old timing where we called prepare_mutator, the binding can do the same work before returning from stop_all_mutators. JikesRVM is the only binding we know that uses the method. I changed it to https://github.com/mmtk/mmtk-jikesrvm/pull/147/files#diff-54f4f04a18788baf9bf66cff8c4256151b680087e3490fb8e8f62e2d60ad9fabR30.

+            mmtk.scheduler.work_buckets[WorkBucketStage::Prepare]
+                .add(ScanMutatorRoots::<E>(mutator));
         });
         trace!("stop_all_mutators end");
         mmtk.scheduler.notify_mutators_paused(mmtk);
-        if <E::VM as VMBinding>::VMScanning::SCAN_MUTATORS_IN_SAFEPOINT {
-            // Prepare mutators if necessary
-            // FIXME: This test is probably redundant. JikesRVM requires to call `prepare_mutator` once after mutators are paused
-            if !mmtk.plan.base().stacks_prepared() {
-                for mutator in <E::VM as VMBinding>::VMActivePlan::mutators() {
-                    <E::VM as VMBinding>::VMCollection::prepare_mutator(
-                        worker.tls,
-                        mutator.get_tls(),
-                        mutator,
-                    );
-                }
-            }
-            // Scan mutators
-            if <E::VM as VMBinding>::VMScanning::SINGLE_THREAD_MUTATOR_SCANNING {
-                mmtk.scheduler.work_buckets[WorkBucketStage::Prepare]
-                    .add(ScanStackRoots::<E>::new());
-            } else {
-                for mutator in <E::VM as VMBinding>::VMActivePlan::mutators() {
-                    mmtk.scheduler.work_buckets[WorkBucketStage::Prepare]
-                        .add(ScanStackRoot::<E>(mutator));
-                }
-            }
-        }
         mmtk.scheduler.work_buckets[WorkBucketStage::Prepare].add(ScanVMSpecificRoots::<E>::new());
     }
 }
@@ -431,33 +409,11 @@ impl<VM: VMBinding> GCWork<VM> for VMPostForwarding<VM> {
     }
 }
 
-#[derive(Default)]
-pub struct ScanStackRoots<Edges: ProcessEdgesWork>(PhantomData<Edges>);
-
-impl<E: ProcessEdgesWork> ScanStackRoots<E> {
-    pub fn new() -> Self {
-        Self(PhantomData)
-    }
-}
-
-impl<E: ProcessEdgesWork> GCWork<E::VM> for ScanStackRoots<E> {
-    fn do_work(&mut self, worker: &mut GCWorker<E::VM>, mmtk: &'static MMTK<E::VM>) {
-        trace!("ScanStackRoots");
-        let factory = ProcessEdgesWorkRootsWorkFactory::<E>::new(mmtk);
-        <E::VM as VMBinding>::VMScanning::scan_roots_in_all_mutator_threads(worker.tls, factory);
-        <E::VM as VMBinding>::VMScanning::notify_initial_thread_scan_complete(false, worker.tls);
-        for mutator in <E::VM as VMBinding>::VMActivePlan::mutators() {
-            mutator.flush();
-        }
-        mmtk.plan.common().base.set_gc_status(GcStatus::GcProper);
-    }
-}
-
-pub struct ScanStackRoot<Edges: ProcessEdgesWork>(pub &'static mut Mutator<Edges::VM>);
+pub struct ScanMutatorRoots<Edges: ProcessEdgesWork>(pub &'static mut Mutator<Edges::VM>);
 
-impl<E: ProcessEdgesWork> GCWork<E::VM> for ScanStackRoot<E> {
+impl<E: ProcessEdgesWork> GCWork<E::VM> for ScanMutatorRoots<E> {
     fn do_work(&mut self, worker: &mut GCWorker<E::VM>, mmtk: &'static MMTK<E::VM>) {
-        trace!("ScanStackRoot for mutator {:?}", self.0.get_tls());
+        trace!("ScanMutatorRoots for mutator {:?}", self.0.get_tls());
         let base = &mmtk.plan.base();
         let mutators = <E::VM as VMBinding>::VMActivePlan::number_of_mutators();
         let factory = ProcessEdgesWorkRootsWorkFactory::<E>::new(mmtk);
4 changes: 0 additions & 4 deletions src/scheduler/mod.rs
@@ -25,7 +25,3 @@ pub use controller::GCController;
 
 pub(crate) mod gc_work;
 pub use gc_work::ProcessEdgesWork;
-// TODO: We shouldn't need to expose ScanStackRoot. However, OpenJDK uses it.
-// We should do some refactoring related to Scanning::SCAN_MUTATORS_IN_SAFEPOINT
-// to make sure this type is not exposed to the bindings.
-pub use gc_work::ScanStackRoot;
2 changes: 1 addition & 1 deletion src/util/sanity/sanity_checker.rs
@@ -83,7 +83,7 @@ impl<P: Plan> GCWork<P::VM> for ScheduleSanityGC<P> {
         // in openjdk binding before the second round of roots scanning.
         // for mutator in <P::VM as VMBinding>::VMActivePlan::mutators() {
         //     scheduler.work_buckets[WorkBucketStage::Prepare]
-        //         .add(ScanStackRoot::<SanityGCProcessEdges<P::VM>>(mutator));
+        //         .add(ScanMutatorRoots::<SanityGCProcessEdges<P::VM>>(mutator));
         // }
         {
             let sanity_checker = mmtk.sanity_checker.lock().unwrap();
16 changes: 3 additions & 13 deletions src/vm/collection.rs
@@ -1,4 +1,3 @@
-use crate::plan::MutatorContext;
 use crate::util::alloc::AllocationError;
 use crate::util::opaque_pointer::*;
 use crate::vm::VMBinding;
@@ -16,9 +15,12 @@ pub trait Collection<VM: VMBinding> {
     /// This method is called by a single thread in MMTk (the GC controller).
     /// This method should not return until all the threads are yielded.
     /// The actual thread synchronization mechanism is up to the VM, and MMTk does not make assumptions on that.
+    /// MMTk provides a callback function and expects the binding to use the callback for each mutator when it
+    /// is ready for stack scanning. Usually a stack can be scanned as soon as the thread stops in the yieldpoint.
     ///
     /// Arguments:
     /// * `tls`: The thread pointer for the GC controller/coordinator.
+    /// * `mutator_visitor`: A callback. Call it with a mutator as argument to notify MMTk that the mutator is ready to be scanned.
     fn stop_all_mutators<F>(tls: VMWorkerThread, mutator_visitor: F)
     where
         F: FnMut(&'static mut Mutator<VM>);
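The contract above can be illustrated with a short sketch. This is not any particular binding's code: only the signature comes from the trait; the two `vm_*` helpers are hypothetical runtime hooks, and the function is written as a free function generic over the binding rather than as a trait method:

```rust
use mmtk::util::opaque_pointer::VMWorkerThread;
use mmtk::vm::VMBinding;
use mmtk::Mutator;

fn stop_all_mutators<VM: VMBinding, F>(tls: VMWorkerThread, mut mutator_visitor: F)
where
    F: FnMut(&'static mut Mutator<VM>),
{
    // Block until every mutator thread has parked at a safepoint.
    vm_suspend_all_threads(tls);
    // Report each stopped mutator. Per-mutator work that previously went into
    // `prepare_mutator` can be done here, just before the callback.
    for mutator in vm_stopped_mutators::<VM>() {
        mutator_visitor(mutator); // MMTk enqueues a ScanMutatorRoots packet for it
    }
}

fn vm_suspend_all_threads(_tls: VMWorkerThread) {
    // Hypothetical: ask the runtime to bring all mutators to a safepoint.
}

fn vm_stopped_mutators<VM: VMBinding>() -> Vec<&'static mut Mutator<VM>> {
    Vec::new() // Hypothetical: hand out the runtime's stopped mutator contexts.
}
```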
@@ -54,18 +56,6 @@ pub trait Collection<VM: VMBinding> {
     /// In either case, the `Box` inside should be passed back to the called function.
     fn spawn_gc_thread(tls: VMThread, ctx: GCThreadContext<VM>);
 
-    /// Allow VM-specific behaviors for a mutator after all the mutators are stopped and before any actual GC work starts.
-    ///
-    /// Arguments:
-    /// * `tls_worker`: The thread pointer for the worker thread performing this call.
-    /// * `tls_mutator`: The thread pointer for the target mutator thread.
-    /// * `m`: The mutator context for the thread.
-    fn prepare_mutator<T: MutatorContext<VM>>(
-        tls_worker: VMWorkerThread,
-        tls_mutator: VMMutatorThread,
-        m: &T,
-    );
-
     /// Inform the VM of an out-of-memory error. The binding should hook into the VM's error
     /// routine for OOM. Note that there are two different categories of OOM:
     /// * Critical OOM: This is the case where the OS is unable to mmap or acquire more memory.
28 changes: 6 additions & 22 deletions src/vm/scanning.rs
@@ -121,14 +121,6 @@ pub trait RootsWorkFactory<ES: Edge>: Clone + Send + 'static {
 
 /// VM-specific methods for scanning roots/objects.
 pub trait Scanning<VM: VMBinding> {
-    /// Scan stack roots after all mutators are paused.
-    const SCAN_MUTATORS_IN_SAFEPOINT: bool = true;
-
-    /// Scan all the mutators within a single work packet.
-    ///
-    /// `SCAN_MUTATORS_IN_SAFEPOINT` should also be enabled
-    const SINGLE_THREAD_MUTATOR_SCANNING: bool = true;
-
     /// Return true if the given object supports edge enqueuing.
     ///
     /// - If this returns true, MMTk core will call `scan_object` on the object.
@@ -203,20 +195,12 @@ pub trait Scanning<VM: VMBinding> {
     /// * `tls`: The GC thread that is performing the thread scan.
     fn notify_initial_thread_scan_complete(partial_scan: bool, tls: VMWorkerThread);
 
-    /// Scan all the mutators for roots.
-    ///
-    /// The `memory_manager::is_mmtk_object` function can be used in this function if
-    /// - the "is_mmtk_object" feature is enabled.
-    ///
-    /// Arguments:
-    /// * `tls`: The GC thread that is performing this scanning.
-    /// * `factory`: The VM uses it to create work packets for scanning roots.
-    fn scan_roots_in_all_mutator_threads(
-        tls: VMWorkerThread,
-        factory: impl RootsWorkFactory<VM::VMEdge>,
-    );
-
-    /// Scan one mutator for roots.
+    /// Scan one mutator for stack roots. MMTk assumes a binding is able to scan
+    /// any given thread for its stack roots. If a binding cannot scan the stack for
+    /// a given thread, it can leave this method empty, and deal with stack
+    /// roots in [`Scanning::scan_vm_specific_roots`]. However, in that case, MMTk
+    /// does not know those roots are stack roots, and cannot perform any possible
+    /// optimization for the stack roots.
     ///
     /// The `memory_manager::is_mmtk_object` function can be used in this function if
     /// - the "is_mmtk_object" feature is enabled.
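A hedged sketch of how a binding can implement `scan_roots_in_mutator_thread`: `collect_stack_slots` is a hypothetical runtime helper that walks one thread's stack and returns the slots holding heap references, while `RootsWorkFactory::create_process_edge_roots_work` is the existing factory method for edge roots (check the current name and signature):

```rust
use mmtk::util::opaque_pointer::{VMMutatorThread, VMWorkerThread};
use mmtk::vm::{RootsWorkFactory, VMBinding};
use mmtk::{Mutator, MutatorContext};

fn scan_roots_in_mutator_thread<VM: VMBinding>(
    _tls: VMWorkerThread,
    mutator: &'static mut Mutator<VM>,
    mut factory: impl RootsWorkFactory<VM::VMEdge>,
) {
    // Walk this mutator's stack and gather the root edges (slots).
    let edges: Vec<VM::VMEdge> = collect_stack_slots::<VM>(mutator.get_tls());
    // Hand them to MMTk; the factory wraps them into root-processing packets.
    factory.create_process_edge_roots_work(edges);
}

fn collect_stack_slots<VM: VMBinding>(_tls: VMMutatorThread) -> Vec<VM::VMEdge> {
    Vec::new() // Hypothetical: the VM's stack walker reports slots here.
}
```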
9 changes: 0 additions & 9 deletions vmbindings/dummyvm/src/collection.rs
@@ -3,7 +3,6 @@
 use mmtk::util::opaque_pointer::*;
 use mmtk::vm::Collection;
 use mmtk::vm::GCThreadContext;
 use mmtk::Mutator;
-use mmtk::MutatorContext;
 
 pub struct VMCollection {}

@@ -24,12 +23,4 @@ impl Collection<DummyVM> for VMCollection {
     }
 
     fn spawn_gc_thread(_tls: VMThread, _ctx: GCThreadContext<DummyVM>) {}
-
-    fn prepare_mutator<T: MutatorContext<DummyVM>>(
-        _tls_w: VMWorkerThread,
-        _tls_m: VMMutatorThread,
-        _mutator: &T,
-    ) {
-        unimplemented!()
-    }
 }
3 changes: 0 additions & 3 deletions vmbindings/dummyvm/src/scanning.rs
@@ -10,9 +10,6 @@ use mmtk::Mutator;
 pub struct VMScanning {}
 
 impl Scanning<DummyVM> for VMScanning {
-    fn scan_roots_in_all_mutator_threads(_tls: VMWorkerThread, _factory: impl RootsWorkFactory<DummyVMEdge>) {
-        unimplemented!()
-    }
     fn scan_roots_in_mutator_thread(
         _tls: VMWorkerThread,
         _mutator: &'static mut Mutator<DummyVM>,