NULL and movement check in process_edge #1032

Merged, 11 commits, Dec 20, 2023

4 changes: 1 addition & 3 deletions docs/userguide/src/tutorial/code/mygc_semispace/gc_work.rs
@@ -56,9 +56,7 @@ impl<VM: VMBinding> ProcessEdgesWork for MyGCProcessEdges<VM> {
     }

     fn trace_object(&mut self, object: ObjectReference) -> ObjectReference {
-        if object.is_null() {
-            return object;
-        }
+        debug_assert!(!object.is_null());
         let worker = self.worker();
         let queue = &mut self.base.nodes;
         if self.plan.tospace().in_space(object) {
4 changes: 1 addition & 3 deletions docs/userguide/src/tutorial/mygc/ss/exercise_solution.md
@@ -149,9 +149,7 @@ In `gc_work.rs`:
 the tospace and fromspace:
 ```rust
 fn trace_object(&mut self, object: ObjectReference) -> ObjectReference {
-    if object.is_null() {
-        return object;
-    }
+    debug_assert!(!object.is_null());

 // Add this!
 else if self.plan().youngspace().in_space(object) {
14 changes: 10 additions & 4 deletions src/plan/generational/gc_work.rs
@@ -36,19 +36,25 @@ impl<VM: VMBinding, P: GenerationalPlanExt<VM> + PlanTraceObject<VM>> ProcessEdg
         Self { plan, base }
     }
     fn trace_object(&mut self, object: ObjectReference) -> ObjectReference {
-        if object.is_null() {
-            return object;
-        }
+        debug_assert!(!object.is_null());
+
         // We cannot borrow `self` twice in a call, so we extract `worker` as a local variable.
         let worker = self.worker();
         self.plan
             .trace_object_nursery(&mut self.base.nodes, object, worker)
     }
     fn process_edge(&mut self, slot: EdgeOf<Self>) {
         let object = slot.load();
+        if object.is_null() {
+            return;
+        }
         let new_object = self.trace_object(object);
         debug_assert!(!self.plan.is_object_in_nursery(new_object));
-        slot.store(new_object);
+        // Note: If `object` is a mature object, `trace_object` will not call `space.trace_object`,
+        // but will still return `object`. In that case, we don't need to write it back.
+        if new_object != object {
+            slot.store(new_object);
+        }
     }

     fn create_scan_work(
1 change: 1 addition & 0 deletions src/policy/copyspace.rs
@@ -217,6 +217,7 @@ impl<VM: VMBinding> CopySpace<VM> {
         worker: &mut GCWorker<VM>,
     ) -> ObjectReference {
         trace!("copyspace.trace_object(, {:?}, {:?})", object, semantics,);
+        debug_assert!(!object.is_null());

         // If this is not from space, we do not need to trace it (the object has been copied to the tosapce)
         if !self.is_from_space() {
1 change: 1 addition & 0 deletions src/policy/immix/immixspace.rs
@@ -184,6 +184,7 @@ impl<VM: VMBinding> crate::policy::gc_work::PolicyTraceObject<VM> for ImmixSpace
         copy: Option<CopySemantics>,
         worker: &mut GCWorker<VM>,
     ) -> ObjectReference {
+        debug_assert!(!object.is_null());
         if KIND == TRACE_KIND_TRANSITIVE_PIN {
             self.trace_object_without_moving(queue, object)
         } else if KIND == TRACE_KIND_DEFRAG {
1 change: 1 addition & 0 deletions src/policy/immortalspace.rs
@@ -187,6 +187,7 @@ impl<VM: VMBinding> ImmortalSpace<VM> {
         queue: &mut Q,
         object: ObjectReference,
     ) -> ObjectReference {
+        debug_assert!(!object.is_null());
         #[cfg(feature = "vo_bit")]
         debug_assert!(
             crate::util::metadata::vo_bit::is_vo_bit_set::<VM>(object),
1 change: 1 addition & 0 deletions src/policy/largeobjectspace.rs
@@ -189,6 +189,7 @@ impl<VM: VMBinding> LargeObjectSpace<VM> {
         queue: &mut Q,
         object: ObjectReference,
     ) -> ObjectReference {
+        debug_assert!(!object.is_null());
         #[cfg(feature = "vo_bit")]
         debug_assert!(
             crate::util::metadata::vo_bit::is_vo_bit_set::<VM>(object),
1 change: 1 addition & 0 deletions src/policy/markcompactspace.rs
@@ -130,6 +130,7 @@ impl<VM: VMBinding> crate::policy::gc_work::PolicyTraceObject<VM> for MarkCompac
         _copy: Option<CopySemantics>,
         _worker: &mut GCWorker<VM>,
     ) -> ObjectReference {
+        debug_assert!(!object.is_null());
         debug_assert!(
             KIND != TRACE_KIND_TRANSITIVE_PIN,
             "MarkCompact does not support transitive pin trace."
4 changes: 1 addition & 3 deletions src/policy/marksweepspace/malloc_ms/global.rs
@@ -400,9 +400,7 @@ impl<VM: VMBinding> MallocSpace<VM> {
         queue: &mut Q,
         object: ObjectReference,
     ) -> ObjectReference {
-        if object.is_null() {
-            return object;
-        }
+        debug_assert!(!object.is_null());

         assert!(
             self.in_space(object),
4 changes: 1 addition & 3 deletions src/policy/marksweepspace/native_ms/global.rs
@@ -241,9 +241,7 @@ impl<VM: VMBinding> MarkSweepSpace<VM> {
         queue: &mut Q,
         object: ObjectReference,
     ) -> ObjectReference {
-        if object.is_null() {
-            return object;
-        }
+        debug_assert!(!object.is_null());
         debug_assert!(
             self.in_space(object),
             "Cannot mark an object {} that was not alloced by free list allocator.",
25 changes: 13 additions & 12 deletions src/scheduler/gc_work.rs
@@ -273,11 +273,8 @@ pub(crate) struct ProcessEdgesWorkTracer<E: ProcessEdgesWork> {
 impl<E: ProcessEdgesWork> ObjectTracer for ProcessEdgesWorkTracer<E> {
     /// Forward the `trace_object` call to the underlying `ProcessEdgesWork`,
     /// and flush as soon as the underlying buffer of `process_edges_work` is full.
-    ///
-    /// This function is inlined because `trace_object` is probably the hottest function in MMTk.
-    /// If this function is called in small closures, please profile the program and make sure the
-    /// closure is inlined, too.
     fn trace_object(&mut self, object: ObjectReference) -> ObjectReference {
+        debug_assert!(!object.is_null());
         let result = self.process_edges_work.trace_object(object);
         self.flush_if_full();
         result
@@ -663,8 +660,12 @@ pub trait ProcessEdgesWork:
     /// trace the object and store back the new object reference if necessary.
     fn process_edge(&mut self, slot: EdgeOf<Self>) {
         let object = slot.load();
+        if object.is_null() {
+            return;
+        }
         let new_object = self.trace_object(object);
-        if Self::OVERWRITE_REFERENCE {
+        debug_assert!(!new_object.is_null());
+        if Self::OVERWRITE_REFERENCE && new_object != object {
Review discussion on the `OVERWRITE_REFERENCE` check:

Collaborator: We don't need OVERWRITE_REFERENCE anymore. It's always true anyway.

Collaborator (author): SanityGCProcessEdges sets that constant to false.

Member: There is no harm in checking it, and we should check it unless we remove OVERWRITE_REFERENCE. Otherwise, if there is a new implementation of ProcessEdgesWork that sets OVERWRITE_REFERENCE to false, we have a bug here.

Collaborator: OVERWRITE_REFERENCE was a hack to implement sanity GC. Sanity GC does not move objects anyway, and the check can be obviated now. I'm guessing the compiler is smart enough to figure out that the constant is either always true in the case of real GC work packets and always false for sanity, so there is no performance overhead from the check. So at the end of the day, it's fine to keep it, but I just think it's a bit redundant.
             slot.store(new_object);
         }
     }
@@ -721,9 +722,7 @@ impl<VM: VMBinding> ProcessEdgesWork for SFTProcessEdges<VM> {
     fn trace_object(&mut self, object: ObjectReference) -> ObjectReference {
         use crate::policy::sft::GCWorkerMutRef;

-        if object.is_null() {
-            return object;
-        }
+        debug_assert!(!object.is_null());

         // Erase <VM> type parameter
         let worker = GCWorkerMutRef::new(self.worker());
@@ -997,9 +996,7 @@ impl<VM: VMBinding, P: PlanTraceObject<VM> + Plan<VM = VM>, const KIND: TraceKin
     }

     fn trace_object(&mut self, object: ObjectReference) -> ObjectReference {
-        if object.is_null() {
-            return object;
-        }
+        debug_assert!(!object.is_null());
         // We cannot borrow `self` twice in a call, so we extract `worker` as a local variable.
         let worker = self.worker();
         self.plan
@@ -1008,8 +1005,12 @@ impl<VM: VMBinding, P: PlanTraceObject<VM> + Plan<VM = VM>, const KIND: TraceKin

     fn process_edge(&mut self, slot: EdgeOf<Self>) {
         let object = slot.load();
+        if object.is_null() {
+            return;
+        }
         let new_object = self.trace_object(object);
-        if P::may_move_objects::<KIND>() {
+        debug_assert!(!new_object.is_null());
+        if P::may_move_objects::<KIND>() && new_object != object {
             slot.store(new_object);
         }
     }
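To make the new `process_edge` contract easier to follow outside the diff, here is a minimal, self-contained sketch in plain Rust (stand-in types only; `Slot`, `ProcessEdges` and `ToyTracer` are not mmtk-core APIs). It shows the two behaviours this PR introduces: NULL slots are filtered in `process_edge` so `trace_object` can assert a non-NULL argument, and the slot is written back only when tracing returned a different (moved) reference.

```rust
use std::cell::Cell;

#[derive(Copy, Clone, PartialEq, Eq, Debug)]
struct ObjectReference(usize);

impl ObjectReference {
    const NULL: ObjectReference = ObjectReference(0);
    fn is_null(&self) -> bool {
        self.0 == 0
    }
}

/// A slot that may hold NULL or an object reference (cf. the `Edge` trait).
struct Slot<'a>(&'a Cell<ObjectReference>);

impl Slot<'_> {
    fn load(&self) -> ObjectReference {
        self.0.get()
    }
    fn store(&self, object: ObjectReference) {
        self.0.set(object);
    }
}

trait ProcessEdges {
    /// Corresponds to OVERWRITE_REFERENCE in mmtk-core; sanity work packets set it to false.
    const OVERWRITE_REFERENCE: bool = true;

    /// After this PR, callers must only pass non-NULL references.
    fn trace_object(&mut self, object: ObjectReference) -> ObjectReference;

    fn process_edge(&mut self, slot: Slot<'_>) {
        let object = slot.load();
        // NULL is filtered here, so `trace_object` can `debug_assert!(!object.is_null())`.
        if object.is_null() {
            return;
        }
        let new_object = self.trace_object(object);
        debug_assert!(!new_object.is_null());
        // Skip the store when the object was not moved; this avoids a redundant write
        // (and, for generational plans, writing back a mature object unnecessarily).
        if Self::OVERWRITE_REFERENCE && new_object != object {
            slot.store(new_object);
        }
    }
}

/// A toy tracer that "moves" even-numbered objects by bumping their address.
struct ToyTracer;

impl ProcessEdges for ToyTracer {
    fn trace_object(&mut self, object: ObjectReference) -> ObjectReference {
        debug_assert!(!object.is_null());
        if object.0 % 2 == 0 {
            ObjectReference(object.0 + 16) // pretend the object was copied
        } else {
            object // not moved: return it unchanged
        }
    }
}

fn main() {
    let moved = Cell::new(ObjectReference(0x1000));
    let unmoved = Cell::new(ObjectReference(0x1001));
    let null = Cell::new(ObjectReference::NULL);

    let mut tracer = ToyTracer;
    tracer.process_edge(Slot(&moved));
    tracer.process_edge(Slot(&unmoved));
    tracer.process_edge(Slot(&null)); // returns early, never reaches trace_object

    assert_eq!(moved.get(), ObjectReference(0x1010));
    assert_eq!(unmoved.get(), ObjectReference(0x1001));
    assert!(null.get().is_null());
}
```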
17 changes: 13 additions & 4 deletions src/util/reference_processor.rs
@@ -216,20 +216,23 @@ impl ReferenceProcessor {
         e: &mut E,
         referent: ObjectReference,
     ) -> ObjectReference {
+        debug_assert!(!referent.is_null());
         e.trace_object(referent)
     }

     fn get_forwarded_reference<E: ProcessEdgesWork>(
         e: &mut E,
         object: ObjectReference,
     ) -> ObjectReference {
+        debug_assert!(!object.is_null());
         e.trace_object(object)
     }

     fn keep_referent_alive<E: ProcessEdgesWork>(
         e: &mut E,
         referent: ObjectReference,
     ) -> ObjectReference {
+        debug_assert!(!referent.is_null());
         e.trace_object(referent)
     }

@@ -285,23 +288,29 @@ impl ReferenceProcessor {
         reference: ObjectReference,
     ) -> ObjectReference {
         let old_referent = <E::VM as VMBinding>::VMReferenceGlue::get_referent(reference);
-        let new_referent = ReferenceProcessor::get_forwarded_referent(trace, old_referent);
-        <E::VM as VMBinding>::VMReferenceGlue::set_referent(reference, new_referent);
-        let new_reference = ReferenceProcessor::get_forwarded_reference(trace, reference);
         {
             use crate::vm::ObjectModel;
             trace!(
                 "Forwarding reference: {} (size: {})",
                 reference,
                 <E::VM as VMBinding>::VMObjectModel::get_current_size(reference)
             );
         }

+        if !<E::VM as VMBinding>::VMReferenceGlue::is_referent_cleared(old_referent) {
+            let new_referent = ReferenceProcessor::get_forwarded_referent(trace, old_referent);
+            <E::VM as VMBinding>::VMReferenceGlue::set_referent(reference, new_referent);

             trace!(
                 " referent: {} (forwarded to {})",
                 old_referent,
                 new_referent
             );
-        trace!(" reference: forwarded to {}", new_reference);
+        }

+        let new_reference = ReferenceProcessor::get_forwarded_reference(trace, reference);
+        trace!(" reference: forwarded to {}", new_reference);

         debug_assert!(
             !new_reference.is_null(),
             "reference {:?}'s forwarding pointer is NULL",
34 changes: 30 additions & 4 deletions src/vm/edge_shape.rs
@@ -7,8 +7,13 @@ use atomic::Atomic;
 use crate::util::constants::{BYTES_IN_ADDRESS, LOG_BYTES_IN_ADDRESS};
 use crate::util::{Address, ObjectReference};

-/// An abstract edge. An edge holds an object reference. When we load from it, we get an
-/// ObjectReference; we can also store an ObjectReference into it.
+/// An `Edge` represents a slot in an object (a.k.a. a field), on the stack (i.e. a local variable)
+/// or any other places (such as global variables). A slot may hold an object reference. We can
+/// load the object reference from it, and we can store an ObjectReference into it. For some VMs,
+/// a slot may sometimes not hold an object reference. For example, it can hold a special `NULL`
+/// pointer which does not point to any object, or it can hold a tagged non-reference value, such
+/// as small integers and special values such as `true`, `false`, `null` (a.k.a. "none", "nil",
+/// etc. for other VMs), `undefined`, etc.
 ///
 /// This intends to abstract out the differences of reference field representation among different
 /// VMs. If the VM represent a reference field as a word that holds the pointer to the object, it
@@ -40,10 +45,31 @@ use crate::util::{Address, ObjectReference};
 /// semantics, such as whether it holds strong or weak references. If a VM holds a weak reference
 /// in a word as a pointer, it can also use `SimpleEdge` for weak reference fields.
 pub trait Edge: Copy + Send + Debug + PartialEq + Eq + Hash {
-    /// Load object reference from the edge.
+    /// Load object reference from the slot.
+    ///
+    /// If the slot is not holding an object reference (For example, if it is holding NULL or a
+    /// tagged non-reference value. See trait-level doc comment.), this method should return
+    /// `ObjectReference::NULL`.
+    ///
+    /// If the slot holds an object reference with tag bits, the returned value shall be the object
+    /// reference with the tag bits removed.
     fn load(&self) -> ObjectReference;

-    /// Store the object reference `object` into the edge.
+    /// Store the object reference `object` into the slot.
+    ///
+    /// If the slot holds an object reference with tag bits, this method must preserve the tag
+    /// bits while updating the object reference so that it points to the forwarded object given by
+    /// the parameter `object`.
+    ///
+    /// FIXME: This design is inefficient for handling object references with tag bits. Consider
+    /// introducing a new updating function to do the load, trace and store in one function.
+    /// See: <https://github.com/mmtk/mmtk-core/issues/1033>
+    ///
+    /// FIXME: This method is currently used by both moving GC algorithms and the subsuming write
+    /// barrier ([`crate::memory_manager::object_reference_write`]). The two reference writing
+    /// operations have different semantics, and need to be implemented differently if the VM
+    /// supports offsetted or tagged references.
+    /// See: <https://github.com/mmtk/mmtk-core/issues/1038>
     fn store(&self, object: ObjectReference);

     /// Prefetch the edge so that a subsequent `load` will be faster.
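The expanded `load`/`store` documentation above is easiest to read next to a concrete slot type. The following is a small self-contained sketch (illustrative only; `TaggedSlot`, `TAG_MASK` and `INT_TAG` are assumptions, not mmtk-core APIs) of a slot that meets the documented contract for a VM with tagged words: `load` strips the tag bits and returns NULL for non-reference values, while `store` preserves the tag bits when installing a forwarded pointer.

```rust
use std::cell::Cell;

const TAG_MASK: usize = 0b111; // low bits used by the VM for tagging (assumption)
const INT_TAG: usize = 0b001; // a tagged small integer, not a reference

#[derive(Copy, Clone, PartialEq, Eq, Debug)]
struct ObjectReference(usize); // stand-in for mmtk's ObjectReference

impl ObjectReference {
    const NULL: ObjectReference = ObjectReference(0);
    fn is_null(&self) -> bool {
        self.0 == 0
    }
}

/// A slot whose word may hold a tagged reference or a tagged non-reference value.
struct TaggedSlot<'a>(&'a Cell<usize>);

impl TaggedSlot<'_> {
    /// Return the object reference with tag bits removed, or NULL if the word
    /// does not hold a reference (e.g. a small integer).
    fn load(&self) -> ObjectReference {
        let word = self.0.get();
        if word == 0 || word & TAG_MASK == INT_TAG {
            return ObjectReference::NULL;
        }
        ObjectReference(word & !TAG_MASK)
    }

    /// Write the forwarded reference back, keeping the original tag bits.
    fn store(&self, object: ObjectReference) {
        let tag = self.0.get() & TAG_MASK;
        self.0.set(object.0 | tag);
    }
}

fn main() {
    // A reference with tag bits 0b010, pretending the object sits at 0x7000.
    let slot = Cell::new(0x7000 | 0b010);
    let s = TaggedSlot(&slot);
    assert_eq!(s.load(), ObjectReference(0x7000));

    // GC copied the object to 0x9000; the tag must survive the update.
    s.store(ObjectReference(0x9000));
    assert_eq!(slot.get(), 0x9000 | 0b010);

    // A tagged small integer is not a reference, so `load` reports NULL.
    let int_slot = Cell::new((42 << 3) | INT_TAG);
    assert!(TaggedSlot(&int_slot).load().is_null());
}
```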
11 changes: 7 additions & 4 deletions src/vm/scanning.rs
@@ -26,11 +26,14 @@ impl<ES: Edge, F: FnMut(ES)> EdgeVisitor<ES> for F {

 /// Callback trait of scanning functions that directly trace through edges.
 pub trait ObjectTracer {
-    /// Call this function for the content of each edge,
-    /// and assign the returned value back to the edge.
+    /// Call this function to trace through an object graph edge which points to `object`.
+    /// `object` must point to a valid object, and cannot be `ObjectReference::NULL`.
     ///
-    /// Note: This function is performance-critical.
-    /// Implementations should consider inlining if necessary.
+    /// The return value is the new object reference for `object` if it is moved, or `object` if
+    /// not moved. If moved, the caller should update the slot that holds the reference to
+    /// `object` so that it points to the new location.
+    ///
+    /// Note: This function is performance-critical, therefore must be implemented efficiently.
     fn trace_object(&mut self, object: ObjectReference) -> ObjectReference;
 }

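As a companion to the revised `ObjectTracer` documentation, here is a brief self-contained sketch (stand-in types; `ForwardingTracer` is hypothetical) of the documented contract: `trace_object` takes a non-NULL reference, returns the new location if the object moved or the same reference otherwise, and the caller updates its slot only when the returned value differs.

```rust
use std::collections::HashMap;

#[derive(Copy, Clone, PartialEq, Eq, Hash, Debug)]
struct ObjectReference(usize);

impl ObjectReference {
    fn is_null(&self) -> bool {
        self.0 == 0
    }
}

trait ObjectTracer {
    fn trace_object(&mut self, object: ObjectReference) -> ObjectReference;
}

/// A tracer backed by a forwarding table filled in by a (pretend) copying GC.
struct ForwardingTracer {
    forwarded: HashMap<ObjectReference, ObjectReference>,
}

impl ObjectTracer for ForwardingTracer {
    fn trace_object(&mut self, object: ObjectReference) -> ObjectReference {
        debug_assert!(!object.is_null()); // callers must filter NULL, as documented
        // Moved objects resolve to their new location; others are returned unchanged.
        *self.forwarded.get(&object).unwrap_or(&object)
    }
}

fn main() {
    let mut tracer = ForwardingTracer {
        forwarded: HashMap::from([(ObjectReference(0x1000), ObjectReference(0x2000))]),
    };

    // Caller side: update the slot only if the reference changed.
    let mut slot = ObjectReference(0x1000);
    let new = tracer.trace_object(slot);
    if new != slot {
        slot = new;
    }
    assert_eq!(slot, ObjectReference(0x2000));

    // An object that was not moved comes back unchanged.
    assert_eq!(
        tracer.trace_object(ObjectReference(0x3000)),
        ObjectReference(0x3000)
    );
}
```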