Merge #38
38: Update libgc to use single allocator r=ltratt a=jacob-hughes

`libgc_internal` now uses a single allocator for both the `Allocator` and `GlobalAlloc` traits. This PR updates `libgc` to use that.

Co-authored-by: Jake Hughes <jh@jakehughes.uk>
bors[bot] and jacob-hughes authored Jan 22, 2021
2 parents 73aafc1 + bc0a28a commit 8e71db2
Showing 8 changed files with 473 additions and 31 deletions.
2 changes: 1 addition & 1 deletion .buildbot.sh
@@ -26,4 +26,4 @@ rustup toolchain link rustgc rustgc/build/x86_64-unknown-linux-gnu/stage1

cargo clean

-cargo +rustgc test --features "rustgc"
+cargo +rustgc test --features "rustgc" -- --test-threads=1
1 change: 0 additions & 1 deletion Cargo.toml
@@ -15,7 +15,6 @@ gc_stats = []

[dependencies]
libc = "*"
-libgc_internal = { git = "https://github.com/softdevteam/libgc_internal" }

[build-dependencies]
rerun_except = "0.1"
24 changes: 5 additions & 19 deletions README.md
@@ -5,28 +5,14 @@ libgc is a garbage collector for Rust. It works by providing a garbage-collected

# Structure

-There are three repositories which make up the gc infrastructure:
-- **libgc** the main library which provides the `Gc<T>` smart pointer and its
+There are two repositories which make up the gc infrastructure:
+
+* **libgc** the main library which provides the `Gc<T>` smart pointer and its
  API.
-- **libgc_internal** contains the gc allocation and collector logic. This is
-  collector specific, and can be conditionally compiled to support different
-  implementations. At the moment, it only supports a single collector
-  implementation: the Boehm-Demers-Weiser GC. Users should never interact
-  directly with this crate. Instead, any relevant APIs are re-exported
-  through libgc.
-- **rustgc** a fork of rustc with GC-aware optimisations. This can be used to
+* **rustgc** a fork of rustc with GC-aware optimisations. This can be used to
  compile user programs which use `libgc`, giving them better GC
  performance. Use of rustgc is not mandated, but it enables further
  optimisations for programs which use `libgc`.

This seperation between libgc and rustgc exists so that a stripped-down form of
-garbage collection can be used without compiler support. The further split
-between libgc and libgc_core exists to make linkage easier when the rustgc
-compiler is used.
-
-rustgc needs access to the GC's `Allocator` implementation. This exists in the
-libgc_internal crate so that it can be linked to the target binary either as
-part of libgc, or as part of the rust standard library (if compiled with
-rustgc). libgc contains code which would not compile if it was packaged as part
-of rustgc. To prevent duplication, the libgc_interal crate will link correctly
-as either a standard cargo crate, or as part of the rust core library.
+garbage collection can be used without compiler support.
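
For readers unfamiliar with the API shape the README describes, the sketch below is a minimal stand-in for `Gc<T>`: it leaks a `Box` instead of allocating through the Boehm collector, so nothing is ever reclaimed. It only models construction, copying, and dereferencing; the `Copy` handle semantics are an assumption about the design, not taken from this diff.

```rust
use std::ops::Deref;

// Hypothetical stand-in for libgc's `Gc<T>`; the real type allocates via
// the collector, this one deliberately leaks so the pointee stays alive.
struct Gc<T>(*const T);

impl<T> Gc<T> {
    fn new(value: T) -> Gc<T> {
        // Box::leak gives a &'static mut T, which we keep as a raw pointer.
        Gc(Box::leak(Box::new(value)))
    }
}

// Handles are plain pointer copies: several `Gc`s may alias one object.
impl<T> Copy for Gc<T> {}
impl<T> Clone for Gc<T> {
    fn clone(&self) -> Self {
        *self
    }
}

impl<T> Deref for Gc<T> {
    type Target = T;
    fn deref(&self) -> &T {
        // Safe here because the pointee is leaked and never freed.
        unsafe { &*self.0 }
    }
}

fn main() {
    let a = Gc::new(vec![1, 2, 3]);
    let b = a; // copy of the handle, not of the Vec
    assert_eq!(a.len() + b.len(), 6);
    println!("{:?}", *a);
}
```

A real `Gc<T>` additionally registers finalizers and lets the collector reclaim unreachable objects, which this sketch cannot demonstrate without linking the Boehm GC.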
133 changes: 133 additions & 0 deletions src/allocator.rs
@@ -0,0 +1,133 @@
//! This library acts as a shim to prevent static linking the Boehm GC directly
//! inside library/alloc which causes surprising and hard to debug errors.

#![allow(dead_code)]

use core::{
    alloc::{AllocError, Allocator, GlobalAlloc, Layout},
    ptr::NonNull,
};

pub struct GcAllocator;

use crate::boehm;
#[cfg(feature = "rustgc")]
use crate::specializer;

#[cfg(feature = "rustgc")]
pub(crate) static ALLOCATOR: GcAllocator = GcAllocator;

unsafe impl GlobalAlloc for GcAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        #[cfg(feature = "rustgc")]
        return boehm::GC_malloc(layout.size()) as *mut u8;
        #[cfg(not(feature = "rustgc"))]
        return boehm::GC_malloc_uncollectable(layout.size()) as *mut u8;
    }

    unsafe fn dealloc(&self, ptr: *mut u8, _: Layout) {
        boehm::GC_free(ptr);
    }

    unsafe fn realloc(&self, ptr: *mut u8, _: Layout, new_size: usize) -> *mut u8 {
        boehm::GC_realloc(ptr, new_size) as *mut u8
    }

    #[cfg(feature = "rustgc_internal")]
    unsafe fn alloc_precise(&self, layout: Layout, bitmap: usize, bitmap_size: usize) -> *mut u8 {
        let gc_descr = boehm::GC_make_descriptor(&bitmap, bitmap_size);
        boehm::GC_malloc_explicitly_typed(layout.size(), gc_descr) as *mut u8
    }

    #[cfg(feature = "rustgc_internal")]
    fn alloc_conservative(&self, layout: Layout) -> *mut u8 {
        unsafe { boehm::GC_malloc(layout.size()) as *mut u8 }
    }

    #[cfg(feature = "rustgc_internal")]
    unsafe fn alloc_atomic(&self, layout: Layout) -> *mut u8 {
        boehm::GC_malloc_atomic(layout.size()) as *mut u8
    }
}

unsafe impl Allocator for GcAllocator {
    fn allocate(&self, layout: Layout) -> Result<NonNull<[u8]>, AllocError> {
        let ptr = unsafe { boehm::GC_malloc(layout.size()) } as *mut u8;
        assert!(!ptr.is_null());
        let ptr = unsafe { NonNull::new_unchecked(ptr) };
        Ok(NonNull::slice_from_raw_parts(ptr, layout.size()))
    }

    unsafe fn deallocate(&self, _: NonNull<u8>, _: Layout) {}
}

impl GcAllocator {
    #[cfg(feature = "rustgc_internal")]
    pub fn maybe_optimised_alloc<T>(&self, layout: Layout) -> Result<NonNull<[u8]>, AllocError> {
        let sp = specializer::AllocationSpecializer::new();
        sp.maybe_optimised_alloc::<T>(layout)
    }

    pub fn force_gc() {
        unsafe { boehm::GC_gcollect() }
    }

    pub unsafe fn register_finalizer(
        &self,
        obj: *mut u8,
        finalizer: Option<unsafe extern "C" fn(*mut u8, *mut u8)>,
        client_data: *mut u8,
        old_finalizer: *mut extern "C" fn(*mut u8, *mut u8),
        old_client_data: *mut *mut u8,
    ) {
        boehm::GC_register_finalizer_no_order(
            obj,
            finalizer,
            client_data,
            old_finalizer,
            old_client_data,
        )
    }

    pub fn unregister_finalizer(&self, gcbox: *mut u8) {
        unsafe {
            boehm::GC_register_finalizer(
                gcbox,
                None,
                ::core::ptr::null_mut(),
                ::core::ptr::null_mut(),
                ::core::ptr::null_mut(),
            );
        }
    }

    pub fn get_stats() -> GcStats {
        let mut ps = boehm::ProfileStats::default();
        unsafe {
            boehm::GC_get_prof_stats(
                &mut ps as *mut boehm::ProfileStats,
                core::mem::size_of::<boehm::ProfileStats>(),
            );
        }
        let total_gc_time = unsafe { boehm::GC_get_full_gc_total_time() };

        GcStats {
            total_gc_time,
            num_collections: ps.gc_no,
            total_freed: ps.bytes_reclaimed_since_gc,
            total_alloced: ps.bytes_allocd_since_gc,
        }
    }

    pub fn init() {
        unsafe { boehm::GC_start_performance_measurement() };
    }
}

#[derive(Debug)]
pub struct GcStats {
    total_gc_time: usize, // In milliseconds.
    num_collections: usize,
    total_freed: usize,   // In bytes
    total_alloced: usize, // In bytes
}
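
The core of this new file is the `GlobalAlloc` implementation. Since compiling against the Boehm GC is outside the scope of this page, the sketch below illustrates the same contract with a hypothetical counting wrapper over the system allocator, standing in where `GcAllocator` would call `GC_malloc`/`GC_free`.

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical stand-in for `GcAllocator`: same `GlobalAlloc` surface,
// but it delegates to the system allocator and counts allocations
// instead of calling into the Boehm collector.
struct CountingAlloc {
    allocs: AtomicUsize,
}

unsafe impl GlobalAlloc for CountingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        self.allocs.fetch_add(1, Ordering::Relaxed);
        System.alloc(layout) // GcAllocator would call GC_malloc here
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout) // GcAllocator would call GC_free here
    }
}

static A: CountingAlloc = CountingAlloc {
    allocs: AtomicUsize::new(0),
};

fn main() {
    let layout = Layout::new::<u64>();
    let p = unsafe { A.alloc(layout) };
    assert!(!p.is_null());
    unsafe { A.dealloc(p, layout) };
    assert_eq!(A.allocs.load(Ordering::Relaxed), 1);
}
```

A program opts in to such an allocator for every heap allocation with `#[global_allocator] static A: CountingAlloc = …;`, which is how a `GcAllocator`-style type is typically installed.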
74 changes: 74 additions & 0 deletions src/boehm.rs
@@ -0,0 +1,74 @@
#[repr(C)]
#[derive(Default)]
pub struct ProfileStats {
    /// Heap size in bytes (including area unmapped to OS).
    pub(crate) heapsize_full: usize,
    /// Total bytes contained in free and unmapped blocks.
    pub(crate) free_bytes_full: usize,
    /// Amount of memory unmapped to OS.
    pub(crate) unmapped_bytes: usize,
    /// Number of bytes allocated since the recent collection.
    pub(crate) bytes_allocd_since_gc: usize,
    /// Number of bytes allocated before the recent collection.
    /// The value may wrap.
    pub(crate) allocd_bytes_before_gc: usize,
    /// Number of bytes not considered candidates for garbage collection.
    pub(crate) non_gc_bytes: usize,
    /// Garbage collection cycle number.
    /// The value may wrap.
    pub(crate) gc_no: usize,
    /// Number of marker threads (excluding the initiating one).
    pub(crate) markers_m1: usize,
    /// Approximate number of reclaimed bytes after recent collection.
    pub(crate) bytes_reclaimed_since_gc: usize,
    /// Approximate number of bytes reclaimed before the recent collection.
    /// The value may wrap.
    pub(crate) reclaimed_bytes_before_gc: usize,
    /// Number of bytes freed explicitly since the recent GC.
    pub(crate) expl_freed_bytes_since_gc: usize,
}

#[link(name = "gc")]
extern "C" {
    pub(crate) fn GC_malloc(nbytes: usize) -> *mut u8;

    #[cfg(not(feature = "rustgc"))]
    pub(crate) fn GC_malloc_uncollectable(nbytes: usize) -> *mut u8;

    pub(crate) fn GC_realloc(old: *mut u8, new_size: usize) -> *mut u8;

    pub(crate) fn GC_free(dead: *mut u8);

    pub(crate) fn GC_register_finalizer(
        ptr: *mut u8,
        finalizer: Option<unsafe extern "C" fn(*mut u8, *mut u8)>,
        client_data: *mut u8,
        old_finalizer: *mut extern "C" fn(*mut u8, *mut u8),
        old_client_data: *mut *mut u8,
    );

    pub(crate) fn GC_register_finalizer_no_order(
        ptr: *mut u8,
        finalizer: Option<unsafe extern "C" fn(*mut u8, *mut u8)>,
        client_data: *mut u8,
        old_finalizer: *mut extern "C" fn(*mut u8, *mut u8),
        old_client_data: *mut *mut u8,
    );

    pub(crate) fn GC_gcollect();

    pub(crate) fn GC_start_performance_measurement();

    pub(crate) fn GC_get_full_gc_total_time() -> usize;

    pub(crate) fn GC_get_prof_stats(prof_stats: *mut ProfileStats, stats_size: usize) -> usize;

    #[cfg(feature = "rustgc")]
    pub(crate) fn GC_malloc_explicitly_typed(size: usize, descriptor: usize) -> *mut u8;

    #[cfg(feature = "rustgc")]
    pub(crate) fn GC_make_descriptor(bitmap: *const usize, len: usize) -> usize;

    #[cfg(feature = "rustgc")]
    pub(crate) fn GC_malloc_atomic(nbytes: usize) -> *mut u8;
}
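
Among these bindings, `GC_make_descriptor` takes a word-granularity pointer bitmap (bit *i* set when word *i* of the object may hold a pointer) and a length in words. The pure-Rust sketch below shows how such a bitmap is assembled; the helper name and the example offsets are hypothetical, not part of this diff.

```rust
// Build the word-granularity pointer bitmap that a descriptor API like
// Boehm's `GC_make_descriptor` consumes: bit i is set when word i of the
// object may contain a pointer.
fn pointer_bitmap(pointer_word_offsets: &[usize]) -> usize {
    let mut bitmap = 0usize;
    for &off in pointer_word_offsets {
        // A single-usize bitmap covers the first `usize::BITS` words.
        assert!(off < usize::BITS as usize);
        bitmap |= 1 << off;
    }
    bitmap
}

fn main() {
    // E.g. an object whose words 0 and 2 hold pointers, such as a
    // hypothetical `struct Node { next: *mut Node, len: usize, data: *mut u8 }`.
    let bm = pointer_bitmap(&[0, 2]);
    assert_eq!(bm, 0b101);
    println!("{:#b}", bm);
}
```

With precise layout information like this, the collector only traces words that can actually be pointers, instead of conservatively scanning every word of the object.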
10 changes: 5 additions & 5 deletions src/gc.rs
@@ -9,7 +9,7 @@ use std::{
    ptr::NonNull,
};

-use crate::GC_ALLOCATOR;
+use crate::ALLOCATOR;

/// This is usually a no-op, but if `gc_stats` is enabled it will setup the GC
/// for profiliing.
@@ -182,7 +182,7 @@ struct GcBox<T: ?Sized>(ManuallyDrop<T>);
impl<T> GcBox<T> {
    fn new(value: T) -> *mut GcBox<T> {
        let layout = Layout::new::<T>();
-        let ptr = unsafe { GC_ALLOCATOR.allocate(layout).unwrap().as_ptr() } as *mut GcBox<T>;
+        let ptr = ALLOCATOR.allocate(layout).unwrap().as_ptr() as *mut GcBox<T>;
        let gcbox = GcBox(ManuallyDrop::new(value));

        unsafe {
@@ -196,7 +196,7 @@ impl<T> GcBox<T> {

    fn new_from_layout(layout: Layout) -> NonNull<GcBox<MaybeUninit<T>>> {
        unsafe {
-            let base_ptr = GC_ALLOCATOR.allocate(layout).unwrap().as_ptr() as *mut usize;
+            let base_ptr = ALLOCATOR.allocate(layout).unwrap().as_ptr() as *mut usize;
            NonNull::new_unchecked(base_ptr as *mut GcBox<MaybeUninit<T>>)
        }
    }
@@ -214,7 +214,7 @@ impl<T> GcBox<T> {
        }

        unsafe {
-            GC_ALLOCATOR.register_finalizer(
+            ALLOCATOR.register_finalizer(
                self as *mut _ as *mut u8,
                Some(fshim::<T>),
                ::std::ptr::null_mut(),
@@ -225,7 +225,7 @@ impl<T> GcBox<T> {
    }

    fn unregister_finalizer(&mut self) {
-        unsafe { GC_ALLOCATOR.unregister_finalizer(self as *mut _ as *mut u8) };
+        ALLOCATOR.unregister_finalizer(self as *mut _ as *mut u8);
    }
}

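
The `Some(fshim::<T>)` argument above is a typed `extern "C"` trampoline that the collector can call to run a Rust destructor. The standalone sketch below reconstructs that pattern under stated assumptions (the shim body is not shown in this hunk, so its exact contents here are inferred): a zero-sized `Tracked` type records its drop, and we invoke the shim by hand the way the GC would.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

static DROPPED: AtomicBool = AtomicBool::new(false);

// Zero-sized type whose destructor leaves evidence that it ran.
struct Tracked;
impl Drop for Tracked {
    fn drop(&mut self) {
        DROPPED.store(true, Ordering::SeqCst);
    }
}

// Assumed shape of the finalizer shim registered via `register_finalizer`:
// an `extern "C"` trampoline that recovers the concrete type from a raw
// pointer and runs its destructor in place.
unsafe extern "C" fn fshim<T>(obj: *mut u8, _client_data: *mut u8) {
    std::ptr::drop_in_place(obj as *mut T);
}

fn main() {
    // Simulate the collector invoking the finalizer on a dead object.
    // (`Tracked` is zero-sized, so no heap memory is actually at stake.)
    let obj = Box::into_raw(Box::new(Tracked)) as *mut u8;
    unsafe { fshim::<Tracked>(obj, std::ptr::null_mut()) };
    assert!(DROPPED.load(Ordering::SeqCst));
}
```

The trampoline exists because the Boehm API only accepts C function pointers; a generic instantiation per `T` bridges that to Rust's typed `Drop` glue.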
15 changes: 10 additions & 5 deletions src/lib.rs
@@ -4,23 +4,28 @@
#![feature(alloc_layout_extra)]
#![feature(arbitrary_self_types)]
#![feature(dispatch_from_dyn)]
+#![feature(specialization)]
+#![feature(nonnull_slice_from_raw_parts)]
#![feature(raw_vec_internals)]
#![feature(const_fn)]
#![feature(coerce_unsized)]
#![feature(unsize)]
#![feature(maybe_uninit_ref)]
#![feature(negative_impls)]
+#![allow(incomplete_features)]
#[cfg(not(all(target_pointer_width = "64", target_arch = "x86_64")))]
compile_error!("Requires x86_64 with 64 bit pointer width.");

pub mod gc;
#[cfg(feature = "gc_stats")]
pub mod stats;

-pub use gc::Gc;
+mod allocator;
+mod boehm;
+#[cfg(feature = "rustgc")]
+mod specializer;

-use libgc_internal::GcAllocator;
-pub use libgc_internal::GlobalAllocator;
+pub use allocator::GcAllocator;
+pub use gc::Gc;

-static GC_ALLOCATOR: GcAllocator = GcAllocator;
-pub static GLOBAL_ALLOCATOR: GlobalAllocator = GlobalAllocator;
+pub static ALLOCATOR: GcAllocator = GcAllocator;