A fast bump allocator that supports allocation scopes / checkpoints. Aka an arena for values of arbitrary types.
A bump allocator owns a big chunk of memory. It has a pointer that starts at one end of that chunk. When an allocation is made, that pointer is aligned and bumped towards the other end of the chunk by the allocation's size. When its chunk is full, the allocator allocates another chunk with twice the size.

This makes allocations very fast. The drawback is that you can't reclaim memory like you do with a more general allocator. Memory for the most recent allocation can be reclaimed, and you can also use scopes, checkpoints and `reset` to reclaim memory.
A bump allocator is great for phase-oriented allocations where you allocate objects in a loop and free them at the end of every iteration:

```rust
use bump_scope::Bump;

let mut bump: Bump = Bump::new();

loop {
    // use bump ...
    bump.reset();
}
```
The fact that the bump allocator allocates ever larger chunks and `reset` only keeps around the largest one means that after a few iterations, every bump allocation will be done on the same chunk and no more chunks need to be allocated.

The introduction of scopes also makes this bump allocator great for temporary allocations and stack-like usage.
Comparison to bumpalo
Bumpalo is a popular crate for bump allocation. This crate was inspired by bumpalo and Always Bump Downwards (but ignores the title).

Unlike `bumpalo`, this crate...
- Supports scopes and checkpoints.
- Drop is always called for allocated values unless explicitly leaked or forgotten. `alloc*` methods return a `BumpBox<T>` which owns and drops `T`. Types that don't need dropping can be turned into references with `into_ref` and `into_mut`.
- You can allocate a slice from any `Iterator` with `alloc_iter` (see the sketch after this list).
- Every method that panics on allocation failure has a fallible `try_*` counterpart.
- `Bump`'s base allocator is generic.
- Won't try to allocate a smaller chunk if allocation failed.
- No built-in allocation limit. You can provide an allocator that enforces an allocation limit (see `tests/limit_memory_usage.rs`).
- Allocations are a bit more optimized (see `crates/inspect-asm/out/x86-64` and the benchmarks).
- You can choose the bump direction. Bumps upwards by default.
- You can choose the minimum alignment. `1` by default.
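As a quick illustration of the iterator and fallible allocation points above, here is a small sketch; it assumes `alloc_iter` accepts any iterator and that `try_alloc` returns a `Result`:

```rust
use bump_scope::Bump;

let bump: Bump = Bump::new();

// allocate a slice straight from an iterator
let squares = bump.alloc_iter((1..=3).map(|i| i * i));
assert_eq!(&*squares, &[1, 4, 9][..]);

// fallible counterpart that returns an error instead of panicking on allocation failure
let value = bump.try_alloc(123).expect("allocation failed");
assert_eq!(*value, 123);
```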
You can create scopes to make allocations that live only for a part of their parent scope. Entering and exiting scopes is virtually free. Allocating within a scope has no overhead.

You can create a new scope either with a `scoped` closure or with a `scope_guard`:
```rust
use bump_scope::Bump;

let mut bump: Bump = Bump::new();

// you can use a closure
bump.scoped(|mut bump| {
    let hello = bump.alloc_str("hello");
    assert_eq!(bump.stats().allocated(), 5);

    bump.scoped(|bump| {
        let world = bump.alloc_str("world");
        println!("{hello} and {world} are both live");
        assert_eq!(bump.stats().allocated(), 10);
    });

    println!("{hello} is still live");
    assert_eq!(bump.stats().allocated(), 5);
});

assert_eq!(bump.stats().allocated(), 0);

// or you can use scope guards
{
    let mut guard = bump.scope_guard();
    let mut bump = guard.scope();

    let hello = bump.alloc_str("hello");
    assert_eq!(bump.stats().allocated(), 5);

    {
        let mut guard = bump.scope_guard();
        let bump = guard.scope();

        let world = bump.alloc_str("world");
        println!("{hello} and {world} are both live");
        assert_eq!(bump.stats().allocated(), 10);
    }

    println!("{hello} is still live");
    assert_eq!(bump.stats().allocated(), 5);
}

assert_eq!(bump.stats().allocated(), 0);
```
You can also use the unsafe `checkpoint` API to reset the bump pointer to a previous location.
```rust
use bump_scope::Bump;

let mut bump: Bump = Bump::new();

let checkpoint = bump.checkpoint();

{
    let hello = bump.alloc_str("hello");
    assert_eq!(bump.stats().allocated(), 5);
}

unsafe { bump.reset_to(checkpoint); }
assert_eq!(bump.stats().allocated(), 0);
```
`bump-scope` provides bump allocated variants of `Vec` and `String` called `BumpVec` and `BumpString` (a short sketch follows the list below). They also come in different flavors:

- `Fixed*` for fixed capacity collections
- `Mut*` for collections optimized for a mutable bump allocator
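A minimal usage sketch of the basic collections, assuming `BumpVec::new_in` and `BumpString::new_in` constructors that take a reference to the bump allocator:

```rust
use bump_scope::{ Bump, BumpVec, BumpString };

let bump: Bump = Bump::new();

// a vector that allocates from `bump`
let mut vec = BumpVec::new_in(&bump);
vec.push(1);
vec.push(2);
vec.push(3);
assert_eq!(vec.as_slice(), &[1, 2, 3][..]);

// a string that allocates from `bump`
let mut string = BumpString::new_in(&bump);
string.push_str("hello");
string.push_str(" world");
assert_eq!(string.as_str(), "hello world");
```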
`Bump` is `!Sync`, which means it can't be shared between threads. To bump allocate in parallel you can use a `BumpPool`.
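A rough sketch of pooled allocation across threads; `BumpPool::new` and the `get` method used here are assumptions about the pool's API, and the `std` feature is required:

```rust
use bump_scope::BumpPool;

let pool: BumpPool = BumpPool::new();

std::thread::scope(|s| {
    for _ in 0..4 {
        s.spawn(|| {
            // each thread checks out its own bump allocator from the pool
            let bump = pool.get();
            let _hello = bump.alloc_str("hello from a worker thread");
        });
    }
});
```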
`Bump` and `BumpScope` implement `allocator_api2`'s `Allocator` trait. They can be used to allocate collections.
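For example, here is a short sketch of an `allocator_api2` vector backed by a `Bump`:

```rust
use bump_scope::Bump;
use allocator_api2::vec::Vec;

let bump: Bump = Bump::new();

// `&Bump` can be passed anywhere an `allocator_api2` `Allocator` is expected
let mut vec = Vec::new_in(&bump);
vec.push(1);
vec.push(2);
vec.push(3);
assert_eq!(vec.as_slice(), &[1, 2, 3][..]);
```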
A bump allocator can grow, shrink and deallocate the most recent allocation. When bumping upwards it can even do so in place. Growing an allocation other than the most recent one requires a new allocation, and the old memory block becomes wasted space. Shrinking or deallocating allocations other than the most recent one does nothing, which also results in wasted space.
A bump allocator does not require `deallocate` or `shrink` to free memory. After all, memory will be reclaimed when exiting a scope or calling `reset`. You can wrap a bump allocator in a type that makes `deallocate` and `shrink` a no-op using `WithoutDealloc` and `WithoutShrink`.
```rust
use bump_scope::{ Bump, WithoutDealloc };
use allocator_api2::boxed::Box;

let bump: Bump = Bump::new();

let boxed = Box::new_in(5, &bump);
assert_eq!(bump.stats().allocated(), 4);
drop(boxed);
assert_eq!(bump.stats().allocated(), 0);

let boxed = Box::new_in(5, WithoutDealloc(&bump));
assert_eq!(bump.stats().allocated(), 4);
drop(boxed);
assert_eq!(bump.stats().allocated(), 4);
```
Cargo feature flags:

- `std` (enabled by default) — Adds `BumpPool` and implementations of `std::io` traits for `BumpBox` and vectors.
- `alloc` (enabled by default) — Adds `Global` as the default base allocator, `BumpBox::into_box` and some interactions with `alloc` collections.
- `panic-on-alloc` (enabled by default) — Adds functions and traits that panic when allocation fails. Without this feature, allocation failures cannot cause panics, and only `try_`-prefixed allocation methods are available.
- `serde` — Adds `Serialize` implementations for `BumpBox`, strings and vectors, and `DeserializeSeed` for strings and vectors.
- `zerocopy` — Adds `alloc_zeroed(_slice)`, `init_zeroed`, `resize_zeroed` and `extend_zeroed`.
- `nightly-allocator-api` — Enables `allocator-api2`'s `nightly` feature, which makes it reexport the nightly allocator API instead of its own implementation. With this you can bump allocate collections from the standard library.
- `nightly-coerce-unsized` — Makes `BumpBox<T>` implement `CoerceUnsized`. With this, `BumpBox<[i32; 3]>` coerces to `BumpBox<[i32]>`, `BumpBox<dyn Debug>` and so on.
- `nightly-exact-size-is-empty` — Implements `is_empty` manually for some iterators.
- `nightly-trusted-len` — Implements `TrustedLen` for some iterators.
Bump direction is controlled by the generic parameter `const UP: bool`. By default, `UP` is `true`, so the allocator bumps upwards.

Bumping upwards has the advantage that the most recent allocation can be grown and shrunk in place. This benefits collections as well as `alloc_iter(_mut)` and `alloc_fmt(_mut)`, with the exception of `MutBumpVecRev` and `alloc_iter_mut_rev`, which can only be grown and shrunk in place when bumping downwards.

Bumping downwards shaves off a few non-branch instructions per allocation.
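As a sketch, a downwards-bumping `Bump` can be selected through its const parameters; the parameter order `Bump<A, MIN_ALIGN, UP>` is assumed here:

```rust
use bump_scope::Bump;
use allocator_api2::alloc::Global;

// assumed parameter order: base allocator, minimum alignment, bump direction;
// `UP = false` makes this allocator bump downwards
let bump: Bump<Global, 1, false> = Bump::new();

let value = bump.alloc(42u32);
assert_eq!(*value, 42);
```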
The minimum alignment is controlled by the generic parameter `const MIN_ALIGN: usize`. By default, `MIN_ALIGN` is `1`.

For example, changing the minimum alignment to `4` means allocations with an alignment of `4` no longer need to align the bump pointer. This penalizes allocations whose sizes are not a multiple of `4`, as their size now needs to be rounded up to the next multiple of `4`.

The overhead of aligning and rounding up is 1 (`UP = false`) or 2 (`UP = true`) non-branch instructions on x86-64.
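Along the same lines, a sketch with the minimum alignment raised to `4` (again assuming the `Bump<A, MIN_ALIGN, UP>` parameter order):

```rust
use bump_scope::Bump;
use allocator_api2::alloc::Global;

// with `MIN_ALIGN = 4`, four-byte-aligned allocations never need to realign the bump pointer,
// while allocation sizes get rounded up to a multiple of 4
let bump: Bump<Global, 4, true> = Bump::new();

let a = bump.alloc(1u32);
let b = bump.alloc(2u32);
assert_eq!((*a, *b), (1, 2));
```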
If `GUARANTEED_ALLOCATED` is `true`, the bump allocator is guaranteed to have at least one allocated chunk. This is usually the case unless it was created with `Bump::unallocated`.

You need a guaranteed allocated `Bump(Scope)` to create scopes via `scoped` and `scope_guard`. You can convert a maybe unallocated `Bump(Scope)` into a guaranteed allocated one with `guaranteed_allocated`, `guaranteed_allocated_ref` or `guaranteed_allocated_mut`.

The point of this is that `Bump`s can be created without allocating memory and can even be `const` constructed since Rust version 1.83. At the same time, `Bump`s that have already allocated a chunk don't suffer runtime checks for entering scopes and creating checkpoints.
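A small sketch of going from an unallocated to a guaranteed allocated `Bump`; the position of the trailing `GUARANTEED_ALLOCATED` parameter is an assumption here:

```rust
use bump_scope::Bump;
use allocator_api2::alloc::Global;

// no chunk is allocated yet; the trailing `false` (GUARANTEED_ALLOCATED) is assumed
let bump: Bump<Global, 1, true, false> = Bump::unallocated();

// allocate the first chunk so scopes and checkpoints can be created without runtime checks
let bump = bump.guaranteed_allocated();
let _hello = bump.alloc_str("hello");
assert_eq!(bump.stats().allocated(), 5);
```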
Running `cargo test` requires a nightly compiler. This is because we use tests copied from `std` which make heavy use of nightly features.
Licensed under either of:
- MIT license (LICENSE-MIT or https://opensource.org/licenses/MIT)
- Apache License, Version 2.0, (LICENSE-APACHE or https://www.apache.org/licenses/LICENSE-2.0)
at your option.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.