Decrease default guard size from 2G to 32M
This commit follows in the footsteps of SpiderMonkey to reduce the size
of the default guard region from 2GiB to 32MiB. SpiderMonkey performed
an analysis of a corpus of wasm modules and found that the largest
static offset was 20MiB, so 32MiB is that value rounded up to the next
power of two.

This will reduce the size of the default virtual memory reservation per
linear memory. Previously it was 8G, since a 2G guard was placed both
before and after the 4G linear memory. Now it'll be 4G+64M with the
before/after guards taken into account. This should in theory make it
easier to pack more instances into the pooling allocator, for example,
and reduce the overall virtual memory footprint.
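
As a quick sanity check on that arithmetic, a minimal sketch in plain
Rust (not Wasmtime code):

```rust
// Per-linear-memory virtual reservation, before and after this change.
fn main() {
    let four_g: u64 = 4 << 30;
    let old = (2u64 << 30) + four_g + (2u64 << 30); // 2G guard + 4G memory + 2G guard
    let new = (32u64 << 20) + four_g + (32u64 << 20); // 32M guard + 4G memory + 32M guard
    assert_eq!(old, 8u64 << 30); // 8G
    assert_eq!(new, four_g + (64u64 << 20)); // 4G + 64M
}
```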

This is not expected to have any major impact on the performance of
wasm modules, as practically all bounds checks should still be elided.
We've also been fuzzing differently sized guard regions for quite a
long time, so the risk of issues specifically connected to a smaller
guard region should be low.
alexcrichton committed Nov 14, 2024
1 parent 0e6c711 commit 4b07e91
Showing 2 changed files with 19 additions and 16 deletions.
9 changes: 6 additions & 3 deletions crates/environ/src/tunables.rs
@@ -199,10 +199,13 @@ impl Tunables {
 // address space reservations liberally by default, allowing us to
 // help eliminate bounds checks.
 //
-// Coupled with a 2 GiB address space guard it lets us translate
-// wasm offsets into x86 offsets as aggressively as we can.
+// A 32MiB default guard size is then allocated so we can remove
+// explicit bounds checks if any static offset is less than this
+// value. SpiderMonkey found, for example, that in a large corpus of
+// wasm modules 20MiB was the maximum offset so this is the
+// power-of-two-rounded up from that and matches SpiderMonkey.
 memory_reservation: 1 << 32,
-memory_guard_size: 0x8000_0000,
+memory_guard_size: 32 << 20,

 // We've got lots of address space on 64-bit so use a larger
 // grow-into-this area, but on 32-bit we aren't as lucky. Miri is
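
To make the new comment concrete, here's an illustrative sketch of the
elision condition it describes (a simplification, not Wasmtime's actual
codegen logic):

```rust
/// With a full 4GiB reservation for a 32-bit linear memory, an access at
/// `index + offset` can skip its explicit bounds check whenever the static
/// `offset` is below the guard size: even the worst-case `index` of
/// `u32::MAX` plus such an offset still lands inside the
/// reservation-plus-guard region, where a bad access faults in the guard
/// pages rather than touching unrelated memory.
fn needs_explicit_bounds_check(static_offset: u64, guard_size: u64) -> bool {
    static_offset >= guard_size
}

fn main() {
    let guard = 32u64 << 20; // the new 32MiB default
    assert!(!needs_explicit_bounds_check(20 << 20, guard)); // 20MiB offset: elided
    assert!(needs_explicit_bounds_check(64 << 20, guard)); // 64MiB offset: checked
}
```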
26 changes: 13 additions & 13 deletions crates/wasmtime/src/config.rs
@@ -1554,10 +1554,10 @@ impl Config {
 ///
 /// ## Default
 ///
-/// The default value for this property is 2GiB on 64-bit platforms. This
+/// The default value for this property is 32MiB on 64-bit platforms. This
 /// allows eliminating almost all bounds checks on loads/stores with an
-/// immediate offset of less than 2GiB. On 32-bit platforms this defaults to
-/// 64KiB.
+/// immediate offset of less than 32MiB. On 32-bit platforms this defaults
+/// to 64KiB.
 pub fn memory_guard_size(&mut self, bytes: u64) -> &mut Self {
     self.tunables.memory_guard_size = Some(bytes);
     self
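
For embedders who need a different trade-off, the setter in this hunk
can override the default. A minimal usage sketch, assuming the
`wasmtime` crate at this commit together with `anyhow` for error
handling:

```rust
use wasmtime::{Config, Engine};

fn main() -> anyhow::Result<()> {
    let mut config = Config::new();
    // Restore the old 2GiB guard, e.g. for modules known to use very large
    // static offsets; `32 << 20` would spell out the new default instead.
    config.memory_guard_size(2 << 30);
    let _engine = Engine::new(&config)?;
    Ok(())
}
```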
@@ -2711,19 +2711,19 @@ pub enum WasmBacktraceDetails {
 /// Additionally the main cost of the pooling allocator is that it requires a
 /// very large reservation of virtual memory (on the order of most of the
 /// addressable virtual address space). WebAssembly 32-bit linear memories in
-/// Wasmtime are, by default 4G address space reservations with a 2G guard
+/// Wasmtime are, by default 4G address space reservations with a small guard
 /// region both before and after the linear memory. Memories in the pooling
 /// allocator are contiguous which means that we only need a guard after linear
 /// memory because the previous linear memory's slot post-guard is our own
-/// pre-guard. This means that, by default, the pooling allocator uses 6G of
-/// virtual memory per WebAssembly linear memory slot. 6G of virtual memory is
-/// 32.5 bits of a 64-bit address. Many 64-bit systems can only actually use
-/// 48-bit addresses by default (although this can be extended on architectures
-/// nowadays too), and of those 48 bits one of them is reserved to indicate
-/// kernel-vs-userspace. This leaves 47-32.5=14.5 bits left, meaning you can
-/// only have at most 64k slots of linear memories on many systems by default.
-/// This is a relatively small number and shows how the pooling allocator can
-/// quickly exhaust all of virtual memory.
+/// pre-guard. This means that, by default, the pooling allocator uses roughly
+/// 4G of virtual memory per WebAssembly linear memory slot. 4G of virtual
+/// memory is 32 bits of a 64-bit address. Many 64-bit systems can only
+/// actually use 48-bit addresses by default (although this can be extended on
+/// architectures nowadays too), and of those 48 bits one of them is reserved
+/// to indicate kernel-vs-userspace. This leaves 47-32=15 bits left,
+/// meaning you can only have at most 32k slots of linear memories on many
+/// systems by default. This is a relatively small number and shows how the
+/// pooling allocator can quickly exhaust all of virtual memory.
 ///
 /// Another disadvantage of the pooling allocator is that it may keep memory
 /// alive when nothing is using it. A previously used slot for an instance might
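
The slot arithmetic in this doc comment can be double-checked with a
small sketch:

```rust
// 48-bit addresses, minus 1 bit reserved for kernel-vs-userspace, minus
// 32 bits per ~4G slot, leaves 15 bits of slot index: at most 32k slots.
fn main() {
    let usable_bits = 48 - 1; // userspace half of a 48-bit address space
    let slot_bits = 32; // each linear-memory slot reserves ~4G
    let max_slots: u64 = 1 << (usable_bits - slot_bits);
    assert_eq!(max_slots, 32 * 1024);
}
```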
