wasmtime slow due to lock contention in kernel during munmap() #177
Comments
We did a bit of digging and it seems like the crux of the matter is that wasmtime is making an extremely large mapping to reserve address space, so that other things don't end up surprise-mapped halfway through one of the slots that is set aside for a potential WASM instance. Today it appears that mapping is … I would also note, if useful, that while we strongly encourage people not to depend on the Linux-specific …
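To make the pattern concrete, here is a rough sketch (not wasmtime's actual code; the slot size and count are made-up numbers, and it assumes the `libc` crate): reserve one large PROT_NONE region up front, commit individual slots with mprotect(2) as instances are created, and tear the whole reservation down with a single munmap(2).

```rust
use std::io;

// Hypothetical sizes for illustration only; wasmtime's real reservation is
// derived from the pooling allocator's configured limits.
const SLOT_SIZE: usize = 64 * 1024 * 1024; // 64 MiB per slot
const NUM_SLOTS: usize = 1000;

fn main() -> io::Result<()> {
    let total = SLOT_SIZE * NUM_SLOTS;

    // Reserve address space without committing memory: PROT_NONE means any
    // stray access faults, and nothing else can be mapped into this range.
    let base = unsafe {
        libc::mmap(
            std::ptr::null_mut(),
            total,
            libc::PROT_NONE,
            libc::MAP_PRIVATE | libc::MAP_ANON | libc::MAP_NORESERVE,
            -1,
            0,
        )
    };
    if base == libc::MAP_FAILED {
        return Err(io::Error::last_os_error());
    }

    // "Commit" the first slot by flipping its protection; pages are still
    // faulted in lazily on first touch.
    if unsafe { libc::mprotect(base, SLOT_SIZE, libc::PROT_READ | libc::PROT_WRITE) } != 0 {
        return Err(io::Error::last_os_error());
    }

    // Teardown is one very large munmap(), which is where the kernel lock
    // contention reported in this issue shows up.
    if unsafe { libc::munmap(base, total) } != 0 {
        return Err(io::Error::last_os_error());
    }
    Ok(())
}
```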
I gave this a shot but ran into the fact that wasmtime doesn't really centralize its memory allocation tracking anywhere -- several independent components all call … sunshowers/wasmtime#1 is my WIP (see …).
Ended up deciding to use unsafe code to store a …
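For context on what "centralizing" the tracking could look like, here is a minimal hypothetical sketch; none of these names (`MappingTracker`, `track_reserve`, `track_unmap`) come from wasmtime or the WIP branch linked above.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Hypothetical central tracker that every component would report its
/// reservations and unmaps to, instead of each one calling mmap on its own.
#[derive(Default)]
pub struct MappingTracker {
    reserved_bytes: AtomicUsize,
}

impl MappingTracker {
    /// Record that `len` bytes of address space were reserved.
    pub fn track_reserve(&self, len: usize) {
        self.reserved_bytes.fetch_add(len, Ordering::Relaxed);
    }

    /// Record that `len` bytes of address space were released.
    pub fn track_unmap(&self, len: usize) {
        self.reserved_bytes.fetch_sub(len, Ordering::Relaxed);
    }

    /// Total address space currently reserved through this tracker.
    pub fn reserved_bytes(&self) -> usize {
        self.reserved_bytes.load(Ordering::Relaxed)
    }
}
```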
Filed an issue on the wasmtime repo talking about my prototype, the issues I ran into while writing it, and possible solutions: bytecodealliance/wasmtime#9544
Encountered an interesting bug while trying to port wasmtime to illumos, as documented in bytecodealliance/wasmtime#9535.
STR (note `+beta`, compiling wasmtime on illumos requires Rust 1.83 or above): …

This takes around 0.07 seconds on Linux but around 5-6 seconds on illumos.
DTrace samples:
From my naive reading, particularly of the kernel stacks, it seems like most of the time is being spent waiting on locks to various degrees.
Per Alex Crichton in this comment:
This corresponds to `PoolingInstanceAllocator` in wasmtime. Alex suggests possibly tweaking how the allocator works, either on illumos or generally, but given the performance difference between illumos and Linux it seems that a kernel-level improvement might help.

cc @iximeow, @rmustacc, who I briefly chatted with about this.
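As a rough illustration of the kind of tweak being discussed: the internal `PoolingInstanceAllocator` is selected through the public allocation-strategy knob on `wasmtime::Config`, so one illumos-side workaround would be to fall back to on-demand allocation instead. This is a sketch against a recent wasmtime release; the exact tuning methods on `PoolingAllocationConfig` vary between versions, so treat the details as assumptions.

```rust
use wasmtime::{Config, Engine, InstanceAllocationStrategy, PoolingAllocationConfig};

fn main() {
    // Pooling allocator: makes the very large up-front address-space
    // reservation discussed above, in exchange for fast instantiation.
    let mut pooled = Config::new();
    pooled.allocation_strategy(InstanceAllocationStrategy::Pooling(
        PoolingAllocationConfig::default(),
    ));
    let _pooling_engine = Engine::new(&pooled).expect("failed to build pooling engine");

    // On-demand allocator: no huge reservation, so it sidesteps the very
    // large munmap() at teardown, at the cost of slower instantiation.
    let mut on_demand = Config::new();
    on_demand.allocation_strategy(InstanceAllocationStrategy::OnDemand);
    let _on_demand_engine = Engine::new(&on_demand).expect("failed to build on-demand engine");
}
```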