
Allow heap to start not at a page boundary #88

Open
sffc opened this issue Jan 31, 2020 · 2 comments
Labels
enhancement New feature or request

Comments


sffc commented Jan 31, 2020

Motivation

We're trying to build an application where WASM is used to run "microfunctions", small stateless functions that can be written once in Rust and then ported via WASM to run in a variety of runtimes. A WASM Memory may be built once and then used again and again for multiple microfunction invocations. The buffers backing the WASM Memories would be owned and destroyed by the host environment.

One problem we're running into is that the WASM Memories, at 64 KiB page sizes, are unsuitable for scaling to dozens of microfunctions. It seems unlikely that the WASM spec would ever allow smaller page sizes, so the best alternative would be to guarantee that each microfunction fits within one page.

We've managed to get the Rust call stack and static memory to fit in a small chunk of linear memory, such that most of the first page is empty. However, per #61, wee_alloc seems to always add a new page for its heap, with no option to re-use empty space in the existing linear memory space.

Proposed Solution

Allow wee_alloc to set the head of its heap to an arbitrary location. The location could be provided by a Global, for example. If wee_alloc runs out of space in that initial block (between the start position and the current memory size), then it can allocate more pages as usual.
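The proposal above can be sketched as a small piece of arithmetic: given an arbitrary heap start inside already-allocated linear memory, align it and compute how much space remains before more pages would be needed. This is a minimal illustration, not wee_alloc's actual API; `ALIGN` and the function name are assumptions for the example.

```rust
// WASM page size per the core spec: 64 KiB.
const PAGE_SIZE: usize = 65_536;
// Assumed allocator alignment for the sketch.
const ALIGN: usize = 8;

/// Hypothetical helper: returns the aligned heap start and the number of
/// free bytes between it and the current end of linear memory.
fn initial_free_block(heap_start: usize, memory_size: usize) -> (usize, usize) {
    // Round the start up to the allocator's alignment.
    let aligned = (heap_start + ALIGN - 1) & !(ALIGN - 1);
    // Free bytes between the aligned start and the current memory end.
    let free = memory_size.saturating_sub(aligned);
    (aligned, free)
}

fn main() {
    // Suppose the stack and statics end at byte 20_000 of a one-page memory.
    let (start, free) = initial_free_block(20_000, PAGE_SIZE);
    assert_eq!(start, 20_000);
    assert_eq!(free, 45_536);
    println!("heap starts at {start}, {free} bytes free in the first page");
}
```

With a layout like this, the allocator could serve small heaps entirely out of the first page and fall back to growing memory only when that block is exhausted.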

Alternatives

  1. Decrease the page size. This would require a fundamental change to the WASM MVP spec, which seems unlikely.
  2. Accept that code written with wee_alloc always requires at least 128 KiB of linear memory.

Additional Context

The project is called OmnICU. We're hoping to share more details soon. For now, you can track some of our work at https://github.com/i18n-concept/rust-discuss

CC @hagbard @nciric @echeran


pepyakin commented Feb 3, 2020

Interesting! I think in theory it should be possible.

There is a global variable provided by LLD called __heap_base. We can link against it to get the address at which we can start laying out the heap.

This works for me:
extern "C" {
    static __heap_base: usize;
}

#[no_mangle]
pub extern "C" fn get_heap_base() -> *const () {
    unsafe { &__heap_base as *const _ as *const () }
}

Then, I think it might be fine to return a partial page from the first call to imp_wasm32::alloc_pages, given that we align it properly and update the bookkeeping correspondingly. There is one caveat, though: wee_alloc expects that allocation always succeeds after refilling the pages, so I guess we will have to be prepared for this somehow.
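The idea in the paragraph above can be sketched as follows: the first call hands out the leftover region between __heap_base and the current end of memory (possibly less than a full page), and only later calls grow memory by whole pages. This is a standalone illustration with invented names (`PageSource`, `alloc_pages` here is a mock), not wee_alloc's real internals; in real WASM the growth step would be a memory.grow instruction.

```rust
const PAGE_SIZE: usize = 65_536;

/// Hypothetical page source modeling the two-phase behavior.
struct PageSource {
    heap_base: usize,   // address from __heap_base (assumed already aligned)
    memory_end: usize,  // current linear memory size in bytes
    used_initial: bool, // has the initial partial block been handed out?
}

impl PageSource {
    /// Returns (start, len) of the next region to add to the free list.
    fn alloc_pages(&mut self, pages: usize) -> (usize, usize) {
        if !self.used_initial && self.memory_end > self.heap_base {
            self.used_initial = true;
            // First call: hand out the leftover space in existing memory,
            // which may be smaller than the number of pages requested.
            return (self.heap_base, self.memory_end - self.heap_base);
        }
        // Subsequent calls: grow by whole pages (memory.grow in real WASM).
        let start = self.memory_end;
        self.memory_end += pages * PAGE_SIZE;
        (start, pages * PAGE_SIZE)
    }
}

fn main() {
    let mut src = PageSource { heap_base: 20_000, memory_end: PAGE_SIZE, used_initial: false };
    // First call returns the partial block inside the existing page.
    assert_eq!(src.alloc_pages(1), (20_000, 45_536));
    // Second call grows by a whole page.
    assert_eq!(src.alloc_pages(1), (65_536, 65_536));
}
```

The caveat in the comment above shows up here: the first call may return fewer bytes than requested, so the caller must be prepared to retry rather than assume the refill satisfied the allocation.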

In any case, I think @fitzgen would know better here.


fitzgen commented Feb 3, 2020

I tried hooking up __heap_base once before and had problems with it always being zero, but if that snippet works consistently, then we should start using it in general when initializing the main free list (rather than on the first alloc_pages call).

Then, as long as memory.grow instructions fail, wee_alloc will only use the initial heap laid out by lld.

We've managed to get the Rust call stack and static memory to fit in a small chunk of linear memory, such that most of the first page is empty.

This is going to be very prone to hard-to-debug stack overflows, so beware.
