[Metaissue] Memory allocators in Gramine, their performance and possible alternatives #1767
Description of current memory allocators

There are two memory allocators in Gramine: MEMMGR and SLAB. Both allocators rely on common shared logic.
MEMMGR fixed-size allocator

Used to allocate specific objects in specific subsystems. Currently used only in LibOS. Each subsystem of LibOS that uses MEMMGR specifies its own (global to the subsystem) lock. Thus, MEMMGR object allocs/frees in the same subsystem are synchronized on this lock, but object allocs/frees in different subsystems can run in parallel. Several LibOS subsystems currently use MEMMGR.
Every managed object is wrapped into a thin wrapper object. Design and implementation are very simple.
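The fixed-size scheme can be sketched as follows. This is a minimal illustration, not Gramine's actual code; the names `mem_obj`, `mem_mgr`, `obj_alloc()`, and `obj_free()` are hypothetical:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical sketch of a MEMMGR-style fixed-size allocator: each managed
 * object is wrapped so that, while free, its memory doubles as a free-list
 * link; no extra bookkeeping memory is needed. */
typedef struct mem_obj {
    union {
        struct mem_obj* next_free; /* valid only while the object is free */
        char payload[64];          /* fixed object size chosen by the subsystem */
    };
} mem_obj;

typedef struct {
    mem_obj* free_list; /* objects returned via obj_free() */
    mem_obj* pool;      /* backend memory, carved out sequentially */
    size_t used, total;
} mem_mgr;

static mem_mgr* create_mem_mgr(size_t count) {
    mem_mgr* mgr = malloc(sizeof(*mgr));
    if (!mgr)
        return NULL;
    mgr->pool = calloc(count, sizeof(mem_obj)); /* stands in for page-sized backend alloc */
    mgr->free_list = NULL;
    mgr->used = 0;
    mgr->total = count;
    return mgr;
}

static void* obj_alloc(mem_mgr* mgr) {
    if (mgr->free_list) { /* fast path: reuse a previously freed object */
        mem_obj* obj = mgr->free_list;
        mgr->free_list = obj->next_free;
        return obj->payload;
    }
    if (mgr->used < mgr->total)
        return mgr->pool[mgr->used++].payload; /* carve out a fresh object */
    return NULL; /* the real MEMMGR would grab more backend memory here */
}

static void obj_free(mem_mgr* mgr, void* ptr) {
    mem_obj* obj = (mem_obj*)ptr;    /* payload sits at offset 0 of the wrapper */
    obj->next_free = mgr->free_list; /* memory is kept, never returned to backend */
    mgr->free_list = obj;
}
```

Note how `obj_free()` never releases memory back to the backend, which matches the "never shrunk" behavior described below.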
The MEMMGR memory managers are never "reset", shrunk, or deleted. Thus, if LibOS allocates a lot of MEMMGR objects initially and then frees them all, that MEMMGR memory is leaked. This should be a very rare and unimportant case, though. Backend-memory (page-sized) allocation happens via a system-level allocation callback. Backend-memory (page-sized) deallocation, as mentioned above, doesn't really happen; if it did, it would go through the corresponding system-level free callback.

Support for ASan
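A custom allocator has to cooperate with ASan explicitly, since ASan only instruments the standard allocator by default. The sketch below uses the public `sanitizer/asan_interface.h` poisoning API and is a hedged illustration (the `pool_alloc()`/`pool_free()` names are hypothetical); the guards make it a no-op when not compiled with ASan:

```c
#include <assert.h>
#include <stddef.h>

/* Use the real ASan poisoning API when available, otherwise no-ops. */
#if defined(__has_feature)
#if __has_feature(address_sanitizer)
#include <sanitizer/asan_interface.h>
#define POISON(addr, size)   __asan_poison_memory_region(addr, size)
#define UNPOISON(addr, size) __asan_unpoison_memory_region(addr, size)
#endif
#endif
#ifndef POISON
#define POISON(addr, size)   ((void)(addr), (void)(size))
#define UNPOISON(addr, size) ((void)(addr), (void)(size))
#endif

/* An allocator that owns its memory must poison freed objects and unpoison
 * them on allocation, otherwise ASan cannot flag use-after-free on
 * allocator-owned regions. */
static char g_pool[64]; /* stands in for a MEMMGR-owned region */
static int g_in_use = 0;

static void* pool_alloc(void) {
    UNPOISON(g_pool, sizeof(g_pool)); /* object handed out: accesses are legal */
    g_in_use = 1;
    return g_pool;
}

static void pool_free(void* ptr) {
    (void)ptr;
    g_in_use = 0;
    POISON(g_pool, sizeof(g_pool)); /* any later touch is reported by ASan */
}
```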
Open issues
SLAB variable-size allocator

Generic backend for malloc and free in all other subsystems. Used both in LibOS and in PAL. When any (random-size) object needs to be allocated/freed in LibOS or in PAL, the traditional malloc/free interface is used.
Backend-memory (page-sized) allocation and deallocation are implemented via system-level callbacks.
There is a single global slab manager with a corresponding lock for LibOS, and similarly a single global slab manager with a corresponding lock for PAL (see the respective definitions in the LibOS and PAL sources).
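The single-manager-plus-single-lock arrangement can be illustrated as follows. This is a hypothetical sketch, not Gramine's code; Gramine uses its own lock primitives, not pthreads, and the inner `malloc()` merely stands in for the slab fast path:

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

/* One global slab manager guarded by one global lock: every malloc/free in
 * the whole component serializes on this lock, which is the scalability
 * bottleneck discussed in this issue. */
static pthread_mutex_t g_slab_lock = PTHREAD_MUTEX_INITIALIZER;
static size_t g_live_allocs = 0; /* stands in for the slab-manager state */

static void* locked_malloc(size_t size) {
    pthread_mutex_lock(&g_slab_lock);
    void* ptr = malloc(size); /* stands in for the actual slab logic */
    if (ptr)
        g_live_allocs++;
    pthread_mutex_unlock(&g_slab_lock);
    return ptr;
}

static void locked_free(void* ptr) {
    pthread_mutex_lock(&g_slab_lock);
    if (ptr)
        g_live_allocs--;
    free(ptr);
    pthread_mutex_unlock(&g_slab_lock);
}
```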
NOTE: Design and implementation are based on the MEMMGR allocator for the common case, with a trivial fallback for the large-object case.
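The common-case/fallback split can be sketched like this. It is a hedged illustration, not Gramine's implementation: small sizes are rounded up to one of a few fixed "levels" (each of which a real slab would serve from a MEMMGR-style free list), large objects go straight to the backend, and a small header before the returned pointer records which path was taken. The names `slab_malloc()`, `slab_free()`, and `g_slab_sizes` are hypothetical:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define SLAB_LEVELS 6
static const size_t g_slab_sizes[SLAB_LEVELS] = {16, 32, 64, 128, 256, 512};
#define LARGE_OBJ_LEVEL 0xFF

typedef struct {
    uint8_t level; /* size class, or LARGE_OBJ_LEVEL for the fallback */
} slab_hdr;

static void* slab_malloc(size_t size) {
    for (int lvl = 0; lvl < SLAB_LEVELS; lvl++) {
        if (size <= g_slab_sizes[lvl]) {
            /* real code would take the object from this level's free list */
            slab_hdr* hdr = malloc(sizeof(slab_hdr) + g_slab_sizes[lvl]);
            if (!hdr)
                return NULL;
            hdr->level = (uint8_t)lvl;
            return hdr + 1;
        }
    }
    /* large-object fallback: allocate directly from the backend */
    slab_hdr* hdr = malloc(sizeof(slab_hdr) + size);
    if (!hdr)
        return NULL;
    hdr->level = LARGE_OBJ_LEVEL;
    return hdr + 1;
}

static void slab_free(void* ptr) {
    if (!ptr)
        return;
    slab_hdr* hdr = (slab_hdr*)ptr - 1; /* header tells which path owns it */
    /* real code would push small objects back onto their level's free list;
     * here both paths simply release the backing memory */
    free(hdr);
}
```

The header trick is what lets `free()` work without being told the size: the allocator reads the level from the bytes just before the user pointer.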
Deallocation happens similarly to the allocation described above.
Open issues
Description of the feature
Memory allocators in Gramine were written ~15 years ago. It's time to re-evaluate them and propose alternatives.
I want to start a discussion on this topic. In particular, the next posts will contain information on:
- the generic backend for `malloc` and `free` in all other subsystems (both in LibOS and in PAL)

Why should Gramine implement it?
We see more and more performance bottlenecks in memory/object allocation inside Gramine itself. They all stem from the fact that our memory allocators are old, ad-hoc, not scalable, and not CPU- and cache-friendly; also our allocators do not perform object caching.
Just a couple of recent examples:
- `sendfile()`: #1615

For possible alternatives (in Rust), see this comment: #1723 (review)