proposal: minimise the amount of unmanaged memory #124
I've thought about doing something like this, but for temporal benefits instead of spatial. For the use case you've specified, perhaps we could add a Region structure:

```go
type Region struct {
    pages [][]byte // array of page-sized blocks
    data  []byte   // reference to entire region
}
```

LockedBuffers could perhaps also export this, although it would trivially leak the canary if it was inadvertently passed somewhere, so maybe not.

Another solution I was thinking about is a Pool structure that is essentially a queue of byte slices acting like an allocator or a buffer pool. This is useful for performance, since we don't need to allocate three pages and set them up every time we need a 32-byte region, but it's also advantageous from a spatial perspective: a single LockedBuffer region can be split into N slices and added to this pool, retaining the guard pages while minimizing wasted space. Two birds with one stone.
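To make the pool idea concrete, here is a rough sketch of a fixed-size pool carved out of one LockedBuffer. The names (Pool, NewPool, Get, Put) are illustrative only, not existing memguard API; it assumes the current memguard functions NewBuffer, Bytes, and WipeBytes.

```go
package pool

import (
	"errors"
	"sync"

	"github.com/awnumar/memguard"
)

// Pool hands out fixed-size sub-slices of a single guarded LockedBuffer,
// so N small secrets share one set of guard pages instead of needing N of them.
// This is a sketch, not part of memguard's API.
type Pool struct {
	mu   sync.Mutex
	buf  *memguard.LockedBuffer // backing region, guard pages included
	free [][]byte               // slices currently available
	size int                    // size of each slice
}

// NewPool splits a LockedBuffer of n*size bytes into n slices of size bytes each.
func NewPool(n, size int) *Pool {
	b := memguard.NewBuffer(n * size)
	p := &Pool{buf: b, size: size}
	data := b.Bytes()
	for i := 0; i < n; i++ {
		p.free = append(p.free, data[i*size:(i+1)*size])
	}
	return p
}

// Get returns a size-byte slice backed by the guarded region, or an error if exhausted.
func (p *Pool) Get() ([]byte, error) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if len(p.free) == 0 {
		return nil, errors.New("pool exhausted")
	}
	s := p.free[len(p.free)-1]
	p.free = p.free[:len(p.free)-1]
	return s, nil
}

// Put wipes a slice and returns it to the pool.
func (p *Pool) Put(s []byte) {
	memguard.WipeBytes(s)
	p.mu.Lock()
	p.free = append(p.free, s)
	p.mu.Unlock()
}
```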
We could also implement a custom allocator that's backed by LockedBuffers. It's potentially the most versatile solution but also the most complicated.
:: Adding a new container without guard pages

Proposal: Add a new LockedBuffer-like container which is essentially the same as a LockedBuffer but without the guard pages or canary.

For

Against

Conclusion: If we go with this then I would rather implement it in the

:: Adding a buffer pool implementation

Proposal: Add a queue/stack of fixed-size buffers that are backed by one or more LockedBuffer objects.

For

Against

:: Implementing a custom allocator

Proposal: Implement a custom allocator that can be queried for N bytes and will return a byte slice N bytes in length, backed by some memory within a LockedBuffer (see the sketch after this list).

For

Against
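For illustration, a minimal bump-allocator sketch along the lines of the third proposal, assuming the current memguard API (NewBuffer, Bytes, Destroy). The Allocator, NewAllocator, and Alloc names are hypothetical. It never frees individual allocations, which is one of the complications a real implementation would need to solve.

```go
package alloc

import (
	"errors"
	"sync"

	"github.com/awnumar/memguard"
)

// Allocator hands out variable-sized slices from a single guarded region.
// Individual allocations are never freed; Destroy releases everything at once.
// This is a sketch, not part of memguard's API.
type Allocator struct {
	mu   sync.Mutex
	buf  *memguard.LockedBuffer
	next int // offset of the next free byte
}

// NewAllocator reserves capacity bytes inside one LockedBuffer.
func NewAllocator(capacity int) *Allocator {
	return &Allocator{buf: memguard.NewBuffer(capacity)}
}

// Alloc returns an n-byte slice inside the guarded region, or an error when full.
func (a *Allocator) Alloc(n int) ([]byte, error) {
	a.mu.Lock()
	defer a.mu.Unlock()
	data := a.buf.Bytes()
	if a.next+n > len(data) {
		return nil, errors.New("allocator: out of space")
	}
	s := data[a.next : a.next+n]
	a.next += n
	return s, nil
}

// Destroy wipes and releases the entire backing region.
func (a *Allocator) Destroy() {
	a.buf.Destroy()
}
```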
Another solution: you can use memcall to allocate unmanaged memory regions of specific sizes, as well as mlock & mprotect them, and disable core dumps.
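For example, a direct memcall-based approach might look roughly like the following. The function names and signatures used here (Alloc, Lock, Unlock, Free, DisableCoreDumps) are assumptions based on recent versions of github.com/awnumar/memcall, so check the package documentation for the exact API.

```go
package main

import (
	"fmt"

	"github.com/awnumar/memcall"
)

func main() {
	// Best-effort attempt to stop the process from writing core dumps.
	if err := memcall.DisableCoreDumps(); err != nil {
		fmt.Println("could not disable core dumps:", err)
	}

	// Allocate exactly 32 bytes of unmanaged memory: no guard pages, no canary.
	key, err := memcall.Alloc(32)
	if err != nil {
		panic(err)
	}

	// Prevent the region from being swapped out to disk.
	if err := memcall.Lock(key); err != nil {
		panic(err)
	}

	// ... use key ...

	// Wipe, unlock, and return the memory to the OS when done.
	for i := range key {
		key[i] = 0
	}
	if err := memcall.Unlock(key); err != nil {
		panic(err)
	}
	if err := memcall.Free(key); err != nil {
		panic(err)
	}
}
```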
Is your feature request related to a problem? Please describe.
Using guard pages has the tradeoff of consuming more unmanaged memory pages, which could be a concern in high-traffic scenarios.
Describe the solution you'd like
It would be nice if there was an option to skip using guard pages when creating LockedBuffers. They can obviously stay on by default, but the user could pass in an option to avoid them. Whether this is exposed as part of the Enclave-related usage should be considered, but could probably be handled separately, if needed.
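Something along these lines, purely as an illustration of the requested option; BufferOptions and NewBufferWithOptions are hypothetical names, not part of memguard's current API (which exposes NewBuffer(size int)):

```go
// Hypothetical API sketch only; these names do not exist in memguard today.
opts := memguard.BufferOptions{
	GuardPages: false, // opt out of the two guard pages and the canary
}
key := memguard.NewBufferWithOptions(32, opts)
defer key.Destroy()
```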
Describe alternatives you've considered
Considered using Enclave, but in the expected traffic/usage patterns (cached keys, multiple concurrent users), it would likely lead to more unmanaged memory usage.
Additional context
N/A