Problem
We are using memguard in the Telegraf project to protect credentials or other secrets in memory. We love the guarantees this project gives and the clean interface, so thank you for the great project!
Telegraf is a data collection agent and might query dozens, hundreds, or even thousands of endpoints in parallel to gather data. Each of these endpoints might require credentials (we call them secrets) and thus will access an enclave to retrieve a LockedBuffer and hold it for a short amount of time. This is where we currently run into issues if too many instances access their secrets at the same time.
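For context, the per-plugin access pattern looks roughly like this (a minimal sketch against memguard's Enclave / LockedBuffer API; error handling simplified, the `authenticate` helper is a stand-in for the plugin's actual request):

```go
package main

import "github.com/awnumar/memguard"

func main() {
	// At startup: seal the credential into an Enclave; at rest it is
	// encrypted and holds no locked pages.
	enclave := memguard.NewEnclave([]byte("my-secret-token"))

	// Per gather cycle: decrypt into a LockedBuffer only for the short
	// time the secret is actually needed. This is the step that locks
	// (at least) two pages per secret.
	lb, err := enclave.Open()
	if err != nil {
		panic(err)
	}
	defer lb.Destroy() // wipe the buffer and unlock its pages again

	authenticate(lb.Bytes()) // hypothetical helper using the secret
}

func authenticate(secret []byte) { _ = secret }
```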
Each LockedBuffer locks (at least) two pages of memory. Assuming a 4 KB page size and a worst case where all plugins access their secrets at the same time, we end up with 8000 KB of locked memory for 1000 secrets. On some systems the ulimit is set to only 64 KB or lower, as described e.g. here. Increasing the ulimit might help in those cases, but we have extreme instances querying hundreds of thousands or even millions of endpoints, rendering the ulimit approach infeasible.
Please note: if you assume a credential to be around 64 bytes on average (I think that is already a high estimate) and an extreme case of 1 million such credentials, you end up with ~64 MB of data, whereas the current implementation ends up with ~8 GB of locked memory (1,000,000 × 2 × 4 KB)!
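To make the gap concrete, here is the back-of-the-envelope arithmetic as a runnable snippet (page size, pages per secret, and average credential length are the assumptions stated above):

```go
package main

import "fmt"

func main() {
	const (
		pageSize       = 4 * 1024  // assumed 4 KB system pages
		pagesPerSecret = 2         // memguard locks (at least) two pages per LockedBuffer
		secrets        = 1_000_000 // extreme but real deployment size
		secretLen      = 64        // assumed average credential length in bytes
	)
	payload := secrets * secretLen                // actual secret data
	locked := secrets * pagesPerSecret * pageSize // memory locked today

	fmt.Printf("payload: ~%d MB\n", payload/1_000_000)    // ~64 MB
	fmt.Printf("locked:  ~%d GB\n", locked/1_000_000_000) // ~8 GB
}
```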
Request
Please reduce the number of locked pages held by memguard to allow managing a massive number of guarded secrets.
Proposal
In my view, memguard could allocate a fixed amount of memory, lock it, and "compact" all LockedBuffers into this chunk of memory. We could (and should) still keep the canary before and after the actual data. De facto this amounts to implementing our own memory management and allocator. Thankfully, there are already projects providing a custom allocator (e.g. https://github.com/steveyen/go-slab).
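To illustrate the direction (purely hypothetical code, all names invented here; a production version would mmap the region so the runtime never moves it, and would support wiping and reusing individual slots):

```go
package main

import (
	"crypto/rand"
	"errors"

	"golang.org/x/sys/unix"
)

// pool is a hypothetical compacting allocator: one mlock()ed region
// shared by all secrets, each stored as canary|data|canary.
type pool struct {
	buf    []byte // the single locked region
	off    int    // bump-allocation offset
	canary []byte // random canary placed before and after each secret
}

func newPool(size int) (*pool, error) {
	buf := make([]byte, size)
	if err := unix.Mlock(buf); err != nil { // lock once, not per secret
		return nil, err
	}
	canary := make([]byte, 8)
	if _, err := rand.Read(canary); err != nil {
		return nil, err
	}
	return &pool{buf: buf, canary: canary}, nil
}

// place copies canary|secret|canary into the shared region and returns
// a slice aliasing just the secret bytes.
func (p *pool) place(secret []byte) ([]byte, error) {
	need := len(secret) + 2*len(p.canary)
	if p.off+need > len(p.buf) {
		return nil, errors.New("pool exhausted")
	}
	copy(p.buf[p.off:], p.canary)
	start := p.off + len(p.canary)
	copy(p.buf[start:], secret)
	copy(p.buf[start+len(secret):], p.canary)
	p.off += need
	return p.buf[start : start+len(secret)], nil
}

func main() {
	p, err := newPool(1 << 20) // one 1 MB locked region for all secrets
	if err != nil {
		panic(err)
	}
	s, _ := p.place([]byte("my-secret-token"))
	_ = s // thousands of secrets now share a single locked allocation
}
```

With such a scheme, 1 million 64-byte secrets (plus 16 bytes of canary each) would fit into roughly 80 MB of locked memory instead of ~8 GB.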
This proposal sounds similar to what is discussed in #124, but the main difference is that #124 aims to reduce the additional data (i.e. the canaries), while this proposal is about reducing the number of locked memory pages.