Max memory limit to protect from out of memory #32
I agree, that would be super helpful! Since the cache is generic over keys and values, my first thought is that the cache could accept a function in its constructor that takes references to a key and a value and returns their combined size in bytes:

```rust
let limit = 1 << 16; // 64 KiB (note: in Rust, `2^16` would be XOR, not exponentiation)
let mut cache: LruCache<isize, &str> = LruCache::with_max_memory(limit, |&k, &v| {
    // Calculate size of key and value.
    size
});
```

What do you think?
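As a sketch of how such a size-bounded cache could work, assuming a hypothetical `CostLru` type and `with_max_memory` constructor (this is not the crate's API, just the idea above written out with std containers): eviction pops the oldest entries until the accumulated cost drops back under the limit.

```rust
use std::collections::{HashMap, VecDeque};

// Hypothetical sketch: a cache bounded by total "size" rather than entry count.
// `cost` is the user-supplied function from the proposal above.
struct CostLru<K, V> {
    map: HashMap<K, V>,
    order: VecDeque<K>, // front = oldest insertion
    used: usize,
    max_bytes: usize,
    cost: fn(&K, &V) -> usize,
}

impl<K: std::hash::Hash + Eq + Clone, V> CostLru<K, V> {
    fn with_max_memory(max_bytes: usize, cost: fn(&K, &V) -> usize) -> Self {
        CostLru { map: HashMap::new(), order: VecDeque::new(), used: 0, max_bytes, cost }
    }

    fn put(&mut self, k: K, v: V) {
        self.used += (self.cost)(&k, &v);
        self.order.push_back(k.clone());
        self.map.insert(k, v);
        // Evict oldest entries until we are back under the budget.
        while self.used > self.max_bytes {
            match self.order.pop_front() {
                Some(old) => {
                    if let Some(val) = self.map.remove(&old) {
                        self.used -= (self.cost)(&old, &val);
                    }
                }
                None => break,
            }
        }
    }

    fn len(&self) -> usize {
        self.map.len()
    }
}

fn main() {
    // Budget of 10 "bytes", cost = value length only (purely illustrative).
    let mut cache = CostLru::with_max_memory(10, |_k: &i32, v: &&str| v.len());
    cache.put(1, "aaaa");
    cache.put(2, "bbbb");
    cache.put(3, "cccc"); // total cost 12 > 10: key 1 is evicted
    assert_eq!(cache.len(), 2);
}
```

For brevity the sketch ignores recency updates on `get` and re-insertion of an existing key, both of which a real implementation would have to handle.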
That’s a good idea. It could take either a maximum memory size or a number of entries (capacity).
Any plans to prioritize this work?
Hey @rohitjoshi! Sorry, I don't have a clear idea of how we can implement this, so I'm not sure how long it will take, and I don't have the time at the moment to devote the amount of attention it will likely require.
We will probably need this as well, and if this functionality is desired here, I am willing to implement it. I propose the following API:

By the way, I don't think the crate can reasonably calculate the size of elements automatically, because you can have all kinds of indirection to heap-allocated memory. This becomes even harder when two elements have shared ownership of some data (e.g. via Rc).
@webmaster128 I agree that it would be extremely difficult to determine, in a general way, the cost of elements inserted into the cache, so it would be preferable to expose an API based on "points" that allows the caller to set the cost of an element explicitly. I don't think I'll have any time to implement this in the near future though, but would be more than happy to review any pull requests that do! :)
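A minimal sketch of the points-based idea, assuming a hypothetical `PointsLru` type with a `put_with_cost` method (not part of the crate): each entry stores its caller-supplied cost, and eviction continues until the summed points fit within the budget.

```rust
use std::collections::{HashMap, VecDeque};

// Hypothetical sketch: the caller supplies the cost explicitly at insert
// time instead of through a constructor callback.
struct PointsLru<K, V> {
    map: HashMap<K, (V, usize)>, // value plus its cost in points
    order: VecDeque<K>,          // front = oldest insertion
    used: usize,
    budget: usize,
}

impl<K: std::hash::Hash + Eq + Clone, V> PointsLru<K, V> {
    fn new(budget: usize) -> Self {
        PointsLru { map: HashMap::new(), order: VecDeque::new(), used: 0, budget }
    }

    fn put_with_cost(&mut self, k: K, v: V, cost: usize) {
        self.used += cost;
        self.order.push_back(k.clone());
        self.map.insert(k, (v, cost));
        // Evict oldest entries until total points fit the budget again.
        while self.used > self.budget {
            match self.order.pop_front() {
                Some(old) => {
                    if let Some((_, c)) = self.map.remove(&old) {
                        self.used -= c;
                    }
                }
                None => break,
            }
        }
    }

    fn contains(&self, k: &K) -> bool {
        self.map.contains_key(k)
    }
}

fn main() {
    let mut cache: PointsLru<&str, u32> = PointsLru::new(100);
    cache.put_with_cost("a", 1, 60);
    cache.put_with_cost("b", 2, 60); // 120 points > 100: "a" is evicted
    assert!(cache.contains(&"b"));
    assert!(!cache.contains(&"a"));
}
```

The advantage over a constructor callback is that the cost can depend on context the cache cannot see (e.g. shared ownership via `Rc`, where the caller may decide to charge the data only once).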
An alternative I am considering is to enforce a dynamic cache size based on available memory, driven by an external trigger. E.g. when available memory drops below 10%, set that as the cache size and force eviction for new elements. This would be easier to achieve. Another alternative: as elements are added, increment a running total by the length of the key and value.
Is it possible to have a dynamic capacity? E.g. let's say the initial capacity is 100, but when len() is 75 I want to freeze() any additional growth while still allowing new elements to be inserted and old ones evicted. Alternatively we could resize(), but that would be an expensive operation for latency-sensitive applications.
Hi @rohitjoshi! Currently, the resize method is:

```rust
pub fn resize(&mut self, cap: usize) {
    // return early if capacity doesn't change
    if cap == self.cap {
        return;
    }
    while self.map.len() > cap {
        self.remove_last();
    }
    self.map.shrink_to_fit();
    self.cap = cap;
}
```

If I understand your question correctly, it sounds like you want an option to skip calling `shrink_to_fit`?
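To illustrate why skipping `shrink_to_fit` would make resize cheap, here is a small std-only demonstration (the helper name `shrink_demo` is illustrative): removing entries leaves the map's allocation untouched, and only `shrink_to_fit` performs the potentially expensive reallocation.

```rust
use std::collections::HashMap;

// Removals do not shrink a HashMap's allocation; only shrink_to_fit does,
// by reallocating and rehashing. Skipping it keeps resize cheap at the
// price of unreclaimed memory.
fn shrink_demo() -> (usize, usize) {
    let mut map: HashMap<u32, u32> = HashMap::with_capacity(1024);
    for i in 0..1024 {
        map.insert(i, i);
    }
    // Evict down to 75 entries, as in the freeze scenario above.
    for k in 75..1024 {
        map.remove(&k);
    }
    let before = map.capacity(); // unchanged by the removals
    map.shrink_to_fit();         // the potentially expensive reallocation
    (before, map.capacity())
}

fn main() {
    let (before, after) = shrink_demo();
    println!("capacity before shrink: {before}, after: {after}");
    assert!(after < before);
}
```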
@jeromefroe that's correct.
I have worked around this by getting the current len() and resizing to that capacity, which solved my problem. I am closing this for now.
Hello,

Is there any way to restrict the maximum memory usage of the LRU? E.g. I am expecting 100M key/value pairs, but the value length varies and ends up killing the process when it exceeds available memory.

Is there any way to enforce a configurable maximum memory usage even when the LRU capacity is not filled?

Maybe an allocator counter: https://doc.rust-lang.org/beta/std/alloc/struct.System.html
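The linked `System` allocator can indeed be wrapped to keep a byte counter, along the lines of the example in the std docs. This sketches the counting side only; wiring the counter into the cache's eviction policy is left open.

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// Running total of live heap bytes, maintained by the wrapper below.
static ALLOCATED: AtomicUsize = AtomicUsize::new(0);

struct Counting;

unsafe impl GlobalAlloc for Counting {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        let p = unsafe { System.alloc(layout) };
        if !p.is_null() {
            ALLOCATED.fetch_add(layout.size(), Ordering::Relaxed);
        }
        p
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        unsafe { System.dealloc(ptr, layout) };
        ALLOCATED.fetch_sub(layout.size(), Ordering::Relaxed);
    }
}

#[global_allocator]
static A: Counting = Counting;

fn main() {
    let before = ALLOCATED.load(Ordering::Relaxed);
    let buf = vec![0u8; 1 << 20]; // allocate 1 MiB on the heap
    let after = ALLOCATED.load(Ordering::Relaxed);
    assert!(after - before >= (1 << 20));
    drop(buf);
}
```

An external trigger (as suggested earlier in the thread) could poll `ALLOCATED` and force evictions when it approaches a threshold, without the cache itself having to know the size of its elements.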