Implement weighted LRU cache semantics #24
Conversation
cc @webmaster128
Cool stuff. My main question is how this changes when a second limit is introduced.
fn clone(&self) -> Self {
        Self(self.0.clone())
    }
}
How is this different from the default implementation of Clone?
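One common answer to this question: `#[derive(Clone)]` adds a `Clone` bound on every type parameter, while a hand-written impl on a newtype wrapping an `Rc` can drop that bound, since cloning an `Rc` never clones its contents. The sketch below uses a hypothetical stand-in for the crate's `Key` type to illustrate the difference; it is not the crate's actual definition.

```rust
use std::rc::Rc;

// Hypothetical stand-in for the newtype under review: it wraps an Rc,
// which is cheap to clone regardless of whether K itself is Clone.
struct Key<K>(Rc<K>);

// Manual impl: no `K: Clone` bound is required.
// `#[derive(Clone)]` would instead emit `impl<K: Clone> Clone for Key<K>`.
impl<K> Clone for Key<K> {
    fn clone(&self) -> Self {
        Self(self.0.clone())
    }
}

// A type that is deliberately not Clone.
struct NotClone(u32);

fn main() {
    let k = Key(Rc::new(NotClone(42)));
    let k2 = k.clone(); // compiles even though NotClone is not Clone
    assert_eq!(Rc::strong_count(&k.0), 2);
    assert_eq!((*k2.0).0, 42);
}
```

If `Key` only ever wraps an `Rc`, the manual impl is strictly more permissive than the derived one; otherwise the two are equivalent.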
@@ -411,12 +417,14 @@ impl<Q: ?Sized, K: Borrow<Q>> Borrow<KeyRef<Q>> for Key<K> {
struct CLruNode<K, V> {
    key: Key<K>,
    value: V,
    weight: usize,
Suggested change:
-    weight: usize,
+    weight: NonZeroUsize,
}
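The `NonZeroUsize` suggestion buys more than rejecting zero weights at the type level: `NonZeroUsize` has a niche (the forbidden value 0), so wrapping it, or a struct containing it, in `Option` costs no extra space. A quick self-contained check:

```rust
use std::mem::size_of;
use std::num::NonZeroUsize;

fn main() {
    // NonZeroUsize is the same size as usize...
    assert_eq!(size_of::<NonZeroUsize>(), size_of::<usize>());
    // ...and Option<NonZeroUsize> reuses the 0 bit pattern for None,
    // so it is also the same size as usize.
    assert_eq!(size_of::<Option<NonZeroUsize>>(), size_of::<usize>());

    // With a plain usize, Option needs a separate discriminant.
    assert!(size_of::<Option<usize>>() > size_of::<usize>());

    // A weight of zero is unrepresentable by construction.
    assert!(NonZeroUsize::new(0).is_none());
    assert_eq!(NonZeroUsize::new(3).unwrap().get(), 3);
}
```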
/// An LRU cache with constant time operations.
pub struct CLruCache<K, V, S = RandomState> {
    lookup: HashMap<Key<K>, usize, S>,
    storage: FixedSizeList<CLruNode<K, V>>,
    weight: usize,
👍 Curious to see how the API evolves when max_weight comes in.
So I am not sure I want to add another limit. The way I think it should work is that the capacity is the max_weight you are talking about.
We might want to differentiate the concept of pre-allocation and actual maximum capacity of the cache though.
It would require a new constructor method like with_capacity_and_pre_allocated_nodes(), but with a better name obviously. That change, if possible, should probably be added first in a separate PR. I will think about it.
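The split being discussed can be sketched roughly as follows. All names here are invented for illustration (the comment itself says the real constructor name is still open), and the node type is a placeholder:

```rust
// Hypothetical sketch of separating the two notions: `capacity` is the
// cache's hard limit, while `reserve` only controls how many node slots
// are allocated up front.
struct Storage {
    capacity: usize,
    nodes: Vec<u64>, // placeholder node type
}

impl Storage {
    // Coupled behavior: the limit doubles as the pre-allocation size.
    fn with_capacity(capacity: usize) -> Self {
        Self { capacity, nodes: Vec::with_capacity(capacity) }
    }

    // Proposed split: still capped at `capacity`, but only `reserve`
    // slots are allocated eagerly; the rest would grow on demand.
    fn with_capacity_and_reserve(capacity: usize, reserve: usize) -> Self {
        Self { capacity, nodes: Vec::with_capacity(reserve.min(capacity)) }
    }
}

fn main() {
    let s = Storage::with_capacity_and_reserve(1_000_000, 20);
    assert_eq!(s.capacity, 1_000_000);
    // Vec guarantees at least the requested capacity.
    assert!(s.nodes.capacity() >= 20);
}
```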
I'm happy if only a single weight limit is needed.
"We might want to differentiate the concept of pre-allocation and actual maximum capacity of the cache though."
If that works, great. My impression was that the codebase heavily depends on the knowledge of element counts in multiple places.
I have successfully split the notion of capacity and preallocated memory in #31
    weight: NonZeroUsize,
) -> Result<Option<V>, ()> {
    let weight = weight.get();
    if weight > self.capacity() {
If capacity serves as an element-count limit and a weight limit at the same time, there are cases where we reserve way too many elements. E.g. if an application uses memory consumption in bytes as weight and wants to limit weight to 2 GB at an average element size of 100 MB, 2 billion elements are reserved instead of 20.
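The mismatch described above is easy to make concrete. This is just the comment's arithmetic spelled out (binary units assumed for the 2 GB / 100 MB figures):

```rust
fn main() {
    // Weight budget and average entry weight from the example above.
    let max_weight: u64 = 2 * 1024 * 1024 * 1024; // 2 GiB
    let avg_entry_weight: u64 = 100 * 1024 * 1024; // ~100 MiB per entry

    // Only ~20 entries can ever coexist under that budget...
    let live_entries = max_weight / avg_entry_weight;
    assert_eq!(live_entries, 20);

    // ...yet if capacity == max_weight, over 2 billion node slots
    // would be reserved up front.
    assert_eq!(max_weight, 2_147_483_648);
}
```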
fn test_size_of_node() {
    assert_eq!(std::mem::size_of::<CLruNode<String, usize>>(), 24);
    assert_eq!(std::mem::size_of::<Option<CLruNode<String, usize>>>(), 24);
}
That passes? Interesting. However, we probably do not want to make assumptions like this, as they will fail on non-64-bit platforms and whenever the memory layout of structs changes.
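The literal 24 is indeed target-specific, but the test is exercising two separable facts, sketched here with a stand-in node type (a hypothetical reconstruction: it assumes, consistent with the diff above, that the node's key ultimately wraps an `Rc`):

```rust
use std::mem::size_of;
use std::num::NonZeroUsize;
use std::rc::Rc;

// Hypothetical stand-in for the node under review.
struct Node {
    key: Rc<String>,       // one pointer-sized word
    value: usize,          // one word
    weight: NonZeroUsize,  // one word
}

fn main() {
    // Portable form of the size claim: three words, whatever a word is.
    assert_eq!(size_of::<Node>(), 3 * size_of::<usize>());

    // Rc's non-null pointer gives Option<Node> a niche, so it needs no
    // extra discriminant. (rustc performs this optimization today, but
    // it is not a documented layout guarantee for arbitrary structs.)
    assert_eq!(size_of::<Option<Node>>(), size_of::<Node>());
}
```

Asserting `3 * size_of::<usize>()` and the `Option` equality, rather than the literal 24, keeps the intent of the test without hard-coding a 64-bit layout.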
Hello @marmeladema, are you planning to merge this? I've noticed tests for the weights functionality are missing. I can write them, as well as work on whatever is needed to finish this.
Closing in favor of #32
See #11 for more information