
Implement weighted lru cache semantic #24

Closed · wants to merge 1 commit

Conversation

marmeladema
Owner

See #11 for more information

@marmeladema
Owner Author

cc @webmaster128
This is a first prototype. It is not finished yet; I still need to add tests, documentation, etc.

Contributor

@webmaster128 left a comment


Cool stuff. My main question is how this changes when a second limit is introduced.

fn clone(&self) -> Self {
Self(self.0.clone())
}
}
Contributor


How is this different from the default implementation of Clone?
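For a single-field tuple struct, `#[derive(Clone)]` expands to exactly this field-by-field clone, so the derive would suffice. A minimal sketch, using a hypothetical `Key` newtype standing in for the wrapper in the diff (not the crate's actual type):

```rust
// Hypothetical newtype standing in for the wrapper in the diff.
#[derive(Debug, PartialEq)]
struct Key<K>(K);

// Manual impl, equivalent to what #[derive(Clone)] would generate:
// the derive also clones field by field and adds the same `K: Clone` bound.
impl<K: Clone> Clone for Key<K> {
    fn clone(&self) -> Self {
        Self(self.0.clone())
    }
}

fn main() {
    let original = Key(String::from("node"));
    let copy = original.clone();
    assert_eq!(original, copy);
}
```

A manual impl is typically only worthwhile when the derived `K: Clone` bound is too strict (e.g. when a type parameter only appears behind `PhantomData`), which does not apply to a plain newtype like this.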

@@ -411,12 +417,14 @@ impl<Q: ?Sized, K: Borrow<Q>> Borrow<KeyRef<Q>> for Key<K> {
struct CLruNode<K, V> {
key: Key<K>,
value: V,
weight: usize,
Contributor


Suggested change
weight: usize,
weight: NonZeroUsize,

}
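Beyond encoding the invariant that every entry weighs at least 1, `NonZeroUsize` gives the compiler a niche: the standard library guarantees that `Option<NonZeroUsize>` has the same size as `usize`. A quick standalone check:

```rust
use std::mem::size_of;
use std::num::NonZeroUsize;

fn main() {
    // The zero bit pattern is free to represent `None`, so the Option
    // wrapper adds no space; std documents this size equality.
    assert_eq!(size_of::<Option<NonZeroUsize>>(), size_of::<usize>());

    // `new` returns None for zero, enforcing the weight >= 1 invariant.
    assert!(NonZeroUsize::new(0).is_none());
    assert_eq!(NonZeroUsize::new(5).unwrap().get(), 5);
}
```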

/// An LRU cache with constant time operations.
pub struct CLruCache<K, V, S = RandomState> {
lookup: HashMap<Key<K>, usize, S>,
storage: FixedSizeList<CLruNode<K, V>>,
weight: usize,
Contributor


👍

Curious to see how API evolves when max_weight comes in

Owner Author


So I am not sure I want to add another limit. The way I think it should work is that the capacity is the max_weight you are talking about.

We might want to differentiate the concept of pre-allocation and actual maximum capacity of the cache though.
It would require a new constructor method like with_capacity_and_pre_allocated_nodes() (with a better name, obviously). That change, if possible, should probably land first in a separate PR. I will think about it.
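One way the split could look, as a hypothetical sketch (all names and fields here are placeholders, not the crate's actual API): capacity stays the hard limit on total weight, while a separate reserve argument only controls the up-front allocation.

```rust
use std::collections::HashMap;
use std::hash::Hash;

/// Hypothetical sketch only; names are placeholders.
#[allow(dead_code)]
struct WeightedCache<K, V> {
    lookup: HashMap<K, usize>,
    storage: Vec<Option<(K, V)>>,
    /// Hard limit on the total weight of all entries.
    capacity: usize,
}

impl<K: Hash + Eq, V> WeightedCache<K, V> {
    /// `capacity` bounds total weight; `reserved` only sizes the
    /// initial allocation (stand-in for the better-named constructor).
    fn with_capacity_and_reserve(capacity: usize, reserved: usize) -> Self {
        Self {
            lookup: HashMap::with_capacity(reserved),
            storage: Vec::with_capacity(reserved),
            capacity,
        }
    }
}

fn main() {
    // Weight limit of 2 GiB, but only 20 slots allocated up front.
    let cache =
        WeightedCache::<String, Vec<u8>>::with_capacity_and_reserve(2 << 30, 20);
    assert_eq!(cache.capacity, 2 << 30);
    assert!(cache.storage.capacity() >= 20);
}
```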

Contributor


I'm happy if only a single weight limit is needed.

We might want to differentiate the concept of pre-allocation and actual maximum capacity of the cache though.

If that works, great. My impression was that the codebase heavily depends on the knowledge of element counts in multiple places.

Owner Author


I have successfully split the notion of capacity and preallocated memory in #31

weight: NonZeroUsize,
) -> Result<Option<V>, ()> {
let weight = weight.get();
if weight > self.capacity() {
Contributor


If capacity serves as an element count limit and a weight limit at the same time, there are cases where we reserve way too many elements. E.g. if an application uses memory consumption in bytes as weight and wants to limit weight to 2 GB at an average element size of 100 MB, about 2 billion elements are reserved instead of 20.
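The arithmetic behind that example, spelled out as a quick standalone check (not crate code):

```rust
fn main() {
    let max_weight: u64 = 2 * 1024 * 1024 * 1024; // 2 GiB weight limit
    let avg_element: u64 = 100 * 1024 * 1024;     // 100 MiB per entry

    // Entries that actually fit under the weight limit:
    assert_eq!(max_weight / avg_element, 20);

    // Slots reserved if the same number doubles as an element-count
    // capacity: one per unit of weight, i.e. over 2 billion.
    assert_eq!(max_weight, 2_147_483_648);
}
```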

fn test_size_of_node() {
assert_eq!(std::mem::size_of::<CLruNode<String, usize>>(), 24);
assert_eq!(std::mem::size_of::<Option<CLruNode<String, usize>>>(), 24);
}
Contributor


That passes? Interesting. However, we probably do not want to make assumptions like this, as they will fail on non-64-bit platforms and whenever the memory layout of the structs changes.
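A more portable phrasing would assert the property actually being relied on, that the `Option` wrapper costs no extra space thanks to niche optimization, rather than an absolute byte count. A sketch under the assumption that the node looks roughly like the diff above (this size equality holds with today's rustc, though the language does not strictly guarantee it for arbitrary structs):

```rust
use std::mem::size_of;
use std::num::NonZeroUsize;

// Approximation of the node from the diff; not the crate's exact type.
#[allow(dead_code)]
struct CLruNode<K, V> {
    key: K,
    value: V,
    weight: NonZeroUsize,
}

fn main() {
    // Holds on any pointer width: both `String` and `NonZeroUsize`
    // provide a niche, so `Option` reuses it instead of adding a tag.
    assert_eq!(
        size_of::<CLruNode<String, usize>>(),
        size_of::<Option<CLruNode<String, usize>>>()
    );
}
```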

@maurolacy
Contributor

maurolacy commented Jan 22, 2021

Hello @marmeladema, are you planning to merge this?

I've noticed that tests for the weights functionality are missing. I can write them, as well as work on whatever else is needed to finish this.

@marmeladema
Owner Author

Closing in favor of #32

@marmeladema deleted the weighted-lru branch on February 21, 2021 at 11:35