Allow limiting by memory limit instead of item count #11
Comments
I am definitely not against it, but as always, it depends on the trade-offs. I like the API you proposed.
Sounds good, thanks. I can write a few doc comments for the API functions in the next few days to explain the user's view of things. I like the renaming cost -> weight, as it describes a little better that this is carried around for the lifetime of the cache entry, not a one-time thing.
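For illustration, a minimal sketch of how such doc comments could describe the weight semantics. Only `put` and `put_with_weight` are names that appear in this thread; the `WeightedCache` trait and the exact signatures are hypothetical, not this crate's API:

```rust
/// Hypothetical surface of a weight-limited LRU cache; only `put` and
/// `put_with_weight` are names taken from this discussion.
pub trait WeightedCache<K, V> {
    /// Inserts `value` under `key` with an explicit `weight`.
    ///
    /// The weight is not a one-time insertion cost: it is carried for the
    /// whole lifetime of the entry and counts against the cache's weight
    /// limit until the entry is evicted, replaced, or removed.
    /// Least-recently-used entries are evicted until the new entry fits.
    fn put_with_weight(&mut self, key: K, value: V, weight: usize);

    /// Equivalent to `put_with_weight(key, value, 1)`, so a cache used
    /// only through `put` behaves like a plain entry-count-limited LRU.
    fn put(&mut self, key: K, value: V);
}
```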
Looking into this a bit more, I think there are two core changes that the implementation would need:
I don't think it's really related to the current work.
The current linked list implementation is already doubly-linked and supports constant-time remove. I don't really see a value in using `LinkedList`. Having said that, I think adding support for
One thing that we would have to be careful of is the fact, as you noted in the PR, that
The last thing I am unsure about is the type of the
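To make the constant-time remove concrete, here is a stripped-down sketch (not this crate's implementation) of one common way to get that combination: nodes live in a `Vec`, the links are indices, and a `HashMap` index table points from key to slot, so removing an arbitrary entry is one hash lookup plus one O(1) unlink:

```rust
use std::collections::HashMap;
use std::hash::Hash;

struct Node<K, V> {
    key: K, // kept so evicting from the tail can also remove the map entry
    value: V,
    weight: usize,
    prev: Option<usize>,
    next: Option<usize>,
}

struct IndexedList<K, V> {
    nodes: Vec<Node<K, V>>,
    index: HashMap<K, usize>, // key -> slot in `nodes`
    head: Option<usize>,      // most recently used
    tail: Option<usize>,      // least recently used
}

impl<K: Hash + Eq, V> IndexedList<K, V> {
    /// Detach the node in slot `idx` from the recency list in O(1).
    /// (A real cache would also recycle the slot through a free list.)
    fn unlink(&mut self, idx: usize) {
        let (prev, next) = (self.nodes[idx].prev, self.nodes[idx].next);
        match prev {
            Some(p) => self.nodes[p].next = next,
            None => self.head = next,
        }
        match next {
            Some(n) => self.nodes[n].prev = prev,
            None => self.tail = prev,
        }
        self.nodes[idx].prev = None;
        self.nodes[idx].next = None;
    }

    /// Remove an entry by key: one hash lookup plus one O(1) unlink.
    /// Returns the entry's weight so the caller can shrink the total.
    fn remove(&mut self, key: &K) -> Option<usize> {
        let idx = self.index.remove(key)?;
        self.unlink(idx);
        Some(self.nodes[idx].weight)
    }
}
```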
After thinking about this a little more, I also don't think the standard library's `LinkedList` would add the index table that we have here and that is required. The only property of it we might want to have is the variable length. I think the core difference between my original proposal and your proposal is the limits. There are two options:
I think both ways can work. The first one feels more elegant to me, but the second one probably makes some of the implementation more performant. Some of your open questions can probably only be answered after choosing one route or the other.
Since you have the worst-case scenario where the cache is filled with N elements of weight 1 and you need to remove all of them in order to put in an element of weight N, I don't think O(1) is achievable.
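A toy model of the eviction loop behind this argument (the names and the `VecDeque`-of-weights representation are illustrative only): admitting one entry may pop up to N existing ones, so a single weighted insert costs time linear in the number of evicted entries even though each individual pop stays O(1):

```rust
use std::collections::VecDeque;

/// Toy eviction loop: `lru_weights` holds entry weights in recency order
/// (front = least recently used). Entries are popped until an incoming
/// entry of weight `needed` fits under `max_weight`.
fn make_room(
    lru_weights: &mut VecDeque<usize>,
    used_weight: &mut usize,
    max_weight: usize,
    needed: usize,
) {
    while *used_weight + needed > max_weight {
        match lru_weights.pop_front() {
            Some(w) => *used_weight -= w,
            None => break, // cache is empty; nothing more to evict
        }
    }
}

fn main() {
    // The worst case from the comment above: the cache is full of N
    // entries of weight 1 and the incoming entry needs the whole budget,
    // so all N entries have to be popped before it can be stored.
    let max_weight = 4;
    let mut lru = VecDeque::from(vec![1, 1, 1, 1]);
    let mut used: usize = 4;
    make_room(&mut lru, &mut used, max_weight, 4);
    assert!(lru.is_empty());
    assert_eq!(used, 0);
}
```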
Nice catch, yeah, `put` and `put_with_weight` have different worst-case characteristics then.
I kind of think we will have to do this nonetheless. There is the case of someone calling
Right. I think it is reasonable to skip the put if it does not fit, just as in the zero-capacity case. It is like pouring water into a bucket: you can do that as much as you want, but it can happen that the amount of water inside does not increase by doing so. Configuring the cache in a way that makes sense for the use case and produces cache hits needs to be done by the application anyway. I don't have a strong opinion here. Both APIs seem solid to me.
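A tiny sketch of the "skip it if it can never fit" behaviour discussed here, with the zero-capacity case falling out of the same check (all names are hypothetical):

```rust
/// Admission check: an entry whose weight exceeds the whole budget can
/// never fit, no matter how many other entries are evicted, so the put is
/// skipped entirely, just like a put into a zero-capacity cache.
fn admits(max_weight: usize, entry_weight: usize) -> bool {
    entry_weight <= max_weight
}

fn main() {
    assert!(!admits(0, 1)); // zero-capacity cache: nothing ever fits
    assert!(admits(8, 3));  // fits, possibly after evicting older entries
    assert!(!admits(8, 9)); // heavier than the whole budget: skip the put
}
```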
Done in #32
Same issue as described here: jeromefroe/lru-rs#32. Would you be interested in implementing the solution proposed there?
If such a change is desired, that means the total number of elements is not known anymore. So this would require some changes and probably some trade-offs. Is this something you would like to see happen in this crate?
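For a caller's-eye view of limiting by memory rather than item count, a sketch that weights each entry by the bytes it keeps alive, reusing the hypothetical `WeightedCache` trait from the sketch further up (none of these names are this crate's API):

```rust
/// Caches a downloaded body weighted by its heap size in bytes, so a
/// cache configured with a byte budget reads as "at most N bytes of
/// cached bodies" instead of "at most N entries".
fn cache_body<C: WeightedCache<String, Vec<u8>>>(cache: &mut C, url: String, body: Vec<u8>) {
    let weight = body.len(); // approximate memory footprint of this entry
    cache.put_with_weight(url, body, weight);
}
```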