I'll try my best to explain the scenario. We have a cache that stores files as `Bytes` values, with the weigher set to `Bytes::len()` and the max capacity set to some number of bytes (e.g., 1 GB).
When populating the cache, whether with `try_get_with`, `insert`, or similar, there is a point in time where new memory is allocated inside the init future (or its equivalent).
So what ends up happening is that the total in-memory size of the cache is roughly:
max_capacity + memory of init futures
When those init futures resolve, eviction kicks in and memory goes back down. But until then, the pending init futures can hold a lot of extra memory.
What I think might work: within the init futures, we know the size the value will have in the cache before we allocate it. If we could tell the cache something like "evict up to `n` bytes of capacity", that would let us keep memory under control.
Something like:
cache.invalidate(capacity_num)
Let me know if that doesn't make sense, or if there are any alternatives.
Just to update this: I've managed to work around this by using a shared semaphore in the init futures. I still think it might be valuable to have a way to resize the cache dynamically, but I will close this ticket off.