
[QUESTION]: Why is Ristretto so slow and has such a small hit ratio? #363

Open
maypok86 opened this issue Nov 15, 2023 · 8 comments
Labels
kind/question Something requiring a response.

Comments

@maypok86

Question.

Hi, I am currently writing a similar package and tried to compare my implementation, https://github.com/maypok86/otter, with Ristretto. In the end I found that Ristretto loses very badly (about 5-7 times slower) and has a hit ratio of around zero on most traces. I can roughly understand the terrible loss in speed: even though Ristretto claims to be contention-free, it actually locks its shards very often, which ends up not giving it enough performance to compete with Otter.

I have attached benchmarks against the best cache libraries available at the moment.

goos: darwin
goarch: arm64
pkg: github.com/maypok86/benchmarks
BenchmarkCache/Zipf_Otter_reads=100%,writes=0%-10         	21643555	        46.95 ns/op	       1 B/op	       0 allocs/op
BenchmarkCache/Zipf_Theine_reads=100%,writes=0%-10        	12177036	        97.41 ns/op	       0 B/op	       0 allocs/op
BenchmarkCache/Zipf_Ristretto_reads=100%,writes=0%-10     	20823691	        54.00 ns/op	      16 B/op	       1 allocs/op
BenchmarkCache/Zipf_Otter_reads=75%,writes=25%-10         	17760416	        66.58 ns/op	       6 B/op	       0 allocs/op
BenchmarkCache/Zipf_Theine_reads=75%,writes=25%-10        	 2540319	       462.3 ns/op	       0 B/op	       0 allocs/op
BenchmarkCache/Zipf_Ristretto_reads=75%,writes=25%-10     	 2442692	       464.9 ns/op	      51 B/op	       2 allocs/op
BenchmarkCache/Zipf_Otter_reads=50%,writes=50%-10         	13647469	        78.17 ns/op	      11 B/op	       0 allocs/op
BenchmarkCache/Zipf_Theine_reads=50%,writes=50%-10        	 2298838	       537.8 ns/op	       0 B/op	       0 allocs/op
BenchmarkCache/Zipf_Ristretto_reads=50%,writes=50%-10     	 2603936	       444.1 ns/op	      72 B/op	       2 allocs/op
BenchmarkCache/Zipf_Otter_reads=25%,writes=75%-10         	10211748	       100.6 ns/op	      16 B/op	       0 allocs/op
BenchmarkCache/Zipf_Theine_reads=25%,writes=75%-10        	 2140611	       563.8 ns/op	       0 B/op	       0 allocs/op
BenchmarkCache/Zipf_Ristretto_reads=25%,writes=75%-10     	 2401822	       571.2 ns/op	     102 B/op	       3 allocs/op
BenchmarkCache/Zipf_Otter_reads=0%,writes=100%-10         	19658750	        57.54 ns/op	       0 B/op	       0 allocs/op
BenchmarkCache/Zipf_Theine_reads=0%,writes=100%-10        	 1958818	       656.7 ns/op	       0 B/op	       0 allocs/op
BenchmarkCache/Zipf_Ristretto_reads=0%,writes=100%-10     	 2606988	       450.6 ns/op	     124 B/op	       3 allocs/op
PASS
ok  	github.com/maypok86/benchmarks	171.538s
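For reference, here is a minimal sketch of how a read/write-mix benchmark like the one above could be structured. This is only an illustration against Ristretto (the real harness lives in github.com/maypok86/benchmarks and may differ); the capacity, NumCounters and Zipf parameters are assumptions.

package benchmarks

import (
	"math/rand"
	"strconv"
	"testing"

	"github.com/dgraph-io/ristretto"
)

// BenchmarkRistrettoZipfReads runs a 100%-read workload with Zipf-distributed
// keys against a Ristretto cache. The capacity and Zipf parameters below are
// illustrative, not the exact values behind the numbers above.
func BenchmarkRistrettoZipfReads(b *testing.B) {
	const capacity = 10_000

	cache, err := ristretto.NewCache(&ristretto.Config{
		NumCounters: capacity * 10, // ~10x the number of items, as the docs suggest
		MaxCost:     capacity,      // cost of 1 per entry, so MaxCost acts as an entry count
		BufferItems: 64,
	})
	if err != nil {
		b.Fatal(err)
	}

	// Pre-populate the cache so the read loop has something to hit.
	for i := 0; i < capacity; i++ {
		cache.Set(strconv.Itoa(i), i, 1)
	}
	cache.Wait() // Set is asynchronous; wait for the buffered writes to land

	b.ResetTimer()
	b.RunParallel(func(pb *testing.PB) {
		// Each goroutine gets its own Zipf source; rand.Zipf is not safe
		// for concurrent use.
		z := rand.NewZipf(rand.New(rand.NewSource(rand.Int63())), 1.01, 1, capacity-1)
		for pb.Next() {
			cache.Get(strconv.Itoa(int(z.Uint64())))
		}
	})
}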

But I would still like to hear an answer about the small hit ratio in Ristretto, because the README shows some numbers while in practice they are around 0. This was also asked here and remained unanswered 😢

I also attach the results of my hit ratio calculations; it seems several people and different benchmark implementations can't all be wrong... 😔
[attached image: hit ratio chart for the oltp trace]

@maypok86 added the kind/question label on Nov 15, 2023
@MUCZ

MUCZ commented May 14, 2024

In your graph, the tests are run with capacities under 2000. Judging from the code, I assume this results in NumCounters < 20000 and MaxCost < 2000 for the Ristretto cache instance, which is absurdly far from its normal settings. In the README they used

NumCounters: 1e7,     // number of keys to track frequency of (10M).
MaxCost:     1 << 30, // maximum cost of cache (1GB).

So I assume this might be the reason. I look forward to seeing your updated test results.
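For context, this is roughly what a cache created with the README settings quoted above looks like (a minimal sketch; the key, value and cost passed to Set are placeholders):

package main

import (
	"fmt"

	"github.com/dgraph-io/ristretto"
)

func main() {
	// The configuration quoted from the Ristretto README.
	cache, err := ristretto.NewCache(&ristretto.Config{
		NumCounters: 1e7,     // number of keys to track frequency of (10M)
		MaxCost:     1 << 30, // maximum cost of cache (1GB)
		BufferItems: 64,      // number of keys per Get buffer
	})
	if err != nil {
		panic(err)
	}

	// With this config MaxCost is treated as a rough memory budget, so the
	// cost passed to Set is meant to reflect the entry's size rather than
	// to count entries.
	cache.Set("key", "value", 5) // placeholder key/value; cost ~ size in bytes
	cache.Wait()                 // Set is asynchronous

	if v, ok := cache.Get("key"); ok {
		fmt.Println(v)
	}
}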

@maypok86
Author

I assume this results in NumCounters < 20000 and MaxCost < 2000 for the Ristretto cache instance, which is absurdly far from its normal settings.

Unfortunately, that is not how it works. MaxCost is the maximum number of entries stored in the cache (if the cost of each entry is 1). If you specify MaxCost = 1 << 30, then up to 1 << 30 = 1,073,741,824 entries can be stored in the cache. This would lead to huge memory usage and make the benchmarks unfair.
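To illustrate the objection above, here is a minimal sketch of what MaxCost = 1 << 30 means when every entry is inserted with a cost of 1 (the key and value are placeholders):

package main

import "github.com/dgraph-io/ristretto"

func main() {
	// If every Set uses a cost of 1, MaxCost is effectively a cap on the
	// number of entries. With the README value of 1 << 30 that would allow
	// up to 1,073,741,824 entries before anything is evicted, which is the
	// point being made above.
	cache, err := ristretto.NewCache(&ristretto.Config{
		NumCounters: 1e7,
		MaxCost:     1 << 30,
		BufferItems: 64,
	})
	if err != nil {
		panic(err)
	}

	cache.Set("key", "value", 1) // cost 1 per entry => entry-count semantics
	cache.Wait()
}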

@maypok86
Author

At the moment, everything indicates that the latest versions of Ristretto have very serious bugs that have not been fixed for a long time (the hit ratio is fine on v0.0.2).

@MUCZ

MUCZ commented May 20, 2024

Unfortunately, that is not how it works. MaxCost is the maximum number of entries stored in the cache (if the cost of each entry is 1). If you specify MaxCost = 1 << 30, then up to 1 << 30 = 1,073,741,824 entries can be stored in the cache. This would lead to huge memory usage and make the benchmarks unfair.

There's a thing in Ristretto called InternalCost, so even when you specify a cost of 1 in the code, the underlying cost for each item saved is much bigger than that. I don't think Ristretto has a config field for an 'exact' capacity.
In my case, where memory usage is not a limiting factor, Ristretto works well as long as a big enough MaxCost like 1 << 30 is used.

@maypok86
Author

the underlying cost for each item saved is much bigger than that.

Oh my God, I realized what happened and it's not good at all.

At some point Ristretto added the IgnoreInternalCost flag to its config. This broke all of its clients that specified cost = 1, and it even broke their own tests. I have not seen IgnoreInternalCost used in the vast majority of new clients, so they are broken as well... It's not obvious from the README either: the flag isn't described there, and you have to read the library code to figure it out.
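For anyone running into the same problem, here is a minimal sketch of what a cost = 1 setup appears to require on recent Ristretto versions (the capacity value is illustrative, not taken from the benchmark):

package main

import "github.com/dgraph-io/ristretto"

func main() {
	const capacity = 2000 // illustrative entry budget

	cache, err := ristretto.NewCache(&ristretto.Config{
		NumCounters: capacity * 10,
		MaxCost:     capacity,
		BufferItems: 64,
		// Without this flag, recent Ristretto versions add an internal
		// per-item cost on top of the cost passed to Set, so cost = 1 no
		// longer means "one entry" and the effective capacity shrinks.
		IgnoreInternalCost: true,
	})
	if err != nil {
		panic(err)
	}

	cache.Set("key", "value", 1) // with the flag set, cost 1 again counts entries
	cache.Wait()
}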

In my case where memory usage is not a limiting factor

Unfortunately, that is not a common scenario, and with such a setup you have no idea how large the cache actually is.

Yes, the hit ratio improves with this flag, but it still does not let Ristretto compete with the canonical W-TinyLFU (see the chart below).

[attached image: hit ratio chart for the P8 trace]

But the throughput has decreased significantly, and Ristretto now loses to ccache on some types of load.

goos: darwin
goarch: arm64
pkg: github.com/maypok86/benchmarks/throughput
BenchmarkCache/zipf_otter_reads=100%,writes=0%-8                207136872                5.118 ns/op     195404670 ops/s
BenchmarkCache/zipf_theine_reads=100%,writes=0%-8               10925358               107.3 ns/op         9323651 ops/s
BenchmarkCache/zipf_ristretto_reads=100%,writes=0%-8            39178822                30.47 ns/op       32814142 ops/s
BenchmarkCache/zipf_ccache_reads=100%,writes=0%-8               35809983                44.41 ns/op       22518921 ops/s
BenchmarkCache/zipf_gcache_reads=100%,writes=0%-8                4137457               301.5 ns/op         3316661 ops/s
BenchmarkCache/zipf_ttlcache_reads=100%,writes=0%-8              2556433               454.4 ns/op         2200669 ops/s
BenchmarkCache/zipf_golang-lru_reads=100%,writes=0%-8            5192370               238.2 ns/op         4197925 ops/s
BenchmarkCache/zipf_otter_reads=75%,writes=25%-8                152412055                8.296 ns/op     120545846 ops/s
BenchmarkCache/zipf_theine_reads=75%,writes=25%-8               10727758               115.2 ns/op         8681015 ops/s
BenchmarkCache/zipf_ristretto_reads=75%,writes=25%-8            19170259                63.38 ns/op       15778332 ops/s
BenchmarkCache/zipf_ccache_reads=75%,writes=25%-8               19926837                56.00 ns/op       17857662 ops/s
BenchmarkCache/zipf_gcache_reads=75%,writes=25%-8                4015016               302.1 ns/op         3309800 ops/s
BenchmarkCache/zipf_ttlcache_reads=75%,writes=25%-8              3016176               418.7 ns/op         2388181 ops/s
BenchmarkCache/zipf_golang-lru_reads=75%,writes=25%-8            4415922               256.2 ns/op         3903844 ops/s
BenchmarkCache/zipf_otter_reads=50%,writes=50%-8                100895930               11.46 ns/op       87245999 ops/s
BenchmarkCache/zipf_theine_reads=50%,writes=50%-8               10697534               110.6 ns/op         9044843 ops/s
BenchmarkCache/zipf_ristretto_reads=50%,writes=50%-8            11715498                89.41 ns/op       11184716 ops/s
BenchmarkCache/zipf_ccache_reads=50%,writes=50%-8               11967110               101.7 ns/op         9831291 ops/s
BenchmarkCache/zipf_gcache_reads=50%,writes=50%-8                3883863               307.1 ns/op         3256162 ops/s
BenchmarkCache/zipf_ttlcache_reads=50%,writes=50%-8              3125329               357.5 ns/op         2797596 ops/s
BenchmarkCache/zipf_golang-lru_reads=50%,writes=50%-8            4535643               275.1 ns/op         3635185 ops/s
BenchmarkCache/zipf_otter_reads=25%,writes=75%-8                53571258                25.38 ns/op       39397649 ops/s
BenchmarkCache/zipf_theine_reads=25%,writes=75%-8                9444111               122.7 ns/op         8149415 ops/s
BenchmarkCache/zipf_ristretto_reads=25%,writes=75%-8             7211848               186.4 ns/op         5365390 ops/s
BenchmarkCache/zipf_ccache_reads=25%,writes=75%-8                8722424               133.6 ns/op         7482404 ops/s
BenchmarkCache/zipf_gcache_reads=25%,writes=75%-8                3865218               310.9 ns/op         3216943 ops/s
BenchmarkCache/zipf_ttlcache_reads=25%,writes=75%-8              3390970               308.0 ns/op         3247153 ops/s
BenchmarkCache/zipf_golang-lru_reads=25%,writes=75%-8            4562470               267.2 ns/op         3742059 ops/s
BenchmarkCache/zipf_otter_reads=0%,writes=100%-8                 3237279               348.8 ns/op         2866629 ops/s
BenchmarkCache/zipf_theine_reads=0%,writes=100%-8                4302932               370.5 ns/op         2698702 ops/s
BenchmarkCache/zipf_ristretto_reads=0%,writes=100%-8             2302149               495.7 ns/op         2017283 ops/s
BenchmarkCache/zipf_ccache_reads=0%,writes=100%-8                2053848               640.9 ns/op         1560212 ops/s
BenchmarkCache/zipf_gcache_reads=0%,writes=100%-8                3781105               323.3 ns/op         3093007 ops/s
BenchmarkCache/zipf_ttlcache_reads=0%,writes=100%-8              4412658               275.7 ns/op         3626876 ops/s
BenchmarkCache/zipf_golang-lru_reads=0%,writes=100%-8            4986291               232.8 ns/op         4295432 ops/s
PASS
ok      github.com/maypok86/benchmarks/throughput       59.164s

I will try to update the README in Otter soon. Ideally, I should also come with a pull request to Ristretto, of course, but I doubt that will fix anything given the current level of support.

This issue has been stale for 60 days and will be closed automatically in 7 days. Comment to keep it open.

@github-actions github-actions bot added the Stale label Jul 19, 2024
@jbduncan

Still applicable.

@github-actions github-actions bot removed the Stale label Jul 20, 2024
@maypok86
Author

maypok86 commented Jul 30, 2024

I recalculated the benchmark results with the updated Ristretto configuration and also added gcache, ttlcache and golang-lru. The results look quite plausible. One caveat: Ristretto's real memory usage may be lower than the rest, since Ristretto uses a dirty hack and doesn't store keys. I will try to update the README in the next couple of days.

P.S. I'm very sorry for the long delay :(
