Add growable write buffer #66

Closed
maypok86 opened this issue Mar 8, 2024 · 1 comment · Fixed by #67
Labels
enhancement New feature or request

Comments

maypok86 (Owner) commented Mar 8, 2024

Clarification and motivation

Unfortunately, otter's memory consumption at small cache sizes is currently quite large compared to other caches. This is caused by the buffers otter uses: they are created too large, without knowing the characteristics of the workload in advance. Therefore, it is proposed to implement a growable write buffer that can grow from a minimum capacity up to a maximum capacity.
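For concreteness, here is a minimal sketch of what such a growable write buffer could look like: a mutex-protected ring buffer that starts at a minimum capacity and doubles on overflow until it reaches the maximum. The names and the doubling growth policy are illustrative assumptions, not the implementation that landed in #67.

```go
// Illustrative sketch only (not otter's actual implementation): a mutex-protected
// ring buffer that grows from a minimum capacity up to a maximum capacity.
package queue

import "sync"

type Growable[T any] struct {
	mu     sync.Mutex
	buf    []T
	head   int
	tail   int
	count  int
	maxCap int
}

func NewGrowable[T any](minCap, maxCap int) *Growable[T] {
	if minCap < 1 {
		minCap = 1
	}
	return &Growable[T]{buf: make([]T, minCap), maxCap: maxCap}
}

// TryPush appends an item, doubling the buffer (up to maxCap) when it is full.
// It reports false once the buffer is full at maximum capacity, letting the
// caller decide whether to block, drop the event, or apply the write inline.
func (g *Growable[T]) TryPush(item T) bool {
	g.mu.Lock()
	defer g.mu.Unlock()
	if g.count == len(g.buf) {
		if len(g.buf) >= g.maxCap {
			return false
		}
		g.grow()
	}
	g.buf[g.tail] = item
	g.tail = (g.tail + 1) % len(g.buf)
	g.count++
	return true
}

// grow allocates a larger backing slice (capped at maxCap) and re-linearizes the ring.
func (g *Growable[T]) grow() {
	newCap := len(g.buf) * 2
	if newCap > g.maxCap {
		newCap = g.maxCap
	}
	newBuf := make([]T, newCap)
	for i := 0; i < g.count; i++ {
		newBuf[i] = g.buf[(g.head+i)%len(g.buf)]
	}
	g.buf, g.head, g.tail = newBuf, 0, g.count
}

// TryPop removes and returns the oldest item, if any.
func (g *Growable[T]) TryPop() (T, bool) {
	g.mu.Lock()
	defer g.mu.Unlock()
	var zero T
	if g.count == 0 {
		return zero, false
	}
	item := g.buf[g.head]
	g.buf[g.head] = zero
	g.head = (g.head + 1) % len(g.buf)
	g.count--
	return item, true
}
```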

maypok86 added the enhancement (New feature or request) label Mar 8, 2024
maypok86 (Owner, Author) commented Mar 8, 2024

I tried to write a growable queue and here are the conclusions:

  1. A queue with mutexes can be written so that its speed is practically the same as that of channels, but I never managed to beat them, and it is unlikely to be possible: after all, channels use a lower-level API. (A rough channel-backed baseline is sketched right after this list for comparison.)
  2. I also tried to write a bounded version of the queue from this page. It was able to beat channels, but unfortunately it allocates memory, and once you add sync.Pool to it, its speed drops back to that of channels. :(
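
For reference, this is the kind of channel-backed buffer used as the baseline above; the non-blocking TryPush/TryPop shape and the names are assumptions for illustration, not otter's actual code.

```go
// Illustrative channel-backed buffer used as a baseline: a fixed-capacity
// buffered channel with non-blocking push and pop.
package queue

type ChanBuffer[T any] struct {
	ch chan T
}

func NewChanBuffer[T any](capacity int) *ChanBuffer[T] {
	return &ChanBuffer[T]{ch: make(chan T, capacity)}
}

// TryPush reports false instead of blocking when the channel is full.
func (b *ChanBuffer[T]) TryPush(item T) bool {
	select {
	case b.ch <- item:
		return true
	default:
		return false
	}
}

// TryPop reports false instead of blocking when the channel is empty.
func (b *ChanBuffer[T]) TryPop() (T, bool) {
	select {
	case item := <-b.ch:
		return item, true
	default:
		var zero T
		return zero, false
	}
}
```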

Also, from my observations, even with a load profile of 25% reads / 75% writes, the queue's speed does not particularly affect the speed of the whole cache, but with writes only it becomes the bottleneck.

And some benchmarks.

```
goos: darwin
goarch: arm64
pkg: github.com/maypok86/otter/internal/queue
BenchmarkChanProdConsWork100-10                  3815607               324.0 ns/op             0 B/op          0 allocs/op
BenchmarkBlockingProdConsWork100-10              3406512               354.1 ns/op             0 B/op          0 allocs/op
BenchmarkMPSCProdConsWork100-10                  5416998               220.2 ns/op            16 B/op          0 allocs/op
BenchmarkMPSCPoolProdConsWork100-10              3699946               319.6 ns/op             0 B/op          0 allocs/op
PASS
ok      github.com/maypok86/otter/internal/queue        6.260s
```
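
The benchmark code itself is not shown here, but the general shape of such a producer/consumer benchmark is roughly the following (reusing the ChanBuffer sketch above; the capacity and the amount of simulated work per operation are arbitrary assumptions):

```go
// Rough shape of a "ProdConsWork100" benchmark: parallel producers push into the
// buffer while a single consumer drains it, with ~100 iterations of busy work
// per operation to imitate a real workload.
package queue

import "testing"

const work = 100

func BenchmarkChanProdConsWork100(b *testing.B) {
	buf := NewChanBuffer[int](128) // capacity chosen arbitrarily for the sketch
	done := make(chan struct{})

	// Single consumer draining the buffer until the benchmark ends.
	go func() {
		for {
			select {
			case <-done:
				return
			default:
				buf.TryPop()
			}
		}
	}()

	b.RunParallel(func(pb *testing.PB) {
		sink := 0
		for pb.Next() {
			buf.TryPush(sink)
			for i := 0; i < work; i++ { // simulate per-operation work
				sink += i
			}
		}
		_ = sink
	})
	close(done)
}
```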

So I'll probably just take a queue with mutexes as a good enough solution. Maybe later, when I get bored, I will try to implement a lock-free version, but that will definitely be much later.

maypok86 added a commit that referenced this issue Mar 8, 2024
maypok86 mentioned this issue Mar 8, 2024
maypok86 added a commit that referenced this issue Mar 9, 2024
maypok86 added a commit that referenced this issue Mar 9, 2024
maypok86 added a commit that referenced this issue Mar 12, 2024