
tinycache

An in-memory KV cache in Go with HTTP and gRPC APIs. Supports LRU eviction and TTL expiration. Toy project, do NOT use in production.

Usage

See cmd/tinycache/main.go for how the server and client are used; it builds on the library code in server/http.go and server/grpc.go.
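
The sketch below is not the code in server/http.go; it is a self-contained, map-backed illustration of the kind of handler the HTTP server exposes, using only the standard library and the /cache/{bucket}/{key} routes documented under Client below.

package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
    "strings"
    "sync"
)

// store is a toy stand-in for the real cache: buckets of string key/value pairs.
type store struct {
    mu      sync.Mutex
    buckets map[string]map[string]string
}

func (s *store) handle(w http.ResponseWriter, r *http.Request) {
    // Expect paths of the form /cache/{bucket}/{key}.
    parts := strings.Split(strings.TrimPrefix(r.URL.Path, "/cache/"), "/")
    if len(parts) != 2 {
        http.Error(w, "want /cache/{bucket}/{key}", http.StatusBadRequest)
        return
    }
    bucket, key := parts[0], parts[1]

    s.mu.Lock()
    defer s.mu.Unlock()
    switch r.Method {
    case http.MethodPut: // set: the request body is the value
        body, _ := io.ReadAll(r.Body)
        if s.buckets[bucket] == nil {
            s.buckets[bucket] = map[string]string{}
        }
        s.buckets[bucket][key] = string(body)
    case http.MethodGet: // get: respond with the raw value
        v, ok := s.buckets[bucket][key]
        if !ok {
            http.NotFound(w, r)
            return
        }
        fmt.Fprint(w, v)
    case http.MethodDelete: // delete
        delete(s.buckets[bucket], key)
    default:
        http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
    }
}

func main() {
    s := &store{buckets: map[string]map[string]string{}}
    http.HandleFunc("/cache/", s.handle)
    log.Fatal(http.ListenAndServe(":8080", nil))
}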

Server

# Install to $GOPATH/bin
make install
# Run the HTTP server on localhost:8080
# View Prometheus metrics at http://localhost:8080/stats
tinycache server
# gRPC server
tinycache server --grpc

Client

curl

NOTE: Only works with the HTTP server.

# set
curl -X PUT http://localhost:8080/cache/b1/k1 -d "v1"
# set with ttl and policy
curl -X PUT "http://localhost:8080/cache/b1/k1?ttl=1s&policy=lru" -d "v1"

# get
curl -X GET http://localhost:8080/cache/b1/k1

# delete
curl -X DELETE http://localhost:8080/cache/b1/k1
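
The same calls can be made from Go using only the standard library; the endpoints below are the ones from the curl examples above.

package main

import (
    "fmt"
    "io"
    "net/http"
    "strings"
)

func main() {
    url := "http://localhost:8080/cache/b1/k1"

    // set: PUT with the value as the request body (ttl and policy go in the query string)
    req, _ := http.NewRequest(http.MethodPut, url+"?ttl=1s&policy=lru", strings.NewReader("v1"))
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    resp.Body.Close()

    // get: the response body is the raw value
    resp, err = http.Get(url)
    if err != nil {
        panic(err)
    }
    body, _ := io.ReadAll(resp.Body)
    resp.Body.Close()
    fmt.Println(string(body)) // v1

    // delete
    req, _ = http.NewRequest(http.MethodDelete, url, nil)
    resp, err = http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    resp.Body.Close()
}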

REPL

NOTE: Only works with the gRPC server right now.

tinycache client
TinyCache CLI (type 'help' for commands, 'exit' to quit)
Connected to localhost:8080
> set k1 v1
Usage: set <bucket> <key> <value> [ttl_ms]
> set b1 k1 v1
OK
> set b2 k2 v3
OK
> get b1 k1
v1
> get b2 k3
Error: rpc error: code = Unknown desc = key k3 not found
> del b1 k1
OK
> get b1 k1
Error: rpc error: code = Unknown desc = bucket b1 not found
> exit

Docker

make docker-build
make docker-run

Development

gRPC/Protobuf

To regenerate the pb files after changing the proto:

brew install protobuf
go install google.golang.org/protobuf/cmd/protoc-gen-go@latest
go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest
make proto

How it works

A single linked list tracks insertion order and recent usage. The same list is shared across the different eviction policies, so behavior can be strange when policies are mixed in one cache; ideally the eviction policy would be fixed for the entire cache rather than specified per operation.
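
As a rough illustration of the approach (not the repository's code), container/list can keep keys in usage order: Get moves the hit to the front and Set evicts from the back once the cache is full.

package main

import (
    "container/list"
    "fmt"
)

type entry struct {
    key, val string
}

// lru tracks recency with a single linked list: front = most recently used.
type lru struct {
    cap   int
    order *list.List
    items map[string]*list.Element // key -> element holding an *entry
}

func newLRU(cap int) *lru {
    return &lru{cap: cap, order: list.New(), items: map[string]*list.Element{}}
}

func (c *lru) Get(key string) (string, bool) {
    el, ok := c.items[key]
    if !ok {
        return "", false
    }
    c.order.MoveToFront(el) // mark as recently used
    return el.Value.(*entry).val, true
}

func (c *lru) Set(key, val string) {
    if el, ok := c.items[key]; ok {
        el.Value.(*entry).val = val
        c.order.MoveToFront(el)
        return
    }
    if c.order.Len() >= c.cap {
        // evict the least recently used entry from the back
        back := c.order.Back()
        c.order.Remove(back)
        delete(c.items, back.Value.(*entry).key)
    }
    c.items[key] = c.order.PushFront(&entry{key: key, val: val})
}

func main() {
    c := newLRU(2)
    c.Set("k1", "v1")
    c.Set("k2", "v2")
    c.Get("k1")       // k1 becomes most recently used
    c.Set("k3", "v3") // cache is full, so k2 is evicted
    _, ok := c.Get("k2")
    fmt.Println(ok) // false
}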

TODO

KV

  • copy the interface
  • in memory cache
    • bucket: is the max of 255 keys per bucket or for the entire cache? It should apply to the entire cache, otherwise there is no limit on the number of buckets.
    • eviction policy: should each operation be able to pass a different policy in its options?
    • ttl (lazy or run in a background sweeper); see the sketch after this list
    • test
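
For the lazy TTL option above, a minimal sketch (assumed names, not the repo's types): each entry stores an expiry time and Get checks it on access, so no background sweeper goroutine is needed.

package main

import (
    "fmt"
    "time"
)

type ttlEntry struct {
    val       string
    expiresAt time.Time // zero value means no TTL
}

type ttlCache struct {
    items map[string]ttlEntry
}

func (c *ttlCache) Set(key, val string, ttl time.Duration) {
    e := ttlEntry{val: val}
    if ttl > 0 {
        e.expiresAt = time.Now().Add(ttl)
    }
    c.items[key] = e
}

// Get expires entries lazily: the check happens on access rather than
// in a background goroutine that sweeps the whole cache.
func (c *ttlCache) Get(key string) (string, bool) {
    e, ok := c.items[key]
    if !ok {
        return "", false
    }
    if !e.expiresAt.IsZero() && time.Now().After(e.expiresAt) {
        delete(c.items, key)
        return "", false
    }
    return e.val, true
}

func main() {
    c := &ttlCache{items: map[string]ttlEntry{}}
    c.Set("k1", "v1", 10*time.Millisecond)
    time.Sleep(20 * time.Millisecond)
    _, ok := c.Get("k1")
    fmt.Println(ok) // false: expired and removed on access
}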

Server

  • grpc
  • http
  • metrics, using prometheus
  • client in the cli
  • redis protocol? (if I have time)

