This project is a fault-tolerant, distributed key-value store built in Go. It uses the Raft consensus algorithm to ensure data consistency and high availability across a cluster of nodes. The entire application is designed to be run in a containerized environment using Docker.
- Distributed Key-Value Store: Supports basic `PUT`, `GET`, and `DELETE` operations.
- Raft Consensus: Implements the Raft algorithm for leader election, log replication, and fault tolerance.
- High Availability: The cluster can tolerate the failure of up to `(N-1)/2` nodes in an N-node cluster.
- Automatic Peer Discovery: Nodes automatically discover each other using DNS service discovery within a Docker network (a minimal discovery sketch follows this list).
- HTTP API: Simple and clean HTTP-based API for interacting with the store.
- Request Forwarding: Follower nodes automatically forward write requests (`PUT`, `DELETE`) to the cluster leader.
- State Persistence: The state of each node is periodically saved to a snapshot on disk, allowing for recovery after a restart.
- Graceful Shutdown: On shutdown, a final snapshot is saved to prevent data loss.
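The peer-discovery behaviour can be illustrated with a short Go sketch. This is not the project's actual implementation; it only assumes a Docker Compose service named `kvstore` (as used in the run instructions below) and relies on Docker's embedded DNS returning one address per running replica. The helper name `discoverPeers` is hypothetical.

```go
package main

import (
	"fmt"
	"net"
)

// discoverPeers resolves the Compose service name against Docker's embedded
// DNS, which returns one address per running replica of that service.
func discoverPeers(serviceName string) ([]string, error) {
	addrs, err := net.LookupHost(serviceName)
	if err != nil {
		return nil, fmt.Errorf("peer discovery failed: %w", err)
	}
	return addrs, nil
}

func main() {
	peers, err := discoverPeers("kvstore") // assumed Compose service name
	if err != nil {
		panic(err)
	}
	for _, peer := range peers {
		fmt.Println("discovered peer:", peer)
	}
}
```

Resolving the service name at startup (and periodically afterwards) is one simple way to pick up replicas that join after a scale-up.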
The system is designed to run as a multi-node cluster (typically 5 nodes for a production-ready setup).
- Leader Election: When the cluster starts, nodes enter an election to choose a single leader.
- Client Interaction:
  - The leader is responsible for handling all write operations (`PUT`, `DELETE`). It appends the operation to its own log and then broadcasts it to all follower nodes.
  - All nodes (leaders and followers) can handle read operations (`GET`) directly, providing fast reads.
- Log Replication: The leader ensures that all follower nodes have a consistent and up-to-date log of operations. An operation is only committed and applied to the key-value store after a majority of nodes have acknowledged it.
- Fault Tolerance: If the leader node fails, the remaining nodes will detect its absence and automatically begin a new election to choose a new leader, ensuring the service remains available.
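For illustration, here is a minimal Go sketch of the majority-commit rule described in the Log Replication point above: an entry is committed only once more than half of the cluster (the leader counts as one acknowledgement) has acknowledged it. The function name `hasMajority` and the simulated ack counting are illustrative assumptions, not the project's actual code.

```go
package main

import "fmt"

// hasMajority reports whether `acks` acknowledgements out of `clusterSize`
// nodes are enough to commit an entry (the leader counts as one ack).
func hasMajority(acks, clusterSize int) bool {
	return acks > clusterSize/2
}

func main() {
	clusterSize := 5
	acks := 1 // the leader has appended the entry to its own log

	// Simulate two followers acknowledging the broadcast.
	acks += 2

	if hasMajority(acks, clusterSize) {
		fmt.Println("entry committed: apply it to the key-value store")
	} else {
		fmt.Println("not committed yet: wait for more acknowledgements")
	}
}
```

With 5 nodes, 3 acknowledgements (the leader plus two followers) are enough to commit, which is why such a cluster tolerates the loss of two nodes.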
- Go (version 1.24 or later)
- Docker
- Docker Compose
You can run the cluster in two ways: with a load-balancing proxy (recommended for a production-like setup) or with each node's port exposed directly (useful for testing).
This setup starts 5 `kvstore` nodes and an NGINX proxy that exposes a single entry point on port 80.
- Build and start the services:

  ```bash
  docker compose -f docker-compose.yml up --build --scale kvstore=5
  ```
- Interact with the cluster via the proxy on port 80:

  ```bash
  # Set a value
  curl -X PUT -H "Content-Type: application/json" -d '{"value":"Hello World"}' http://localhost/put/mykey

  # Get the value
  curl http://localhost/get/mykey
  ```
This setup starts 5 `kvstore` nodes and maps their ports directly to the host (`8080` through `8084`). This is useful for inspecting the status of individual nodes (see the sketch after these steps).
- Build and start the services:

  ```bash
  docker-compose -f docker-compose.test.yml up --build
  ```
- Interact with any node directly:

  ```bash
  # Send a request to the first node
  curl -X PUT -d "some_value" http://localhost:8080/put/anotherkey

  # Read it back from a different node
  curl http://localhost:8083/get/anotherkey
  ```
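To inspect every node's Raft role in this direct-port setup, a small Go probe can walk the mapped ports and print each `/status` response. This is a minimal sketch under the assumption that the host ports are `8080` through `8084` as described above; the shape of the `/status` response is not assumed, so the raw body is printed.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}

	// Ports 8080-8084 are the host mappings from the test compose file.
	for port := 8080; port <= 8084; port++ {
		url := fmt.Sprintf("http://localhost:%d/status", port)
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("port %d: unreachable (%v)\n", port, err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("port %d: %s\n", port, body)
	}
}
```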
The following endpoints are available on each node.
| Method | Endpoint | Description |
|---|---|---|
| `PUT` | `/put/{key}` | Creates or updates a value for the given key. If sent to a follower, it will be forwarded to the leader. The value is sent in the body. |
| `GET` | `/get/{key}` | Retrieves the value for the given key. |
| `DELETE` | `/delete/{key}` | Deletes the key-value pair. If sent to a follower, it will be forwarded to the leader. |
| `GET` | `/status` | Returns the internal Raft status of the node (e.g., its role, current term, leader address). |
| `GET` | `/health` | A simple health check endpoint that returns `OK` if the node is running. |