A sharded, Raft-replicated, log-structured key–value store with portable SDKs and first-class observability.
NoriKV is a distributed key-value database designed for high availability, strong consistency, and operational transparency. Built on modern storage and consensus algorithms, it delivers predictable performance and comprehensive observability.
- Sharded Architecture: Horizontal scaling with Jump Consistent Hashing
- Raft Consensus: Strong consistency with leader-based replication
- LSM Storage: Log-structured merge-tree with leveled compaction
- Multi-Language SDKs: TypeScript, Python, Go, and Java clients
- First-Class Observability: Built-in metrics, tracing, and live visualization
- SWIM Membership: Fast failure detection and cluster health monitoring
- High Performance: Zero-copy operations, optimized hot paths
```
┌──────────────────────────────────────────────────────────────┐
│          Client SDKs (TypeScript, Python, Go, Java)          │
│  - Smart routing to shard leaders                            │
│  - Automatic retry with exponential backoff                  │
│  - Connection pooling & health checking                      │
└──────────────────────────────────────────────────────────────┘
                               │
                               ▼
┌──────────────────────────────────────────────────────────────┐
│                        NoriKV Cluster                        │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐        │
│  │    Node 1    │  │    Node 2    │  │    Node 3    │        │
│  │  (Shard 0)   │  │  (Shard 1)   │  │  (Shard 2)   │        │
│  │              │  │              │  │              │        │
│  │  ┌────────┐  │  │  ┌────────┐  │  │  ┌────────┐  │        │
│  │  │  Raft  │  │  │  │  Raft  │  │  │  │  Raft  │  │        │
│  │  └────────┘  │  │  └────────┘  │  │  └────────┘  │        │
│  │  ┌────────┐  │  │  ┌────────┐  │  │  ┌────────┐  │        │
│  │  │  LSM   │  │  │  │  LSM   │  │  │  │  LSM   │  │        │
│  │  └────────┘  │  │  └────────┘  │  │  └────────┘  │        │
│  └──────────────┘  └──────────────┘  └──────────────┘        │
│                                                              │
│  SWIM Membership: Gossip-based failure detection             │
└──────────────────────────────────────────────────────────────┘
```
```bash
go get github.com/norikv/norikv-go
```

```go
import norikv "github.com/norikv/norikv-go"

client, _ := norikv.NewClient(ctx, norikv.DefaultClientConfig(
    []string{"localhost:9001", "localhost:9002"},
))
defer client.Close()

// Put a value
version, _ := client.Put(ctx, []byte("key"), []byte("value"), nil)

// Get a value
result, _ := client.Get(ctx, []byte("key"), nil)
```

```bash
npm install @norikv/client
```

```typescript
import { createClient } from '@norikv/client';

const client = createClient({
  nodes: ['localhost:9001', 'localhost:9002'],
});

await client.put('key', 'value');
const result = await client.get('key');
```

```bash
pip install norikv
```

```python
from norikv import Client

async with Client(['localhost:9001', 'localhost:9002']) as client:
    await client.put('key', 'value')
    result = await client.get('key')
```

```bash
# Build the server
cargo build --release -p norikv-server

# Run with default configuration
./target/release/norikv-server
```

| Crate | Status | Description |
|---|---|---|
| nori-observe | Complete | Vendor-neutral observability framework |
| nori-wal | Complete | Write-ahead log with recovery |
| nori-sstable | Complete | Sorted string tables with bloom filters |
| nori-lsm | Complete | LSM tree engine with compaction |
| nori-swim | Complete | SWIM failure detection protocol |
| nori-raft | Complete | Raft consensus implementation |
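For orientation, here is a conceptual sketch of the read path these crates provide: check the memtable first, then SSTables from newest to oldest, using each table's bloom filter to skip files that cannot contain the key. It is written in Go purely for illustration; the real engine is the Rust `nori-lsm` crate, and details such as tombstones, compaction, and WAL recovery are omitted.

```go
package lsmsketch

// sstable is a stand-in for what nori-sstable provides: a bloom filter
// plus sorted key-value data on disk.
type sstable interface {
	MightContain(key []byte) bool  // bloom filter: false means definitely absent
	Get(key []byte) ([]byte, bool)
}

// lsmTree is a toy model of the read path only.
type lsmTree struct {
	memtable map[string][]byte // most recent writes, backed by the WAL
	sstables []sstable         // flushed tables, ordered newest to oldest
}

func (t *lsmTree) Get(key []byte) ([]byte, bool) {
	// 1. The newest data lives in the in-memory memtable.
	if v, ok := t.memtable[string(key)]; ok {
		return v, true
	}
	// 2. Otherwise scan SSTables newest-first, skipping files via bloom filters.
	for _, sst := range t.sstables {
		if !sst.MightContain(key) {
			continue
		}
		if v, ok := sst.Get(key); ok {
			return v, true
		}
	}
	return nil, false
}
```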
| Language | Status | Features | Tests |
|---|---|---|---|
| TypeScript | Production | Smart routing, retries, pooling, ephemeral server | 100+ passing |
| Python | Production | Async/await API, type hints, ephemeral server | 80+ passing |
| Go | Production | Connection pooling, topology watching, integration tests | 102+ passing |
| Java | In Progress | Maven/Gradle, gRPC client | Pending |
| Component | Status | Description |
|---|---|---|
| norikv-server | In Progress | Main server binary |
| norikv-placement | Complete | Shard assignment and routing |
| norikv-transport-grpc | In Progress | gRPC/HTTP transport layer |
| norikv-vizd | Planned | Visualization daemon |
| norikv-dashboard | Planned | Real-time web dashboard |
All SDKs provide consistent functionality:
- Smart Client Routing: Client-side shard assignment with Jump Consistent Hashing
- Leader-Aware Operations: Direct requests to shard leaders with automatic failover
- Retry Logic: Exponential backoff with jitter for transient failures (see the sketch after this list)
- Connection Pooling: Efficient connection management per node
- Conditional Operations: Compare-and-swap (CAS) with version matching
- Consistency Levels: Lease-based, linearizable, or stale reads
- Idempotency Keys: Safe retries for write operations
- Cluster Topology: Dynamic cluster membership tracking
- Ephemeral Server: In-memory server for testing (no external dependencies)
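As a rough illustration of the retry policy listed above (exponential backoff with full jitter), here is a minimal Go sketch. It is not the SDKs' internal code, and the helper name and parameters are made up for the example; the clients apply this behavior automatically, so application code normally never writes it.

```go
package main

import (
	"context"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff is a hypothetical helper showing the general shape of
// exponential backoff with full jitter; the real logic lives inside each SDK.
func retryWithBackoff(ctx context.Context, attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// The backoff window doubles each attempt; sleep a random slice of it (full jitter).
		window := base << uint(i)
		select {
		case <-time.After(time.Duration(rand.Int63n(int64(window) + 1))):
		case <-ctx.Done():
			return ctx.Err()
		}
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	failures := 2
	err := retryWithBackoff(context.Background(), 5, 50*time.Millisecond, func() error {
		if failures > 0 { // simulate a transient failure
			failures--
			return fmt.Errorf("transient error")
		}
		return nil
	})
	fmt.Println("result:", err)
}
```

In practice you just call `client.Put` / `client.Get`; the SDKs retry transient failures for you.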
Critical: All SDKs use identical hash functions to ensure consistent shard routing:
- Key Hashing: xxhash64 (seed=0)
- Shard Assignment: Jump Consistent Hash
- Cross-Validated: Test vectors ensure identical results across all languages
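A minimal Go sketch of that routing contract follows. It is not the SDKs' actual code: the cespare/xxhash package stands in for whichever xxhash64 implementation each SDK bundles, and the jump function is the standard Lamping–Veach algorithm.

```go
package main

import (
	"fmt"

	"github.com/cespare/xxhash/v2"
)

// jumpConsistentHash is the Lamping–Veach jump consistent hash; this
// standalone copy is for illustration only.
func jumpConsistentHash(key uint64, numShards int32) int32 {
	var b int64 = -1
	var j int64
	for j < int64(numShards) {
		b = j
		key = key*2862933555777941757 + 1
		j = int64(float64(b+1) * (float64(1<<31) / float64((key>>33)+1)))
	}
	return int32(b)
}

// routeKey hashes a key with xxhash64 (seed 0) and maps the result to a shard.
func routeKey(key []byte, numShards int32) int32 {
	return jumpConsistentHash(xxhash.Sum64(key), numShards)
}

func main() {
	// Same key and shard count => same shard index, regardless of SDK language.
	fmt.Printf("key %q -> shard %d of 3\n", "user:12345", routeKey([]byte("user:12345"), 3))
}
```

The cross-language test vectors mentioned above remain the authoritative check that every SDK agrees on this mapping.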
- Rust 1.75+ (for server and core crates)
- Node.js 18+ (for TypeScript SDK)
- Python 3.9+ (for Python SDK)
- Go 1.21+ (for Go SDK)
- Java 11+ (for Java SDK)
```bash
# Build all Rust crates
cargo build --all

# Run tests
cargo test --all

# Build specific SDK
cd sdks/go && go build ./...
cd sdks/typescript && npm install && npm run build
cd sdks/python && pip install -e .
```

```bash
# Rust core tests
cargo test --all

# Go SDK tests (unit + integration)
cd sdks/go && go test ./...

# TypeScript SDK tests
cd sdks/typescript && npm test

# Python SDK tests
cd sdks/python && pytest
```

- Point Reads: ~10µs (p99)
- Point Writes: ~20µs (p99)
- Bloom Filter Hit: ~80ns (zero allocation)
- Compaction: Leveled strategy with size-tiered L0
- xxhash64: ~2.5ns per operation (Go), ~8ns (Python/TypeScript)
- Jump Consistent Hash: ~14ns per operation (Go)
- Combined Routing: ~23ns (Go), <100ns (TypeScript/Python)
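To sanity-check the hashing figures on your own hardware, a micro-benchmark along these lines is enough (the cespare/xxhash package is an assumption here, not necessarily what the Go SDK uses, and results will vary by CPU). Drop it into a `bench_test.go` file and run `go test -bench=.`.

```go
package kvbench

import (
	"testing"

	"github.com/cespare/xxhash/v2"
)

var sink uint64

// BenchmarkKeyHash times the xxhash64 step of shard routing on a short key.
func BenchmarkKeyHash(b *testing.B) {
	key := []byte("user:12345")
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		sink = xxhash.Sum64(key)
	}
}
```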
NoriKV is built with observability as a first-class concern:
- Vendor-Neutral: `nori-observe` trait for pluggable backends
- Prometheus: Built-in Prometheus exporter
- OTLP: OpenTelemetry support with trace exemplars
- Low Overhead: <100ns per metric operation
- Live Dashboard: Real-time cluster visualization (planned)
- VizEvent Stream: Typed events for custom tooling
- Health Endpoints: HTTP health checks and readiness probes
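As a small illustration of the health-endpoint bullet above, a readiness probe can be a plain HTTP GET against a node. The port and `/healthz` path below are placeholders, not a documented NoriKV route; substitute whatever the server actually exposes.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}

	// Hypothetical health endpoint; replace with the real path and port.
	resp, err := client.Get("http://localhost:9001/healthz")
	if err != nil {
		fmt.Println("node unreachable:", err)
		return
	}
	defer resp.Body.Close()

	fmt.Println("health status:", resp.StatusCode) // 200 when the node is ready
}
```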
- Architecture Guide: System design and components
- Storage Layer: WAL, SSTable, and LSM details
- Consensus: Raft implementation specifics
- SDKs: Individual SDK documentation in each directory
- Operations: Deployment and monitoring guides
- Core storage engine (WAL, SSTable, LSM)
- Raft consensus with read-index and leases
- SWIM membership protocol
- TypeScript, Python, and Go SDKs
- Ephemeral servers for testing
- Cross-SDK hash validation
- Java SDK
- Server application and transport layer
- gRPC/HTTP API implementation
- Integration testing with real server
- Live visualization dashboard
- Multi-shard transactions
- Streaming operations (watch API)
- Backup and restore
- Chaos testing framework
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
- Java SDK: Complete implementation following Go/TypeScript patterns
- Server Development: gRPC handlers, sharding coordinator
- Dashboard: Real-time visualization UI
- Documentation: Tutorials, examples, API reference
- Testing: Property tests, chaos engineering
- Performance: Benchmarking, optimization
This project is dual-licensed under MIT OR Apache-2.0.
See LICENSE-MIT and LICENSE-APACHE for details.
- Python SDK: `sdks/python/`
- TypeScript SDK: `sdks/typescript/`
- Go SDK: `sdks/go/`
- Java SDK: `sdks/java/`
Built with modern distributed systems research:
- LSM Trees: Original LevelDB/RocksDB design
- Raft Consensus: Diego Ongaro's dissertation
- SWIM: Scalable Weakly-consistent Infection-style Process Group Membership Protocol
- Jump Consistent Hash: Google's consistent hashing algorithm
Status: Active development | Stability: Alpha | Production Ready: SDKs only (server in progress)
