Adaptive Protection for Your Backend Services
🛡️ Dynamically shield your services from traffic overload using smart, metrics-driven congestion control.
See Netflix concurrency-limits for the inspiration.
Throttle Proxy helps keep your distributed systems safe and stable during load spikes. Here's why:
- Proven Algorithm: Utilizes the Additive Increase/Multiplicative Decrease (AIMD) algorithm, inspired by TCP congestion control, to dynamically adjust request concurrency.
- Real-Time Metrics: Leverages Prometheus metrics to make real-time decisions, ensuring your system adapts quickly to changing loads.
- Configurable and Flexible: Allows you to set custom thresholds and monitor multiple signals, providing fine-grained control over your traffic management.
- Prevents Failures: Helps prevent cascading failures and maintains system stability under unpredictable load conditions.
- 📊 Adaptive Traffic Management: Automatically adjusts request concurrency based on real-time Prometheus metrics
- 🔀 Smart Scaling: Uses Additive Increase/Multiplicative Decrease (AIMD) algorithm
- 🚦 Configurable Limits: Set min and max concurrent request thresholds
- 🔍 Multi-Signal Monitoring: Track system health across multiple metrics simultaneously
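The AIMD scheme named above can be sketched in a few lines: grow the concurrency window by one slot on each healthy signal, halve it when throttling kicks in, and clamp to the configured bounds. This is a minimal illustration of the idea (the type and method names here are hypothetical, not Throttle Proxy's API):

```go
package main

import "fmt"

// aimdWindow tracks a concurrency limit adjusted with
// Additive Increase / Multiplicative Decrease, the same
// scheme TCP congestion control uses.
type aimdWindow struct {
	size     float64
	min, max float64
}

// onSuccess grows the window by one slot (additive increase),
// capped at the configured maximum.
func (w *aimdWindow) onSuccess() {
	w.size = min(w.size+1, w.max)
}

// onThrottle halves the window (multiplicative decrease),
// floored at the configured minimum.
func (w *aimdWindow) onThrottle() {
	w.size = max(w.size/2, w.min)
}

func main() {
	w := aimdWindow{size: 10, min: 10, max: 100}
	for i := 0; i < 5; i++ {
		w.onSuccess()
	}
	fmt.Println(w.size) // 15: five healthy ticks, +1 each
	w.onThrottle()
	fmt.Println(w.size) // 10: halved to 7.5, then floored at min
}
```

The asymmetry is the point: recovery is slow and linear, while backoff is fast and geometric, so the proxy sheds load quickly under stress but probes back up cautiously.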
```go
config := proxymw.BackpressureConfig{
	EnableBackpressure: true,
	BackpressureQueries: []proxymw.BackpressureQuery{
		{
			Query: `sum(rate(http_server_errors_total[5m]))`,
			// Start throttling when the error rate reaches 50%
			WarningThreshold: 0.5,
			// Apply hard throttling when the error rate exceeds 80%
			EmergencyThreshold: 0.8,
		},
	},
	CongestionWindowMin: 10,
	CongestionWindowMax: 100,
}
```
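Because queries are a slice, several signals can be watched at once, each with its own thresholds. Here is a sketch extending the config above with a second, illustrative latency signal (the latency query and its thresholds are assumptions, not project defaults):

```go
config := proxymw.BackpressureConfig{
	EnableBackpressure: true,
	BackpressureQueries: []proxymw.BackpressureQuery{
		{
			// Error-rate signal, as in the example above
			Query:              `sum(rate(http_server_errors_total[5m]))`,
			WarningThreshold:   0.5,
			EmergencyThreshold: 0.8,
		},
		{
			// Hypothetical second signal: p99 latency in seconds
			Query:              `histogram_quantile(0.99, sum(rate(http_request_duration_seconds_bucket[5m])) by (le))`,
			WarningThreshold:   0.5,
			EmergencyThreshold: 1.0,
		},
	},
	CongestionWindowMin: 10,
	CongestionWindowMax: 100,
}
```

Whichever signal is in the worst state drives the most throttling, so one overloaded dependency is enough to shrink the window.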
- 🔭 Continuously monitor system metrics
- 📈 Dynamically adjust request throughput
- 🛑 Automatically throttle when system stress detected
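One way to picture how a signal between its warning and emergency thresholds translates into throttling is a linear ramp from the full window down to the floor. This is an illustration of the concept, not Throttle Proxy's exact formula:

```go
package main

import "fmt"

// allowedWindow sketches how a metric value between the warning and
// emergency thresholds could scale the congestion window linearly
// from CongestionWindowMax down to CongestionWindowMin.
func allowedWindow(value, warn, emergency float64, cwndMin, cwndMax int) int {
	switch {
	case value <= warn:
		return cwndMax // healthy: full window
	case value >= emergency:
		return cwndMin // emergency: hard floor
	default:
		// linear interpolation between the two thresholds
		frac := (value - warn) / (emergency - warn)
		return cwndMax - int(frac*float64(cwndMax-cwndMin))
	}
}

func main() {
	// Using the thresholds from the config example: warn 0.5, emergency 0.8
	fmt.Println(allowedWindow(0.3, 0.5, 0.8, 10, 100)) // 100: below warning
	fmt.Println(allowedWindow(0.65, 0.5, 0.8, 10, 100)) // 55: halfway between
	fmt.Println(allowedWindow(0.9, 0.5, 0.8, 10, 100)) // 10: past emergency
}
```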
- Protecting microservices from sudden traffic spikes
- Preventing cascading failures
- Maintaining system stability under unpredictable load
- Configure backpressure queries as Prometheus metrics
- Define min/max request windows
- Choose the server-side HTTP proxy or the client-side `RoundTripper`
- Import the starter Grafana dashboard
- Let the proxy handle the rest!
Build the docker-compose stack:

```shell
make all
docker compose down
docker compose up --build
```
- Generate fake traffic with `./scripts/traffic_generator.py`
- View metrics in the local Grafana instance
```shell
make test
make lintfix
make deps
```