
Prometheus Backpressure Proxy

Adaptive Protection for Your Backend Services

🛡️ Dynamically shield your services from traffic overload using smart, metrics-driven congestion control.

See Netflix concurrency-limits for the inspiration.

Why Use This Project?

Throttle Proxy helps keep your distributed systems safe and stable during load spikes. Here's why:

  • Proven Algorithm: Uses the Additive Increase/Multiplicative Decrease (AIMD) algorithm, inspired by TCP congestion control, to dynamically adjust request concurrency (a minimal sketch follows this list).
  • Real-Time Metrics: Leverages Prometheus metrics to make real-time decisions, ensuring your system adapts quickly to changing loads.
  • Configurable and Flexible: Allows you to set custom thresholds and monitor multiple signals, providing fine-grained control over your traffic management.
  • Prevents Failures: Helps prevent cascading failures and maintains system stability under unpredictable load conditions.
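
A bare-bones sketch of the AIMD idea (illustrative only, not this project's actual implementation): the concurrency window grows additively while the watched signals stay healthy and shrinks multiplicatively when a threshold is breached.

// aimdWindow is an AIMD-controlled concurrency window.
type aimdWindow struct {
    cur, min, max int
}

// onHealthy grows the window by one slot while every watched signal stays
// below its warning threshold (additive increase).
func (w *aimdWindow) onHealthy() {
    if w.cur < w.max {
        w.cur++
    }
}

// onBackpressure halves the window when a signal breaches its threshold,
// never dropping below the configured minimum (multiplicative decrease).
func (w *aimdWindow) onBackpressure() {
    w.cur /= 2
    if w.cur < w.min {
        w.cur = w.min
    }
}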

Key Features

  • 📊 Adaptive Traffic Management: Automatically adjusts request concurrency based on real-time Prometheus metrics
  • 🔀 Smart Scaling: Uses the Additive Increase/Multiplicative Decrease (AIMD) algorithm
  • 🚦 Configurable Limits: Set min and max concurrent request thresholds
  • 🔍 Multi-Signal Monitoring: Track system health across multiple metrics simultaneously

Quick Example

config := proxymw.BackpressureConfig{
    EnableBackpressure: true,
    BackpressureQueries: []proxymw.BackpressureQuery{
        {
            Query: `sum(rate(http_server_errors_total[5m]))`,
            // Start throttling when the error rate reaches 50%
            WarningThreshold: 0.5,
            // Throttle most aggressively once the error rate exceeds 80%
            EmergencyThreshold: 0.8,
        },
    },
    CongestionWindowMin: 10,
    CongestionWindowMax: 100,
}
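
One plausible reading of the two thresholds (a sketch of the general idea, not necessarily the library's exact formula): below WarningThreshold the full window is available, at or above EmergencyThreshold the window collapses to its minimum, and in between it scales linearly.

// allowedWindow linearly shrinks the concurrency window from max to min as
// the observed metric climbs from the warning threshold to the emergency
// threshold. Below warning the full window is available; at or above
// emergency only the minimum remains.
func allowedWindow(metric, warning, emergency float64, winMin, winMax int) int {
    if metric <= warning {
        return winMax
    }
    if metric >= emergency {
        return winMin
    }
    // Fraction of the way from warning to emergency, in (0, 1).
    frac := (metric - warning) / (emergency - warning)
    return winMax - int(frac*float64(winMax-winMin))
}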

How It Works

  1. 🔭 Continuously monitor system metrics via Prometheus queries (sketched after this list)
  2. 📈 Dynamically adjust request throughput
  3. 🛑 Automatically throttle when system stress is detected
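
The monitoring step can be pictured with the standard prometheus/client_golang API client. This is a self-contained illustration of polling a query on a timer, not the proxy's internal code; the address, interval, and query are placeholders.

package main

import (
    "context"
    "log"
    "time"

    "github.com/prometheus/client_golang/api"
    v1 "github.com/prometheus/client_golang/api/prometheus/v1"
    "github.com/prometheus/common/model"
)

func main() {
    // Placeholder address; point this at your Prometheus server.
    client, err := api.NewClient(api.Config{Address: "http://localhost:9090"})
    if err != nil {
        log.Fatal(err)
    }
    promAPI := v1.NewAPI(client)

    ticker := time.NewTicker(15 * time.Second)
    defer ticker.Stop()

    for range ticker.C {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        val, _, err := promAPI.Query(ctx, `sum(rate(http_server_errors_total[5m]))`, time.Now())
        cancel()
        if err != nil {
            log.Printf("backpressure query failed: %v", err)
            continue
        }
        // A sum() over a rate() comes back as a single-element vector.
        if vec, ok := val.(model.Vector); ok && len(vec) > 0 {
            errorRate := float64(vec[0].Value)
            log.Printf("observed error rate: %.3f", errorRate)
            // Compare errorRate against the warning/emergency thresholds
            // and adjust the concurrency window accordingly.
        }
    }
}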

When to Use

  • Protecting microservices from sudden traffic spikes
  • Preventing cascading failures
  • Maintaining system stability under unpredictable load

Quick Start

  1. Configure backpressure queries against your Prometheus metrics
  2. Define min/max request windows
  3. Choose the server-side HTTP proxy or the client-side roundtripper (a simplified roundtripper sketch follows this list)
  4. Import the starter Grafana dashboard
  5. Let the proxy handle the rest!
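
For the client-side option, the general shape is an http.RoundTripper that gates outgoing requests. The sketch below uses a fixed-size semaphore purely for illustration; throttle-proxy's own roundtripper drives the limit from the Prometheus-backed AIMD window rather than a constant cap.

import "net/http"

// limitedTransport caps in-flight requests with a fixed-size semaphore.
type limitedTransport struct {
    next http.RoundTripper
    sem  chan struct{}
}

func (t *limitedTransport) RoundTrip(req *http.Request) (*http.Response, error) {
    select {
    case t.sem <- struct{}{}: // acquire a concurrency slot
        defer func() { <-t.sem }()
        return t.next.RoundTrip(req)
    case <-req.Context().Done(): // caller gave up while waiting
        return nil, req.Context().Err()
    }
}

// newLimitedClient allows at most maxInFlight concurrent outgoing requests.
func newLimitedClient(maxInFlight int) *http.Client {
    return &http.Client{Transport: &limitedTransport{
        next: http.DefaultTransport,
        sem:  make(chan struct{}, maxInFlight),
    }}
}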

Development

Installation

Build and run the Docker Compose stack

make all
docker compose down
docker compose up --build

Lint and Test

make test
make lintfix

Upgrade dependencies

make deps

Contributing

See CONTRIBUTING.md for guidelines.
