Easy, clustering-native, multi-tenancy-aware loadbalancer for Docker services, AWS Lambda functions and S3 static websites.
- Prometheus metrics
- Clustering-native
- All state that is required to exist on all loadbalancer nodes comes from an event bus (which is durable and has exactly-once message semantics), so each node easily reaches the same config.
- Dynamically discovers Docker services (Swarm and standalone containers supported).
- Kubernetes is not currently supported.
- Serves as a research platform for new technologies:
- CertBus integration for always up-to-date TLS.
- Turbocharger implementation for lightning-fast static file delivery and cacheability.
- Supports MicroWebApp-style apps (TODO: publish spec)
- Emulates AWS API Gateway for calling Lambda functions
- S3 static website support
- Why use an LB in front of S3? Deploys to plain S3 are not atomic. That means users can see broken, in-progress updates, or worse yet, a canceled deploy can leave the site in an unknown state. We support atomic deploys on top of S3 with great caching characteristics.
- This also makes it possible to overlay dynamic content "on top of" a static website. Think `/` mounted to S3 but `/api` mounted as a Lambda function.
- Manually defined applications (this hostname should be proxied to this IP..)
- Authorization support
- For simple websites (like static websites) or backoffice interactive HTTP services that you don't have control over (the likes of Prometheus, Grafana), it's desirable for the loadbalancer to enforce backend-wide authentication.
- For any advanced use, it's of course preferable to do in-app authentication so you have fine-grained control over things like different auth for interactive vs. API users etc.
- Opinionated
- Not meant to support everyone's use cases. Do the few things we do, really well.
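The backend-wide authentication idea above can be sketched as a plain `net/http` middleware. This is a hypothetical illustration, not Edgerouter's actual middleware; the credential values and handler names are assumptions:

```go
package main

import (
	"crypto/subtle"
	"fmt"
	"net/http"
)

// authorized does a constant-time credential check so timing doesn't
// leak how many characters of the credentials matched.
func authorized(gotUser, gotPass, wantUser, wantPass string) bool {
	userOK := subtle.ConstantTimeCompare([]byte(gotUser), []byte(wantUser)) == 1
	passOK := subtle.ConstantTimeCompare([]byte(gotPass), []byte(wantPass)) == 1
	return userOK && passOK
}

// requireBasicAuth wraps a handler so the whole backend is protected,
// even if the backend itself (e.g. Prometheus) has no auth of its own.
func requireBasicAuth(next http.Handler, wantUser, wantPass string) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		u, p, ok := r.BasicAuth()
		if !ok || !authorized(u, p, wantUser, wantPass) {
			w.Header().Set("WWW-Authenticate", `Basic realm="restricted"`)
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	backend := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from a backend with no auth of its own")
	})
	http.Handle("/", requireBasicAuth(backend, "admin", "s3cret"))
	// http.ListenAndServe(":8080", nil) // sketch only; server not started here
}
```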
TODO features: look at issues.
- Installation
- Also covers setting up, configuration, AWS IAM permissions
- Managing S3 static websites
- Enabling the admin UI
- Also explains authentication middleware
TODO: more documentation
Edgerouter consumes these EventHorizon streams for realtime updates:
/t-1/certbus
- TLS certificate updates happen here
/t-1/loadbalancer
- Static application definitions are updated here. "Static" doesn't mean the applications don't evolve - it means that they're semi-permanent. The static definition is updated each time an S3 static website is deployed. Lambda definitions rarely change.
Services/containers discovered from Docker are mostly Traefik-notation compliant, so labels like `traefik.frontend.rule`, `traefik.port` etc. are parsed into an app config.
See test cases for supported directives.
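To illustrate the idea, Traefik v1 style labels could be turned into a hostname + port pair roughly like this. This is a simplified sketch, not Edgerouter's actual parser (the test cases define the real set of supported directives):

```go
package main

import (
	"fmt"
	"strings"
)

// frontendFromLabels extracts the hostname from a Traefik v1 style
// "Host:example.com" rule and the backend port from traefik.port.
// Simplified: real parsing supports more directives than shown here.
func frontendFromLabels(labels map[string]string) (hostname, port string, ok bool) {
	rule := labels["traefik.frontend.rule"]
	if !strings.HasPrefix(rule, "Host:") {
		return "", "", false
	}
	hostname = strings.TrimPrefix(rule, "Host:")
	port = labels["traefik.port"]
	if hostname == "" || port == "" {
		return "", "", false
	}
	return hostname, port, true
}

func main() {
	labels := map[string]string{
		"traefik.frontend.rule": "Host:app.example.com",
		"traefik.port":          "8080",
	}
	host, port, ok := frontendFromLabels(labels)
	fmt.Println(host, port, ok) // app.example.com 8080 true
}
```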
"Static" application configs can be published via EventHorizon and all Edgerouter nodes in the cluster will pick up the same changes.
All application configs, whether they're dynamically created from Docker or retrieved via EventHorizon, follow this structure:
```json
{
  "id": "example.com",
  "frontends": [
    {
      "kind": "hostname",
      "hostname": "example.com",
      "path_prefix": "/"
    }
  ],
  "backend": {
    "kind": "s3_static_website",
    "s3_static_website_opts": {
      "bucket_name": "mycompany-staticwebsites",
      "region_id": "eu-central-1",
      "deployed_version": "v1"
    }
  }
}
```
An application always has an ID, at least one frontend (= hostname or hostname pattern), and a single backend (one backend can have multiple replicas for loadbalancing/high availability though).
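To see how `deployed_version` enables atomic deploys: each deploy uploads files under a new version prefix, and flipping the config to point at the new version switches every path at once. A hedged sketch - the `<app id>/<version><path>` key layout is an assumption for illustration, not necessarily Edgerouter's actual layout:

```go
package main

import "fmt"

// S3StaticWebsiteOpts mirrors the backend options from the JSON config above.
type S3StaticWebsiteOpts struct {
	BucketName      string
	RegionID        string
	DeployedVersion string
}

// objectKeyFor maps a request path to an S3 object key. Because the version
// is part of the key, publishing a config with a bumped DeployedVersion flips
// all paths to the new file set atomically - no half-updated mix of files.
func objectKeyFor(appID string, opts S3StaticWebsiteOpts, path string) string {
	if path == "/" {
		path = "/index.html"
	}
	return fmt.Sprintf("%s/%s%s", appID, opts.DeployedVersion, path)
}

func main() {
	opts := S3StaticWebsiteOpts{
		BucketName:      "mycompany-staticwebsites",
		RegionID:        "eu-central-1",
		DeployedVersion: "v1",
	}
	fmt.Println(objectKeyFor("example.com", opts, "/")) // example.com/v1/index.html
}
```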
Here's an example of a Docker-discovered service with 2 replicas (remember, this config is autogenerated):
```json
{
  "id": "app.example.com",
  "frontends": [
    {
      "kind": "hostname",
      "hostname": "app.example.com",
      "path_prefix": "/"
    }
  ],
  "backend": {
    "kind": "peer_set",
    "peer_set_opts": {
      "addrs": [
        "http://192.168.1.2",
        "http://192.168.1.3"
      ]
    }
  }
}
```
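A `peer_set` backend like the one above can be balanced with a simple round-robin picker. A minimal sketch, not Edgerouter's actual balancing code (which may also handle things like health checking):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// peerSet cycles through backend addresses; safe for concurrent use
// because the counter is advanced atomically.
type peerSet struct {
	addrs []string
	next  uint64
}

// pick returns the next address in round-robin order.
func (p *peerSet) pick() string {
	n := atomic.AddUint64(&p.next, 1)
	return p.addrs[(n-1)%uint64(len(p.addrs))]
}

func main() {
	ps := &peerSet{addrs: []string{"http://192.168.1.2", "http://192.168.1.3"}}
	for i := 0; i < 4; i++ {
		fmt.Println(ps.pick()) // alternates between .2 and .3
	}
}
```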