Martin Helmich
Mittwald CM Service GmbH & Co. KG
This code is MIT-licensed.
This repository contains Salt formulas for easily deploying a small, Docker-based microservice architecture, implementing the following features:
- Container-based deployment
- Using Consul for service discovery
- Using NGINX for load balancing and service routing
- Using Prometheus and Grafana for monitoring and alerting
- Zero-downtime (re)deployment of services using a few custom Salt modules
HTTP (Port 80, 443) │ │ HTTP (Port 80, 443)
identity.services.acme.co │ │ finances.services.acme.co
▼ ▼
┌───────────────────────────────────────┐ ┌────────┐
│ NGINX ├──────────────────────►│ │
└────────────┬─────────────────────┬────┘ │ │
│ │ │ │
HTTP (Port 10000) ┌──────┴───────┐ │ HTTP (Port 10010) │ │
identity.service.consul  │ │ │ finances.service.consul │ │
▼ ▼ ▼ │ │
┌──────────┐ ┌──────────┐ ┌─────────┐ │ Consul │
│ Identity │ │ Identity │ │ Finance │ │ │
│ Service │ │ Service │ │ Service │ │ │
└──────────┘ └──────────┘ └─────────┘ │ │
│ │
┌───────────────────────────────────────┐ │ │
│ Docker │ │ │
└───────────────────────────────────────┘ └────────┘
The motivation for this project was to be able to host a small (!) microservice infrastructure without adding the complexity of additional solutions like Mesos or Kubernetes.
This probably does not scale beyond a few nodes, but sometimes that's all you need.
- This formula was tested on Debian Linux. With a bit of luck, it should work on recent Ubuntu versions, as well.
- You need to have Docker installed. Since there are a number of ways to install Docker, installing Docker is out of scope for this formula. If you want to get up and running quickly, use the following Salt states for installing Docker:

  docker-repo:
    pkgrepo.managed:
      - name: deb https://apt.dockerproject.org/repo debian-jessie main
      - dist: debian-jessie
      - file: /etc/apt/sources.list.d/docker.list
      - gpgcheck: 1
      - key_url: https://get.docker.com/gpg

  docker-engine:
    pkg.installed:
      - require:
        - pkgrepo: docker-repo

  docker:
    service.running:
      - require:
        - pkg: docker-engine

  Remember to change the distribution above from `debian-jessie` to whatever you happen to be running.
- You need the docker-py Python library. The Docker states in this repository require at least version 1.3 of this library (tested with 1.3.1). The easiest way to install this package is using pip:

  $ pip install "docker-py>=1.3.0,<1.4.0"

  Alternatively, use the following Salt states to install docker-py:

  pip:
    pkg.installed: []

  docker-py:
    pip.installed:
      - name: docker-py>=1.3.0,<1.4.0
      - require:
        - pkg: pip
See the official documentation on how to use formulas in your Salt setup. Assuming you have GitFS set up and configured on your Salt master, simply configure this repository as a GitFS remote in your Salt configuration file (typically `/etc/salt/master`):
gitfs_remotes:
- https://github.com/mittwald/salt-microservices
To quote a very important warning from the official documentation:

> We strongly recommend forking a formula repository into your own GitHub account to avoid unexpected changes to your infrastructure. Many Salt Formulas are highly active repositories, so pull new changes with care. Plus, any additions you make to your fork can be easily sent back upstream with a quick pull request!
This formula uses the Salt mine feature to discover servers in the infrastructure. Specifically, the `network.ip_addrs` function needs to be callable as a mine function. For this, ensure that the following pillar is set for all servers of your infrastructure (adjust `eth0` if your servers' primary network interface is named differently):
mine_functions:
network.ip_addrs:
- eth0
You need at least one Consul server in your cluster (although a cluster of three or five servers should be used in production for a high-availability setup).
First of all, you need to configure a targeting expression (and optionally a targeting mode) that can be used to match your Consul servers. The default expression will simply match the minion ID against the pattern `consul-server*`. Use the pillar `consul:server_pattern` for this:
consul:
server_pattern: "<your-consul-server-pattern>"
server_target_mode: "<glob|pcre|...>"
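For example, if your Consul servers followed a naming scheme like consul-node-01 (a made-up scheme, purely for illustration), you could match them with a PCRE expression:

consul:
  server_pattern: 'consul-node-\d+'
  server_target_mode: 'pcre'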
You can install these servers by using the `mwms.consul.server` state in your `top.sls` file:
'consul-server*':
- mwms.consul.server
During the next highstate call, Salt will install a Consul server on each of these nodes and configure them to run in a cluster. The servers will also be installed with the Consul UI. You should be able to access the UI by opening port 8500 on any of your Consul servers in your browser.
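To check that the cluster has formed, you can query Consul's HTTP API on any of the servers; the /v1/status/peers endpoint lists the current server peers (the hostname below is a placeholder):

$ curl http://consul-server-01:8500/v1/status/peers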
On the Docker nodes, a Consul agent has to be installed (or a Consul server). You can use the `mwms.consul.agent` state for this. Furthermore, use the `mwms.services` state to deploy the actual services:
'service-node*':
- mwms.consul.agent
- mwms.services
For monitoring services, you can use the `mwms.monitoring` state for every node:
'service-node*':
- mwms.monitoring
To install a Prometheus server to actually gather metrics, use the `mwms.prometheus` state on one of your nodes:
'service-node-monitoring':
- mwms.prometheus
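Putting the pieces together, a complete top.sls might look like the following sketch (the minion IDs are examples and need to match your own naming scheme):

base:
  'consul-server*':
    - mwms.consul.server
  'service-node*':
    - mwms.consul.agent
    - mwms.services
    - mwms.monitoring
  'service-node-monitoring':
    - mwms.prometheus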
Services that should be deployed in your infrastructure are defined using Salt pillars.
#!jinja|yaml|gpg
microservices:
example:
hostname: example.services.acme.co
ssl_certificate: /etc/ssl/certs/your-domain.pem
ssl_key: /etc/ssl/private/your-domain.key
ssl_force: True
containers:
web:
instances: 2
docker_image: docker-registry.acme.co/services/example:latest
stateful: False
http: True
http_internal_port: 8000
base_port: 10000
links:
database: db
volumes:
- ["persistent", "/var/lib/persistent-app-data", "rw"]
environment:
DATABASE_PASSWORD: |
-----BEGIN PGP MESSAGE-----
...
db:
instances: 1
docker_image: mariadb:10
stateful: True
volumes:
- ["database", "/var/lib/mysql", "rw"]
Supposing that each service pillar resides in its own file (for example, `<pillar-root>/services/example.sls`), you can manually distribute your services on your hosts in your top file:
service-node*:
- services.identity
service-node-001:
- services.finance
service-node-002:
- services.mailing
Note: Yes, you need to distribute services manually to your hosts. Yes, this is done on purpose to offer a low-complexity solution for small-scale architectures. If you want more, use Kubernetes or Marathon.
For updating existing services, you can use the `microservice.redeploy` Salt module. This module will try to pull a newer version of the image that the application containers were created from. If a newer image exists, this module will delete and re-create the containers from the newer image:
$ salt-call microservice.redeploy example
This is done sequentially and with a grace time of 60 seconds. If your service
consists of more than one instance of the same container, the deployment will
not cause any significant downtime.
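The same module can also be invoked from the Salt master for several nodes at once. A sketch using standard Salt targeting; `--batch-size 1` makes Salt work through the minions one at a time, which is advisable if instances of the same service are spread across several hosts:

$ salt --batch-size 1 'service-node*' microservice.redeploy example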
All deployed microservices are configured using Salt pillars. The root pillar is the `microservices` pillar, which is a map of service definitions. Each key of this map will be used as the service name.
A service definition is a YAML object consisting of the following properties:
- `hostname` (required): Describes the public hostname used to address this service. This is especially important when the service exposes an HTTP API or GUI; in this case, NGINX will be configured to use this hostname to route requests to the respective containers.
- `containers` (required): A map of container definitions. Each key of this map will be used as the container name.
- `ssl_certificate` and `ssl_key` (optional): The paths to an SSL certificate and the associated private key used to encrypt connections to the respective service.
- `ssl_force` (optional): When an `ssl_certificate` is defined, all unencrypted HTTP traffic will be redirected to SSL by default. Set this value to `False` to still allow unencrypted HTTP traffic.
- `checks` (optional): Can contain additional health checks for Consul. See the Consul documentation on health checks for more information.
- `check_url` (optional): The URL to use for the Consul health check. This option will only be used when the service is accessible via HTTP. If not specified, the service's `hostname` property will be used as the URL.
A container definition is a YAML object consisting of the properties defined below. Each container definition may result in one or more actual Docker containers being created (the exact number of containers depends on the `instances` property). Each container will follow the naming pattern `<service-name>-<container-name>-<instance-number>`. This means that the example from above would create three containers:
from above would create three containers:
- `example-web-0`
- `example-web-1`
- `example-db-0`
You can use the following configuration options for each container:
- `instances` (default: `1`): How many instances of this container to spawn.
- `docker_image` (required): The Docker image to use for building this container. If the image is not present on the host, the `mwdocker` Salt states will try to pull it.
- `stateful` (default: `False`): You can set this property to `True` if any kind of persistent data is stored inside the container (usually, you should try to avoid this, for example by using host directories mounted as volumes). If a container is marked as stateful, the Salt states will not delete and re-create it when a newer version of the image is present.
- `http` (default: `False`): Set to `True` when this container exposes an HTTP service. This will cause a respective NGINX configuration to be created to make the service accessible to the public.
- `http_internal_port` (default: `80`): Set this property to the port that the HTTP service listens on inside the container.
- `base_port` (required): The port that container ports should be mapped to on the host. Note that this property does not configure a single port, but rather the beginning of a port range of `instances` length.
- `links`: A map of other containers (of the same service) that should be linked into this container. The format is `<container-name>: <alias>`. Example:

  links:
    database: db
    storage: data

- `volumes`: A list of volumes that should be created for this container. Each volume is a list of three items:

  1. The volume name on the host side. Do not specify an absolute path here! The Salt states will automatically create a new host directory in `/var/lib/services/<service-name>/<path>` for you.
  2. The volume path inside the container.
  3. `ro` or `rw` to denote read-only or read-and-write access.

  Example:

  volumes:
    - ["database", "/var/lib/mysql", "rw"]
    - ["files", "/opt/service-resources", "ro"]

- `volumes_from`: A list of containers from which volumes should be used. Use the service-local container name here. Containers should be specified as a simple list. Example:

  volumes_from:
    - database

- `environment`: A map of environment variables to set for this container. Note: when using this feature for setting confidential data like API tokens or passwords, consider using Salt's GPG encryption features.
These states install a Consul server or agent on a node. The Consul version that is installed is shipped statically with this formula. The Consul agent will be run using the Supervisor process manager.
Configures a node to use the Consul servers as DNS servers. This is done by placing a custom `resolv.conf` file.
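The generated file might look roughly like the following sketch; the nameserver addresses are placeholders for your Consul servers' IP addresses (which the formula discovers via the Salt mine):

# /etc/resolv.conf (sketch with placeholder addresses)
nameserver 10.0.0.10
nameserver 10.0.0.11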
These states read the `microservices` pillar (see above) and define the necessary states for operating the application containers for these services. This includes the following:
- Creating as many Docker containers as defined in the pillar
- Adjusting the NGINX configuration to make your services accessible to the world
- Configuring maintenance cron jobs for each service as defined in the pillar (these will be run in temporary Docker containers)
Installs cAdvisor on your host to gather host and container metrics. This is probably most useful when you're also using the `mwms.prometheus` state on one or more of your nodes.
Note that due to Docker issue #17902, cAdvisor is started directly on the host, not within a Docker container.
Sets up a Prometheus server for gathering metrics and alerting. The setup consists of the actual Prometheus service, the Alertmanager and a Grafana frontend.
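The formula configures these components for you. Purely as an illustration of how Prometheus scrapes the cAdvisor instances installed by `mwms.monitoring`, a scrape configuration in current Prometheus YAML syntax might look like this (the hostnames and cAdvisor's default port 8080 are assumptions):

scrape_configs:
  - job_name: cadvisor
    static_configs:
      - targets:
          - service-node-001:8080
          - service-node-002:8080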
This state registers a new external node using the Consul REST API. This state requires Consul to be already installed on the node (use one of the `mwms.consul.*` SLSes for that) and requires the Python `requests` package.
Example:
external-node:
consul.node:
- address: url-to-external.service.acme.com
- datacenter: dc1
- service:
ID: example
Port: 80
Tags:
- master
- v1
This module re-deploys a microservice. It will try to pull a newer version of the image that the application containers were created from. If a newer image exists, it will delete and re-create the containers from the newer image.
This is done sequentially and with a grace time of 60 seconds. If your service
consists of more than one instance of the same container, the deployment will
not cause any significant downtime.