Auto-linking multi-host docker cluster
Arpanet is a wrapper around the following tools:
- docker - for running containers
- consul - for service discovery
- cadvisor - for container metrics
- ambassadord - for auto tcp routing
- registrator - for announcing services to consul
- fleetstreet - for publishing container env to consul
It is an opinionated layer upon which you can create a Platform As A Service.
The quickstart list of commands:
On each machine that is part of the cluster:
$ export ARPANET_IP=192.168.8.120
$ curl -sSL https://get.docker.io/ubuntu/ | sudo sh
$ sudo sh -c 'curl -L https://raw.githubusercontent.com/binocarlos/arpanet/master/wrapper > /usr/local/bin/arpanet'
$ sudo chmod a+x /usr/local/bin/arpanet
$ sudo -E arpanet setup
$ arpanet pull
On the first machine (192.168.8.120):
$ arpanet start:consul boot
On the other 2 'server' nodes:
$ ssh node2 arpanet start:consul server 192.168.8.120
$ ssh node3 arpanet start:consul server 192.168.8.120
Then start the service stack on all 3 servers:
$ arpanet start:stack
$ ssh node2 arpanet start:stack
$ ssh node3 arpanet start:stack
Now we can join more nodes in consul client mode:
$ ssh node4 arpanet start:consul client 192.168.8.120
$ ssh node4 arpanet start:stack
The variables you should set in your environment before running the arpanet container:
- HOSTNAME - make sure the hostname of the machine is set correctly and is different to other hostnames on your arpanet
- ARPANET_IP - the IP address of the interface to use for cross-host communication. This should be the IP of a private network on the host.
$ export ARPANET_IP=192.168.8.120
$ curl -sSL https://get.docker.io/ubuntu/ | sudo sh
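If you need to change the hostname or find the private address to use for ARPANET_IP, something like this works on a stock Ubuntu host (the name node1 and the address are only examples):
# give this machine a hostname that is unique on your arpanet
$ echo node1 | sudo tee /etc/hostname
$ sudo hostname node1
# list the host's addresses and pick the private one to export as ARPANET_IP
$ ip -4 addr show
$ export ARPANET_IP=192.168.8.120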
Arpanet runs in a docker container that starts and stops containers on the main docker host.
Because of this, the container must be run with the docker socket mounted as a volume.
There is a wrapper script that will handle this neatly - to install the wrapper:
$ curl -L https://raw.githubusercontent.com/binocarlos/arpanet/v0.2.4/wrapper > /usr/local/bin/arpanet
$ chmod a+x /usr/local/bin/arpanet
Next - pull the arpanet image (optional - it will pull automatically in the next step):
$ docker pull binocarlos/arpanet
Run the setup command as root - it will create the consul data folder, configure the docker DNS bridge and bind docker to the ARPANET_IP tcp endpoint:
$ sudo -E arpanet setup
Finally pull the docker images for the various services:
$ arpanet pull
Everything is now installed - you can now run arpanet start and arpanet stop.
The arpanet script runs in a docker container - this means the docker socket must be mounted as a volume each time we run.
The wrapper script (installed to /usr/local/bin) will handle this for you.
Or, if you want to run arpanet manually - here is an example of pretty much what the wrapper script does:
$ docker run --rm \
-h $HOSTNAME \
-v /var/run/docker.sock:/var/run/docker.sock \
-e ARPANET_IP \
binocarlos/arpanet help
$ sudo -E arpanet setup
This should be run as root and will perform the following steps:
- bind docker to listen on the tcp://$ARPANET_IP interface
- connect the docker DNS resolver to consul
- create a host directory for the consul data volume
- restart docker
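As a rough illustration only (the setup command itself is the source of truth and paths differ between distros), the equivalent manual steps on an Ubuntu host would look something like this - the DNS address 172.17.42.1 is the usual docker0 bridge IP on which the consul container answers DNS queries:
# bind docker to the unix socket and the ARPANET_IP tcp endpoint and point its DNS at consul
$ echo 'DOCKER_OPTS="-H unix:///var/run/docker.sock -H tcp://192.168.8.120:2375 --dns 172.17.42.1 --dns-search service.consul"' | sudo tee -a /etc/default/docker
# create the host directory used for the consul data volume (the CONSUL_DATA default)
$ sudo mkdir -p /mnt/arpanet-consul
# restart docker so the new options take effect
$ sudo service docker restart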
This will pull the images used by arpanet services.
$ arpanet pull
Start the consul container on this host.
There are 3 modes to boot a node:
- boot - used for the very first node
- server - used for other servers (consul server)
- client - used for other nodes (consul agent)
$ arpanet start:consul server 192.168.8.120
You can pass consul arguments after the JOINIP (or after boot):
$ arpanet start:consul server 192.168.8.120 -node mycustomname -dc dc34
Before you start the arpanet services the consul cluster must be booted and operational.
This means you must run the start:consul command on all 3 (or 5 etc) server nodes before running arpanet start:stack on any of them.
If you are adding a client node then the start:stack command can be run directly after the start:consul command (because the consul cluster is already up and running).
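If you are scripting the bootstrap, one way to wait for the cluster to become operational is to poll consul's HTTP API for a leader before running start:stack (a sketch - it assumes the default CONSUL_PORT of 8500):
# wait until the consul cluster has elected a leader, then start the stack
until curl -s http://$ARPANET_IP:8500/v1/status/leader | grep -q ':8300'; do
  sleep 2
done
arpanet start:stack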
Stop the arpanet containers.
$ arpanet stop
Print information about this node.
A CLI tool to read and write to the consul key value store.
Commands:
To delete a key recursively:
$ arpanet kv del folder/a?recurse
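The kv commands read and write the same data as consul's HTTP key/value API - assuming the default CONSUL_PORT of 8500, the equivalent raw calls look roughly like this:
# write a value
$ curl -X PUT -d 'somevalue' http://$ARPANET_IP:8500/v1/kv/folder/a
# read it back (consul returns the value base64 encoded inside a JSON document)
$ curl http://$ARPANET_IP:8500/v1/kv/folder/a
# delete the whole folder recursively
$ curl -X DELETE http://$ARPANET_IP:8500/v1/kv/folder?recurse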
Boot a cluster of 5 nodes, with 3 server nodes and 2 client nodes.
First stash the IP of the first node - we will 'join' the other nodes to it and the consul gossip protocol will catch up.
$ export JOINIP=192.168.8.120
Then boot the first node:
$ arpanet start:consul boot
Now - boot the other 2 servers:
$ ssh node2 arpanet start:consul server $JOINIP
$ ssh node3 arpanet start:consul server $JOINIP
When all 3 servers are started we have an operational consul cluster and can start the rest of the arpanet service stack on the nodes:
$ arpanet start:stack
$ ssh node2 arpanet start:stack
$ ssh node3 arpanet start:stack
Now we can setup further clients:
$ ssh node4 arpanet start:consul client $JOINIP
$ ssh node4 arpanet start:stack
$ ssh node5 arpanet start:consul client $JOINIP
$ ssh node5 arpanet start:stack
We can now use consul members to check our cluster:
$ arpanet consul members
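If you are scripting this you can also assert that all 5 nodes have joined before moving on - a rough check (consul members reports each node's status as alive once it has joined the gossip pool):
# expect 5 alive members (3 servers + 2 clients) after the steps above
if [ "$(arpanet consul members | grep -c alive)" -lt 5 ]; then
  echo "cluster is not fully joined yet" >&2
fi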
There are other environment variables that control arpanet behaviour:
- DOCKER_PORT - the TCP port docker should listen on (2375)
- CADVISOR_PORT - the port to expose for the cadvisor api (8080)
- CONSUL_PORT - the port to expose the consul HTTP api (8500)
- CONSUL_EXPECT - the number of server nodes to auto bootstrap (3)
- CONSUL_DATA - the host folder to mount for consul state (/mnt/arpanet-consul)
- CONSUL_KV_PATH - the Key/Value path to use to keep state (/arpanet)
You can control the images used by arpanet services using the following variables:
- CONSUL_IMAGE (progrium/docker-consul)
- CADVISOR_IMAGE (google/cadvisor)
- REGISTRATOR_IMAGE (progrium/registrator)
- AMBASSADORD_IMAGE (binocarlos/ambassadord) - will change to progrium
- FLEETSTREET_IMAGE (binocarlos/fleetstreet)
You can control the names of the launched services using the following variables:
- CONSUL_NAME (arpanet_consul)
- CADVISOR_NAME (arpanet_cadvisor)
- REGISTRATOR_NAME (arpanet_registrator)
- AMBASSADOR_NAME (arpanet_backends)
- FLEETSTREET_NAME (arpanet_fleetstreet)
The wrapper will source these variables from ~/.arpanetrc and will inject them all into the arpanet docker container.
If you are running arpanet manually then pass these variables to docker using -e CONSUL_NAME=... flags.
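For example, a ~/.arpanetrc that expects a 5 server cluster and uses a different consul data folder (the values are purely illustrative) might contain:
# ~/.arpanetrc - sourced by the wrapper and injected into the arpanet container
export CONSUL_EXPECT=5
export CONSUL_DATA=/srv/arpanet-consul
The manual equivalent is to add the flags to the docker run shown earlier, for example -e CONSUL_EXPECT=5.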
A basic arpanet will use the private network of a single data centre.
Securing the network is left up to the user to allow for multiple approaches - for example:
- use iptables to block unknown hosts
- use a VPN solution to encrypt traffic between hosts
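An iptables approach might, for example, only accept the cluster ports from the known arpanet hosts (a sketch only - 2375 and 8500 are the DOCKER_PORT and CONSUL_PORT defaults above, 8300-8302 are consul's standard server and serf ports, and the addresses are examples):
# allow the other arpanet hosts to reach docker and consul
for ip in 192.168.8.120 192.168.8.121 192.168.8.122; do
  iptables -A INPUT -p tcp -s $ip --dport 2375 -j ACCEPT
  iptables -A INPUT -p tcp -s $ip --dport 8500 -j ACCEPT
  iptables -A INPUT -p tcp -s $ip --dport 8300:8302 -j ACCEPT
  iptables -A INPUT -p udp -s $ip --dport 8301:8302 -j ACCEPT
done
# and drop those ports from anywhere else
iptables -A INPUT -p tcp --dport 2375 -j DROP
iptables -A INPUT -p tcp --dport 8500 -j DROP
iptables -A INPUT -p tcp --dport 8300:8302 -j DROP
iptables -A INPUT -p udp --dport 8301:8302 -j DROP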
Future versions of arpanet will allow for consul TLS encryption, meaning it can bind onto public Internet ports and use the multi data-centre feature securely.
- TLS encryption between consul nodes & for docker server
- Make the service stack configurable so services become plugins
- Replicate the service stack via consul so we can manage services across the cluster
MIT