Docker Swarm is native clustering for Docker. It allows you to create and access a pool of Docker hosts using the full suite of Docker tools. Because Docker Swarm serves the standard Docker API, any tool that already communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts.
Swarm Manager: Docker Swarm has a manager, a pre-defined Docker host that is the single point for all administration. The Swarm manager orchestrates and schedules containers across the entire cluster, and can be configured to enable High Availability.
Swarm Nodes: Containers are deployed on nodes, which are additional Docker hosts. Each Swarm node must be accessible by the manager and must listen on the same network interface (TCP port). Each node runs a Docker Swarm agent that registers the referenced Docker daemon, monitors it, and updates the discovery backend with the node’s status. The containers run on the nodes.
Scheduler Strategy: Different scheduler strategies (“binpack”, “spread” (default), and “random”) can be applied to pick the best node to run a container. The default “spread” strategy places each new container on the node running the fewest containers. In addition, multiple kinds of filters, such as constraints and affinity, narrow down the candidate nodes, giving the scheduler fine-grained control over placement.
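As an illustration (a sketch, not part of the setup below): the strategy is chosen when the standalone Swarm manager is started, and constraint or affinity filters are passed as environment variables on docker run. The discovery URL, node name, and image here are illustrative only.
# Choose a scheduling strategy when starting the Swarm manager (discovery URL illustrative).
docker run -d swarm manage --strategy binpack consul://192.168.99.100:8500
# Constraint filter: ask the scheduler to place a container on a specific node.
docker run -d -e constraint:node==swarm-node-01 nginx
# Affinity filter: ask the scheduler to co-locate a container with the container named "db".
docker run -d -e affinity:container==db nginx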
Node Discovery Service: Swarm manager talks to a hosted discovery service. This service maintains a list of IPs in the Swarm cluster. Docker Hub hosts a discovery service that can be used during development. In production, this is replaced by other services such as etcd, consul, or zookeeper. You can even use a static file. This is particularly useful if there is no Internet access or you are running in a closed network.
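For example, a file-based discovery backend is simply a text file with one node address per line; a minimal sketch, with illustrative addresses and path:
# Create a static discovery file with one <IP>:<port> entry per line (addresses illustrative).
cat > /tmp/swarm-cluster <<EOF
192.168.99.102:2376
192.168.99.103:2376
192.168.99.104:2376
EOF
# Point the Swarm manager at the static file instead of a hosted discovery service.
docker run -d -v /tmp/swarm-cluster:/tmp/swarm-cluster swarm manage file:///tmp/swarm-cluster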
Standard Docker API: Docker Swarm serves the standard Docker API, so any tool that talks to a single Docker host will seamlessly scale to multiple hosts. If you were using shell scripts with the Docker CLI to configure multiple Docker hosts, the same CLI can now talk to the Swarm cluster.
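In practice, the only thing that changes is the endpoint the Docker client points at. A minimal sketch with an illustrative address; docker-machine env --swarm, used later in this section, exports these variables (including the TLS settings) for you:
# Point the standard Docker CLI at the Swarm manager instead of a single daemon.
# A TLS-enabled setup also needs DOCKER_TLS_VERIFY and DOCKER_CERT_PATH.
export DOCKER_HOST=tcp://192.168.99.101:3376
docker ps      # now lists containers across the whole cluster
docker info    # now reports cluster-wide nodes, CPUs, and memory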
There are lots of other concepts, but these are the main ones.
This section will deploy an application that will provide a CRUD/REST interface on a data bucket in Couchbase. This is achieved by using a Java EE application deployed on WildFly to access the database.
- Create a Machine that will host the discovery service:
docker-machine create -d=virtualbox consul-machine
Running pre-create checks...
Creating machine...
Waiting for machine to be running, this may take a few minutes...
Machine is running, waiting for SSH to be available...
Detecting operating system of created instance...
Provisioning created instance...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
To see how to connect Docker to this machine, run: docker-machine env consul-machine
- Connect to this Machine:
eval $(docker-machine env consul-machine)
- Run the Consul service using the following Compose file:
myconsul:
  image: progrium/consul
  restart: always
  hostname: consul
  ports:
    - 8500:8500
  command: "-server -bootstrap"
This Compose file is available at https://github.com/arun-gupta/docker-images/blob/master/consul/docker-compose.yml.
Start this service as:
docker-compose up -d
Pulling myconsul (progrium/consul:latest)...
latest: Pulling from progrium/consul
3b4d28ce80e4: Pull complete
e5ab901dcf2d: Pull complete
30ad296c0ea0: Pull complete
3dba40dec256: Pull complete
f2ef4387b95e: Pull complete
53bc8dcc4791: Pull complete
75ed0b50ba1d: Pull complete
17c3a7ed5521: Pull complete
8aca9e0ecf68: Pull complete
4d1828359d36: Pull complete
46ed7df7f742: Pull complete
b5e8ce623ef8: Pull complete
049dca6ef253: Pull complete
bdb608bc4555: Pull complete
8b3d489cfb73: Pull complete
c74500bbce24: Pull complete
9f3e605442f6: Pull complete
d9125e9e799b: Pull complete
Digest: sha256:8cc8023462905929df9a79ff67ee435a36848ce7a10f18d6d0faba9306b97274
Status: Downloaded newer image for progrium/consul:latest
Creating consul_myconsul_1
The started container can be verified as follows:
docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                                                            NAMES
f05d8dd11e7f        progrium/consul     "/bin/start -server -"   30 seconds ago      Up 29 seconds       53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 0.0.0.0:8500->8500/tcp, 8301-8302/udp   consul_myconsul_1
Swarm is fully integrated with Docker Machine, which makes Machine the easiest way to get started.
- Create a Swarm Master and point to the Consul discovery service:
docker-machine create -d virtualbox --virtualbox-disk-size "5000" --swarm --swarm-master --swarm-discovery="consul://$(docker-machine ip consul-machine):8500" --engine-opt="cluster-store=consul://$(docker-machine ip consul-machine):8500" --engine-opt="cluster-advertise=eth1:2376" swarm-master
Running pre-create checks...
Creating machine...
Waiting for machine to be running, this may take a few minutes...
Machine is running, waiting for SSH to be available...
Detecting operating system of created instance...
Provisioning created instance...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Configuring swarm...
To see how to connect Docker to this machine, run: docker-machine env swarm-master
A few options to note here:
- --swarm configures the Machine with Swarm
- --swarm-master configures the created Machine to be the Swarm master
- --swarm-discovery defines the address of the discovery service
- --cluster-advertise advertises the machine on the network
- --cluster-store designates a distributed k/v storage backend for the cluster
- --virtualbox-disk-size sets the disk size for the created Machine to 5GB. This is required so that the WildFly and Couchbase images can be downloaded on any of the nodes.
- Find some information about this machine:
docker-machine inspect --format='{{json .Driver}}' swarm-master
{"Boot2DockerImportVM":"","Boot2DockerURL":"","CPU":1,"DiskSize":5000,"HostOnlyCIDR":"192.168.99.1/24","HostOnlyNicType":"82540EM","HostOnlyPromiscMode":"deny","IPAddress":"192.168.99.102","MachineName":"swarm-master","Memory":1024,"NoShare":false,"SSHPort":51972,"SSHUser":"docker","StorePath":"/Users/arungupta/.docker/machine","SwarmDiscovery":"consul://192.168.99.100:8500","SwarmHost":"tcp://0.0.0.0:3376","SwarmMaster":true,"VBoxManager":{}}
Note that the disk size is 5GB.
- Connect to the Swarm cluster by using the command:
eval "$(docker-machine env --swarm swarm-master)"
Note that --swarm is specified to connect to the Swarm cluster. Otherwise the command will connect to the swarm-master Machine only. However, if you’re on Windows, use the docker-machine env swarm-master command only, copy the output into an editor, replace all appearances of EXPORT with SET, remove the quotes, replace all appearances of "/" with "\", and then issue the commands at your command prompt.
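For example, the converted commands might look like the following; the IP address, port, and certificate path are assumptions and will differ on your machine:
REM Windows equivalent of the eval command above (values illustrative).
SET DOCKER_TLS_VERIFY=1
SET DOCKER_HOST=tcp://192.168.99.101:3376
SET DOCKER_CERT_PATH=C:\Users\<you>\.docker\machine\machines\swarm-master
SET DOCKER_MACHINE_NAME=swarm-master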
- Get information about the cluster:
docker info
Containers: 2
Images: 1
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 1
 swarm-master: 192.168.99.102:2376
  └ Containers: 2
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.021 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=4.1.13-boot2docker, operatingsystem=Boot2Docker 1.9.1 (TCL 6.4.1); master : cef800b - Fri Nov 20 19:33:59 UTC 2015, provider=virtualbox, storagedriver=aufs
CPUs: 1
Total Memory: 1.021 GiB
Name: d074fd97682e
This cluster has one node and that node has two containers.
- Create the first Swarm node to join this cluster:
docker-machine create -d virtualbox --virtualbox-disk-size "5000" --swarm --swarm-discovery="consul://$(docker-machine ip consul-machine):8500" --engine-opt="cluster-store=consul://$(docker-machine ip consul-machine):8500" --engine-opt="cluster-advertise=eth1:2376" swarm-node-01
Running pre-create checks...
Creating machine...
Waiting for machine to be running, this may take a few minutes...
Machine is running, waiting for SSH to be available...
Detecting operating system of created instance...
Provisioning created instance...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Configuring swarm...
To see how to connect Docker to this machine, run: docker-machine env swarm-node-01
Notice that no --swarm-master is specified in this command. This ensures that the created Machines are worker nodes.
- Create a second Swarm node to join this cluster:
docker-machine create -d virtualbox --virtualbox-disk-size "5000" --swarm --swarm-discovery="consul://$(docker-machine ip consul-machine):8500" --engine-opt="cluster-store=consul://$(docker-machine ip consul-machine):8500" --engine-opt="cluster-advertise=eth1:2376" swarm-node-02
Running pre-create checks...
Creating machine...
Waiting for machine to be running, this may take a few minutes...
Machine is running, waiting for SSH to be available...
Detecting operating system of created instance...
Provisioning created instance...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Configuring swarm...
To see how to connect Docker to this machine, run: docker-machine env swarm-node-02
- List all the created Machines:
docker-machine ls
NAME             ACTIVE   DRIVER       STATE     URL                         SWARM
consul-machine   -        virtualbox   Running   tcp://192.168.99.100:2376
swarm-master     *        virtualbox   Running   tcp://192.168.99.101:2376   swarm-master (master)
swarm-node-01    -        virtualbox   Running   tcp://192.168.99.102:2376   swarm-master
swarm-node-02    -        virtualbox   Running   tcp://192.168.99.103:2376   swarm-master
The machines that are part of the cluster have the cluster’s name in the SWARM column. If the SWARM column is blank, it is a standalone machine. For example, consul-machine is a standalone machine, as opposed to all the other machines, which are part of the swarm-master cluster. The Swarm master is identified by (master) in the SWARM column.
- Get information about the cluster again:
docker info
Containers: 4
Images: 3
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 3
 swarm-master: 192.168.99.102:2376
  └ Containers: 2
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.021 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=4.1.13-boot2docker, operatingsystem=Boot2Docker 1.9.1 (TCL 6.4.1); master : cef800b - Fri Nov 20 19:33:59 UTC 2015, provider=virtualbox, storagedriver=aufs
 swarm-node-01: 192.168.99.103:2376
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.021 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=4.1.13-boot2docker, operatingsystem=Boot2Docker 1.9.1 (TCL 6.4.1); master : cef800b - Fri Nov 20 19:33:59 UTC 2015, provider=virtualbox, storagedriver=aufs
 swarm-node-02: 192.168.99.104:2376
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.021 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=4.1.13-boot2docker, operatingsystem=Boot2Docker 1.9.1 (TCL 6.4.1); master : cef800b - Fri Nov 20 19:33:59 UTC 2015, provider=virtualbox, storagedriver=aufs
CPUs: 3
Total Memory: 3.064 GiB
Name: d074fd97682e
Now there are 3 nodes – one Swarm master and 2 Swarm worker nodes. There are a total of 4 containers running in this cluster – a swarm-agent on each node, and an additional swarm-agent-master running on the master. This can be verified by connecting to the master Machine (without specifying --swarm) and listing all the containers:
eval "$(docker-machine env swarm-master)"
docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                              NAMES
cf5edbf4ada7        swarm:latest        "/swarm join --advert"   49 seconds ago      Up 49 seconds       2375/tcp                           swarm-agent
c7c3b80bceb4        swarm:latest        "/swarm manage --tlsv"   49 seconds ago      Up 49 seconds       2375/tcp, 0.0.0.0:3376->3376/tcp   swarm-agent-master
- You can also query the Consul discovery service, while connected to the Swarm cluster, the master, or any worker node, to list the nodes in the cluster:
docker run swarm list consul://$(docker-machine ip consul-machine):8500
192.168.99.102:2376
192.168.99.103:2376
192.168.99.104:2376
- Connect to the Swarm cluster:
eval "$(docker-machine env --swarm swarm-master)"
- List all the networks created by Docker so far:
docker network ls
NETWORK ID          NAME                   DRIVER
33a619ddc5d2        swarm-node-02/bridge   bridge
e0b73c96ffec        swarm-node-02/none     null
b315e67f0363        swarm-node-02/host     host
879d6167be47        swarm-master/bridge    bridge
f771ddc7d957        swarm-node-01/none     null
e042754df336        swarm-node-01/host     host
d2f3b512f9dc        swarm-node-01/bridge   bridge
5b5bcf135d7b        swarm-master/none      null
fffc34eae907        swarm-master/host      host
Docker creates three networks for each host automatically:
- bridge: Default network that containers connect to. This is the docker0 network in all Docker installations.
- none: Container-specific networking stack.
- host: Adds a container on the host’s networking stack. Network configuration is identical to the host.
A total of nine networks are created for this three-node Swarm cluster because three networks are created for each node.
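To look at any one of these default networks in more detail, you can connect to a single node and inspect it; a minimal sketch (remember to reconnect to the Swarm cluster afterwards):
# Inspect the default bridge network on one node; the JSON output shows its subnet,
# gateway, and any containers attached to it.
eval "$(docker-machine env swarm-master)"
docker network inspect bridge
# Reconnect to the Swarm cluster when done.
eval "$(docker-machine env --swarm swarm-master)"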
- Use a Compose file to start WildFly and Couchbase:
mycouchbase:
  container_name: "db"
  image: couchbase/server
  ports:
    - 8091:8091
    - 8092:8092
    - 8093:8093
    - 11210:11210
mywildfly:
  image: arungupta/wildfly-admin
  environment:
    - COUCHBASE_URI=db
  ports:
    - 8080:8080
    - 9990:9990
In this Compose file:
- The Couchbase service has a custom container name defined by container_name. This name is used when creating a new environment variable COUCHBASE_URI during WildFly startup.
- The arungupta/wildfly-admin image is used as it binds WildFly’s management to all network interfaces and, in addition, exposes port 9990. This enables the WildFly Maven Plugin to be used to deploy the application.
The source for this file is at https://github.com/arun-gupta/docker-images/blob/master/wildfly-couchbase-javaee7/docker-compose.yml.
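Once the environment has been started (next step), one way to confirm that the COUCHBASE_URI variable was injected is to read it from the running WildFly container; a sketch, where the container name matches the docker-compose output shown below but may differ in your run:
# Print the environment of the WildFly container and look for COUCHBASE_URI.
docker exec wildflycouchbasejavaee7_mywildfly_1 env | grep COUCHBASE_URI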
This application environment can be started as:
docker-compose --x-networking up -d
Creating network "wildflycouchbasejavaee7" with driver "None"
Pulling mywildfly (arungupta/wildfly-admin:latest)...
swarm-node-02: Pulling arungupta/wildfly-admin:latest... : downloaded
swarm-master: Pulling arungupta/wildfly-admin:latest... : downloaded
swarm-node-01: Pulling arungupta/wildfly-admin:latest... : downloaded
Creating wildflycouchbasejavaee7_mywildfly_1
Pulling mycouchbase (couchbase/server:latest)...
swarm-node-02: Pulling couchbase/server:latest... : downloaded
swarm-master: Pulling couchbase/server:latest... : downloaded
swarm-node-01: Pulling couchbase/server:latest... : downloaded
Creating db
--x-networking creates an overlay network for the Swarm cluster. This can be verified by listing networks again:
docker network ls
NETWORK ID          NAME                            DRIVER
5e93fc34b4d9        swarm-node-01/docker_gwbridge   bridge
1c041242f51d        wildflycouchbasejavaee7         overlay
cc8697c6ce13        swarm-master/docker_gwbridge    bridge
f771ddc7d957        swarm-node-01/none              null
879d6167be47        swarm-master/bridge             bridge
5b5bcf135d7b        swarm-master/none               null
fffc34eae907        swarm-master/host               host
e042754df336        swarm-node-01/host              host
d2f3b512f9dc        swarm-node-01/bridge            bridge
33a619ddc5d2        swarm-node-02/bridge            bridge
e0b73c96ffec        swarm-node-02/none              null
b315e67f0363        swarm-node-02/host              host
Three new networks are created:
- Containers connected to the multi-host network are automatically connected to the docker_gwbridge network. This network allows the containers to have external connectivity outside of their cluster, and is created on each worker node.
- A new overlay network wildflycouchbasejavaee7 is created. Connect to different Swarm nodes and check that the overlay network exists on them. Let’s begin with the master:
eval "$(docker-machine env swarm-master)"
docker network ls
NETWORK ID          NAME                      DRIVER
1c041242f51d        wildflycouchbasejavaee7   overlay
879d6167be47        bridge                    bridge
5b5bcf135d7b        none                      null
fffc34eae907        host                      host
cc8697c6ce13        docker_gwbridge           bridge
Next, with swarm-node-01:
eval "$(docker-machine env swarm-node-01)"
docker network ls
NETWORK ID          NAME                      DRIVER
1c041242f51d        wildflycouchbasejavaee7   overlay
d2f3b512f9dc        bridge                    bridge
f771ddc7d957        none                      null
e042754df336        host                      host
5e93fc34b4d9        docker_gwbridge           bridge
Finally, with swarm-node-02:
eval "$(docker-machine env swarm-node-02)"
docker network ls
NETWORK ID          NAME                      DRIVER
1c041242f51d        wildflycouchbasejavaee7   overlay
e0b73c96ffec        none                      null
b315e67f0363        host                      host
33a619ddc5d2        bridge                    bridge
As seen, the wildflycouchbasejavaee7 overlay network exists on all the Machines. This confirms that the overlay network created for the Swarm cluster was added to each host in the cluster. docker_gwbridge only exists on Machines that have application containers running.
Read more about Docker Networks.
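To see which containers are attached to the overlay network, inspect it from any node; this is a sketch and the JSON output will vary with your cluster:
# The Containers section of the output lists the containers attached to the overlay network.
docker network inspect wildflycouchbasejavaee7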
- Connect to the Swarm cluster and verify that WildFly and Couchbase are running:
eval "$(docker-machine env --swarm swarm-master)"
docker ps
CONTAINER ID        IMAGE                     COMMAND                  CREATED             STATUS              PORTS                                                                                                              NAMES
23a581295a2b        couchbase/server          "/entrypoint.sh couch"   9 seconds ago       Up 8 seconds        192.168.99.102:8091-8093->8091-8093/tcp, 11207/tcp, 11211/tcp, 192.168.99.102:11210->11210/tcp, 18091-18092/tcp   swarm-master/db
7a8a885b23f3        arungupta/wildfly-admin   "/opt/jboss/wildfly/b"   9 seconds ago       Up 8 seconds        192.168.99.103:8080->8080/tcp, 192.168.99.103:9990->9990/tcp                                                      swarm-node-01/wildflycouchbasejavaee7_mywildfly_1
Note that, in this example, the Couchbase server is running on the swarm-master node and WildFly is running on swarm-node-01. Take note of which nodes your Couchbase and WildFly servers are running on and update the following commands accordingly.
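If the listing is long, a quick way to find where a particular container landed is to filter by name while connected to the Swarm cluster; the node prefix in the printed name shows the host. The --filter and --format flags are standard docker ps options, and the names come from the Compose file above:
# e.g. prints "swarm-master/db" and "swarm-node-01/wildflycouchbasejavaee7_mywildfly_1"
docker ps --filter "name=db" --format "{{.Names}}"
docker ps --filter "name=mywildfly" --format "{{.Names}}"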
- Clone https://github.com/arun-gupta/couchbase-javaee.git. This workspace contains a simple Java EE application that is deployed on WildFly and provides a REST API over the travel-sample bucket in Couchbase.
- The Couchbase server can be configured using its REST API. The application contains a Maven profile that configures the Couchbase server with the travel-sample bucket. This can be invoked as follows (note that you may need to replace swarm-master with the node in your cluster that is running Couchbase):
mvn install -Pcouchbase -Ddocker.host=$(docker-machine ip swarm-master)
. . .
* Server auth using Basic with user 'Administrator'
> POST /sampleBuckets/install HTTP/1.1
> Authorization: Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==
. . .
} [data not shown]
* upload completely sent off: 17 out of 17 bytes
< HTTP/1.1 202 Accepted
* Server Couchbase Server is not blacklisted
< Server: Couchbase Server
. . .
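Under the covers, this Maven profile drives Couchbase’s REST API, as the log above shows. An equivalent direct call is sketched below; the Administrator/password credentials match the Basic auth header in the log and are the defaults for this setup:
# Load the travel-sample bucket directly via Couchbase's REST API
# (adjust the host to whichever node is running Couchbase).
curl -v -u Administrator:password \
  -X POST http://$(docker-machine ip swarm-master):8091/sampleBuckets/install \
  -d '["travel-sample"]'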
Deploy the application to WildFly by specifying three parameters:
- Host IP address where WildFly is running (swarm-node-01 in this example, but update as needed for your cluster)
- Username of a user in WildFly’s administrative realm
- Password of the user specified in WildFly’s administrative realm
mvn install -Pwildfly -Dwildfly.hostname=$(docker-machine ip swarm-node-01) -Dwildfly.username=admin -Dwildfly.password=Admin#007
. . .
Nov 29, 2015 12:11:14 AM org.xnio.Xnio <clinit>
INFO: XNIO version 3.3.1.Final
Nov 29, 2015 12:11:14 AM org.xnio.nio.NioXnio <clinit>
INFO: XNIO NIO Implementation Version 3.3.1.Final
Nov 29, 2015 12:11:15 AM org.jboss.remoting3.EndpointImpl <clinit>
INFO: JBoss Remoting version 4.0.9.Final
[INFO] Authenticating against security realm: ManagementRealm
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
. . .
Now that the WildFly and Couchbase servers have started, let’s access the application. You need to specify the IP address of the Machine where WildFly is running (swarm-node-01 in this example, but update as needed for your cluster):
curl http://$(docker-machine ip swarm-node-01):8080/couchbase-javaee/resources/airline
[{"travel-sample":{"id":10123,"iata":"TQ","icao":"TXW","name":"Texas Wings","callsign":"TXW","type":"airline","country":"United States"}}, {"travel-sample":{"id":10642,"iata":null,"icao":"JRB","name":"Jc royal.britannica","callsign":null,"type":"airline","country":"United Kingdom"}}, {"travel-sample":{"id":112,"iata":"5W","icao":"AEU","name":"Astraeus","callsign":"FLYSTAR","type":"airline","country":"United Kingdom"}}, {"travel-sample":{"id":1355,"iata":"BA","icao":"BAW","name":"British Airways","callsign":"SPEEDBIRD","type":"airline","country":"United Kingdom"}}, {"travel-sample":{"id":10765,"iata":"K5","icao":"SQH","name":"SeaPort Airlines","callsign":"SASQUATCH","type":"airline","country":"United States"}}, {"travel-sample":{"id":13633,"iata":"WQ","icao":"PQW","name":"PanAm World Airways","callsign":null,"type":"airline","country":"United States"}}, {"travel-sample":{"id":139,"iata":"SB","icao":"ACI","name":"Air Caledonie International","callsign":"AIRCALIN","type":"airline","country":"France"}}, {"travel-sample":{"id":13391,"iata":"-+","icao":"--+","name":"U.S. Air","callsign":null,"type":"airline","country":"United States"}}, {"travel-sample":{"id":1191,"iata":"UU","icao":"REU","name":"Air Austral","callsign":"REUNION","type":"airline","country":"France"}}, {"travel-sample":{"id":1316,"iata":"FL","icao":"TRS","name":"AirTran Airways","callsign":"CITRUS","type":"airline","country":"United States"}}]
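The response comes back as a single line of JSON; to pretty-print it, pipe it through a formatter such as python -m json.tool:
curl -s http://$(docker-machine ip swarm-node-01):8080/couchbase-javaee/resources/airline | python -m json.tool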
Check the state of the cluster by connecting to different nodes.
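For example, connect to an individual node and list the containers scheduled on it (all commands here were introduced earlier in this section):
# See what is running on a single worker node.
eval "$(docker-machine env swarm-node-01)"
docker ps
# Reconnect to the Swarm cluster when done.
eval "$(docker-machine env --swarm swarm-master)"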