A demo of course-catalog-related APIs:
This application is implemented with the Django framework (version 1.10.1), so make sure you have Django installed first. You can install it with pip:
$ pip install Django==1.10.1
- Register your application at the OSU Developer Portal to use the Class Search API, Course Subjects API, and Terms API.
- Put your configuration.json file, which contains your client_id and client_secret, in the root folder of this application:
  "client_id": "secret", "client_secret": "sauce"
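The application presumably reads these credentials when it starts. A minimal sketch of such a loader (the load_credentials helper is hypothetical; only the file name and key names come from this README):

```python
import json

def load_credentials(path="configuration.json"):
    # Read the OSU API credentials from the configuration file.
    # The file is expected to contain "client_id" and "client_secret" keys.
    with open(path) as f:
        config = json.load(f)
    return config["client_id"], config["client_secret"]
```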
- Execute the following commands to run the server:
  $ cd catalog_api_demo
  $ python manage.py runserver
- While the server is running locally, visit http://127.0.0.1:8000/catalog_api_demo/ with your web browser.
- Build the Docker image by running:
  $ docker build --tag="catalog_api_demo" .
  Warning: do not put your configuration.json file in the root of the application when building the image. You should mount the configuration file as a read-only volume instead.
- Run the Docker container:
  $ docker run \
  >   --name=catalog_api_demo \
  >   --publish 8000:8000 \
  >   --volume /path/to/configuration.json:/demo/catalog-api-demo/configuration.json:ro \
  >   catalog_api_demo
This section shows how to deploy this application with Docker Swarm. You can check your Swarm version by running docker run --rm swarm -v.
Prepare multiple machines to act as nodes in the swarm cluster. As an example, I am going to use VirtualBox here to create three manager nodes and seven worker nodes. The Docker documentation recommends an odd number of manager nodes, chosen according to the organization's high-availability requirements:
- A three-manager swarm tolerates a maximum loss of one manager.
- An N manager cluster will tolerate the loss of at most (N-1)/2 managers.
- Docker recommends a maximum of seven manager nodes for a swarm.
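The fault-tolerance rule above is simple majority quorum, and the numbers can be checked with a quick calculation (the helper name is mine; the (N-1)/2 formula is from the Docker documentation quoted above):

```python
def max_manager_loss(n_managers):
    # A swarm keeps quorum as long as a majority of managers are up,
    # so it tolerates the loss of at most (N - 1) // 2 managers.
    return (n_managers - 1) // 2

for n in (3, 5, 7):
    print("%d managers tolerate losing %d" % (n, max_manager_loss(n)))
```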
$ docker-machine create --driver virtualbox manager01
$ docker-machine create --driver virtualbox manager02
$ docker-machine create --driver virtualbox manager03
$ docker-machine create --driver virtualbox worker01
$ docker-machine create --driver virtualbox worker02
...
$ docker-machine create --driver virtualbox worker07
After creating machines, you can list all machines you have by typing:
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
manager01 - virtualbox Running tcp://<manager01_ip>:2376 v1.12.3
manager02 - virtualbox Running tcp://<manager02_ip>:2376 v1.12.3
manager03 - virtualbox Running tcp://<manager03_ip>:2376 v1.12.3
worker01 - virtualbox Running tcp://<worker01_ip>:2376 v1.12.3
worker02 - virtualbox Running tcp://<worker02_ip>:2376 v1.12.3
...
worker07 - virtualbox Running tcp://<worker07_ip>:2376 v1.12.3
Now we are going to log in to the manager machine and make it the primary manager node in the swarm cluster.
$ MANAGER01_IP=$(docker-machine ip manager01)
$ docker-machine ssh manager01 docker swarm init --advertise-addr $MANAGER01_IP:2377
Swarm initialized: current node (<node_id>) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join \
--token <worker_node_token> \
<manager01_ip>:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
Make sure you save the swarm token after initializing the swarm. We will need the token later, but please DO NOT store it as an environment variable.
We can also add more manager nodes to our swarm.
$ docker-machine ssh manager01 docker swarm join-token manager
To add a manager to this swarm, run the following command:
docker swarm join \
--token <manager_node_token> \
<manager01_ip>:2377
$ docker-machine ssh manager02 docker swarm join \
> --token <manager_node_token> \
> <manager01_ip>:2377
Repeat this process to add the remaining manager, manager03, to the cluster.
The following example adds worker01, worker02, and worker03 to manager01. Use the following command to get the worker token first:
$ docker-machine ssh manager01 docker swarm join-token worker
To add a worker to this swarm, run the following command:
docker swarm join \
--token <worker_node_token> \
<manager01_ip>:2377
Now add the worker nodes to manager01:
$ docker-machine ssh worker01 docker swarm join \
> --token <worker_node_token> \
> $MANAGER01_IP:2377
Repeat this process to construct our cluster structure as follows:
- worker01, worker02, and worker03 to manager01;
- worker04 and worker05 to manager02;
- worker06 and worker07 to manager03.
You can check the status of every node in the swarm via the manager node:
$ docker-machine ssh manager01 docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
<manager01_node_id> * manager01 Ready Active Leader
<worker01_node_id> worker01 Ready Active
<worker02_node_id> worker02 Ready Active
<worker03_node_id> worker03 Ready Active
<manager02_node_id> manager02 Ready Active Reachable
<worker04_node_id> worker04 Ready Active
<worker05_node_id> worker05 Ready Active
<manager03_node_id> manager03 Ready Active Reachable
<worker06_node_id> worker06 Ready Active
<worker07_node_id> worker07 Ready Active
- Clone this repo to your manager node and prepare a proper configuration.json file.
- Build the catalog_api_demo image on your manager01:
  $ docker-machine ssh manager01 docker build --tag="catalog_api_demo" /path/to/catalog-api-demo
- Create a catalog_api_demo service on your manager01:
  $ docker-machine ssh manager01 docker service create \
  >   --name catalog_api_demo \
  >   --replicas 10 \
  >   --publish 8000:8000 \
  >   --mount type=bind,src=/path/to/configuration.json,dst=/demo/catalog-api-demo/configuration.json,readonly \
  >   catalog_api_demo
* Note that --replicas is the number of instances of the specified image. You can scale the service with the following command:
  $ docker-machine ssh manager01 docker service scale catalog_api_demo=<number_of_replicas>
- You can list all services on your manager01:
  $ docker-machine ssh manager01 docker service ls
  ID            NAME              REPLICAS  IMAGE             COMMAND
  <service_id>  catalog_api_demo  10/10     catalog_api_demo
- Now you should be able to access the service through manager01:
  $ curl -I http://<manager01_ip>:8000/catalog_api_demo/
  HTTP/1.0 200 OK
  Date: Tue, 01 Nov 2016 17:16:51 GMT
  Server: WSGIServer/0.1 Python/2.7.12
  X-Frame-Options: SAMEORIGIN
  Content-Type: text/html; charset=utf-8
* Note that to handle the case where the leader node (the primary manager node) goes down, we should build the image on multiple manager nodes. However, there is no need to create the service more than once. Be aware that a three-manager swarm tolerates the loss of at most one manager.
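The curl check above can also be scripted as a small availability probe, for example from a monitoring job. A sketch using only the Python standard library (the service_is_up helper is mine, and the URL placeholder must be filled in with a real manager address):

```python
import urllib.request

def service_is_up(url, timeout=5):
    # Return True if the service answers HTTP 200, False on any
    # connection error, timeout, or non-200 response.
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

# Example: service_is_up("http://<manager01_ip>:8000/catalog_api_demo/")
```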