Groupe 3
Corentin Cournac - Maxime Dadoua - Mathieu Dauré - Pierre Présent - Guillaume Thiollière

##Architecture
(Please note that the placement of the container services shown here is not accurate; it only gives an idea of what the deployment looks like. There are more of them, and Docker Swarm distributes them fairly between the nodes. Also, there may be more than one manager: on Helion, there are 3 managers and 3 workers.)
Application deployment:
- Docker: each service and the web server run in their own containers. Docker allowed us to develop the application easily, without having to worry about the details of its future deployment.
- Docker Swarm: turns a set of VM hosts into a managed cluster of Docker engines. With Swarm, Docker services on one host can communicate directly with services on other hosts: there is no need to configure networking between the hosts or to change our code from dev to prod, Swarm takes care of that for us. We can also easily define Docker "subnets" to prevent a particular service from communicating with others (used here to isolate W from the S database; a sketch follows this list). Other features include fault tolerance, service replication, internal load balancing...
- Docker-Machine: used to create the VM hosts on which Docker will be installed, so that these hosts can join or manage the Swarm. With this tool, the Bastion VM can be seen as an entry point to the Swarm cluster (easy SSH connection, adding or removing nodes...).
- Docker-Flow HAProxy: an HAProxy made specifically for Docker Swarm. It is used here to hide the web server completely from the Internet: the only way to reach it is to go through the proxy, which of course only allows connections on the HTTP/S ports (sketched below).
- Heat and instance snapshot: a Heat template is used to create an Ubuntu-based instance with Docker installed, along with all the images of the application services. A snapshot of this instance is then created so that we can use Docker-Machine to make many "copies" of it in a short amount of time. This is much faster than installing Docker on every instance generated by Docker-Machine.
- Openstack Cinder and Rex-Ray: in order to provide persistent storage for the S database, we used the Rex-Ray Docker volume driver, which allows us to create an Openstack Cinder volume (or use a pre-existing one). It works, but it has limitations, mainly because of Cinder: it is too difficult to preemptively detach the volume from one host and attach it to another (thankfully this should improve in the future), so we had no choice but to make sure the S database service always stays on the same host (here, the "Master" of the Swarm, i.e. its first elected leader; see the last sketch after this list). Because of this, fault tolerance for the S database is suboptimal.
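To illustrate the "subnet" isolation mentioned above, here is a minimal sketch of how overlay networks keep W away from the S database. The network names, image names and password are placeholders, not the ones used by our scripts:

```bash
# Two overlay networks: one for the "public" side of the application,
# one restricted to the S service and its database.
docker network create --driver overlay front_net
docker network create --driver overlay s_db_net

# The S database is only attached to s_db_net...
docker service create --name s-database --network s_db_net \
  -e MYSQL_ROOT_PASSWORD=change-me mysql:5.7

# ...so S (attached to both networks) can reach it, but W (front_net only) cannot.
docker service create --name s-service --network front_net --network s_db_net groupe3/s-service
docker service create --name w-service --network front_net groupe3/w-service
```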
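And a hedged sketch of the proxy idea: only the proxy publishes the HTTP/S ports, while the web server stays unpublished on an internal overlay network. The com.df.* labels follow the Docker Flow Proxy discovery convention (its swarm-listener companion service is omitted here), and the service and image names are indicative only:

```bash
# Overlay network shared by the proxy and the services it exposes
docker network create --driver overlay proxy_net

# Only the proxy is reachable from the outside, and only on the HTTP/S ports
docker service create --name proxy --network proxy_net \
  -p 80:80 -p 443:443 \
  vfarcic/docker-flow-proxy

# The web server publishes no port itself; the proxy discovers it through the labels
docker service create --name web-server --network proxy_net \
  --label com.df.notify=true \
  --label com.df.servicePath=/ \
  --label com.df.port=80 \
  groupe3/web-server
```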
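Finally, a sketch of the persistent-storage workaround, independent of the sketches above: the Cinder-backed volume is created through Rex-Ray and the S database service is pinned to a single node so that the volume never has to move. The driver name, hostname and mount path are assumptions:

```bash
# Create (or reuse) a Cinder-backed volume through the Rex-Ray volume driver
# ("rexray" is assumed here; the exact driver name depends on how Rex-Ray is installed)
docker volume create --driver rexray s-db-data

# Pin the S database to the Swarm "Master" so the volume always stays on the same host
docker service create --name s-database \
  --constraint 'node.hostname == swarm-master' \
  --mount type=volume,source=s-db-data,target=/var/lib/mysql,volume-driver=rexray \
  -e MYSQL_ROOT_PASSWORD=change-me \
  mysql:5.7
```

This pinning is exactly what makes the fault tolerance of the S database suboptimal: if that host goes down, the service cannot be rescheduled elsewhere until the volume is reattached.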
Application:
- Flask: we took inspiration from the W service and used Flask for the entirety of our application. A big advantage is that it integrates well with Docker, with public images on Docker Hub that make it even easier for us (especially tiangolo/uwsgi-nginx-flask; see the sketch after this list).
- Nginx and uWSGI: included in the image above and necessary for the Flask-based application to have satisfactory performance.
- MySQL: we used a simple MySQL database, the S database, to store each customer's status (whether they have already played or not).
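As an illustration, here is a minimal sketch of how one of the Flask services can be packaged on top of the public tiangolo/uwsgi-nginx-flask image, which bundles Nginx and uWSGI around the application. The directory layout, image tag and Python version are assumptions rather than our exact setup:

```bash
# The base image expects the Flask application under /app (main.py by default)
cat > Dockerfile <<'EOF'
FROM tiangolo/uwsgi-nginx-flask:python2.7
COPY ./app /app
EOF

docker build -t groupe3/s-service .
```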
- Step 1: a snapshot of an instance with Docker and every Docker image needed by the application is generated (the three steps are sketched with commands after this list).
- Step 2: using Docker-Machine, more instances are created from this snapshot.
- Step 3: the first instance to be created initiates a Swarm, and the others join it. The services are then launched.
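These three steps roughly map to the commands sketched below; every name (template, snapshot, flavor, network, node...) is a placeholder, the real values live in our scripts:

```bash
# Step 1: boot an instance from the Heat template (Ubuntu + Docker + application images),
# then snapshot it so it can be cloned quickly.
openstack stack create -t docker-node.yaml docker-base
openstack server image create --name docker-snapshot docker-base-instance

# Step 2: create more instances from this snapshot with Docker-Machine's Openstack driver
# (credentials are read from the sourced OPENRC environment). Repeat for node-2, node-3...
docker-machine create --driver openstack \
  --openstack-image-name docker-snapshot \
  --openstack-flavor-name m1.small \
  --openstack-net-name "$NETWORK" \
  --openstack-ssh-user "$SSH_USER" \
  node-1

# Step 3: the first node initiates the Swarm, the others join it with the token it hands out.
docker-machine ssh node-1 "docker swarm init --advertise-addr $(docker-machine ip node-1)"
TOKEN=$(docker-machine ssh node-1 "docker swarm join-token -q worker")
docker-machine ssh node-2 "docker swarm join --token $TOKEN $(docker-machine ip node-1):2377"
```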
##Requirements
- A Bastion VM, accessible from the outside with a floating IP.
- An Openstack private network (replace the NETWORK variable below with yours!).
- An Ubuntu-based image, tested with 14.04; it should work with 16.04. Replace SSH_USER if necessary.
- A V2 OPENRC file. Get yours at "Access and Security -> API Access" on the Openstack dashboard.
Basically, these requirements are the same steps we followed during the 2nd or 3rd lab session to set up the Bastion VM; the example after this list shows what has to be adapted.
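As a concrete example, this is roughly what has to be adapted before launching anything; the values and the OPENRC file name are placeholders:

```bash
# Placeholders to adapt before running the deployment scripts
NETWORK="my-private-network"   # your Openstack private network
SSH_USER="ubuntu"              # default user of the Ubuntu 14.04/16.04 image

# V2 OPENRC file downloaded from "Access and Security -> API Access"
source ./my-project-openrc-v2.sh
```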
##How to deploy
- SSH into the Bastion VM.
- git clone the project.
- Run the script ./init.sh. Provide the OPENRC file as an argument and, optionally, the number of workers and managers (see the example below).
You'll be asked for your Openstack password at the beginning; the rest is 100% automated.
The script takes roughly 25 minutes to complete.
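A minimal walkthrough of the steps above; the repository URL, the OPENRC file name and the exact argument order of init.sh are assumptions here, check the script itself:

```bash
# From your own machine, reach the Bastion VM through its floating IP
ssh ubuntu@<bastion-floating-ip>

# On the Bastion VM: fetch the project and launch the deployment
git clone <project-repository-url>
cd <project-directory>

# Assumed invocation: OPENRC file first, then (optionally) the number of workers and managers
./init.sh ./my-project-openrc-v2.sh 3 3
```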