```
                       888
                       888
                       888
.d8888b   8888b.   .d88888 .d8888b   .d88b.  888d888 888  888  .d88b.  888d888
88K          "88b d88" 888 88K      d8P  Y8b 888P"   888  888 d8P  Y8b 888P"
"Y8888b. .d888888 888  888 "Y8888b. 88888888 888     Y88  88P 88888888 888
     X88 888  888 Y88b 888      X88 Y8b.     888      Y8bd8P  Y8b.     888
 88888P' "Y888888  "Y88888  88888P'  "Y8888  888       Y88P    "Y8888  888
```
This repository contains Ansible playbooks and relevant documentation for the IT infrastructure at Study Association Sticky. The name of this repository, and the association's production server, is a reference to this Twitter account.
In this README, we try to give some reasons as to why the project is organised the way it is. Don't try to reinvent the wheel, but feel free to periodically reconsider the way things are done!
It was important for this committee to make Sticky's infrastructure easy to convey to new members. This matters because the composition of committees often changes a lot from year to year, and at least some of the infrastructure is critical for the association to function. After a lot of discussion, we decided to use configuration management to set up the current server, which replaced the previous "snowflake" server, skyblue. Because of experience present in the committee, Ansible was chosen for the job. This makes it possible to script everything that is needed to turn a cleanly OS-imaged server into Sticky's production server.
The code in this repository depends on the following software:

- nix
- Two Discord webhooks, which should be put in `ansible/.env`
Furthermore, the Ansible playbooks assume they are deployed on a vanilla Ubuntu 20.04 host.
The `ansible/` directory in this repository contains Ansible playbooks, which are a runnable specification of the commands that should be executed when configuring a Linux system.
You might notice this project doesn't follow all of Ansible's guidelines regarding the structure of (a set of) playbooks. This is partly because following them would add complexity that has few benefits when your infrastructure consists of only one or two servers.
There is one main playbook, `ansible/main.yml`, that includes many files consisting of tasks. This is the playbook that sets up an entire server hosting all the applications Sticky self-hosts. It is completely idempotent, which means it can be run multiple times on the same server without any unintended consequences. You can't deploy this playbook on a host without bootstrapping the host first. Our other playbooks are stored in `ansible/playbooks/`. These are more specific and consist of tasks that are not necessarily idempotent. They are used to, for example, create a new admin user in Koala, or to restart Koala or nginx.
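To illustrate what "idempotent" means here: a task describes a desired state instead of an action, so re-running it changes nothing once that state is reached. Below is a minimal sketch in the spirit of this repository's tasks; the task names and package are made up and not taken from `main.yml`.

```yaml
# Hypothetical sketch of idempotent tasks: safe to run any number of times,
# because each task describes a state rather than an action.
- name: Ensure nginx is installed
  apt:
    name: nginx
    state: present

- name: Ensure nginx is enabled and running
  service:
    name: nginx
    state: started
    enabled: true
```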
Templates that have to be parsed and copied to the host reside in the `ansible/templates` directory. That directory follows the hierarchy of the root filesystem on the host, so a template that has to be placed in `/home/koala` on the host resides in `ansible/templates/home/koala` in the repository. The file names should also be the same as their target name where possible. For more information, look at our Ansible styleguide.
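For example, a task applying such a template could look like the following sketch; the file name `.env.j2`, the owner, and the mode are illustrative assumptions, not taken from this repository.

```yaml
# Hypothetical sketch: the source path under ansible/templates/ mirrors
# the destination path on the host's root filesystem.
- name: Place Koala environment file
  template:
    src: templates/home/koala/.env.j2
    dest: /home/koala/.env
    owner: koala
    group: koala
    mode: "0600"
```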
All variables we use are stored in a separate repository, sadserver-secrets, because they include encrypted secrets. That repository is referenced as a submodule in `ansible/group_vars/`, and an example of this structure can be found in `ansible/group_vars_example/`. This folder contains a subfolder for each inventory group, in addition to the general `all`. The appropriate folders are automatically loaded by Ansible when running a playbook. These folders list:
- The Linux users, either admins or committees (`ansible/group_vars/all/users.yml`)
- The SSH keys that can be used to SSH in with these accounts
- The websites we host (`ansible/group_vars/all/websites.yml`)
- Application-specific variables (`ansible/group_vars/all|production|staging/vars.yml`)
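As an indication of the shape of these files, an entry in `users.yml` could look like the sketch below; the field names are assumptions, so check sadserver-secrets or `ansible/group_vars_example/` for the real structure.

```yaml
# Hypothetical sketch of a users.yml entry; field names are assumed.
users:
  - name: jdoe
    admin: true
    ssh_keys:
      - "ssh-ed25519 AAAA... jdoe@svsticky.nl"
```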
Our secrets are stored in `ansible/group_vars/all|production|staging/vault.yml`, per environment. These files are all encrypted using Ansible Vault. These secrets should all be cycled when someone's access to the corresponding passphrase is revoked.
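A common Ansible Vault convention, shown here with made-up variable names, is to keep the encrypted value in `vault.yml` and reference it from the plaintext `vars.yml`, so the variable stays discoverable with `grep` while its value stays encrypted:

```yaml
# group_vars/production/vars.yml (plaintext; variable name is an example)
koala_db_password: "{{ vault_koala_db_password }}"

# group_vars/production/vault.yml (encrypted with Ansible Vault)
vault_koala_db_password: "example-secret-value"
```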
In `docs/` you can find all documentation that has been written about this project, apart from this README, as well as styleguides and templates.
There is currently one server, `sadserver.svsticky.nl`, used in production, and one server, `dev.svsticky.nl`, used as a staging server. The staging server enables the administrators to test changes (to either the infrastructure or specific applications) in an environment that mimics the production environment as much as possible, while existing completely independently of the production environment. Ansible uses an inventory file to list all hosts, which is kept in this repository in `ansible/hosts`.
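As a sketch, an inventory with these two hosts could look like the following, written in Ansible's YAML inventory format; the actual `ansible/hosts` file may be written differently (for example in INI format).

```yaml
# Hypothetical inventory sketch; see ansible/hosts for the real file.
all:
  children:
    production:
      hosts:
        sadserver.svsticky.nl:
    staging:
      hosts:
        dev.svsticky.nl:
```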
To make a host ready to run regular Ansible playbooks on, a special playbook should be used that bootstraps the server. It installs Ansible's dependencies and sets up a non-root user for Ansible to use. A playbook should be applied to a host by means of a wrapper script around `ansible-playbook`, which, among a few other things, posts progress notifications to the committee's Discord team.

After the bootstrapping, the main playbook can be run to completely set up the server. The main playbook can be applied in the same way as the bootstrap playbook. When this has successfully finished, a server exists that matches one of the environments.
These are the steps to follow to set up a new development or production server. Some of the steps require you to specify which of the two you are setting up.
If you want to migrate from an existing server, a few additional tasks should be performed, which are explained in detail in this guide.
- Create a droplet (Ansible assumes Ubuntu 18.04) named either `dev.svsticky.nl` (staging) or `svsticky.nl` (production).
  - For staging: 2 GB RAM on 1 CPU should suffice.
  - For production: 4 GB RAM on 2 CPUs is the standard. Make sure IPv6 is enabled.
  - The server should run in the AMS region whenever possible.
  - Make sure your SSH key is sent to the server, as it is needed in a later step.
- Assign a floating IP to the new droplet. Floating IPs are already in DNS, which avoids DNS cache problems.
  - In the sidebar, click "Networking".
  - Click on "Floating IPs".
  - Next to the floating IP, click "Assign a droplet" (if you didn't delete the droplet: click "More > Reassign").
- Install the Nix package manager via the steps on this page: https://nixos.org/download.html
- Download the repository and enter the folder:

  ```
  $ git clone https://github.com/svsticky/sadserver
  $ cd sadserver/ansible
  ```
- Copy `sample.env` to `.env` and fill in the missing Discord webhooks. You will need to log in to Bitwarden as `itcrowd@svsticky.nl` to read this secret. (If you find the `slack_notifications_webhook_url`: do not change the name of this secret, for legacy reasons; Ansible's code depends on the name.) A sketch of what `.env` could contain is shown after these steps.
- To install all required dependencies, run the following command to enter a nix shell:

  ```
  $ nix-shell
  ```

  The dependencies are only installed the first time.
- To run the deploy script, an active session with Bitwarden is required. To do this, run

  ```
  $ bw login
  ```

  and follow the instructions. The required account is managed by the IT Crowd; you will have these credentials if you are a member of the IT Crowd.
- Bootstrap the host for either production or staging:

  ```
  $ ./deploy.py --host=(production|staging) --playbook playbooks/bootstrap-new-host.yml
  ```

  You do not need to enter a SUDO password, but you do need to enter the correct Vault password (it can usually be found in Bitwarden). On staging, if the playbook fails immediately, you might have an old SSH key. To solve this, type:

  ```
  $ ssh root@dev.svsticky.nl
  ```

  SSH will guide you the rest of the way.
- Run the main playbook for either production or staging:

  ```
  $ ./deploy.py --host=(production|staging)
  ```

  Enter the password from the previous step when prompted.
- To create a new database and start Koala, you will also need to run these two playbooks:

  ```
  $ ./deploy.py --host=(production|staging) --playbook playbooks/koala/db-setup.yml
  $ ./deploy.py --host=(production|staging) --playbook playbooks/koala/start.yml
  ```
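As mentioned in the step about `sample.env`, the `.env` file could look roughly like the sketch below. Only `slack_notifications_webhook_url` is named in this README; the URL is a placeholder, so check `sample.env` for the real keys.

```
# Hypothetical sketch of ansible/.env; check sample.env for the real keys.
# Do not rename slack_notifications_webhook_url; Ansible depends on it.
slack_notifications_webhook_url="https://discord.com/api/webhooks/..."
# The second required webhook variable is listed in sample.env.
```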
For help and questions, contact the relevant committee -- at the time of writing, this is the IT Crowd.
Godspeed!