An Ansible role to back up MySQL or MariaDB databases to an S3 bucket or SFTP server.
## Requirements

The following commands must be installed and executable by the user running the role:

- `s3cmd`
- `mysql`
- `scp`
- `ssh`

Please see s3cmd's documentation on how to install the command.
## Dependencies

The following Ansible collections must be installed:

- `cloud.common`
- `amazon.aws`
- `community.general`
- `community.mysql`
## Role Variables

This role requires one dictionary as configuration, `mysql_backup`:

```yaml
mysql_backup:
  s3cmd: "/usr/local/bin/s3cmd"
  debug: true
  stopOnFailure: false
  sources: {}
  remotes: {}
  backups: []
```

Where:

- `s3cmd` is the full path to the `s3cmd` executable. Optional, defaults to `s3cmd`.
- `debug` is `true` to enable debugging output. Optional, defaults to `false`.
- `stopOnFailure` is `true` to stop the entire role if any one backup fails. Optional, defaults to `false`.
- `sources` is a dictionary of database sources. Required.
- `remotes` is a dictionary of remote upload locations. Required.
- `backups` is a list of backups to perform. Required.
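The defaulting behavior described above can be sketched in Python. `apply_defaults` is a hypothetical helper for illustration, not part of the role:

```python
def apply_defaults(mysql_backup: dict) -> dict:
    """Merge the role's documented defaults into a user-supplied mysql_backup dict."""
    defaults = {"s3cmd": "s3cmd", "debug": False, "stopOnFailure": False}
    merged = {**defaults, **mysql_backup}
    # sources, remotes, and backups have no defaults; they are required.
    for key in ("sources", "remotes", "backups"):
        if key not in merged:
            raise ValueError(f"mysql_backup.{key} is required")
    return merged
```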
### Sources

In this role, "sources" specify the database servers from which to generate backups. Each must have a unique key, which is later used in the `mysql_backup.backups` list.

```yaml
mysql_backup:
  sources:
    my-prod-db:
      host: "mysql.example.com"
      port: 3306
      usernameFile: "/path/to/username.txt"
      passwordFile: "/path/to/password.txt"
      tlsCertFile: "/path/to/tls.cert"
      tlsKeyFile: "/path/to/tls.key"
      retryCount: 3
      retryDelay: 30
```

Where, in each entry:

- `host` is the hostname of the database server.
- `port` is the port with which to connect to the database.
- `usernameFile` is the path to a file containing the username. Optional if `username` is specified.
- `username` is the username with which to connect to the database. Ignored if `usernameFile` is specified.
- `passwordFile` is the path to a file containing the password. Optional if `password` is specified.
- `password` is the password with which to connect to the database. Ignored if `passwordFile` is specified.
- `tlsCertFile` is the path to the TLS certificate necessary to connect to the database. Optional.
- `tlsKeyFile` is the path to the TLS certificate key necessary to connect to the database. Optional.
- `retryCount` is the number of times to retry backup commands if they fail. Optional, defaults to `3`.
- `retryDelay` is the time in seconds to wait before retrying a failed backup command. Optional, defaults to `30`.
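The `usernameFile`/`username` (and `passwordFile`/`password`) precedence rules above amount to "the file wins". A minimal sketch, using a hypothetical `resolve_credential` helper rather than the role's actual tasks:

```python
def resolve_credential(entry: dict, name: str) -> str:
    """Resolve a credential, preferring <name>File over a literal <name> value."""
    file_key = f"{name}File"
    if file_key in entry:
        # The file's contents are used; any literal value is ignored.
        with open(entry[file_key]) as f:
            return f.read().strip()
    if name in entry:
        return entry[name]
    raise KeyError(f"source must define {name} or {file_key}")
```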
### Remotes

In this role, "remotes" are upload destinations for backups. This role supports S3 and SFTP remotes. Each remote must have a unique key, which is later used in the `mysql_backup.backups` list.

```yaml
mysql_backup:
  remotes:
    example-s3-bucket:
      type: "s3"
      bucket: "my-s3-bucket"
      provider: "AWS"
      accessKeyFile: "/path/to/aws-s3-key.txt"
      secretKeyFile: "/path/to/aws-s3-secret.txt"
      hostBucket: "my-example-bucket.s3.example.com"
      s3Url: "https://my-example-bucket.s3.example.com"
      region: "us-east-1"
    sftp.example.com:
      type: "sftp"
      host: "sftp.example.com"
      user: "example_user"
      keyFile: "/config/id_example_sftp"
      pubKeyFile: "/config/id_example_sftp.pub"
```

For `s3` type remotes:

- `bucket` is the name of the S3 bucket.
- `provider` is the S3 provider. See rclone's S3 documentation on `--s3-provider` for possible values. Optional, defaults to `AWS`.
- `accessKeyFile` is the path to a file containing the access key. Optional if `accessKey` is specified.
- `accessKey` is the value of the access key necessary to access the bucket. Ignored if `accessKeyFile` is specified.
- `secretKeyFile` is the path to a file containing the secret key. Optional if `secretKey` is specified.
- `secretKey` is the value of the secret key necessary to access the bucket. Ignored if `secretKeyFile` is specified.
- `region` is the AWS region in which the bucket resides. Required if using AWS S3; may be optional for other providers.
- `endpoint` is the S3 endpoint to use. Optional if using AWS, required for other providers.

For `sftp` type remotes:

- `host` is the hostname of the SFTP server. Required.
- `user` is the username necessary to log in to the SFTP server. Required.
- `keyFile` is the path to a file containing the SSH private key. Required.
- `pubKeyFile` is the path to a file containing the SSH public key. Required.
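The required/optional rules above can be read as a per-type validation table. The sketch below is a hypothetical helper covering only the keys the documentation marks as required, not the role's actual validation:

```python
def validate_remote(remote: dict) -> None:
    """Check a remote against the documented required keys (simplified sketch)."""
    rtype = remote.get("type")
    if rtype == "s3":
        if "bucket" not in remote:
            raise ValueError("s3 remote requires a bucket")
        # Either the literal value or the *File variant satisfies the requirement.
        for pair in (("accessKey", "accessKeyFile"), ("secretKey", "secretKeyFile")):
            if not any(k in remote for k in pair):
                raise ValueError(f"s3 remote requires one of {pair}")
    elif rtype == "sftp":
        missing = {"host", "user", "keyFile"} - remote.keys()
        if missing:
            raise ValueError(f"sftp remote missing required keys: {sorted(missing)}")
    else:
        raise ValueError(f"unsupported remote type: {rtype}")
```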
### Backups

The `mysql_backup.backups` list specifies the database backups to perform, referencing the `mysql_backup.sources` and `mysql_backup.remotes` sections for connectivity details.

```yaml
mysql_backup:
  backups:
    - name: "example.com database"
      source: "my-prod-db"
      database: "my-site-live"
      path: "path/in/source/bucket"
      disabled: false
      targets: []
```

Where:

- `name` is the display name of the backup. Optional, but makes the logs easier to read.
- `source` is the key under `mysql_backup.sources` from which to generate the backup. Required.
- `database` is the name of the database to back up. Required.
- `path` is the path in the source bucket from which to get files. Optional.
- `disabled` is `true` to disable (skip) the backup. Optional, defaults to `false`.
- `targets` is a list of remotes and additional destination information about where to upload backups. Required.
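Since both `source` and `disabled` determine whether a backup runs, the selection logic can be sketched as follows (a hypothetical helper, not the role's actual task list):

```python
def enabled_backups(mysql_backup: dict) -> list:
    """Return backups that are not disabled and reference a known source."""
    selected = []
    for backup in mysql_backup.get("backups", []):
        if backup.get("disabled", False):
            continue  # disabled: true skips the backup entirely
        if backup["source"] not in mysql_backup["sources"]:
            raise KeyError(f"unknown source: {backup['source']}")
        selected.append(backup)
    return selected
```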
### Backup targets

Backup targets reference a key in `mysql_backup.remotes`, and combine it with additional information used to upload that specific backup.

```yaml
mysql_backup:
  backups:
    - name: "example.com database"
      source: "my-prod-db"
      database: "my-site-live"
      path: "path/in/target/bucket"
      disabled: false
      targets:
        - remote: "example-s3-bucket"
          path: "example.com/database"
          disabled: true
        - remote: "sftp.example.com"
          path: "backups/example.com/database"
          disabled: false
```

Where:

- `remote` is the key under `mysql_backup.remotes` to use when uploading the backup. Required.
- `path` is the path on the remote to which to upload the backup. Optional.
- `disabled` is `true` to skip uploading to the specified `remote`. Optional, defaults to `false`.
### Healthcheck pings

When a backup completes, you have the option to ping a URL via HTTP:

```yaml
mysql_backup:
  backups:
    - name: "example.com database"
      source: "my-prod-db"
      database: "my-site-live"
      path: "path/in/target/bucket"
      healthcheckUrl: "https://pings.example.com/path/to/service"
      disabled: false
      targets:
        - remote: "example-s3-bucket"
          path: "example.com/database"
          disabled: true
        - remote: "sftp.example.com"
          path: "backups/example.com/database"
          disabled: false
```

Where:

- `healthcheckUrl` is the URL to ping when the backup completes successfully. Optional.
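The ping only fires when a URL is configured and the backup succeeds. A sketch of that guard, assuming a plain HTTP GET (the exact request the role makes may differ):

```python
import urllib.request

def ping_healthcheck(backup: dict, succeeded: bool) -> bool:
    """Ping backup['healthcheckUrl'] if the backup succeeded.

    Returns True if a ping was attempted, False otherwise."""
    url = backup.get("healthcheckUrl")
    if not url or not succeeded:
        return False
    urllib.request.urlopen(url, timeout=10)  # fire a simple GET at the URL
    return True
```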
### Retaining multiple backups

Backups are uploaded to the remote with the name `<database_name>.<host>.<port>-0.sql.gz`. Often, you'll want to retain previous backups in case an older backup can aid in research or recovery. This role supports retaining and rotating multiple backups using the `retainCount` key.

```yaml
mysql_backup:
  backups:
    - name: "example.com database"
      source: "my-prod-db"
      database: "my-site-live"
      targets:
        - remote: "example-s3-bucket"
          path: "example.com/database"
          retainCount: 3
          disabled: true
        - remote: "sftp.example.com"
          path: "backups/example.com/database"
          retainCount: 3
          disabled: false
```

Where:

- `retainCount` is the total number of backups to retain in the directory. Optional, defaults to `1`, or no rotation.
During a backup, if `retainCount` is set:

1. The backup ending in `<retainCount - 1>.sql.gz` is deleted.
2. Starting with `<retainCount - 2>.sql.gz`, each backup is renamed, incrementing the ending index.
3. The new backup is uploaded with a `0` index as `<database_name>.<host>.<port>-0.sql.gz`.

This feature works with both S3 and SFTP remotes.
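The rotation steps above can be expressed as a plan of delete, rename, and upload operations. `rotation_plan` is a hypothetical illustration of the scheme, not the role's code:

```python
def rotation_plan(base: str, retain_count: int) -> list:
    """Plan rotation steps for backups named f"{base}-<index>.sql.gz"."""
    # Step 1: the oldest backup (highest index) is deleted.
    ops = [("delete", f"{base}-{retain_count - 1}.sql.gz")]
    # Step 2: remaining backups shift up by one index, oldest first.
    for i in range(retain_count - 2, -1, -1):
        ops.append(("rename", f"{base}-{i}.sql.gz", f"{base}-{i + 1}.sql.gz"))
    # Step 3: the new backup takes index 0.
    ops.append(("upload", f"{base}-0.sql.gz"))
    return ops
```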
### Retrying uploads

Sometimes an upload will fail due to transient network issues. You can control retry behavior using the `retries` and `retryDelay` keys on each target:

```yaml
mysql_backup:
  backups:
    - name: "example.com database"
      source: "my-prod-db"
      database: "my-site-live"
      targets:
        - remote: "example-s3-bucket"
          path: "example.com/database"
          retries: 3
          retryDelay: 30
        - remote: "sftp.example.com"
          path: "backups/example.com/database"
          retries: 3
          retryDelay: 30
```

Where:

- `retries` is the total number of retries to perform if the upload fails. Optional, defaults to no retries.
- `retryDelay` is the number of seconds to wait between retries. Optional, defaults to no delay.
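The `retries`/`retryDelay` behavior amounts to a simple retry loop. A sketch under those assumptions, with the hypothetical `upload` callable standing in for the actual s3cmd or scp invocation:

```python
import time

def upload_with_retries(upload, retries: int = 0, retry_delay: int = 0):
    """Call upload(), retrying up to `retries` more times on failure."""
    attempts = retries + 1  # the first attempt plus `retries` retries
    for attempt in range(attempts):
        try:
            return upload()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries; surface the failure
            time.sleep(retry_delay)
```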
## Example Playbook

```yaml
- hosts: servers
  vars:
    mysql_backup:
      sources:
        my-prod-db:
          host: "mysql.example.com"
          port: 3306
          usernameFile: "/path/to/username.txt"
          passwordFile: "/path/to/password.txt"
          tlsCertFile: "/path/to/tls.cert"
          tlsKeyFile: "/path/to/tls.key"
          retryCount: 3
          retryDelay: 30
      remotes:
        example-s3-bucket:
          type: "s3"
          bucket: "my-s3-bucket"
          provider: "AWS"
          accessKeyFile: "/path/to/aws-s3-key.txt"
          secretKeyFile: "/path/to/aws-s3-secret.txt"
          hostBucket: "my-example-bucket.s3.example.com"
          s3Url: "https://my-example-bucket.s3.example.com"
          region: "us-east-1"
        sftp.example.com:
          type: "sftp"
          host: "sftp.example.com"
          user: "example_user"
          keyFile: "/config/id_example_sftp"
          pubKeyFile: "/config/id_example_sftp.pub"
      backups:
        - name: "example.com database"
          source: "my-prod-db"
          database: "my-site-live"
          path: "path/in/target/bucket"
          healthcheckUrl: "https://pings.example.com/path/to/service"
          disabled: false
          targets:
            - remote: "example-s3-bucket"
              path: "example.com/database"
              retainCount: 3
              disabled: true
            - remote: "sftp.example.com"
              path: "backups/example.com/database"
              retainCount: 3
              disabled: false
  roles:
    - { role: ten7.mysql_backup }
```

## License

GPL v3

## Author Information

This role was created by TEN7.