MySQL

MySQL is a fast, reliable, scalable, and easy to use open-source relational database system. MySQL Server is intended for mission-critical, heavy-load production systems as well as for embedding into mass-deployed software.

TL;DR;

$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install my-release bitnami/mysql

Introduction

This chart bootstraps a MySQL replication cluster deployment on a Kubernetes cluster using the Helm package manager.

Bitnami charts can be used with Kubeapps for deployment and management of Helm Charts in clusters. This Helm chart has been tested on top of Bitnami Kubernetes Production Runtime (BKPR). Deploy BKPR to get automated TLS certificates, logging and monitoring for your applications.

Prerequisites

  • Kubernetes 1.12+
  • Helm 2.12+ or Helm 3.0-beta3+
  • PV provisioner support in the underlying infrastructure

Installing the Chart

To install the chart with the release name my-release:

$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install my-release bitnami/mysql

These commands deploy MySQL on the Kubernetes cluster in the default configuration. The Parameters section lists the parameters that can be configured during installation.

Tip: List all releases using helm list

Uninstalling the Chart

To uninstall/delete the my-release deployment:

$ helm delete my-release

The command removes all the Kubernetes components associated with the chart and deletes the release.

Parameters

The following table lists the configurable parameters of the MySQL chart and their default values.

Parameter Description Default
global.imageRegistry Global Docker image registry nil
global.imagePullSecrets Global Docker registry secret names as an array [] (does not add image pull secrets to deployed pods)
global.storageClass Global storage class for dynamic provisioning nil
image.registry MySQL image registry docker.io
image.repository MySQL Image name bitnami/mysql
image.tag MySQL Image tag {TAG_NAME}
image.pullPolicy MySQL image pull policy IfNotPresent
image.pullSecrets Specify docker-registry secret names as an array [] (does not add image pull secrets to deployed pods)
image.debug Specify if debug logs should be enabled false
nameOverride String to partially override mysql.fullname template with a string (will prepend the release name) nil
fullnameOverride String to fully override mysql.fullname template with a string nil
clusterDomain Kubernetes DNS Domain name to use cluster.local
volumePermissions.enabled Enable init container that changes volume permissions in the data directory (for cases where the default k8s runAsUser and fsUser values do not work) false
volumePermissions.image.registry Init container volume-permissions image registry docker.io
volumePermissions.image.repository Init container volume-permissions image name bitnami/minideb
volumePermissions.image.tag Init container volume-permissions image tag buster
volumePermissions.image.pullPolicy Init container volume-permissions image pull policy Always
volumePermissions.resources Init container resource requests/limit nil
existingSecret Specify the name of an existing secret for password details (root.password, db.password, replication.password will be ignored and picked up from this secret). The secret has to contain the keys mysql-root-password, mysql-replication-password and mysql-password. nil
root.password Password for the root user random 10 character alphanumeric string
root.forcePassword Force users to specify a password. That is required for 'helm upgrade' to work properly false
root.injectSecretsAsVolume Mount admin user password as a file instead of using an environment variable false
db.user Username of new user to create (should be different from replication.user) nil
db.password Password for the new user random 10 character alphanumeric string if db.user is defined
db.name Name for new database to create my_database
db.forcePassword Force users to specify a password. That is required for 'helm upgrade' to work properly false
db.injectSecretsAsVolume Mount user password as a file instead of using an environment variable false
replication.enabled MySQL replication enabled true
replication.user MySQL replication user (should be different from db.user) replicator
replication.password MySQL replication user password random 10 character alphanumeric string
replication.forcePassword Force users to specify a password. That is required for 'helm upgrade' to work properly false
replication.injectSecretsAsVolume Mount user password as a file instead of using an environment variable false
initdbScripts Dictionary of initdb scripts nil
initdbScriptsConfigMap ConfigMap with the initdb scripts (Note: Overrides initdbScripts) nil
serviceAccount.create Specifies whether a ServiceAccount should be created true
serviceAccount.name If serviceAccount.create is enabled, what should the serviceAccount name be - otherwise defaults to the fullname nil
master.config Config file for the MySQL Master server default values in the values.yaml file
master.updateStrategy.type Master statefulset update strategy policy RollingUpdate
master.podAnnotations Pod annotations for master nodes {}
master.affinity Map of node/pod affinities for master nodes {} (The value is evaluated as a template)
master.nodeSelector Node labels for pod assignment on master nodes {} (The value is evaluated as a template)
master.tolerations Tolerations for pod assignment on master nodes [] (The value is evaluated as a template)
master.securityContext.enabled Enable security context for master nodes true
master.securityContext.fsGroup Group ID for the master nodes' containers 1001
master.securityContext.runAsUser User ID for the master nodes' containers 1001
master.containerSecurityContext Container security context for master nodes' containers {}
master.resources CPU/Memory resource requests/limits for master nodes' containers {}
master.livenessProbe.enabled Turn on and off liveness probe (master nodes) true
master.livenessProbe.initialDelaySeconds Delay before liveness probe is initiated (master nodes) 120
master.livenessProbe.periodSeconds How often to perform the probe (master nodes) 10
master.livenessProbe.timeoutSeconds When the probe times out (master nodes) 1
master.livenessProbe.successThreshold Minimum consecutive successes for the probe (master nodes) 1
master.livenessProbe.failureThreshold Minimum consecutive failures for the probe (master nodes) 3
master.readinessProbe.enabled Turn on and off readiness probe (master nodes) true
master.readinessProbe.initialDelaySeconds Delay before readiness probe is initiated (master nodes) 30
master.readinessProbe.periodSeconds How often to perform the probe (master nodes) 10
master.readinessProbe.timeoutSeconds When the probe times out (master nodes) 1
master.readinessProbe.successThreshold Minimum consecutive successes for the probe (master nodes) 1
master.readinessProbe.failureThreshold Minimum consecutive failures for the probe (master nodes) 3
master.extraEnvVars Array containing extra env vars to configure MySQL master replicas nil
master.extraEnvVarsCM Configmap containing extra env vars to configure MySQL master replicas nil
master.extraEnvVarsSecret Secret containing extra env vars to configure MySQL master replicas nil
master.persistence.enabled Enable persistence using a PersistentVolumeClaim (master nodes) true
master.persistence.mountPath Configure PersistentVolumeClaim mount path (master nodes) /bitnami/mysql
master.persistence.annotations Persistent Volume Claim annotations (master nodes) {}
master.persistence.storageClass Persistent Volume Storage Class (master nodes) ``
master.persistence.accessModes Persistent Volume Access Modes (master nodes) [ReadWriteOnce]
master.persistence.size Persistent Volume Size (master nodes) 8Gi
master.persistence.existingClaim Provide an existing PersistentVolumeClaim (master nodes) nil
slave.replicas Desired number of slave replicas 1
slave.updateStrategy.type Slave statefulset update strategy policy RollingUpdate
slave.podAnnotations Pod annotations for slave nodes {}
slave.affinity Map of node/pod affinities for slave nodes {} (The value is evaluated as a template)
slave.nodeSelector Node labels for pod assignment on slave nodes {} (The value is evaluated as a template)
slave.tolerations Tolerations for pod assignment on slave nodes [] (The value is evaluated as a template)
slave.extraEnvVars Array containing extra env vars to configure MySQL slave replicas nil
slave.extraEnvVarsCM ConfigMap containing extra env vars to configure MySQL slave replicas nil
slave.extraEnvVarsSecret Secret containing extra env vars to configure MySQL slave replicas nil
slave.securityContext.enabled Enable security context for slave nodes true
slave.securityContext.fsGroup Group ID for the slave nodes' containers 1001
slave.securityContext.runAsUser User ID for the slave nodes' containers 1001
slave.containerSecurityContext Container security context for slave nodes' containers {}
slave.resources CPU/Memory resource requests/limits for slave nodes' containers {}
slave.livenessProbe.enabled Turn on and off liveness probe (slave nodes) true
slave.livenessProbe.initialDelaySeconds Delay before liveness probe is initiated (slave nodes) 120
slave.livenessProbe.periodSeconds How often to perform the probe (slave nodes) 10
slave.livenessProbe.timeoutSeconds When the probe times out (slave nodes) 1
slave.livenessProbe.successThreshold Minimum consecutive successes for the probe (slave nodes) 1
slave.livenessProbe.failureThreshold Minimum consecutive failures for the probe (slave nodes) 3
slave.readinessProbe.enabled Turn on and off readiness probe (slave nodes) true
slave.readinessProbe.initialDelaySeconds Delay before readiness probe is initiated (slave nodes) 30
slave.readinessProbe.periodSeconds How often to perform the probe (slave nodes) 10
slave.readinessProbe.timeoutSeconds When the probe times out (slave nodes) 1
slave.readinessProbe.successThreshold Minimum consecutive successes for the probe (slave nodes) 1
slave.readinessProbe.failureThreshold Minimum consecutive failures for the probe (slave nodes) 3
slave.persistence.enabled Enable persistence using a PersistentVolumeClaim (slave nodes) true
slave.persistence.mountPath Configure PersistentVolumeClaim mount path (slave nodes) /bitnami/mysql
slave.persistence.annotations Persistent Volume Claim annotations (slave nodes) {}
slave.persistence.storageClass Persistent Volume Storage Class (slave nodes) ``
slave.persistence.accessModes Persistent Volume Access Modes (slave nodes) [ReadWriteOnce]
slave.persistence.size Persistent Volume Size (slave nodes) 8Gi
slave.persistence.existingClaim Provide an existing PersistentVolumeClaim (slave nodes) nil
service.type Kubernetes service type ClusterIP
service.port MySQL service port 3306
service.nodePort.master Port to bind to for NodePort service type (master service) nil
service.nodePort.slave Port to bind to for NodePort service type (slave service) nil
service.loadBalancerIP.master Static IP Address to use for master LoadBalancer service type nil
service.loadBalancerIP.slave Static IP Address to use for slaves LoadBalancer service type nil
service.annotations Kubernetes service annotations {}
metrics.enabled Start a side-car prometheus exporter false
metrics.image Exporter image name bitnami/mysqld-exporter
metrics.imageTag Exporter image tag {TAG_NAME}
metrics.imagePullPolicy Exporter image pull policy IfNotPresent
metrics.resources Exporter resource requests/limit nil
metrics.service.type Kubernetes service type for MySQL Prometheus Exporter ClusterIP
metrics.service.port MySQL Prometheus Exporter service port 9104
metrics.service.annotations Prometheus exporter svc annotations {prometheus.io/scrape: "true", prometheus.io/port: "9104"}
metrics.serviceMonitor.enabled if true, creates a Prometheus Operator ServiceMonitor (also requires metrics.enabled to be true) false
metrics.serviceMonitor.namespace Optional namespace which Prometheus is running in nil
metrics.serviceMonitor.interval How frequently to scrape metrics (falls back to Prometheus' default if not set) nil
metrics.serviceMonitor.selector Defaults to kube-prometheus install (CoreOS recommended), but should be set according to the Prometheus installation nil

The above parameters map to the env variables defined in bitnami/mysql. For more information please refer to the bitnami/mysql image documentation.

Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,

$ helm install my-release \
  --set root.password=secretpassword,db.name=app_database \
    bitnami/mysql

The above command sets the MySQL root account password to secretpassword. Additionally, it creates a database named app_database.

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

$ helm install my-release -f values.yaml bitnami/mysql

Tip: You can use the default values.yaml

Configuration and installation details

It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.

Bitnami will release a new chart updating its containers if a new version of the main container is available, or if significant changes or critical vulnerabilities exist.

Production configuration

This chart includes a values-production.yaml file with parameters oriented to a production configuration, in comparison to the regular values.yaml. You can use this file instead of the default one (see the example after the list below).

  • Force users to specify a password:
- root.forcePassword: false
+ root.forcePassword: true

- db.forcePassword: false
+ db.forcePassword: true

- replication.forcePassword: false
+ replication.forcePassword: true
  • Desired number of slave replicas:
- slave.replicas: 1
+ slave.replicas: 2
  • Start a side-car prometheus exporter:
- metrics.enabled: false
+ metrics.enabled: true
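
To deploy with the production configuration, pass that file to helm install instead of the default values (a minimal sketch; the release name is illustrative):

$ helm install my-release -f values-production.yaml bitnami/mysql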

Change MySQL version

To modify the MySQL version used in this chart, you can specify a valid image tag using the image.tag parameter. For example, image.tag=X.Y.Z. This approach is also applicable to other images like exporters.
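
For example, to pin a specific image version at install time (a sketch; replace X.Y.Z with a valid tag from the bitnami/mysql image repository):

$ helm install my-release bitnami/mysql --set image.tag=X.Y.Z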

Initialize a fresh instance

The Bitnami MySQL image allows you to use your custom scripts to initialize a fresh instance. In order to execute the scripts, they must be located inside the chart folder files/docker-entrypoint-initdb.d so they can be consumed as a ConfigMap.

The allowed extensions are .sh, .sql and .sql.gz.
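
Alternatively, the initdbScripts parameter listed in the table above lets you provide the scripts inline in your values file. A minimal sketch (the script name and SQL statement are purely illustrative):

# Example values.yaml snippet; 'my_init_script.sql' and the statement are placeholders
initdbScripts:
  my_init_script.sql: |
    CREATE TABLE IF NOT EXISTS my_table (id INT PRIMARY KEY);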

Persistence

The Bitnami MySQL image stores the MySQL data and configurations at the /bitnami/mysql path of the container.

The chart mounts a Persistent Volume at this location. The volume is created using dynamic volume provisioning by default. Alternatively, an existing PersistentVolumeClaim can be defined.
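
For example, to reuse a PersistentVolumeClaim that already exists for the master node (a sketch; the claim name is a placeholder for your own PVC):

$ helm install my-release bitnami/mysql --set master.persistence.existingClaim=my-existing-pvc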

Adjust permissions of persistent volume mountpoint

As the image runs as non-root by default, it is necessary to adjust the ownership of the persistent volume so that the container can write data into it.

By default, the chart is configured to use Kubernetes Security Context to automatically change the ownership of the volume. However, this feature does not work in all Kubernetes distributions. As an alternative, this chart supports using an initContainer to change the ownership of the volume before mounting it in the final destination.

You can enable this initContainer by setting volumePermissions.enabled to true.
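
A minimal example enabling the init container at install time (the release name is illustrative):

$ helm install my-release bitnami/mysql --set volumePermissions.enabled=true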

Upgrading

It's necessary to set the root.password parameter when upgrading for the readiness/liveness probes to work properly. When you install this chart for the first time, the deployment notes display the credentials you must use under the 'Administrator credentials' section. Please note down the password and run the command below to upgrade your chart:

$ helm upgrade my-release bitnami/mysql --set root.password=[ROOT_PASSWORD]

Note: you need to substitute the placeholder [ROOT_PASSWORD] with the value obtained in the installation notes.

To 3.0.0

Backwards compatibility is not guaranteed unless you modify the labels used on the chart's deployments. Use the workaround below to upgrade from versions previous to 3.0.0. The following example assumes that the release name is mysql:

$ kubectl delete statefulset mysql-master --cascade=false
$ kubectl delete statefulset mysql-slave --cascade=false
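
With the old StatefulSets removed, the upgrade can then proceed as usual. A sketch, reusing the root password from the installation notes as described in the Upgrading section:

$ helm upgrade mysql bitnami/mysql --set root.password=[ROOT_PASSWORD]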
