
Failed to initialize root password and custom user/password. #229

Closed
bluven opened this issue Feb 13, 2019 · 11 comments
bluven commented Feb 13, 2019

I created a MySQL cluster, but the resulting cluster did not initialize root with the specified password, and the custom user/password were not created either.

mysql-operator: 0.2.2

mysql> select user, host from mysql.user;
+--------------+-----------+
| user         | host      |
+--------------+-----------+
| orchestrator | %         |
| repl_McYOj   | %         |
| exp_dnuFW    | 127.0.0.1 |
| root         | localhost |
+--------------+-----------+

cluster.yaml

apiVersion: mysql.presslabs.org/v1alpha1
kind: MysqlCluster
metadata:
  name: mc
spec:
  # image: percona:5.7.24
  replicas: 2
  secretName: my-secret

  mysqlConf:
    slow_query_log: "1"
    log_output: table
    long_query_time: "20"
    slow_query_log_file: /dev/stdout

  podSpec:
     resources:
       requests:
         memory: 256Mi
         cpu:    200m

  ## Specify additional volume specification
  volumeSpec:
     accessModes: [ "ReadWriteOnce" ]
     resources:
       requests:
         storage: 1Gi

secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: default
type: Opaque
data:
  # root password is required to be specified
  ROOT_PASSWORD: cm9vdA==
  USER: Ymx1dmVu
  PASSWORD: eWFuc2hpeWk=
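For reference, the base64 values in this secret decode to the credentials that show up in the environment dump below. A quick check with standard base64 tooling (nothing project-specific):

```shell
# Decode the values from secret.yaml to see what the operator receives:
echo 'cm9vdA==' | base64 -d; echo        # ROOT_PASSWORD -> root
echo 'Ymx1dmVu' | base64 -d; echo        # USER          -> bluven
echo 'eWFuc2hpeWk=' | base64 -d; echo    # PASSWORD      -> yanshiyi

# And the reverse, for writing a new secret by hand:
printf '%s' root | base64                # -> cm9vdA==
```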

Part of the mysql node environment:

MYSQL_PASSWORD=yanshiyi
MYSQL_OPERATOR_TEST_ORCHESTRATOR_PORT=tcp://10.102.244.101:80
MYSQL_ROOT_PASSWORD=root
MYSQL_USER=bluven
@bluven bluven changed the title Cannot initialize root password,neither custom user/password Failed to initialize root password and custom user/password. Feb 13, 2019
AMecea commented Feb 13, 2019

Hi @bluven, did you make sure that it's a new cluster? Once a cluster is initialized (has data in the PVC), it will not update the credentials anymore. User creation and initialization are done by the Docker image entrypoint, which runs only when the server starts for the first time. There is an open issue about this: #75

bluven commented Feb 14, 2019

@AMecea I'm pretty sure about that. Today I deployed a new k8s cluster and created a new MySQL cluster, and the same problem happened.

In fact, this problem happened all the time: I never had the root password or the custom user/password initialized. I just thought it was designed that way until I read the mysql-operator source code.

I don't know how this happened. I tried the percona images directly with Docker, and the initialization was successful. I'll try creating the MysqlCluster with my own image with a custom docker-entrypoint.sh to see what happens.

bluven commented Feb 14, 2019

@AMecea I guess I figured out what happened:

When a pod bootstraps, it may restart several times. But docker-entrypoint.sh contains a datadir check: if the datadir has already been created, it skips initialization.
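A minimal sketch of that check (the function and variable names here are mine, an approximation of the entrypoint's logic, not the real script):

```shell
# Approximation of the first-start check in the percona/mysql
# docker-entrypoint.sh: a populated datadir means "already initialized".
datadir_state() {
    if [ ! -d "$1/mysql" ]; then
        # First start: mysqld --initialize-insecure runs, after which the
        # entrypoint creates root and MYSQL_USER from the environment.
        echo "initialize"
    else
        # Datadir already populated (e.g. the pod restarted after a partial
        # init): the entire user-creation branch is skipped.
        echo "skip"
    fi
}

fresh=$(mktemp -d)           # empty volume -> first start
datadir_state "$fresh"       # prints "initialize"
mkdir -p "$fresh/mysql"      # simulate a partially initialized datadir
datadir_state "$fresh"       # prints "skip"
```

This is why a restart in the middle of initialization leaves the cluster with an empty root password: the second start looks initialized and never runs the user-creation steps.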

Here is the log of first time bootstrap:

Initializing database
2019-02-14T08:07:01.195849Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2019-02-14T08:07:15.352035Z 0 [Warning] InnoDB: New log files created, LSN=45790
2019-02-14T08:07:19.796664Z 0 [Warning] InnoDB: Creating foreign key constraint system tables.
2019-02-14T08:07:20.289642Z 0 [Warning] No existing UUID has been found, so we assume that this is the first time that this server has been started. Generating a new UUID: 8d401ccc-302f-11e9-bd7d-063a37abb753.
2019-02-14T08:07:20.371568Z 0 [Warning] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened.
2019-02-14T08:07:21.407190Z 0 [Warning] CA certificate ca.pem is self signed.
2019-02-14T08:07:24.054013Z 1 [Warning] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.

That is the entire log of the first bootstrap. It aborted for no apparent reason, and the rest of the initialization never ran. Then the pod restarted, and initialization was simply skipped.

Does this happen in your environment?

AMecea commented Feb 14, 2019

I have never seen the pod restart while it is in the Initializing database phase. Only the pt-heartbeat container restarts a few times before startup, but that should not affect the mysql container. What k8s version do you use, and which provider?

This situation is possible, and I think the operator should handle it somehow, though I'm not yet sure how. Meanwhile, let's debug your situation, because this should happen very rarely and only at cluster bootstrap.

Maybe more details would help here: what does kubectl describe pod <pod-name> output? The pod description shows whether the container was restarted or not. We can also discuss further, more easily, on the mysql-operator channel on the Kubernetes community Slack.

bluven commented Feb 15, 2019

My k8s cluster has only one node. I deployed it myself, without any cloud provider.

k8s version:

Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:04:45Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:43:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
kubectl get po -o wide
NAME                                                 READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE
mc-mysql-0                                           3/4     Running   4          15h   192.168.179.46   bluven-k8s   <none>
mysql-operator-test-856f45fb7d-2dvrr                 1/1     Running   12         25d   192.168.179.30   bluven-k8s   <none>
mysql-operator-test-orchestrator-0                   1/1     Running   214        29d   192.168.179.24   bluven-k8s   <none>
nfs-client-nfs-client-provisioner-5f84d8669d-m979f   1/1     Running   6          35d   192.168.179.33   bluven-k8s   <none>
[root@bluven-k8s ~]# kubectl describe po mc-mysql-0
Name:               mc-mysql-0
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               bluven-k8s/10.22.19.3
Start Time:         Fri, 15 Feb 2019 13:26:37 +0800
Labels:             app=mysql-operator
                    controller-revision-hash=mc-mysql-6c79f75bf
                    mysql_cluster=mc
                    statefulset.kubernetes.io/pod-name=mc-mysql-0
Annotations:        config_rev: 7868825
                    prometheus.io/port: 9125
                    prometheus.io/scrape: true
                    secret_rev: 7465970
Status:             Running
IP:                 192.168.179.58
Controlled By:      StatefulSet/mc-mysql
Init Containers:
  init-mysql:
    Container ID:  docker://710e3a200d63da91bb068ed7c20b711392a08973fb0ed75444c3c8a94067775a
    Image:         quay.io/presslabs/mysql-operator-sidecar:latest
    Image ID:      docker-pullable://quay.io/presslabs/mysql-operator-sidecar@sha256:b5ce53bad36d881592155bc59ef63aba598c101928b3b87a816bdb743c169e11
    Port:          <none>
    Host Port:     <none>
    Args:
      files-config
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 15 Feb 2019 13:26:44 +0800
      Finished:     Fri, 15 Feb 2019 13:26:44 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      MY_NAMESPACE:      default (v1:metadata.namespace)
      MY_POD_NAME:       mc-mysql-0 (v1:metadata.name)
      MY_POD_IP:          (v1:status.podIP)
      MY_SERVICE_NAME:   mc-mysql-nodes
      MY_CLUSTER_NAME:   mc
      MY_FQDN:           $(MY_POD_NAME).$(MY_SERVICE_NAME).$(MY_NAMESPACE)
      ORCHESTRATOR_URI:  http://mysql-operator-test-orchestrator.default/api
    Mounts:
      /etc/mysql from conf (rw)
      /mnt/conf from config-map (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-l9mrh (ro)
  clone-mysql:
    Container ID:  docker://e6bbcc386a5f71299cdd9b9b078d1857c51868e456c0de85bb8acf8b7442170b
    Image:         quay.io/presslabs/mysql-operator-sidecar:latest
    Image ID:      docker-pullable://quay.io/presslabs/mysql-operator-sidecar@sha256:b5ce53bad36d881592155bc59ef63aba598c101928b3b87a816bdb743c169e11
    Port:          <none>
    Host Port:     <none>
    Args:
      clone
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 15 Feb 2019 13:26:52 +0800
      Finished:     Fri, 15 Feb 2019 13:26:52 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      MY_NAMESPACE:           default (v1:metadata.namespace)
      MY_POD_NAME:            mc-mysql-0 (v1:metadata.name)
      MY_POD_IP:               (v1:status.podIP)
      MY_SERVICE_NAME:        mc-mysql-nodes
      MY_CLUSTER_NAME:        mc
      MY_FQDN:                $(MY_POD_NAME).$(MY_SERVICE_NAME).$(MY_NAMESPACE)
      ORCHESTRATOR_URI:       http://mysql-operator-test-orchestrator.default/api
      MYSQL_BACKUP_USER:      <set to the key 'BACKUP_USER' in secret 'my-secret'>      Optional: true
      MYSQL_BACKUP_PASSWORD:  <set to the key 'BACKUP_PASSWORD' in secret 'my-secret'>  Optional: true
    Mounts:
      /etc/mysql from conf (rw)
      /var/lib/mysql from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-l9mrh (ro)
Containers:
  mysql:
    Container ID:   docker://9b5623e79531da84e0d23291da346b08638769207413023f542c4a1290b3610f
    Image:          registry.bluven.me:5000/bluven/percona:5.7-debug
    Image ID:       docker-pullable://registry.bluven.me:5000/bluven/percona@sha256:c16fe1f4d25824c45ba2c52c20d7e7136b2a1db161871cf7f8547ca32c29aec8
    Port:           3306/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 15 Feb 2019 13:28:06 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Fri, 15 Feb 2019 13:26:53 +0800
      Finished:     Fri, 15 Feb 2019 13:28:06 +0800
    Ready:          True
    Restart Count:  1
    Requests:
      cpu:      500m
      memory:   1Gi
    Liveness:   exec [mysqladmin --defaults-file=/etc/mysql/client.cnf ping] delay=30s timeout=5s period=5s #success=1 #failure=3
    Readiness:  exec [mysql --defaults-file=/etc/mysql/client.cnf -e SELECT 1] delay=5s timeout=5s period=2s #success=1 #failure=3
    Environment:
      MY_NAMESPACE:         default (v1:metadata.namespace)
      MY_POD_NAME:          mc-mysql-0 (v1:metadata.name)
      MY_POD_IP:             (v1:status.podIP)
      MY_SERVICE_NAME:      mc-mysql-nodes
      MY_CLUSTER_NAME:      mc
      MY_FQDN:              $(MY_POD_NAME).$(MY_SERVICE_NAME).$(MY_NAMESPACE)
      ORCHESTRATOR_URI:     http://mysql-operator-test-orchestrator.default/api
      MYSQL_ROOT_PASSWORD:  <set to the key 'ROOT_PASSWORD' in secret 'my-secret'>  Optional: false
      MYSQL_USER:           <set to the key 'USER' in secret 'my-secret'>           Optional: true
      MYSQL_PASSWORD:       <set to the key 'PASSWORD' in secret 'my-secret'>       Optional: true
      MYSQL_DATABASE:       <set to the key 'DATABASE' in secret 'my-secret'>       Optional: true
    Mounts:
      /etc/mysql from conf (rw)
      /var/lib/mysql from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-l9mrh (ro)
  sidecar:
    Container ID:  docker://ee3fbfba1c8ca055c274a48f723f88937b61ce2c98ed8ee2cd3b1f73cf0d87ba
    Image:         quay.io/presslabs/mysql-operator-sidecar:latest
    Image ID:      docker-pullable://quay.io/presslabs/mysql-operator-sidecar@sha256:b5ce53bad36d881592155bc59ef63aba598c101928b3b87a816bdb743c169e11
    Port:          3307/TCP
    Host Port:     0/TCP
    Args:
      config-and-serve
    State:          Running
      Started:      Fri, 15 Feb 2019 13:28:14 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Fri, 15 Feb 2019 13:27:01 +0800
      Finished:     Fri, 15 Feb 2019 13:28:01 +0800
    Ready:          False
    Restart Count:  1
    Limits:
      cpu:  50m
    Requests:
      cpu:      10m
    Readiness:  http-get http://:8088/health delay=30s timeout=5s period=5s #success=1 #failure=3
    Environment Variables from:
      my-secret  Secret with prefix 'MYSQL_'  Optional: false
    Environment:
      MY_NAMESPACE:      default (v1:metadata.namespace)
      MY_POD_NAME:       mc-mysql-0 (v1:metadata.name)
      MY_POD_IP:          (v1:status.podIP)
      MY_SERVICE_NAME:   mc-mysql-nodes
      MY_CLUSTER_NAME:   mc
      MY_FQDN:           $(MY_POD_NAME).$(MY_SERVICE_NAME).$(MY_NAMESPACE)
      ORCHESTRATOR_URI:  http://mysql-operator-test-orchestrator.default/api
    Mounts:
      /etc/mysql from conf (rw)
      /var/lib/mysql from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-l9mrh (ro)
  metrics-exporter:
    Container ID:  docker://879aa1ee4efedeff800721e4f87c8c1bdbc11ce4f9384dada54b4ce1066bd819
    Image:         prom/mysqld-exporter:latest
    Image ID:      docker-pullable://prom/mysqld-exporter@sha256:9f4fb61cca309cb4a8c1b9ed9fb4aa75af0f7a21f36d3954667db37c062a0172
    Port:          9125/TCP
    Host Port:     0/TCP
    Args:
      --web.listen-address=0.0.0.0:9125
      --web.telemetry-path=/metrics
      --collect.heartbeat
      --collect.heartbeat.database=sys_operator
    State:          Running
      Started:      Fri, 15 Feb 2019 13:27:17 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:  100m
    Requests:
      cpu:     10m
    Liveness:  http-get http://:9125/metrics delay=30s timeout=30s period=30s #success=1 #failure=3
    Environment:
      MY_NAMESPACE:      default (v1:metadata.namespace)
      MY_POD_NAME:       mc-mysql-0 (v1:metadata.name)
      MY_POD_IP:          (v1:status.podIP)
      MY_SERVICE_NAME:   mc-mysql-nodes
      MY_CLUSTER_NAME:   mc
      MY_FQDN:           $(MY_POD_NAME).$(MY_SERVICE_NAME).$(MY_NAMESPACE)
      ORCHESTRATOR_URI:  http://mysql-operator-test-orchestrator.default/api
      USER:              <set to the key 'METRICS_EXPORTER_USER' in secret 'my-secret'>      Optional: false
      PASSWORD:          <set to the key 'METRICS_EXPORTER_PASSWORD' in secret 'my-secret'>  Optional: false
      DATA_SOURCE_NAME:  $(USER):$(PASSWORD)@(127.0.0.1:3306)/
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-l9mrh (ro)
  pt-heartbeat:
    Container ID:  docker://a1d20db630a299bbf5842a40c27c2420b7267c44d233d4af2614abdb8affb063
    Image:         quay.io/presslabs/mysql-operator-sidecar:latest
    Image ID:      docker-pullable://quay.io/presslabs/mysql-operator-sidecar@sha256:b5ce53bad36d881592155bc59ef63aba598c101928b3b87a816bdb743c169e11
    Port:          <none>
    Host Port:     <none>
    Args:
      pt-heartbeat
      --update
      --replace
      --check-read-only
      --create-table
      --database
      sys_operator
      --table
      heartbeat
      --defaults-file
      /etc/mysql/client.cnf
    State:          Running
      Started:      Fri, 15 Feb 2019 13:28:21 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    107
      Started:      Fri, 15 Feb 2019 13:27:34 +0800
      Finished:     Fri, 15 Feb 2019 13:27:39 +0800
    Ready:          True
    Restart Count:  2
    Limits:
      cpu:  50m
    Requests:
      cpu:  10m
    Environment:
      MY_NAMESPACE:      default (v1:metadata.namespace)
      MY_POD_NAME:       mc-mysql-0 (v1:metadata.name)
      MY_POD_IP:          (v1:status.podIP)
      MY_SERVICE_NAME:   mc-mysql-nodes
      MY_CLUSTER_NAME:   mc
      MY_FQDN:           $(MY_POD_NAME).$(MY_SERVICE_NAME).$(MY_NAMESPACE)
      ORCHESTRATOR_URI:  http://mysql-operator-test-orchestrator.default/api
    Mounts:
      /etc/mysql from conf (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-l9mrh (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-mc-mysql-0
    ReadOnly:   false
  conf:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:  
  config-map:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      mc-mysql
    Optional:  false
  default-token-l9mrh:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-l9mrh
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                    From                 Message
  ----     ------            ----                   ----                 -------
  Warning  FailedScheduling  2m22s (x7 over 2m36s)  default-scheduler    pod has unbound immediate PersistentVolumeClaims
  Normal   Scheduled         2m22s                  default-scheduler    Successfully assigned default/mc-mysql-0 to bluven-k8s
  Normal   Pulling           2m20s                  kubelet, bluven-k8s  pulling image "quay.io/presslabs/mysql-operator-sidecar:latest"
  Normal   Pulled            2m15s                  kubelet, bluven-k8s  Successfully pulled image "quay.io/presslabs/mysql-operator-sidecar:latest"
  Normal   Created           2m15s                  kubelet, bluven-k8s  Created container
  Normal   Started           2m15s                  kubelet, bluven-k8s  Started container
  Normal   Pulling           2m14s                  kubelet, bluven-k8s  pulling image "quay.io/presslabs/mysql-operator-sidecar:latest"
  Normal   Pulled            2m7s                   kubelet, bluven-k8s  Successfully pulled image "quay.io/presslabs/mysql-operator-sidecar:latest"
  Normal   Created           2m7s                   kubelet, bluven-k8s  Created container
  Normal   Started           2m7s                   kubelet, bluven-k8s  Started container
  Normal   Pulling           2m6s                   kubelet, bluven-k8s  pulling image "registry.bluven.me:5000/bluven/percona:5.7-debug"
  Normal   Pulled            2m6s                   kubelet, bluven-k8s  Successfully pulled image "registry.bluven.me:5000/bluven/percona:5.7-debug"
  Normal   Created           2m6s                   kubelet, bluven-k8s  Created container
  Normal   Pulling           2m6s                   kubelet, bluven-k8s  pulling image "quay.io/presslabs/mysql-operator-sidecar:latest"
  Normal   Started           2m6s                   kubelet, bluven-k8s  Started container
  Normal   Pulled            2m                     kubelet, bluven-k8s  Successfully pulled image "quay.io/presslabs/mysql-operator-sidecar:latest"
  Normal   Created           2m                     kubelet, bluven-k8s  Created container
  Normal   Started           118s                   kubelet, bluven-k8s  Started container
  Normal   Pulling           118s                   kubelet, bluven-k8s  pulling image "prom/mysqld-exporter:latest"
  Normal   Pulled            102s                   kubelet, bluven-k8s  Successfully pulled image "prom/mysqld-exporter:latest"
  Normal   Created           102s                   kubelet, bluven-k8s  Created container
  Normal   Started           102s                   kubelet, bluven-k8s  Started container
  Normal   Pulling           102s                   kubelet, bluven-k8s  pulling image "quay.io/presslabs/mysql-operator-sidecar:latest"
  Normal   Pulled            97s                    kubelet, bluven-k8s  Successfully pulled image "quay.io/presslabs/mysql-operator-sidecar:latest"
  Normal   Created           97s                    kubelet, bluven-k8s  Created container
  Normal   Started           96s                    kubelet, bluven-k8s  Started container
  Warning  Unhealthy         95s                    kubelet, bluven-k8s  Liveness probe failed: mysqladmin: connect to server at '127.0.0.1' failed
error: 'Can't connect to MySQL server on '127.0.0.1' (111)'
Check that mysqld is running on 127.0.0.1 and that the port is 3306.
You can check this by doing 'telnet 127.0.0.1 3306'

It seems that mysqld --initialize-insecure was aborted, and the exit code was 137. I googled 137 and found this issue; it seems that OOM or a liveness probe failure can cause this.
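As an aside, container exit codes above 128 encode a fatal signal (code = 128 + signal number), so 137 means the process was killed with SIGKILL, which is what both the OOM killer and the kubelet's kill after a failed liveness probe send:

```shell
# Decode a container exit code into the signal that killed the process.
exitcode=137
signum=$((exitcode - 128))
echo "$signum"       # -> 9
kill -l "$signum"    # prints the signal name (KILL on most shells)
```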

I did a lot of tests and found that liveness probe failures really do have an impact. I found that mysqld --initialize-insecure takes a long time (5m40s) to finish, which is far beyond the probe settings.

But the reason behind it is complex and beyond my ability. I used nfs-client-provisioner-1.2.1 to provide storage and Calico to provide networking; I guessed these two caused the slow initialization. But when I tested with only a bare pod, the slow initialization didn't happen. Then I tried only a StatefulSet, and the slow initialization happened again. This is weird.

I tested two StatefulSets with totally different configs: this one works and the other one failed.

bluven commented Feb 15, 2019

I think it can be solved by using a custom percona image whose docker-entrypoint.sh initializes root and the custom user/password in separate if-else branches, to improve error tolerance.

I'd love to talk on Slack, but we're in different timezones; most of the time when you get to work, I'm already back at my rented room, which has a terrible network for accessing Slack.
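A hypothetical sketch of that suggestion: guard each initialization step with its own marker instead of one all-or-nothing datadir check, so a restart after a partial first start still completes the remaining steps. run_sql, init_step, and the marker files are stand-ins for illustration, not real operator or percona code:

```shell
# Hypothetical: per-step markers make initialization resumable.
DATADIR=$(mktemp -d)

run_sql() {
    # Stand-in for a real mysql client invocation.
    echo "SQL: $1"
}

init_step() {
    # Run a step only if its marker file is absent; record success.
    marker="$DATADIR/.done-$1"
    if [ ! -f "$marker" ]; then
        run_sql "$2" && touch "$marker"
    fi
}

init_step root-password "ALTER USER 'root'@'localhost' IDENTIFIED BY 'root'"
init_step app-user      "CREATE USER IF NOT EXISTS 'bluven'@'%'"

# Re-running after a crash is safe: completed steps are skipped.
init_step root-password "ALTER USER 'root'@'localhost' IDENTIFIED BY 'root'"
```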

@calind calind added the bug label Feb 19, 2019
@calind calind modified the milestones: 0.2.5, 0.2.x Feb 19, 2019
AMecea commented Feb 19, 2019

I think the initialization should be done in an init container; that way, the liveness probe will not kill the container anymore.

AMecea commented Feb 26, 2019

This issue depends on percona/percona-docker#82

@calind calind modified the milestones: 0.2.x, 0.3.x Mar 4, 2019
bluven commented Mar 6, 2019

After a series of tests, I finally found that the default option innodb-flush-method, which is set to O_DIRECT, caused the slow initialization. After changing it to fsync, the slow initialization was gone.

So this is basically an environment problem.

I guess it might be better to let users set the liveness and readiness probes, because different environments may have different initialization speeds. It doesn't have to be specified in ClusterSpec; I think some mysql-operator options would be enough.

albertocsm commented
I have the same problem running a 1-replica cluster on a GKE g1-small node (1 vCPU, 1.7 GB memory).

@AMecea AMecea added the Epic label Jun 4, 2019
@calind calind removed the Epic label Jun 4, 2019
@AMecea AMecea self-assigned this Jun 5, 2019
@AMecea AMecea closed this as completed Jul 2, 2019
AMecea commented Feb 17, 2020

Fixed by #342

chapsuk pushed a commit to chapsuk/mysql-operator that referenced this issue Oct 16, 2023