Failed to create Database with 5.7 on Kubernetes / GCE - directory has files in it #186

Closed
aklinkert opened this issue Jun 29, 2016 · 15 comments

Comments

@aklinkert

Hey everybody,

I have a strange issue over here with MySQL running on Kubernetes inside Google Container Engine. I have a Deployment for a Database like so:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: database
  labels:
    name: database
    type: database
    role: mysql
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: database
        type: database
        role: mysql
    spec:
      containers:
      - image: mysql:5.7
        name: database
        env:
          - name: MYSQL_ROOT_PASSWORD
            valueFrom:
              secretKeyRef:
                name: database-secret
                key: rootpw
          - name: MYSQL_DATABASE
            valueFrom:
              secretKeyRef:
                name: database-secret
                key: database
          - name: MYSQL_USER
            valueFrom:
              secretKeyRef:
                name: database-secret
                key: user
          - name: MYSQL_PASSWORD
            valueFrom:
              secretKeyRef:
                name: database-secret
                key: password
        ports:
          # database = 2fe62899c
        - containerPort: 3306
          name: 2fe62899c-db
        volumeMounts:
        - name: database-staging
          mountPath: /var/lib/mysql
      volumes:
        - name: database-staging
          gcePersistentDisk:
            pdName: database-staging
            fsType: ext4

After creating an empty disk with that name and creating this deployment, the pod falls into a "container crashed back off loop" with the following error:

[ERROR] --initialize specified but the data directory has files in it. Aborting.

The exact same configuration is working with MySQL version 5.6. Might be related to #69. Any ideas regarding a solution, or what the issue might be? 😟

To preempt the question: I'm absolutely sure the disk is empty after creation. :trollface:

@yosifkit
Member

Are you sure? 😉 A new ext4 disk partition is not usually empty; there is a lost+found directory, which mysql is known to choke on. You could try adding --ignore-db-dir=lost+found to the CMD to know for sure (from the mysql docs).
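
For reference, a minimal sketch of how that could look in the Deployment from this issue, assuming the flag is passed via args (which Kubernetes hands to the image's entrypoint as the CMD):

containers:
- image: mysql:5.7
  name: database
  args:
    - "--ignore-db-dir=lost+found"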

@aklinkert
Author

Hehe. To be precise, I'm sure I created an empty disk and didn't add anything. 🙈

Okay, the hint seems to be right. I checked the filesystem with the working v5.6 container:

$ kubectl --namespace=staging exec my-pod  -- ls -al /var/lib/mysql/
total 110632
drwxr-xr-x  6 mysql mysql     4096 Jun 29 14:14 .
drwxr-xr-x 19 root  root      4096 Jun 10 02:21 ..
-rw-rw----  1 mysql mysql       56 Jun 29 14:14 auto.cnf
drwx------  2 mysql mysql     4096 Jun 29 14:14 cc
-rw-rw----  1 mysql mysql 50331648 Jun 29 14:14 ib_logfile0
-rw-rw----  1 mysql mysql 50331648 Jun 29 14:14 ib_logfile1
-rw-rw----  1 mysql mysql 12582912 Jun 29 14:14 ibdata1
drwx------  2 mysql mysql    16384 Jun 29 14:13 lost+found

So there actually is a lost+found dir. Confirming that the parameter works for v5.7 is a bit tricky. Passing it works in the sense that the original error is no longer emitted. Unfortunately, the pod now produces the following output:

2016-06-30T09:28:09.385684Z 0 [Note] mysqld (mysqld 5.7.13) starting as process 1 ...
2016-06-30T09:28:09.390022Z 0 [Note] InnoDB: PUNCH HOLE support available
2016-06-30T09:28:09.390052Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2016-06-30T09:28:09.390056Z 0 [Note] InnoDB: Uses event mutexes
2016-06-30T09:28:09.390058Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2016-06-30T09:28:09.390061Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.8
2016-06-30T09:28:09.390064Z 0 [Note] InnoDB: Using Linux native AIO
2016-06-30T09:28:09.390279Z 0 [Note] InnoDB: Number of pools: 1
2016-06-30T09:28:09.390407Z 0 [Note] InnoDB: Using CPU crc32 instructions
2016-06-30T09:28:09.393052Z 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2016-06-30T09:28:09.405537Z 0 [Note] InnoDB: Completed initialization of buffer pool
2016-06-30T09:28:09.408448Z 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
2016-06-30T09:28:09.418531Z 0 [Note] InnoDB: The first innodb_system data file 'ibdata1' did not exist. A new tablespace will be created!
2016-06-30T09:28:09.418858Z 0 [ERROR] InnoDB: Operating system error number 13 in a file operation.
2016-06-30T09:28:09.418877Z 0 [ERROR] InnoDB: The error means mysqld does not have the access rights to the directory.
2016-06-30T09:28:09.418882Z 0 [ERROR] InnoDB: Operating system error number 13 in a file operation.
2016-06-30T09:28:09.418886Z 0 [ERROR] InnoDB: The error means mysqld does not have the access rights to the directory.
2016-06-30T09:28:09.418889Z 0 [ERROR] InnoDB: Cannot open datafile './ibdata1'
2016-06-30T09:28:09.418892Z 0 [ERROR] InnoDB: Could not open or create the system tablespace. If you tried to add new data files to the system tablespace, and it failed here, you should now edit innodb_data_file_path in my.cnf back to what it was, and remove the new ibdata files InnoDB created in this failed attempt. InnoDB only wrote those files full of zeros, but did not yet use them in any way. But be careful: do not remove old data files which contain your precious data!
2016-06-30T09:28:09.418897Z 0 [ERROR] InnoDB: InnoDB Database creation was aborted with error Cannot open a file. You may need to delete the ibdata1 file before trying to start up again.
2016-06-30T09:28:10.019486Z 0 [ERROR] Plugin 'InnoDB' init function returned error.
2016-06-30T09:28:10.019530Z 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2016-06-30T09:28:10.019537Z 0 [ERROR] Failed to initialize plugins.
2016-06-30T09:28:10.019540Z 0 [ERROR] Aborting

2016-06-30T09:28:10.019561Z 0 [Note] Binlog end
2016-06-30T09:28:10.019645Z 0 [Note] Shutting down plugin 'MyISAM'
2016-06-30T09:28:10.019673Z 0 [Note] Shutting down plugin 'CSV'
2016-06-30T09:28:10.022286Z 0 [Note] mysqld: Shutdown complete

It's the exact same type of filesystem as used with the 5.6 image, but this one fails with permission issues? I don't get that.

sttts added a commit to sttts/docker-k8s-mesos that referenced this issue Sep 8, 2016
@alexpls

alexpls commented Sep 30, 2016

I had this issue with Kubernetes and MySQL 5.7.15 as well. Adding the suggestion from @yosifkit to my container's definition got things working.

Here's an extract of my working YAML definition:

name: mysql-master
image: mysql:5.7
args:
  - "--ignore-db-dir=lost+found"

@ahmetb

ahmetb commented Aug 4, 2017

The issue still persists on the newer mysql:8.0.2 image, although with a slightly different error:

Initializing database
2017-08-04T20:01:25.482016Z 0 [Note] Basedir set to /usr/
2017-08-04T20:01:25.482239Z 0 [Warning] The syntax '--symbolic-links/-s' is deprecated and will be removed in a future release
2017-08-04T20:01:25.483905Z 0 [ERROR] --initialize specified but the data directory has files in it. Aborting.
2017-08-04T20:01:25.484094Z 0 [ERROR] Aborting

Repro steps:

  1. gcloud compute disks create --size 200GB mysql-disk
  2. kubectl create secret generic mysql --from-literal=password=123456
  3. kubectl create -f https://github.com/GoogleCloudPlatform/container-engine-samples/blob/master/wordpress-persistent-disks/mysql.yaml
  4. kubectl get pods, observe Error status on mysql pod.
  5. Take a look at kubectl logs <pod-name>, observe the logs I pasted above.

@tianon
Member

tianon commented Aug 4, 2017

@ahmetb right -- that's expected if using the raw root of a freshly-formatted ext4 drive (because it then contains lost+found by default)

I'd recommend either configuring k8s to use a subdirectory (which is my own personal preferred solution, given how k8s supports it and that it's likely to be less error-prone -- I believe it's as easy as adding subPath: to the volumeMounts: object), or following the suggestion in #186 (comment) (adding --ignore-db-dir=lost+found to tell MySQL to ignore the lost+found directory).
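
For example, a minimal sketch of the subPath variant applied to the volumeMounts from this issue (the subdirectory name mysql-data is just an assumption; any name works, and it keeps lost+found out of the data directory):

volumeMounts:
- name: database-staging
  mountPath: /var/lib/mysql
  subPath: mysql-data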

I'm going to close this issue now, given that there are two different solutions noted. 👍

@schollii

The subPath option worked and is, IMO, a better solution than the config flag in the container: you don't need to edit the container, and since the container technically doesn't know whether any of its folders will be replaced by volumes, subPath is a nice separation of concerns.

@wangyaliyali

good

@linydquantil

good thank you

syncrisis pushed a commit to syncrisis/kubernetesdemo that referenced this issue Oct 24, 2018
@tomekit

tomekit commented Feb 12, 2019

The ignore-db-dir trick mentioned above (#186 (comment)) no longer works as of MySQL 8.0, since this option was removed: https://dev.mysql.com/doc/refman/8.0/en/upgrade-prerequisites.html
Not an easy software world we live in :)

Any ideas how to solve it reliably on Kubernetes?

EDIT:
I am going to try the subpath mentioned below.

I've tried running this init container and it does seem to work as well, though I'm not sure if it's safe:

initContainers:
  - name: remove-lostfound
    image: busybox
    command: ["rm", "-r", "/var/lib/mysql/lost+found"]
    securityContext:
      privileged: true
    volumeMounts:
      - mountPath: /var/lib/mysql
        name: mysql-volume

@ltangvald
Collaborator

Use a subpath, e.g. instead of datadir=/mountpath/ use datadir=/mountpath/mysql-data
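
A rough sketch of that idea for the container spec in this issue, assuming the datadir can simply be pointed at a subdirectory of the mounted volume (the name mysql-data is arbitrary; if the image's entrypoint does not create it, an initContainer or the subPath mount shown above would be needed):

args:
  - "--datadir=/var/lib/mysql/mysql-data"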

@tonyg4864

I tried everything here with no luck. Any other suggestions?

@wglambert

What issues did you encounter with the workarounds given? Is the issue you're encountering the lost+found folder in the root directory?

I'd recommend getting more details/information and asking over at the Docker Community Forums, Docker Community Slack, or Stack Overflow, since these repositories aren't really a help forum.

@linydquantil

linydquantil commented Aug 19, 2019

@tonyg4864 @wglambert

containers:
  - image: mysql:5.7.24
    name: mysql
    args:
      - "--ignore-db-dir=lost+found"

You can try it.

@karthikeayan

In case someone is facing this in Kubernetes, it might be because the container got killed by OOM and couldn't complete the steps it was executing.

Increasing the limits on the mysql container was the fix for me in Kubernetes.
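
For example, a sketch of explicit resource requests and limits on the mysql container; the values here are only placeholders and need tuning for the actual workload:

containers:
- image: mysql:5.7
  name: mysql
  resources:
    requests:
      memory: "1Gi"
      cpu: "500m"
    limits:
      memory: "2Gi"
      cpu: "1"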

tzolov added a commit to spring-cloud/spring-cloud-dataflow that referenced this issue Sep 5, 2019
 The mysql:6.7 image upgrade caused the following error when deployed on GKE:
 > [ERROR] --initialize specified but the data directory has files in it. Aborting.
 Patch is based on this:  docker-library/mysql#186 (comment)

 Related to #3482
grant added a commit to GoogleCloudPlatform/kubernetes-engine-samples that referenced this issue Sep 20, 2019
* Support MySQL, see b/64102112

docker-library/mysql#186

* Use mysql 5.7

* Update mysql.yaml
ilayaperumalg pushed a commit to spring-cloud/spring-cloud-dataflow that referenced this issue Nov 8, 2019
 The mysql:6.7 image upgrade caused the following error when deployed on GKE:
 > [ERROR] --initialize specified but the data directory has files in it. Aborting.
 Patch is based on this:  docker-library/mysql#186 (comment)

 Related to #3482
MattyJ007 added a commit to openhie/instant that referenced this issue Mar 2, 2020
MySQL version 5.7 contains a bug that prevents start up when there are
files in the lost+found directory (which appear to get created on start
up). This issue only appears to affect the AWS deployment...
See link below for details

OHM-972
docker-library/mysql#186 (comment)
@zhanlinfeng

> Are you sure? 😉 A new ext4 disk partition is not usually empty; there is a lost+found directory, which mysql is known to choke on. You could try adding --ignore-db-dir=lost+found to the CMD to know for sure (from mysql docs)

Awesome

@docker-library docker-library locked as resolved and limited conversation to collaborators Apr 13, 2020