
docker driver: Persistent Volume data not restored after minikube new start #8458

Closed
dpolivaev opened this issue Jun 11, 2020 · 20 comments · Fixed by #8780
Labels
addon/storage-provisioner Issues relating to storage provisioner addon area/storage storage bugs co/docker-driver Issues related to kubernetes in container co/podman-driver podman driver issues help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/bug Categorizes issue or PR as related to a bug. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. triage/duplicate Indicates an issue is a duplicate of other open issue.

@dpolivaev

With minikube versions v1.9.2 through v1.11.0, data contained in persistent volumes created by a persistent volume claim is not restored after minikube is stopped and started again.

To demonstrate this, I install the mysql helm chart from https://github.com/helm/charts/tree/master/stable/mysql
I connect to mysql using the CLI client, create a new user testuser, and list all users to check that the user was created.

After I stop and restart minikube, connect to mysql again, and list the users again, the created user is no longer there. This indicates that all mysql databases were reset to their original state.

In fact, no data is restored after minikube starts again; all files are recreated.

Minikube v1.8.2 works as expected.

Please see the log below.

Steps to reproduce the issue:

dimitry-> minikube start
😄  minikube v1.11.0 on Ubuntu 18.04
✨  Automatically selected the docker driver
👍  Starting control plane node minikube in cluster minikube
🔥  Creating docker container (CPUs=2, Memory=3900MB) ...
🐳  Preparing Kubernetes v1.18.3 on Docker 19.03.2 ...
    ▪ kubeadm.pod-network-cidr=10.244.0.0/16
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"
dimitry-> helm repo list
NAME  	URL                                             
stable	https://kubernetes-charts.storage.googleapis.com
dimitry-> helm install mysql stable/mysql
NAME: mysql
LAST DEPLOYED: Thu Jun 11 20:09:08 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
mysql.default.svc.cluster.local

To get your root password run:

    MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)

To connect to your database:

1. Run an Ubuntu pod that you can use as a client:

    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

2. Install the mysql client:

    $ apt-get update && apt-get install mysql-client -y

3. Connect using the mysql cli, then provide your password:
    $ mysql -h mysql -p

To connect to your database directly from outside the K8s cluster:
    MYSQL_HOST=127.0.0.1
    MYSQL_PORT=3306

    # Execute the following command to route the connection:
    kubectl port-forward svc/mysql 3306

    mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}

dimitry-> kubectl get secret --namespace default mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo
qA2IHCiKkV
dimitry-> kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il
If you don't see a command prompt, try pressing enter.
root@ubuntu:/# 
root@ubuntu:/# apt-get update && apt-get install mysql-client -y
root@ubuntu:/# mysql -h mysql -p           
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 39
Server version: 5.7.30 MySQL Community Server (GPL)

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CREATE USER 'testuser'@'localhost' IDENTIFIED BY 'test123test!';
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT User,Host FROM mysql.user;
+---------------+-----------+
| User          | Host      |
+---------------+-----------+
| root          | %         |
| mysql.session | localhost |
| mysql.sys     | localhost |
| root          | localhost |
| testuser      | localhost |
+---------------+-----------+
5 rows in set (0.00 sec)

mysql> ^DBye
root@ubuntu:/# logout
dimitry-> kubectl delete pod ubuntu
pod "ubuntu" deleted
dimitry-> minikube stop
✋  Stopping "minikube" in docker ...
🛑  Powering off "minikube" via SSH ...
🛑  Node "minikube" stopped.
dimitry-> minikube start
😄  minikube v1.11.0 on Ubuntu 18.04
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🔄  Restarting existing docker container for "minikube" ...
🐳  Preparing Kubernetes v1.18.3 on Docker 19.03.2 ...
    ▪ kubeadm.pod-network-cidr=10.244.0.0/16
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"
dimitry-> kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il
If you don't see a command prompt, try pressing enter.
root@ubuntu:/# 
root@ubuntu:/# apt-get update && apt-get install mysql-client -y
root@ubuntu:/# mysql -h mysql -p           
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 39
Server version: 5.7.30 MySQL Community Server (GPL)

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> SELECT User,Host FROM mysql.user;
+---------------+-----------+
| User          | Host      |
+---------------+-----------+
| root          | %         |
| mysql.session | localhost |
| mysql.sys     | localhost |
| root          | localhost |
+---------------+-----------+
4 rows in set (0.00 sec)

mysql> ^DBye
root@ubuntu:/# logout
@dpolivaev
Author

The issue may be related to #7828

@medyagh
Member

medyagh commented Jun 11, 2020

@dpolivaev do you mind sharing which addons you have enabled?

minikube addons list

@medyagh medyagh added kind/bug Categorizes issue or PR as related to a bug. area/storage storage bugs sig/storage Categorizes an issue or PR as relevant to SIG Storage. labels Jun 11, 2020
@medyagh
Member

medyagh commented Jun 11, 2020

@afbjorklund could this be something about storage provisioner?

@dpolivaev
Author

dimitry-> minikube start
😄  minikube v1.11.0 on Ubuntu 18.04
✨  Automatically selected the docker driver
👍  Starting control plane node minikube in cluster minikube
🔥  Creating docker container (CPUs=2, Memory=3900MB) ...
🐳  Preparing Kubernetes v1.18.3 on Docker 19.03.2 ...
    ▪ kubeadm.pod-network-cidr=10.244.0.0/16
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"
dimitry-> minikube addons list
|-----------------------------|----------|--------------|
|         ADDON NAME          | PROFILE  |    STATUS    |
|-----------------------------|----------|--------------|
| ambassador                  | minikube | disabled     |
| dashboard                   | minikube | disabled     |
| default-storageclass        | minikube | enabled ✅   |
| efk                         | minikube | disabled     |
| freshpod                    | minikube | disabled     |
| gvisor                      | minikube | disabled     |
| helm-tiller                 | minikube | disabled     |
| ingress                     | minikube | disabled     |
| ingress-dns                 | minikube | disabled     |
| istio                       | minikube | disabled     |
| istio-provisioner           | minikube | disabled     |
| logviewer                   | minikube | disabled     |
| metallb                     | minikube | disabled     |
| metrics-server              | minikube | disabled     |
| nvidia-driver-installer     | minikube | disabled     |
| nvidia-gpu-device-plugin    | minikube | disabled     |
| olm                         | minikube | disabled     |
| registry                    | minikube | disabled     |
| registry-aliases            | minikube | disabled     |
| registry-creds              | minikube | disabled     |
| storage-provisioner         | minikube | enabled ✅   |
| storage-provisioner-gluster | minikube | disabled     |
|-----------------------------|----------|--------------|

@afbjorklund
Collaborator

I think it is more related to #8151, and storing the PV on tmpfs?

@dpolivaev
Author

Looks like you are right. The missing data is placed in directories under /tmp/hostpath-provisioner.
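
One way to confirm where the provisioned volumes actually live (a hedged sketch; assumes the default storage-provisioner addon and a running cluster):

```shell
# Print the hostPath backing each PV created by the hostpath provisioner
kubectl get pv -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.hostPath.path}{"\n"}{end}'

# Inspect the backing directories from inside the minikube node
minikube ssh -- ls -la /tmp/hostpath-provisioner
```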

@afbjorklund afbjorklund added co/docker-driver Issues related to kubernetes in container and removed sig/storage Categorizes an issue or PR as relevant to SIG Storage. labels Jun 11, 2020
@dpolivaev
Author

https://minikube.sigs.k8s.io/docs/handbook/persistent_volumes/ says they should be persisted

@afbjorklund
Collaborator

Minikube v1.8.2 works as expected.

Probably because it didn't default to the "docker" driver? I don't think this has changed in KIC.

@afbjorklund
Collaborator

afbjorklund commented Jun 11, 2020

https://minikube.sigs.k8s.io/docs/handbook/persistent_volumes/ says they should be persisted

Unfortunately that only applies to the VM, it is not valid for the "none" driver or the "docker" driver.

@dpolivaev
Author

Sorry, I don't get it.
I see the directories if I execute minikube ssh followed by cd /tmp/hostpath-provisioner
Are they not on the VM?
I use minikube with VirtualBox as recommended.

@afbjorklund
Collaborator

An alternative solution is #7511, where the mount point is moved to the (persisted) /var/tmp

@afbjorklund
Collaborator

I use minikube with VirtualBox as recommended.

Hmm, maybe that is the problem here then:

😄 minikube v1.11.0 on Ubuntu 18.04
✨ Automatically selected the docker driver

You need to add --driver=virtualbox, if so.

But the volumes should be persisted also on the container-based version (KIC) of minikube.
Just that it hasn't been implemented yet. We should also persist all of /data, for manual data.

@afbjorklund afbjorklund added the addon/storage-provisioner Issues relating to storage provisioner addon label Jun 11, 2020
@dpolivaev
Author

dpolivaev commented Jun 11, 2020

Anyway, my intended use case is running minikube on AWS with --driver=none.
Is the data lost in this case too?

@dpolivaev
Author

Minikube v1.8.2 uses the virtualbox driver by default.
It looks like that is the reason why my test worked with v1.8.2.

I would appreciate it if you could tell me what happens with --driver=none.
I expect that the directories do not have to be saved and restored, because they are stored on the host machine.

@dpolivaev
Author

I confirm that using --driver=virtualbox fixes it.

@medyagh medyagh added this to the v1.13.0-candidate milestone Jun 11, 2020
@medyagh medyagh added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Jun 11, 2020
@medyagh medyagh changed the title Persistent Volume data not restored after minikube new start docker driver: Persistent Volume data not restored after minikube new start Jun 11, 2020
@medyagh
Member

medyagh commented Jun 11, 2020

Thanks for confirming. I will add this bug to the v1.13.0 milestone. I am looking for help with this issue; if you are an expert in storage and mounts, especially for the docker driver, your help is wanted.

@medyagh medyagh added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Jun 11, 2020
@afbjorklund
Collaborator

afbjorklund commented Jun 12, 2020

I would appreciate it if you could tell me what happens with --driver=none.
I expect that the directories do not have to be saved and restored, because they are stored on the host machine.

You will have to set up the mounts yourself; otherwise the contents will disappear when the host machine is rebooted. You are supposed to provide bind mounts for the persistent storage paths:

    /data
    /tmp/hostpath_pv
    /tmp/hostpath-provisioner

To some suitable place on the host, where the contents will be kept.

For instance you could have a special data partition, that you mount.
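
A minimal sketch of such a setup for --driver=none, assuming a hypothetical persistent location /srv/minikube-data on the host (the three target paths are the ones listed above):

```shell
# Create the persistent directories on the host (path is an example, not prescribed)
sudo mkdir -p /srv/minikube-data/data /srv/minikube-data/hostpath_pv /srv/minikube-data/hostpath-provisioner
sudo mkdir -p /data /tmp/hostpath_pv /tmp/hostpath-provisioner

# Bind-mount them over the paths minikube expects
sudo mount --bind /srv/minikube-data/data /data
sudo mount --bind /srv/minikube-data/hostpath_pv /tmp/hostpath_pv
sudo mount --bind /srv/minikube-data/hostpath-provisioner /tmp/hostpath-provisioner

# To survive reboots, add matching entries to /etc/fstab, e.g.:
# /srv/minikube-data/data                  /data                      none  bind  0  0
# /srv/minikube-data/hostpath_pv           /tmp/hostpath_pv           none  bind  0  0
# /srv/minikube-data/hostpath-provisioner  /tmp/hostpath-provisioner  none  bind  0  0
```

Note that /tmp itself may be cleaned or be a tmpfs on some distributions, which is exactly why the bind mounts (or the fstab entries) are needed.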

@Harkishen-Singh
Contributor

Harkishen-Singh commented Jun 19, 2020

I didn't get this. Is this behaviour intentional, or do we want to persist /data and /tmp/hostpath* even with --driver=none || docker || podman? In short, what exactly do we mean by bug here?

@afbjorklund @medyagh

@afbjorklund
Collaborator

@Harkishen-Singh : It means that when you are using the "none" driver, it is up to you to set a permanent storage location and then bind mount this to the expected paths. When using the "docker" or "podman" drivers, then minikube should do it.

The bug here is that the persistent storage is kept on tmpfs by default, due to #8151. It is a bug in the kic base image. When using the virtualbox VM, the files are persisted correctly... They are supposed to be stored on the disk image / volume.
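
A quick way to check this from the host (a hedged sketch; assumes the docker driver with a node named "minikube"):

```shell
# Show which filesystem backs the provisioner directory inside the node;
# with the kic base image bug it reports tmpfs rather than the overlay/disk filesystem
minikube ssh -- "df -h /tmp /var/tmp"

# Or inspect the tmpfs mounts directly
minikube ssh -- "mount | grep tmpfs"
```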

@afbjorklund afbjorklund self-assigned this Jul 6, 2020
@afbjorklund afbjorklund added the co/podman-driver podman driver issues label Jul 6, 2020
@afbjorklund
Collaborator

This issue is a duplicate of #8151 so it will be solved at the same time.

@afbjorklund afbjorklund added the triage/duplicate Indicates an issue is a duplicate of other open issue. label Jul 20, 2020
@afbjorklund afbjorklund linked a pull request Jul 21, 2020 that will close this issue