Files in hostPath volumes are lost after minikube restart #3582

Closed
vdamle opened this issue Jan 24, 2019 · 11 comments
Labels
  • area/mount
  • help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
  • kind/bug: Categorizes issue or PR as related to a bug.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • priority/backlog: Higher priority than priority/awaiting-more-evidence.
  • r/2019q2: Issue was last reviewed 2019q2

Comments

@vdamle commented Jan 24, 2019

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Please provide the following details:

Environment:

  • Minikube version: v0.33.1
  • OS: macOS Mojave (10.14.2)
  • VM driver: virtualbox
  • ISO version: minikube-v0.29.0.iso
  • Install tools: brew cask install minikube

What happened: Files present in hostPath volumes are not persisted after a minikube restart.

What you expected to happen: Files present in hostPath volumes should be persisted after a minikube restart.

How to reproduce it (as minimally and precisely as possible):

  • Create a pod with one or more hostPath volumes and corresponding volume mounts (a minimal example pod spec is sketched below)
  • Write to one or more files in the volume mount directories (you can minikube ssh and confirm that the files are present in the hostPath directory)
  • minikube stop
  • minikube start --vm-driver="virtualbox"
  • The files are no longer present in the directory
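
For reference, a minimal pod spec matching these repro steps might look like the following (a sketch; the pod name, image, command, and paths are illustrative placeholders, not taken from the original report):

apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo        # illustrative name
spec:
  containers:
  - name: writer
    image: busybox:1.31      # any image with a shell works for this test
    # Write a file into the mounted directory, then keep the pod alive
    command: ["sh", "-c", "date >> /test/heartbeat.txt && sleep 3600"]
    volumeMounts:
    - mountPath: /test
      name: testvol
  volumes:
  - name: testvol
    hostPath:
      path: /mnt/testvol     # host path inside the minikube VM
      type: DirectoryOrCreate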

Output of minikube logs (if applicable):

minikube-logs.txt

Anything else we need to know: This was working perfectly fine until recently. I recently upgraded macOS, minikube (via brew cask), and Docker. Not sure which one is the culprit ¯\_(ツ)_/¯

@tstromberg (Contributor)

Do you mind elaborating on your repro instructions? I'd like to see if this problem exists on other platforms, but am not yet familiar with hostPath volumes. Thanks!

@tstromberg added the kind/bug, area/mount, priority/awaiting-more-evidence, and triage/needs-information labels on Jan 24, 2019
@vdamle (Author) commented Jan 24, 2019

Hi @tstromberg: thanks for taking a look. I have the following volume definition in my StatefulSet:

      volumes:
      - hostPath:
          path: /mnt/qdata
          type: DirectoryOrCreate
        name: qdata

and one or more containers use that volume and mount it to a path inside the container:

        volumeMounts:
        - mountPath: /qdata
          name: qdata

After I start a pod with the above, the software running in the pod writes files to the /qdata directory. Any time after that, if minikube restarts (because I put my MacBook to sleep and/or restart it explicitly), the pod comes back up and the /qdata directory still shows up in the pod (and inside the minikube VM), but the files that were written are no longer present in the directory. Hope this helps.

@vdamle (Author) commented Jan 31, 2019

@tstromberg - Did you get a chance to look at this? Do you need more info?

@afbjorklund (Collaborator)

Minikube will only persist host paths located under /data, not anything located under e.g. /qdata.
You could of course add your own mount or symlink to move the data over to the /dev/sda1 disk?

The dynamically provisioned volumes are also persisted, but their paths should be considered internal.
Same with the other default directories; those are mostly internal to the system or to the runtime...


Here is the findmnt output, excluding the overlay mounts since those are even more "internal":

TARGET                SOURCE                                      FSTYPE OPTIONS
/mnt/sda1             /dev/sda1                                   ext4   rw,rela
/var/lib/boot2docker  /dev/sda1[/var/lib/boot2docker]             ext4   rw,rela
/var/lib/docker       /dev/sda1[/var/lib/docker]                  ext4   rw,rela
/var/lib/containers   /dev/sda1[/var/lib/containers]              ext4   rw,rela
/var/log              /dev/sda1[/var/log]                         ext4   rw,rela
/var/tmp              /dev/sda1[/var/tmp]                         ext4   rw,rela
/var/lib/kubelet      /dev/sda1[/var/lib/kubelet]                 ext4   rw,rela
/var/lib/cni          /dev/sda1[/var/lib/cni]                     ext4   rw,rela
/data                 /dev/sda1[/data]                            ext4   rw,rela
/tmp/hostpath_pv      /dev/sda1[/hostpath_pv]                     ext4   rw,rela
/tmp/hostpath-provisioner
                      /dev/sda1[/hostpath-provisioner]            ext4   rw,rela
/var/lib/rkt          /dev/sda1[/var/lib/rkt]                     ext4   rw,rela
/etc/rkt              /dev/sda1[/var/lib/rkt-etc]                 ext4   rw,rela
/var/lib/minikube     /dev/sda1[/var/lib/minikube]                ext4   rw,rela
/var/lib/minishift    /dev/sda1[/var/lib/minishift]               ext4   rw,rela

So the recommended location for persistent hostPath mounts is somewhere under /data.

See https://github.com/kubernetes/minikube/blob/master/docs/persistent_volumes.md
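
For example, the volume definition from the StatefulSet earlier in this thread could simply point at a host path under /data instead (a sketch; only the path changes, and /data/qdata is an illustrative choice):

      volumes:
      - hostPath:
          # /data is backed by /dev/sda1 (see the findmnt output above),
          # so files written here survive a minikube stop/start
          path: /data/qdata
          type: DirectoryOrCreate
        name: qdata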

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on May 3, 2019
@tstromberg added the r/2019q2 and priority/backlog labels and removed the lifecycle/stale, priority/awaiting-more-evidence, and triage/needs-information labels on May 22, 2019
@sharifelgamal added the help wanted label on Jul 18, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Oct 16, 2019
@hellovietduc

I'm having this issue. After a restart, I lose the data in my Postgres DB. Here's the deployment.yml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: db
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:11.5
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: config-postgres
          env:
            - name: PGDATA
              value: /var/lib/postgresql/data
          volumeMounts:
            - mountPath: /var/lib/postgresql
              name: postgresdb
      volumes:
        - name: postgresdb
          persistentVolumeClaim:
            claimName: pvc-postgres

And the volumes:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: minikube-pv-postgres
  namespace: db
  labels:
    app: minikube-pv-postgres
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 10Gi
  hostPath:
    path: /data/minikube-pv-postgres/

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-postgres
  namespace: db
  labels:
    app: pvc-postgres
spec:
  volumeName: minikube-pv-postgres
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

The headache is that I specify a hostPath under the /data/ directory, but the data is still lost.
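
Since the findmnt output earlier in this thread shows that the hostpath-provisioner directory is also persisted, one thing worth trying (a sketch, assuming minikube's default storage-provisioner addon and its standard StorageClass are enabled) is to drop the manually defined PersistentVolume and let the claim be provisioned dynamically:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-postgres
  namespace: db
  labels:
    app: pvc-postgres
spec:
  # No volumeName here: the default "standard" StorageClass
  # (minikube's hostpath-provisioner) creates the backing volume.
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

If the data survives a restart with dynamic provisioning, that would at least narrow the problem down to the manually defined /data hostPath volume.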

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Nov 26, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot (Contributor)

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@zeynepkoyun

Hi @hellovietduc,
I'm having the same problem. Did you solve it?
