
Set default location for PV mounts #3318

Open
treuherz opened this issue Nov 8, 2018 · 20 comments
Labels
  • addon/storage-provisioner: Issues relating to storage provisioner addon
  • area/mount
  • help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
  • priority/backlog: Higher priority than priority/awaiting-more-evidence.
  • r/2019q2: Issue was last reviewed 2019q2

Comments

@treuherz

treuherz commented Nov 8, 2018

Is this a BUG REPORT or FEATURE REQUEST? Feature request/query

Minikube version (use minikube version): v0.30.0

  • OS: Arch Linux
  • VM Driver: None

The minikube docs list a number of locations where Minikube can provision persistent volumes, as well as a sample config for a PV, but there doesn't seem to be any way to configure the storage-provisioner to provision volumes anywhere other than /tmp/hostpath-provisioner. Unless I'm misunderstanding the source, the path under /tmp is fixed and unparameterised. Is there some way of providing a patched storage-provisioner to minikube with a path of the user's choice? Our use case involves persisting a lot of large data files, for which a tmpfs-based PV quickly becomes impractical.
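For reference, here is a minimal, self-contained sketch of the behaviour described above. This is an illustration, not the actual minikube source; `provisionHostPath` and the `main` wrapper are invented for the example. The point is that the base directory is fixed, and only the per-PV subdirectory varies.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// defaultPVDir mirrors the fixed base directory mentioned above; in the
// current provisioner it is effectively hard-coded rather than configurable.
const defaultPVDir = "/tmp/hostpath-provisioner"

// provisionHostPath (illustrative name) sketches what happens per claim:
// create <base>/<pv-name> and return that path to back the PV's hostPath.
func provisionHostPath(pvName string) (string, error) {
	dir := filepath.Join(defaultPVDir, pvName)
	if err := os.MkdirAll(dir, 0o777); err != nil {
		return "", err
	}
	return dir, nil
}

func main() {
	dir, err := provisionHostPath("pvc-1234")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("new PV would be backed by hostPath:", dir)
}
```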

@treuherz treuherz changed the title Set default location for PVC mounts Set default location for PV mounts Nov 8, 2018
@balopat balopat added kind/feature Categorizes issue or PR as related to a new feature. area/mount help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. labels Nov 14, 2018
@balopat
Contributor

balopat commented Nov 14, 2018

Thank you for filing! This should be doable by parameterizing https://github.com/kubernetes/minikube/blob/v0.30.0/pkg/storage/storage_provisioner.go#L49.
PRs are welcome.
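A rough sketch of what that parameterization could look like, assuming an environment variable carries the setting. PROVISIONER_PV_DIR is an invented name, not an existing minikube or addon option, and the trimmed-down type below stands in for the real one in the linked file:

```go
package main

import (
	"fmt"
	"os"
)

// hostPathProvisioner is a trimmed-down stand-in for the type in the linked
// file; only the field relevant to this issue is shown.
type hostPathProvisioner struct {
	pvDir string
}

// newHostPathProvisioner accepts the base directory from the environment
// instead of hard-coding it, falling back to the current default.
func newHostPathProvisioner() *hostPathProvisioner {
	dir := os.Getenv("PROVISIONER_PV_DIR") // invented variable name
	if dir == "" {
		dir = "/tmp/hostpath-provisioner" // current hard-coded value
	}
	return &hostPathProvisioner{pvDir: dir}
}

func main() {
	p := newHostPathProvisioner()
	fmt.Println("provisioner base directory:", p.pvDir)
}
```

The chosen value would still have to reach the provisioner pod somehow (for example via the addon manifest), which is the part discussed further down in this thread.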

@ms-choudhary

Hi, I'd be interested in picking this up as my first issue.

I have a couple of clarifications:

  • As I understand it, even though the minikube docs list the supported directories inside the VM, the provisioner always creates PVs under /tmp/hostpath-provisioner. How do we plan to configure a different path? Should it be inferred from the StorageClass parameters (see the sketch after this list)?
  • Should we only allow the paths listed in the docs?
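Here is a sketch of the StorageClass-parameter idea from the first question, under the assumption that a hypothetical pvDir parameter is allowed on the class; neither the parameter name nor the helper below is an agreed design:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// dirForClaim picks the base directory from the StorageClass parameters if
// a "pvDir" parameter (hypothetical) is set, otherwise keeps the default.
func dirForClaim(scParams map[string]string, pvName string) string {
	base := "/tmp/hostpath-provisioner"
	if dir, ok := scParams["pvDir"]; ok && dir != "" {
		base = dir
	}
	return filepath.Join(base, pvName)
}

func main() {
	custom := map[string]string{"pvDir": "/data/hostpath-provisioner"}
	fmt.Println(dirForClaim(custom, "pvc-1234")) // /data/hostpath-provisioner/pvc-1234
	fmt.Println(dirForClaim(nil, "pvc-5678"))    // /tmp/hostpath-provisioner/pvc-5678
}
```

Restricting the accepted values to the directories listed in the docs (the second question) could then be a validation step on top of something like this.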

@chenleji

Parameterizing the provisioner itself is very simple, but I found that the provisioner's YAML file mounts the /mnt directory into the pod, and that YAML file cannot be modified dynamically. In addition, the parameterization of the provisioner should be driven by the minikube config, which raises the question of how the feature should be designed. @balopat @ms-choudhary

@tstromberg tstromberg added the priority/backlog Higher priority than priority/awaiting-more-evidence. label Jan 23, 2019
@afbjorklund
Collaborator

Both /tmp/hostpath-provisioner and /tmp/hostpath_pv are actually stored on the disk /dev/sda1:

| |-/tmp/hostpath_pv                /dev/sda1[/hostpath_pv]                        ext4      rw,relatime,data=ordered
| `-/tmp/hostpath-provisioner       /dev/sda1[/hostpath-provisioner]               ext4      rw,relatime,data=ordered

So it is only the mount points that are kept under /tmp. But it would be nice to be able to configure this...

For large data, it would even be nice to be able to add extra disks. Maybe this could be considered:

e.g. keep the containers on /dev/sda1 and the volumes on /dev/sda2?

Currently we are using /data as the canonical (non-dynamic) host path.

|-/data                             /dev/sda1[/data]                               ext4      rw,relatime,data=ordered

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 7, 2019
@tstromberg tstromberg added r/2019q2 Issue was last reviewed 2019q2 and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 22, 2019
@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 20, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 20, 2019
@11janci
Contributor

11janci commented Oct 3, 2019

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Oct 3, 2019
@tstromberg
Contributor

This issue still exists in minikube v1.6 AFAIK.

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 16, 2020
@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 15, 2020
@sharifelgamal
Collaborator

/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Apr 22, 2020
@alternaivan

Hi everyone,
Is there any update on this issue?

Thanks,
Marjan

@blueelvis
Contributor

I will pick this up.

/assign

@blueelvis
Contributor

/remove-lifecycle frozen

@k8s-ci-robot k8s-ci-robot removed the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Oct 21, 2020
@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 19, 2021
@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 18, 2021
@ilya-zuyev ilya-zuyev added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Feb 18, 2021
@afbjorklund afbjorklund added the addon/storage-provisioner Issues relating to storage provisioner addon label Aug 9, 2021
@zerthimon

Is there a way to specify the path where persistent volumes are created? I'd like to have my persistent volumes created in /hosthome/[user]/tmp instead of /tmp, so the data survives minikube delete.

@medyagh
Member

medyagh commented Sep 15, 2021

@blueelvis, I haven't heard from you. Are you still working on this?

@medyagh
Member

medyagh commented Sep 15, 2021

This issue is available to be picked up by anyone interested.
