
Proposal: Suggest NFS as default filesystem for local dev #1889

Closed
eecue opened this issue Aug 25, 2017 · 37 comments
Labels
help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/documentation Categorizes issue or PR as related to documentation. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. priority/backlog Higher priority than priority/awaiting-more-evidence. r/2019q2 Issue was last reviewed 2019q2

Comments

@eecue

eecue commented Aug 25, 2017

Is this a BUG REPORT or FEATURE REQUEST? (choose one): FEATURE REQUEST

Please provide the following details:

Environment:

Minikube version: v0.21.0

  • OS: OS X
  • VM Driver: Virtualbox
  • ISO version: minikube-v0.23.0.iso
  • Install tools: brew
  • Others:

What happened:

We noticed our application ran significantly slower in k8s/minikube compared to Vagrant. We changed our local files to mount via NFS instead of 9p, and our dev system is now much faster than Vagrant.

What you expected to happen:

We should update the docs to recommend using NFS for local dev instead of 9p, at least for OS X, though it probably makes sense on Linux too. We're going to test on Windows shortly to see if it has the same effect, which I assume it will.

How to reproduce it (as minimally and precisely as possible):

Set up a persistent volume store using NFS and use it instead of 9p:

  1. Update your /etc/exports to allow minikube to access it, then restart nfsd:

$ echo "/Users -alldirs -mapall="$(id -u)":"$(id -g)" 192.168.99.100" | sudo tee -a /etc/exports
$ sudo nfsd restart

  2. Create a PersistentVolume and a matching PersistentVolumeClaim:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-volume
spec:
  capacity:
    storage: 15Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
  nfs:
    server: 192.168.99.1
    path: /Users/Shared/Sites/
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cr-volume
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 15Gi
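
For completeness, a claim like the one above can then be consumed from a pod spec. This is a minimal sketch of my own (the pod name, image, and mount path are placeholders, not part of the original setup):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test            # placeholder name
spec:
  containers:
  - name: app
    image: nginx            # placeholder image
    volumeMounts:
    - name: sites
      mountPath: /srv/sites # placeholder path inside the container
  volumes:
  - name: sites
    persistentVolumeClaim:
      claimName: cr-volume  # the claim defined above
```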

Anything else we need to know:

If you think this is a good idea, I'll make a PR to update the docs. It's been life-changing for our devs.

@eecue
Author

eecue commented Aug 25, 2017

This seems to have fixed #1839 for us.

@alanbrent

alanbrent commented Aug 28, 2017

I've been doing this for a couple of weeks now, because the 9p driver reliably produces data inconsistency. This is the only way I've been able to create a reliably functioning, reasonably performing minikube-based Kubernetes cluster for local development. It'd be amazing if it became the default.

I'm not sure why the PersistentVolume is needed, though. For "local" storage (e.g. vendor/bundle in Ruby projects) on the Docker host (minikube vm), I'm just using the /data mount.

@alanbrent

By the way, this also "fixes" (obviates the need to fix) the data inconsistency problems with 9p as outlined in #1515.

@r2d4 r2d4 added the kind/feature Categorizes issue or PR as related to a new feature. label Aug 28, 2017
@nathanleclaire

nathanleclaire commented Aug 30, 2017

Cool! You might want to consider starting by extending mount to make mounting an NFS share an option (then gradually working out the bugs), rather than making it the default from the start. Mounting all of /Users is convenient, but kind of overkill, and NFS itself tends to be a bit finicky. Not sure of the current state of the 9p driver, though.

@nathanleclaire

BTW, Windows won't work with NFS (AFAIK), so you should also consider how you might want to implement things like CIFS/Samba sharing and/or rsync (which would be its own weird hairball, requiring a process that monitors the local filesystem on the host side).

@eecue
Author

eecue commented Aug 30, 2017

@nathanleclaire I believe you can get NFS working with Cygwin, but our Windows dev is back tomorrow, so we will experiment. And yeah, sounds like a good plan; we actually don't need all of /Users mounted either, so I'm going to narrow that down to just our dev directory.

@alanbrent

alanbrent commented Aug 30, 2017

In our use case, broadly mounting /Users is acceptable and even desirable. I haven’t yet encountered any issues with the NFS mount whatsoever, and it is significantly faster than 9p and vbox. In my local, not at all robust or sophisticated testing, the write performance penalty of each solution vs. native is the following:

  • vbox: 95%
  • xhyve+9p: 85%
  • xhyve+NFS: 67%

@nathanleclaire

Yeah, no doubt NFS is the fastest of the commonly available options. I've definitely seen it have some odd behavior though, so just tipping you off (like you noted, sometimes it just chugs along without issue). There are also a few quirks in setting it up you might want to document -- I recall having to change a setting on my Mac to allow access to lower (more privileged) ports from the VM.

As far as the /Users thing goes, that's mostly a security caveat from me (and definitely personal/team preference). The main attack vector I'd be concerned with there is developers accidentally running a malicious container/image -- if /Users/user/.aws/credentials is present in the VM, it will be easier for attackers to access, etc.

@alanbrent

@nathanleclaire Since we're having this conversation ... have you automated this solution at all? We're in the early stages of doing so, in order to move to a minikube-based local development workflow, and it's a little thorny re: best balance of "make this easy" and "people should know the tools they'll be using every day".

@nathanleclaire

nathanleclaire commented Aug 31, 2017

@alanbrent Automated setup of NFS? I haven't, but usually to try it out I've used https://github.com/adlogix/docker-machine-nfs (this is what I was referring to about the ports, IIRC). That shell script should be relatively translatable to Go code.

@eecue
Author

eecue commented Aug 31, 2017

@alanbrent I do, I have a shell script that automates the complete dev environment setup (on a Mac, and more or less on a PC; it would be easy to port to Linux as well). It does the following:

  1. Install brew
  2. Install docker
  3. Install kubectl
  4. Install minikube
  5. Build the standard site structure that our local dev env uses
  6. Check out the repos for all of our services from gitlab
  7. Set up your k8s specific public key (and help you upload it to gitlab) which is used for composer/gulp
  8. Start and configure minikube
  9. Build all of the docker images needed
  10. Spin up the deployments for k8s
  11. Add hosts entries in /etc/hosts and NFS settings in /etc/exports, restart nfsd
  12. Install MySQL
  13. Configure MySQL to work with k8s
  14. Download and load Dev DB Dump
  15. Configure all the applications mentioned above with k8s friendly settings so everything should work without configuration

It's all non-destructive, so if something fails midway through, or you need to delete something and recreate it, it still works.

@eecue
Author

eecue commented Aug 31, 2017

@alanbrent @nathanleclaire no real need to automate:

echo "/Users -alldirs -mapall="$(id -u)":"$(id -g)" 192.168.99.100" | sudo tee -a /etc/exports
sudo nfsd restart

@nathanleclaire

nathanleclaire commented Aug 31, 2017

Well, you've got to consider mount/export deletion, as well as what happens if the VM IP address changes, elegantly handling the root privilege required to do so (e.g., Vagrant has a section on this), and making sure the right commands are run for users of other Unixes. I run minikube on Linux, for instance.
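
To make those concerns concrete, here is a rough sketch of what automated export management has to handle, assuming we tag minikube's line in the exports file with a marker comment so a stale entry (e.g. left over after the VM IP changes) can be dropped before the fresh one is written (the marker and function name are made up for illustration):

```shell
#!/bin/sh
# Sketch: idempotently (re)write minikube's NFS export line.
# A marker comment identifies the line we manage, so re-running after
# the VM IP changes replaces the stale entry instead of duplicating it.
MARKER="# minikube-nfs"

update_export() {
    exports_file=$1   # e.g. /etc/exports (writing it needs root)
    vm_ip=$2          # e.g. "$(minikube ip)"
    # Drop any previous minikube-managed line, then append the current one.
    { grep -v "$MARKER" "$exports_file" 2>/dev/null; \
      echo "/Users -alldirs -mapall=$(id -u):$(id -g) $vm_ip $MARKER"; } \
      > "$exports_file.tmp"
    mv "$exports_file.tmp" "$exports_file"
}
```

Something like `sudo sh -c '. ./nfs-exports.sh; update_export /etc/exports "$(minikube ip)"'` followed by `sudo nfsd restart` would then be re-runnable, though the privilege handling and other-Unix cases mentioned above still need real design work.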

@aaron-prindle
Contributor

This makes a lot of sense, we should definitely document how to get NFS working with minikube on each platform. We can also look at extending 'mount' in the future to make setting up the NFS server easier.

@eecue
Author

eecue commented Aug 31, 2017

On that note @aaron-prindle do you know of anyone that's gotten NFS working on Windows? We're struggling with it currently. Using Cygwin.

@georgecrawford

Hi all.

This is the thread for me! I've been suffering from very slow disk I/O in my default /Users mount (xhyve on OS X), and also the issue at #1515. However, I'm a bit of a novice by comparison, so I don't quite understand what I need to do to experiment with NFS.

@eecue is there any chance you could share your shell script that automates the complete dev environment setup, with any private info removed? This is basically exactly what I'm currently doing for my team, and building in NFS improvements would help a great deal.

@alanbrent I was also wondering why the PersistentVolume is required, but didn't understand your comment:

I'm not sure why the PersistentVolume is needed, though. For "local" storage (e.g. vendor/bundle in Ruby projects) on the Docker host (minikube vm), I'm just using the /data mount.

What is the /data mount? If you need to do something to set that up, what is it? Like you, I'm very happy if the whole of /Users is mounted into the VM.

Thanks for any help you can offer!

@Lookyan

Lookyan commented Sep 9, 2017

Yes. We've just moved from vbox+vboxfs to xhyve+NFS and it's really cool. CPU usage has decreased 4x, and response time has improved similarly. It would be really cool if this were automated in minikube and set as the default.
I had some issues with the xhyve installation when resources were available only through a VPN from the host (I solved it using this script: https://gist.github.com/mowings/633a16372fb30ee652336c8417091222).
Another issue was connecting from the xhyve VM to the host by static IP, as in VBox (for now I've solved this only by using VBox host-only adapters). Do you know any solution that doesn't use VBox adapters? I need access from the VM via a static IP.

@georgecrawford
For NFS on a Mac you should do the following steps:

  1. Set the right settings in /etc/exports on the Mac. We use this: sudo sh -c 'echo "/Users -alldirs -mapall=0:0 $(minikube ip)" > /etc/exports'
  2. Then sudo nfsd restart
  3. Then add an NFS persistent volume to your project (eecue gave an example).

Enjoy fast i/o.

@possibilities

This is a good workaround, but I think adding NFS as a requirement to use minikube is too much of a barrier. That said, I suffer from the constant crashing as well. I wish I had constructive advice, but my hope is that the current experience remains the status quo and we don't end up with a much harder-to-use tool. Thanks for listening (:

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 26, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 25, 2018
@huggsboson

docker-machine-nfs solves this really reliably for docker-machine. I wonder if it's possible to port it to minikube and run it after the fact, as something like minikube-nfs.

@huggsboson

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label May 2, 2018
@huggsboson

Looks like someone already did it:
https://github.com/mstrzele/minikube-nfs

@huggsboson

/close

@k8s-ci-robot
Contributor

@huggsboson: you can't close an active issue unless you authored it or you are assigned to it. Can only assign issues to org members and/or repo collaborators.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@pietervogelaar

I created a blog post on setting up a Minikube NFS mount. This way containers can mount source code from the host machine with great performance! http://pietervogelaar.nl/minikube-nfs-mounts

@bhack

bhack commented Sep 28, 2018

Is 9p also giving problems with os.rename?

@tstromberg tstromberg added kind/documentation Categorizes issue or PR as related to documentation. and removed kind/feature Categorizes issue or PR as related to a new feature. labels Sep 28, 2018
@antonmarin

We use NFS too, but it has one catch: NFS doesn't support inotify events, so Node.js watchers don't work :(
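
A possible workaround (my own suggestion, not something confirmed in this thread): most Node.js watchers can fall back to polling, which works over NFS because it stats files instead of relying on inotify. For chokidar-based tools this can be enabled via environment variables:

```shell
# Polling-based watching works over NFS (no inotify needed).
# CHOKIDAR_USEPOLLING is honored by chokidar-based tools;
# CHOKIDAR_INTERVAL is the polling interval in milliseconds.
export CHOKIDAR_USEPOLLING=true
export CHOKIDAR_INTERVAL=1000
# Then start the dev server as usual, e.g.:
# npm start
```

Polling trades some CPU for working change detection, so keep the interval as large as your workflow tolerates.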

@tstromberg
Contributor

I would love to see documentation suggesting NFS as a more performant alternative for power users, but I hesitate to suggest it as the default unless the user experience is relatively painless (for instance, automatically editing /etc/exports).

If we could make --nfs an option to mount, then I'd be more than happy to make it the default and fall back to 9p/vbox/etc. if it isn't available for some reason.

@tstromberg tstromberg added priority/backlog Higher priority than priority/awaiting-more-evidence. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. labels Jan 24, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 29, 2019
@pietervogelaar

Just using https://github.com/vapor-ware/ksync is also a great alternative.

@tstromberg tstromberg added the r/2019q2 Issue was last reviewed 2019q2 label May 22, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 21, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@shadiramadan

@eecue I'm on macOS Catalina and because of that have to mount from /System/Volumes/Data

Following your exports command and config I get

mount.nfs: access denied by server while mounting 192.168.64.2:/System/Volumes/Data/Users/

However, if I change the IP to localhost, I can verify that mounting locally (not from within minikube) does work.

Have you tried this on Catalina? And if so I would love some pointers on how you set it up to work.

@shadiramadan

Silly me, I was using the wrong server IP in the PV.

My NFS setup on macOS Catalina, in case anyone gives this a shot themselves:

echo "/System/Volumes/Data/Users/xyz -alldirs -mapall="$(id -u)":"$(id -g)" $(minikube ip)" | sudo tee -a /etc/exports

sudo nfsd restart

# Verify the export appears
showmount -e

# Get the local IP that is exposed to minikube
# This will be used as the server IP in the PV spec
ifconfig bridge100 | grep inet

nfs:
    server: <nfs-server-ip>
    path: /System/Volumes/Data/Users/xyz

@shadiramadan

You can also test this by:

minikube ssh

mkdir test-mnt
sudo mount <nfs-server-ip>:/System/Volumes/Data/Users/xyz test-mnt
