mount.nfs: access denied by server while mounting #8704
This particular issue has been plaguing me for about 1.5 years. What works for me is to stay on Vagrant 1.8.x running VirtualBox 5.0.x; currently that means 1.8.7 and 5.0.40 respectively. I took a crack at debugging the Vagrant code and can confirm it is structured such that it grabs the first network interface (usually eth0/eth1) and assumes that is what NFS should bind to. However, when Docker is installed, the docker0 interface ends up first on the list, so NFS binds to that instead.
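A commonly suggested workaround for this interface-selection problem is to give the VM an explicit host-only private network, so the NFS export targets a predictable subnet rather than whatever interface happens to come first. This is only a sketch under that assumption; the box name and IP address below are arbitrary examples, not taken from this thread:

```ruby
# Sketch of a workaround: pin a host-only private network so the NFS
# mount goes over a predictable interface instead of docker0.
# The box and IP are illustrative examples only.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"
  config.vm.network "private_network", ip: "192.168.33.10"
  config.vm.synced_folder ".", "/vagrant", type: "nfs"
end
```

Whether this sidesteps the docker0 ordering issue may depend on the Vagrant version in use.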
Just wanted to chime in and say that I have this issue as well.
Also having the same issue
Running macOS High Sierra 10.13.6, FWIW. I also run https://github.com/adlogix/docker-machine-nfs for a speedy Docker Toolbox, and for High Sierra I had to modify my NFS bind-mount parameters. I have not tried adjusting that here, as the real issue seems to be Vagrant grabbing the wrong interface, but I figured I'd mention it since I notice @chrisrollins65 is also running High Sierra.
Just to follow up: changing the bind-mount parameters did not help at all. It definitely seems like a choosing-the-wrong-interface situation. I did, however, find that switching away from NFS and using rsync+gatling worked. For that,
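An rsync+gatling setup of the kind mentioned above would typically look roughly like this. This is a sketch, not the commenter's exact configuration; it assumes the vagrant-gatling-rsync plugin is installed, and the latency value and exclude list are arbitrary examples:

```ruby
# Rough sketch of an rsync + gatling synced-folder setup.
# Assumes: vagrant plugin install vagrant-gatling-rsync
Vagrant.configure("2") do |config|
  config.vm.synced_folder ".", "/vagrant",
    type: "rsync",
    rsync__exclude: [".git/", "node_modules/"]

  # Coalesce filesystem events so large trees don't trigger a sync storm.
  if Vagrant.has_plugin?("vagrant-gatling-rsync")
    config.gatling.latency = 2.5
    config.gatling.time_format = "%H:%M:%S"
  end
end
```

With this in place, `vagrant gatling-rsync-auto` (rather than `vagrant rsync-auto`) watches the folders and syncs on change.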
So, further follow-up: I determined that it may be a host-system issue. From my Mac OS X host:

This was determined by using

So I am effectively having to use

I have not determined the cause yet. Another developer has an almost identical setup sans docker-machine-nfs and it "just works". We both have the same folder ownership/group permissions on /Users and our respective user sub-folders. Vagrant is adding the right /etc/exports entry, with the right IP, but either nfsd is not restarting properly to pick it up, or my folders can only be exported from high up in the folder hierarchy. I wish I had an easier repro or fix, such as specific ownership settings or how to share the folder with nfsd's group, but I hope my debugging steps can help someone else get to the bottom of their situation.

EDIT: I do have to run this each time I up the box, unfortunately.
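Host-side checks like the ones described above can be done with macOS's own NFS tooling. A sketch of the usual diagnostic sequence, run on the Mac host (these are standard macOS commands, though the exact output varies by system):

```shell
# Verify what the host is actually exporting and that nfsd accepts it.
cat /etc/exports              # confirm Vagrant wrote the expected entry
sudo nfsd checkexports        # syntax-check /etc/exports; silent if OK
showmount -e localhost        # list the exports nfsd is currently serving
sudo nfsd restart             # force nfsd to re-read /etc/exports
```

If `showmount -e` does not list the directory that `/etc/exports` contains, the problem is on the host side rather than in the guest's mount command.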
Also running into a similar issue if I start more than one Vagrant host; it appears to coincide with a recent upgrade to

Reverting to shared folders for now. Errors I was seeing:

vagrant reload
The problem still exists with Vagrant 2.2.5 and macOS Catalina 10.15.
Hi, everyone. Catalina has been annoying me since it was released (Catalina is a bad woman, isn't she?). I used to have this problem, and I fixed it, so I wanted to write down the things I did. My environment is this now:

**Things I did**

I had to update the Vagrantfile and /etc/exports. But first, I did this:

```
❯❯❯ vagrant plugin install vagrant-vbguest
❯❯❯ vagrant plugin install vagrant-omnibus
```

Actually, I don't know whether this step is effective for this problem or not, so you might not need it.

**Vagrantfile**

First, I updated the synced-folder line:

```ruby
config.vm.synced_folder "/System/Volumes/Data" + Dir.pwd, '/[DIR-NAME]', type: 'nfs'
```

I added the /System/Volumes/Data prefix via: https://apple.stackexchange.com/questions/367158/whats-system-volumes-data

**/etc/exports**

Next, I updated /etc/exports:

```
❯❯❯ sudo vi /etc/exports

# VAGRANT-BEGIN: 501 3cee32a0-9e19-4a6e-94b0-2f042dbc9180
"/System/Volumes/Data/Users/seiyamaeda/[DIR-NAME]/" xxx.xxx.xx.xx -alldirs -mapall=501:20
# VAGRANT-END: 501 3cee32a0-9e19-4a6e-94b0-2f042dbc9180

❯❯❯ sudo nfsd restart # For refreshing this config
```

After that, I reloaded Vagrant, and this problem vanished.
This might work if I'm the only user of the Vagrantfile, or if the whole development team uses macOS. In our case we have a mixture of Linux and macOS users, so unfortunately we cannot hardcode any paths. And modifying /etc/exports manually is not very elegant.
@kwn Yeah, this approach depends on each development team's situation, so yes, it is not an elegant way. Just now, I recreated
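For mixed Linux/macOS teams, one way to avoid hardcoding the Catalina prefix is to prepend /System/Volumes/Data only when the host is a Mac. This is a sketch; `data_volume_path` is a hypothetical helper, not part of Vagrant's API:

```ruby
# Hypothetical helper: prepend Catalina's data-volume prefix only on
# macOS hosts, so Linux teammates can share the same Vagrantfile.
PREFIX = "/System/Volumes/Data".freeze

def data_volume_path(path, host_os = RUBY_PLATFORM)
  if host_os.include?("darwin") && !path.start_with?(PREFIX)
    PREFIX + path
  else
    path
  end
end

# In a Vagrantfile this could be used as:
#   config.vm.synced_folder data_volume_path(Dir.pwd), "/[DIR-NAME]", type: "nfs"
```

Note that on pre-Catalina macOS the prefix is unnecessary, so a stricter version might also check the host's OS version before prepending.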
Thanks a lot @seiyamaeda!!! In the Vagrantfile I put:

Also, I had to remove old records from ~/.ssh/known_hosts to connect to Vagrant via SSH (from Sequel Pro).
HUGE THANK YOU @seiyamaeda for the perfect instructions. I've been trying to debug this for days, and following your steps worked immediately.
The issue has been resolved in Vagrant 2.2.6, which was released yesterday. I just tested it on Catalina and no more workarounds are needed.
I am still having this issue when using synced folders on a USB/external volume.

ProductName: Mac OS X
VirtualBox: Version 6.0.14 r133895 (Qt5.6.3)

I have all my repositories on an external USB drive (1 TB). Under Mojave this worked like a charm, but on Catalina I have not managed to set up the VirtualBox machine because of a "permission error". For this command on the guest machine..

..I get the following error..

I am using the laravel/homestead box and tried specifying the paths as follows:

The Vagrant project root is /System/Volumes/Data/Volumes/VMs/Repositories/Homestead, so I also tried relative paths. The volume is not encrypted. For testing I gave VirtualBox, Terminal, mount and mount_nfs full disk access in the macOS privacy settings. The setup works (as under High Sierra and Mojave) only if I move the "Repositories" folder beneath "/Volumes/Macintosh HD/Users/marvin/". Am I missing something in the permissions for other volumes?
FWIW I think I've just discovered a potential issue with this: if your path contains a space, as is allowed in macOS, that space will break the NFS exports line created by Vagrant. As a workaround, renaming my directory to remove the space has resolved the issue. |
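If renaming the directory isn't an option, a Vagrantfile can at least fail fast with a clear message instead of producing a cryptic NFS error. This is a sketch; `nfs_safe_path?` is a hypothetical helper written for illustration, and the check itself is plain Ruby:

```ruby
# Guard sketch: detect a space in the project path, since the space
# corrupts the /etc/exports line Vagrant generates for NFS shares.
def nfs_safe_path?(path)
  !path.include?(" ")
end

# In a Vagrantfile, one could fail fast like:
#   abort "Path contains a space; NFS export will break" unless nfs_safe_path?(Dir.pwd)
```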
What worked for me:
Btw.: my shared folders reside on an external drive.
@SteveRohrlack Can you please let me know what "item" exactly you are talking about in your first point? I'm having this issue even with Vagrant 2.2.6, and I have already done the other two items in your list.
That was a typo; I meant "iterm". I fixed it in my comment.
Thanks for your answer! I use Alacritty and just gave it full disk access, but I'm still having this issue 😞
Same issue here. Running Vagrant 2.2.6, macOS 10.15.1, VirtualBox 6.0.14, and I keep getting

Gave full disk access to nfsd, iTerm and VirtualBox. Also turned off FileVault to disable encryption. Cleaned

Code is on an APFS (case-sensitive) volume. Any more ideas what to try?
I had the same issue. All I had to do was disable starting Docker on startup, per this post. Do a

EDIT: I read this entire thread and realize my development environment is not as complicated as all of yours; I failed to mention that. Sorry, I was in a coffee shop and in a hurry, and later forgot to update this. Anyway, I'm working on two microservices that, for development purposes, run on a single Ubuntu machine while my host is a Mac. One microservice is in NodeJS, the other in Go. My Go project points to the /var/www dir on my virtual machine, which has the www-data group and the usual web permissions.

The main reason this error was happening, for me at least, is that I used absolute paths together with Docker. So, for example,

would fail. Then I disabled Docker on boot as above, but it still didn't work. So I changed it to a relative path equivalent to the same absolute path, with Docker on boot turned off:

```ruby
config.vm.synced_folder "../applications/my-app-name/dir-to-sync", "/var/www/html"
```

And it worked. The only downside is that, since Go is installed on the Vagrant Linux OS, my IDE does not know of any third-party packages that I install, so it's all red :). But I can live with that. Hope someone finds this helpful.
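On a systemd-based Ubuntu guest, disabling Docker's autostart as described above would typically look like this. This is a sketch of the usual systemd commands, not the exact steps from the linked post:

```shell
# On the Ubuntu guest: stop Docker from starting at boot so docker0
# doesn't exist when Vagrant picks the interface for the NFS mount.
sudo systemctl disable docker.service
sudo systemctl disable containerd.service

# Start it manually once the NFS mount is up, when needed:
sudo systemctl start docker.service
```

Re-enabling later is the mirror image (`sudo systemctl enable docker.service`).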
For me, the VM I'm trying to spin up doesn't have Docker in it. I think it's just the fact that the shared folder(s) I configured for that VM are on a different volume (
Installing the plugins and modifying the Vagrantfile was not required. Only updating the
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
#5424 is not fixed. The cause is the installation of Docker on the guest. Please see my comments there.