Filesystem Sharing #20
My current plan:
In parallel, we can consider supporting mutagen integration as well; IIUC it is also used by Docker for Mac.
Do you think this is possible? It requires being able to share mmap'ped memory pages between host and guest. If this is possible, it would be cool.
Just so you know: there is a patch for Darwin 9p support developed within the Nixpkgs project. I have adapted it to current QEMU from a patchset originally by Keno Fischer. The people over at Nixpkgs would certainly be interested in any effort to accelerate file sharing using VirtioFS, which is why I'm now subscribed here.
Thanks @mroi !
@mroi Aside from 9p, do you know whether somebody is working on supporting vsock?
I am preparing to propose the revised patch upstream. Have not had sufficient time to push this forward, but I hope to do this next week. It would certainly be good to have this upstream. Regarding binaries: Nix has a binary cache, but the packages do not run standalone (they depend on other Nix packages). The patch however should apply to the vanilla QEMU sources, so you should be able to just recompile QEMU 6 with it. I'm not aware of vsock developments. Unfortunately I'm not familiar with QEMU development internals at all. I just happened to have a need for 9p and worked on this patch.
Thanks!
The patch seems to have issues 😞 NixOS/nixpkgs#122420 (comment)
Until we can get …

```diff
diff --git a/pkg/qemu/qemu.go b/pkg/qemu/qemu.go
index d9b0778..19f0f75 100644
--- a/pkg/qemu/qemu.go
+++ b/pkg/qemu/qemu.go
@@ -154,7 +154,7 @@ func Cmdline(cfg Config) (string, []string, error) {
 	// Parallel
 	args = append(args, "-parallel", "none")
 
-	// Serial
+	// Legacy Serial
 	serialSock := filepath.Join(cfg.InstanceDir, "serial.sock")
 	if err := os.RemoveAll(serialSock); err != nil {
 		return "", nil, err
@@ -167,7 +167,17 @@ func Cmdline(cfg Config) (string, []string, error) {
 	args = append(args, "-chardev", fmt.Sprintf("socket,id=%s,path=%s,server,nowait,logfile=%s", serialChardev, serialSock, serialLog))
 	args = append(args, "-serial", "chardev:"+serialChardev)
 
-	// We also want to enable vsock and virtfs here, but QEMU does not support vsock and virtfs for macOS hosts
+	// vport for 9p
+	vportSock := filepath.Join(cfg.InstanceDir, "vport.sock")
+	if err := os.RemoveAll(vportSock); err != nil {
+		return "", nil, err
+	}
+	const vportChardev = "char-vport"
+	args = append(args, "-device", "virtio-serial")
+	args = append(args, "-chardev", fmt.Sprintf("socket,id=%s,path=%s,server,nowait", vportChardev, vportSock))
+	args = append(args, "-device", fmt.Sprintf("virtserialport,chardev=%s,name=lima", vportChardev))
+
+	// TODO: use virtio-9p-pci when QEMU supports it for macOS hosts
 
 	// QEMU process
 	args = append(args, "-name", "lima-"+cfg.Name)
```

```go
package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	if err := xmain(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

func xmain() error {
	devPath := "/dev/vport5p1"
	mntPath := "/mnt/foo"
	devFd, err := syscall.Open(devPath, syscall.O_RDWR|syscall.O_NONBLOCK, 0600)
	if err != nil {
		return err
	}
	return syscall.Mount(
		"",
		mntPath,
		"9p",
		0,
		fmt.Sprintf("trans=fd,rfdno=%d,wfdno=%d", devFd, devFd),
	)
}
```

```
# go run main.go
invalid argument
exit status 1
# dmesg
...
[27353.823239] kernel write not supported for file /vport5p1 (pid: 2243 comm: kworker/3:1)
```

The error is happening because the vport device somehow wires up both … (as the dmesg output shows, the kernel cannot write to the vport file directly). I guess I have to inject some pipe as a "shim" fd, or use ssh instead of virtserial.
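For what it's worth, a minimal sketch of that "shim fd" idea might look like the following. This is untested and purely illustrative: it reuses the hypothetical /dev/vport5p1 and /mnt/foo paths from the snippet above, and simply pumps bytes between the vport device and one end of a socketpair so the kernel 9p client never writes to the vport file directly.

```go
// Hypothetical, untested sketch of bridging a vport device to a socketpair
// ("shim fd") so that the 9p trans=fd mount talks to a socket, not the vport.
package main

import (
	"fmt"
	"io"
	"os"
	"syscall"
)

func main() {
	dev, err := os.OpenFile("/dev/vport5p1", os.O_RDWR, 0) // made-up device path, as above
	if err != nil {
		panic(err)
	}

	// fds[0] is handed to the kernel 9p client; fds[1] stays in userspace.
	fds, err := syscall.Socketpair(syscall.AF_UNIX, syscall.SOCK_STREAM, 0)
	if err != nil {
		panic(err)
	}
	shim := os.NewFile(uintptr(fds[1]), "shim")

	// Pump bytes in both directions between the vport device and the shim end.
	go func() { _, _ = io.Copy(shim, dev) }()
	go func() { _, _ = io.Copy(dev, shim) }()

	opts := fmt.Sprintf("trans=fd,rfdno=%d,wfdno=%d", fds[0], fds[0])
	if err := syscall.Mount("", "/mnt/foo", "9p", 0, opts); err != nil {
		panic(err)
	}

	select {} // keep the copier goroutines alive while the mount is in use
}
```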
virtserial seems slower than ssh :(

virtserial (qemu flags: …):

```
[host]$ socat file:urandom-1G unix-connect:foo.sock
[guest]$ time sha256sum /dev/virtio-ports/foo
real	0m10.022s
```

ssh:

```
[host]$ cat urandom-1G | lima time sha256sum -
real	0m7.673s
```
Given that we seem to be stuck with ssh, at least for now, did you check if selecting a different cipher makes any difference for throughput? It seems like we are using the default:
I remember reading that …. I'm somewhat confused though, as even when I add ….
Haven't looked into it. Help wanted 🙏
Btw, QEMU Samba seems roughly two times faster than sshfs:

```
[root@lima-archlinux ~]# time sha256sum /mnt/smb/ubuntu-21.04-desktop-amd64.iso
fa95fb748b34d470a7cfa5e3c1c8fa1163e2dc340cd5a60f7ece9dc963ecdf88 /mnt/smb/ubuntu-21.04-desktop-amd64.iso

real	0m15.578s
user	0m7.054s
sys	0m6.259s
[root@lima-archlinux ~]# time sha256sum /tmp/lima/ubuntu-21.04-desktop-amd64.iso
fa95fb748b34d470a7cfa5e3c1c8fa1163e2dc340cd5a60f7ece9dc963ecdf88 /tmp/lima/ubuntu-21.04-desktop-amd64.iso

real	0m29.707s
user	0m9.244s
sys	0m6.370s
```

Needs Homebrew/homebrew-core#80171 to be merged (which may take time).
Samba is now available on Homebrew (https://github.com/Homebrew/homebrew-core/blob/master/Formula/samba.rb), so I'm planning to replace sshfs with Samba soon. Samba will be executed with …. So Samba will not listen on any actual TCP port. cc @jandubois
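For illustration only (the exact invocation above is elided, and this is not necessarily the approach Lima ended up with): QEMU's user-mode networking can launch smbd itself via the smb= option, exposing the share only on the guest-side network, so nothing listens on a host TCP port. A hedged Go sketch of the corresponding flags, with a made-up host directory:

```go
// Hypothetical sketch: exporting a host directory via QEMU's built-in SMB
// support (slirp/user-mode networking), so no host TCP port is opened.
package main

import "fmt"

func main() {
	sharedDir := "/Users/example/shared" // hypothetical host directory to export
	var args []string
	// slirp runs its own smbd and exposes the share only inside the guest
	// (typically as //10.0.2.4/qemu); the host network is untouched.
	args = append(args, "-netdev", fmt.Sprintf("user,id=net0,smb=%s", sharedDir))
	args = append(args, "-device", "virtio-net-pci,netdev=net0")
	fmt.Println(args)
}
```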
@AkihiroSuda That is awesome to hear. From the benchmarks I have seen so far, it is supposedly 2x faster with most workloads. What would it take for us to support inotify events?
It would be interesting to see some benchmarks between sshfs, virtio-9p-pci, NFS, and CIFS/SMB... The virtfs authors seemed convinced: https://www.kernel.org/doc/ols/2010/ols2010-pages-109-120.pdf
On Linux, QEMU using virtfs is a lot faster than sshfs and even 9p. If anything is slowed down on macOS due to kernel lockdown/security, don't take it as a given for Linux users! Please always keep Linux users in mind, since lima itself is becoming more and more a tool for cross-platform development (macOS, Linux, WSL/Windows). Especially k8s/k3s is a really nice scenario! Don't make your decisions based on macOS only! Thank you!

Docker Desktop on Windows uses 9p (they switched away from SMB, which they used before). Docs for Linux: https://www.linux-kvm.org/page/9p_virtio

Also check out how wimpy uses hardware acceleration / virt* kernel modules on Linux with QEMU v6.x (yup, 6.0+!): …
It's not Docker per se, but WSL2 that uses 9P for any WSL VM: https://devblogs.microsoft.com/commandline/a-deep-dive-into-how-wsl-allows-windows-to-access-linux-files/

I've been testing out WSL2 integration for other team members, and the filesystem performance is still not at a point where it's usable for large projects. For example, PHPStorm hangs and crashes trying to index a basic Drupal 9 codebase over the ….

To date, mutagen (which is built into ddev) is giving our team the best experience. That work came about because Docker Desktop dropped their alpha mutagen support, and I think it would be really nice if support for it was at the lima (or colima) layer instead of at the next layer up. I'm not sure it's worth the development effort to only get a relatively small performance improvement. If performance is any more than 20-50% slower than native IO, it's still too slow for most of our uses.
Looks like there has been some progress/success with 9p and QEMU: containers/podman#8016 (comment).
It is still en route to be merged for 7.0, and still available as a patch for 6.x, so no real news from last year, except that more people are actually trying it now.
After the 9p patch (https://lists.gnu.org/archive/html/qemu-devel/2022-03/msg02090.html) gets merged, we can switch the default from reverse sshfs to 9p; however, we will probably have to continue maintaining reverse sshfs for the EL8 family 😞

```
[root@lima-rocky ~]# grep -i 9p /boot/config-4.18.0-348.el8.0.2.x86_64
# CONFIG_NET_9P is not set
```
They are waiting for virtiofs support (or NFS over vsock). Waiting, waiting, waiting. It's been years already: https://access.redhat.com/discussions/1119043

We discussed it as a replacement for VirtualBox Shared Folders (vboxsf), but the outcome was: sshfs.
The 9p patch is now merged in the master branch of QEMU: qemu/qemu@f45cc81
Note that it is the 9p for Darwin that is "new"; the 9p for Linux has been there for ages.
I added a "mountType" selector. Theoretically it could also include something like NFS.
Yeah, I guess most of us are interested in the Darwin version, as we're trying to mount on macOS to replace Docker Desktop for good.
The 9p patch is now also merged in ….
That's right -- the 9p patch has been backported to 6.2 in both ….
The brew package is insanely large, since it includes some 30+ architectures. But that's another story... (One can compile and install just amd64/arm64 and their firmware, if needed.)
Docker Desktop Edge has a VirtioFS option that is looking very good.
Looks like that dropped to stable! https://docs.docker.com/desktop/mac/release-notes/#docker-desktop-460
VirtioFS is still unavailable in the macOS version of QEMU. Apple's Virtualization.framework supports VirtioFS, but it is not the best fit for Lima, as it lacks support for firmware, qcow2, slirp, and a lot of other QEMU features. And it is proprietary, too.
Yes, it is available as an experimental feature and will (hopefully) replace the gRPC FUSE implementation. Instead of Hypervisor.framework it requires the new Virtualization.framework, which is available since Big Sur. But VirtioFS in Docker Desktop for Mac is incredibly fast. With that feature, the experience of developing inside containers is very comfortable. It would be very nice to have it in lima (perhaps optionally, with additional libs / runtime options).
There is a new project, FUSE-T, which provides a kext-less FUSE implementation for macOS: https://news.ycombinator.com/item?id=32726166 IMHO, it looks very promising. Maybe it would be very nice for lima.
It seems to have some licensing issues similar to macfuse?
It isn't open source, and it isn't even free for commercial use, so FUSE-T is not an option for Lima. Also, NFS is not really that simple to use on macOS: you need to be ….
Hello, some interesting conversation in this thread. I was wondering, looking at this through a pragmatic lens, whether there is an immediate solution (maybe the 9p patch getting folded in) while the longer-term solution is in the works (which sounds like VirtioFS)? I, and others, would be thrilled to use Rancher Desktop (a downstream package that uses lima) on a regular basis, but there's one thing stopping us: rancher-sandbox/rancher-desktop#1209 (mounting volumes). Appreciate any thoughts or feedback.
Looks like Lima documented using virtiofs 6 days ago: c18ae23
Yes, this is now available in v0.14.0-beta.0.
Any updates to this? Looking for a way to solve the missing inotify events in docker-compose volume mounts in Colima and Lima on macOS.
inotify is discussed here: …
This adds support for using virtiofs to mount filesystems on Linux hosts, via QEMU's vhost-user-fs-pci device + the Rust implementation of virtiofsd.

In a simple "benchmark" running sha256sum on a copy of the Windows 11 ARM64 VHDX (because it's a large file I randomly had lying around):

- reverse-sshfs took ~21s
- 9p took ~13-15s
- virtiofs took ~6-7s

(For comparison, running it directly on the host system took ~5s.)

This is marked as "experimental" because it has undergone testing by...me and relies on additional tools installed other than just QEMU. Unfortunately, this does *not* include support for DAX, because that's not merged into upstream QEMU yet, making it rather difficult to test.

Ref. lima-vm#20.

Signed-off-by: Ryan Gonzalez <git@refi64.dev>
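As a rough illustration of what that wiring involves (this is not the code from the PR; the socket path, tag, and sizes are made-up examples): vhost-user-fs-pci talks to an externally started virtiofsd over a unix socket, and the guest RAM must come from a shared memory backend so virtiofsd can map it.

```go
// Hypothetical sketch of the QEMU flags for a vhost-user-fs (virtiofs) mount.
package main

import "fmt"

func main() {
	vfsdSock := "/tmp/virtiofsd.sock" // started separately, e.g. by the Rust virtiofsd
	var args []string
	// Guest RAM as a shared memory backend (size must match the -m value).
	args = append(args, "-object", "memory-backend-memfd,id=mem,size=4G,share=on")
	args = append(args, "-numa", "node,memdev=mem")
	// Chardev connected to virtiofsd's unix socket, exposed to the guest via a tag.
	args = append(args, "-chardev", fmt.Sprintf("socket,id=char-virtiofs,path=%s", vfsdSock))
	args = append(args, "-device", "vhost-user-fs-pci,chardev=char-virtiofs,tag=mount0")
	fmt.Println(args)
	// In the guest, the share would then be mounted with:
	//   mount -t virtiofs mount0 /mnt/foo
}
```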
Hello. I would like to give some ideas and advice about filesystem sharing. So, since we are using QEMU, let's see which options we have: …

Running `git status` in a shared directory with a middle-sized project takes at least half a minute. I would rather just place the files in the VM, access them via some remote file access protocol, and use vscode with remote access (sad, but they're proprietary). I think it is so slow because it is synchronous: whenever you read some file, do a stat call, etc., you have to wait for that operation to finish.