
Multipass and Vagrant with VirtualBox clash on macOS #3674

Open
pkarjala opened this issue Sep 12, 2024 · 13 comments
@pkarjala

Describe the bug

When starting a new Multipass instance, it hangs and its status becomes "Unknown".

To Reproduce

From either the GUI or the CLI, launch a new Ubuntu 24.04 LTS image with default settings.

  1. multipass launch 24.04

Expected behavior

The instance should launch and be available for use.

Logs

When launching from GUI:

[2024-09-12T11:10:03.774] [debug] [blueprint provider] Loading "anbox-cloud-appliance" v1
[2024-09-12T11:10:03.775] [debug] [blueprint provider] Loading "charm-dev" v1
[2024-09-12T11:10:03.776] [debug] [blueprint provider] Loading "docker" v1
[2024-09-12T11:10:03.776] [debug] [blueprint provider] Loading "jellyfin" v1
[2024-09-12T11:10:03.776] [debug] [blueprint provider] Loading "minikube" v1
[2024-09-12T11:10:03.777] [debug] [blueprint provider] Loading "ros-noetic" v1
[2024-09-12T11:10:03.777] [debug] [blueprint provider] Loading "ros2-humble" v1
[2024-09-12T11:10:03.784] [debug] [qemu-system-x86_64] [12137] started: qemu-system-x86_64 --version
[2024-09-12T11:10:03.844] [debug] [qemu-img] [12138] started: qemu-img info /var/root/Library/Caches/multipassd/qemu/vault/images/noble-20240821/ubuntu-24.04-server-cloudimg-amd64.img
[2024-09-12T11:10:03.882] [debug] [qemu-img] [12139] started: qemu-img resize /var/root/Library/Application Support/multipassd/qemu/vault/instances/rich-labrador/ubuntu-24.04-server-cloudimg-amd64.img 5368709120
[2024-09-12T11:10:03.926] [debug] [qemu-img] [12140] started: qemu-img snapshot -l /var/root/Library/Application Support/multipassd/qemu/vault/instances/rich-labrador/ubuntu-24.04-server-cloudimg-amd64.img
[2024-09-12T11:10:03.942] [debug] [qemu-img] [12141] started: qemu-img amend -o compat=1.1 /var/root/Library/Application Support/multipassd/qemu/vault/instances/rich-labrador/ubuntu-24.04-server-cloudimg-amd64.img
[2024-09-12T11:10:03.973] [debug] [rich-labrador] process working dir ''
[2024-09-12T11:10:03.973] [info] [rich-labrador] process program 'qemu-system-x86_64'
[2024-09-12T11:10:03.974] [info] [rich-labrador] process arguments '-accel, hvf, -drive, file=/Library/Application Support/com.canonical.multipass/bin/../Resources/qemu/edk2-x86_64-code.fd,if=pflash,format=raw,readonly=on, -cpu, host, -nic, vmnet-shared,model=virtio-net-pci,mac=52:54:00:dc:0d:a7, -device, virtio-scsi-pci,id=scsi0, -drive, file=/var/root/Library/Application Support/multipassd/qemu/vault/instances/rich-labrador/ubuntu-24.04-server-cloudimg-amd64.img,if=none,format=qcow2,discard=unmap,id=hda, -device, scsi-hd,drive=hda,bus=scsi0.0, -smp, 1, -m, 1024M, -qmp, stdio, -chardev, null,id=char0, -serial, chardev:char0, -nographic, -cdrom, /var/root/Library/Application Support/multipassd/qemu/vault/instances/rich-labrador/cloud-init-config.iso'
[2024-09-12T11:10:03.979] [debug] [qemu-system-x86_64] [12142] started: qemu-system-x86_64 -nographic -dump-vmstate /private/var/folders/zz/zyxvpxvq6csfxvn_n0000000000000/T/multipassd.lMYTrt
[2024-09-12T11:10:04.081] [info] [rich-labrador] process state changed to Starting
[2024-09-12T11:10:04.087] [info] [rich-labrador] process state changed to Running
[2024-09-12T11:10:04.087] [debug] [qemu-system-x86_64] [12143] started: qemu-system-x86_64 -accel hvf -drive file=/Library/Application Support/com.canonical.multipass/bin/../Resources/qemu/edk2-x86_64-code.fd,if=pflash,format=raw,readonly=on -cpu host -nic vmnet-shared,model=virtio-net-pci,mac=52:54:00:dc:0d:a7 -device virtio-scsi-pci,id=scsi0 -drive file=/var/root/Library/Application Support/multipassd/qemu/vault/instances/rich-labrador/ubuntu-24.04-server-cloudimg-amd64.img,if=none,format=qcow2,discard=unmap,id=hda -device scsi-hd,drive=hda,bus=scsi0.0 -smp 1 -m 1024M -qmp stdio -chardev null,id=char0 -serial chardev:char0 -nographic -cdrom /var/root/Library/Application Support/multipassd/qemu/vault/instances/rich-labrador/cloud-init-config.iso
[2024-09-12T11:10:04.087] [info] [rich-labrador] process started
[2024-09-12T11:10:04.089] [debug] [rich-labrador] Waiting for SSH to be up
[2024-09-12T11:10:04.237] [debug] [rich-labrador] QMP: {"QMP": {"version": {"qemu": {"micro": 1, "minor": 2, "major": 8}, "package": ""}, "capabilities": ["oob"]}}

[2024-09-12T11:10:04.242] [warning] [rich-labrador] Could not open option rom 'kvmvapic.bin': No such file or directory

[2024-09-12T11:10:04.242] [warning] [qemu-system-x86_64]
[2024-09-12T11:10:04.279] [debug] [rich-labrador] QMP: {"return": {}}

[2024-09-12T11:10:04.922] [debug] [rich-labrador] QMP: {"timestamp": {"seconds": 1726175404, "microseconds": 922404}, "event": "RTC_CHANGE", "data": {"offset": 0, "qom-path": "/machine/unattached/device[3]/rtc"}}

[2024-09-12T11:10:05.924] [debug] [rich-labrador] QMP: {"timestamp": {"seconds": 1726175405, "microseconds": 133553}, "event": "RTC_CHANGE", "data": {"offset": -1, "qom-path": "/machine/unattached/device[3]/rtc"}}

[2024-09-12T11:10:29.541] [debug] [rich-labrador] QMP: {"timestamp": {"seconds": 1726175429, "microseconds": 541772}, "event": "NIC_RX_FILTER_CHANGED", "data": {"path": "/machine/unattached/device[21]/virtio-backend"}}

[2024-09-12T11:16:33.800] [debug] [daemon] Returning setting local.driver=qemu

When launching from CLI:

% multipass launch 24.04
launch failed: The following errors occurred:
delectable-ling: timed out waiting for response

Logs:

[2024-09-12T11:32:09.682] [debug] [blueprint provider] Loading "anbox-cloud-appliance" v1
[2024-09-12T11:32:09.682] [debug] [blueprint provider] Loading "charm-dev" v1
[2024-09-12T11:32:09.683] [debug] [blueprint provider] Loading "docker" v1
[2024-09-12T11:32:09.684] [debug] [blueprint provider] Loading "jellyfin" v1
[2024-09-12T11:32:09.684] [debug] [blueprint provider] Loading "minikube" v1
[2024-09-12T11:32:09.685] [debug] [blueprint provider] Loading "ros-noetic" v1
[2024-09-12T11:32:09.685] [debug] [blueprint provider] Loading "ros2-humble" v1
[2024-09-12T11:32:45.390] [debug] [qemu-system-x86_64] [12231] started: qemu-system-x86_64 --version
[2024-09-12T11:32:45.444] [debug] [qemu-img] [12232] started: qemu-img info /var/root/Library/Caches/multipassd/qemu/vault/images/noble-20240821/ubuntu-24.04-server-cloudimg-amd64.img
[2024-09-12T11:32:45.481] [debug] [qemu-img] [12233] started: qemu-img resize /var/root/Library/Application Support/multipassd/qemu/vault/instances/delectable-ling/ubuntu-24.04-server-cloudimg-amd64.img 5368709120
[2024-09-12T11:32:45.503] [debug] [qemu-img] [12234] started: qemu-img snapshot -l /var/root/Library/Application Support/multipassd/qemu/vault/instances/delectable-ling/ubuntu-24.04-server-cloudimg-amd64.img
[2024-09-12T11:32:45.538] [debug] [qemu-img] [12235] started: qemu-img amend -o compat=1.1 /var/root/Library/Application Support/multipassd/qemu/vault/instances/delectable-ling/ubuntu-24.04-server-cloudimg-amd64.img
[2024-09-12T11:32:45.553] [debug] [delectable-ling] process working dir ''
[2024-09-12T11:32:45.553] [info] [delectable-ling] process program 'qemu-system-x86_64'
[2024-09-12T11:32:45.553] [info] [delectable-ling] process arguments '-accel, hvf, -drive, file=/Library/Application Support/com.canonical.multipass/bin/../Resources/qemu/edk2-x86_64-code.fd,if=pflash,format=raw,readonly=on, -cpu, host, -nic, vmnet-shared,model=virtio-net-pci,mac=52:54:00:ee:da:48,
-device, virtio-scsi-pci,id=scsi0, -drive, file=/var/root/Library/Application Support/multipassd/qemu/vault/instances/delectable-ling/ubuntu-24.04-server-cloudimg-amd64.img,if=none,format=qcow2,discard=unmap,id=hda, -device, scsi-hd,drive=hda,bus=scsi0.0, -smp, 1, -m, 1024M, -qmp, stdio, -chardev, null,id=char0, -serial, chardev:char0, -nographic, -cdrom, /var/root/Library/Application Support/multipassd/qemu/vault/instances/delectable-ling/cloud-init-config.iso'
[2024-09-12T11:32:45.558] [debug] [qemu-system-x86_64] [12236] started: qemu-system-x86_64 -nographic -dump-vmstate /private/var/folders/zz/zyxvpxvq6csfxvn_n0000000000000/T/multipassd.TMBVif
[2024-09-12T11:32:45.677] [info] [delectable-ling] process state changed to Starting
[2024-09-12T11:32:45.682] [info] [delectable-ling] process state changed to Running
[2024-09-12T11:32:45.682] [debug] [qemu-system-x86_64] [12237] started: qemu-system-x86_64 -accel hvf -drive file=/Library/Application Support/com.canonical.multipass/bin/../Resources/qemu/edk2-x86_64-code.fd,if=pflash,format=raw,readonly=on -cpu host -nic vmnet-shared,model=virtio-net-pci,mac=52:54:00:ee:da:48 -device virtio-scsi-pci,id=scsi0 -drive file=/var/root/Library/Application Support/multipassd/qemu/vault/instances/delectable-ling/ubuntu-24.04-server-cloudimg-amd64.img,if=none,format=qcow2,discard=unmap,id=hda -device scsi-hd,drive=hda,bus=scsi0.0 -smp 1 -m 1024M -qmp stdio -chardev null,id=char0 -serial chardev:char0 -nographic -cdrom /var/root/Library/Application Support/multipassd/qemu/vault/instances/delectable-ling/cloud-init-config.iso
[2024-09-12T11:32:45.682] [info] [delectable-ling] process started
[2024-09-12T11:32:45.684] [debug] [delectable-ling] Waiting for SSH to be up
[2024-09-12T11:32:45.840] [debug] [delectable-ling] QMP: {"QMP": {"version": {"qemu": {"micro": 1, "minor": 2, "major": 8}, "package": ""}, "capabilities": ["oob"]}}

[2024-09-12T11:32:45.845] [warning] [delectable-ling] Could not open option rom 'kvmvapic.bin': No such file or directory

[2024-09-12T11:32:45.845] [warning] [qemu-system-x86_64]
[2024-09-12T11:32:45.880] [debug] [delectable-ling] QMP: {"return": {}}

[2024-09-12T11:32:46.506] [debug] [delectable-ling] QMP: {"timestamp": {"seconds": 1726176766, "microseconds": 506643}, "event": "RTC_CHANGE", "data": {"offset": -1, "qom-path": "/machine/unattached/device[3]/rtc"}}

[2024-09-12T11:32:47.527] [debug] [delectable-ling] QMP: {"timestamp": {"seconds": 1726176766, "microseconds": 730314}, "event": "RTC_CHANGE", "data": {"offset": -1, "qom-path": "/machine/unattached/device[3]/rtc"}}

[2024-09-12T11:33:09.756] [debug] [delectable-ling] QMP: {"timestamp": {"seconds": 1726176789, "microseconds": 756746}, "event": "NIC_RX_FILTER_CHANGED", "data": {"path": "/machine/unattached/device[21]/virtio-backend"}}

[2024-09-12T11:37:03.491] [debug] [async task] fetch manifest periodically

Additional info

  • OS: macOS 14.6.1
  • multipass version:
multipass   1.14.0+mac
multipassd  1.14.0+mac
  • multipass info:
Name:           rich-labrador
State:          Unknown
Snapshots:      0
IPv4:           --
Release:        --
Image hash:     0e25ca6ee9f0 (Ubuntu 24.04 LTS)
CPU(s):         --
Load:           --
Disk usage:     --
Memory usage:   --
Mounts:         --
Name:           delectable-ling
State:          Unknown
Snapshots:      0
IPv4:           --
Release:        --
Image hash:     0e25ca6ee9f0 (Ubuntu 24.04 LTS)
CPU(s):         --
Load:           --
Disk usage:     --
Memory usage:   --
Mounts:         --
  • multipass get local.driver: qemu

Additional context

I have rebooted the host computer and have uninstalled and reinstalled Multipass entirely. It is installed via the .pkg installer (not Homebrew).

Initially, an instance booted without issue; after that, all instances timed out while attempting to boot. Stopping and restarting the initially working instance also resulted in a timeout, leaving it unusable.

@pkarjala pkarjala added bug needs triage Issue needs to be triaged labels Sep 12, 2024
@sharder996
Contributor

Hi @pkarjala, can you verify that your firewall is not enabled and that you are not running a VPN?

@sharder996 sharder996 removed the needs triage Issue needs to be triaged label Sep 13, 2024
@pkarjala
Author

I can confirm that I am not on a VPN, and that the macOS Firewall is disabled. Thanks!

@sharder996
Contributor

One of my colleagues wrote an excellent troubleshooting guide for SSH connection issues here.

From your posted logs I can see that the instance is booting up successfully, but for some reason Multipass is not able to connect to it. As explained in the discussion linked above, you could try using tcpdump to check for DHCP requests and replies, or check the contents of dhcp_leases.
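The suggested lease-file check can be sketched as a small helper. Everything here is illustrative: `lease_for_mac` is a hypothetical name (not a Multipass or macOS command), and the `/var/db/dhcpd_leases` path and its `hw_address=1,…` format are taken from the output shown elsewhere in this thread.

```shell
# Hedged sketch: check whether a MAC address from the Multipass logs has an
# entry in macOS's /var/db/dhcpd_leases. The file stores MACs with the leading
# zero of each octet stripped (52:54:00:c3:11:46 -> 52:54:0:c3:11:46), so the
# address is normalized before grepping.
lease_for_mac() {
    mac="$1"
    leases="${2:-/var/db/dhcpd_leases}"
    norm=$(printf '%s' "$mac" | awk -F: '{
        out = ""
        for (i = 1; i <= NF; i++) { o = $i; sub(/^0/, "", o); out = out (i > 1 ? ":" : "") o }
        print out
    }')
    if grep -q "hw_address=1,$norm" "$leases" 2>/dev/null; then
        echo "lease found for $mac"
    else
        echo "no lease for $mac"
    fi
}

# Demonstration against a sample lease entry (format as reported in this thread):
sample=$(mktemp)
cat > "$sample" <<'EOF'
{
        name=testboot
        ip_address=192.168.64.2
        hw_address=1,52:54:0:c3:11:46
        identifier=1,52:54:0:c3:11:46
        lease=0x66f48056
}
EOF
lease_for_mac 52:54:00:c3:11:46 "$sample"   # prints: lease found for 52:54:00:c3:11:46
rm -f "$sample"
```

On a real host you would run it against the live file, e.g. `sudo sh -c '… lease_for_mac 52:54:00:c3:11:46'`, with the MAC taken from the `process arguments` line in the daemon log.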

@pkarjala
Author

Thank you, I will review and follow up!

@Gab2thebo

Same issue here. I updated to macOS Sequoia today, and my Multipass instances are not starting and are in an unknown state.

@sharder996
Contributor

@Gab2thebo Yes, there is a known issue with the new Sequoia update that prevents Multipass from acquiring the IP addresses of instances. It is being tracked in #3661.

@pkarjala
Author

Thank you again. I was able to get back to troubleshooting this issue following the steps listed in #3660.

It looks like I am in situation 3: each time an instance starts, I can see the DHCP lease being granted, and it also shows up in /var/db/dhcpd_leases, as follows:

% sudo tcpdump -i bridge100 udp port 67 and port 68
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on bridge100, link-type EN10MB (Ethernet), snapshot length 524288 bytes
11:16:47.028466 IP 0.0.0.0.bootpc > broadcasthost.bootps: BOOTP/DHCP, Request from 52:54:00:c3:11:46 (oui Unknown), length 295
11:16:47.067659 IP 192.168.64.1.bootps > 192.168.64.2.bootpc: BOOTP/DHCP, Reply, length 308
11:16:47.068136 IP 0.0.0.0.bootpc > broadcasthost.bootps: BOOTP/DHCP, Request from 52:54:00:c3:11:46 (oui Unknown), length 305
11:16:47.069380 IP 192.168.64.1.bootps > 192.168.64.2.bootpc: BOOTP/DHCP, Reply, length 308
^C

4 packets captured

The corresponding entry in /var/db/dhcpd_leases:
{
        name=testboot
        ip_address=192.168.64.2
        hw_address=1,52:54:0:c3:11:46
        identifier=1,52:54:0:c3:11:46
        lease=0x66f48056
}

Of note, both of these happened after rebooting my system, and I was able to start and stop Multipass instances without issue.

Then I started up one of my other VM services, and this is when it became an issue.

I also run Vagrant machines on my system, using VirtualBox as the VM host. This has not conflicted with Multipass in the past.

Running a Vagrant instance and then attempting to run a Multipass instance results in the Multipass instance being inaccessible. Doing the opposite (starting Multipass first, then Vagrant) results in both being able to run.

But as long as any Vagrant instance is running, Multipass is unable to pull a DHCP lease from the host OS. The following capture was taken while a Vagrant instance was running and a new Multipass instance was attempting to boot, and then after stopping Vagrant and launching another Multipass instance:

% sudo tcpdump -i bridge100 udp port 67 and port 68
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on bridge100, link-type EN10MB (Ethernet), snapshot length 524288 bytes
15:42:48.835765 IP 0.0.0.0.bootpc > broadcasthost.bootps: BOOTP/DHCP, Request from 52:54:00:a6:9b:ee (oui Unknown), length 305
15:42:52.193617 IP 0.0.0.0.bootpc > broadcasthost.bootps: BOOTP/DHCP, Request from 52:54:00:a6:9b:ee (oui Unknown), length 305
15:42:57.007968 IP 0.0.0.0.bootpc > broadcasthost.bootps: BOOTP/DHCP, Request from 52:54:00:a6:9b:ee (oui Unknown), length 305
15:43:04.647335 IP 0.0.0.0.bootpc > broadcasthost.bootps: BOOTP/DHCP, Request from 52:54:00:a6:9b:ee (oui Unknown), length 305
15:43:19.990408 IP 0.0.0.0.bootpc > broadcasthost.bootps: BOOTP/DHCP, Request from 52:54:00:a6:9b:ee (oui Unknown), length 305
15:43:51.837917 IP 0.0.0.0.bootpc > broadcasthost.bootps: BOOTP/DHCP, Request from 52:54:00:a6:9b:ee (oui Unknown), length 305
15:44:55.958793 IP 0.0.0.0.bootpc > broadcasthost.bootps: BOOTP/DHCP, Request from 52:54:00:a6:9b:ee (oui Unknown), length 305
15:45:59.580174 IP 0.0.0.0.bootpc > broadcasthost.bootps: BOOTP/DHCP, Request from 52:54:00:a6:9b:ee (oui Unknown), length 305
15:47:03.471795 IP 0.0.0.0.bootpc > broadcasthost.bootps: BOOTP/DHCP, Request from 52:54:00:a6:9b:ee (oui Unknown), length 305
15:48:08.120863 IP 0.0.0.0.bootpc > broadcasthost.bootps: BOOTP/DHCP, Request from 52:54:00:a6:9b:ee (oui Unknown), length 305
15:49:11.416655 IP 0.0.0.0.bootpc > broadcasthost.bootps: BOOTP/DHCP, Request from 52:54:00:a6:9b:ee (oui Unknown), length 305
15:52:01.092465 IP 0.0.0.0.bootpc > broadcasthost.bootps: BOOTP/DHCP, Request from 52:54:00:11:d0:55 (oui Unknown), length 303
15:52:01.104232 IP 192.168.64.1.bootps > 192.168.64.3.bootpc: BOOTP/DHCP, Reply, length 300
15:52:01.104769 IP 0.0.0.0.bootpc > broadcasthost.bootps: BOOTP/DHCP, Request from 52:54:00:11:d0:55 (oui Unknown), length 313
15:52:01.105480 IP 192.168.64.1.bootps > 192.168.64.3.bootpc: BOOTP/DHCP, Reply, length 300

So this appears to be tied to Vagrant (using VirtualBox as its VM host) conflicting with Multipass's ability to pull a DHCP lease for the booting instance.

For now we are simply not running Vagrant while using Multipass, but we will have to dig into why the two conflict.
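The "don't run both at once" workaround can be automated with a small guard. This is a sketch under our own naming: `vbox_clear` is hypothetical, while `VBoxManage list runningvms` is the standard VirtualBox CLI for listing running VMs (it prints one line per running VM, with the VM name in quotes).

```shell
# Hedged sketch: only proceed to launch a Multipass instance when VirtualBox
# reports no running VMs. vbox_clear (our name) reads the output of
# `VBoxManage list runningvms` on stdin and reports whether it is safe.
vbox_clear() {
    if grep -q '"'; then
        echo "VirtualBox VMs still running; stop them before 'multipass launch'"
        return 1
    fi
    echo "no VirtualBox VMs running; safe to launch"
}

# Intended usage on the host (not executed here):
#   VBoxManage list runningvms | vbox_clear && multipass launch 24.04
```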

@ricab
Collaborator

ricab commented Sep 26, 2024

Very interesting @pkarjala. So in the output of the last tcpdump, were the replies at 15:52 issued only after you'd disabled vagrant? That would point to vagrant causing situation 2. I wonder if it is messing with... Could be the firewall, bootp, bridge100... Very useful info, thanks!

@ricab
Collaborator

ricab commented Sep 26, 2024

Also @pkarjala, what versions of Vagrant and VirtualBox are you using?

@pkarjala
Author

> Very interesting @pkarjala. So in the output of the last tcpdump, were the replies at 15:52 issued only after you'd disabled vagrant? That would point to vagrant causing situation 2. I wonder if it is messing with... Could be the firewall, bootp, bridge100... Very useful info, thanks!

Correct; this is after stopping all running Vagrant instances and then launching a Multipass instance.

Additionally, it appears that when Multipass is running one or more instances and has bound to bridge100, Vagrant with VirtualBox will go ahead and use bridge101. Note that once any Vagrant instance is running, no further Multipass instances will properly boot to completion, though they will still show up as additional vmenet# members of bridge100.

This example shows ifconfig output for two successfully running Multipass instances and a third, not properly accessible, bound to bridge100, and two Vagrant instances bound to bridge101; the boot order was:

  • Multipass 1
  • Multipass 2
  • Vagrant 1
  • Vagrant 2
  • Multipass 3 (not accessible)
bridge100: flags=8a63<UP,BROADCAST,SMART,RUNNING,ALLMULTI,SIMPLEX,MULTICAST> mtu 1500
	options=3<RXCSUM,TXCSUM>
	ether 8a:66:5a:73:b3:64
	inet 192.168.64.1 netmask 0xffffff00 broadcast 192.168.64.255
	inet6 fe80::8866:5aff:fe73:b364%bridge100 prefixlen 64 scopeid 0x17
	inet6 fdce:d88e:d862:75c4:181c:ec8e:27a4:38e2 prefixlen 64 autoconf secured
	Configuration:
		id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0
		maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200
		root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0
		ipfilter disabled flags 0x0
	member: vmenet0 flags=3<LEARNING,DISCOVER>
	        ifmaxaddr 0 port 22 priority 0 path cost 0
	member: vmenet1 flags=3<LEARNING,DISCOVER>
	        ifmaxaddr 0 port 24 priority 0 path cost 0
	member: vmenet4 flags=3<LEARNING,DISCOVER>
	        ifmaxaddr 0 port 28 priority 0 path cost 0
	nd6 options=201<PERFORMNUD,DAD>
	media: autoselect
	status: active

bridge101: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	options=3<RXCSUM,TXCSUM>
	ether 8a:66:5a:73:b3:65
	inet 192.168.10.1 netmask 0xffffff00 broadcast 192.168.10.255
	inet6 fe80::8866:5aff:fe73:b365%bridge101 prefixlen 64 scopeid 0x19
	Configuration:
		id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0
		maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200
		root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0
		ipfilter disabled flags 0x0
	member: vmenet2 flags=3<LEARNING,DISCOVER>
	        ifmaxaddr 0 port 26 priority 0 path cost 0
	member: vmenet3 flags=3<LEARNING,DISCOVER>
	        ifmaxaddr 0 port 27 priority 0 path cost 0
	nd6 options=201<PERFORMNUD,DAD>
	media: autoselect
	status: active
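The bridge membership above can be summarized mechanically. A sketch under our own naming: `bridge_members` is a hypothetical helper; the awk program simply scans ifconfig-style text for top-level `bridgeN:` headers and their indented `member: vmenetN` lines.

```shell
# Sketch: print one "bridge member" pair per line from ifconfig output,
# so it is easy to see which vmenet interface landed on which bridge.
bridge_members() {
    awk '
        /^bridge[0-9]+:/ { b = $1; sub(/:$/, "", b) }   # remember current bridge
        /member: vmenet/ { print b, $2 }                # print bridge + member
    '
}

# Intended usage on the host:
#   ifconfig | bridge_members
```

On the output shown above this would print `bridge100 vmenet0`, `bridge100 vmenet1`, `bridge100 vmenet4`, `bridge101 vmenet2`, `bridge101 vmenet3`.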

> Also @pkarjala, what versions of Vagrant and VirtualBox are you using?

  • Vagrant 2.4.1 (latest release from what I can tell)
  • VirtualBox 7.0.12 r159484 (Qt5.15.2) (version 7.1.0 is out; I have not upgraded yet)

Also, if it is relevant, this is running on an Intel Mac, not an Apple Silicon Mac.

@ricab
Collaborator

ricab commented Sep 27, 2024

Thank you @pkarjala, I suspect this is down to how QEMU uses vmnet. But we can at least update the new troubleshooting guide we're working on.

I understand the new network stack in VirtualBox 7.1.0 does not work for everyone, but if you ever upgrade to 7.1.x and have new findings, please share!

@pkarjala
Author

I have no qualms with attempting to upgrade to VirtualBox 7.1.x (I honestly wasn't aware it was out until researching this issue); I will likely try that early next week and report back.

@pkarjala
Author

I had a few minutes to test this morning; unfortunately, it turns out the current version of Vagrant is not compatible with VirtualBox 7.1.x without some tweaking, as per hashicorp/vagrant#13501.

So proper testing will have to wait a bit.

@ricab ricab added this to the select backlog bag milestone Oct 7, 2024
@ricab ricab changed the title Multipass instances do not boot on macOS Multipass and Vagrant with VirtualBox clash on macOS Oct 7, 2024