
Very hacky solution for Windows guest #2

arne-claeys opened this issue Mar 23, 2018 · 346 comments

@arne-claeys

Dear Mr Coulter

First of all, thanks a lot for your research.
In the meantime I have managed to get GPU passthrough (of my muxless NVIDIA GTX 950M) on a Windows guest working as well.
At the moment the solution is very hacky, but perhaps it could be useful.

To this end I have hard coded the VROM dump in the OVMF image by patching OvmfPkg/AcpiPlatformDxe/QemuFwCfgAcpi.c
fwcfg_patch.txt

The VROM is read from a header file and copied to a RuntimePool to make sure it remains persistent after the OS has loaded.
In the following part of the code a new ACPI table is defined that first defines an OperationRegion that points to the VROM image.
At the end, a slightly modified version of your ACPI table, in which I pasted a decompiled version of the _ROM call from my laptop's SSDT, is appended to the rest of the table.
The RVBS variable should be set to the actual length of the VROM dump.

ssdt.asl

As I currently don't have sufficient time to figure out a more elegant solution, the table was compiled as follows.

  • force compiling the table using iasl -f ssdt.asl
    The force option is necessary as the OperationRegion is not included in the ASL, but in the preceding part of the ACPI table that was defined in QemuFwCfgAcpi.c.
  • using the following script to drop the header of the table and create a hex dump in vrom_table.h (a hypothetical reconstruction is sketched below)
    buildtable.txt
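
Since buildtable.txt is only attached, here is a minimal sketch of what such a script could look like, assuming iasl writes ssdt.aml and the standard 36-byte ACPI table header is dropped (filenames are illustrative):

    # Force-compile despite the OperationRegion living in QemuFwCfgAcpi.c
    iasl -f ssdt.asl
    # Skip the 36-byte ACPI table header, keeping only the body
    tail -c +37 ssdt.aml > ssdt.body
    # Emit a C hex array for inclusion as vrom_table.h
    xxd -i ssdt.body > vrom_table.h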

In my case this made Error 43 finally disappear.
I hope this is of some help.

Kind regards
Arne

@jscinoz

jscinoz commented Mar 24, 2018

Hi Arne,

Thank you for this! I didn't realise anyone else was still looking into this. Sadly, I've not had much time to do so myself lately. I'm glad to hear this got it working for you.

To try and figure out what's going on here, could you please let me know the following about your setup?

  • Were you also assigning a GVT card to the guest as primary VGA, or was the 950M the primary card?
  • Your qemu command line / libvirt XML

Also, can you confirm that loading the same VROM via the original ASL in this repository (without your OVMF patch) did not work on your hardware? I may well be missing something as this whole area is quite new to me, but I'd have expected them to have the same result, as the interface to the Nvidia driver itself (the _ROM method) remains the same in either case.

Cheers,
Jack

@arne-claeys

arne-claeys commented Mar 24, 2018

Hi Jack
Attached you can find the libvirt XML that was used.
win10-pci.txt

A Virtio GPU was assigned as the primary graphics adapter for my guest.
The NVIDIA card was assigned as the guest's secondary graphics adapter.
As in the Misairu tutorial, the guest was first configured using a Spice client and afterwards accessed using RemoteFX.

I can confirm Error 43 still occurred when I tried using the original ASL table and passed the VROM as a PCI ROMBAR.
However it has been a while since I tried this out.
As I've started doubting whether I changed the filename in this critical line of the ACPI table, I can't exclude that it would have worked in an easier way.
Local1 = FWGS(Local0, "genroms/10de:139b:4136:1764")
I will check this later on.

As you wrote in this post that Windows clears the ROMBAR image once booted, I quickly switched to the RuntimePool approach.

Kind regards
Arne

@jscinoz

jscinoz commented Mar 25, 2018

Thanks for the information. It's interesting that it worked for you without a GVT card. I will have to try that scenario again myself, with a fresh VM in case perhaps there is something broken in the one I've used so far.

The filename in the FWGS call is simply whatever filename the ROM ends up as in fw_cfg. I named it according to hardware IDs in my case (vendor, device, subsystem) as I intended to eventually make the ASL generic and to just read the PCI IDs from the device at PCI address 1:0:0 and load the appropriate image.
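
For reference, a rough sketch of how a ROM file can end up under such a name: if I understand QEMU's option ROM handling correctly, a romfile passed with the ROM BAR disabled is registered in fw_cfg as "genroms/<filename>" (the command below is illustrative, not taken from this thread):

    # The romfile's basename becomes the fw_cfg entry "genroms/10de:139b:4136:1764"
    qemu-system-x86_64 ... \
        -device vfio-pci,host=01:00.0,rombar=0,romfile=10de:139b:4136:1764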

I'll give things a try on my machine with your method when I have a bit of free time and will reply back here with the result.

@jscinoz

jscinoz commented Apr 2, 2018

Myself and a few others have had a chance to test your patch, and I can confirm it works as far as getting further than the code 43 error :)

Unfortunately, none of us have had any luck getting 3D workloads going in the guest - were you able to do so in your setup?

@jscinoz

jscinoz commented Apr 2, 2018

After further testing, I can confirm 3D workloads do in fact work. What currently doesn't work (and I suspect this is the same with any RemoteFX-based setup), is fullscreen mode. I suspect we might need to emulate an Optimus setup in the VM with GVT for this to work, but thus far I haven't been able to get GVT itself to work (even without a Nvidia card involved)

@arne-claeys

arne-claeys commented Apr 2, 2018

Nice to hear the patch helped you to finally get rid of code 43 :-)
So I can conclude that 3D workloads work for you, unless you run your RDP client in full screen mode?
At first sight, I find it difficult to imagine why that makes a difference.
Hopefully there will be a way to solve this issue without the need to emulate Optimus with GVT in the VM.
Solving the error 12 here (What about GVT-g?) doesn't really sound promising.

In my setup some simple 3D rendering tasks seemed to run on the GPU, but I did not test this in detail and never in full screen mode.

It will also take a while before I can try out something new, as my own laptop has been sent back to the manufacturer for repair.

@jscinoz

jscinoz commented Apr 3, 2018

So I can conclude that 3D workloads work for you, unless you run your RDP client in full screen mode?

Not quite. To clarify, it has nothing to do with whether or not the RDP client is fullscreen, but rather, whether the application (within the VM) itself runs in fullscreen. There are a few ways to reproduce this:

  1. Try running a game that defaults to fullscreen (true fullscreen, not borderless windowed). It will likely crash on startup
  2. As above, but with a benchmark; 3DMark is an example of this; it will throw an error relating to enumerating display resolutions (I don't remember the exact name of the throwing method but it was along the lines of ListAllModes)
  3. As an example of how non-fullscreen applications work, try the Unigine Heaven benchmark - it will work fine in windowed mode, but will be unable to enter fullscreen mode.

Hopefully there will be a way to solve this issue without the need to emulate Optimus with GVT in the VM.
Solving the error 12 here (What about GVT-g?) doesn't really sound promising.

I do not get the Code 12 error - I suspect @Misairu-G had something else broken in their setup. I can get GVT working (and even run 3D workloads on the GVT card) if a QXL card remains the primary VGA in the VM.

What I have not been able to get working is GVT as primary VGA in the guest. There's ongoing work by Intel on this (specifically GVT dmabuf and x-display support), but it is still quite raw. Judging by this document, having GVT working as primary VGA will be necessary to trigger the hybrid-graphics behaviour in the Windows graphics stack.

@arne-claeys

Thanks for the explanation. It gives me a better understanding of the problem now.

@jscinoz

jscinoz commented Apr 9, 2018

After a bit of testing, I've found the following things:

  • Some games/engines are clever enough to make use of the Nvidia card when it is not primary VGA, even without a valid hybrid graphics setup. Fortnite is the only such game I've found that works in this configuration, but I imagine the same would occur with any UE4 game. Even when QXL is the VM's primary VGA, it successfully renders on the Nvidia card and draws to the QXL display with framerates comparable to bare-metal performance (both 90-100fps). This seems to be the exception, not the rule; all other tested games (Overwatch, Planetside 2) and benchmarks (3DMark, Unigine Heaven) run with software rendering only in this configuration.
  • GVT-g local display DMA-BUF support currently only works with SeaBIOS-based VMs, and even then seems incredibly flaky - guest BSODs are frequent, and even when the guest doesn't crash outright, there is significant graphical corruption in the guest.

Going forward, I think this leaves us with a few options:

  • Wait for GVT-g to support OVMF (OVMF / UEFI support? intel/gvt-linux#23) and see if that then allows for a valid hybrid graphics setup in the VM.
  • Make similar changes to SeaBIOS to support loading the Nvidia VBIOS, and see if this results in a valid hybrid graphics setup. There are questions as to what the impact of the noted graphical corruption would be.
  • Modify qxl-wddm-dod to support the additional capabilities required for it to be a valid participant in a hybrid graphics setup - this might be the best option (if it's actually technically workable), as it would avoid quite a bit of complexity inherent to GVT. It is unknown whether the Nvidia driver would cooperate in such a setup, but as far as my limited understanding of the WDDM hybrid graphics model goes, it should work.

@jscinoz

jscinoz commented Apr 11, 2018

For anyone else looking at this, an updated OVMF patch generated against current OVMF git master is here

@jscinoz

jscinoz commented May 21, 2018

After a bit of experimentation, and a patch from upstream OVMF, I got GVT-g local display support working on my machine. Unfortunately, this does not result in a valid hybrid graphics setup, as the emulated display is a regular DisplayPort device, and as per Microsoft documentation, the iGPU needs to expose an embedded display panel of some kind.

At this point, there are two options to potentially get this working, but both are beyond my current knowledge/expertise, and I sadly don't have much free time to get up to speed in these areas:

  • Modify qxl-wddm-dod to support WDDM 1.3, with the additional constraint that it must expose the emulated display as an embedded display (i.e. DXGKDDI_QUERY_CHILD_RELATIONS should return children of type _DXGK_CHILD_DEVICE_TYPE.TypeIntegratedDisplay). This is probably the preferable option if it works, as we can avoid the complexity of GVT-g.
  • Make similar modifications to GVT-g. I'm unsure as to whether these would require modifying the closed-source Intel Windows driver (i.e. something we can't do), or if it could be done entirely in the vgpu code in the host kernel.

@marcosscriven

@jscinoz @arne-claeys - just trying to investigate whether this would allow gaming in a Windows guest on a Linux host?

I have a Dell Precision 5520 via work, which has a Quadro M1200. Like the XPS models, I believe this is a muxless setup, and it appears as a 3D controller.

I see you mentioning ‘rendering workloads’, and indeed games based on the Unreal engine, but I'm still unclear on the current state, or what the potential is here on a laptop with this setup?

@marcosscriven

I found a good guide to the current status here: https://www.reddit.com/r/VFIO/comments/8gv60l/current_state_of_optimus_muxless_laptop_gpu/

Appears to mention @jscinoz’s work.

@Ashymad

Ashymad commented Jul 4, 2018

Sadly I didn't have any luck with getting this to work. I did however create a PKGBUILD that compiles OVMF with the vBIOS patched in, for people who want to test it out quickly (and are running Arch Linux). Just place your ROM in the same folder, name it vBIOS.bin, and run makepkg -si.
EDIT: After copying much of Arne's libvirt XML I was finally able to say goodbye to Code 43 :)

@marcosscriven

@Ashymad - any ideas how to get the VBIOS for something like the Dell XPS or Precision 5520?

@pseudolobster

@marcosscriven I'd imagine the VBIOS is included in the system BIOS, so you will not be able to use tools which try to dump the VBIOS from the PCIe bus like you'd do for a discrete card.

The easiest way is probably to try booting up windows on bare metal, then grab the vbios from the registry. I found a guide on how to do this here: https://forums.laptopvideo2go.com/topic/32103-how-to-grab-a-notebooks-vbios-that-is-not-supported-by-nvflash/

Another way would be to decompile your system BIOS and grab the VBIOS rom out of that.

On an HP, I was able to go to support.hp.com, search for my model, download the BIOS update and run it, but don't actually go through with flashing your BIOS. Just allow it to unpack, then look in C:\windows\temp or %appdata% to see where it put everything. Some installers you may be able to unpack with 7zip.

Once you have the system BIOS, you'll need to find a copy of Phoenix BIOS Editor, or some similar tool to decompile the UEFI image into its individual firmware blobs. This gave me a bunch of files with names like 4A640366-5A1D-11E2-8442-47426188709B_1693_updGOP.ROM. From there I was able to grep these ROM files for "Nvidia", and I found a copy of my VBIOS that way.
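
As a rough illustration of that last step (the filenames will differ per vendor), the extracted blobs can be searched from a Linux shell:

    # List extracted firmware blobs that contain an NVIDIA string
    grep -la "NVIDIA" *.ROM
    # Option ROMs start with the 0x55AA signature; eyeball the header
    xxd 4A640366-5A1D-11E2-8442-47426188709B_1693_updGOP.ROM | head -n 4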

@marcosscriven

marcosscriven commented Jul 20, 2018

Thanks so much @pseudolobster - extracting via the linked how-to worked a treat on the Dell Precision 5520.

In case that link disappears in future the basic overview is:

  • Extract [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e968-e325-11ce-bfc1-08002be10318}\0002\Session] with regedit to a file (the 0002 might be different).
  • Cut out everything except the hex data
  • Import that with a hex tool that understands how to turn bytes encoded as XX strings into raw binary data (one possible approach is sketched below).
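
One possible shell one-liner for that import step, assuming the export has already been trimmed down to just the comma-separated hex bytes (filenames are illustrative):

    # Strip commas, backslashes and newlines, then reverse the hex dump
    tr -d ',\\ \r\n' < session.hex | xxd -r -p > vbios.rom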

@marcosscriven

marcosscriven commented Jul 20, 2018

@pseudolobster @arne-claeys @jscinoz

I extracted the BIOS from the Windows registry, but it seems to be of type x86 PC-AT rather than UEFI:

	PCIR: type 0 (x86 PC-AT), vendor: 10de, device: 13b6, class: 030200
	PCIR: revision 3, vendor revision: 1
	Last image

According to https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF that means this won't work with passthrough.

Do you know a way around that at all please?
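
For anyone checking their own dump: the PCIR output above matches what rom-parser (github.com/awilliam/rom-parser) prints, e.g.:

    # Inspect the image types contained in the dump
    ./rom-parser vbios.rom
    # A UEFI-capable ROM would also list a "type 3 (EFI image)" entry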

@Ashymad

Ashymad commented Jul 21, 2018 via email

@hardeepd

hardeepd commented Jul 24, 2018

@arne-claeys @jscinoz Thank you both very much for your work here.
I've tried your patch and I still get a Code 43 in windows.
I thought I'd try to debug the firmware in a linux VM and can see that the nouveau driver fails to find any vBios.

nouveau: bios: unable to locate usable image
nouveau: bios: ctor failed, -22

Any ideas how to resolve this or where I should be looking?

Is there a way to verify that the OVMF firmware I've compiled does in fact have the vBIOS embedded?

Edit: I fixed it! It seems the firmware was fine all along, but there was an address problem in the ioh3420 configuration of my qemu script.

@marcosscriven

@arne-claeys @jscinoz

I created a patched OVMF for my Nvidia Quadro M1200 (per https://github.com/marcosscriven/ovmf-with-vbios-patch)

However, I still get error 43. I see this error in the qemu logs:

2018-08-03T12:45:56.397289Z qemu-system-x86_64: vfio-pci: Cannot read device rom at 0000:01:00.0

I've ensured those patched versions are in use, and KVM is hidden etc.

<domain type='kvm'>
  <name>win10-2</name>
  <uuid>e7d44285-507b-48da-bfe2-2eba415016bd</uuid>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <vcpu placement='static'>4</vcpu>
  <os>
    <type arch='x86_64' machine='pc-q35-2.11'>hvm</type>
    <loader readonly='yes' type='pflash'>/edk2/Build/OvmfX64/RELEASE_GCC5/FV/OVMF_CODE.fd</loader>
    <nvram>/edk2/Build/OvmfX64/RELEASE_GCC5/FV/OVMF_VARS.fd</nvram>
    <boot dev='hd'/>
    <smbios mode='host'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='5DIE45JG7EAY'/>
    </hyperv>
    <kvm>
      <hidden state='on'/>
    </kvm>
    <vmport state='off'/>
  </features>

I've also ensured the device is passed through with:

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <rom bar='off'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </hostdev>

I tried both with and without the <rom bar> tag.

The IOMMU groups look to be all set up OK:

IOMMU Group 1 00:01.0 PCI bridge [0604]: Intel Corporation Skylake PCIe Controller (x16) [8086:1901] (rev 05)
IOMMU Group 1 01:00.0 3D controller [0302]: NVIDIA Corporation GM107GLM [Quadro M1200 Mobile] [10de:13b6] (rev a2)

dmesg shows vfio_pci added:

dmesg | grep -i vfio
[    2.358815] VFIO - User Level meta-driver version: 0.3
[    2.380410] vfio_pci: add [10de:13b6[ffff:ffff]] class 0x000000/00000000
[  184.054104] vfio-pci 0000:01:00.0: enabling device (0002 -> 0003)

And finally lspci shows the card is bound to vfio-pci driver:

lspci -nnk -d 10de:13b6                         
01:00.0 3D controller [0302]: NVIDIA Corporation GM107GLM [Quadro M1200 Mobile] [10de:13b6] (rev a2)
	Subsystem: Dell GM107GLM [Quadro M1200 Mobile] [1028:07bf]
	Kernel driver in use: vfio-pci
	Kernel modules: nvidiafb, nouveau

Any ideas please?

@marcosscriven

@hardeepd - can you share how you worked out the ioh3420 settings and your XML config please? I've posted my own PCI tree above.

@marcosscriven

For reference I did finally get this working https://github.com/marcosscriven/ovmf-with-vbios-patch/blob/master/qemu/win-hybrid.xml

The tricky thing is that if the GPU is attached via a bridge, you need to specify that connection:

  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.hostdev0.x-pci-sub-vendor-id=4136'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.hostdev0.x-pci-sub-device-id=1983'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.hostdev0.bus=pci.1'/>
  </qemu:commandline>
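
Note that the -set values here are plain decimal: 4136 and 1983 are presumably just the Dell subsystem IDs 1028:07bf from the lspci output above, e.g.:

    # Convert the hex subsystem IDs from lspci to the decimal form used above
    printf '%d %d\n' 0x1028 0x07bf   # -> 4136 1983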

@kalelkenobi

Hey @marcosscriven, I think I'm experiencing a similar problem. I successfully passed my dGPU to the Windows 10 x64 guest, using @Ashymad's PKGBUILD to patch the OVMF with my vBIOS. That got me to the point where I was able to install NVIDIA drivers, but after that I'm stuck with code 43. Could you please post your entire XML? The link above did not work for me (404). Thank you very much.

@marcosscriven

All my config for this is in the same linked repo https://github.com/marcosscriven/ovmf-with-vbios-patch

@kalelkenobi

Sadly I had no luck, so I turn to you guys :). I'm trying to do this with my MSI GS63VR 6RF; it should be a muxless laptop with a GTX1060 dGPU. What's interesting is that the dGPU should be directly connected to the HDMI output, so I was hoping to pass the 1060 to a Win10 guest and use an external monitor connected to the HDMI (I don't know if that's possible).
I'm on ArchLinux using qemu-headless 2.12.1 and libvirt 4.5.0.

The relevant IOMMU groups are:

IOMMU Group 1 00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x16) [8086:1901] (rev 07)
IOMMU Group 1 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP106M [GeForce GTX 1060 Mobile] [10de:1c20] (rev a1)

and also here's my full libvirt xml:

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>windows10</name>
  <uuid>da3372e1-96a4-4470-8131-6079e178c609</uuid>
  <memory unit='KiB'>15624192</memory>
  <currentMemory unit='KiB'>15624192</currentMemory>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='4'/>
    <vcpupin vcpu='1' cpuset='5'/>
    <vcpupin vcpu='2' cpuset='6'/>
    <vcpupin vcpu='3' cpuset='7'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-2.12'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/ovmf/x64/OVMF_CODE.fd</loader>
    <nvram>/var/lib/libvirt/qemu/nvram/windows10_VARS.fd</nvram>
    <bootmenu enable='yes'/>
    <smbios mode='host'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='5DIE45JG7EAY'/>
    </hyperv>
    <kvm>
      <hidden state='on'/>
    </kvm>
    <vmport state='off'/>
  </features>
  <cpu mode='custom' match='exact' check='none'>
    <model fallback='allow'>Skylake-Client</model>
    <topology sockets='1' cores='4' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
    <timer name='hypervclock' present='yes'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source file='/home/kalel/workspace/VirtualMachines/windows10.img'/>
      <target dev='vda' bus='virtio'/>
      <boot order='3'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='nec-xhci'>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x14'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0x15'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
    </controller>
    <controller type='pci' index='7' model='dmi-to-pci-bridge'>
      <model name='i82801b11-bridge'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
    </controller>
    <controller type='pci' index='8' model='pci-bridge'>
      <model name='pci-bridge'/>
      <target chassisNr='8'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:c4:cb:d0'/>
      <source network='default'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0' multifunction='on'/>
    </interface>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='spice' autoport='yes'>
      <listen type='address'/>
      <gl enable='no' rendernode='/dev/dri/by-path/pci-0000:00:02.0-render'/>
    </graphics>
    <sound model='ich6'>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </sound>
    <video>
      <model type='virtio' heads='1' primary='yes'>
        <acceleration accel3d='no'/>
      </model>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
    </video>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <rom bar='off'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0' multifunction='on'/>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </memballoon>
  </devices>
  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.hostdev0.x-pci-sub-vendor-id=5218'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.hostdev0.x-pci-sub-device-id=4525'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.hostdev0.bus=pci.1'/>
    <qemu:env name='QEMU_AUDIO_DRV' value='pa'/>
    <qemu:env name='QEMU_PA_SAMPLES' value='4096'/>
    <qemu:env name='QEMU_AUDIO_TIMER_PERIOD' value='200'/>
    <qemu:env name='QEMU_PA_SERVER' value='/run/user/1000/pulse/native'/>
  </qemu:commandline>
</domain>

At this point I've tried a lot of different things: patching NVIDIA drivers, removing the VIRTIO primary GPU, booting with the external monitor plugged in, but I'm still getting code 43 after I install the NVIDIA drivers. I've also checked the vBIOS, and it seems that the one I used to patch OVMF is the right one, because it's an exact match to the one I extracted with nvflash from inside the VM.
I'm probably missing something stupid at this point; could you guys please help?

Thank you all for your assistance.

@KenMasters20XX

@kalelkenobi @marcosscriven

I'm posting to confirm that I have a very similar configuration as @kalelkenobi . I'm using a Gigabyte Aero 15X v8, with GTX 1070 Max-Q and yet I'm stuck with Code 43.

I've tried every configuration posted in this repo, as well as in the other dGPU repo, and none of them seem to work.

In Windows, the Nvidia driver installs perfectly fine without complaint, and yet reports Code 43. I've patched my OVMF using the provided PKGBUILD up-thread. I've tried passing my vBIOS ROM separately, alongside, or not at all. I've tried with and without ROM BAR enabled. I've tried SeaBIOS and UEFI both, i440fx and Q35.

Perhaps this information can help someone figure this out but as of now, I am at a bit of a loss. What I have learned is that my particular graphics configuration has the following characteristics:

  1. The card has its own EEPROM chip, an MXIC MX25U4033E 512KB chip.
  2. I can retrieve what I believe to be the complete BIOS via nvflash as well as GPU-Z; however, there is only a non-UEFI BIOS to be found.
  3. I've gone so far as to dump the flash from the chip myself using an EEPROM programmer.
  4. The dump is 512KB, verified against the chip, but only 169KB is actually used and, again, no UEFI; only the PC Type 0 BIOS. The rest is just zeroed.
  5. I've searched far and wide through the Aero 15X and the MSI GS65 BIOS update files for ANYTHING that might be an nVidia UEFI PE file and found nothing. All of this leads me to believe these cards are NOT UEFI-enabled, and they are NOT being shadowed like other Optimus cards that don't have discrete EEPROM (I could be wrong here).
  6. The card shows up in lspci as a "VGA Controller."
  7. This is not an MXM device, and is Optimus-enabled.
  8. The GTX 1070 Max-Q controls the HDMI and mini-DP ports. If the GPU's driver is disabled those ports will not work.
  9. If I attach an external display, I see the internal QXL card mirrored across the GTX card when the kernel goes into framebuffer mode during boot, and I can see Ubuntu's logo and status indicator upon booting the VM (I believe this is VESA). After about 3-4 seconds the system seems to hard lock (although I have not tried to SSH in to confirm).
  10. I don't see anything from Windows, nor do I see Tianocore's logo upon boot on the external display; this only happens with the Ubuntu splash/status indicator and this is using the default 18.04 Nouveau drivers.
  11. FWIW, all of the card's information shows up in GPU-Z, and the BIOS dump from within the VM is exactly the same as it is from outside the VM in bare-metal Windows and from an EEPROM reader directly on the chip. So the BIOS is being passed through successfully. The only difference is that the GPU shows no clock speed, and I believe it is in a D3 power-down/sleep state. AFAIK, I have no way of getting it out of this state (due to the Code 43).

Some suppositions on my part:

I don't believe this card has a UEFI 'BIOS', either on its own discrete EEPROM or in the system firmware. That might be true of all the Max-Q model cards? My guess is that these designs rely completely on the iGPU at boot and operate in CSM mode only with a legacy BIOS. I don't think any of these laptops can operate with the iGPU off, nor can any of them disable the internal display or remain functional at bootup with the internal display disabled (if done through a BIOS hack).

At this point, I'm left attempting a few other alternatives, but I think I've fully explored the possibility that the VM isn't getting the correct BIOS -- as far as I can tell, it is. I've used @marcosscriven 's configuration as well as many other iterations, and yet, nothing works for me.

Next steps would be to try the ACS patch (because there is a hidden PCIe HDMI audio component at 0000:01:00.1 that I cannot pass through).

Or.. to try to use a UEFI-enabled GTX 1070M BIOS patched OVMF (assuming compatibility with Max-Q).

Or.. try to patch my own custom Pascal BIOS for the Max-Q, based on combining the 1070M UEFI-enabled BIOS with the 1070 Max-Q one, then flash that to my card (I can flash back with the programmer if it fails, so no worries there), hoping that by effectively turning the card into a UEFI-compatible card it might help?

Any thoughts or ideas would be greatly appreciated. I would really like to get this working, and it seems I'm very close and maybe missing something trivial? I get the feeling that maybe I'm spending a lot of time on this BIOS issue and it's something completely different?

Thanks!

@kalelkenobi

@KenMasters20XX thank you for your intensive testing. I believe I am in the same situation as you are. My GTX 1060 is NOT a Max-Q design, but I've jumped through the same hoops as you have trying to confirm that I had in fact a valid BIOS (short of using an EEPROM programmer) and came to the same result. The guest seems to be getting the right BIOS, and there is no way to extract a UEFI-compatible dump from the card or the BIOS updates. Tried all the same setups you did (q35, i440fx, patched OVMF, regular OVMF, etc...) with no luck. Unfortunately there's little else I can contribute aside from confirming some of your guesses: my laptop cannot in fact operate with the iGPU or internal display disabled (I tried via an unlocked BIOS). I've also tried using a downloaded vBIOS that seemed a close match to my own, no luck. Lately I've been focusing my attention on the PCI hierarchy, thinking maybe I missed something there. I hadn't found the hidden PCI device, although I suspected it existed. Do you guys think that could be it? Maybe the HDMI audio needs to be passed on for the card to work properly. Bare-metal Windows seems to be able to use it, even though it doesn't show up in device manager.

@KenMasters20XX

KenMasters20XX commented Aug 13, 2018

@kalelkenobi I think we're the two users in this thread so far with Pascal cards? I think perhaps everyone else is using Maxwell-based cards, and that might make the difference. So far, I've not found any instances online of either an integrated or MXM-based Pascal card being successfully passed through.

What is interesting is that MXM cards like the 1070M do in fact have a UEFI BIOS; however, the integrated cards, even though they show up as VGA Controller and have control over the HDMI ports, do not have an associated UEFI module. My guess is these cards simply do not have UEFI functionality by design? I'm going to retrieve my firmware's GOP driver and take another look, but IIRC, there was no indication of an Nvidia driver there.

Now, if that's true, then perhaps that's what's causing the Code 43? If not, then there's a UEFI module that I'm simply missing...

BTW, the 'hidden' HDMI Audio device does indeed exist; I've seen it "accidentally" exposed by toggling the power state via ACPI calls. At various times (seemingly at random) the HDMI device will show up in lspci. This is one of the reasons I'm thinking of using an ACS-patched kernel in my next series of tests, simply to isolate this as a possibility.

Lastly, I'm thinking that perhaps the ACPI tables might be a difference-maker here. I've taken a cursory look at the Aero 15X's SSDT table and I'm guessing that, perhaps like a Hackintosh, there is some incompatibility here between what's been posted and what the Nvidia driver is expecting to see for an integrated Pascal GPU.

Hard to really say with any certainty since this amounts to shooting in the dark.

@kalelkenobi

@KenMasters20XX I'm nowhere near as versed as I'm guessing you are in BIOS and ACPI inner workings; that is why I got stumped on the no-UEFI-dump front and essentially gave up on that angle. This is simply way over my head, so I'll try and look at the HDMI audio angle instead. I was able to find a rather old bug about this: https://bugs.freedesktop.org/show_bug.cgi?id=75985
It seems to be an NVIDIA proprietary driver issue that they simply choose to ignore. There is a workaround, described in the bug, to make HDMI audio show up reliably, and the device is indeed in the same IOMMU group as my 1060:

IOMMU Group 1 00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x16) [8086:1901] (rev 07)
IOMMU Group 1 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP106M [GeForce GTX 1060 Mobile] [10de:1c20] (rev a1)
IOMMU Group 1 01:00.1 Audio device [0403]: NVIDIA Corporation GP106 High Definition Audio Controller [10de:10f1] (rev a1)

I'll try passing it over to the guest and see if that makes any difference, but I'm not holding my breath. As you pointed out, very few people have tried this on Pascal Optimus laptops and I have not been able to find anyone who actually succeeded.

@x0r2

x0r2 commented Dec 8, 2020

@ktod16

@citadelcore Here is my working qemu config for P71 and P4000
qemu.txt

Hi! I went through these guides, but I still get error 43.
I've not patched the vBIOS.

I've taken your XML from the Jan 4 post, because I have the same laptop as you (ThinkPad P71).

<kvm>
    <hidden state='on'/>
</kvm>

<qemu:commandline>
    <qemu:arg value='-cpu'/>
    <qemu:arg value='host,hv_time,kvm=off,hv_spinlocks=0x1fff,-hypervisor'/>
    <qemu:arg value='-acpitable'/>
    <qemu:arg value='file=/var/tmp/SSDT1.dat'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.hostdev0.x-pci-sub-vendor-id=6058'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.hostdev0.x-pci-sub-device-id=8780'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.hostdev0.bus=pci.1'/>
</qemu:commandline>

When I start the virtual machine I get an error:

Error starting domain: internal error: qemu unexpectedly closed the monitor:
qemu-system-x86_64: -acpitable file=/var/tmp/SSDT1.dat:
warning: ACPI table has wrong length, header says 1095060310, actual size 219 bytes
2020-12-08T10:28:48.021359Z qemu-system-x86_64:
-device vfio-pci,host=0000:01:00.0,id=hostdev0,bus=pci.5,addr=0x0:
PCI: slot 0 function 0 not available for vfio-pci, in use by e1000e

/var/tmp/SSDT1.dat from
https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF#%22Error_43:_Driver_failed_to_load%22_with_mobile_(Optimus/max-q)_nvidia_GPUs

U1NEVKEAAAAB9EJPQ0hTAEJYUENTU0RUAQAAAElOVEwYEBkgoA8AFVwuX1NCX1BDSTAGABBMBi5f
U0JfUENJMFuCTwVCQVQwCF9ISUQMQdAMCghfVUlEABQJX1NUQQCkCh8UK19CSUYApBIjDQELcBcL
cBcBC9A5C1gCCywBCjwKPA0ADQANTElPTgANABQSX0JTVACkEgoEAAALcBcL0Dk=

I don't understand what I should expect to get. Will the screen work only with an external HDMI monitor, or only with a FreeRDP client?

@ktod16

ktod16 commented Dec 8, 2020

@x0r2
Passthrough works on the P71 both with an external monitor and with an RDP client.
Post your complete XML file, something might be wrong somewhere.

I think your problem comes from this: "PCI: slot 0 function 0 not available for vfio-pci, in use by e1000e". The PCI address you want to assign to the GPU is in use by the network card.

Have you managed to start up the VM without GPU passthrough?

@x0r2

x0r2 commented Dec 9, 2020

@ktod16

@x0r2
Passthrough works on the P71 both with an external monitor and with an RDP client.
Post your complete XML file, something might be wrong somewhere.

I think your problem comes from this: "PCI: slot 0 function 0 not available for vfio-pci, in use by e1000e". The PCI address you want to assign to the GPU is in use by the network card.

Have you managed to start up the VM without GPU passthrough?

  1. I created a new VM with virt-manager and set the firmware to OVMF_CODE.fd. Then I added the NVIDIA Quadro M620 through add PCI, and the NVIDIA audio device.
  2. After that I installed Windows 10 and the NVIDIA drivers, and edited the XML config through virsh (my previous post).

I have the same device IDs as you (x-pci-sub-vendor-id=6058 and x-pci-sub-device-id=8780), but I have a different NVIDIA card. Is that right? Sorry for my English.

win10.txt

@x0r2

x0r2 commented Dec 11, 2020

@ktod16

I fixed this error by changing the bus id for the network device from 0x01 to 0x11 (because the NVIDIA device had the same bus id). And the first warning appeared because I had forgotten to decode the base64 SSDT1.dat. Now I don't get any errors or warnings, but after starting Windows I see error 43 in the device manager.
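
For anyone hitting the same "wrong length" warning: the SSDT on the Arch wiki is base64 text and has to be decoded into the binary table before being passed to -acpitable, e.g. (paths are examples):

    # Decode the wiki's base64 blob into the binary ACPI table
    base64 -d ssdt1.b64 > /var/tmp/SSDT1.dat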

IOMMU Group 1:
	00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x16) [8086:1901] (rev 05)
	01:00.0 3D controller [0302]: NVIDIA Corporation GM107GLM [Quadro M620 Mobile] [10de:13b4] (rev a2)
	01:00.1 Audio device [0403]: NVIDIA Corporation GM107 High Definition Audio Controller [GeForce 940MX] [10de:0fbc] (rev a1)

Should I also pass through "00:01.0 PCI bridge [0604]: Intel" to qemu, or enable GVT-g? What do you think I could be doing wrong?

@x0r2

x0r2 commented Dec 21, 2020

It's working without a VBIOS patch on the ThinkPad P71 (Nvidia Quadro M620).

It's NOT working for me with the pci bus setting; I don't know why, I spent a lot of time on this problem:

<qemu:arg value="-set"/>
<qemu:arg value="device.hostdev0.bus=pci.1"/>

My working settings:

<!-- vendor_id in hyperv section -->
<vendor_id state='on' value='GenuineIntel'/>

<!-- hide kvm -->
<kvm>
<hidden state='on'/>
</kvm>

<!-- sub-vendor-id and sub-device-id -->
<qemu:arg value='-set'/>
<qemu:arg value='device.hostdev0.x-pci-sub-vendor-id=0x17aa'/>
<qemu:arg value='-set'/>
<qemu:arg value='device.hostdev0.x-pci-sub-device-id=0x224c'/>

<!-- ACPI battery -->
<qemu:arg value='-acpitable'/>
<qemu:arg value='file=/var/tmp/ssdt1.dat'/>

I used the Nvidia driver which Windows installed automatically.

@mikejcKS

mikejcKS commented Feb 1, 2021

It's working without a VBIOS patch on the ThinkPad P71 (Nvidia Quadro M620).

It's NOT working for me with the pci bus setting; I don't know why, I spent a lot of time on this problem:

<qemu:arg value="-set"/>
<qemu:arg value="device.hostdev0.bus=pci.1"/>

My working settings:

<!-- vendor_id in hyperv section -->
<vendor_id state='on' value='GenuineIntel'/>

<!-- hide kvm -->
<kvm>
<hidden state='on'/>
</kvm>

<!-- sub-vendor-id and sub-device-id -->
<qemu:arg value='-set'/>
<qemu:arg value='device.hostdev0.x-pci-sub-vendor-id=0x17aa'/>
<qemu:arg value='-set'/>
<qemu:arg value='device.hostdev0.x-pci-sub-device-id=0x224c'/>

<!-- ACPI battery -->
<qemu:arg value='-acpitable'/>
<qemu:arg value='file=/var/tmp/ssdt1.dat'/>

I used the Nvidia driver which Windows installed automatically.

Can you post your grub config and full current xml?

@briskspirit
Copy link

briskspirit commented Mar 14, 2021

Spent a few days but was able to pass through the dGPU on my Gigabyte Aero 15 laptop. FurMark FHD score 3655 pts, 61 FPS, 75C max temp.
Laptop config: i7-7700, GeForce 1060 6GB, 32GB RAM.
Right now I don't have bumblebee or similar (as I had problems setting it up).
External displays on HDMI and miniDVI.
dGPU isolated with vfio-pci as a grub kernel option, so as of right now I can't use the dGPU on Linux...
Also, the main problem is that if I boot Ubuntu 20.04 with displays connected, they won't work. So if I need to reboot the laptop I also need to disconnect HDMI/miniDP.
Code 43 was solved with the ACPI battery patch and manual driver installation. Also, I haven't switched buses at all.

Maybe somebody knows how to use the dGPU in Linux, when it is not needed for the VM, without a reboot?

====
Edit:
Found a workaround for the display plug/unplug issue: I needed to add video=efifb:off to the grub kernel options list.
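
For reference, a sketch of that grub change on a standard Ubuntu-style setup (the existing options shown are illustrative):

    # /etc/default/grub: append video=efifb:off to the kernel options
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash video=efifb:off"
    # Then regenerate the grub config
    sudo update-grub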

Found another problem:
I have 2 VMs: macOS and Win 10. macOS is just a regular one without passthrough. When I run it, the Win10 VM will shut down or won't start...

====
Edit2:
Trying to switch off the dGPU with system76-power. Strange results... dGPU off gets me to 25W, and on, 20W. But the card disappears from lspci. (Also noticed that the nvidia-drm, nvidia-modeset and nvidia-uvm modules sometimes hit the CPU hard.) So nvidia tries to get its device back? How do I solve this problem?

@PiemP

PiemP commented Mar 17, 2021

@x0r2 Can you please share your libvirt XML configuration? I have a ThinkPad P15v. The Windows guest identifies the card and I can install the driver correctly, but when I reboot, the GPU gives error 43. In the Windows task manager performance tab I can't see the GPU, even though I have the driver correctly installed. Thank you.

PS. I have a similar IOMMU situation to yours, but I don't see the audio device.

@midi1996

midi1996 commented Apr 10, 2021

Hi, I would like to share my setup too. I'm using a ThinkPad P50 with a Quadro M2000M; it has a GPU layout that goes something like this (from the schematics):
[diagram of the P50 GPU layout]

My host is as follows:

  • Lenovo Thinkpad P50
  • CPU: Xeon E3-1505M V5
  • GPUs: Intel HD P530 (used for GVT) - Nvidia Quadro M2000M (passed to the VM)
  • OS: Manjaro (I tried this on regular Arch, Ubuntu and Pop!OS too)
  • Qemu 5.2.0 patched with this
  • libvirtd 7.1.0 (official from repo)
  • modules I'm loading: vfio_pci vfio vfio_iommu_type1 vfio_virqfd vfio_mdev kvmgt (for gvt too)
  • options for modules:
blacklist nvidia
blacklist nvidia-drm
options i915 enable_guc=0
options i915 enable_fbc=0
options i915 enable_gvt=1
  • kernel boot parameters: iommu=pt intel_iommu=on kvm.ignore_msrs=1

This issue and this guide have been a huge help! The keys for my setup were the proper PCIe location in the VM and passing a GOP-updated vBIOS; otherwise I'd be met with Error 43 or no display. To get the M2000M working properly I had to do the following (I assume you've read the Arch Linux wiki about OVMF GPU passthrough and how to add things like qemu commands in a virsh XML):

  • Dump the vBIOS (there are many ways: through the Windows registry (manually or with a script) or by using VBiosExtractor following this guide, which is actually a good concentration of the results from this issue). Note that you will need to put the full path to the BIOS updater executable; the VBiosExtractor script doesn't seem to do well with relative paths.
  • Add UEFI GOP to it (as it's PC-AT only) (I used Maxwell MXM)
  • Patch the OVMF image with the regular (or GOP-updated) vBIOS by building it automatically (thanks to @Ashymad's original gist) if you're on an Arch-based OS, or check the PKGBUILD and do the patching manually (this guide has the same steps too; don't forget the dos2unix commands, otherwise the patches won't be applied). You might want to add the path of the OVMF in /etc/libvirt/qemu.conf accordingly, or edit the PKGBUILD to copy the files to /usr/share/edk2-ovmf/x64/ instead (where the official edk2-ovmf package stores them).
  • Make a VM with virt-manager as you normally would; add the dGPU and the other hardware that comes with it (for me only audio, for others it may also be USB or other peripherals that come with a GPU nowadays). In the XML section (make sure you enable editing), remove all PCI addressing lines (to avoid conflicting addresses) and add the proper address for the dGPU to the hostdev section (note that this usually just means the source address and the hostdev address match). When applying, virsh will properly populate the other addresses.
  <source>
    <address domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
  </source>
  <!--...-->
  <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0" multifunction="on" />
  • Add the UEFI GOP vBIOS as a ROM in the hostdev, otherwise you won't get any image (in the firmware boot or in Windows); you can add the vBIOS to all the passed nvidia parts (not sure if that matters, but I did it anyway):
  <source>
    <address domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
  </source>
  <rom file="/ext/qemu/vbios_10de_13b0_1_updGOP.rom"/>
  <!-- match it to your own path -->
  <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0" multifunction="on" />
  • Masquerade the device IDs and subsystem IDs, otherwise the drivers won't even install (the first two are the subsystem IDs in decimal, the last two are in hex; either format works)
    <qemu:arg value="-set"/>
    <qemu:arg value="device.hostdev1.x-pci-sub-vendor-id=6058"/>
    <qemu:arg value="-set"/>
    <qemu:arg value="device.hostdev1.x-pci-sub-device-id=8750"/>
    <qemu:arg value="-set"/>
    <qemu:arg value="device.hostdev1.x-pci-vendor-id=0x10de"/>
    <qemu:arg value="-set"/>
    <qemu:arg value="device.hostdev1.x-pci-device-id=0x13b0"/>
    <qemu:arg value="-acpitable"/>
    <qemu:arg value="file=/home/midi/Downloads/SSDT_BAT.dat"/>

These were the requirements to get my setup working, and I got output through HDMI (and the TB3 port through a hub; no need to pass the hub through, as it supported DP-Alt). If any of them isn't present, I'm met with either no display or Error 43.

Issues I encountered:

  • Running the VM without passing the ROM (or with rom bar="off") will result in no display in the VM; also, when the VM crashes and powers off, the dGPU ends up in a broken state and I can only fix that by rebooting the system (resetting the device by removing it and then rescanning PCI devices did not work)
  • The GPU runs hot, probably because it cannot sense CPU temps or something like that and just cannot properly manage its temperatures; a solution would probably be to undervolt it, but I haven't done that, as the temps for me don't exceed 70C (which I guess is fine for a mobile chip; the fans run properly to cool it down when not in use)
  • My laptop can no longer enter sleep mode (and sometimes even reboot or shut down) when the vfio/kvm modules are enabled (no idea why, probably unrelated to this issue, but I'm keeping it here in case someone knows what's up), even when no VM is running.
  • My laptop has an option to use the dGPU only, disabling the iGPU. I followed the single-GPU passthrough guide, which worked (of course with the requirements above), but only for the external displays, not the internal one. And funnily enough, I could extract two vBIOSes for my nvidia GPU (different versions) from the registry, one for the dGPU+iGPU configuration and one for dGPU-only. For my vBIOS test I used the one I extracted from the BIOS update; I might try the ones I pulled from the registry.

Other finding:

  • I added GVT to the same VM and, to my surprise, I could still use the nvidia GPU through it, although with really bad output, as the spice viewer is slow and introduces a lot of latency (like 30fps max). I tried to mitigate this as much as I could by patching qemu with this; it's smoother (~60fps) but still has quite some latency. I did not try looking-glass yet.
  • Windows' feature to select which GPU to use through the Graphics Settings (in Display Settings) works pretty well; I tested furmark and GB5 Vulkan and OpenGL benchmarks (with and without a display attached to the dGPU) and the results were the same. I'm going to do more testing on regular Windows and see if the performance matches.
  • Without repeating the same information: your vm needs to have all the lines that hide the fact that it's a VM (it's everywhere on the internet and on r/VFIO)
  • Having the laptop in battery mode while trying this is a big no-no, as the performance tanks so hard!

What I'm trying to do next:

  • Get the internal display to work with the dGPU-only setup
  • Try to pass some sensor data to the VM (including battery charge)
  • Try to pass my SMBIOS data and some of my SSDTs to activate Windows (as I want to pass a whole NVMe drive to the VM)

My XML for my setup here.

If I find anything new I might edit this comment.

@T-vK

T-vK commented Apr 11, 2021

@coledeck

coledeck commented Apr 11, 2021

I wonder if this applies to laptops:
Code 43? No More! NVIDIA Finally Blesses VFIO!? (ft. Threadripper Pro) - Level1Linux

You need the ACPI battery patch, but no vendor ID spoofing and no KVM hiding. Without the battery, the notebook driver simply gets mad because you're not using a laptop, and that's not going to change.

@midi1996

I wonder if this applies to laptops:
Code 43? No More! NVIDIA Finally Blesses VFIO!? (ft. Threadripper Pro) - Level1Linux

This actually works with Quadro cards too! But if you install the same driver package version of the Quadro drivers, it won't work! I just tried it on my M2000M: the GeForce drivers install and work, while the Quadro ones complain.

@cristatus

I wonder if this applies to laptops:
Code 43? No More! NVIDIA Finally Blesses VFIO!? (ft. Threadripper Pro) - Level1Linux

This actually works with Quadro cards too! But if you install the same driver package version of the Quadro drivers, it won't work! I just tried it on my M2000M: the GeForce drivers install and work, while the Quadro ones complain.

Hi @midi1996, would you please share how to use GeForce drivers with Quadro cards?

I have an HP Zbook 15 v5 with a Quadro P2000. I have tried almost all the tricks but am always getting the Code 43 error. Yesterday, after seeing your post, I tried again with the latest Quadro driver, but no success.

@midi1996

I wonder if this applies to laptops:
Code 43? No More! NVIDIA Finally Blesses VFIO!? (ft. Threadripper Pro) - Level1Linux

This actually works with Quadro cards too! But if you install the same driver package version of the Quadro drivers, it won't work! I just tried it on my M2000M: the GeForce drivers install and work, while the Quadro ones complain.

Hi @midi1996, would you please share how to use GeForce drivers with Quadro cards?

I have an HP Zbook 15 v5 with a Quadro P2000. I have tried almost all the tricks but am always getting the Code 43 error. Yesterday, after seeing your post, I tried again with the latest Quadro driver, but no success.

I just downloaded their regular drivers and installed them, nothing special.

@EvanBenechoutsos

EvanBenechoutsos commented Apr 17, 2021

@midi1996 Followed the guide you cited, and after a few roadblocks
I am facing a serious issue at the last step while compiling OVMF_VARS.fd, in several distros (Arch, Manjaro, Debian), all with the same issue.

Did you run across this?

test -e /opt/edk2/Build/OvmfX64/DEBUG_GCC5/X64/MdeModulePkg/Universal/PCD/Pei/Pcd/OUTPUT/PcdPeim.efi && GenSec -s EFI_SECTION_PE32 -o /opt/edk2/Build/OvmfX64/DEBUG_GCC5/FV/Ffs/9B3ADA4F-AE56-4c24-8DEA-F03B7558AE50PcdPeim/9B3ADA4F-AE56-4c24-8DEA-F03B7558AE50SEC2.1.pe32 /opt/edk2/Build/OvmfX64/DEBUG_GCC5/X64/MdeModulePkg/Universal/PCD/Pei/Pcd/OUTPUT/PcdPeim.efi
make: *** [GNUmakefile:400: /opt/edk2/Build/OvmfX64/DEBUG_GCC5/X64/OvmfPkg/AcpiPlatformDxe/AcpiPlatformDxe/OUTPUT/QemuFwCfgAcpi.obj] Error 1


build.py...
 : error 7000: Failed to execute command
	make tbuild [/opt/edk2/Build/OvmfX64/DEBUG_GCC5/X64/OvmfPkg/AcpiPlatformDxe/AcpiPlatformDxe]


build.py...
 : error F002: Failed to build module
	/opt/edk2/OvmfPkg/AcpiPlatformDxe/AcpiPlatformDxe.inf [X64, GCC5, DEBUG]

- Failed -
Build end time: 14:17:29, Apr.17 2021
Build total time: 00:00:24

@midi1996

midi1996 commented Apr 18, 2021

@midi1996 Followed the guide you cited, and after a few roadblocks
I am facing a serious issue at the last step while compiling OVMF_VARS.fd, in several distros (Arch, Manjaro, Debian), all with the same issue.

Did you run across this?

test -e /opt/edk2/Build/OvmfX64/DEBUG_GCC5/X64/MdeModulePkg/Universal/PCD/Pei/Pcd/OUTPUT/PcdPeim.efi && GenSec -s EFI_SECTION_PE32 -o /opt/edk2/Build/OvmfX64/DEBUG_GCC5/FV/Ffs/9B3ADA4F-AE56-4c24-8DEA-F03B7558AE50PcdPeim/9B3ADA4F-AE56-4c24-8DEA-F03B7558AE50SEC2.1.pe32 /opt/edk2/Build/OvmfX64/DEBUG_GCC5/X64/MdeModulePkg/Universal/PCD/Pei/Pcd/OUTPUT/PcdPeim.efi
make: *** [GNUmakefile:400: /opt/edk2/Build/OvmfX64/DEBUG_GCC5/X64/OvmfPkg/AcpiPlatformDxe/AcpiPlatformDxe/OUTPUT/QemuFwCfgAcpi.obj] Error 1


build.py...
 : error 7000: Failed to execute command
	make tbuild [/opt/edk2/Build/OvmfX64/DEBUG_GCC5/X64/OvmfPkg/AcpiPlatformDxe/AcpiPlatformDxe]


build.py...
 : error F002: Failed to build module
	/opt/edk2/OvmfPkg/AcpiPlatformDxe/AcpiPlatformDxe.inf [X64, GCC5, DEBUG]

- Failed -
Build end time: 14:17:29, Apr.17 2021
Build total time: 00:00:24

@EvanBenechoutsos I fixed it; I had forgotten to comment out some lines that made links to the python2 binary, because of the old edk2. You can try now, it should be fixed.

@x0r2

x0r2 commented Jun 24, 2021

@PiemP, I spent a lot of time on this problem and went another way. I didn't take other configs; I created a new virtual machine as usual and then changed these places in the XML configuration:

  1. Add the XML schema:
    xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0"
  2. Add the hyperv vendor_id state:
    <vendor_id state='on' value='GenuineIntel'/>
  3. Add the kvm hidden state:
    <hidden state='on'/>
  4. Add your correct sub-vendor-id (0x17aa) and sub-device-id (0x224c):
    <qemu:arg value='-set'/>
    <qemu:arg value='device.hostdev0.x-pci-sub-vendor-id=0x17aa'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.hostdev0.x-pci-sub-device-id=0x224c'/>
  5. Add the correct path to ssdt1.dat:
    <qemu:arg value='-acpitable'/>
    <qemu:arg value='file=/var/tmp/ssdt1.dat'/>

All changes:

xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0"


<features>
  <acpi/>
  <hyperv>
    <vendor_id state='on' value='GenuineIntel'/>
  </hyperv>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>


<qemu:commandline>
  <qemu:arg value='-set'/>
  <qemu:arg value='device.hostdev0.x-pci-sub-vendor-id=0x17aa'/>
  <qemu:arg value='-set'/>
  <qemu:arg value='device.hostdev0.x-pci-sub-device-id=0x224c'/>

  <qemu:arg value='-acpitable'/>
  <qemu:arg value='file=/var/tmp/ssdt1.dat'/>
</qemu:commandline>

After the changes, check that your config saved correctly. I added the Nvidia GPU from the virt-manager GUI. Maybe this solution helps you.

Sorry for my late answer.

@T-vK

T-vK commented Jul 18, 2021

Hey everyone, is it possible to use the OVMF patch mentioned in the first post as well as the battery ACPI table?

My understanding is that they both work by overriding the SSDT ACPI table. So I'm wondering whether, if you use them both, the battery SSDT table will win over the SSDT table that has been patched with the vBIOS ROM, or if you can simply have multiple SSDT tables at the same time.

@midi1996

midi1996 commented Jul 19, 2021 via email

@realSaltyFish

realSaltyFish commented Nov 21, 2021

@kalelkenobi @KenMasters20XX @spacepluk Hi guys, I have a similar setup (mobile Pascal card) and I was inspired by @ghostface's reply (RVBS should be set to the correct value), so I did a quick check. From the virtual machine I can confirm that the SSDT is correctly loaded. However, when I use the kernel module acpi_call to access \_SB.PCI0.PEG0.PEGP.RVBS I get an incorrect return value. I guess this might be causing issues. Could you also test whether this is the case for you?

More info on my setup

  • Host: Arch Linux on HP Pavilion 15 cb076tx. When using a Windows guest I am greeted with the notorious Code 43, so I installed a Manjaro Linux guest to debug.
  • GPU: GTX 1050 Mobile. It is directly wired to the HDMI port, so it is listed as a "VGA compatible controller".
  • The VBIOS cannot be dumped from Linux directly, as I get an I/O error. GPU-Z says that dumping my VBIOS is not supported. I was nevertheless able to dump it in these ways (all yield an identical image):
    1. Use the Windows registry editor.
    2. Download a BIOS update from HP, decrypt it using BIOSCreator, then use VBiosFinder to analyze the BIOS binary.
    3. Use nouveau and read from /sys/kernel/debug/dri/1/vbios.rom (one-liner below).
  • In my case the correct value for RVBS should be 0x29200, but in the Manjaro Linux guest I get 0x29200ed. I have no idea where that ed came from. I also tested another value from the SSDT: \_SB.PCI0.PEG0._ADR should be 0x10000, but I get 0x10000ed. Is it possible that this is an OVMF bug?
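
For method 3, something like this should work (assuming the nouveau-bound GPU is DRI device 1, as in the path above):

sudo cat /sys/kernel/debug/dri/1/vbios.rom > vbios.rom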

How I used ACPI calls on Linux

First install and load the kernel module acpi_call, then use this code snippet:

acpicall () {
	# Write the ACPI method path to acpi_call's proc interface...
	echo "$1" | sudo tee /proc/acpi/call > /dev/null
	# ...then read the same file back to get the method's return value.
	sudo cat /proc/acpi/call
	echo
}
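
For example, to read the RVBS value discussed above (adjust the ACPI path to your topology):

acpicall '\_SB.PCI0.PEG0.PEGP.RVBS'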

@Tualua
Copy link

Tualua commented Dec 1, 2021

Hi! I'm trying to pass through an RTX 3070 Mobile to a Windows VM. It seems that with this generation it is trickier. I can pass through the GPU, but Dynamic Boost 2.0 is not working: the device "NVIDIA Platform Controllers and Framework" cannot start (Code 31). I rebooted into Windows and found that this is the driver for the ACPI device ACPI/NVDA0820.

I checked the ACPI tables - there is one SSDT from NVIDIA; I've attached a dump from RwEverything. It seems to be the key to getting DB 2.0 working in the VM.
SSDTnv.txt

Is there any way to insert this ACPI table into the VM? I tried to add it with -acpitable, but Win10 crashes with ACPI_BIOS_ERROR.

 lspci -vnnk -s 01:00.0
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GA104M [GeForce RTX 3070 Mobile / Max-Q] [10de:249d] (rev a1) (prog-if 00 [VGA controller])
        Subsystem: Tongfang Hongkong Limited Device [1d05:1147]
        Flags: fast devsel, IRQ 16
        Memory at 5d000000 (32-bit, non-prefetchable) [size=16M]
        Memory at 4000000000 (64-bit, prefetchable) [size=8G]
        Memory at 4200000000 (64-bit, prefetchable) [size=32M]
        I/O ports at 5000 [size=128]
        Expansion ROM at 000c0000 [disabled] [size=128K]
        Capabilities: [60] Power Management version 3
        Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
        Capabilities: [78] Express Legacy Endpoint, MSI 00
        Capabilities: [b4] Vendor Specific Information: Len=14 <?>
        Capabilities: [100] Virtual Channel
        Capabilities: [258] L1 PM Substates
        Capabilities: [128] Power Budgeting <?>
        Capabilities: [420] Advanced Error Reporting
        Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
        Capabilities: [900] #19
        Capabilities: [bb0] #15
        Capabilities: [c1c] #26
        Capabilities: [d00] #27
        Capabilities: [e00] #25
        Kernel driver in use: vfio-pci
        Kernel modules: nvidiafb, nouveau

I also got this error in dmesg when starting the VM:

vfio-pci 0000:01:00.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0xe8c8
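
That message refers to the first two bytes of a PCI expansion ROM header, which must be 0x55 0xAA (read as the little-endian word 0xaa55 in the dmesg output). If you end up supplying a ROM file yourself, a quick sanity check (assuming the dump is named vbios.rom):

xxd -l 2 vbios.rom   # a valid ROM dump starts with: 55aa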

@VPaulV
Copy link

VPaulV commented Jun 2, 2023

Sorry for bringing up an old topic, but did you find a solution? @Tualua

@ZiemlichUndead
Copy link

OK, so I have gone over all of this for a week now, and I'm fairly confused as to how this whole thing ever worked for anyone other than the OP. I might be getting things wrong here, but the following are my observations.
I tried using this patch to get Windows to work, but after making no progress on Windows I tested this setup in a Linux VM. When I saw that neither the nvidia driver nor nouveau was able to read the VBIOS I had patched into ACPI, I got suspicious of the whole patching process.
As mentioned in the first post:

At the end a slightly modified version of your ACPI table, in which I pasted a decompiled version of the ROM call from the SSDT of my laptop, is appended to the rest of the table.
The RVBS variable should be set to the actual length of the VROM dump.

This suggests that the ACPI path that the ROM is attached to in ssdt.asl differs from laptop to laptop.
I confirmed this by decompiling the ACPI table of my laptop, as well as checking where nouveau reports the ROM to be.
The OP has his ROM at the address: \_SB.PCI0.PEG0.PEGP._ROM
My laptop has it at: \_SB_.PCI0.RP01.PEGP._ROM
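
One way to do that decompilation on the host, assuming the acpica tools (acpidump, iasl) are installed:

sudo acpidump -b          # dump each ACPI table to a .dat file
iasl -d ssdt*.dat         # decompile them to .dsl source
grep -l '_ROM' *.dsl      # find which table defines a _ROM method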

The problem here is that this address changes depending on which PCI address and which pcie-root-port the GPU is attached to inside the VM.
For me, the host attaches the GPU like this:

$ lspci -tv
-[0000:00]-+-00.0  Intel Corporation Xeon E3-1200 v6/7th Gen Core Processor Host Bridge/DRAM Registers
           +-02.0  Intel Corporation UHD Graphics 620
           +-04.0  Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor Thermal Subsystem
           ...
           +-1c.0-[01]----00.0  NVIDIA Corporation GP108M [GeForce MX150]

So the pcie-root-port is at 00:1c.0, while the GPU is at 01:00.0, attached via the pcie-root-port.
I was able to replicate this layout with libvirt by letting the pcie-root-ports start at address 00:1c.0 (see the sketch after this paragraph).
This sadly still doesn't result in the ACPI path my host has. Instead, the ACPI path for the ROM with this setup is: \_SB_.PCI0.SE0_.S00_._ROM (this is where nouveau starts searching for the ROM).
When I then change ssdt.asl to point at this address, I can load nouveau without providing a ROM file via the <rom file='...'/> tag.
This also works with the unpatched nvidia driver on Linux. Even PRIME offloading works right away, at least for me.
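
A minimal sketch of such a libvirt controller entry (the index value is illustrative; the point is the bus 0x00 / slot 0x1c address, with the GPU's hostdev then placed at bus 0x01, slot 0x00):

<controller type='pci' index='1' model='pcie-root-port'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x1c' function='0x0'/>
</controller>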

As great as this all sounds, I still can't get around Code 43 in Windows, even with the exact same PCIe layout and ACPI patch.
I think it might be possible that Windows uses a different layout for the ACPI addresses, but that's just a theory, as I have no idea how this works.

I know that all of this was really long ago and no one seems to care about this topic anymore, but I do wonder why that is. I suspect newer laptops just don't have this weird VBIOS issue? Or interest in GPU passthrough has just stagnated? Anyway, I documented my findings here; maybe they will be useful for anyone going over this. I may try to debug the nvidia driver on Windows to see what prevents it from loading, but having the breakthrough on Linux, with everything working while Windows does nothing on the same config, kind of killed my motivation.

@lion328
Copy link

lion328 commented Feb 21, 2024

@ZiemlichUndead Did you try adding a fake battery yet? https://wiki.archlinux.org/title/PCI_passthrough_via_OVMF#%22Error_43:_Driver_failed_to_load%22_with_mobile_(Optimus/max-q)_nvidia_GPUs


From my experiments, NVIDIA drivers didn't care where you put the GPU as long as you provided a working _ROM method at the GPU device node. That is sufficient for Linux guests. For Windows, I also needed to apply a custom vendor_id, hide the KVM state, and add a fake battery.

I don't know since when, but for me this specific patch no longer works for Linux guests because of the name collision between \_SB.PCI0.S08.S00 from QEMU and \_SB.PCI0.PEG0.PEGP from the patch. The key problem is that they are assigned the same PCI address in the ACPI table (_ADR), and the driver probably picks whichever one loads first. Surprisingly, the Windows guest still works.

I fixed it by putting _ROM in the QEMU device scope instead of adding a new device, which also gets rid of the _ADR stuff. Here is my custom SSDT if you're interested: https://github.com/lion328/gpu-passthrough/blob/master/ssdt.asl

As great as this all sounds, I still can't get around Code 43 in Windows, even with the exact same PCIe layout and ACPI patch. I think it might be possible that Windows uses a different layout for the ACPI addresses, but that's just a theory, as I have no idea how this works.

I don't think Windows uses different ACPI addresses, given that it's QEMU that assigns them. I think you can check in Device Manager, under the BIOS device name, though.

@ZiemlichUndead
Copy link

@lion328 Yeah, I did the fake battery and all of the things in the Arch wiki right away, before even digging into the ACPI stuff.

From my experiments, NVIDIA drivers didn't care where you put the GPU as long as you provided a working _ROM method at the GPU device node.

Not sure if I understand you correctly. You agree that the ACPI address in the patch should be changed to wherever the GPU is mounted, right? You mention this calculation in your script: (slot << 3 | function). It seems like this is a bit more complicated with my "pcie-root-port" setup, as you can see here:

$ lspci 
...
01:00.0 3D controller: NVIDIA Corporation GP108M [GeForce MX150] (rev a1)
...
$ lspci -tv
-[0000:00]-+-00.0  Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller
           +-01.0  Red Hat, Inc. Virtio 1.0 GPU
           +-1b.0  Intel Corporation 82801I (ICH9 Family) HD Audio Controller
           +-1c.0-[01]----00.0  NVIDIA Corporation GP108M [GeForce MX150]
...
$ ./acpi.sh  # the script from your repo
Trying to read ROM from device 0000:01:00.0.
Found ACPI node at \_SB_.PCI0.SE0_.S00_.
Using \_SB_.PCI0.SE0_.S00_._ROM method for dumping ROM.
The ROM length is 0x2c000 bytes. Start dumping the ROM...
# The ROM dump does work

With this setup and the OVMF patch changed to \_SB_.PCI0.SE0_.S00_, the nvidia drivers work in Linux. I am also getting some ACPI conflicts on boot. Maybe these are the reason Windows shows Code 43 while Linux works correctly?
How exactly did you patch your ssdt.asl into OVMF, and what exactly are the changes you made? I am currently using this automated script with GitHub Actions: https://github.com/SimpliFly03/ovmf-with-vbios-patch
I am not familiar with the language used in these files, so I don't know how I would implement this using my own ROM.

@lion328
Copy link

lion328 commented Feb 21, 2024

@ZiemlichUndead

Not sure if I understand you correctly. You agree that the ACPI address in the patch should be changed to wherever the GPU is mounted, right?

Yes.

You mention this calculation in your script: (slot << 3 | function). It seems like this is a bit more complicated with my "pcie-root-port" setup, as you can see here:

$ lspci 
...
01:00.0 3D controller: NVIDIA Corporation GP108M [GeForce MX150] (rev a1)
...
$ lspci -tv
-[0000:00]-+-00.0  Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller
           +-01.0  Red Hat, Inc. Virtio 1.0 GPU
           +-1b.0  Intel Corporation 82801I (ICH9 Family) HD Audio Controller
           +-1c.0-[01]----00.0  NVIDIA Corporation GP108M [GeForce MX150]
...

That's a pretty normal topology. My laptop is similar, except the root port is at 00:01.0. Doing the calculation in reverse, PCI0.SE0.S00 means S00 is at 00.0 of its parent SE0, and SE0 is at 1c.0 of its parent PCI0.

Also thanks for testing my script.

With this setup and the OVMF patch changed to \_SB_.PCI0.SE0_.S00_, the nvidia drivers work in Linux. I am also getting some ACPI conflicts on boot. Maybe these are the reason Windows shows Code 43 while Linux works correctly? How exactly did you patch your ssdt.asl into OVMF, and what exactly are the changes you made? I am currently using this automated script with GitHub Actions: https://github.com/SimpliFly03/ovmf-with-vbios-patch I am not familiar with the language used in these files, so I don't know how I would implement this using my own ROM.

The patch added new ACPI devices using the Device keyword. Normally this would work fine, since \_SB.PCI0.PEG0.PEGP did not exist in the VM. So if you change it to the same path that QEMU adds, it will obviously cause conflicts. Try replacing Device with Scope and using External to reference \_SB.PCI0.SE0.S00, so you get something like:

External (\_SB.PCI0.SE0.S00, DeviceObj)
Scope (\_SB.PCI0.SE0.S00) {
...
}

Basically, instead of adding a new device, this extends the functionality of the already existing device. Also remove every _ADR definition, since QEMU sets those for you. FYI, the _ADR of SE0.S00 and SE0 should be 0 and 0x1C0000 respectively; the sketch below shows how both values are derived, and the ACPI spec has the details.
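
A small illustration of the two encodings in play here (plain shell arithmetic; the node names and _ADR values match what this thread reports):

# QEMU appears to name its ACPI PCI nodes "S<devfn in hex>", where devfn = (slot << 3) | function,
# while ACPI _ADR packs the same address as (slot << 16) | function.
printf 'S%02X has _ADR 0x%X\n' $(( (0x1C << 3) | 0 )) $(( (0x1C << 16) | 0 ))
# -> SE0 has _ADR 0x1C0000   (the root port at 00:1c.0)
printf 'S%02X has _ADR 0x%X\n' $(( (0x00 << 3) | 0 )) $(( (0x00 << 16) | 0 ))
# -> S00 has _ADR 0x0        (the GPU at 00.0 on the root port's bus)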

For my SSDT, the primary change is that the table now uses QEMU fw_cfg to load the ROM dynamically from a file specified in the XML, instead of loading it from a patched OVMF. I also fixed the conflict mentioned above, included my own version of a fake battery device, and removed unused code. I compile it manually using iasl and pass it to the VM like the fake battery table. IMO, it's much easier than maintaining a custom build of OVMF, which is the main reason this exists.
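
A sketch of that workflow (the fw_cfg item name opt/gpu-vbios below is a placeholder; it has to match whatever name the linked ssdt.asl actually reads, so check that file):

iasl ssdt.asl   # compiles ssdt.asl to ssdt.aml

<qemu:commandline>
  <qemu:arg value='-acpitable'/>
  <qemu:arg value='file=/var/tmp/ssdt.aml'/>
  <qemu:arg value='-fw_cfg'/>
  <qemu:arg value='name=opt/gpu-vbios,file=/var/tmp/vbios.rom'/>
</qemu:commandline>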

@ZiemlichUndead
Copy link

@lion328 My god, thank you so much. I just booted into my Windows VM using your SSDT and saw my GPU appear in Task Manager for the first time ever.
I love your way of providing the ROM via fw_cfg. I didn't know this was possible, but I always wondered why we couldn't load the ROM SSDT the way we do the battery SSDT.
I honestly didn't think I would ever get this to work. This is amazing.

Somehow your SSDT doesn't work for me on Linux with the nvidia drivers, even though nouveau works with it. Maybe this is caused by my weird PCIe setup, which I'm still using right now.

That's a pretty normal topology. My laptop is similar, except the root port is at 00:01.0. Doing the calculation in reverse, PCI0.SE0.S00 means S00 is at 00.0 of its parent SE0, and SE0 is at 1c.0 of its parent PCI0.

Yeah, I think I get it now. The location of the pcie-root-port determines the SXX in PCI0.SXX.S00, while the location of the device on that port determines the final S00 part, right?

Anyway, you did amazing work here. I was kind of proud of myself for figuring out some things about these ACPI paths and PCIe addresses, but you actually solved the whole problem in a much better way. Again, thank you so much <3

@lion328
Copy link

lion328 commented Feb 22, 2024

@ZiemlichUndead Glad to see it's working!

Somehow your SSDT doesn't work for me on Linux with the nvidia drivers, even though nouveau works with it. Maybe this is caused by my weird PCIe setup, which I'm still using right now.

Is there any warning or error in dmesg? A dump of the guest's ACPI tables would be nice.

Yeah, I think I get it now. The location of the pcie-root-port determines the SXX in PCI0.SXX.S00, while the location of the device on that port determines the final S00 part, right?

Pretty much in this case yeah.

@ZiemlichUndead
Copy link

@lion328

Is there any warning or error in dmesg? A dump of the guest's ACPI tables would be nice.

Actually, never mind - I didn't blacklist my drivers correctly. Now it's working perfectly, just like in Windows. My bad.
