
Endermanbugzjfc
Journeyman III

Virtual machine PCIe passthrough succeeded and drivers installed, but the Adrenalin GUI does not start.

Description

I use a Radeon RX 7900 GRE on Arch Linux. I have configured PCIe passthrough and successfully bound the GPU to a Windows virtual machine running in Docker.

 

I then installed the Adrenalin drivers, which seemed successful as the display adapter had changed in Device Manager (please refer to the screenshot below). However, when I tried to start the Adrenalin GUI, it always failed with this message: 

The version of AMD Software that you have launched is not
compatible with your currently installed AMD graphics driver.

For information on how to resolve this, please go to:
https://www.amd.com/en/support/kb/faq/pa-300

I checked with the AMD Software Compatibility Tool as described at https://www.amd.com/en/support/kb/faq/pa-300, but it could not find any problem (please refer to the screenshot below). A clean driver installation with Display Driver Uninstaller also did not help.

 

[Screenshot: Endermanbugzjfc_0-1740985481435.png]

Versions Tested

  • Adrenalin versions: I tried several releases of the Adrenalin software, including:
    • 20.4.2: could not be installed.
    • 24.12.1 (WHQL): installed, but launching AMD Software produced the error quoted above.
    • 25.2.1: installed, but launching AMD Software produced the error quoted above.

Additional Information

  • VM OS: Windows 11 Pro 23H2 22631.2715
  • VM Software: Docker, image: https://github.com/dockur/windows
  • GPU Model: AMD Radeon RX 7900 GRE
  • CPU Model: AMD Ryzen 9 5900X (SVM and IOMMU enabled in BIOS)

I am happy to provide any additional details or error messages to help troubleshoot this issue.

1 Reply

For the RX 7900 series on Proxmox, I had good success with PCIe passthrough the last time I attempted it. Here are my personal notes; perhaps you can try the ROMBAR=0 workaround, or its Docker VM equivalent sketched below, in your case.
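I have not tried the dockur/windows image myself, but it runs QEMU/KVM under the hood, so the rough equivalent of the Proxmox settings in my notes would be extra QEMU flags. A hypothetical, untested compose fragment (ARGUMENTS is that image's variable for passing raw QEMU arguments; the 03:00.0 PCI address is an assumption, and the GPU must already be bound to vfio-pci on the Arch host):

environment:
  # Assumed flags: spoof the hypervisor vendor and attach the GPU with its ROM masked (rombar=0)
  ARGUMENTS: "-cpu host,hv_vendor_id=AMDKVMAMD,kvm=off -device vfio-pci,host=03:00.0,rombar=0"
devices:
  # The container needs access to the host's VFIO device nodes
  - /dev/vfio:/dev/vfio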

------------------------------------

AMD has long supported virtualization of graphics cards, adopting industry standards such as SR-IOV with its MxGPU series of graphics cards, enabling public cloud gaming and compute partners.

While our drivers were written and optimized for passing enterprise and server-class GPUs to virtual machines, they can often also function with consumer Radeon cards. This is a bit nuanced, as consumer cards are not officially supported for virtualization or tested on a regular basis for PCIe passthrough.

Summary

  • This article aims to share our "Best Known Configuration" for passing through Radeon consumer cards
  • Software stack: Proxmox 8.1.4 (Debian-based), running Linux kernel 6.5.13-5-pve, using the QEMU/KVM hypervisor
  • I tested with the Radeon 7900 GRE and 7900 XTX; some RDNA1 and RDNA2 GPUs have also been reported to work with this method.

Configuration Rationale

  • Radeon drivers detect virtualization vendors such as KVM, Hyper-V, or VMware, so we can implement specific optimizations and features.
  • Server cards do not have traditional HDMI/DisplayPort outputs; instead they are intended to be used with cloud compute, Remote Desktop, or VNC-style software.
  • Hence our drivers tell Windows to create a "vDisplay" and do not initialize any physical monitor connections by default (this may improve in future Windows drivers).
  • What does this mean? Radeon GPUs will show a black screen when the driver loads.
    • If you want your physical display to work (instead of only remote software), then we need to change the default hypervisor vendor.
  • UEFI is the firmware that runs when your computer first boots, before any operating system like Windows or Linux.
    • Firmware running on the graphics card interacts with the UEFI to do one-time setup of hardware and displays.
    • Inside a virtual machine, a second UEFI runs as a virtual BIOS, re-initializing the graphics card.
    • Re-initializing the GPU can lead to a mismatched state; to avoid possible technical issues, we can mask the VideoBIOS ROM stored on the GPU.
  • Resizable BAR (Smart Access Memory) conflicts with virtualization on the Linux host. We recommend keeping ReBAR = OFF in the host system BIOS.
  • VFIO is a Linux kernel module that acts as a driver stub; we need to force it to load ahead of the drivers for the Graphics, Audio, and USB controllers on Radeon cards.
    • Failure to do this results in the Linux driver and the virtual machine conflicting over the hardware state of the GPU. Some workarounds are possible but not covered here.
  • IOMMU is a technology that separates PCIe devices into groups (a script to list your own groups follows this section).
    • Typically the primary PCIe slot and the M.2 connector off the CPU are in their own groups, whereas PCIe slots off the chipset are often grouped together with USB, SATA, and network controllers.
    • We don't recommend passthrough of PCIe devices located in groups with other hardware, as it requires complex ACS workarounds, not covered here.
    • In the diagram below, there are 3 IOMMU groups (green boxes):
      • we can safely pass through the GPU (IOMMU#1) or NVME (IOMMU#2) without issue, whereas the storage controller (IOMMU#3) would be problematic.
    • [Diagram: PCIE_Am4_Diagram_VFIO.png — AM4 PCIe topology showing the three IOMMU groups]
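To check how your own board groups devices, the PCI passthrough page on the Arch wiki (linked under Resources below) has a small shell script that enumerates every IOMMU group; a lightly commented version:

#!/bin/bash
# Print each IOMMU group and the PCI devices it contains
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU Group ${g##*/}:"
    for d in "$g"/devices/*; do
        # lspci -nns shows the device at this bus address with its vendor:device IDs
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done

A card is a clean passthrough candidate when all of its functions (graphics, HDMI audio, USB-C controllers) sit in a group by themselves.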

Host Machine Setup

  • Virtualization (SVM) needs to be Enabled in the BIOS
    • Resizable BAR should be DISABLED.
    • Above 4G Decoding should be ENABLED
  • IOMMU needs to be configured in the kernel
    • Edit /etc/default/grub to add the iommu options to the CMDLINE variable in GRUB bootloader
      • GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on intel_iommu=on iommu=pt"
    • If your Proxmox uses Systemd-Boot, /etc/kernel/cmdline would be the correct file, but that configuration variant is not covered here
  • Force the VFIO kernel module to load at boot, before the drivers used on Radeon cards
    • Add vfio_pci to: /etc/modules
    • Add the following to: /etc/modprobe.d/vfio.conf

      • softdep amdgpu pre: vfio-pci # Graphics
      • softdep snd-hda-intel pre: vfio-pci # HDMI Audio
      • softdep xhci-pci pre: vfio-pci # Usb-C
      • softdep i2c-designware-pci pre: vfio-pci # Usb-C
  • Update the initramfs and Proxmox boot config (this automatically handles GRUB or systemd-boot) to save these changes

    • update-initramfs -k all -u
    • proxmox-boot-tool refresh

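After rebooting, it is worth confirming that vfio-pci claimed every function of the card before any VM starts. A quick sanity check (the 03:00 address matches the hostpci0 example later in this post; substitute your own GPU address from lspci):

# Each function of the card should report "Kernel driver in use: vfio-pci"
lspci -nnk -s 03:00
# Confirm the IOMMU is active in the kernel log
dmesg | grep -i -e DMAR -e IOMMU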

Virtual Machine Setup

  • Open "Shell" on PVE node, using nano to modify the VM configuration, example: /etc/pve/qemu-server/100.conf
  • [Screenshot: pve_console.png — PVE node shell]
    • Create "args:" line to force vendor to AMDKVMAMD, so the vDisplay is NOT created and the physical display is initialized instead
    • Adding "hidden=1" to "cpu:" gives us a visible indication in the GUI
      • args: -cpu 'host,hv_vendor_id=AMDKVMAMD,kvm=off'
      • cpu: host,hidden=1
  • In the PVE VM hardware configuration, add the Radeon PCIe device to the VM ("All Functions"=1, "ROMBAR"=0, "Primary GPU"=1); the qm command-line equivalent is sketched after this list
    • Configure ROMBAR=0 so the GPU is not re-initialized by the virtual BIOS, to avoid potential reset problems
      • Caveat: the physical display will NOT show the virtual BIOS/OS bootloader
    • Also configure a Virtual GPU (such as "Standard VGA", "QXL", or "VirtIO"), which shows the BIOS/bootloader in the VNC/Console interface before the Windows drivers are loaded
      • When a graphics driver is not installed, Windows falls back to its default "Microsoft Basic Display" driver.
        • MS Basic will only function on the Virtual GPU, since it was initialized by the virtual BIOS --> it's expected to see error Code 31/43 on the Radeon device at this point.
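If you prefer the shell to the web GUI, the same settings can be applied with Proxmox's qm tool; a sketch assuming VM ID 100 and the 0000:03:00 GPU address from the full example below (adjust both to your system):

# Spoof the hypervisor vendor string and hide the KVM signature from the guest
qm set 100 --args "-cpu 'host,hv_vendor_id=AMDKVMAMD,kvm=off'"
qm set 100 --cpu host,hidden=1
# Pass through all functions of the GPU, with its ROM masked (rombar=0), as primary GPU
qm set 100 --hostpci0 0000:03:00,pcie=1,rombar=0,x-vga=1
# Keep an emulated VGA adapter for the console before the Windows driver loads
qm set 100 --vga std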

Windows Guest Setup

  • Boot the VM for the first time using the VNC/Console interface, which is common practice for accessing remote VMs
    • Install Windows as desired
    • Install all VirtIO guest drivers (networking, balloon, emulated graphics, etc.) from the CD
    • Install Radeon driver from amd.com
      • Once the Radeon driver loads, the physical display will light up
    • Configure Windows to "Show only on 2", the HDMI/DisplayPort physical display, so it will seamlessly switch between virtual GPU console and physical display after boot.
  • [Screenshot: display_settings.png — Windows display settings, "Show only on 2"]

 

Known issues

  • Sleep/resume within Windows may fail after X cycles. Recommended to disable the sleep timeout and instead shut down your VM when you're done with your work.
  • Driver updates may temporarily cause the physical display to go black. Recommended to install driver updates via the Proxmox/QEMU virtual console or VNC.

Configuration

Full Proxmox VM configuration file example (100.conf):

affinity: 1-5,7-11
args: -cpu 'host,hv_vendor_id=AMDKVMAMD,kvm=off'
bios: ovmf
boot: order=scsi0;ide2
cores: 10
cpu: host,hidden=1
efidisk0: local-lvm:vm-100-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
hostpci0: 0000:03:00,pcie=1,rombar=0,x-vga=1
ide2: none,media=cdrom
machine: q35
memory: 12000
meta: creation-qemu=8.1.5,ctime=1708978360
name: win
net0: e1000=BC:24:11:59:8C:D3,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-lvm:vm-100-disk-1,backup=0,discard=on,iothread=1,size=100G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=a4acc156-baaf-4705-9bcc-2347fda113d6
sockets: 1
vga: std
vmgenid: 2c881a54-40d3-4525-aaa3-0dd8ee5f1227

Resources

https://wiki.archlinux.org/title/PCI_passthrough_via_OVMF

https://pve.proxmox.com/wiki/Qemu/KVM_Virtual_Machines

Conclusion

We are working on enabling more use-cases for our Radeon cards on virtual machines and Linux. Stay tuned for future updates, and thanks for your support.
