Linux Format

GPU passthrough

Experience near-native graphics performance in a VM with the magic of PCI passthrough. It’s better than watching rabbits being pulled out of hats!


Lately, Steam Play has been getting a great deal of attention from Linux gamers, and with good reason. It brings a whole raft of titles to the platform, most of which stood zero chance of ever seeing an official port. However, many of the officially blessed titles don’t perform as well on Linux as they do on Windows, and many others run into all kinds of bugs that prevent them being played properly at all. Make no mistake, Valve’s Proton Wine fork and its sponsorship of DXVK are great, but they’re not going to convince hardcore gamers to switch to Linux any time soon. Even native Linux ports from the likes of Feral and Aspyr rarely compete performance-wise with their Windows counterparts.

Virtual Function IO (VFIO) enables the Linux kernel to pass PCI hardware directly to VMs with negligible overhead. This includes complicated hardware such as expensive graphics cards, so you can see where this is going. Besides gaming on a Windows VM, this is useful if you have a deep learning or mining environment set up in a VM. Many people who have this working report close to 99 per cent of native performance. In order to get it working, though, there are several hoops to jump through.

First, so there’s no confusion, let’s be clear: this requires two graphics cards. The host OS relinquishes all control of the passed-through GPU to the VM, so it’s as good as dead to the host. There are also a bunch of other hardware requirements that we’ve tried to sum up in the box (right). If you’re using two dedicated cards, make sure you have sufficient power to feed them. Theoretically, the one on the host shouldn’t be drawing much power while the VM is busy, but you never know. Modern GPUs have become efficient as well as powerful, but on a busy day the gruntier models can happily draw 250W.

I need IOMMU

As well as activating it in the BIOS, we need to tell the kernel to use the IOMMU by way of some bootloader options, which we can either add to the linux line in /boot/grub/grub.cfg, or we can use the recommended drill of adding them to the GRUB_CMDLINE_LINUX_DEFAULT line in /etc/default/grub and then running grub-mkconfig. On an Intel system, the requisite option is intel_iommu=on , while on AMD it’s amd_iommu=on . Reboot, then run the following script (appropriated from the helpful Arch Wiki VFIO page, which is worth a read: https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF):

#!/bin/bash
shopt -s nullglob
for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=${d#*/iommu_groups/*}; n=${n%%/*}
    printf 'IOMMU Group %s ' "$n"
    lspci -nns "${d##*/}"
done
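To make the bootloader step concrete, here’s roughly what the edit looks like on an Intel system. Treat it as an illustrative fragment, not a line to paste verbatim: the options already on your GRUB_CMDLINE_LINUX_DEFAULT line (we’ve assumed the common quiet splash pair) will differ, and you should keep them.

```shell
# /etc/default/grub (illustrative; keep whatever options you already have)
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on"
```

After editing, regenerate the config with sudo grub-mkconfig -o /boot/grub/grub.cfg and reboot.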

PCI hardware can only be passed through in a manner that respects these groupings. If your graphics card (the one you want to pass through) is in the same group as important or complicated-looking devices, the easiest solution is to move it to another slot and try again. In our case the script produced:

…
IOMMU Group 16 01:00.0 VGA ... Hawaii XT / Grenada XT [Radeon R9 290X/390X] [1002:67b0]
IOMMU Group 16 01:00.1 Audio device ... Hawaii HDMI Audio [Radeon R9 290/290X / 390/390X] [1002:aac8]
…

This is actually a pretty ideal case. The only other thing in our GPU’s IOMMU group is its own HDMI audio device, so we need to pass that through as well, granting our VM both video and audio hardware.
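The grouping rule can be condensed into a tiny helper of our own devising (this isn’t from the Arch wiki): given one PCI address, print every device in its IOMMU group, all of which must be handed to the VM together. The sysfs root is a parameter purely so the sketch can be exercised against a mock directory tree; on a real system you’d pass /sys.

```shell
#!/bin/bash
# group_members SYSROOT DEV - print the PCI addresses that share DEV's
# IOMMU group. Everything listed must be passed through together.
group_members() {
    local root="$1" dev="$2"
    # each PCI device has an iommu_group symlink pointing at its group's
    # directory under SYSROOT/kernel/iommu_groups/
    local grp
    grp="$(readlink -f "$root/bus/pci/devices/$dev/iommu_group")"
    local d
    for d in "$grp"/devices/*; do
        [ -e "$d" ] || continue
        basename "$d"
    done
}
```

On our box, group_members /sys 0000:01:00.0 would print both the VGA function and its HDMI audio sibling from group 16.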

Once kernel drivers bind to our pass-through GPU, neither hell nor high water will persuade them to let go of it, so we need to stop this happening. This is achieved with the vfio-pci driver, which we need to inform of the pass-through device’s vendor and device IDs (the numerals at the end of the output above). Using identical GPUs on the guest and host causes problems here, because these IDs are identical (check the Arch wiki for workaround guidance). To pass through our GPU and audio device, we create a file /etc/modprobe.d/vfio.conf containing:

options vfio-pci ids=1002:67b0,1002:aac8

For this magic to work, we need the VFIO modules to be loaded with the initramfs. On Ubuntu, this is done through the /etc/initramfs-tools/modules file, while on Arch (which is what we use, btw) we edit the MODULES= line in /etc/mkinitcpio.conf. Either way, add the four modules vfio, vfio_iommu_type1, vfio_pci and vfio_virqfd so that they appear before anything video-related. Most likely, there won’t be any modules here at all, unless you’ve got early KMS or some such set up. Now we regenerate the initramfs with sudo update-initramfs -u on Ubuntu or mkinitcpio -P on Arch. Reboot and check the output of:

$ dmesg | grep -i vfio

Hopefully you should see some reassuring runes:

[ 1.488517] VFIO - User Level meta-driver version: 0.3
[ 1.489469] vfio-pci 0000:01:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=none
[ 1.504260] vfio_pci: add [1002:67b0[ffff:ffff]] class 0x000000/00000000
[ 1.520925] vfio_pci: add [1002:aac8[ffff:ffff]] class 0x000000/00000000
[ 5.429259] vfio-pci 0000:01:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=none

This may not be what you see, and that may not be the end of the world. Check the output of:

$ lspci -nnk -d 1002:67b0

changing the vendor-device pair as appropriate. This will show whether or not the vfio-pci driver has correctly bound to the device.
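That lspci check boils down to reading one symlink in sysfs: every bound PCI device has a driver link pointing at its driver’s directory. Here’s a hedged sketch of that check as a helper of our own (not a standard tool); the sysfs root is parameterised only so the logic can be tested against a mock tree, and for real use you’d pass /sys.

```shell
#!/bin/bash
# driver_of SYSROOT DEV - name the kernel driver bound to PCI device DEV,
# or print "none" if nothing has claimed it yet.
driver_of() {
    local root="$1" dev="$2"
    local link="$root/bus/pci/devices/$dev/driver"
    if [ -e "$link" ]; then
        # the driver symlink resolves to /sys/bus/pci/drivers/<name>
        basename "$(readlink -f "$link")"
    else
        echo none
    fi
}
```

If driver_of /sys 0000:01:00.0 answers vfio-pci, the stubbing worked; if it names a video driver instead, your modprobe options weren’t picked up and the initramfs step needs revisiting.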

Fair but firm-ware

For any of this to work, the passed-through GPU must support UEFI, and the VM must be configured to boot with it. Open Virtual Machine Firmware (OVMF) is a port of Intel’s TianoCore (aka EFI Development Kit II, or edk2) UEFI firmware which allows VMs to reap the benefits of UEFI. Install it with sudo apt install ovmf . We’ll assume you’ve already got the QEMU, Libvirt and virt-manager packages from the previous section. We need to tell Libvirt where to find this firmware, which requires adding a line to /etc/libvirt/qemu.conf. Search this file for nvram , where you’ll find some explanation, and add this line below those comments:

nvram = [ "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd" ]

This is for Ubuntu; on Arch, change the path to /usr/share/ovmf/x64/. Finally, we’re ready to set up the VM in virt-manager. This follows the same process as before, only it uses a Windows ISO (which you can download from Microsoft). When you get to the final stage of the wizard, be sure to check the Customize box so that you can select UEFI booting (see screenshot, right). We’re running out of space so we won’t say much about installing Windows – we’re sure you can figure it out – but do set up a VirtIO SCSI disk and experiment with turning off caching if disk I/O slows things down. Once Windows is installed, we can shut it down and set up the pass-through magick.

From the VM’s Details dialog, click Add Hardware, choose PCI Host Device and select everything from the IOMMU group we interrogated earlier. Passing through a USB keyboard and mouse is a good idea too, but these need to be separate from the ones you use to control the host. We no longer require the virtual devices (Spice channel, QXL video adapter and the emulated peripherals), so these can be deleted. And now the moment of truth: boot the VM, plug your monitor into (or switch its input to) the pass-through GPU, hit some keys on the passed-through keyboard and hopefully you’ll be booting into Windows. There’s a phrase we never thought we’d utter!

Yin and yang: we used an Nvidia GTX 680 for the host and a slightly newer, faster Radeon 290 OCX for the guest.
The Firmware setting can’t be changed after you’ve booted the install medium, so choose UEFI.
