Running a virtual GPU
Discover how to harness Intel’s GVT-g technology to virtualise your GPU.
We’ve seen that virtualisation (with appropriate hardware support) is much faster than conventional emulation. And we’ve seen that when using paravirtualised VirtIO devices we can speed that up even more. But we can go further. What if, for example, we gave a virtual machine its own physical graphics card?
This technique, known generally as VFIO, has been around for a while. For the particular case of using PCI passthrough with graphics cards, the result is that VMs can run graphically intensive applications to within a hair’s breadth of native speeds. This enables Linux users to run Windows VMs and play games without taking a performance hit. This is an alternative to Proton, albeit one which is a little tricky to set up and requires that the host machine has (at least) two graphics cards. We covered this in LXF261.
This time around we’ll look at a slightly different technique for users of Intel hardware. This is known as GVT-g and enables your actual GPU to be segmented into virtual GPUs that behave exactly as though they were connected via PCI passthrough. Equivalent technologies do exist for Nvidia and AMD cards, although they’re not commonly found on their consumer cards.
The first thing you need to do is ensure that your onboard graphics are supported, which requires at least a Broadwell (fifth-generation) Core processor. Then we need to change some kernel options. Start by running the following instruction:
$ sudo nano /etc/default/grub
and find the line that looks like
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
The right-hand side may not be identical, depending on your setup, but no matter. Add the parameters:
intel_iommu=on i915.enable_guc=0
before the closing quote. Then save, exit and run:
$ sudo update-grub
to update the bootloader.
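If you’d rather script the change than open an editor, it amounts to appending the two parameters just before the closing quote. Here’s a sketch that operates on a sample line rather than the real file (back up /etc/default/grub before pointing sed at it for real):

```shell
# Append the GVT-g kernel parameters before the closing quote of the
# GRUB_CMDLINE_LINUX_DEFAULT line. We demonstrate on a sample string;
# on a real system run the same sed against /etc/default/grub.
line='GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"'
printf '%s\n' "$line" | sed 's/"$/ intel_iommu=on i915.enable_guc=0"/'
# → GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on i915.enable_guc=0"
```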
Next, we need to change the GPU module options. Edit (again using sudo) the file /etc/modprobe.d/i915.conf and add the following line to it:
options i915 enable_gvt=1
We also need to update the initramfs so that the i915 driver is loaded (and respects our settings) early in the boot process. On Ubuntu this involves adding i915 to /etc/initramfs-tools/modules and then regenerating with the following:
$ sudo update-initramfs -u -k all
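The i915 entry only needs adding once, so a guarded append keeps the file tidy if you ever script your setup and run it twice. A sketch, using a temporary stand-in file (on a real Ubuntu system, point MODFILE at /etc/initramfs-tools/modules and edit as root):

```shell
# Idempotently add "i915" to an initramfs module list.
# MODFILE is a temporary stand-in here; use /etc/initramfs-tools/modules
# (edited as root) on a real system.
MODFILE=$(mktemp)
grep -qx 'i915' "$MODFILE" || echo 'i915' >> "$MODFILE"
grep -qx 'i915' "$MODFILE" || echo 'i915' >> "$MODFILE"  # second run changes nothing
cat "$MODFILE"   # prints i915 exactly once
rm -f "$MODFILE"
```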
Finally, we need to ensure some modules are automatically loaded at boot. Create a new file with:
$ sudo nano /etc/modules-load.d/gvt.conf
and populate it with:
kvmgt vfio-iommu-type1 mdev
Now reboot so that the changes take effect.
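Once you’re back up, it’s worth verifying that everything stuck. This sketch checks the kernel command line and the loaded modules, and simply reports (rather than erroring out) if something is missing:

```shell
# Confirm the IOMMU parameter took effect and the GVT-g modules loaded.
# On a machine that hasn't been set up yet, these checks just report failure.
grep -qo 'intel_iommu=on' /proc/cmdline \
  && echo 'intel_iommu is on' || echo 'intel_iommu not enabled yet'
lsmod | grep -q kvmgt \
  && echo 'kvmgt loaded' || echo 'kvmgt not loaded yet'
```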
Graphics card details
We need to find the domain number and PCI address for our graphics card. This is to be found in the output from the command:
$ lspci -D -nn
Look for the numbers of the form 0000:00:02.0 before VGA compatible controller. The first six digits (strictly speaking, the domain and bus numbers) are what we’ll call the domain number and, confusingly, the whole string is the PCI address. If we now run the following (replacing the domain number and address as appropriate, noting that colons must be escaped with backslashes, and that tab-completion is your friend):
$ ls /sys/devices/pci0000\:00/0000\:00\:02.0/mdev_supported_types
then you should see a few directories. Each of these represents a specific virtual GPU configuration, which you can find out about by looking at the description
files within, for example:
$ cat /sys/devices/.../mdev_supported_types/i915-GVTg_V5_4/description
low_gm_size: 128MB
high_gm_size: 512MB
fence: 4
resolution: 1920x1200
weight: 4
In general the smaller the final number, the more resources the vGPU will have.
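To compare all the configurations at once, you can loop over the type directories. A sketch, assuming the 0000:00:02.0 address found above; it reports politely on machines where GVT-g isn’t enabled:

```shell
# Print every supported vGPU type alongside its description.
# The PCI address below is the one found with lspci earlier; adjust to suit.
TYPES=/sys/devices/pci0000:00/0000:00:02.0/mdev_supported_types
if [ -d "$TYPES" ]; then
  for t in "$TYPES"/*/; do
    echo "== $(basename "$t") =="
    cat "$t/description"
  done
else
  echo "No mdev types found at $TYPES (is GVT-g enabled?)"
fi
```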
Next, we need a Globally Unique Identifier (GUID), which is a string of 32 hex digits. You can generate a random one (and store it in a variable) by running GVT_GUID=$(uuidgen) or by typing guid into DuckDuckGo. Since GUIDs are essentially 128-bit numbers, the probability of a collision between randomly generated ones is vanishingly small (so they’re very likely unique, rather than guaranteed unique).
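If uuidgen isn’t installed, the kernel itself will happily hand you one. Both routes below produce the 8-4-4-4-12 hyphenated string of 32 hex digits we need:

```shell
# Generate a GUID: prefer uuidgen (from util-linux), falling back to the
# kernel's random-uuid interface, which exists on any modern Linux.
if command -v uuidgen >/dev/null 2>&1; then
  GVT_GUID=$(uuidgen)
else
  GVT_GUID=$(cat /proc/sys/kernel/random/uuid)
fi
# Sanity-check the shape: 8-4-4-4-12 lowercase hex digits.
echo "$GVT_GUID" | grep -Eq '^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$' \
  && echo 'GUID looks valid'
```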
We’ll use this GUID to create a new vGPU. For ease of reading (and consistency with the Arch Wiki, without which this section wouldn’t be possible), we’ll also use $GVT_DOM, $GVT_PCI and $GVT_TYPE for the various identifiers we’ve come across so far. For clarity, these might look like:
GVT_DOM=0000:00
GVT_PCI=0000:00:02.0
GVT_TYPE=i915-GVTg_V5_4
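Rather than transcribing these by hand, you can pull them out of lspci with awk and a little parameter expansion. A sketch – the sample line here stands in for real lspci -D -nn output, and the device name is purely illustrative:

```shell
# Extract the PCI address of the first VGA controller and derive the
# domain(+bus) prefix from it. "sample" is illustrative; on a real system
# pipe in:  lspci -D -nn
sample='0000:00:02.0 VGA compatible controller [0300]: Intel Corporation UHD Graphics 620 [8086:5917]'
GVT_PCI=$(printf '%s\n' "$sample" | awk '/VGA compatible controller/ {print $1; exit}')
GVT_DOM=${GVT_PCI%:*}       # strip the trailing :device.function part
echo "$GVT_DOM $GVT_PCI"    # → 0000:00 0000:00:02.0
```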
Now we use these variables so that the following command is easier to type:
$ echo $GVT_GUID | sudo tee /sys/devices/pci$GVT_DOM/$GVT_PCI/mdev_supported_types/$GVT_TYPE/create
All going to plan, this will have created a new device. Let’s check this by looking at the PCI bus:
$ ls /sys/bus/pci/devices/$GVT_PCI
There should be a subdirectory matching the $GVT_GUID generated earlier. It’s possible to create more vGPUs by repeating the process with different GUIDs, though how many you can have is limited by your video RAM. You can remove a vGPU by echoing 1 into the remove node inside its subdirectory.
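Putting these pieces together, creation can be wrapped in a short guarded script. A sketch, assuming the variables from earlier; it reports rather than failing on machines where GVT-g isn’t set up:

```shell
# Create a vGPU of the chosen type, then confirm it appeared on the PCI bus.
GVT_DOM=0000:00
GVT_PCI=0000:00:02.0
GVT_TYPE=i915-GVTg_V5_4
GVT_GUID=$(cat /proc/sys/kernel/random/uuid)
TYPE_DIR=/sys/devices/pci$GVT_DOM/$GVT_PCI/mdev_supported_types/$GVT_TYPE
if [ -d "$TYPE_DIR" ]; then
  echo "$GVT_GUID" | sudo tee "$TYPE_DIR/create" >/dev/null
  [ -d "/sys/bus/pci/devices/$GVT_PCI/$GVT_GUID" ] && echo "created vGPU $GVT_GUID"
  # To remove it again later:
  #   echo 1 | sudo tee "/sys/bus/pci/devices/$GVT_PCI/$GVT_GUID/remove"
else
  echo "GVT type directory not found: $TYPE_DIR"
fi
```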
In order to have a libvirt virtual machine use the virtualised GPU, we need to tweak its configuration. This is quite easy to do with Boxes (Edit XML), so we’ll go back to the first VM we set up there and endow it with graphical superpowers. We must add a hostdev stanza, referencing our GUID, inside the devices element of the machine’s XML.
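Following the Arch Wiki setup this section is based on, the stanza looks something like the sketch below. The uuid attribute takes the $GVT_GUID created earlier, and the exact attributes supported (display in particular) depend on your libvirt version:

```xml
<!-- Goes inside the <devices> element of the VM definition.
     Replace the uuid value with your own $GVT_GUID. -->
<hostdev mode='subsystem' type='mdev' model='vfio-pci' display='on'>
  <source>
    <address uuid='REPLACE-WITH-YOUR-GVT-GUID'/>
  </source>
</hostdev>
```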
When you start the virtual machine it probably won’t work, and if you study the logs you’ll see it’s because of a rather parochial permissions error. You can fix this with the following:
$ sudo chmod o+rw /dev/vfio/0
You might have noticed that Boxes makes it possible to have our VM boot via UEFI, using firmware from the Open Virtual Machine Firmware (OVMF) project. This needs to be done when the machine is first created, otherwise we’d end up with a paradoxical VM. If you want a UEFI virtual machine you’ll need to install the ovmf package. We’ve stuck with a classic BIOS setup here, though, because this particular brand of GPU virtualisation currently requires extra effort to work with UEFI guests.
And so concludes our glorious foray into virtualisation. Let us know if you did anything differently, or what you would change about our approach.