
Vfio

A script for easy PCI and USB passthrough along with disks, ISOs, and other useful flags for quick tinkering with less of a headache. I use it for VM gaming and other PCI/LiveCD/PXE/VM/RawImage testing given the script's accessibility.

Install / Use

/learn @ipaqmaster/Vfio

README

vfio

About this script

This is a bash script I've put a lot of love into. It lets me avoid libvirt guest definitions with hardcoded PCI paths, and it makes other virtualization testing very quick for me too.

It starts a VM by calling qemu-system-x86_64 directly but can automatically handle a bunch of additional optional arguments, all with the convenience of a terminal with command history. The aim of this script is to make changing how the guest starts up as simple as backspacing something out of the command, with a particular focus on PCIe devices.

This script makes Windows VM gaming a cinch on my single-NVIDIA-GPU configuration and also on some of my older dual-GPU machines, which can take advantage of Looking Glass.

Now if only NVIDIA would officially support vGPUs on their consumer hardware.

What does it actually do

On the surface it's just a qemu wrapper but the magic is in the additional optional arguments it can handle. With these features I swear by it for any VFIO or general virtualization tinkering (kernel testing, physical disk booting, etc).

It:

  • Gives a clear rundown of everything it has been asked to do, and runs in a 'Dry' mode by default to avoid wreaking havoc without you saying go first. (-run)

  • Provides the exact QEMU arguments it intends to launch with in case you want to dive into QEMU manually by hand or use the arguments for something else.

  • Safely warns or fails if a critical environment discrepancy is detected, such as VT-d/IOMMU not being enabled in the BIOS or missing boot flags, among other environment catches, before running itself into a wall.

  • Takes an optional regular expression of PCI and USB devices to pass to the VM when starting it.

    • My favorite is -PCI 'NVIDIA|USB' to give it my graphics card along with all host USB controllers on my single-GPU host.
  • Takes as many virtual disks or ISOs as you want to pass in, with a QEMU iothread for each virtual disk.

    • You could save some overhead compared to virtual disks by passing in an entire NVMe/HBA/SATA controller with -PCI 'SATA', assuming your host isn't booted from the same controller.

    • Otherwise it also accepts more specific device details such as the PCI device path, model, and other discernible features from lspci -D if you only want to pass through specific devices: -PCI 0000:06:00.0, -PCI MegaRAID, -PCI abcd:1234.

  • Automatically unbinds all specified PCI devices from their drivers onto vfio-pci if they're not already on it.
    (No need for early host driver blocking or early PCI device vfio binding)

  • Automatically rebinds all specified PCI devices back to their originating driver on guest shutdown
    (When applicable)

  • Can automatically kill the display-manager for a guest to use if it detects the GPU is stuck unbinding
    (Commonly unavoidable in single GPU graphical desktop scenarios)
    But also rebinds the card back to its driver on guest exit and restarts the DM back to the login screen. Almost seamless...

  • Can make network bridges automatically or attaches the VM to an existing bridge with a tap adapter if specified
    giving your VM a true Layer 2 network presence on your LAN with direct exposure for RDP, SSH, and all.
    (Useful to avoid the default "One way trip" nat adapter)

  • Can describe your host's IOMMU groups and CPU thread pairings per core to aid with vcpu pinning (And isolation planning).

    • e.g. -iommugroups and -cputhreads
  • Can take optional vcpu pinning arguments to help avoid stutter due to clashing host and guest cpu activity.

    • No automatic isolation support just yet. I haven't found a modern method better than boot-time cpu isolation via kernel args that I'd be happy to implement while avoiding reboots. systemctl set-property --runtime slice management doesn't take care of IRQ and other isolation performance tweaks I'd like to manage in realtime, but it might be enough for desktop use-cases. For now, I've provided some powerful cpu isolation boot argument examples below. Sorry!
  • Can dynamically allocate hugepages for the VM on the fly if host memory isn't too fragmented. Otherwise, if enough hugepages are pre-allocated and free, it will notice the existing hugepages and use them.

  • Optionally enables Looking Glass support if you have a second dGPU/iGPU on the host to continue graphically while the guest runs.

  • Can take a romfile argument for any GPU devices being passed through if needed for your setup.

  • Optionally includes key hyperv enlightenments for Windows guest performance.

  • And many more little bits and pieces! (-help)
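The driver unbind/rebind behaviour in the list above boils down to a well-known sysfs dance. Below is a minimal sketch of that mechanism, not the script's actual code: the function name is hypothetical, the device address is an example, and the sysfs root is parameterised purely so the logic can be demonstrated safely against a fake tree.

```shell
#!/usr/bin/env bash
# Sketch: move one PCI device onto vfio-pci at runtime via sysfs.
# NOT the script's actual implementation; bind_to_vfio is a hypothetical helper.
# SYSFS_PCI defaults to the real sysfs path but may point at a fake tree.

bind_to_vfio() {
  local addr="$1"                                  # e.g. 0000:06:00.0
  local sysfs="${SYSFS_PCI:-/sys/bus/pci}"
  local dev="$sysfs/devices/$addr"
  local old=""

  # Remember the current driver so it can be restored on guest shutdown.
  if [ -e "$dev/driver" ]; then
    old="$(basename "$(readlink -f "$dev/driver")")"
    echo "$addr" > "$dev/driver/unbind"            # detach from the host driver
  fi

  # Steer this one device toward vfio-pci, then ask the bus to re-probe it.
  echo vfio-pci > "$dev/driver_override"
  echo "$addr"  > "$sysfs/drivers_probe"

  echo "$old"    # caller records this for the later rebind step
}
```

Rebinding on guest shutdown is the same steps in reverse: clear driver_override, unbind from vfio-pci, and re-probe so the original driver picks the device back up.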

What's supported?

This script has been designed on and for Arch Linux; however, it is mostly generic and the tools it relies on can be installed on any system with whichever package manager it ships. It will work just fine on any distro shipping a modern kernel and qemu binary. At worst, some distros may store OVMF_CODE.fd elsewhere and its path will have to be specified with the -bios argument - I'll likely add an array of well-known locations later so those on other distros don't have to specify that.

I've also confirmed that this works on Ubuntu 20.04 and 18.04 as well - but again, it'll work on anything shipping modern kernel and qemu versions. For PCIe passthrough specifically, the host will need to support either Intel's VT-d or AMD-Vi (may just be labelled IOMMU in its BIOS). These features are well aged, so even my older ~2011 PC can run this script and do PCIe passthrough just fine, albeit at slower performance than the hardware of today. Just don't forget to add the relevant AMD or Intel VFIO arguments to your kernel's boot arguments.
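For reference, those boot arguments typically look like this on a GRUB-based system. This is a hedged example, not the script's own requirement list: the exact file layout and whether you want iommu=pt varies by distro and hardware.

```shell
# /etc/default/grub - example IOMMU boot arguments (pick the line for your CPU)

# Intel hosts need VT-d enabled explicitly:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# AMD hosts: the kernel enables AMD-Vi automatically when the BIOS exposes it,
# so iommu=pt on its own is often enough:
#GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt"

# Afterwards, regenerate the config:  grub-mkconfig -o /boot/grub/grub.cfg
```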

Even my partially retired desktop from 2011 (i7-3930K, ASUS SABERTOOTH X79, 2x16GB DDR3, 2x SATA SSDs mirrored) can run this script with the two older NVIDIA GPUs in it with a Looking Glass client window on its desktop to the guest.

My 2020 PC (3900x, DDR4@3600, single 2080Ti, dedicated M.2 NVMe for the guest) has no trouble playing games such as Overwatch, MK11 Multiplayer and Battlefield One/3/4/2042 in a fully seamless experience. The VM feels like a real computer (without checking Device Manager and seeing virtual hardware). There are no stutters or telltale signs that it's not a real computer, nor that it's under heavy load, when vcpu pinning and host core isolation are done correctly. The dedicated NVMe passed through for booting the guest has been the most impactful piece of the puzzle in my experience. Disk IO latency is more than half the battle.

Why make this

The primary motivation is that it's for fun. I love my field, and the 'edutainment' I take from this profession fuels the fire. WINE (Proton, etc.) continues to improve and there are only so many titles which can't be played right in Linux anymore, but there remain some good reasons to make this:

  1. It lets me do a lot of "scratch pad" debugging/testing/kernel patching/hacking and other tests which can be entirely forgotten in a blink. (Or by hitting ^a^x in QEMU's terminal window, which is even faster)
  2. A game title may employ a driver-level Anti-Cheat solution, completely borking the multiplayer experience in Linux (which you can't just throw at WINE). While not every game will let you play in a VM without further tweaking, a number of them are okay with this.
  3. Despite the best efforts of the Linux community a title may be too stubborn to function without a native windows environment. Sometimes the path of least resistance is the best.

Primarily, I see many tutorials copy-pasting scripts and other libvirtd hooks that hardcode PCIe addresses and perform either redundant or potentially dangerous 'catch all' actions blindly. This script lets me avoid that "set and forget" attitude, which helps prevent poor decisions such as blacklisting entire graphics drivers on a system (which I often see soft-brick users' computers), or hardcoding PCIe addresses in libvirtd, which often leads to a 'broken' (must be reconfigured) libvirtd guest when another PCIe device gets plugged in overnight or a BIOS update shifts everything.

I figured I'd write my own all-in-one here to help make modifying what hardware a guest receives as easy as typing a few extra words into the command invocation. Or backspacing them. All with the convenience of the up arrow in my shell history.

In general this script has been very useful in my tinkering even outside VFIO and gaming. At this point I swear by it for quick and easy screwing around with QEMU just on the desktop. Especially for booting external drives or USB sticks in just a few keystrokes.

The script, arguments, examples and an installation example

General usage arguments

-agents

If set, sets -qemu-agent and -spice, enabling both during VM runtime. See their own flags below.

-avoidVirtio / -noVirtio

If set, tries to pick more traditional QEMU devices for the best compatibility with a booting guest. Likely to be useful during new Windows installs if you don't have the virtio ISO to pass in alongside the installer ISO. Also useful elsewhere, such as testing kernel and initramfs combinations where virtio isn't built into the kernel or included in the initramfs as a late module.

-iommu / -iommugroups / -iommugrouping

Prints IOMMU groupings if available, then exits.
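Under the hood, IOMMU grouping information lives in sysfs, and listing it amounts to roughly the loop below. This is a sketch rather than the script's actual implementation; the function name is hypothetical and the sysfs root is overridable purely so the loop can be demonstrated against a fake tree.

```shell
#!/usr/bin/env bash
# Sketch: list each IOMMU group and its member PCI devices from sysfs.
# IOMMU_SYSFS defaults to the real path; override it to demo on a fake tree.

list_iommu_groups() {
  local root="${IOMMU_SYSFS:-/sys/kernel/iommu_groups}" grp dev
  [ -d "$root" ] || { echo "No IOMMU groups found (is VT-d/AMD-Vi enabled?)"; return 1; }
  for grp in "$root"/*/; do
    [ -d "$grp" ] || continue                       # skip unmatched glob
    printf 'Group %s:' "$(basename "$grp")"
    for dev in "$grp"devices/*; do
      [ -e "$dev" ] || continue
      printf ' %s' "$(basename "$dev")"             # PCI address of each member
    done
    printf '\n'
  done
}
```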

-cputhreads / -showpairs / -showthreads / -showcpu

Prints the host core count and shares which threads belong to which core. Useful knowledge for setting up isolation in kernel arguments and when pinning the guest with -pinvcpus.
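The topology this reports is exposed per-CPU in sysfs. A simplified version of that report (a hypothetical helper, not the script's code, with the sysfs root overridable so the parsing can be shown on a fake topology tree) looks like:

```shell
#!/usr/bin/env bash
# Sketch: print which hardware threads share each physical core.
# Simplified (ignores multi-socket core_id reuse); CPU_SYSFS is overridable
# so the parsing can be demonstrated on a fake tree.

show_thread_pairs() {
  local root="${CPU_SYSFS:-/sys/devices/system/cpu}" cpu core list seen=""
  for cpu in "$root"/cpu[0-9]*; do
    [ -f "$cpu/topology/core_id" ] || continue
    core=$(cat "$cpu/topology/core_id")
    list=$(cat "$cpu/topology/thread_siblings_list")
    case " $seen " in *" $core "*) continue ;; esac   # one line per core
    seen="$seen $core"
    printf 'core %s: threads %s\n' "$core" "$list"
  done
}
```

On an SMT host this prints pairs like `core 0: threads 0,6`, which is exactly the pairing you want to know before pinning a guest vcpu to both siblings of a core.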

-ignorevtcon / -ignoreframebuffer / -leavefb / -leaveframebuffer / -leavevtcon

Intentionally leaves the vtcon and efi-framebuffer bindings alone. Primarily added to work around kernel bug 216475. Prevents restoring vtcons, at the cost of as many GPU swaps from host to guest as desired.

-image /dev/zvol/zpoolName/windows -imageformat raw

If set, attaches the given image to the VM as a disk, with -imageformat specifying its format (raw in the zvol example above).
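For context, a raw zvol disk with its own iothread (as promised per-disk earlier) maps to plain QEMU arguments roughly like the following. This is a hand-written sketch of standard QEMU options, not the script's literal output; the remaining machine/memory/CPU flags are omitted.

```shell
# Hypothetical illustration: one -object/-drive/-device trio per guest disk.
qemu-system-x86_64 \
  -object iothread,id=iothread0 \
  -drive file=/dev/zvol/zpoolName/windows,format=raw,if=none,id=disk0,cache=none,aio=native \
  -device virtio-blk-pci,drive=disk0,iothread=iothread0
```

Tying each virtio-blk device to its own iothread keeps disk I/O off the main emulation thread, which is much of where the script's low disk latency comes from.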

GitHub Stars: 206
Category: Development
Updated: 1mo ago
Forks: 14

Languages

Shell

Security Score

100/100

Audited on Feb 27, 2026

No findings