1VFIO - "Virtual Function I/O"[1] 2------------------------------------------------------------------------------- 3Many modern system now provide DMA and interrupt remapping facilities 4to help ensure I/O devices behave within the boundaries they've been 5allotted. This includes x86 hardware with AMD-Vi and Intel VT-d, 6POWER systems with Partitionable Endpoints (PEs) and embedded PowerPC 7systems such as Freescale PAMU. The VFIO driver is an IOMMU/device 8agnostic framework for exposing direct device access to userspace, in 9a secure, IOMMU protected environment. In other words, this allows 10safe[2], non-privileged, userspace drivers. 11 12Why do we want that? Virtual machines often make use of direct device 13access ("device assignment") when configured for the highest possible 14I/O performance. From a device and host perspective, this simply 15turns the VM into a userspace driver, with the benefits of 16significantly reduced latency, higher bandwidth, and direct use of 17bare-metal device drivers[3]. 18 19Some applications, particularly in the high performance computing 20field, also benefit from low-overhead, direct device access from 21userspace. Examples include network adapters (often non-TCP/IP based) 22and compute accelerators. Prior to VFIO, these drivers had to either 23go through the full development cycle to become proper upstream 24driver, be maintained out of tree, or make use of the UIO framework, 25which has no notion of IOMMU protection, limited interrupt support, 26and requires root privileges to access things like PCI configuration 27space. 28 29The VFIO driver framework intends to unify these, replacing both the 30KVM PCI specific device assignment code as well as provide a more 31secure, more featureful userspace driver environment than UIO. 32 33Groups, Devices, and IOMMUs 34------------------------------------------------------------------------------- 35 36Devices are the main target of any I/O driver. Devices typically 37create a programming interface made up of I/O access, interrupts, 38and DMA. Without going into the details of each of these, DMA is 39by far the most critical aspect for maintaining a secure environment 40as allowing a device read-write access to system memory imposes the 41greatest risk to the overall system integrity. 42 43To help mitigate this risk, many modern IOMMUs now incorporate 44isolation properties into what was, in many cases, an interface only 45meant for translation (ie. solving the addressing problems of devices 46with limited address spaces). With this, devices can now be isolated 47from each other and from arbitrary memory access, thus allowing 48things like secure direct assignment of devices into virtual machines. 49 50This isolation is not always at the granularity of a single device 51though. Even when an IOMMU is capable of this, properties of devices, 52interconnects, and IOMMU topologies can each reduce this isolation. 53For instance, an individual device may be part of a larger multi- 54function enclosure. While the IOMMU may be able to distinguish 55between devices within the enclosure, the enclosure may not require 56transactions between devices to reach the IOMMU. Examples of this 57could be anything from a multi-function PCI device with backdoors 58between functions to a non-PCI-ACS (Access Control Services) capable 59bridge allowing redirection without reaching the IOMMU. Topology 60can also play a factor in terms of hiding devices. 

While the group is the minimum granularity that must be used to
ensure secure user access, it's not necessarily the preferred
granularity.  In IOMMUs which make use of page tables, it may be
possible to share a set of page tables between different groups,
reducing the overhead both to the platform (reduced TLB thrashing,
reduced duplicate page tables), and to the user (programming only
a single set of translations).  For this reason, VFIO makes use of
a container class, which may hold one or more groups.  A container
is created by simply opening the /dev/vfio/vfio character device.

On its own, the container provides little functionality, with all
but a couple of version and extension query interfaces locked away.
The user needs to add a group into the container for the next level
of functionality.  To do this, the user first needs to identify the
group associated with the desired device.  This can be done using
the sysfs links described in the example below.  By unbinding the
device from the host driver and binding it to a VFIO driver, a new
VFIO group will appear for the group as /dev/vfio/$GROUP, where
$GROUP is the IOMMU group number of which the device is a member.
If the IOMMU group contains multiple devices, each will need to
be bound to a VFIO driver before operations on the VFIO group
are allowed (it's also sufficient to only unbind the device from
host drivers if a VFIO driver is unavailable; this will make the
group available, but not that particular device).  TBD - interface
for disabling driver probing/locking a device.

Once the group is ready, it may be added to the container by opening
the VFIO group character device (/dev/vfio/$GROUP) and using the
VFIO_GROUP_SET_CONTAINER ioctl, passing the file descriptor of the
previously opened container file.  If desired, and if the IOMMU driver
supports sharing the IOMMU context between groups, multiple groups may
be set to the same container.  If a group fails to be set to a
container with existing groups, a new empty container will need to be
used instead.

With a group (or groups) attached to a container, the remaining
ioctls become available, enabling access to the VFIO IOMMU interfaces.
Additionally, it now becomes possible to get file descriptors for each
device within a group using an ioctl on the VFIO group file descriptor.

The VFIO device API includes ioctls for describing the device, the I/O
regions and their read/write/mmap offsets on the device descriptor, as
well as mechanisms for describing and registering interrupt
notifications.

VFIO Usage Example
-------------------------------------------------------------------------------

Assume the user wants to access PCI device 0000:06:0d.0:

$ readlink /sys/bus/pci/devices/0000:06:0d.0/iommu_group
../../../../kernel/iommu_groups/26

This device is therefore in IOMMU group 26.  This device is on the
PCI bus, therefore the user will make use of vfio-pci to manage the
group:

# modprobe vfio-pci

Binding this device to the vfio-pci driver creates the VFIO group
character devices for this group:

$ lspci -n -s 0000:06:0d.0
06:0d.0 0401: 1102:0002 (rev 08)
# echo 0000:06:0d.0 > /sys/bus/pci/devices/0000:06:0d.0/driver/unbind
# echo 1102 0002 > /sys/bus/pci/drivers/vfio-pci/new_id
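
On kernels providing the driver_override sysfs attribute, a sketch of
an alternative binding method targets just this one device, without
also making vfio-pci attempt to claim any other device with the same
vendor/device ID the way new_id does:

# echo vfio-pci > /sys/bus/pci/devices/0000:06:0d.0/driver_override
# echo 0000:06:0d.0 > /sys/bus/pci/devices/0000:06:0d.0/driver/unbind
# echo 0000:06:0d.0 > /sys/bus/pci/drivers_probe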

Now we need to look at what other devices are in the group to free
it for use by VFIO:

$ ls -l /sys/bus/pci/devices/0000:06:0d.0/iommu_group/devices
total 0
lrwxrwxrwx. 1 root root 0 Apr 23 16:13 0000:00:1e.0 ->
        ../../../../devices/pci0000:00/0000:00:1e.0
lrwxrwxrwx. 1 root root 0 Apr 23 16:13 0000:06:0d.0 ->
        ../../../../devices/pci0000:00/0000:00:1e.0/0000:06:0d.0
lrwxrwxrwx. 1 root root 0 Apr 23 16:13 0000:06:0d.1 ->
        ../../../../devices/pci0000:00/0000:00:1e.0/0000:06:0d.1

This device is behind a PCIe-to-PCI bridge[4], therefore we also
need to add device 0000:06:0d.1 to the group following the same
procedure as above.  Device 0000:00:1e.0 is a bridge that does
not currently have a host driver, therefore it's not required to
bind this device to the vfio-pci driver (vfio-pci does not currently
support PCI bridges).

The final step is to provide the user with access to the group if
unprivileged operation is desired (note that /dev/vfio/vfio provides
no capabilities on its own and is therefore expected to be set to
mode 0666 by the system):

# chown user:user /dev/vfio/26

The user now has full access to all the devices and the IOMMU for this
group and can access them as follows:

        #include <fcntl.h>
        #include <sys/ioctl.h>
        #include <sys/mman.h>
        #include <linux/vfio.h>

        int container, group, device, i;
        struct vfio_group_status group_status =
                                        { .argsz = sizeof(group_status) };
        struct vfio_iommu_type1_info iommu_info = { .argsz = sizeof(iommu_info) };
        struct vfio_iommu_type1_dma_map dma_map = { .argsz = sizeof(dma_map) };
        struct vfio_device_info device_info = { .argsz = sizeof(device_info) };

        /* Create a new container */
        container = open("/dev/vfio/vfio", O_RDWR);

        if (ioctl(container, VFIO_GET_API_VERSION) != VFIO_API_VERSION)
                /* Unknown API version */

        if (!ioctl(container, VFIO_CHECK_EXTENSION, VFIO_TYPE1_IOMMU))
                /* Doesn't support the IOMMU driver we want */

        /* Open the group */
        group = open("/dev/vfio/26", O_RDWR);

        /* Test the group is viable and available */
        ioctl(group, VFIO_GROUP_GET_STATUS, &group_status);

        if (!(group_status.flags & VFIO_GROUP_FLAGS_VIABLE))
                /* Group is not viable (i.e., not all devices bound for vfio) */

        /* Add the group to the container */
        ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);

        /* Enable the IOMMU model we want */
        ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

        /* Get additional IOMMU info */
        ioctl(container, VFIO_IOMMU_GET_INFO, &iommu_info);

        /* Allocate some space and setup a DMA mapping */
        dma_map.vaddr = (__u64)mmap(NULL, 1024 * 1024, PROT_READ | PROT_WRITE,
                                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        dma_map.size = 1024 * 1024;
        dma_map.iova = 0; /* 1MB starting at 0x0 from device view */
        dma_map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;

        ioctl(container, VFIO_IOMMU_MAP_DMA, &dma_map);

        /* Get a file descriptor for the device */
        device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:06:0d.0");

        /* Test and setup the device */
        ioctl(device, VFIO_DEVICE_GET_INFO, &device_info);

        for (i = 0; i < device_info.num_regions; i++) {
                struct vfio_region_info reg = { .argsz = sizeof(reg) };

                reg.index = i;

                ioctl(device, VFIO_DEVICE_GET_REGION_INFO, &reg);

                /* Setup mappings... read/write offsets, mmaps
                 * For PCI devices, config space is a region */
        }

        for (i = 0; i < device_info.num_irqs; i++) {
                struct vfio_irq_info irq = { .argsz = sizeof(irq) };

                irq.index = i;

                ioctl(device, VFIO_DEVICE_GET_IRQ_INFO, &irq);

                /* Setup IRQs... eventfds, VFIO_DEVICE_SET_IRQS */
        }

        /* Gratuitous device reset and go... */
        ioctl(device, VFIO_DEVICE_RESET);
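
The two loops above leave region and interrupt setup as placeholders.
As a rough sketch of what they might do for a PCI device - assuming
BAR 0 is mmap-capable and the device supports INTx, and additionally
including <string.h> and <sys/eventfd.h> - mapping a region and wiring
an interrupt to an eventfd could look like:

        /* Map BAR 0 into our address space, if the region allows it */
        struct vfio_region_info bar0 = { .argsz = sizeof(bar0) };
        void *bar0_mmap = MAP_FAILED;

        bar0.index = VFIO_PCI_BAR0_REGION_INDEX;
        ioctl(device, VFIO_DEVICE_GET_REGION_INFO, &bar0);

        if (bar0.flags & VFIO_REGION_INFO_FLAG_MMAP)
                bar0_mmap = mmap(NULL, bar0.size, PROT_READ | PROT_WRITE,
                                 MAP_SHARED, device, bar0.offset);

        /* Signal INTx through an eventfd; the variable-sized payload
         * lives in the same buffer, right after struct vfio_irq_set */
        int irqfd = eventfd(0, 0);
        char irq_buf[sizeof(struct vfio_irq_set) + sizeof(int)];
        struct vfio_irq_set *irq_set = (struct vfio_irq_set *)irq_buf;

        irq_set->argsz = sizeof(irq_buf);
        irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
                         VFIO_IRQ_SET_ACTION_TRIGGER;
        irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
        irq_set->start = 0;
        irq_set->count = 1;
        memcpy(irq_set->data, &irqfd, sizeof(irqfd));

        ioctl(device, VFIO_DEVICE_SET_IRQS, irq_set);

        /* An 8-byte read() on irqfd now completes when the device
         * fires its interrupt */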

VFIO User API
-------------------------------------------------------------------------------

Please see include/linux/vfio.h for complete API documentation.

VFIO bus driver API
-------------------------------------------------------------------------------

VFIO bus drivers, such as vfio-pci, make use of only a few interfaces
into VFIO core.  When devices are bound to and unbound from the driver,
the driver should call vfio_add_group_dev() and vfio_del_group_dev()
respectively:

extern int vfio_add_group_dev(struct iommu_group *iommu_group,
                              struct device *dev,
                              const struct vfio_device_ops *ops,
                              void *device_data);

extern void *vfio_del_group_dev(struct device *dev);

vfio_add_group_dev() indicates to the core to begin tracking the
specified iommu_group and to register the specified dev as owned by
a VFIO bus driver.  The driver provides an ops structure for callbacks
similar to a file operations structure:

struct vfio_device_ops {
        int     (*open)(void *device_data);
        void    (*release)(void *device_data);
        ssize_t (*read)(void *device_data, char __user *buf,
                        size_t count, loff_t *ppos);
        ssize_t (*write)(void *device_data, const char __user *buf,
                         size_t size, loff_t *ppos);
        long    (*ioctl)(void *device_data, unsigned int cmd,
                         unsigned long arg);
        int     (*mmap)(void *device_data, struct vm_area_struct *vma);
};

Each function is passed the device_data that was originally registered
in the vfio_add_group_dev() call above.  This allows the bus driver
an easy place to store its opaque, private data.  The open/release
callbacks are issued when a new file descriptor is created for a
device (via VFIO_GROUP_GET_DEVICE_FD).  The ioctl interface provides
a direct pass through for VFIO_DEVICE_* ioctls.  The read/write/mmap
interfaces implement the device region access defined by the device's
own VFIO_DEVICE_GET_REGION_INFO ioctl.
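
To make the flow concrete, here is a much-simplified, hypothetical
skeleton of a bus driver's probe and remove paths (the foo_* names and
the foo_device structure are invented for illustration; a real driver,
such as vfio-pci, also implements the read/write/ioctl/mmap callbacks
shown above):

        #include <linux/device.h>
        #include <linux/iommu.h>
        #include <linux/slab.h>
        #include <linux/vfio.h>

        struct foo_device {                     /* driver private data */
                struct iommu_group *group;
                struct device *dev;
        };

        static int foo_open(void *device_data)
        {
                /* prepare the device for user access */
                return 0;
        }

        static void foo_release(void *device_data)
        {
                /* quiesce the device, e.g. stop DMA and interrupts */
        }

        static const struct vfio_device_ops foo_vfio_ops = {
                .open           = foo_open,
                .release        = foo_release,
        };

        static int foo_vfio_probe(struct device *dev)
        {
                struct iommu_group *group;
                struct foo_device *fdev;
                int ret;

                /* The device must already belong to an IOMMU group */
                group = iommu_group_get(dev);
                if (!group)
                        return -EINVAL;

                fdev = kzalloc(sizeof(*fdev), GFP_KERNEL);
                if (!fdev) {
                        iommu_group_put(group);
                        return -ENOMEM;
                }
                fdev->group = group;
                fdev->dev = dev;

                /* Hand the device over to VFIO core */
                ret = vfio_add_group_dev(group, dev, &foo_vfio_ops, fdev);
                if (ret) {
                        iommu_group_put(group);
                        kfree(fdev);
                }
                return ret;
        }

        static void foo_vfio_remove(struct device *dev)
        {
                /* Returns the device_data registered at probe time */
                struct foo_device *fdev = vfio_del_group_dev(dev);

                if (fdev) {
                        iommu_group_put(fdev->group);
                        kfree(fdev);
                }
        }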

PPC64 sPAPR implementation note
-------------------------------------------------------------------------------

This implementation has some specifics:

1) On older systems (POWER7 with P5IOC2/IODA1) only one IOMMU group per
container is supported, as an IOMMU table is allocated at boot time,
one table per IOMMU group, which is a Partitionable Endpoint (PE)
(a PE is often a PCI domain but not always).
Newer systems (POWER8 with IODA2) have an improved hardware design
which removes this limitation and allows multiple IOMMU groups per
VFIO container.

2) The hardware supports so-called DMA windows - the PCI address range
within which DMA transfer is allowed; any attempt to access address
space out of the window leads to the whole PE being isolated.

3) PPC64 guests are paravirtualized but not fully emulated.  There is
an API to map/unmap pages for DMA, and it normally maps 1..32 pages per
call and currently there is no way to reduce the number of calls.  In
order to make things faster, the map/unmap handling has been
implemented in real mode, which provides excellent performance but has
limitations such as the inability to do locked page accounting in real
time.

4) According to the sPAPR specification, a Partitionable Endpoint (PE)
is an I/O subtree that can be treated as a unit for the purposes of
partitioning and error recovery.  A PE may be a single or multi-function
IOA (IO Adapter), a function of a multi-function IOA, or multiple IOAs
(possibly including switch and bridge structures above the multiple
IOAs).  PPC64 guests detect PCI errors and recover from them via EEH
RTAS services, which work on the basis of additional ioctl commands.

So 4 additional ioctls have been added:

        VFIO_IOMMU_SPAPR_TCE_GET_INFO - returns the size and the start
                of the DMA window on the PCI bus.

        VFIO_IOMMU_ENABLE - enables the container.  The locked pages
                accounting is done at this point.  This lets the user
                first learn what the DMA window is and adjust the
                rlimit before doing any real work.

        VFIO_IOMMU_DISABLE - disables the container.

        VFIO_EEH_PE_OP - provides an API for EEH setup, error detection
                and recovery.

The code flow from the example above should be slightly changed:

        struct vfio_eeh_pe_op pe_op = { .argsz = sizeof(pe_op), .flags = 0 };

        .....

        /* Add the group to the container */
        ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);

        /* Enable the IOMMU model we want */
        ioctl(container, VFIO_SET_IOMMU, VFIO_SPAPR_TCE_IOMMU);

        /* Get additional sPAPR IOMMU info */
        struct vfio_iommu_spapr_tce_info spapr_iommu_info = {
                        .argsz = sizeof(spapr_iommu_info) };
        ioctl(container, VFIO_IOMMU_SPAPR_TCE_GET_INFO, &spapr_iommu_info);

        if (ioctl(container, VFIO_IOMMU_ENABLE))
                /* Cannot enable container, may be low rlimit */

        /* Allocate some space and setup a DMA mapping */
        dma_map.vaddr = (__u64)mmap(NULL, 1024 * 1024, PROT_READ | PROT_WRITE,
                                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        dma_map.size = 1024 * 1024;
        dma_map.iova = 0; /* 1MB starting at 0x0 from device view */
        dma_map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;

        /* Check here that .iova/.size are within the DMA window from
         * spapr_iommu_info */
        ioctl(container, VFIO_IOMMU_MAP_DMA, &dma_map);

        /* Get a file descriptor for the device */
        device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:06:0d.0");

        ....

        /* Gratuitous device reset and go... */
        ioctl(device, VFIO_DEVICE_RESET);

        /* Make sure EEH is supported */
        ioctl(container, VFIO_CHECK_EXTENSION, VFIO_EEH);

        /* Enable the EEH functionality on the device */
        pe_op.op = VFIO_EEH_PE_ENABLE;
        ioctl(container, VFIO_EEH_PE_OP, &pe_op);

        /* It is suggested to create an additional data struct to
         * represent the PE, and put child devices belonging to the
         * same IOMMU group into the PE instance for later reference.
         */

        /* Check the PE's state and make sure it's in functional state */
        pe_op.op = VFIO_EEH_PE_GET_STATE;
        ioctl(container, VFIO_EEH_PE_OP, &pe_op);

        /* Save device state using pci_save_state().
         * EEH should be enabled on the specified device.
         */

        ....

        /* When 0xFF's are returned from reading the PCI config space
         * or IO BARs of the PCI device, check the PE's state to see
         * if it has been frozen.
         */
        ioctl(container, VFIO_EEH_PE_OP, &pe_op);

        /* Wait for pending PCI transactions to complete and don't
         * produce any more PCI traffic from/to the affected PE until
         * recovery is finished.
         */

        /* Enable IO for the affected PE and collect logs.  Usually,
         * the standard part of PCI config space and AER registers are
         * dumped as logs for further analysis.
         */
        pe_op.op = VFIO_EEH_PE_UNFREEZE_IO;
        ioctl(container, VFIO_EEH_PE_OP, &pe_op);

        /*
         * Issue PE reset: hot or fundamental reset.  Usually, hot
         * reset is enough.  However, the firmware of some PCI adapters
         * would require fundamental reset.
         */
        pe_op.op = VFIO_EEH_PE_RESET_HOT;
        ioctl(container, VFIO_EEH_PE_OP, &pe_op);
        pe_op.op = VFIO_EEH_PE_RESET_DEACTIVATE;
        ioctl(container, VFIO_EEH_PE_OP, &pe_op);

        /* Configure the PCI bridges for the affected PE */
        pe_op.op = VFIO_EEH_PE_CONFIGURE;
        ioctl(container, VFIO_EEH_PE_OP, &pe_op);

        /* Restore the state we saved at initialization time.
         * pci_restore_state() is good enough as an example.
         */

        /* Hopefully, the error is recovered successfully.  Now, you
         * can resume PCI traffic to/from the affected PE.
         */

        ....

5) There is a v2 of the sPAPR TCE IOMMU.  It deprecates
VFIO_IOMMU_ENABLE/VFIO_IOMMU_DISABLE and implements 2 new ioctls:
VFIO_IOMMU_SPAPR_REGISTER_MEMORY and VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY
(which are unsupported in the v1 IOMMU).

PPC64 paravirtualized guests generate a lot of map/unmap requests,
and the handling of those includes pinning/unpinning pages and updating
the mm::locked_vm counter to make sure we do not exceed the rlimit.
The v2 IOMMU splits accounting and pinning into separate operations:

- The VFIO_IOMMU_SPAPR_REGISTER_MEMORY/VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY
ioctls receive a userspace address and the size of the block to be
pinned.  Bisecting is not supported and VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY
is expected to be called with the exact address and size used for
registering the memory block.  Userspace is not expected to call these
often.  The ranges are stored in a linked list in the VFIO container.

- The VFIO_IOMMU_MAP_DMA/VFIO_IOMMU_UNMAP_DMA ioctls only update the
actual IOMMU table and do no pinning; instead they check that the
userspace address is from a pre-registered range.

This separation helps in optimizing DMA for guests.
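
As a sketch of the v2 flow, reusing dma_map from the usage example
above and assuming the container was set up with
VFIO_SPAPR_TCE_v2_IOMMU via VFIO_SET_IOMMU:

        struct vfio_iommu_spapr_register_memory mem = {
                        .argsz = sizeof(mem) };
        struct vfio_iommu_type1_dma_unmap dma_unmap = {
                        .argsz = sizeof(dma_unmap) };

        /* Pin and account the memory block once, up front */
        mem.vaddr = dma_map.vaddr;
        mem.size = dma_map.size;
        ioctl(container, VFIO_IOMMU_SPAPR_REGISTER_MEMORY, &mem);

        /* Map/unmap within the registered range as often as needed;
         * these calls no longer pin pages or touch locked_vm */
        ioctl(container, VFIO_IOMMU_MAP_DMA, &dma_map);

        dma_unmap.iova = dma_map.iova;
        dma_unmap.size = dma_map.size;
        ioctl(container, VFIO_IOMMU_UNMAP_DMA, &dma_unmap);

        /* Unregistering requires the exact vaddr/size that was used
         * for registering */
        ioctl(container, VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY, &mem);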

6) The sPAPR specification allows guests to have additional DMA windows
on a PCI bus with a variable page size.  Two ioctls have been added to
support this: VFIO_IOMMU_SPAPR_TCE_CREATE and VFIO_IOMMU_SPAPR_TCE_REMOVE.
The platform has to support the functionality or an error will be
returned to userspace.  The existing hardware supports up to 2 DMA
windows: one is 2GB long, uses 4K pages and is called the "default
32bit window"; the other can be as big as the entire RAM and use a
different page size.  The second window is optional - guests create it
at run time if the guest driver supports 64bit DMA.

VFIO_IOMMU_SPAPR_TCE_CREATE receives a page shift, a DMA window size
and a number of TCE table levels (in case a TCE table is going to be
big and the kernel may not be able to allocate enough physically
contiguous memory).  It creates a new window in an available slot and
returns the bus address at which the new window starts.  Due to
hardware limitations, userspace cannot choose the location of DMA
windows.

VFIO_IOMMU_SPAPR_TCE_REMOVE receives the bus start address of the
window and removes it.
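
As an illustrative sketch (the page shift, window size and level count
below are made-up values; valid ones depend on the platform):

        struct vfio_iommu_spapr_tce_create create = {
                .argsz = sizeof(create),
                .page_shift = 16,               /* 64K IOMMU pages */
                .window_size = 1ULL << 40,      /* 1TB of bus addresses */
                .levels = 2,                    /* allow a 2-level TCE table */
        };
        struct vfio_iommu_spapr_tce_remove remove = {
                .argsz = sizeof(remove),
        };

        if (!ioctl(container, VFIO_IOMMU_SPAPR_TCE_CREATE, &create)) {
                /* DMA may now be mapped at IOVAs starting at the bus
                 * address the kernel picked, create.start_addr */
        }

        /* Remove the window by the bus address it was created at */
        remove.start_addr = create.start_addr;
        ioctl(container, VFIO_IOMMU_SPAPR_TCE_REMOVE, &remove);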

-------------------------------------------------------------------------------

[1] VFIO was originally an acronym for "Virtual Function I/O" in its
initial implementation by Tom Lyon while at Cisco.  We've since
outgrown the acronym, but it's catchy.

[2] "safe" also depends upon a device being "well behaved".  It's
possible for multi-function devices to have backdoors between
functions and even for single function devices to have alternative
access to things like PCI config space through MMIO registers.  To
guard against the former we can include additional precautions in the
IOMMU driver to group multi-function PCI devices together
(iommu=group_mf).  The latter we can't prevent, but the IOMMU should
still provide isolation.  For PCI, SR-IOV Virtual Functions are the
best indicator of "well behaved", as these are designed for
virtualization usage models.

[3] As always there are trade-offs to virtual machine device
assignment that are beyond the scope of VFIO.  It's expected that
future IOMMU technologies will reduce some, but maybe not all, of
these trade-offs.

[4] In this case the device is below a PCI bridge, so transactions
from either function of the device are indistinguishable to the IOMMU:

-[0000:00]-+-1e.0-[06]--+-0d.0
                        \-0d.1

00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 90)