The memory API
==============

The memory API models the memory and I/O buses and controllers of a QEMU
machine.  It attempts to allow modelling of:

 - ordinary RAM
 - memory-mapped I/O (MMIO)
 - memory controllers that can dynamically reroute physical memory regions
   to different destinations

The memory model provides support for

 - tracking RAM changes by the guest
 - setting up coalesced memory for kvm
 - setting up ioeventfd regions for kvm

Memory is modelled as an acyclic graph of MemoryRegion objects.  Sinks
(leaves) are RAM and MMIO regions, while other nodes represent
buses, memory controllers, and memory regions that have been rerouted.

In addition to MemoryRegion objects, the memory API provides AddressSpace
objects for every root and possibly for intermediate MemoryRegions too.
These represent memory as seen from the CPU or a device's viewpoint.

Types of regions
----------------

There are multiple types of memory regions (all represented by a single C type
MemoryRegion):

- RAM: a RAM region is simply a range of host memory that can be made available
  to the guest.
  You typically initialize these with memory_region_init_ram().  Some special
  purposes require the variants memory_region_init_resizeable_ram(),
  memory_region_init_ram_from_file(), or memory_region_init_ram_ptr().

- MMIO: a range of guest memory that is implemented by host callbacks;
  each read or write causes a callback to be called on the host.
  You initialize these with memory_region_init_io(), passing it a
  MemoryRegionOps structure describing the callbacks.

- ROM: a ROM memory region works like RAM for reads (directly accessing
  a region of host memory), but like MMIO for writes (invoking a callback).
  You initialize these with memory_region_init_rom_device().

- IOMMU region: an IOMMU region translates addresses of accesses made to it
  and forwards them to some other target memory region.
  As the name suggests,
  these are only needed for modelling an IOMMU, not for simple devices.
  You initialize these with memory_region_init_iommu().

- container: a container simply includes other memory regions, each at
  a different offset.  Containers are useful for grouping several regions
  into one unit.  For example, a PCI BAR may be composed of a RAM region
  and an MMIO region.

  A container's subregions are usually non-overlapping.  In some cases it is
  useful to have overlapping regions; for example a memory controller that
  can overlay a subregion of RAM with MMIO or ROM, or a PCI controller
  that does not prevent cards from claiming overlapping BARs.

  You initialize a pure container with memory_region_init().

- alias: a subsection of another region.  Aliases allow a region to be
  split apart into discontiguous regions.  Examples of uses are memory banks
  used when the guest address space is smaller than the amount of RAM
  addressed, or a memory controller that splits main memory to expose a "PCI
  hole".  Aliases may point to any type of region, including other aliases,
  but an alias may not point back to itself, directly or indirectly.
  You initialize these with memory_region_init_alias().

- reservation region: a reservation region is primarily for debugging.
  It claims I/O space that is not supposed to be handled by QEMU itself.
  The typical use is to track parts of the address space which will be
  handled by the host kernel when KVM is enabled.
  You initialize these with memory_region_init_reservation(), or by
  passing a NULL callback parameter to memory_region_init_io().

It is valid to add subregions to a region which is not a pure container
(that is, to an MMIO, RAM or ROM region).
This means that the region
will act like a container, except that any addresses within the container's
region which are not claimed by any subregion are handled by the
container itself (ie by its MMIO callbacks or RAM backing).  However
it is generally possible to achieve the same effect with a pure container
one of whose subregions is a low priority "background" region covering
the whole address range; this is often clearer and is preferred.
Subregions cannot be added to an alias region.

Region names
------------

Regions are assigned names by the constructor.  For most regions these are
only used for debugging purposes, but RAM regions also use the name to identify
live migration sections.  This means that RAM region names need to have ABI
stability.

Region lifecycle
----------------

A region is created by one of the memory_region_init*() functions and
attached to an object, which acts as its owner or parent.  QEMU ensures
that the owner object remains alive as long as the region is visible to
the guest, or as long as the region is in use by a virtual CPU or another
device.  For example, the owner object will not die between an
address_space_map operation and the corresponding address_space_unmap.

After creation, a region can be added to an address space or a
container with memory_region_add_subregion(), and removed using
memory_region_del_subregion().

Various region attributes (read-only, dirty logging, coalesced mmio,
ioeventfd) can be changed during the region lifecycle.  They take effect
as soon as the region is made visible.  This can be immediately, later,
or never.

Destruction of a memory region happens automatically when the owner
object dies.

If however the memory region is part of a dynamically allocated data
structure, you should call object_unparent() to destroy the memory region
before the data structure is freed.
For an example see VFIOMSIXInfo
and VFIOQuirk in hw/vfio/pci.c.

You must not destroy a memory region as long as it may be in use by a
device or CPU.  In order to do this, as a general rule do not create or
destroy memory regions dynamically during a device's lifetime, and only
call object_unparent() in the memory region owner's instance_finalize
callback.  The dynamically allocated data structure that contains the
memory region then should obviously be freed in the instance_finalize
callback as well.

If you break this rule, the following situation can happen:

- the memory region's owner had a reference taken via memory_region_ref
  (for example by address_space_map)

- the region is unparented, and has no owner anymore

- when address_space_unmap is called, the reference to the memory region's
  owner is leaked.


There is an exception to the above rule: it is okay to call
object_unparent at any time for an alias or a container region.  It is
therefore also okay to create or destroy alias and container regions
dynamically during a device's lifetime.

This exceptional usage is valid because aliases and containers only help
QEMU build the guest's memory map; they are never accessed directly.
memory_region_ref and memory_region_unref are never called on aliases
or containers, and the above situation then cannot happen.  Exploiting
this exception is rarely necessary, and therefore it is discouraged,
but nevertheless it is used in a few places.

For regions that "have no owner" (NULL is passed at creation time), the
machine object is actually used as the owner.  Since instance_finalize is
never called for the machine object, you must never call object_unparent
on regions that have no owner, unless they are aliases or containers.


Overlapping regions and priority
--------------------------------
Usually, regions may not overlap each other; a memory address decodes into
exactly one target.  In some cases it is useful to allow regions to overlap,
and sometimes to control which of the overlapping regions is visible to the
guest.  This is done with memory_region_add_subregion_overlap(), which
allows the region to overlap any other region in the same container, and
specifies a priority that allows the core to decide which of two regions at
the same address are visible (highest wins).
Priority values are signed, and the default value is zero.  This means that
you can use memory_region_add_subregion_overlap() both to specify a region
that must sit 'above' any others (with a positive priority) and also a
background region that sits 'below' others (with a negative priority).

If the higher priority region in an overlap is a container or alias, then
the lower priority region will appear in any "holes" that the higher priority
region has left by not mapping subregions to that area of its address range.
(This applies recursively -- if the subregions are themselves containers or
aliases that leave holes then the lower priority region will appear in these
holes too.)

For example, suppose we have a container A of size 0x8000 with two subregions
B and C.  B is a container mapped at 0x2000, size 0x4000, priority 2; C is
an MMIO region mapped at 0x0, size 0x6000, priority 1.  B currently has two
of its own subregions: D of size 0x1000 at offset 0 and E of size 0x1000 at
offset 0x2000.
As a diagram:

        0      1000   2000   3000   4000   5000   6000   7000   8000
        |------|------|------|------|------|------|------|------|
  A:    [                                                       ]
  C:    [CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC]
  B:                  [                           ]
  D:                  [DDDDD]
  E:                                [EEEEE]

The regions that will be seen within this address range then are:
        [CCCCCCCCCCCC][DDDDD][CCCCC][EEEEE][CCCCC]

Since B has higher priority than C, its subregions appear in the flat map
even where they overlap with C.  In ranges where B has not mapped anything
C's region appears.

If B had provided its own MMIO operations (ie it was not a pure container)
then these would be used for any addresses in its range not handled by
D or E, and the result would be:
        [CCCCCCCCCCCC][DDDDD][BBBBB][EEEEE][BBBBB]

Priority values are local to a container, because the priorities of two
regions are only compared when they are both children of the same container.
This means that the device in charge of the container (typically modelling
a bus or a memory controller) can use them to manage the interaction of
its child regions without any side effects on other parts of the system.
In the example above, the priorities of D and E are unimportant because
they do not overlap each other.  It is the relative priority of B and C
that causes D and E to appear on top of C: D and E's priorities are never
compared against the priority of C.

Visibility
----------
The memory core uses the following rules to select a memory region when the
guest accesses an address:

- all direct subregions of the root region are matched against the address, in
  descending priority order
  - if the address lies outside the region offset/size, the subregion is
    discarded
  - if the subregion is a leaf (RAM or MMIO), the search terminates, returning
    this leaf region
  - if the subregion is a container, the same algorithm is used within the
    subregion (after the address is adjusted by the subregion offset)
  - if the subregion is an alias, the search is continued at the alias target
    (after the address is adjusted by the subregion offset and alias offset)
  - if a recursive search within a container or alias subregion does not
    find a match (because of a "hole" in the container's coverage of its
    address range), then if this is a container with its own MMIO or RAM
    backing the search terminates, returning the container itself.
    Otherwise
    we continue with the next subregion in priority order
- if none of the subregions match the address then the search terminates
  with no match found

Example memory map
------------------

system_memory: container@0-2^48-1
 |
 +---- lomem: alias@0-0xdfffffff ---> #ram (0-0xdfffffff)
 |
 +---- himem: alias@0x100000000-0x11fffffff ---> #ram (0xe0000000-0xffffffff)
 |
 +---- vga-window: alias@0xa0000-0xbffff ---> #pci (0xa0000-0xbffff)
 |      (prio 1)
 |
 +---- pci-hole: alias@0xe0000000-0xffffffff ---> #pci (0xe0000000-0xffffffff)

pci (0-2^32-1)
 |
 +--- vga-area: container@0xa0000-0xbffff
 |     |
 |     +--- alias@0x00000-0x7fff ---> #vram (0x010000-0x017fff)
 |     |
 |     +--- alias@0x08000-0xffff ---> #vram (0x020000-0x027fff)
 |
 +---- vram: ram@0xe1000000-0xe1ffffff
 |
 +---- vga-mmio: mmio@0xe2000000-0xe200ffff

ram: ram@0x00000000-0xffffffff

This is a (simplified) PC memory map.  The 4GB RAM block is mapped into the
system address space via two aliases: "lomem" is a 1:1 mapping of the first
3.5GB; "himem" maps the last 0.5GB at address 4GB.  This leaves 0.5GB for the
so-called PCI hole, that allows a 32-bit PCI bus to exist in a system with
4GB of memory.

The memory controller diverts addresses in the range 640K-768K to the PCI
address space.  This is modelled using the "vga-window" alias, mapped at a
higher priority so it obscures the RAM at the same addresses.  The vga window
can be removed by programming the memory controller; this is modelled by
removing the alias and exposing the RAM underneath.

The pci address space is not a direct child of the system address space, since
we only want parts of it to be visible (we accomplish this using aliases).
It has two subregions: vga-area models the legacy vga window and is occupied
by two 32K memory banks pointing at two sections of the framebuffer.
In addition the vram is mapped as a BAR at address e1000000, and an additional
BAR containing MMIO registers is mapped after it.

Note that if the guest maps a BAR outside the PCI hole, it will not be
visible, as the pci-hole alias clips it to a 0.5GB range.

MMIO Operations
---------------

MMIO regions are provided with ->read() and ->write() callbacks; in addition
various constraints can be supplied to control how these callbacks are called:

 - .valid.min_access_size, .valid.max_access_size define the access sizes
   (in bytes) which the device accepts; accesses outside this range will
   have device and bus specific behaviour (ignored, or machine check)
 - .valid.unaligned specifies that the *device being modelled* supports
   unaligned accesses; if false, unaligned accesses will invoke the
   appropriate bus or CPU specific behaviour.
 - .impl.min_access_size, .impl.max_access_size define the access sizes
   (in bytes) supported by the *implementation*; other access sizes will be
   emulated using the ones available.  For example a 4-byte write will be
   emulated using four 1-byte writes, if .impl.max_access_size = 1.
 - .impl.unaligned specifies that the *implementation* supports unaligned
   accesses; if false, unaligned accesses will be emulated by two aligned
   accesses.
 - .old_mmio eases the porting of code that was formerly using
   cpu_register_io_memory().  It should not be used in new code.