=========================
Dynamic DMA mapping Guide
=========================

:Author: David S. Miller <davem@redhat.com>
:Author: Richard Henderson <rth@cygnus.com>
:Author: Jakub Jelinek <jakub@redhat.com>

This is a guide to device driver writers on how to use the DMA API
with example pseudo-code.  For a concise description of the API, see
DMA-API.txt.

CPU and DMA addresses
=====================

There are several kinds of addresses involved in the DMA API, and it's
important to understand the differences.

The kernel normally uses virtual addresses.  Any address returned by
kmalloc(), vmalloc(), and similar interfaces is a virtual address and can
be stored in a ``void *``.

The virtual memory system (TLB, page tables, etc.) translates virtual
addresses to CPU physical addresses, which are stored as "phys_addr_t" or
"resource_size_t".  The kernel manages device resources like registers as
physical addresses.  These are the addresses in /proc/iomem.  The physical
address is not directly useful to a driver; it must use ioremap() to map
the space and produce a virtual address.

I/O devices use a third kind of address: a "bus address".  If a device has
registers at an MMIO address, or if it performs DMA to read or write system
memory, the addresses used by the device are bus addresses.  In some
systems, bus addresses are identical to CPU physical addresses, but in
general they are not.  IOMMUs and host bridges can produce arbitrary
mappings between physical and bus addresses.

From a device's point of view, DMA uses the bus address space, but it may
be restricted to a subset of that space.  For example, even if a system
supports 64-bit addresses for main memory and PCI BARs, it may use an IOMMU
so devices only need to use 32-bit DMA addresses.

Here's a picture and some examples::

             CPU                  CPU                  Bus
           Virtual              Physical             Address
           Address              Address               Space
            Space                Space

          +-------+             +------+             +------+
          |       |             |MMIO  |   Offset    |      |
          |       |  Virtual    |Space |   applied   |      |
        C +-------+ --------> B +------+ ----------> +------+ A
          |       |  mapping    |      |   by host   |      |
+-----+   |       |             |      |   bridge    |      |   +--------+
|     |   |       |             +------+             |      |   |        |
| CPU |   |       |             | RAM  |             |      |   | Device |
|     |   |       |             |      |             |      |   |        |
+-----+   +-------+             +------+             +------+   +--------+
          |       |  Virtual    |Buffer|   Mapping   |      |
        X +-------+ --------> Y +------+ <---------- +------+ Z
          |       |  mapping    | RAM  |   by IOMMU
          |       |             |      |
          |       |             |      |
          +-------+             +------+

During the enumeration process, the kernel learns about I/O devices and
their MMIO space and the host bridges that connect them to the system.  For
example, if a PCI device has a BAR, the kernel reads the bus address (A)
from the BAR and converts it to a CPU physical address (B).  The address B
is stored in a struct resource and usually exposed via /proc/iomem.  When a
driver claims a device, it typically uses ioremap() to map physical address
B at a virtual address (C).  It can then use, e.g., ioread32(C), to access
the device registers at bus address A.
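
As a hedged sketch of that last step (``res`` is assumed to be the struct
resource the driver claimed, and MY_STATUS is a hypothetical register
offset invented for this illustration)::

        void __iomem *regs;
        u32 status;

        regs = ioremap(res->start, resource_size(res));  /* map B to C */
        if (!regs)
                goto err;

        /* This read reaches the device registers at bus address A. */
        status = ioread32(regs + MY_STATUS);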

If the device supports DMA, the driver sets up a buffer using kmalloc() or
a similar interface, which returns a virtual address (X).  The virtual
memory system maps X to a physical address (Y) in system RAM.  The driver
can use virtual address X to access the buffer, but the device itself
cannot because DMA doesn't go through the CPU virtual memory system.

In some simple systems, the device can do DMA directly to physical address
Y.  But in many others, there is IOMMU hardware that translates DMA
addresses to physical addresses, e.g., it translates Z to Y.  This is part
of the reason for the DMA API: the driver can give a virtual address X to
an interface like dma_map_single(), which sets up any required IOMMU
mapping and returns the DMA address Z.  The driver then tells the device to
do DMA to Z, and the IOMMU maps it to the buffer at address Y in system
RAM.

For Linux to use dynamic DMA mapping, it needs some help from the drivers:
they must take into account that DMA addresses should be mapped only for
the time they are actually used and unmapped after the DMA transfer.

Of course, the following API will work even on platforms where no such
hardware exists.

Note that the DMA API works with any bus independent of the underlying
microprocessor architecture. You should use the DMA API rather than the
bus-specific DMA API, i.e., use the dma_map_*() interfaces rather than the
pci_map_*() interfaces.

First of all, you should make sure::

        #include <linux/dma-mapping.h>

is in your driver, which provides the definition of dma_addr_t.  This type
can hold any valid DMA address for the platform and should be used
everywhere you hold a DMA address returned from the DMA mapping functions.
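
For example, a driver commonly keeps the two views of a buffer side by
side; a minimal sketch (the structure and field names here are invented
for illustration)::

        struct my_buffer {
                void            *cpu_addr;      /* what the CPU dereferences */
                dma_addr_t      dma_addr;       /* what the device is given */
                size_t          len;
        };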

What memory is DMA'able?
========================

The first piece of information you must know is what kernel memory can
be used with the DMA mapping facilities.  There has been an unwritten
set of rules regarding this, and this text is an attempt to finally
write them down.

If you acquired your memory via the page allocator
(i.e. __get_free_page*()) or the generic memory allocators
(i.e. kmalloc() or kmem_cache_alloc()) then you may DMA to/from
that memory using the addresses returned from those routines.

This means specifically that you may _not_ use the memory/addresses
returned from vmalloc() for DMA.  It is possible to DMA to the
_underlying_ memory mapped into a vmalloc() area, but this requires
walking page tables to get the physical addresses, and then
translating each of those pages back to a kernel address using
something like __va().  [ EDIT: Update this when we integrate
Gerd Knorr's generic code which does this. ]

This rule also means that you may use neither kernel image addresses
(items in data/text/bss segments), nor module image addresses, nor
stack addresses for DMA.  These could all be mapped somewhere entirely
different than the rest of physical memory.  Even if those classes of
memory could physically work with DMA, you'd need to ensure the I/O
buffers were cacheline-aligned.  Without that, you'd see cacheline
sharing problems (data corruption) on CPUs with DMA-incoherent caches.
(The CPU could write to one word, DMA would write to a different one
in the same cache line, and one of them could be overwritten.)

Also, this means that you cannot take the return of a kmap()
call and DMA to/from that.  This is similar to vmalloc().

What about block I/O and networking buffers?  The block I/O and
networking subsystems make sure that the buffers they use are valid
for you to DMA from/to.

DMA addressing limitations
==========================

Does your device have any DMA addressing limitations?  For example, is
your device only capable of driving the low-order 24 bits of address?
If so, you need to inform the kernel of this fact.

By default, the kernel assumes that your device can address the full
32 bits.  For a 64-bit capable device, this needs to be increased.
And for a device with limitations, as discussed in the previous
paragraph, it needs to be decreased.

Special note about PCI: the PCI-X specification requires PCI-X devices
to support 64-bit addressing (DAC) for all transactions.  And at least
one platform (SGI SN2) requires 64-bit consistent allocations to
operate correctly when the IO bus is in PCI-X mode.

For correct operation, you must interrogate the kernel in your device
probe routine to see if the DMA controller on the machine can properly
support the DMA addressing limitation your device has.  It is good
style to do this even if your device holds the default setting,
because this shows that you did think about these issues with respect
to your device.

The query is performed via a call to dma_set_mask_and_coherent()::

        int dma_set_mask_and_coherent(struct device *dev, u64 mask);

which will query the mask for both streaming and coherent APIs together.
If you have some special requirements, then the following two separate
queries can be used instead:

        The query for streaming mappings is performed via a call to
        dma_set_mask()::

                int dma_set_mask(struct device *dev, u64 mask);

        The query for consistent allocations is performed via a call
        to dma_set_coherent_mask()::

                int dma_set_coherent_mask(struct device *dev, u64 mask);

Here, dev is a pointer to the device struct of your device, and mask
is a bit mask describing which bits of an address your device
supports.  It returns zero if your card can perform DMA properly on
the machine given the address mask you provided.  In general, the
device struct of your device is embedded in the bus-specific device
struct of your device.  For example, &pdev->dev is a pointer to the
device struct of a PCI device (pdev is a pointer to the PCI device
struct of your device).

If it returns non-zero, your device cannot perform DMA properly on
this platform, and attempting to do so will result in undefined
behavior.  You must either use a different mask, or not use DMA.

This means that in the failure case, you have three options:

1) Use another DMA mask, if possible (see below).
2) Use some non-DMA mode for data transfer, if possible.
3) Ignore this device and do not initialize it.

It is recommended that your driver print a kernel KERN_WARNING message
when you end up performing either #2 or #3.  In this manner, if a user
of your driver reports that performance is bad or that the device is not
even detected, you can ask them for the kernel messages to find out
exactly why.

The standard 32-bit addressing device would do something like this::

        if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
                dev_warn(dev, "mydev: No suitable DMA available\n");
                goto ignore_this_device;
        }

Another common scenario is a 64-bit capable device.  The approach here
is to try for 64-bit addressing, but back down to a 32-bit mask that
should not fail.  The kernel may fail the 64-bit mask not because the
platform is incapable of 64-bit addressing, but simply because 32-bit
addressing is done more efficiently than 64-bit addressing on that
platform.  For example, Sparc64 PCI SAC addressing is more efficient
than DAC addressing.

Here is how you would handle a 64-bit capable device which can drive
all 64 bits when accessing streaming DMA::

        int using_dac;

        if (!dma_set_mask(dev, DMA_BIT_MASK(64))) {
                using_dac = 1;
        } else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) {
                using_dac = 0;
        } else {
                dev_warn(dev, "mydev: No suitable DMA available\n");
                goto ignore_this_device;
        }

If a card is capable of using 64-bit consistent allocations as well,
the case would look like this::

        int using_dac, consistent_using_dac;

        if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
                using_dac = 1;
                consistent_using_dac = 1;
        } else if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
                using_dac = 0;
                consistent_using_dac = 0;
        } else {
                dev_warn(dev, "mydev: No suitable DMA available\n");
                goto ignore_this_device;
        }

Setting the coherent mask to the same or a smaller mask than the
streaming mask will always succeed.  However, for the rare case that a
device driver only uses consistent allocations, one would have to check
the return value from dma_set_coherent_mask().
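
For such a consistent-allocations-only driver, a hedged sketch of that
check might look like::

        /* Sketch: driver that only uses coherent allocations. */
        if (dma_set_coherent_mask(dev, DMA_BIT_MASK(64))) {
                dev_warn(dev, "mydev: No suitable coherent DMA available\n");
                goto ignore_this_device;
        }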

Finally, if your device can only drive the low 24 bits of
address you might do something like::

        if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
                dev_warn(dev, "mydev: 24-bit DMA addressing not available\n");
                goto ignore_this_device;
        }

When dma_set_mask() or dma_set_mask_and_coherent() succeeds and
returns zero, the kernel saves away this mask you have provided.  The
kernel will use this information later when you make DMA mappings.

There is a case which we are aware of at this time, which is worth
mentioning in this documentation.  If your device supports multiple
functions (for example a sound card provides playback and record
functions) and the various different functions have _different_
DMA addressing limitations, you may wish to probe each mask and
only provide the functionality which the machine can handle.  It
is important that the last call to dma_set_mask() be for the
most specific mask.

Here is pseudo-code showing how this might be done::

        #define PLAYBACK_ADDRESS_BITS   DMA_BIT_MASK(32)
        #define RECORD_ADDRESS_BITS     DMA_BIT_MASK(24)

        struct my_sound_card *card;
        struct device *dev;

        ...
        if (!dma_set_mask(dev, PLAYBACK_ADDRESS_BITS)) {
                card->playback_enabled = 1;
        } else {
                card->playback_enabled = 0;
                dev_warn(dev, "%s: Playback disabled due to DMA limitations\n",
                       card->name);
        }
        if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) {
                card->record_enabled = 1;
        } else {
                card->record_enabled = 0;
                dev_warn(dev, "%s: Record disabled due to DMA limitations\n",
                       card->name);
        }

A sound card was used as an example here because this genre of PCI
devices seems to be littered with ISA chips given a PCI front end,
and thus retaining the 16MB DMA addressing limitations of ISA.

Types of DMA mappings
=====================

There are two types of DMA mappings:

- Consistent DMA mappings which are usually mapped at driver
  initialization, unmapped at the end and for which the hardware should
  guarantee that the device and the CPU can access the data
  in parallel and will see updates made by each other without any
  explicit software flushing.

  Think of "consistent" as "synchronous" or "coherent".

  The current default is to return consistent memory in the low 32
  bits of the DMA space.  However, for future compatibility you should
  set the consistent mask even if this default is fine for your
  driver.

  Good examples of what to use consistent mappings for are:

        - Network card DMA ring descriptors.
        - SCSI adapter mailbox command data structures.
        - Device firmware microcode executed out of
          main memory.

  The invariant these examples all require is that any CPU store
  to memory is immediately visible to the device, and vice
  versa.  Consistent mappings guarantee this.

  .. important::

             Consistent DMA memory does not preclude the usage of
             proper memory barriers.  The CPU may reorder stores to
             consistent memory just as it may reorder stores to normal
             memory.  Example: if it is important for the device to
             see the first word of a descriptor updated before the
             second, you must do something like::

                desc->word0 = address;
                wmb();
                desc->word1 = DESC_VALID;

             in order to get correct behavior on all platforms.

             Also, on some platforms your driver may need to flush CPU write
             buffers in much the same way as it needs to flush write buffers
             found in PCI bridges (such as by reading a register's value
             after writing it).

- Streaming DMA mappings which are usually mapped for one DMA
  transfer, unmapped right after it (unless you use dma_sync_* below)
  and for which hardware can optimize for sequential accesses.

  Think of "streaming" as "asynchronous" or "outside the coherency
  domain".

  Good examples of what to use streaming mappings for are:

        - Networking buffers transmitted/received by a device.
        - Filesystem buffers written/read by a SCSI device.

  The interfaces for using this type of mapping were designed in
  such a way that an implementation can make whatever performance
  optimizations the hardware allows.  To this end, when using
  such mappings you must be explicit about what you want to happen.

Neither type of DMA mapping has alignment restrictions that come from
the underlying bus, although some devices may have such restrictions.
Also, systems with caches that aren't DMA-coherent will work better
when the underlying buffers don't share cache lines with other data.


Using Consistent DMA mappings
=============================

To allocate and map large (PAGE_SIZE or so) consistent DMA regions,
you should do::

        dma_addr_t dma_handle;

        cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, gfp);

where dev is a ``struct device *``. This may be called in interrupt
context with the GFP_ATOMIC flag.

Size is the length of the region you want to allocate, in bytes.

This routine will allocate RAM for that region, so it acts similarly to
__get_free_pages() (but takes size instead of a page order).  If your
driver needs regions sized smaller than a page, you may prefer using
the dma_pool interface, described below.

The consistent DMA mapping interfaces, for non-NULL dev, will by
default return a DMA address which is 32-bit addressable.  Even if the
device indicates (via DMA mask) that it may address the upper 32-bits,
consistent allocation will only return > 32-bit addresses for DMA if
the consistent DMA mask has been explicitly changed via
dma_set_coherent_mask().  This is true of the dma_pool interface as
well.

dma_alloc_coherent() returns two values: the virtual address which you
can use to access it from the CPU and dma_handle which you pass to the
card.

The CPU virtual address and the DMA address are both
guaranteed to be aligned to the smallest PAGE_SIZE order which
is greater than or equal to the requested size.  This invariant
exists (for example) to guarantee that if you allocate a chunk
which is smaller than or equal to 64 kilobytes, the extent of the
buffer you receive will not cross a 64K boundary.
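
As a hedged illustration, a driver might allocate its descriptor ring at
probe time like this (NUM_DESC, struct my_desc, regs, and MY_RING_BASE
are all invented for the sketch)::

        #define NUM_DESC 256

        struct my_desc {
                __le32 addr;
                __le32 flags;
        };

        struct my_desc *ring;
        dma_addr_t ring_dma;

        ring = dma_alloc_coherent(dev, NUM_DESC * sizeof(*ring),
                                  &ring_dma, GFP_KERNEL);
        if (!ring)
                goto err;

        /* Program the device with the DMA address of the ring. */
        iowrite32(lower_32_bits(ring_dma), regs + MY_RING_BASE);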

To unmap and free such a DMA region, you call::

        dma_free_coherent(dev, size, cpu_addr, dma_handle);

where dev, size are the same as in the above call and cpu_addr and
dma_handle are the values dma_alloc_coherent() returned to you.
This function may not be called in interrupt context.

If your driver needs lots of smaller memory regions, you can write
custom code to subdivide pages returned by dma_alloc_coherent(),
or you can use the dma_pool API to do that.  A dma_pool is like
a kmem_cache, but it uses dma_alloc_coherent(), not __get_free_pages().
Also, it understands common hardware constraints for alignment,
like queue heads needing to be aligned on N byte boundaries.

Create a dma_pool like this::

        struct dma_pool *pool;

        pool = dma_pool_create(name, dev, size, align, boundary);

The "name" is for diagnostics (like a kmem_cache name); dev and size
are as above.  The device's hardware alignment requirement for this
type of data is "align" (which is expressed in bytes, and must be a
power of two).  If your device has no boundary crossing restrictions,
pass 0 for boundary; passing 4096 says memory allocated from this pool
must not cross 4KByte boundaries (but in that case it may be better to
use dma_alloc_coherent() directly instead).

Allocate memory from a DMA pool like this::

        cpu_addr = dma_pool_alloc(pool, flags, &dma_handle);

flags are GFP_KERNEL if blocking is permitted (not in_interrupt nor
holding SMP locks), GFP_ATOMIC otherwise.  Like dma_alloc_coherent(),
this returns two values, cpu_addr and dma_handle.

Free memory that was allocated from a dma_pool like this::

        dma_pool_free(pool, cpu_addr, dma_handle);

where pool is what you passed to dma_pool_alloc(), and cpu_addr and
dma_handle are the values dma_pool_alloc() returned. This function
may be called in interrupt context.

Destroy a dma_pool by calling::

        dma_pool_destroy(pool);

Make sure you've called dma_pool_free() for all memory allocated
from a pool before you destroy the pool. This function may not
be called in interrupt context.
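
Putting the calls together, a hedged lifetime sketch for a pool of
64-byte, 16-byte-aligned command blocks (the name and sizes are
illustrative only) might be::

        struct dma_pool *pool;
        void *cmd;
        dma_addr_t cmd_dma;

        pool = dma_pool_create("mydev-cmds", dev, 64, 16, 0);
        if (!pool)
                goto err;

        cmd = dma_pool_alloc(pool, GFP_KERNEL, &cmd_dma);
        if (!cmd)
                goto err_destroy;

        /* ... use cmd via the CPU, hand cmd_dma to the device ... */

        dma_pool_free(pool, cmd, cmd_dma);
        dma_pool_destroy(pool);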

DMA Direction
=============

The interfaces described in subsequent portions of this document
take a DMA direction argument, which is an integer and takes on
one of the following values::

 DMA_BIDIRECTIONAL
 DMA_TO_DEVICE
 DMA_FROM_DEVICE
 DMA_NONE

You should provide the exact DMA direction if you know it.

DMA_TO_DEVICE means "from main memory to the device".
DMA_FROM_DEVICE means "from the device to main memory".
It is the direction in which the data moves during the DMA
transfer.

You are _strongly_ encouraged to specify this as precisely
as you possibly can.

If you absolutely cannot know the direction of the DMA transfer,
specify DMA_BIDIRECTIONAL.  It means that the DMA can go in
either direction.  The platform guarantees that you may legally
specify this, and that it will work, but this may be at the
cost of performance, for example.

The value DMA_NONE is to be used for debugging.  One can hold this
in a data structure before the precise direction is known, and it
will help catch cases where your direction tracking logic has
failed to set things up properly.
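
A minimal sketch of that debugging idiom (the structure here is invented
for illustration)::

        struct my_request {
                void                    *buf;
                enum dma_data_direction dir;
        };

        ...

        req->dir = DMA_NONE;    /* set for real before mapping; a stray
                                 * DMA_NONE makes broken direction
                                 * tracking easier to catch */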

Another advantage of specifying this value precisely (outside of
potential platform-specific optimizations of such) is for debugging.
Some platforms actually have a write permission boolean which DMA
mappings can be marked with, much like page protections in the user
program address space.  Such platforms can and do report errors in the
kernel logs when the DMA controller hardware detects violation of the
permission setting.

Only streaming mappings specify a direction; consistent mappings
implicitly have a direction attribute setting of
DMA_BIDIRECTIONAL.

The SCSI subsystem tells you the direction to use in the
'sc_data_direction' member of the SCSI command your driver is
working on.

For networking drivers, it's a rather simple affair.  For transmit
packets, map/unmap them with the DMA_TO_DEVICE direction
specifier.  For receive packets, just the opposite, map/unmap them
with the DMA_FROM_DEVICE direction specifier.
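
For instance, a hedged sketch of the two directions in a network driver
(skb, rx_buf, and RX_BUF_SIZE are assumed to exist elsewhere)::

        dma_addr_t tx_dma, rx_dma;

        /* Transmit: data moves from main memory to the device. */
        tx_dma = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);

        /* Receive: data moves from the device to main memory. */
        rx_dma = dma_map_single(dev, rx_buf, RX_BUF_SIZE, DMA_FROM_DEVICE);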

Using Streaming DMA mappings
============================

The streaming DMA mapping routines can be called from interrupt
context.  There are two versions of each map/unmap, one which will
map/unmap a single memory region, and one which will map/unmap a
scatterlist.

To map a single region, you do::

        struct device *dev = &my_dev->dev;
        dma_addr_t dma_handle;
        void *addr = buffer->ptr;
        size_t size = buffer->len;

        dma_handle = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling;
        }

and to unmap it::

        dma_unmap_single(dev, dma_handle, size, direction);

You should call dma_mapping_error() as dma_map_single() could fail and
return an error.  Doing so will ensure that the mapping code will work
correctly on all DMA implementations without any dependency on the
specifics of the underlying implementation. Using the returned address
without checking for errors could result in failures ranging from panics
to silent data corruption.  The same applies to dma_map_page() as well.

You should call dma_unmap_single() when the DMA activity is finished, e.g.,
from the interrupt which told you that the DMA transfer is done.

Using CPU pointers like this for single mappings has a disadvantage:
you cannot reference HIGHMEM memory in this way.  Thus, there is a
map/unmap interface pair akin to dma_{map,unmap}_single().  These
interfaces deal with page/offset pairs instead of CPU pointers.
Specifically::

        struct device *dev = &my_dev->dev;
        dma_addr_t dma_handle;
        struct page *page = buffer->page;
        unsigned long offset = buffer->offset;
        size_t size = buffer->len;

        dma_handle = dma_map_page(dev, page, offset, size, direction);
        if (dma_mapping_error(dev, dma_handle)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling;
        }

        ...

        dma_unmap_page(dev, dma_handle, size, direction);

Here, "offset" means byte offset within the given page.

You should call dma_mapping_error() as dma_map_page() could fail and
return an error, as outlined under the dma_map_single() discussion.

You should call dma_unmap_page() when the DMA activity is finished, e.g.,
from the interrupt which told you that the DMA transfer is done.

With scatterlists, you map a region gathered from several regions by::

        int i, count = dma_map_sg(dev, sglist, nents, direction);
        struct scatterlist *sg;

        for_each_sg(sglist, sg, count, i) {
                hw_address[i] = sg_dma_address(sg);
                hw_len[i] = sg_dma_len(sg);
        }

where nents is the number of entries in the sglist.
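
A hedged sketch of building a small scatterlist before mapping it (the
two buffers and their lengths are assumed to come from elsewhere)::

        struct scatterlist sgl[2];
        int count;

        sg_init_table(sgl, 2);
        sg_set_buf(&sgl[0], buf0, len0);
        sg_set_buf(&sgl[1], buf1, len1);

        count = dma_map_sg(dev, sgl, 2, DMA_TO_DEVICE);
        if (count == 0)
                goto map_error_handling;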

The implementation is free to merge several consecutive sglist entries
into one (e.g. if DMA mapping is done with PAGE_SIZE granularity, any
consecutive sglist entries can be merged into one provided the first one
ends and the second one starts on a page boundary - in fact this is a huge
advantage for cards which either cannot do scatter-gather or have a very
limited number of scatter-gather entries) and returns the actual number
of sg entries it mapped them to. On failure 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

To unmap a scatterlist, just call::

        dma_unmap_sg(dev, sglist, nents, direction);

Again, make sure DMA activity has already finished.

.. note::

        The 'nents' argument to the dma_unmap_sg call must be
        the _same_ one you passed into the dma_map_sg call,
        it should _NOT_ be the 'count' value _returned_ from the
        dma_map_sg call.

Every dma_map_{single,sg}() call should have its dma_unmap_{single,sg}()
counterpart, because the DMA address space is a shared resource and
you could render the machine unusable by consuming all DMA addresses.

If you need to use the same streaming DMA region multiple times and touch
the data in between the DMA transfers, the buffer needs to be synced
properly in order for the CPU and device to see the most up-to-date and
correct copy of the DMA buffer.

So, firstly, just map it with dma_map_{single,sg}(), and after each DMA
transfer call either::

        dma_sync_single_for_cpu(dev, dma_handle, size, direction);

or::

        dma_sync_sg_for_cpu(dev, sglist, nents, direction);

as appropriate.

Then, if you wish to let the device get at the DMA area again,
finish accessing the data with the CPU, and then before actually
giving the buffer to the hardware call either::

        dma_sync_single_for_device(dev, dma_handle, size, direction);

or::

        dma_sync_sg_for_device(dev, sglist, nents, direction);

as appropriate.

.. note::

              The 'nents' argument to dma_sync_sg_for_cpu() and
              dma_sync_sg_for_device() must be the same one passed to
              dma_map_sg(). It is _NOT_ the count returned by
              dma_map_sg().

After the last DMA transfer call one of the DMA unmap routines
dma_unmap_{single,sg}(). If you don't touch the data from the first
dma_map_*() call till dma_unmap_*(), then you don't have to call the
dma_sync_*() routines at all.

Here is pseudo code which shows a situation in which you would need
to use the dma_sync_*() interfaces::

        my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len)
        {
                dma_addr_t mapping;

                mapping = dma_map_single(cp->dev, buffer, len, DMA_FROM_DEVICE);
                if (dma_mapping_error(cp->dev, mapping)) {
                        /*
                         * reduce current DMA mapping usage,
                         * delay and try again later or
                         * reset driver.
                         */
                        goto map_error_handling;
                }

                cp->rx_buf = buffer;
                cp->rx_len = len;
                cp->rx_dma = mapping;

                give_rx_buf_to_card(cp);
        }

        ...

        my_card_interrupt_handler(int irq, void *devid, struct pt_regs *regs)
        {
                struct my_card *cp = devid;

                ...
                if (read_card_status(cp) == RX_BUF_TRANSFERRED) {
                        struct my_card_header *hp;

                        /* Examine the header to see if we wish
                         * to accept the data.  But synchronize
                         * the DMA transfer with the CPU first
                         * so that we see updated contents.
                         */
                        dma_sync_single_for_cpu(cp->dev, cp->rx_dma,
                                                cp->rx_len,
                                                DMA_FROM_DEVICE);

                        /* Now it is safe to examine the buffer. */
                        hp = (struct my_card_header *) cp->rx_buf;
                        if (header_is_ok(hp)) {
                                dma_unmap_single(cp->dev, cp->rx_dma, cp->rx_len,
                                                 DMA_FROM_DEVICE);
                                pass_to_upper_layers(cp->rx_buf);
                                make_and_setup_new_rx_buf(cp);
                        } else {
                                /* CPU should not write to
                                 * DMA_FROM_DEVICE-mapped area,
                                 * so dma_sync_single_for_device() is
                                 * not needed here. It would be required
                                 * for DMA_BIDIRECTIONAL mapping if
                                 * the memory was modified.
                                 */
                                give_rx_buf_to_card(cp);
                        }
                }
        }

Drivers converted fully to this interface should not use virt_to_bus() any
longer, nor should they use bus_to_virt(). Some drivers have to be changed a
little bit, because there is no longer an equivalent to bus_to_virt() in the
dynamic DMA mapping scheme - you have to always store the DMA addresses
returned by the dma_alloc_coherent(), dma_pool_alloc(), and dma_map_single()
calls (dma_map_sg() stores them in the scatterlist itself if the platform
supports dynamic DMA mapping in hardware) in your driver structures and/or
in the card registers.

All drivers should be using these interfaces with no exceptions.  It
is planned to completely remove virt_to_bus() and bus_to_virt() as
they are entirely deprecated.  Some ports already do not provide these
as it is impossible to correctly support them.

Handling Errors
===============

DMA address space is limited on some architectures and an allocation
failure can be determined by:

- checking if dma_alloc_coherent() returns NULL or dma_map_sg() returns 0

- checking the dma_addr_t returned from dma_map_single() and dma_map_page()
  by using dma_mapping_error()::

        dma_addr_t dma_handle;

        dma_handle = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling;
        }

- unmap pages that are already mapped, when a mapping error occurs in the
  middle of a multiple-page mapping attempt. These examples are applicable
  to dma_map_page() as well.

Example 1::

        dma_addr_t dma_handle1;
        dma_addr_t dma_handle2;

        dma_handle1 = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle1)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling1;
        }
        dma_handle2 = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle2)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling2;
        }

        ...

        map_error_handling2:
                dma_unmap_single(dev, dma_handle1, size, direction);
        map_error_handling1:

Example 2::

        /*
         * if buffers are allocated in a loop, unmap all mapped buffers when
         * mapping error is detected in the middle
         */

        dma_addr_t dma_addr;
        dma_addr_t array[DMA_BUFFERS];
        int save_index = 0;

        for (i = 0; i < DMA_BUFFERS; i++) {

                ...

                dma_addr = dma_map_single(dev, addr, size, direction);
                if (dma_mapping_error(dev, dma_addr)) {
                        /*
                         * reduce current DMA mapping usage,
                         * delay and try again later or
                         * reset driver.
                         */
                        goto map_error_handling;
                }
                array[i] = dma_addr;
                save_index++;
        }

        ...

        map_error_handling:

        for (i = 0; i < save_index; i++) {

                ...

                dma_unmap_single(dev, array[i], size, direction);
        }

Networking drivers must call dev_kfree_skb() to free the socket buffer
and return NETDEV_TX_OK if the DMA mapping fails on the transmit hook
(ndo_start_xmit). This means that the socket buffer is just dropped in
the failure case.
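
A hedged sketch of that transmit-hook pattern (struct my_priv, its dev
member, and the elided "..." body are placeholders)::

        static netdev_tx_t my_start_xmit(struct sk_buff *skb,
                                         struct net_device *ndev)
        {
                struct my_priv *priv = netdev_priv(ndev);
                dma_addr_t mapping;

                mapping = dma_map_single(priv->dev, skb->data, skb->len,
                                         DMA_TO_DEVICE);
                if (dma_mapping_error(priv->dev, mapping)) {
                        dev_kfree_skb(skb);     /* drop the packet */
                        return NETDEV_TX_OK;    /* do not requeue it */
                }

                ...
        }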

SCSI drivers must return SCSI_MLQUEUE_HOST_BUSY if the DMA mapping
fails in the queuecommand hook. This means that the SCSI subsystem
passes the command to the driver again later.
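
Correspondingly, a hedged sketch of a queuecommand hook (my_map_cmd() is
an invented helper that performs the driver's DMA mappings)::

        static int my_queuecommand(struct Scsi_Host *host,
                                   struct scsi_cmnd *cmd)
        {
                if (my_map_cmd(cmd))
                        return SCSI_MLQUEUE_HOST_BUSY;  /* retried later */

                ...
        }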

Optimizing Unmap State Space Consumption
========================================

On many platforms, dma_unmap_{single,page}() is simply a nop.
Therefore, keeping track of the mapping address and length is a waste
of space.  Instead of filling your drivers up with ifdefs and the like
to "work around" this (which would defeat the whole purpose of a
portable API) the following facilities are provided.

Actually, instead of describing the macros one by one, we'll
transform some example code.

1) Use DEFINE_DMA_UNMAP_{ADDR,LEN} in state saving structures.
   Example, before::

        struct ring_state {
                struct sk_buff *skb;
                dma_addr_t mapping;
                __u32 len;
        };

   after::

        struct ring_state {
                struct sk_buff *skb;
                DEFINE_DMA_UNMAP_ADDR(mapping);
                DEFINE_DMA_UNMAP_LEN(len);
        };

2) Use dma_unmap_{addr,len}_set() to set these values.
   Example, before::

        ringp->mapping = FOO;
        ringp->len = BAR;

   after::

        dma_unmap_addr_set(ringp, mapping, FOO);
        dma_unmap_len_set(ringp, len, BAR);

3) Use dma_unmap_{addr,len}() to access these values.
   Example, before::

        dma_unmap_single(dev, ringp->mapping, ringp->len,
                         DMA_FROM_DEVICE);

   after::

        dma_unmap_single(dev,
                         dma_unmap_addr(ringp, mapping),
                         dma_unmap_len(ringp, len),
                         DMA_FROM_DEVICE);

It really should be self-explanatory.  We treat the ADDR and LEN
separately, because it is possible for an implementation to only
need the address in order to perform the unmap operation.

Platform Issues
===============

If you are just writing drivers for Linux and do not maintain
an architecture port for the kernel, you can safely skip down
to "Closing".

1) Struct scatterlist requirements.

   You need to enable CONFIG_NEED_SG_DMA_LENGTH if the architecture
   supports IOMMUs (including software IOMMU).

2) ARCH_DMA_MINALIGN

   Architectures must ensure that kmalloc'ed buffers are
   DMA-safe. Drivers and subsystems depend on it. If an architecture
   isn't fully DMA-coherent (i.e. hardware doesn't ensure that data in
   the CPU cache is identical to data in main memory),
   ARCH_DMA_MINALIGN must be set so that the memory allocator
   makes sure that kmalloc'ed buffers don't share a cache line with
   others. See arch/arm/include/asm/cache.h as an example.

   Note that ARCH_DMA_MINALIGN is about DMA memory alignment
   constraints. You don't need to worry about the architecture data
   alignment constraints (e.g. the alignment constraints about 64-bit
   objects).

Closing
=======

This document, and the API itself, would not be in its current
form without the feedback and suggestions from numerous individuals.
We would like to specifically mention, in no particular order, the
following people::

        Russell King <rmk@arm.linux.org.uk>
        Leo Dagum <dagum@barrel.engr.sgi.com>
        Ralf Baechle <ralf@oss.sgi.com>
        Grant Grundler <grundler@cup.hp.com>
        Jay Estabrook <Jay.Estabrook@compaq.com>
        Thomas Sailer <sailer@ife.ee.ethz.ch>
        Andrea Arcangeli <andrea@suse.de>
        Jens Axboe <jens.axboe@oracle.com>
        David Mosberger-Tang <davidm@hpl.hp.com>