               Dynamic DMA mapping using the generic device
               ============================================

        James E.J. Bottomley <James.Bottomley@HansenPartnership.com>

This document describes the DMA API. For a more gentle introduction
to the API (and actual examples) see
Documentation/DMA-API-HOWTO.txt.

This API is split into two pieces. Part I describes the API. Part II
describes the extensions to the API for supporting non-consistent
memory machines. Unless you know that your driver absolutely has to
support non-consistent platforms (this is usually only legacy
platforms) you should only use the API described in part I.

Part I - dma_ API
-------------------------------------

To get the dma_ API, you must #include <linux/dma-mapping.h>


Part Ia - Using large dma-coherent buffers
------------------------------------------

void *
dma_alloc_coherent(struct device *dev, size_t size,
		   dma_addr_t *dma_handle, gfp_t flag)

Consistent memory is memory for which a write by either the device or
the processor can immediately be read by the processor or device
without having to worry about caching effects. (You may however need
to make sure to flush the processor's write buffers before telling
devices to read that memory.)

This routine allocates a region of <size> bytes of consistent memory.
It also returns a <dma_handle> which may be cast to an unsigned
integer the same width as the bus and used as the physical address
base of the region.

Returns: a pointer to the allocated region (in the processor's virtual
address space) or NULL if the allocation failed.

Note: consistent memory can be expensive on some platforms, and the
minimum allocation length may be as big as a page, so you should
consolidate your requests for consistent memory as much as possible.
The simplest way to do that is to use the dma_pool calls (see below).

The flag parameter (dma_alloc_coherent() only) allows the caller to
specify the GFP_ flags (see kmalloc()) for the allocation (the
implementation may choose to ignore flags that affect the location of
the returned memory, like GFP_DMA).

void *
dma_zalloc_coherent(struct device *dev, size_t size,
		    dma_addr_t *dma_handle, gfp_t flag)

Wraps dma_alloc_coherent() and also zeroes the returned memory if the
allocation attempt succeeded.

void
dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
		  dma_addr_t dma_handle)

Free a region of consistent memory you previously allocated. dev,
size and dma_handle must all be the same as those passed into the
consistent allocate. cpu_addr must be the virtual address returned by
the consistent allocate.

Note that unlike their sibling allocation calls, these routines
may only be called with IRQs enabled.
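To make the pairing concrete, here is a minimal sketch of allocating
and freeing a coherent buffer; the device pointer, the buffer's role
and the error handling are hypothetical, not part of the API:

	dma_addr_t ring_handle;
	void *ring;

	/* One page of consistent memory for a hypothetical descriptor ring. */
	ring = dma_alloc_coherent(dev, PAGE_SIZE, &ring_handle, GFP_KERNEL);
	if (!ring)
		return -ENOMEM;

	/* ... program ring_handle into the device; use ring from the CPU ... */

	dma_free_coherent(dev, PAGE_SIZE, ring, ring_handle);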
Part Ib - Using small dma-coherent buffers
------------------------------------------

To get this part of the dma_ API, you must #include <linux/dmapool.h>

Many drivers need lots of small dma-coherent memory regions for DMA
descriptors or I/O buffers. Rather than allocating in units of a page
or more using dma_alloc_coherent(), you can use DMA pools. These work
much like a struct kmem_cache, except that they use the dma-coherent
allocator, not __get_free_pages(). Also, they understand common
hardware constraints for alignment, like queue heads needing to be
aligned on N-byte boundaries.


	struct dma_pool *
	dma_pool_create(const char *name, struct device *dev,
			size_t size, size_t align, size_t alloc);

dma_pool_create() initializes a pool of dma-coherent buffers
for use with a given device. It must be called in a context which
can sleep.

The "name" is for diagnostics (like a struct kmem_cache name); dev and
size are like what you'd pass to dma_alloc_coherent(). The device's
hardware alignment requirement for this type of data is "align" (which
is expressed in bytes, and must be a power of two). If your device has
no boundary crossing restrictions, pass 0 for alloc; passing 4096 says
memory allocated from this pool must not cross 4KByte boundaries.


	void *dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
			     dma_addr_t *dma_handle);

This allocates memory from the pool; the returned memory will meet the
size and alignment requirements specified at creation time. Pass
GFP_ATOMIC to prevent blocking, or if it's permitted (not
in_interrupt, not holding SMP locks), pass GFP_KERNEL to allow
blocking. Like dma_alloc_coherent(), this returns two values: an
address usable by the cpu, and the dma address usable by the pool's
device.


	void dma_pool_free(struct dma_pool *pool, void *vaddr,
			   dma_addr_t addr);

This puts memory back into the pool. The pool is what was passed to
dma_pool_alloc(); the cpu (vaddr) and dma addresses are what were
returned when that routine allocated the memory being freed.


	void dma_pool_destroy(struct dma_pool *pool);

dma_pool_destroy() frees the resources of the pool. It must be
called in a context which can sleep. Make sure you've freed all
allocated memory back to the pool before you destroy it.
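Putting the four calls together, a minimal pool lifecycle might look
like the following sketch; the pool name, descriptor size and
alignment are made up for illustration:

	struct dma_pool *pool;
	dma_addr_t handle;
	void *desc;

	/* 64-byte descriptors, aligned to 16 bytes, no boundary restriction. */
	pool = dma_pool_create("mydev-desc", dev, 64, 16, 0);
	if (!pool)
		return -ENOMEM;

	desc = dma_pool_alloc(pool, GFP_KERNEL, &handle);
	if (!desc) {
		dma_pool_destroy(pool);
		return -ENOMEM;
	}

	/* ... hand <handle> to the hardware, fill <desc> from the CPU ... */

	dma_pool_free(pool, desc, handle);
	dma_pool_destroy(pool);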
Part Ic - DMA addressing limitations
------------------------------------

int
dma_supported(struct device *dev, u64 mask)

Checks to see if the device can support DMA to the memory described by
mask.

Returns: 1 if it can and 0 if it can't.

Notes: This routine merely tests to see if the mask is possible. It
won't change the current mask settings. It is more intended as an
internal API for use by the platform than an external API for use by
driver writers.

int
dma_set_mask_and_coherent(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
streaming and coherent DMA mask parameters if it is.

Returns: 0 if successful and a negative error if not.

int
dma_set_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
streaming DMA mask parameter if it is.

Returns: 0 if successful and a negative error if not.

int
dma_set_coherent_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
coherent DMA mask parameter if it is.

Returns: 0 if successful and a negative error if not.

u64
dma_get_required_mask(struct device *dev)

This API returns the mask that the platform requires to
operate efficiently. Usually this means the returned mask
is the minimum required to cover all of memory. Examining the
required mask gives drivers with variable descriptor sizes the
opportunity to use smaller descriptors as necessary.

Requesting the required mask does not alter the current mask. If you
wish to take advantage of it, you should issue a dma_set_mask()
call to set the mask to the value returned.
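As a sketch, a driver for hardware limited to 32-bit DMA addresses
might do the following early in its probe routine (the error handling
shown is illustrative, not mandated):

	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
		dev_warn(dev, "no suitable DMA available\n");
		return -EIO;
	}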
Part Id - Streaming DMA mappings
--------------------------------

dma_addr_t
dma_map_single(struct device *dev, void *cpu_addr, size_t size,
	       enum dma_data_direction direction)

Maps a piece of processor virtual memory so it can be accessed by the
device and returns the physical handle of the memory.

The dma_ API uses a strongly typed enumerator for its direction:

DMA_NONE		no direction (used for debugging)
DMA_TO_DEVICE		data is going from the memory to the device
DMA_FROM_DEVICE		data is coming from the device to the memory
DMA_BIDIRECTIONAL	direction isn't known

Notes: Not all memory regions in a machine can be mapped by this
API. Further, regions that appear to be physically contiguous in
kernel virtual space may not be contiguous as physical memory. Since
this API does not provide any scatter/gather capability, it will fail
if the user tries to map a non-physically contiguous piece of memory.
For this reason, it is recommended that memory mapped by this API be
obtained only from sources which guarantee it to be physically
contiguous (like kmalloc()).

Further, the physical address of the memory must be within the
dma_mask of the device (the dma_mask represents a bit mask of the
addressable region for the device, i.e., if the physical address of
the memory ANDed with the dma_mask is still equal to the physical
address, then the device can perform DMA to the memory). In order to
ensure that the memory allocated by kmalloc is within the dma_mask,
the driver may specify various platform-dependent flags to restrict
the physical memory range of the allocation (e.g. on x86, GFP_DMA
guarantees to be within the first 16MB of available physical memory,
as required by ISA devices).

Note also that the above constraints on physical contiguity and
dma_mask may not apply if the platform has an IOMMU (a device which
supplies a physical-to-virtual mapping between the I/O memory bus and
the device). However, to be portable, device driver writers may *not*
assume that such an IOMMU exists.

Warnings: Memory coherency operates at a granularity called the cache
line width. In order for memory mapped by this API to operate
correctly, the mapped region must begin exactly on a cache line
boundary and end exactly on one (to prevent two separately mapped
regions from sharing a single cache line). Since the cache line size
may not be known at compile time, the API will not enforce this
requirement. Therefore, it is recommended that driver writers who
don't take special care to determine the cache line size at run time
only map virtual regions that begin and end on page boundaries (which
are guaranteed also to be cache line boundaries).

DMA_TO_DEVICE synchronisation must be done after the last modification
of the memory region by the software and before it is handed off to
the device. Once this primitive is used, memory covered by this
primitive should be treated as read-only by the device. If the device
may write to it at any point, it should be DMA_BIDIRECTIONAL (see
below).

DMA_FROM_DEVICE synchronisation must be done before the driver
accesses data that may be changed by the device. This memory should
be treated as read-only by the driver. If the driver needs to write
to it at any point, it should be DMA_BIDIRECTIONAL (see below).

DMA_BIDIRECTIONAL requires special handling: it means that the driver
isn't sure if the memory was modified before being handed off to the
device and also isn't sure if the device will also modify it. Thus,
you must always sync bidirectional memory twice: once before the
memory is handed off to the device (to make sure all memory changes
are flushed from the processor) and once before the data may be
accessed after being used by the device (to make sure any processor
cache lines are updated with data that the device may have changed).

void
dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
		 enum dma_data_direction direction)

Unmaps the region previously mapped. All the parameters must be
identical to those passed to (and returned by) the mapping API.

dma_addr_t
dma_map_page(struct device *dev, struct page *page,
	     unsigned long offset, size_t size,
	     enum dma_data_direction direction)
void
dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
	       enum dma_data_direction direction)

API for mapping and unmapping for pages. All the notes and warnings
for the other mapping APIs apply here. Also, although the <offset>
and <size> parameters are provided to do partial page mapping, it is
recommended that you never use these unless you really know what the
cache width is.

int
dma_mapping_error(struct device *dev, dma_addr_t dma_addr)

In some circumstances dma_map_single() and dma_map_page() will fail to
create a mapping. A driver can check for these errors by testing the
returned dma address with dma_mapping_error(). A non-zero return value
means the mapping could not be created and the driver should take
appropriate action (e.g. reduce current DMA mapping usage or delay and
try again later).
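Tying these pieces together, a typical streaming mapping of a
kmalloc()ed buffer might look like the following sketch; buf and len
are hypothetical, and the recovery path is up to the driver:

	dma_addr_t handle;

	handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, handle))
		return -ENOMEM;	/* or back off and retry later */

	/* ... tell the device to read len bytes at handle ... */

	dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);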
	int
	dma_map_sg(struct device *dev, struct scatterlist *sg,
		   int nents, enum dma_data_direction direction)

Returns: the number of physical segments mapped (this may be shorter
than <nents> passed in if some elements of the scatter/gather list are
physically or virtually adjacent and an IOMMU maps them with a single
entry).

Please note that a scatterlist that has already been mapped cannot be
mapped again. The mapping process is allowed to destroy information in
the sg.

As with the other mapping interfaces, dma_map_sg() can fail. When it
does, 0 is returned and a driver must take appropriate action. It is
critical that the driver do something; in the case of a block driver,
aborting the request or even oopsing is better than doing nothing and
corrupting the filesystem.

With scatterlists, you use the resulting mapping like this:

	int i, count = dma_map_sg(dev, sglist, nents, direction);
	struct scatterlist *sg;

	for_each_sg(sglist, sg, count, i) {
		hw_address[i] = sg_dma_address(sg);
		hw_len[i] = sg_dma_len(sg);
	}

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. with an IOMMU, or if several pages just happen to be
physically contiguous) and returns the actual number of sg entries it
mapped them to. On failure, 0 is returned.

Then you should loop count times (note: this can be less than nents
times) and use sg_dma_address() and sg_dma_len() macros where you
previously accessed sg->address and sg->length as shown above.

	void
	dma_unmap_sg(struct device *dev, struct scatterlist *sg,
		     int nhwentries, enum dma_data_direction direction)

Unmap the previously mapped scatter/gather list. All the parameters
must be the same as those passed into the scatter/gather mapping API.

Note: <nhwentries> must be the number you passed in, *not* the number
of physical entries returned.

void
dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size,
			enum dma_data_direction direction)
void
dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle, size_t size,
			   enum dma_data_direction direction)
void
dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems,
		    enum dma_data_direction direction)
void
dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nelems,
		       enum dma_data_direction direction)

Synchronise a single contiguous or scatter/gather mapping for the cpu
and device. With the sync_sg API, all the parameters must be the same
as those passed into the scatter/gather mapping API. With the
sync_single API, you can use dma_handle and size parameters that
aren't identical to those passed into the single mapping API to do a
partial sync.

Notes: You must do this:

- Before reading values that have been written by DMA from the device
  (use the DMA_FROM_DEVICE direction)
- After writing values that will be written to the device using DMA
  (use the DMA_TO_DEVICE direction)
- before *and* after handing memory to the device if the memory is
  DMA_BIDIRECTIONAL

See also dma_map_single().
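For example, a driver that keeps one long-lived DMA_FROM_DEVICE
mapping and reuses it across transfers might bracket each CPU access
like this sketch (handle, size and the helper function are
hypothetical):

	/* Give the buffer back to the CPU before looking at it. */
	dma_sync_single_for_cpu(dev, handle, size, DMA_FROM_DEVICE);

	examine_rx_data(buf);	/* hypothetical CPU-side processing */

	/* Hand the same mapping back to the device for the next transfer. */
	dma_sync_single_for_device(dev, handle, size, DMA_FROM_DEVICE);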
dma_addr_t
dma_map_single_attrs(struct device *dev, void *cpu_addr, size_t size,
		     enum dma_data_direction dir,
		     struct dma_attrs *attrs)

void
dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
		       size_t size, enum dma_data_direction dir,
		       struct dma_attrs *attrs)

int
dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
		 int nents, enum dma_data_direction dir,
		 struct dma_attrs *attrs)

void
dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
		   int nents, enum dma_data_direction dir,
		   struct dma_attrs *attrs)

The four functions above are just like the counterpart functions
without the _attrs suffixes, except that they pass an optional
struct dma_attrs*.

struct dma_attrs encapsulates a set of "dma attributes". For the
definition of struct dma_attrs see linux/dma-attrs.h.

The interpretation of dma attributes is architecture-specific, and
each attribute should be documented in Documentation/DMA-attributes.txt.

If struct dma_attrs* is NULL, the semantics of each of these
functions is identical to those of the corresponding function
without the _attrs suffix. As a result dma_map_single_attrs()
can generally replace dma_map_single(), etc.

As an example of the use of the *_attrs functions, here's how
you could pass an attribute DMA_ATTR_FOO when mapping memory
for DMA:

#include <linux/dma-attrs.h>
/* DMA_ATTR_FOO should be defined in linux/dma-attrs.h and
 * documented in Documentation/DMA-attributes.txt */
...

	DEFINE_DMA_ATTRS(attrs);
	dma_set_attr(DMA_ATTR_FOO, &attrs);
	....
	n = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, &attrs);
	....

Architectures that care about DMA_ATTR_FOO would check for its
presence in their implementations of the mapping and unmapping
routines, e.g.:

void whizco_dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
			     int nents, enum dma_data_direction dir,
			     struct dma_attrs *attrs)
{
	....
	int foo = dma_get_attr(DMA_ATTR_FOO, attrs);
	....
	if (foo)
		/* twizzle the frobnozzle */
	....
}


Part II - Advanced dma_ usage
-----------------------------

Warning: These pieces of the DMA API should not be used in the
majority of cases, since they cater for unlikely corner cases that
don't belong in usual drivers.

If you don't understand how cache line coherency works between a
processor and an I/O device, you should not be using this part of the
API at all.

void *
dma_alloc_noncoherent(struct device *dev, size_t size,
		      dma_addr_t *dma_handle, gfp_t flag)

Identical to dma_alloc_coherent() except that the platform will
choose to return either consistent or non-consistent memory as it sees
fit. By using this API, you are guaranteeing to the platform that you
have all the correct and necessary sync points for this memory in the
driver should it choose to return non-consistent memory.

Note: where the platform can return consistent memory, it will
guarantee that the sync points become nops.

Warning: Handling non-consistent memory is a real pain. You should
only ever use this API if you positively know your driver will be
required to work on one of the rare (usually non-PCI) architectures
that simply cannot make consistent memory.

void
dma_free_noncoherent(struct device *dev, size_t size, void *cpu_addr,
		     dma_addr_t dma_handle)

Free memory allocated by the non-consistent API. All parameters must
be identical to those passed into (and returned by)
dma_alloc_noncoherent().

int
dma_get_cache_alignment(void)

Returns the processor cache alignment. This is the absolute minimum
alignment *and* width that you must observe when either mapping
memory or doing partial flushes.

Notes: This API may return a number *larger* than the actual cache
line, but it will guarantee that one or more cache lines fit exactly
into the width returned by this call. It will also always be a power
of two for easy alignment.

void
dma_cache_sync(struct device *dev, void *vaddr, size_t size,
	       enum dma_data_direction direction)

Do a partial sync of memory that was allocated by
dma_alloc_noncoherent(), starting at virtual address vaddr and
continuing on for size. Again, you *must* observe the cache line
boundaries when doing this.
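As an illustration of where the sync points go, a driver using the
non-consistent API might follow this sketch (all names and sizes are
hypothetical):

	dma_addr_t handle;
	void *vaddr;

	vaddr = dma_alloc_noncoherent(dev, size, &handle, GFP_KERNEL);
	if (!vaddr)
		return -ENOMEM;

	/* CPU writes a descriptor into vaddr ... */
	/* ... then flushes it before the device is allowed to look. */
	dma_cache_sync(dev, vaddr, size, DMA_TO_DEVICE);

	/* ... device runs; sync again before the CPU reads the results. */
	dma_cache_sync(dev, vaddr, size, DMA_FROM_DEVICE);

	dma_free_noncoherent(dev, size, vaddr, handle);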
int
dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
			    dma_addr_t device_addr, size_t size, int
			    flags)

Declare a region of memory to be handed out by dma_alloc_coherent()
when it's asked for coherent memory for this device.

bus_addr is the physical address to which the memory is currently
assigned in the bus responding region (this will be used by the
platform to perform the mapping).

device_addr is the physical address the device needs to be programmed
with to actually address this memory (this will be handed out as the
dma_addr_t in dma_alloc_coherent()).

size is the size of the area (must be a multiple of PAGE_SIZE).

flags can be ORed together and are:

DMA_MEMORY_MAP - request that the memory returned from
dma_alloc_coherent() be directly writable.

DMA_MEMORY_IO - request that the memory returned from
dma_alloc_coherent() be addressable using read/write/memcpy_toio etc.

One or both of these flags must be present.

DMA_MEMORY_INCLUDES_CHILDREN - make the declared memory be allocated by
dma_alloc_coherent of any child devices of this one (for memory residing
on a bridge).

DMA_MEMORY_EXCLUSIVE - only allocate memory from the declared regions.
Do not allow dma_alloc_coherent() to fall back to system memory when
it's out of memory in the declared region.

The return value will be either DMA_MEMORY_MAP or DMA_MEMORY_IO and
must correspond to a passed in flag (i.e. no returning DMA_MEMORY_IO
if only DMA_MEMORY_MAP was passed in) for success or zero for
failure.

Note, for DMA_MEMORY_IO returns, all subsequent memory returned by
dma_alloc_coherent() may no longer be accessed directly, but instead
must be accessed using the correct bus functions. If your driver
isn't prepared to handle this contingency, it should not specify
DMA_MEMORY_IO in the input flags.

As a simplification for the platforms, only *one* such region of
memory may be declared per device.

For reasons of efficiency, most platforms choose to track the declared
region only at the granularity of a page. For smaller allocations,
you should use the dma_pool() API.

void
dma_release_declared_memory(struct device *dev)

Remove the memory region previously declared from the system. This
API performs *no* in-use checking for this region and will return
unconditionally having removed all the required structures. It is the
driver's job to ensure that no parts of this memory region are
currently in use.

void *
dma_mark_declared_memory_occupied(struct device *dev,
				  dma_addr_t device_addr, size_t size)

This is used to occupy specific regions of the declared space
(dma_alloc_coherent() will hand out the first free region it finds).

device_addr is the *device* address of the region requested.

size is the size (and should be a page-sized multiple).

The return value will be either a pointer to the processor virtual
address of the memory, or an error (via PTR_ERR()) if any part of the
region is occupied.
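Following the return-value semantics described above, a sketch of
declaring and later releasing such a region might look like this;
bus_addr and device_addr stand for addresses taken from the
(hypothetical) device's resources:

	/* 1MB window of device-local memory, directly writable by the CPU. */
	if (!dma_declare_coherent_memory(dev, bus_addr, device_addr,
					 0x100000, DMA_MEMORY_MAP))
		return -ENXIO;

	/* dma_alloc_coherent() for this device now hands out this region. */
	...

	dma_release_declared_memory(dev);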
Part III - Debugging driver use of the DMA-API
----------------------------------------------

The DMA-API as described above has some constraints. DMA addresses
must be released with the corresponding function with the same size,
for example. With the advent of hardware IOMMUs it becomes more and
more important that drivers do not violate those constraints. In the
worst case such a violation can result in data corruption, up to
destroyed filesystems.

To debug drivers and find bugs in the usage of the DMA-API, checking
code can be compiled into the kernel which will tell the developer
about those violations. If your architecture supports it, you can
select the "Enable debugging of DMA-API usage" option in your kernel
configuration. Enabling this option has a performance impact. Do not
enable it in production kernels.

If you boot the resulting kernel, it will contain code which does some
bookkeeping about what DMA memory was allocated for which device. If
this code detects an error it prints a warning message with some
details into your kernel log. An example warning message may look
like this:

------------[ cut here ]------------
WARNING: at /data2/repos/linux-2.6-iommu/lib/dma-debug.c:448
	check_unmap+0x203/0x490()
Hardware name:
forcedeth 0000:00:08.0: DMA-API: device driver frees DMA memory with wrong
	function [device address=0x00000000640444be] [size=66 bytes] [mapped as
single] [unmapped as page]
Modules linked in: nfsd exportfs bridge stp llc r8169
Pid: 0, comm: swapper Tainted: G W 2.6.28-dmatest-09289-g8bb99c0 #1
Call Trace:
 <IRQ>  [<ffffffff80240b22>] warn_slowpath+0xf2/0x130
 [<ffffffff80647b70>] _spin_unlock+0x10/0x30
 [<ffffffff80537e75>] usb_hcd_link_urb_to_ep+0x75/0xc0
 [<ffffffff80647c22>] _spin_unlock_irqrestore+0x12/0x40
 [<ffffffff8055347f>] ohci_urb_enqueue+0x19f/0x7c0
 [<ffffffff80252f96>] queue_work+0x56/0x60
 [<ffffffff80237e10>] enqueue_task_fair+0x20/0x50
 [<ffffffff80539279>] usb_hcd_submit_urb+0x379/0xbc0
 [<ffffffff803b78c3>] cpumask_next_and+0x23/0x40
 [<ffffffff80235177>] find_busiest_group+0x207/0x8a0
 [<ffffffff8064784f>] _spin_lock_irqsave+0x1f/0x50
 [<ffffffff803c7ea3>] check_unmap+0x203/0x490
 [<ffffffff803c8259>] debug_dma_unmap_page+0x49/0x50
 [<ffffffff80485f26>] nv_tx_done_optimized+0xc6/0x2c0
 [<ffffffff80486c13>] nv_nic_irq_optimized+0x73/0x2b0
 [<ffffffff8026df84>] handle_IRQ_event+0x34/0x70
 [<ffffffff8026ffe9>] handle_edge_irq+0xc9/0x150
 [<ffffffff8020e3ab>] do_IRQ+0xcb/0x1c0
 [<ffffffff8020c093>] ret_from_intr+0x0/0xa
 <EOI> <4>---[ end trace f6435a98e2a38c0e ]---

The driver developer can find the driver and the device including a
stacktrace of the DMA-API call which caused this warning.

By default only the first error will result in a warning message. All
other errors will only be silently counted. This limitation exists to
prevent the code from flooding your kernel log. To support debugging a
device driver this can be disabled via debugfs. See the debugfs
interface documentation below for details.

The debugfs directory for the DMA-API debugging code is called
dma-api/. In this directory the following files can currently be
found:

	dma-api/all_errors	This file contains a numeric value. If this
				value is not equal to zero the debugging code
				will print a warning for every error it finds
				into the kernel log. Be careful with this
				option, as it can easily flood your logs.

	dma-api/disabled	This read-only file contains the character 'Y'
				if the debugging code is disabled. This can
				happen when it runs out of memory or if it was
				disabled at boot time.

	dma-api/error_count	This file is read-only and shows the total
				number of errors found.

	dma-api/num_errors	The number in this file shows how many
				warnings will be printed to the kernel log
				before it stops. This number is initialized to
				one at system boot and can be set by writing
				into this file.

	dma-api/min_free_entries
				This read-only file can be read to get the
				minimum number of free dma_debug_entries the
				allocator has ever seen. If this value goes
				down to zero the code will disable itself
				because it is no longer reliable.

	dma-api/num_free_entries
				The current number of free dma_debug_entries
				in the allocator.
	dma-api/driver-filter
				You can write a name of a driver into this file
				to limit the debug output to requests from that
				particular driver. Write an empty string to
				that file to disable the filter and see
				all errors again.

If you have this code compiled into your kernel it will be enabled by
default. If you want to boot without the bookkeeping anyway you can
provide 'dma_debug=off' as a boot parameter. This will disable DMA-API
debugging. Notice that you cannot enable it again at runtime. You have
to reboot to do so.

If you want to see debug messages only for a special device driver you
can specify the dma_debug_driver=<drivername> parameter. This will
enable the driver filter at boot time. The debug code will only print
errors for that driver afterwards. This filter can be disabled or
changed later using debugfs.

When the code disables itself at runtime this is most likely because
it ran out of dma_debug_entries. These entries are preallocated at
boot. The number of preallocated entries is defined per architecture.
If it is too low for you, boot with
'dma_debug_entries=<your_desired_number>' to overwrite the
architectural default.

void debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr);

debug_dma_mapping_error() is a dma-debug interface for debugging
drivers that fail to check for DMA mapping errors on addresses
returned by the dma_map_single() and dma_map_page() interfaces. This
interface clears a flag set by debug_dma_map_page() to indicate that
dma_mapping_error() has been called by the driver. When the driver
does an unmap, debug_dma_unmap() checks the flag and, if it is still
set, prints a warning message that includes the call trace leading up
to the unmap. This interface can be called from dma_mapping_error()
routines to enable DMA mapping error check debugging.