============================================
Dynamic DMA mapping using the generic device
============================================

:Author: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>

This document describes the DMA API.  For a more gentle introduction
to the API (and actual examples), see Documentation/DMA-API-HOWTO.txt.

This API is split into two pieces.  Part I describes the basic API.
Part II describes extensions for supporting non-consistent memory
machines.  Unless you know that your driver absolutely has to support
non-consistent platforms (this is usually only legacy platforms) you
should only use the API described in part I.

Part I - dma_API
----------------

To get the dma_API, you must #include <linux/dma-mapping.h>.  This
provides dma_addr_t and the interfaces described below.

A dma_addr_t can hold any valid DMA address for the platform.  It can be
given to a device to use as a DMA source or target.  A CPU cannot reference
a dma_addr_t directly because there may be translation between its physical
address space and the DMA address space.

Part Ia - Using large DMA-coherent buffers
------------------------------------------

::

    void *
    dma_alloc_coherent(struct device *dev, size_t size,
                       dma_addr_t *dma_handle, gfp_t flag)

Consistent memory is memory for which a write by either the device or
the processor can immediately be read by the processor or device
without having to worry about caching effects.  (You may however need
to make sure to flush the processor's write buffers before telling
devices to read that memory.)

This routine allocates a region of <size> bytes of consistent memory.

It returns a pointer to the allocated region (in the processor's virtual
address space) or NULL if the allocation failed.

It also returns a <dma_handle> which may be cast to an unsigned integer the
same width as the bus and given to the device as the DMA address base of
the region.

Note: consistent memory can be expensive on some platforms, and the
minimum allocation length may be as big as a page, so you should
consolidate your requests for consistent memory as much as possible.
The simplest way to do that is to use the dma_pool calls (see below).

The flag parameter (dma_alloc_coherent() only) allows the caller to
specify the ``GFP_`` flags (see kmalloc()) for the allocation (the
implementation may choose to ignore flags that affect the location of
the returned memory, like GFP_DMA).

::

    void
    dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
                      dma_addr_t dma_handle)

Free a region of consistent memory you previously allocated.  dev,
size and dma_handle must all be the same as those passed into
dma_alloc_coherent().  cpu_addr must be the virtual address returned by
dma_alloc_coherent().

Note that unlike their sibling allocation calls, these routines
may only be called with IRQs enabled.
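
For illustration, here is a minimal sketch of how a driver might pair
these two calls to manage a descriptor ring.  The foo_* names and the
ring size are hypothetical, not part of the API::

    #include <linux/dma-mapping.h>

    #define FOO_RING_BYTES	4096	/* hypothetical ring size */

    struct foo_dev {
            struct device	*dev;
            void		*ring_cpu;	/* CPU virtual address */
            dma_addr_t	ring_dma;	/* address the device is programmed with */
    };

    static int foo_alloc_ring(struct foo_dev *foo)
    {
            foo->ring_cpu = dma_alloc_coherent(foo->dev, FOO_RING_BYTES,
                                               &foo->ring_dma, GFP_KERNEL);
            if (!foo->ring_cpu)
                    return -ENOMEM;

            /* hand foo->ring_dma (not the CPU pointer) to the device */
            return 0;
    }

    static void foo_free_ring(struct foo_dev *foo)
    {
            dma_free_coherent(foo->dev, FOO_RING_BYTES, foo->ring_cpu,
                              foo->ring_dma);
    }
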
Part Ib - Using small DMA-coherent buffers
------------------------------------------

To get this part of the dma_API, you must #include <linux/dmapool.h>

Many drivers need lots of small DMA-coherent memory regions for DMA
descriptors or I/O buffers.  Rather than allocating in units of a page
or more using dma_alloc_coherent(), you can use DMA pools.  These work
much like a struct kmem_cache, except that they use the DMA-coherent
allocator, not __get_free_pages().  Also, they understand common
hardware constraints for alignment, like queue heads needing to be
aligned on N-byte boundaries.


::

    struct dma_pool *
    dma_pool_create(const char *name, struct device *dev,
                    size_t size, size_t align, size_t alloc);

dma_pool_create() initializes a pool of DMA-coherent buffers
for use with a given device.  It must be called in a context which
can sleep.

The "name" is for diagnostics (like a struct kmem_cache name); dev and size
are like what you'd pass to dma_alloc_coherent().  The device's hardware
alignment requirement for this type of data is "align" (which is expressed
in bytes, and must be a power of two).  If your device has no boundary
crossing restrictions, pass 0 for alloc; passing 4096 says memory allocated
from this pool must not cross 4KByte boundaries.

::

    void *
    dma_pool_zalloc(struct dma_pool *pool, gfp_t mem_flags,
                    dma_addr_t *handle)

Wraps dma_pool_alloc() and also zeroes the returned memory if the
allocation attempt succeeded.


::

    void *
    dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
                   dma_addr_t *dma_handle);

This allocates memory from the pool; the returned memory will meet the
size and alignment requirements specified at creation time.  Pass
GFP_ATOMIC to prevent blocking, or if it's permitted (not
in_interrupt, not holding SMP locks), pass GFP_KERNEL to allow
blocking.  Like dma_alloc_coherent(), this returns two values: an
address usable by the CPU, and the DMA address usable by the pool's
device.

::

    void
    dma_pool_free(struct dma_pool *pool, void *vaddr,
                  dma_addr_t addr);

This puts memory back into the pool.  The pool is what was passed to
dma_pool_alloc(); the CPU (vaddr) and DMA addresses are what
were returned when that routine allocated the memory being freed.

::

    void
    dma_pool_destroy(struct dma_pool *pool);

dma_pool_destroy() frees the resources of the pool.  It must be
called in a context which can sleep.  Make sure you've freed all allocated
memory back to the pool before you destroy it.
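
Putting the pool calls together, a driver managing many small
descriptors might do something like the following sketch (the pool
name, the 64-byte descriptor size and the alignment are made-up
values)::

    struct dma_pool *pool;
    void *desc;
    dma_addr_t desc_dma;

    /* 64-byte descriptors, 64-byte aligned, no boundary restriction */
    pool = dma_pool_create("foo-desc", dev, 64, 64, 0);
    if (!pool)
            return -ENOMEM;

    desc = dma_pool_zalloc(pool, GFP_KERNEL, &desc_dma);
    if (!desc) {
            dma_pool_destroy(pool);
            return -ENOMEM;
    }

    /* ... program desc_dma into the device, fill in desc ... */

    dma_pool_free(pool, desc, desc_dma);
    dma_pool_destroy(pool);
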
Part Ic - DMA addressing limitations
------------------------------------

::

    int
    dma_set_mask_and_coherent(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
streaming and coherent DMA mask parameters if it is.

Returns: 0 if successful and a negative error if not.

::

    int
    dma_set_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.

::

    int
    dma_set_coherent_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.

::

    u64
    dma_get_required_mask(struct device *dev)

This API returns the mask that the platform requires to
operate efficiently.  Usually this means the returned mask
is the minimum required to cover all of memory.  Examining the
required mask gives drivers with variable descriptor sizes the
opportunity to use smaller descriptors as necessary.

Requesting the required mask does not alter the current mask.  If you
wish to take advantage of it, you should issue a dma_set_mask()
call to set the mask to the value returned.
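
As a sketch of how these calls are typically used, a driver for
64-bit capable hardware might try the full mask first and fall back
to 32 bits::

    /* prefer 64-bit addressing, fall back to 32-bit */
    if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)) &&
        dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
            dev_warn(dev, "no suitable DMA addressing available\n");
            return -ENODEV;
    }
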
::

    size_t
    dma_max_mapping_size(struct device *dev);

Returns the maximum size of a mapping for the device.  The size parameter
of the mapping functions like dma_map_single(), dma_map_page() and
others should not be larger than the returned value.

Part Id - Streaming DMA mappings
--------------------------------

::

    dma_addr_t
    dma_map_single(struct device *dev, void *cpu_addr, size_t size,
                   enum dma_data_direction direction)

Maps a piece of processor virtual memory so it can be accessed by the
device and returns the DMA address of the memory.

The direction for both APIs may be converted freely by casting.
However the dma_API uses a strongly typed enumerator for its
direction:

======================= =============================================
DMA_NONE                no direction (used for debugging)
DMA_TO_DEVICE           data is going from the memory to the device
DMA_FROM_DEVICE         data is coming from the device to the memory
DMA_BIDIRECTIONAL       direction isn't known
======================= =============================================

.. note::

    Not all memory regions in a machine can be mapped by this API.
    Further, contiguous kernel virtual space may not be contiguous as
    physical memory.  Since this API does not provide any scatter/gather
    capability, it will fail if the user tries to map a non-physically
    contiguous piece of memory.  For this reason, memory to be mapped by
    this API should be obtained from sources which guarantee it to be
    physically contiguous (like kmalloc).

    Further, the DMA address of the memory must be within the
    dma_mask of the device (the dma_mask is a bit mask of the
    addressable region for the device, i.e., if the DMA address of
    the memory ANDed with the dma_mask is still equal to the DMA
    address, then the device can perform DMA to the memory).  To
    ensure that the memory allocated by kmalloc is within the dma_mask,
    the driver may specify various platform-dependent flags to restrict
    the DMA address range of the allocation (e.g., on x86, GFP_DMA
    guarantees to be within the first 16MB of available DMA addresses,
    as required by ISA devices).

    Note also that the above constraints on physical contiguity and
    dma_mask may not apply if the platform has an IOMMU (a device which
    maps an I/O DMA address to a physical memory address).  However, to be
    portable, device driver writers may *not* assume that such an IOMMU
    exists.

.. warning::

    Memory coherency operates at a granularity called the cache
    line width.  In order for memory mapped by this API to operate
    correctly, the mapped region must begin exactly on a cache line
    boundary and end exactly on one (to prevent two separately mapped
    regions from sharing a single cache line).  Since the cache line size
    may not be known at compile time, the API will not enforce this
    requirement.  Therefore, it is recommended that driver writers who
    don't take special care to determine the cache line size at run time
    only map virtual regions that begin and end on page boundaries (which
    are guaranteed also to be cache line boundaries).

    DMA_TO_DEVICE synchronisation must be done after the last modification
    of the memory region by the software and before it is handed off to
    the device.  Once this primitive is used, memory covered by this
    primitive should be treated as read-only by the device.  If the device
    may write to it at any point, it should be DMA_BIDIRECTIONAL (see
    below).

    DMA_FROM_DEVICE synchronisation must be done before the driver
    accesses data that may be changed by the device.  This memory should
    be treated as read-only by the driver.  If the driver needs to write
    to it at any point, it should be DMA_BIDIRECTIONAL (see below).

    DMA_BIDIRECTIONAL requires special handling: it means that the driver
    isn't sure if the memory was modified before being handed off to the
    device and also isn't sure if the device will also modify it.  Thus,
    you must always sync bidirectional memory twice: once before the
    memory is handed off to the device (to make sure all memory changes
    are flushed from the processor) and once before the data may be
    accessed after being used by the device (to make sure any processor
    cache lines are updated with data that the device may have changed).

::

    void
    dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
                     enum dma_data_direction direction)

Unmaps the region previously mapped.  All the parameters must be
identical to those passed to (and returned by) the mapping API.
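
To illustrate the streaming pattern, here is a sketch that maps a
kmalloc()ed buffer for a transfer to the device and unmaps it when the
device has finished.  buf and len are assumed to come from the caller,
and dma_mapping_error() is described further below::

    dma_addr_t dma_handle;

    dma_handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
    if (dma_mapping_error(dev, dma_handle))
            return -ENOMEM;

    /* tell the device to read len bytes starting at dma_handle;
     * the CPU must not touch buf until the device is done */

    dma_unmap_single(dev, dma_handle, len, DMA_TO_DEVICE);
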
::

    dma_addr_t
    dma_map_page(struct device *dev, struct page *page,
                 unsigned long offset, size_t size,
                 enum dma_data_direction direction)

    void
    dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
                   enum dma_data_direction direction)

API for mapping and unmapping for pages.  All the notes and warnings
for the other mapping APIs apply here.  Also, although the <offset>
and <size> parameters are provided to do partial page mapping, it is
recommended that you never use these unless you really know what the
cache width is.

::

    dma_addr_t
    dma_map_resource(struct device *dev, phys_addr_t phys_addr, size_t size,
                     enum dma_data_direction dir, unsigned long attrs)

    void
    dma_unmap_resource(struct device *dev, dma_addr_t addr, size_t size,
                       enum dma_data_direction dir, unsigned long attrs)

API for mapping and unmapping for MMIO resources.  All the notes and
warnings for the other mapping APIs apply here.  The API should only be
used to map device MMIO resources; mapping of RAM is not permitted.

::

    int
    dma_mapping_error(struct device *dev, dma_addr_t dma_addr)

In some circumstances dma_map_single(), dma_map_page() and dma_map_resource()
will fail to create a mapping.  A driver can check for these errors by testing
the returned DMA address with dma_mapping_error().  A non-zero return value
means the mapping could not be created and the driver should take appropriate
action (e.g. reduce current DMA mapping usage or delay and try again later).

::

    int
    dma_map_sg(struct device *dev, struct scatterlist *sg,
               int nents, enum dma_data_direction direction)

Returns: the number of DMA address segments mapped (this may be shorter
than <nents> passed in if some elements of the scatter/gather list are
physically or virtually adjacent and an IOMMU maps them with a single
entry).

Please note that the sg cannot be mapped again if it has been mapped once.
The mapping process is allowed to destroy information in the sg.

As with the other mapping interfaces, dma_map_sg() can fail.  When it
does, 0 is returned and a driver must take appropriate action.  It is
critical that the driver do something; in the case of a block driver,
aborting the request or even oopsing is better than doing nothing and
corrupting the filesystem.

With scatterlists, you use the resulting mapping like this::

    int i, count = dma_map_sg(dev, sglist, nents, direction);
    struct scatterlist *sg;

    for_each_sg(sglist, sg, count, i) {
            hw_address[i] = sg_dma_address(sg);
            hw_len[i] = sg_dma_len(sg);
    }

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. with an IOMMU, or if several pages just happen to be
physically contiguous) and returns the actual number of sg entries it
mapped them to.  On failure, 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

::

    void
    dma_unmap_sg(struct device *dev, struct scatterlist *sg,
                 int nents, enum dma_data_direction direction)

Unmap the previously mapped scatter/gather list.  All the parameters
must be the same as those passed into the scatter/gather mapping
API.

Note: <nents> must be the number you passed in, *not* the number of
DMA address entries returned.

::

    void
    dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
                            size_t size,
                            enum dma_data_direction direction)

    void
    dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
                               size_t size,
                               enum dma_data_direction direction)

    void
    dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
                        int nents,
                        enum dma_data_direction direction)

    void
    dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
                           int nents,
                           enum dma_data_direction direction)

Synchronise a single contiguous or scatter/gather mapping for the CPU
and device.  With the sync_sg API, all the parameters must be the same
as those passed into the scatter/gather mapping API.  With the sync_single
API, you can use dma_handle and size parameters that aren't identical to
those passed into the single mapping API to do a partial sync.


.. note::

    You must do this:

    - Before reading values that have been written by DMA from the device
      (use the DMA_FROM_DEVICE direction)
    - After writing values that will be written to the device using DMA
      (use the DMA_TO_DEVICE direction)
    - Before *and* after handing memory to the device if the memory is
      DMA_BIDIRECTIONAL

See also dma_map_single().
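
As a sketch of when the sync calls are needed, consider a receive
buffer that stays mapped DMA_FROM_DEVICE across many transfers and is
handed back and forth between the CPU and the device.  dma_handle, buf
and len are assumed to have been set up by an earlier dma_map_single(),
and process_data() is a hypothetical consumer::

    /* give the buffer (back) to the device */
    dma_sync_single_for_device(dev, dma_handle, len, DMA_FROM_DEVICE);

    /* ... device DMAs data into the buffer ... */

    /* reclaim the buffer before the CPU reads it */
    dma_sync_single_for_cpu(dev, dma_handle, len, DMA_FROM_DEVICE);
    process_data(buf, len);
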
::

    dma_addr_t
    dma_map_single_attrs(struct device *dev, void *cpu_addr, size_t size,
                         enum dma_data_direction dir,
                         unsigned long attrs)

    void
    dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
                           size_t size, enum dma_data_direction dir,
                           unsigned long attrs)

    int
    dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
                     int nents, enum dma_data_direction dir,
                     unsigned long attrs)

    void
    dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
                       int nents, enum dma_data_direction dir,
                       unsigned long attrs)

The four functions above are just like the counterpart functions
without the _attrs suffixes, except that they pass an optional
dma_attrs.

The interpretation of DMA attributes is architecture-specific, and
each attribute should be documented in Documentation/DMA-attributes.txt.

If attrs is 0, the semantics of each of these functions
are identical to those of the corresponding function
without the _attrs suffix.  As a result dma_map_single_attrs()
can generally replace dma_map_single(), etc.

As an example of the use of the ``*_attrs`` functions, here's how
you could pass an attribute DMA_ATTR_FOO when mapping memory
for DMA::

    #include <linux/dma-mapping.h>
    /* DMA_ATTR_FOO should be defined in linux/dma-mapping.h and
     * documented in Documentation/DMA-attributes.txt */
    ...

            unsigned long attrs = 0;

            attrs |= DMA_ATTR_FOO;
            ....
            n = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, attrs);
            ....

Architectures that care about DMA_ATTR_FOO would check for its
presence in their implementations of the mapping and unmapping
routines, e.g.::

    void whizco_dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
                                 int nents, enum dma_data_direction dir,
                                 unsigned long attrs)
    {
            ....
            if (attrs & DMA_ATTR_FOO)
                    /* twizzle the frobnozzle */
            ....
    }


Part II - Advanced dma usage
----------------------------

Warning: These pieces of the DMA API should not be used in the
majority of cases, since they cater for unlikely corner cases that
don't belong in usual drivers.

If you don't understand how cache line coherency works between a
processor and an I/O device, you should not be using this part of the
API at all.

::

    void *
    dma_alloc_attrs(struct device *dev, size_t size, dma_addr_t *dma_handle,
                    gfp_t flag, unsigned long attrs)

Identical to dma_alloc_coherent() except that when the
DMA_ATTR_NON_CONSISTENT flag is passed in the attrs argument, the
platform will choose to return either consistent or non-consistent memory
as it sees fit.  By using this API, you are guaranteeing to the platform
that you have all the correct and necessary sync points for this memory
in the driver should it choose to return non-consistent memory.

Note: where the platform can return consistent memory, it will
guarantee that the sync points become nops.

Warning:  Handling non-consistent memory is a real pain.  You should
only use this API if you positively know your driver will be
required to work on one of the rare (usually non-PCI) architectures
that simply cannot make consistent memory.

::

    void
    dma_free_attrs(struct device *dev, size_t size, void *cpu_addr,
                   dma_addr_t dma_handle, unsigned long attrs)

Free memory allocated by dma_alloc_attrs().  All common
parameters must be identical to those otherwise passed to dma_free_coherent,
and the attrs argument must be identical to the attrs passed to
dma_alloc_attrs().

::

    int
    dma_get_cache_alignment(void)

Returns the processor cache alignment.  This is the absolute minimum
alignment *and* width that you must observe when either mapping
memory or doing partial flushes.

.. note::

    This API may return a number *larger* than the actual cache
    line, but it will guarantee that one or more cache lines fit exactly
    into the width returned by this call.  It will also always be a power
    of two for easy alignment.

::

    void
    dma_cache_sync(struct device *dev, void *vaddr, size_t size,
                   enum dma_data_direction direction)

Do a partial sync of memory that was allocated by dma_alloc_attrs() with
the DMA_ATTR_NON_CONSISTENT flag starting at virtual address vaddr and
continuing on for size.  Again, you *must* observe the cache line
boundaries when doing this.
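
As a sketch of the discipline this requires, a driver that asked for
possibly non-consistent memory must bracket every handoff to the device
with dma_cache_sync().  dev, data and size are assumed to come from the
caller::

    void *vaddr;
    dma_addr_t dma_handle;

    vaddr = dma_alloc_attrs(dev, size, &dma_handle, GFP_KERNEL,
                            DMA_ATTR_NON_CONSISTENT);
    if (!vaddr)
            return -ENOMEM;

    /* CPU fills the buffer, then pushes it out to the device */
    memcpy(vaddr, data, size);
    dma_cache_sync(dev, vaddr, size, DMA_TO_DEVICE);

    /* ... device reads the buffer ... */

    dma_free_attrs(dev, size, vaddr, dma_handle,
                   DMA_ATTR_NON_CONSISTENT);
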
::

    int
    dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
                                dma_addr_t device_addr, size_t size);

Declare a region of memory to be handed out by dma_alloc_coherent() when
it's asked for coherent memory for this device.

phys_addr is the CPU physical address to which the memory is currently
assigned (this will be ioremapped so the CPU can access the region).

device_addr is the DMA address the device needs to be programmed
with to actually address this memory (this will be handed out as the
dma_addr_t in dma_alloc_coherent()).

size is the size of the area (must be a multiple of PAGE_SIZE).

As a simplification for the platforms, only *one* such region of
memory may be declared per device.

For reasons of efficiency, most platforms choose to track the declared
region only at the granularity of a page.  For smaller allocations,
you should use the dma_pool() API.

::

    void
    dma_release_declared_memory(struct device *dev)

Remove the memory region previously declared from the system.  This
API performs *no* in-use checking for this region and will return
unconditionally having removed all the required structures.  It is the
driver's job to ensure that no parts of this memory region are
currently in use.
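
For example, a device with a small block of on-board SRAM might declare
it like this sketch.  Both addresses and the size are hypothetical, and
a non-zero return is assumed to indicate an error, per the prototype
above::

    #define FOO_SRAM_PHYS	0x90000000	/* CPU physical address */
    #define FOO_SRAM_BUS	0x00000000	/* address as seen by the device */
    #define FOO_SRAM_SIZE	(64 * PAGE_SIZE)

    if (dma_declare_coherent_memory(dev, FOO_SRAM_PHYS, FOO_SRAM_BUS,
                                    FOO_SRAM_SIZE))
            return -ENOMEM;

    /* dma_alloc_coherent() for this device now hands out SRAM */
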
Part III - Debug drivers use of the DMA-API
-------------------------------------------

The DMA-API as described above has some constraints.  DMA addresses must be
released with the corresponding function with the same size for example.  With
the advent of hardware IOMMUs it becomes more and more important that drivers
do not violate those constraints.  In the worst case such a violation can
result in data corruption up to destroyed filesystems.

To debug drivers and find bugs in the usage of the DMA-API, checking code can
be compiled into the kernel which will tell the developer about those
violations.  If your architecture supports it, you can select the "Enable
debugging of DMA-API usage" option in your kernel configuration.  Enabling this
option has a performance impact.  Do not enable it in production kernels.

If you boot the resulting kernel, it will contain code which does some
bookkeeping about what DMA memory was allocated for which device.  If this
code detects an error it prints a warning message with some details into your
kernel log.  An example warning message may look like this::

    WARNING: at /data2/repos/linux-2.6-iommu/lib/dma-debug.c:448
            check_unmap+0x203/0x490()
    Hardware name:
    forcedeth 0000:00:08.0: DMA-API: device driver frees DMA memory with wrong
            function [device address=0x00000000640444be] [size=66 bytes] [mapped as
            single] [unmapped as page]
    Modules linked in: nfsd exportfs bridge stp llc r8169
    Pid: 0, comm: swapper Tainted: G        W  2.6.28-dmatest-09289-g8bb99c0 #1
    Call Trace:
    <IRQ>  [<ffffffff80240b22>] warn_slowpath+0xf2/0x130
    [<ffffffff80647b70>] _spin_unlock+0x10/0x30
    [<ffffffff80537e75>] usb_hcd_link_urb_to_ep+0x75/0xc0
    [<ffffffff80647c22>] _spin_unlock_irqrestore+0x12/0x40
    [<ffffffff8055347f>] ohci_urb_enqueue+0x19f/0x7c0
    [<ffffffff80252f96>] queue_work+0x56/0x60
    [<ffffffff80237e10>] enqueue_task_fair+0x20/0x50
    [<ffffffff80539279>] usb_hcd_submit_urb+0x379/0xbc0
    [<ffffffff803b78c3>] cpumask_next_and+0x23/0x40
    [<ffffffff80235177>] find_busiest_group+0x207/0x8a0
    [<ffffffff8064784f>] _spin_lock_irqsave+0x1f/0x50
    [<ffffffff803c7ea3>] check_unmap+0x203/0x490
    [<ffffffff803c8259>] debug_dma_unmap_page+0x49/0x50
    [<ffffffff80485f26>] nv_tx_done_optimized+0xc6/0x2c0
    [<ffffffff80486c13>] nv_nic_irq_optimized+0x73/0x2b0
    [<ffffffff8026df84>] handle_IRQ_event+0x34/0x70
    [<ffffffff8026ffe9>] handle_edge_irq+0xc9/0x150
    [<ffffffff8020e3ab>] do_IRQ+0xcb/0x1c0
    [<ffffffff8020c093>] ret_from_intr+0x0/0xa
    <EOI> <4>---[ end trace f6435a98e2a38c0e ]---

The driver developer can find the driver and the device including a stacktrace
of the DMA-API call which caused this warning.

By default only the first error will result in a warning message.  All other
errors will only be counted silently.  This limitation exists to prevent the
code from flooding your kernel log.  To support debugging a device driver,
this can be disabled via debugfs.  See the debugfs interface documentation
below for details.

The debugfs directory for the DMA-API debugging code is called dma-api/.  In
this directory the following files can currently be found:

=============================== ===============================================
dma-api/all_errors              This file contains a numeric value. If this
                                value is not equal to zero the debugging code
                                will print a warning for every error it finds
                                into the kernel log. Be careful with this
                                option, as it can easily flood your logs.

dma-api/disabled                This read-only file contains the character 'Y'
                                if the debugging code is disabled. This can
                                happen when it runs out of memory or if it was
                                disabled at boot time.

dma-api/dump                    This read-only file contains current DMA
                                mappings.

dma-api/error_count             This file is read-only and shows the total
                                numbers of errors found.

dma-api/num_errors              The number in this file shows how many
                                warnings will be printed to the kernel log
                                before it stops. This number is initialized to
                                one at system boot and can be set by writing
                                into this file.

dma-api/min_free_entries        This read-only file can be read to get the
                                minimum number of free dma_debug_entries the
                                allocator has ever seen. If this value goes
                                down to zero the code will attempt to increase
                                nr_total_entries to compensate.

dma-api/num_free_entries        The current number of free dma_debug_entries
                                in the allocator.

dma-api/nr_total_entries        The total number of dma_debug_entries in the
                                allocator, both free and used.

dma-api/driver_filter           You can write a name of a driver into this file
                                to limit the debug output to requests from that
                                particular driver. Write an empty string to
                                that file to disable the filter and see
                                all errors again.
=============================== ===============================================
If you have this code compiled into your kernel it will be enabled by default.
If you want to boot without the bookkeeping anyway you can provide
'dma_debug=off' as a boot parameter.  This will disable DMA-API debugging.
Notice that you cannot enable it again at runtime.  You have to reboot to do
so.

If you want to see debug messages only for a special device driver you can
specify the dma_debug_driver=<drivername> parameter.  This will enable the
driver filter at boot time.  The debug code will only print errors for that
driver afterwards.  This filter can be disabled or changed later using debugfs.

When the code disables itself at runtime this is most likely because it ran
out of dma_debug_entries and was unable to allocate more on-demand.  65536
entries are preallocated at boot - if this is too low for you, boot with
'dma_debug_entries=<your_desired_number>' to override the default.  Note
that the code allocates entries in batches, so the exact number of
preallocated entries may be greater than the actual number requested.  The
code will print to the kernel log each time it has dynamically allocated
as many entries as were initially preallocated.  This is to indicate that a
larger preallocation size may be appropriate, or if it happens continually
that a driver may be leaking mappings.

::

    void
    debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr);

The dma-debug interface debug_dma_mapping_error() exists to debug drivers
that fail to check for DMA mapping errors on addresses returned by the
dma_map_single() and dma_map_page() interfaces.  This interface clears a
flag set by debug_dma_map_page() to indicate that dma_mapping_error() has
been called by the driver.  When the driver does the unmap, debug_dma_unmap()
checks the flag and, if it is still set, prints a warning message that
includes the call trace that leads up to the unmap.  This interface can be
called from dma_mapping_error() routines to enable DMA mapping error check
debugging.
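
A driver therefore satisfies this check simply by testing every address
a streaming map returns, as in the Part Id example.  A compact sketch of
the pattern, where addr, size and direction are placeholders supplied by
the caller::

    dma_addr_t dma_handle = dma_map_single(dev, addr, size, direction);

    /* this call clears the flag that dma-debug set at map time */
    if (dma_mapping_error(dev, dma_handle))
            goto map_failed;	/* hypothetical error path */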