Diffstat (limited to 'Documentation')
-rw-r--r--	Documentation/DMA-API.txt	152
1 file changed, 15 insertions, 137 deletions
diff --git a/Documentation/DMA-API.txt b/Documentation/DMA-API.txt
index 0fc5728ed487..5e018f67b409 100644
--- a/Documentation/DMA-API.txt
+++ b/Documentation/DMA-API.txt
@@ -4,20 +4,18 @@
 James E.J. Bottomley <James.Bottomley@HansenPartnership.com>

 This document describes the DMA API. For a more gentle introduction
-phrased in terms of the pci_ equivalents (and actual examples) see
+of the API (and actual examples) see
 Documentation/PCI/PCI-DMA-mapping.txt.

-This API is split into two pieces. Part I describes the API and the
-corresponding pci_ API. Part II describes the extensions to the API
-for supporting non-consistent memory machines. Unless you know that
-your driver absolutely has to support non-consistent platforms (this
-is usually only legacy platforms) you should only use the API
-described in part I.
+This API is split into two pieces. Part I describes the API. Part II
+describes the extensions to the API for supporting non-consistent
+memory machines. Unless you know that your driver absolutely has to
+support non-consistent platforms (this is usually only legacy
+platforms) you should only use the API described in part I.

-Part I - pci_ and dma_ Equivalent API
+Part I - dma_ API
 -------------------------------------

-To get the pci_ API, you must #include <linux/pci.h>
 To get the dma_ API, you must #include <linux/dma-mapping.h>


@@ -27,9 +25,6 @@ Part Ia - Using large dma-coherent buffers
 void *
 dma_alloc_coherent(struct device *dev, size_t size,
			     dma_addr_t *dma_handle, gfp_t flag)
-void *
-pci_alloc_consistent(struct pci_dev *dev, size_t size,
-			     dma_addr_t *dma_handle)

 Consistent memory is memory for which a write by either the device or
 the processor can immediately be read by the processor or device
@@ -53,15 +48,11 @@ The simplest way to do that is to use the dma_pool calls (see below).
 The flag parameter (dma_alloc_coherent only) allows the caller to
 specify the GFP_ flags (see kmalloc) for the allocation (the
 implementation may choose to ignore flags that affect the location of
-the returned memory, like GFP_DMA). For pci_alloc_consistent, you
-must assume GFP_ATOMIC behaviour.
+the returned memory, like GFP_DMA).

 void
 dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
		       dma_addr_t dma_handle)
-void
-pci_free_consistent(struct pci_dev *dev, size_t size, void *cpu_addr,
-		       dma_addr_t dma_handle)

 Free the region of consistent memory you previously allocated. dev,
 size and dma_handle must all be the same as those passed into the
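
For illustration, a minimal sketch of how a driver might pair these two
calls (the &pdev->dev device pointer, the PAGE_SIZE buffer and the error
handling below are assumptions for the example, not part of the patch):

	/* Allocate a page-sized coherent buffer, use it, then free it;
	 * dev, size and dma_handle must match the allocation exactly. */
	dma_addr_t dma_handle;
	void *cpu_addr;

	cpu_addr = dma_alloc_coherent(&pdev->dev, PAGE_SIZE,
				      &dma_handle, GFP_KERNEL);
	if (!cpu_addr)
		return -ENOMEM;
	/* ... program dma_handle into the device, use cpu_addr ... */
	dma_free_coherent(&pdev->dev, PAGE_SIZE, cpu_addr, dma_handle);
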
@@ -89,10 +80,6 @@ for alignment, like queue heads needing to be aligned on N-byte boundaries.
     dma_pool_create(const char *name, struct device *dev,
		    size_t size, size_t align, size_t alloc);

-    struct pci_pool *
-    pci_pool_create(const char *name, struct pci_device *dev,
-		    size_t size, size_t align, size_t alloc);
-
 The pool create() routines initialize a pool of dma-coherent buffers
 for use with a given device. It must be called in a context which
 can sleep.
@@ -108,9 +95,6 @@ from this pool must not cross 4KByte boundaries.
	void *dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
			dma_addr_t *dma_handle);

-	void *pci_pool_alloc(struct pci_pool *pool, gfp_t gfp_flags,
-			dma_addr_t *dma_handle);
-
 This allocates memory from the pool; the returned memory will meet the size
 and alignment requirements specified at creation time. Pass GFP_ATOMIC to
 prevent blocking, or if it's permitted (not in_interrupt, not holding SMP locks),
@@ -122,9 +106,6 @@ pool's device.
	void dma_pool_free(struct dma_pool *pool, void *vaddr,
			dma_addr_t addr);

-	void pci_pool_free(struct pci_pool *pool, void *vaddr,
-			dma_addr_t addr);
-
 This puts memory back into the pool. The pool is what was passed to
 the pool allocation routine; the cpu (vaddr) and dma addresses are what
 were returned when that routine allocated the memory being freed.
@@ -132,8 +113,6 @@ were returned when that routine allocated the memory being freed.

	void dma_pool_destroy(struct dma_pool *pool);

-	void pci_pool_destroy(struct pci_pool *pool);
-
 The pool destroy() routines free the resources of the pool. They must be
 called in a context which can sleep. Make sure you've freed all allocated
 memory back to the pool before you destroy it.
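
A hedged sketch of the whole pool lifecycle described above (the pool
name, the 64-byte descriptor size/alignment and the device are
assumptions; create and destroy must be able to sleep):

	/* Create a pool of small dma-coherent descriptors, allocate
	 * one, then release everything back before destroying. */
	struct dma_pool *pool;
	dma_addr_t dma;
	void *desc;

	pool = dma_pool_create("mydev_desc", &pdev->dev, 64, 64, 0);
	if (!pool)
		return -ENOMEM;
	desc = dma_pool_alloc(pool, GFP_KERNEL, &dma);
	if (desc) {
		/* ... hand dma to the hardware, fill desc via the cpu ... */
		dma_pool_free(pool, desc, dma);
	}
	dma_pool_destroy(pool);
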
@@ -144,8 +123,6 @@ Part Ic - DMA addressing limitations

 int
 dma_supported(struct device *dev, u64 mask)
-int
-pci_dma_supported(struct pci_dev *hwdev, u64 mask)

 Checks to see if the device can support DMA to the memory described by
 mask.
@@ -159,8 +136,6 @@ driver writers.

 int
 dma_set_mask(struct device *dev, u64 mask)
-int
-pci_set_dma_mask(struct pci_device *dev, u64 mask)

 Checks to see if the mask is possible and updates the device
 parameters if it is.
@@ -169,8 +144,6 @@ Returns: 0 if successful and a negative error if not.

 int
 dma_set_coherent_mask(struct device *dev, u64 mask)
-int
-pci_set_consistent_dma_mask(struct pci_device *dev, u64 mask)

 Checks to see if the mask is possible and updates the device
 parameters if it is.
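
For illustration, a common pattern built from these calls (the
64-bit-then-32-bit fallback policy is an assumption, not something the
API mandates):

	/* Prefer 64-bit streaming DMA, fall back to 32-bit if the
	 * platform cannot support it, and give up otherwise. */
	if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)) &&
	    dma_set_mask(&pdev->dev, DMA_BIT_MASK(32)))
		return -EIO;
	if (dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32)))
		return -EIO;
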
@@ -197,9 +170,6 @@ Part Id - Streaming DMA mappings
 dma_addr_t
 dma_map_single(struct device *dev, void *cpu_addr, size_t size,
		      enum dma_data_direction direction)
-dma_addr_t
-pci_map_single(struct pci_dev *hwdev, void *cpu_addr, size_t size,
-		      int direction)

 Maps a piece of processor virtual memory so it can be accessed by the
 device and returns the physical handle of the memory.
@@ -208,14 +178,10 @@ The direction for both api's may be converted freely by casting.
 However the dma_ API uses a strongly typed enumerator for its
 direction:

-DMA_NONE		= PCI_DMA_NONE		no direction (used for
-						debugging)
-DMA_TO_DEVICE		= PCI_DMA_TODEVICE	data is going from the
-						memory to the device
-DMA_FROM_DEVICE	= PCI_DMA_FROMDEVICE	data is coming from
-						the device to the
-						memory
-DMA_BIDIRECTIONAL	= PCI_DMA_BIDIRECTIONAL	direction isn't known
+DMA_NONE		no direction (used for debugging)
+DMA_TO_DEVICE		data is going from the memory to the device
+DMA_FROM_DEVICE	data is coming from the device to the memory
+DMA_BIDIRECTIONAL	direction isn't known

 Notes: Not all memory regions in a machine can be mapped by this
 API. Further, regions that appear to be physically contiguous in
@@ -278,9 +244,6 @@ cache lines are updated with data that the device may have changed).
 void
 dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
		 enum dma_data_direction direction)
-void
-pci_unmap_single(struct pci_dev *hwdev, dma_addr_t dma_addr,
-		 size_t size, int direction)

 Unmaps the region previously mapped. All the parameters passed in
 must be identical to those passed in (and returned) by the mapping
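
A minimal sketch of a map/unmap pair using the strongly typed direction
(buf and len are assumed driver state, not taken from the patch):

	/* Map a kmalloc'ed buffer the device will read from, then
	 * unmap it with identical parameters once the DMA is done. */
	dma_addr_t handle;

	handle = dma_map_single(&pdev->dev, buf, len, DMA_TO_DEVICE);
	/* ... start the transfer, wait for completion ... */
	dma_unmap_single(&pdev->dev, handle, len, DMA_TO_DEVICE);
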
@@ -290,15 +253,9 @@ dma_addr_t
 dma_map_page(struct device *dev, struct page *page,
		    unsigned long offset, size_t size,
		    enum dma_data_direction direction)
-dma_addr_t
-pci_map_page(struct pci_dev *hwdev, struct page *page,
-		    unsigned long offset, size_t size, int direction)
 void
 dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
	       enum dma_data_direction direction)
-void
-pci_unmap_page(struct pci_dev *hwdev, dma_addr_t dma_address,
-	       size_t size, int direction)

 API for mapping and unmapping for pages. All the notes and warnings
 for the other mapping APIs apply here. Also, although the <offset>
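
A hedged page-based counterpart of the earlier sketch (the page pointer
and full-page length are assumptions):

	/* Map one whole page (offset 0) for the device to write into. */
	dma_addr_t handle;

	handle = dma_map_page(&pdev->dev, page, 0, PAGE_SIZE,
			      DMA_FROM_DEVICE);
	/* ... receive data ... */
	dma_unmap_page(&pdev->dev, handle, PAGE_SIZE, DMA_FROM_DEVICE);
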
@@ -309,9 +266,6 @@ cache width is.
 int
 dma_mapping_error(struct device *dev, dma_addr_t dma_addr)

-int
-pci_dma_mapping_error(struct pci_dev *hwdev, dma_addr_t dma_addr)
-
 In some circumstances dma_map_single and dma_map_page will fail to create
 a mapping. A driver can check for these errors by testing the returned
 dma address with dma_mapping_error(). A non-zero return value means the mapping
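
For illustration, the check the paragraph above asks for (the recovery
strategy is an assumption; a driver may instead reduce its mapping
usage or retry later):

	/* Never use a streaming mapping without testing it first. */
	dma_addr_t handle;

	handle = dma_map_single(&pdev->dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(&pdev->dev, handle))
		return -ENOMEM;
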
@@ -321,9 +275,6 @@ reduce current DMA mapping usage or delay and try again later).
	int
	dma_map_sg(struct device *dev, struct scatterlist *sg,
		int nents, enum dma_data_direction direction)
-	int
-	pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg,
-		int nents, int direction)

 Returns: the number of physical segments mapped (this may be shorter
 than <nents> passed in if some elements of the scatter/gather list are
@@ -363,9 +314,6 @@ accessed sg->address and sg->length as shown above.
	void
	dma_unmap_sg(struct device *dev, struct scatterlist *sg,
		int nhwentries, enum dma_data_direction direction)
-	void
-	pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg,
-		int nents, int direction)

 Unmap the previously mapped scatter/gather list. All the parameters
 must be the same as those and passed in to the scatter/gather mapping
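
A sketch of the map/walk/unmap pattern for scatterlists (sglist, nents
and the hw_set_* device helpers are hypothetical):

	/* Map the list, program each mapped segment into the device,
	 * then unmap using the original nents, not the mapped count. */
	struct scatterlist *sg;
	int i, count;

	count = dma_map_sg(&pdev->dev, sglist, nents, DMA_TO_DEVICE);
	if (count == 0)
		return -ENOMEM;
	for_each_sg(sglist, sg, count, i) {
		hw_set_addr(i, sg_dma_address(sg));
		hw_set_len(i, sg_dma_len(sg));
	}
	/* ... run the transfer ... */
	dma_unmap_sg(&pdev->dev, sglist, nents, DMA_TO_DEVICE);
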
@@ -378,26 +326,14 @@ void
 dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size,
			enum dma_data_direction direction)
 void
-pci_dma_sync_single_for_cpu(struct pci_dev *hwdev, dma_addr_t dma_handle,
-			    size_t size, int direction)
-void
 dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle, size_t size,
			   enum dma_data_direction direction)
 void
-pci_dma_sync_single_for_device(struct pci_dev *hwdev, dma_addr_t dma_handle,
-			       size_t size, int direction)
-void
 dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems,
		    enum dma_data_direction direction)
 void
-pci_dma_sync_sg_for_cpu(struct pci_dev *hwdev, struct scatterlist *sg,
-			int nelems, int direction)
-void
 dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nelems,
		       enum dma_data_direction direction)
-void
-pci_dma_sync_sg_for_device(struct pci_dev *hwdev, struct scatterlist *sg,
-			   int nelems, int direction)

 Synchronise a single contiguous or scatter/gather mapping for the cpu
 and device. With the sync_sg API, all the parameters must be the same
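
A minimal sketch of the ping-pong these calls enable on a long-lived
DMA_FROM_DEVICE mapping (handle, buf, len and process_rx are assumed
driver state, not taken from the patch):

	/* Hand the buffer to the cpu to inspect the received data,
	 * then give it back to the device before reusing the mapping. */
	dma_sync_single_for_cpu(&pdev->dev, handle, len, DMA_FROM_DEVICE);
	process_rx(buf, len);
	dma_sync_single_for_device(&pdev->dev, handle, len, DMA_FROM_DEVICE);
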
@@ -482,70 +418,12 @@ void whizco_dma_map_sg_attrs(struct device *dev, dma_addr_t dma_addr,
 ....


-Part Ie - Optimizing Unmap State Space Consumption
---------------------------------
-
-On some platforms, dma_unmap_{single,page}() is simply a nop.
-Therefore, keeping track of the mapping address and length is a waste
-of space. Instead of filling your drivers up with ifdefs and the like
-to "work around" this (which would defeat the whole purpose of a
-portable API) the following facilities are provided.
-
-Actually, instead of describing the macros one by one, we'll
-transform some example code.
-
-1) Use DEFINE_DMA_UNMAP_{ADDR,LEN} in state saving structures.
-   Example, before:
-
-	struct ring_state {
-		struct sk_buff *skb;
-		dma_addr_t mapping;
-		__u32 len;
-	};
-
-   after:
-
-	struct ring_state {
-		struct sk_buff *skb;
-		DEFINE_DMA_UNMAP_ADDR(mapping);
-		DEFINE_DMA_UNMAP_LEN(len);
-	};
-
-2) Use dma_unmap_{addr,len}_set to set these values.
-   Example, before:
-
-	ringp->mapping = FOO;
-	ringp->len = BAR;
-
-   after:
-
-	dma_unmap_addr_set(ringp, mapping, FOO);
-	dma_unmap_len_set(ringp, len, BAR);
-
-3) Use dma_unmap_{addr,len} to access these values.
-   Example, before:
-
-	dma_unmap_single(dev, ringp->mapping, ringp->len,
-			 DMA_FROM_DEVICE);
-
-   after:
-
-	dma_unmap_single(dev,
-			 dma_unmap_addr(ringp, mapping),
-			 dma_unmap_len(ringp, len),
-			 DMA_FROM_DEVICE);
-
-It really should be self-explanatory. We treat the ADDR and LEN
-separately, because it is possible for an implementation to only
-need the address in order to perform the unmap operation.
-
-
 Part II - Advanced dma_ usage
 -----------------------------

-Warning: These pieces of the DMA API have no PCI equivalent. They
-should also not be used in the majority of cases, since they cater for
-unlikely corner cases that don't belong in usual drivers.
+Warning: These pieces of the DMA API should not be used in the
+majority of cases, since they cater for unlikely corner cases that
+don't belong in usual drivers.

 If you don't understand how cache line coherency works between a
 processor and an I/O device, you should not be using this part of the