author     David Brownell <david-b@pacbell.net>   2006-04-01 13:21:52 -0500
committer  Greg Kroah-Hartman <gregkh@suse.de>    2006-04-14 15:25:26 -0400
commit     21440d313358043b0ce5e43b00ff3c9b35a8616c (patch)
tree       32f3ed659a76ad6e4a6061b57346178cf3fa6256 /Documentation/DMA-API.txt
parent     2d1e1c754d641bb8a32f0ce909dcff32906830ef (diff)
[PATCH] dma doc updates
This updates the DMA API documentation to address a few issues:

- The dma_map_sg() call results are used like pci_map_sg() results:
  using sg_dma_address() and sg_dma_len().  That's not wholly obvious
  to folk reading _only_ the "new" DMA-API.txt writeup.

- Buffers allocated by dma_alloc_coherent() may not be completely free
  of coherency concerns ... some CPUs also have write buffers that may
  need to be flushed.

- Cacheline coherence issues are now mentioned as being among issues
  which affect dma buffers, and complicate/prevent using of static and
  (especially) stack based buffers with the DMA calls.

I don't think many drivers currently need to worry about flushing write
buffers, but I did hit it with one SOC using external SDRAM for DMA
descriptors: without explicit writebuffer flushing, the on-chip DMA
controller accessed descriptors before the CPU completed the writes.

Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
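To make the write-buffer point concrete, here is a minimal sketch of
the failure mode described above.  The descriptor layout (struct
my_desc, MY_DESC_READY) and the doorbell helper my_start_dma() are
hypothetical stand-ins for device-specific code; only memory from
dma_alloc_coherent() and the wmb() barrier are real kernel interfaces:

	/* Descriptors live in dma_alloc_coherent() memory, but the
	 * CPU's write buffer may still hold these stores when the
	 * DMA controller starts fetching.
	 */
	struct my_desc {			/* hypothetical layout */
		u32	addr;
		u32	len;
		u32	flags;
	};

	static void my_queue_desc(struct my_desc *desc, dma_addr_t buf,
				  size_t len)
	{
		desc->addr  = buf;
		desc->len   = len;
		desc->flags = MY_DESC_READY;	/* hypothetical valid bit */

		/* Drain the CPU write buffer so the controller sees the
		 * completed descriptor, not stale SDRAM contents.
		 */
		wmb();

		my_start_dma();			/* hypothetical doorbell */
	}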
Diffstat (limited to 'Documentation/DMA-API.txt')
-rw-r--r--  Documentation/DMA-API.txt | 49
1 file changed, 36 insertions, 13 deletions
diff --git a/Documentation/DMA-API.txt b/Documentation/DMA-API.txt
index 1af0f2d50220..2ffb0d62f0fe 100644
--- a/Documentation/DMA-API.txt
+++ b/Documentation/DMA-API.txt
@@ -33,7 +33,9 @@ pci_alloc_consistent(struct pci_dev *dev, size_t size,
 
 Consistent memory is memory for which a write by either the device or
 the processor can immediately be read by the processor or device
-without having to worry about caching effects.
+without having to worry about caching effects.  (You may however need
+to make sure to flush the processor's write buffers before telling
+devices to read that memory.)
 
 This routine allocates a region of <size> bytes of consistent memory.
 it also returns a <dma_handle> which may be cast to an unsigned
@@ -304,12 +306,12 @@ dma address with dma_mapping_error(). A non zero return value means the mapping
 could not be created and the driver should take appropriate action (eg
 reduce current DMA mapping usage or delay and try again later).
 
-int
-dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
-	enum dma_data_direction direction)
-int
-pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg,
-	int nents, int direction)
+	int
+	dma_map_sg(struct device *dev, struct scatterlist *sg,
+		int nents, enum dma_data_direction direction)
+	int
+	pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg,
+		int nents, int direction)
 
 Maps a scatter gather list from the block layer.
 
@@ -327,12 +329,33 @@ critical that the driver do something, in the case of a block driver
 aborting the request or even oopsing is better than doing nothing and
 corrupting the filesystem.
 
-void
-dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
-	enum dma_data_direction direction)
-void
-pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg,
-	int nents, int direction)
+With scatterlists, you use the resulting mapping like this:
+
+	int i, count = dma_map_sg(dev, sglist, nents, direction);
+	struct scatterlist *sg;
+
+	for (i = 0, sg = sglist; i < count; i++, sg++) {
+		hw_address[i] = sg_dma_address(sg);
+		hw_len[i] = sg_dma_len(sg);
+	}
+
+where nents is the number of entries in the sglist.
+
+The implementation is free to merge several consecutive sglist entries
+into one (e.g. with an IOMMU, or if several pages just happen to be
+physically contiguous) and returns the actual number of sg entries it
+mapped them to.  On failure, 0 is returned.
+
+Then you should loop count times (note: this can be less than nents times)
+and use sg_dma_address() and sg_dma_len() macros where you previously
+accessed sg->address and sg->length as shown above.
+
+	void
+	dma_unmap_sg(struct device *dev, struct scatterlist *sg,
+		int nhwentries, enum dma_data_direction direction)
+	void
+	pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg,
+		int nents, int direction)
 
 unmap the previously mapped scatter/gather list.  All the parameters
 must be the same as those passed in to the scatter/gather mapping
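
Putting the mapping and unmapping halves of the patched documentation
together, a driver's transfer path might look roughly like the sketch
below.  my_program_entry() and my_wait_for_dma() are hypothetical
driver helpers, and bailing out when dma_map_sg() returns 0 is one
reasonable policy rather than the only one:

	static int my_do_transfer(struct device *dev,
				  struct scatterlist *sglist, int nents)
	{
		int i, count;
		struct scatterlist *sg;

		count = dma_map_sg(dev, sglist, nents, DMA_TO_DEVICE);
		if (count == 0)
			return -ENOMEM;	/* reduce usage or retry later */

		/* Hand the (possibly merged) entries to the hardware. */
		for (i = 0, sg = sglist; i < count; i++, sg++)
			my_program_entry(i, sg_dma_address(sg),
					 sg_dma_len(sg));

		my_wait_for_dma();	/* hypothetical completion wait */

		/* Unmap with the same nents and direction that were
		 * passed to dma_map_sg(), not the count it returned.
		 */
		dma_unmap_sg(dev, sglist, nents, DMA_TO_DEVICE);
		return 0;
	}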