author		Linus Torvalds <torvalds@g5.osdl.org>	2006-04-14 20:08:18 -0400
committer	Linus Torvalds <torvalds@g5.osdl.org>	2006-04-14 20:08:18 -0400
commit		bcdc084257352902103aca85c65ddbbba8f74732 (patch)
tree		535af6bc3d976016058ed9d1ec26227e23462ecf /Documentation
parent		754a264c42178b85125a071299bb900b615c853b (diff)
parent		78a596b4490e17b9990d87b9d468ef5bb70daa10 (diff)
Merge master.kernel.org:/pub/scm/linux/kernel/git/gregkh/pci-2.6
* master.kernel.org:/pub/scm/linux/kernel/git/gregkh/pci-2.6: (169 commits)

commit 78a596b4490e17b9990d87b9d468ef5bb70daa10
Author: Adrian Bunk <bunk@stusta.de>
Date:   Fri Mar 31 01:38:12 2006 -0800

    [PATCH] remove kernel/power/pm.c:pm_unregister()

    Since the last user is removed in -mm, we can now remove this long
    deprecated function.

    Signed-off-by: Adrian Bunk <bunk@stusta.de>
    Cc: Pavel Machek <pavel@ucw.cz>
    Signed-off-by: Andrew Morton <akpm@osdl.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

commit 21440d313358043b0ce5e43b00ff3c9b35a8616c
Author: David Brownell <david-b@pacbell.net>
Date:   Sat Apr 1 10:21:52 2006 -0800

    [PATCH] dma doc updates

...
Diffstat (limited to 'Documentation')
-rw-r--r--	Documentation/DMA-API.txt	| 49
-rw-r--r--	Documentation/DMA-mapping.txt	| 22
2 files changed, 53 insertions(+), 18 deletions(-)
diff --git a/Documentation/DMA-API.txt b/Documentation/DMA-API.txt
index 1af0f2d50220..2ffb0d62f0fe 100644
--- a/Documentation/DMA-API.txt
+++ b/Documentation/DMA-API.txt
@@ -33,7 +33,9 @@ pci_alloc_consistent(struct pci_dev *dev, size_t size,
 
 Consistent memory is memory for which a write by either the device or
 the processor can immediately be read by the processor or device
-without having to worry about caching effects.
+without having to worry about caching effects. (You may however need
+to make sure to flush the processor's write buffers before telling
+devices to read that memory.)
 
 This routine allocates a region of <size> bytes of consistent memory.
 it also returns a <dma_handle> which may be cast to an unsigned
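
As a rough illustration of the note added above (invented driver code, not
taken from the patch: the device, its doorbell register offset and the
descriptor layout are all hypothetical), a driver might fill a descriptor in
consistent memory and then drain the CPU's write buffers with wmb() before
telling the device to read it:

	#include <linux/dma-mapping.h>
	#include <asm/io.h>

	struct my_desc {			/* hypothetical descriptor layout */
		u32	addr;
		u32	len;
	};

	static void my_start_dma(void __iomem *regs, struct my_desc *desc,
				 dma_addr_t desc_dma, u32 buf_dma, u32 len)
	{
		/* CPU writes into the descriptor living in consistent memory */
		desc->addr = buf_dma;
		desc->len = len;

		/* make sure those writes have left the CPU's write buffers ... */
		wmb();

		/* ... before the device is told to go read the descriptor */
		writel((u32)desc_dma, regs + 0x10);	/* 0x10: invented doorbell */
	}
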
@@ -304,12 +306,12 @@ dma address with dma_mapping_error(). A non zero return value means the mapping
 could not be created and the driver should take appropriate action (eg
 reduce current DMA mapping usage or delay and try again later).
 
-int
-dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
-	enum dma_data_direction direction)
-int
-pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg,
-	int nents, int direction)
+	int
+	dma_map_sg(struct device *dev, struct scatterlist *sg,
+		int nents, enum dma_data_direction direction)
+	int
+	pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg,
+		int nents, int direction)
 
 Maps a scatter gather list from the block layer.
 
@@ -327,12 +329,33 @@ critical that the driver do something, in the case of a block driver
 aborting the request or even oopsing is better than doing nothing and
 corrupting the filesystem.
 
-void
-dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
-	enum dma_data_direction direction)
-void
-pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg,
-	int nents, int direction)
+With scatterlists, you use the resulting mapping like this:
+
+	int i, count = dma_map_sg(dev, sglist, nents, direction);
+	struct scatterlist *sg;
+
+	for (i = 0, sg = sglist; i < count; i++, sg++) {
+		hw_address[i] = sg_dma_address(sg);
+		hw_len[i] = sg_dma_len(sg);
+	}
+
+where nents is the number of entries in the sglist.
+
+The implementation is free to merge several consecutive sglist entries
+into one (e.g. with an IOMMU, or if several pages just happen to be
+physically contiguous) and returns the actual number of sg entries it
+mapped them to. On failure 0 is returned.
+
+Then you should loop count times (note: this can be less than nents times)
+and use sg_dma_address() and sg_dma_len() macros where you previously
+accessed sg->address and sg->length as shown above.
+
+	void
+	dma_unmap_sg(struct device *dev, struct scatterlist *sg,
+		int nhwentries, enum dma_data_direction direction)
+	void
+	pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg,
+		int nents, int direction)
 
 unmap the previously mapped scatter/gather list. All the parameters
 must be the same as those and passed in to the scatter/gather mapping
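
Putting the pieces of that hunk together, a driver's scatterlist mapping and
unmapping paths might look roughly like the sketch below. This is not from
the patch: the hw_address[]/hw_len[] arrays stand in for whatever the
hypothetical hardware's descriptor format needs, and a device-bound
(DMA_TO_DEVICE) transfer is assumed.

	#include <linux/dma-mapping.h>
	#include <linux/scatterlist.h>
	#include <linux/errno.h>

	static int my_map_request(struct device *dev, struct scatterlist *sglist,
				  int nents, dma_addr_t *hw_address,
				  unsigned int *hw_len)
	{
		struct scatterlist *sg;
		int i, count;

		count = dma_map_sg(dev, sglist, nents, DMA_TO_DEVICE);
		if (count == 0)
			return -ENOMEM;	/* mapping failed; retry later or fail */

		/* loop over 'count' (entries may have been merged), not 'nents' */
		for (i = 0, sg = sglist; i < count; i++, sg++) {
			hw_address[i] = sg_dma_address(sg);
			hw_len[i] = sg_dma_len(sg);
		}
		return count;
	}

	static void my_unmap_request(struct device *dev, struct scatterlist *sglist,
				     int nents)
	{
		/* unmap with the same 'nents' originally passed to dma_map_sg() */
		dma_unmap_sg(dev, sglist, nents, DMA_TO_DEVICE);
	}
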
diff --git a/Documentation/DMA-mapping.txt b/Documentation/DMA-mapping.txt
index 10bf4deb96aa..7c717699032c 100644
--- a/Documentation/DMA-mapping.txt
+++ b/Documentation/DMA-mapping.txt
@@ -58,11 +58,15 @@ translating each of those pages back to a kernel address using
 something like __va(). [ EDIT: Update this when we integrate
 Gerd Knorr's generic code which does this. ]
 
-This rule also means that you may not use kernel image addresses
-(ie. items in the kernel's data/text/bss segment, or your driver's)
-nor may you use kernel stack addresses for DMA. Both of these items
-might be mapped somewhere entirely different than the rest of physical
-memory.
+This rule also means that you may use neither kernel image addresses
+(items in data/text/bss segments), nor module image addresses, nor
+stack addresses for DMA. These could all be mapped somewhere entirely
+different than the rest of physical memory. Even if those classes of
+memory could physically work with DMA, you'd need to ensure the I/O
+buffers were cacheline-aligned. Without that, you'd see cacheline
+sharing problems (data corruption) on CPUs with DMA-incoherent caches.
+(The CPU could write to one word, DMA would write to a different one
+in the same cache line, and one of them could be overwritten.)
 
 Also, this means that you cannot take the return of a kmap()
 call and DMA to/from that. This is similar to vmalloc().
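
To make the rule concrete, a minimal sketch (invented driver code, not from
the patch) of replacing an on-stack buffer with a kmalloc()ed one, which
lives in the kernel's linear mapping and can safely be handed to the
streaming DMA API:

	#include <linux/slab.h>
	#include <linux/string.h>
	#include <linux/errno.h>
	#include <linux/dma-mapping.h>

	static int my_send_command(struct device *dev, const u8 *cmd, size_t len)
	{
		void *buf;
		dma_addr_t handle;

		/* NOT this:  u8 stack_buf[64];  ... DMA from stack_buf ... */

		buf = kmalloc(len, GFP_KERNEL);	/* kmalloc memory is DMA-able */
		if (!buf)
			return -ENOMEM;
		memcpy(buf, cmd, len);

		handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
		/* check the handle with dma_mapping_error(), start the device,
		 * and wait for the transfer to complete before unmapping ... */
		dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);

		kfree(buf);
		return 0;
	}
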
@@ -284,6 +288,11 @@ There are two types of DMA mappings:
 
   in order to get correct behavior on all platforms.
 
+  Also, on some platforms your driver may need to flush CPU write
+  buffers in much the same way as it needs to flush write buffers
+  found in PCI bridges (such as by reading a register's value
+  after writing it).
+
 - Streaming DMA mappings which are usually mapped for one DMA transfer,
   unmapped right after it (unless you use pci_dma_sync_* below) and for which
   hardware can optimize for sequential accesses.
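
The bridge-flush technique that added text alludes to is usually a read-back
of a register on the same device after the write, so the posted write is
forced out of any intervening buffers. Something along these lines, with an
invented register offset:

	#include <asm/io.h>

	static void my_kick_device(void __iomem *regs)
	{
		writel(1, regs + 0x04);		/* invented "start DMA" register */
		(void)readl(regs + 0x04);	/* read back: flushes posted writes */
	}
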
@@ -303,6 +312,9 @@ There are two types of DMA mappings:
 
 Neither type of DMA mapping has alignment restrictions that come
 from PCI, although some devices may have such restrictions.
+Also, systems with caches that aren't DMA-coherent will work better
+when the underlying buffers don't share cache lines with other data.
+
 
 	Using Consistent DMA mappings.
 
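
One common way to keep such buffers off shared cache lines is to give the
DMA buffer its own alignment inside the driver's state structure; a sketch
only, with a hypothetical structure:

	#include <linux/cache.h>
	#include <linux/spinlock.h>
	#include <linux/types.h>

	struct my_device_state {
		spinlock_t	lock;		/* CPU-only fields ... */
		unsigned long	flags;

		/* ... kept off the cache lines used for streaming DMA */
		u8		rx_buf[256] ____cacheline_aligned;
	};
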