-rw-r--r--  Documentation/usb/dma.txt  | 52
-rw-r--r--  drivers/usb/core/usb.c     | 11
2 files changed, 43 insertions(+), 20 deletions(-)
diff --git a/Documentation/usb/dma.txt b/Documentation/usb/dma.txt
index 62844aeba69c..e8b50b7de9d9 100644
--- a/Documentation/usb/dma.txt
+++ b/Documentation/usb/dma.txt
@@ -32,12 +32,15 @@ ELIMINATING COPIES
 It's good to avoid making CPUs copy data needlessly. The costs can add up,
 and effects like cache-trashing can impose subtle penalties.
 
-- When you're allocating a buffer for DMA purposes anyway, use the buffer
-  primitives. Think of them as kmalloc and kfree that give you the right
-  kind of addresses to store in urb->transfer_buffer and urb->transfer_dma,
-  while guaranteeing that no hidden copies through DMA "bounce" buffers will
-  slow things down. You'd also set URB_NO_TRANSFER_DMA_MAP in
-  urb->transfer_flags:
+- If you're doing lots of small data transfers from the same buffer all
+  the time, that can really burn up resources on systems which use an
+  IOMMU to manage the DMA mappings. It can cost MUCH more to set up and
+  tear down the IOMMU mappings with each request than perform the I/O!
+
+  For those specific cases, USB has primitives to allocate less expensive
+  memory. They work like kmalloc and kfree versions that give you the right
+  kind of addresses to store in urb->transfer_buffer and urb->transfer_dma.
+  You'd also set URB_NO_TRANSFER_DMA_MAP in urb->transfer_flags:
 
 	void *usb_buffer_alloc (struct usb_device *dev, size_t size,
 		int mem_flags, dma_addr_t *dma);
@@ -45,6 +48,10 @@ and effects like cache-trashing can impose subtle penalties.
 	void usb_buffer_free (struct usb_device *dev, size_t size,
 		void *addr, dma_addr_t dma);
 
+  Most drivers should *NOT* be using these primitives; they don't need
+  to use this type of memory ("dma-coherent"), and memory returned from
+  kmalloc() will work just fine.
+
   For control transfers you can use the buffer primitives or not for each
   of the transfer buffer and setup buffer independently. Set the flag bits
   URB_NO_TRANSFER_DMA_MAP and URB_NO_SETUP_DMA_MAP to indicate which
@@ -54,29 +61,39 @@ and effects like cache-trashing can impose subtle penalties.
   The memory buffer returned is "dma-coherent"; sometimes you might need to
   force a consistent memory access ordering by using memory barriers. It's
   not using a streaming DMA mapping, so it's good for small transfers on
-  systems where the I/O would otherwise tie up an IOMMU mapping. (See
+  systems where the I/O would otherwise thrash an IOMMU mapping. (See
   Documentation/DMA-mapping.txt for definitions of "coherent" and "streaming"
   DMA mappings.)
 
   Asking for 1/Nth of a page (as well as asking for N pages) is reasonably
   space-efficient.
 
+  On most systems the memory returned will be uncached, because the
+  semantics of dma-coherent memory require either bypassing CPU caches
+  or using cache hardware with bus-snooping support. While x86 hardware
+  has such bus-snooping, many other systems use software to flush cache
+  lines to prevent DMA conflicts.
+
 - Devices on some EHCI controllers could handle DMA to/from high memory.
-  Driver probe() routines can notice this using a generic DMA call, then
-  tell higher level code (network, scsi, etc) about it like this:
 
-	if (dma_supported (&intf->dev, 0xffffffffffffffffULL))
-		net->features |= NETIF_F_HIGHDMA;
+  Unfortunately, the current Linux DMA infrastructure doesn't have a sane
+  way to expose these capabilities ... and in any case, HIGHMEM is mostly a
+  design wart specific to x86_32. So your best bet is to ensure you never
+  pass a highmem buffer into a USB driver. That's easy; it's the default
+  behavior. Just don't override it; e.g. with NETIF_F_HIGHDMA.
 
-  That can eliminate dma bounce buffering of requests that originate (or
-  terminate) in high memory, in cases where the buffers aren't allocated
-  with usb_buffer_alloc() but instead are dma-mapped.
+  This may force your callers to do some bounce buffering, copying from
+  high memory to "normal" DMA memory. If you can come up with a good way
+  to fix this issue (for x86_32 machines with over 1 GByte of memory),
+  feel free to submit patches.
 
 
 WORKING WITH EXISTING BUFFERS
 
 Existing buffers aren't usable for DMA without first being mapped into the
-DMA address space of the device.
+DMA address space of the device. However, most buffers passed to your
+driver can safely be used with such DMA mapping. (See the first section
+of DMA-mapping.txt, titled "What memory is DMA-able?")
 
 - When you're using scatterlists, you can map everything at once. On some
   systems, this kicks in an IOMMU and turns the scatterlists into single
@@ -114,3 +131,8 @@ DMA address space of the device.
   The calls manage urb->transfer_dma for you, and set URB_NO_TRANSFER_DMA_MAP
   so that usbcore won't map or unmap the buffer. The same goes for
   urb->setup_dma and URB_NO_SETUP_DMA_MAP for control requests.
+
+Note that several of those interfaces are currently commented out, since
+they don't have current users. See the source code. Other than the dmasync
+calls (where the underlying DMA primitives have changed), most of them can
+easily be commented back in if you want to use them.
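
A hedged illustration, not part of the patch itself: a minimal C sketch of
the case the revised dma.txt describes (many small transfers from one
long-lived buffer), written against the API as it stood at the time of this
patch (usb_buffer_alloc/usb_buffer_free, URB_NO_TRANSFER_DMA_MAP). The
example_* names, the endpoint number, and the 64-byte size are hypothetical.

/*
 * Hypothetical driver sketch: the endpoint, size, and names are made up
 * for illustration; only the usbcore calls are real interfaces.
 */
#include <linux/usb.h>
#include <linux/slab.h>

#define EXAMPLE_SIZE	64		/* illustrative transfer size */
#define EXAMPLE_EP	1		/* illustrative bulk-OUT endpoint */

struct example_state {
	struct usb_device	*udev;
	struct urb		*urb;
	void			*buf;		/* CPU address */
	dma_addr_t		buf_dma;	/* matching DMA address */
};

static void example_complete(struct urb *urb)
{
	/* urb->context is the struct example_state; check urb->status,
	 * refill the buffer, and resubmit here as needed. */
}

/* Allocate the dma-coherent buffer once; any IOMMU mapping is set up
 * here instead of once per request. */
static int example_setup(struct example_state *ex, struct usb_device *udev)
{
	ex->udev = udev;
	ex->urb = usb_alloc_urb(0, GFP_KERNEL);
	if (!ex->urb)
		return -ENOMEM;

	ex->buf = usb_buffer_alloc(udev, EXAMPLE_SIZE, GFP_KERNEL,
			&ex->buf_dma);
	if (!ex->buf) {
		usb_free_urb(ex->urb);
		return -ENOMEM;
	}
	return 0;
}

static int example_submit(struct example_state *ex)
{
	usb_fill_bulk_urb(ex->urb, ex->udev,
			usb_sndbulkpipe(ex->udev, EXAMPLE_EP),
			ex->buf, EXAMPLE_SIZE, example_complete, ex);

	/* The buffer is already DMA-ready; tell usbcore not to map it. */
	ex->urb->transfer_dma = ex->buf_dma;
	ex->urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;

	return usb_submit_urb(ex->urb, GFP_KERNEL);
}

/* Tear down only after the urb has completed (or been killed); the size
 * and both addresses must match the original allocation. */
static void example_teardown(struct example_state *ex)
{
	usb_buffer_free(ex->udev, EXAMPLE_SIZE, ex->buf, ex->buf_dma);
	usb_free_urb(ex->urb);
}

The split between example_setup() and example_submit() is the point of the
text above: the allocation (and any IOMMU work) happens once, while each
transfer reuses the same mapping.
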
diff --git a/drivers/usb/core/usb.c b/drivers/usb/core/usb.c
index c611b3cbc67b..0fee5c66fd64 100644
--- a/drivers/usb/core/usb.c
+++ b/drivers/usb/core/usb.c
@@ -579,11 +579,12 @@ int __usb_get_extra_descriptor(char *buffer, unsigned size,
  * address (through the pointer provided).
  *
  * These buffers are used with URB_NO_xxx_DMA_MAP set in urb->transfer_flags
- * to avoid behaviors like using "DMA bounce buffers", or tying down I/O
- * mapping hardware for long idle periods. The implementation varies between
+ * to avoid behaviors like using "DMA bounce buffers", or thrashing IOMMU
+ * hardware during URB completion/resubmit. The implementation varies between
  * platforms, depending on details of how DMA will work to this device.
- * Using these buffers also helps prevent cacheline sharing problems on
- * architectures where CPU caches are not DMA-coherent.
+ * Using these buffers also eliminates cacheline sharing problems on
+ * architectures where CPU caches are not DMA-coherent. On systems without
+ * bus-snooping caches, these buffers are uncached.
  *
  * When the buffer is no longer used, free it with usb_buffer_free().
  */
@@ -608,7 +609,7 @@ void *usb_buffer_alloc(
  *
  * This reclaims an I/O buffer, letting it be reused. The memory must have
  * been allocated using usb_buffer_alloc(), and the parameters must match
- * those provided in that allocation request. 
+ * those provided in that allocation request.
  */
 void usb_buffer_free(
 	struct usb_device *dev,
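
A second hedged sketch, again not from the patch, for the control-transfer
case dma.txt mentions: the setup packet and the data buffer can each,
independently, come from usb_buffer_alloc(), signalled by
URB_NO_SETUP_DMA_MAP and URB_NO_TRANSFER_DMA_MAP. The vendor request, the
64-byte length, and example_control() itself are hypothetical; urb->setup_dma
and these flags are the interfaces of this era's usbcore.

/*
 * Hypothetical helper: request number and length are made up; on success
 * the completion handler owns both buffers and must free them with
 * usb_buffer_free() using these same sizes and addresses.
 */
#include <linux/usb.h>
#include <linux/slab.h>

#define EXAMPLE_LEN	64	/* illustrative data stage length */

static int example_control(struct usb_device *udev, struct urb *urb,
		usb_complete_t done, void *context)
{
	struct usb_ctrlrequest *setup;
	dma_addr_t setup_dma, data_dma;
	void *data;
	int status;

	setup = usb_buffer_alloc(udev, sizeof(*setup), GFP_KERNEL, &setup_dma);
	if (!setup)
		return -ENOMEM;
	data = usb_buffer_alloc(udev, EXAMPLE_LEN, GFP_KERNEL, &data_dma);
	if (!data) {
		status = -ENOMEM;
		goto free_setup;
	}

	setup->bRequestType = USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE;
	setup->bRequest = 0x01;			/* made-up vendor request */
	setup->wValue = 0;
	setup->wIndex = 0;
	setup->wLength = cpu_to_le16(EXAMPLE_LEN);

	usb_fill_control_urb(urb, udev, usb_rcvctrlpipe(udev, 0),
			(unsigned char *) setup, data, EXAMPLE_LEN,
			done, context);

	/* Both buffers are pre-mapped; usbcore must not map them again. */
	urb->setup_dma = setup_dma;
	urb->transfer_dma = data_dma;
	urb->transfer_flags |= URB_NO_SETUP_DMA_MAP | URB_NO_TRANSFER_DMA_MAP;

	status = usb_submit_urb(urb, GFP_KERNEL);
	if (status == 0)
		return 0;

	usb_buffer_free(udev, EXAMPLE_LEN, data, data_dma);
free_setup:
	usb_buffer_free(udev, sizeof(*setup), setup, setup_dma);
	return status;
}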