Diffstat (limited to 'Documentation/usb/dma.txt')
 -rw-r--r--   Documentation/usb/dma.txt   116
 1 file changed, 116 insertions, 0 deletions
diff --git a/Documentation/usb/dma.txt b/Documentation/usb/dma.txt
new file mode 100644
index 000000000000..62844aeba69c
--- /dev/null
+++ b/Documentation/usb/dma.txt
@@ -0,0 +1,116 @@
In Linux 2.5 kernels (and later), USB device drivers have additional control
over how DMA may be used to perform I/O operations.  The APIs are detailed
in the kernel usb programming guide (kerneldoc, from the source code).


API OVERVIEW

The big picture is that USB drivers can continue to ignore most DMA issues,
though they still must provide DMA-ready buffers (see DMA-mapping.txt).
That's how they've worked through the 2.4 (and earlier) kernels.

OR: they can now be DMA-aware.

- New calls enable DMA-aware drivers, letting them allocate dma buffers and
  manage dma mappings for existing dma-ready buffers (see below).

- URBs have an additional "transfer_dma" field, as well as a transfer_flags
  bit saying if it's valid.  (Control requests also have "setup_dma" and a
  corresponding transfer_flags bit.)

- "usbcore" will map those DMA addresses, if a DMA-aware driver didn't do
  it first and set URB_NO_TRANSFER_DMA_MAP or URB_NO_SETUP_DMA_MAP.  HCDs
  don't manage dma mappings for URBs.

- There's a new "generic DMA API", parts of which are usable by USB device
  drivers.  Never use dma_set_mask() on any USB interface or device; that
  would potentially break all devices sharing that bus.
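
To make the default (DMA-oblivious) path concrete, here is a minimal sketch
of a driver that just provides a kmalloc'ed, DMA-ready buffer and lets
usbcore map and unmap it.  The helper names, the endpoint number, and the
2.5-era two-argument completion signature are illustrative only; error
handling and urb refcounting are trimmed for brevity.

    static void write_done (struct urb *urb, struct pt_regs *regs)
    {
        /* usbcore has already unmapped the buffer by now */
        kfree (urb->transfer_buffer);
    }

    static int send_block (struct usb_device *udev, const void *data, int len)
    {
        struct urb  *urb = usb_alloc_urb (0, GFP_KERNEL);
        void        *buf = kmalloc (len, GFP_KERNEL);

        if (!urb || !buf) {
            usb_free_urb (urb);
            kfree (buf);
            return -ENOMEM;
        }
        memcpy (buf, data, len);

        /* no transfer_dma, no URB_NO_TRANSFER_DMA_MAP: usbcore will
         * create (and later destroy) the DMA mapping for this buffer.
         */
        usb_fill_bulk_urb (urb, udev, usb_sndbulkpipe (udev, 2),
                buf, len, write_done, NULL);
        return usb_submit_urb (urb, GFP_KERNEL);
    }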


ELIMINATING COPIES

It's good to avoid making CPUs copy data needlessly.  The costs can add up,
and effects like cache-thrashing can impose subtle penalties.

- When you're allocating a buffer for DMA purposes anyway, use the buffer
  primitives.  Think of them as kmalloc and kfree that give you the right
  kind of addresses to store in urb->transfer_buffer and urb->transfer_dma,
  while guaranteeing that no hidden copies through DMA "bounce" buffers will
  slow things down.  You'd also set URB_NO_TRANSFER_DMA_MAP in
  urb->transfer_flags (there's a short sketch of this after the list below):

    void *usb_buffer_alloc (struct usb_device *dev, size_t size,
        int mem_flags, dma_addr_t *dma);

    void usb_buffer_free (struct usb_device *dev, size_t size,
        void *addr, dma_addr_t dma);

  For control transfers you can choose, independently for the transfer
  buffer and the setup buffer, whether to use the buffer primitives.  Set
  the flag bits URB_NO_TRANSFER_DMA_MAP and URB_NO_SETUP_DMA_MAP to
  indicate which buffers you have prepared.  For non-control transfers
  URB_NO_SETUP_DMA_MAP is ignored.

  The memory buffer returned is "dma-coherent"; sometimes you might need to
  force a consistent memory access ordering by using memory barriers.  It's
  not using a streaming DMA mapping, so it's good for small transfers on
  systems where the I/O would otherwise tie up an IOMMU mapping.  (See
  Documentation/DMA-mapping.txt for definitions of "coherent" and "streaming"
  DMA mappings.)

  Asking for 1/Nth of a page (as well as asking for N pages) is reasonably
  space-efficient.

- Devices on some EHCI controllers could handle DMA to/from high memory.
  Driver probe() routines can notice this using a generic DMA call, then
  tell higher level code (network, scsi, etc) about it like this:

    if (dma_supported (&intf->dev, 0xffffffffffffffffULL))
        net->features |= NETIF_F_HIGHDMA;

  That can eliminate dma bounce buffering of requests that originate (or
  terminate) in high memory, in cases where the buffers aren't allocated
  with usb_buffer_alloc() but instead are dma-mapped.
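
As a companion to the prototypes above, here is a minimal sketch of the
first bullet's pattern: allocate a dma-coherent buffer, hand both its CPU
and DMA addresses to the urb, and tell usbcore that the DMA address is
already valid.  The helper name start_read(), the completion handler
read_done() (same shape as write_done() earlier), the endpoint number, and
the error handling are illustrative, not part of the API described here.

    static int start_read (struct usb_device *udev, size_t size)
    {
        struct urb  *urb;
        void        *buf;

        urb = usb_alloc_urb (0, GFP_KERNEL);
        if (!urb)
            return -ENOMEM;

        /* CPU address in buf, bus address in urb->transfer_dma */
        buf = usb_buffer_alloc (udev, size, GFP_KERNEL,
                &urb->transfer_dma);
        if (!buf) {
            usb_free_urb (urb);
            return -ENOMEM;
        }

        usb_fill_bulk_urb (urb, udev, usb_rcvbulkpipe (udev, 1),
                buf, size, read_done, NULL);

        /* usbcore must not map what we already allocated coherently */
        urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;

        return usb_submit_urb (urb, GFP_KERNEL);
    }

    /* ... and eventually, to release the buffer:
     *     usb_buffer_free (udev, size, buf, urb->transfer_dma);
     */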


WORKING WITH EXISTING BUFFERS

Existing buffers aren't usable for DMA without first being mapped into the
DMA address space of the device.

- When you're using scatterlists, you can map everything at once.  On some
  systems, this kicks in an IOMMU and turns the scatterlists into single
  DMA transactions:

    int usb_buffer_map_sg (struct usb_device *dev, unsigned pipe,
        struct scatterlist *sg, int nents);

    void usb_buffer_dmasync_sg (struct usb_device *dev, unsigned pipe,
        struct scatterlist *sg, int n_hw_ents);

    void usb_buffer_unmap_sg (struct usb_device *dev, unsigned pipe,
        struct scatterlist *sg, int n_hw_ents);

  It's probably easier to use the new usb_sg_*() calls, which do the DMA
  mapping and apply other tweaks to make scatterlist I/O fast.
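
  If you do drive these calls directly, the sequence looks roughly like the
  sketch below.  It assumes the driver already filled in sg[] and nents,
  and that usb_buffer_map_sg() returns the number of hardware entries (as
  the n_hw_ents parameters above suggest); the helper name is hypothetical.

    static int process_sg (struct usb_device *udev, unsigned pipe,
            struct scatterlist *sg, int nents)
    {
        int n_hw;

        /* may coalesce entries when an IOMMU is in play */
        n_hw = usb_buffer_map_sg (udev, pipe, sg, nents);
        if (n_hw < 0)
            return n_hw;

        /* ... queue I/O that uses the mapped entries ... */

        /* let the CPU peek at the data while the mapping is live */
        usb_buffer_dmasync_sg (udev, pipe, sg, n_hw);

        usb_buffer_unmap_sg (udev, pipe, sg, n_hw);
        return 0;
    }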

- Some drivers may prefer the model of mapping one large buffer and
  synchronizing it between safe re-uses.  (If there's no re-use, then let
  usbcore do the map/unmap.)  Large periodic transfers make good examples
  here, since it's cheaper to just synchronize the buffer than to unmap it
  each time an urb completes and then re-map it during resubmission.

  These calls all work with initialized urbs:  urb->dev, urb->pipe,
  urb->transfer_buffer, and urb->transfer_buffer_length must all be
  valid when these calls are used (urb->setup_packet must be valid too
  if urb is a control request):

    struct urb *usb_buffer_map (struct urb *urb);

    void usb_buffer_dmasync (struct urb *urb);

    void usb_buffer_unmap (struct urb *urb);

  The calls manage urb->transfer_dma for you, and set URB_NO_TRANSFER_DMA_MAP
  so that usbcore won't map or unmap the buffer.  The same goes for
  urb->setup_dma and URB_NO_SETUP_DMA_MAP for control requests.
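
  Tying those pieces together, here is a sketch of the map-once,
  sync-per-completion pattern for a periodic urb.  The helper names, the
  2.5-era completion signature, and the resubmit policy are illustrative;
  the urb is assumed to already have dev, pipe, transfer_buffer, and
  transfer_buffer_length filled in, as required above.

    static int start_periodic (struct urb *urb)
    {
        if (!usb_buffer_map (urb))      /* fills urb->transfer_dma, sets */
            return -ENOMEM;             /* URB_NO_TRANSFER_DMA_MAP       */
        return usb_submit_urb (urb, GFP_KERNEL);
    }

    static void periodic_complete (struct urb *urb, struct pt_regs *regs)
    {
        usb_buffer_dmasync (urb);       /* CPU now sees fresh data */

        /* ... examine urb->transfer_buffer, then reuse the mapping ... */

        if (urb->status == 0)
            usb_submit_urb (urb, GFP_ATOMIC);
    }

    static void stop_periodic (struct urb *urb)
    {
        /* only after the urb has been unlinked and has completed */
        usb_buffer_unmap (urb);
    }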