Diffstat (limited to 'Documentation/dmaengine.txt')
-rw-r--r--	Documentation/dmaengine.txt	| 234
1 file changed, 164 insertions, 70 deletions
diff --git a/Documentation/dmaengine.txt b/Documentation/dmaengine.txt
index 5a0cb1ef6164..94b7e0f96b38 100644
--- a/Documentation/dmaengine.txt
+++ b/Documentation/dmaengine.txt
@@ -10,87 +10,181 @@ NOTE: For DMA Engine usage in async_tx please see:
Below is a guide for device driver writers on how to use the Slave-DMA API of
the DMA Engine. This is applicable only for slave DMA usage.

The slave DMA usage consists of the following steps:
1. Allocate a DMA slave channel
2. Set slave and controller specific parameters
3. Get a descriptor for transaction
4. Submit the transaction
5. Issue pending requests and wait for callback notification

1. Allocate a DMA slave channel

Channel allocation is slightly different in the slave DMA context:
client drivers typically need a channel from a particular DMA
controller only, and in some cases even a specific channel is desired.
To request a channel, the dma_request_channel() API is used.

Interface:
	struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
			dma_filter_fn filter_fn,
			void *filter_param);
where dma_filter_fn is defined as:
	typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);

The 'filter_fn' parameter is optional, but highly recommended for
slave and cyclic channels as they typically need to obtain a specific
DMA channel.

When the 'filter_fn' parameter is NULL, dma_request_channel() simply
returns the first channel that satisfies the capability mask.

Otherwise, the 'filter_fn' routine will be called once for each free
channel which has a capability in 'mask'.  'filter_fn' is expected to
return 'true' when the desired DMA channel is found.

A channel allocated via this interface is exclusive to the caller,
until dma_release_channel() is called.

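For example, a client might request a DMA_SLAVE channel as sketched
below.  The filter body and the 'my_dma_dev' pointer are hypothetical;
a real filter matches controller-specific data passed in
'filter_param':

	static bool my_filter(struct dma_chan *chan, void *filter_param)
	{
		/* hypothetical: accept only channels of one controller */
		return chan->device->dev == filter_param;
	}

	dma_cap_mask_t mask;
	struct dma_chan *chan;

	dma_cap_zero(mask);
	dma_cap_set(DMA_SLAVE, mask);
	chan = dma_request_channel(mask, my_filter, my_dma_dev);
	if (!chan)
		/* no matching channel available */
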
2. Set slave and controller specific parameters

The next step is always to pass some specific information to the DMA
driver.  Most of the generic information which a slave DMA can use is
in struct dma_slave_config.  This allows the clients to specify DMA
direction, DMA addresses, bus widths, DMA burst lengths etc. for the
peripheral.

If some DMA controllers have more parameters to be sent then they
should try to embed struct dma_slave_config in their controller
specific structure.  That gives flexibility to the client to pass more
parameters, if required.

Interface:
	int dmaengine_slave_config(struct dma_chan *chan,
				   struct dma_slave_config *config)

Please see the dma_slave_config structure definition in dmaengine.h
for a detailed explanation of the struct members.  Please note that
the 'direction' member will be going away as it duplicates the
direction given in the prepare call.

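As an illustrative sketch only ('fifo_addr' is a hypothetical
peripheral FIFO bus address), a memory-to-peripheral configuration
might look like:

	struct dma_slave_config config;
	int ret;

	memset(&config, 0, sizeof(config));
	config.direction = DMA_TO_DEVICE;	/* see note above */
	config.dst_addr = fifo_addr;		/* hypothetical address */
	config.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
	config.dst_maxburst = 4;

	ret = dmaengine_slave_config(chan, &config);
	if (ret)
		/* parameters rejected by the DMA driver */
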
3. Get a descriptor for transaction

For slave usage the various modes of slave transfers supported by the
DMA-engine are:

slave_sg	- DMA a list of scatter gather buffers from/to a peripheral
dma_cyclic	- Perform a cyclic DMA operation from/to a peripheral till
		  the operation is explicitly stopped.

A non-NULL return of this transfer API represents a "descriptor" for
the given transaction.

Interface:
	struct dma_async_tx_descriptor *(*chan->device->device_prep_slave_sg)(
		struct dma_chan *chan, struct scatterlist *sgl,
		unsigned int sg_len, enum dma_data_direction direction,
		unsigned long flags);

	struct dma_async_tx_descriptor *(*chan->device->device_prep_dma_cyclic)(
		struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
		size_t period_len, enum dma_data_direction direction);

The peripheral driver is expected to have mapped the scatterlist for
the DMA operation prior to calling device_prep_slave_sg, and must
keep the scatterlist mapped until the DMA operation has completed.
The scatterlist must be mapped using the DMA struct device.  So,
normal setup should look like this:

	nr_sg = dma_map_sg(chan->device->dev, sgl, sg_len, direction);
	if (nr_sg == 0)
		/* error */

	desc = chan->device->device_prep_slave_sg(chan, sgl, nr_sg,
			direction, flags);

Once a descriptor has been obtained, the callback information can be
added and the descriptor must then be submitted.  Some DMA engine
drivers may hold a spinlock between a successful preparation and
submission, so it is important that these two operations are closely
paired.

Note:
	Although the async_tx API specifies that completion callback
	routines cannot submit any new operations, this is not the
	case for slave/cyclic DMA.

	For slave DMA, the subsequent transaction may not be available
	for submission prior to the callback function being invoked, so
	slave DMA callbacks are permitted to prepare and submit a new
	transaction.

	For cyclic DMA, a callback function may wish to terminate the
	DMA via dmaengine_terminate_all().

	Therefore, it is important that DMA engine drivers drop any
	locks before calling the callback function, which may otherwise
	cause a deadlock.

	Note that callbacks will always be invoked from the DMA
	engine's tasklet, never from interrupt context.

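The callback information mentioned above lives in the returned
struct dma_async_tx_descriptor; a minimal sketch ('my_callback' and
'my_data' are hypothetical client names):

	static void my_callback(void *param)
	{
		/* invoked from the DMA engine's tasklet on completion */
	}

	desc->callback = my_callback;
	desc->callback_param = my_data;
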
4. Submit the transaction

Once the descriptor has been prepared and the callback information
added, it must be placed on the DMA engine driver's pending queue.

Interface:
	dma_cookie_t dmaengine_submit(struct dma_async_tx_descriptor *desc)

This returns a cookie that can be used to check the progress of DMA
engine activity via other DMA engine calls not covered in this
document.

dmaengine_submit() will not start the DMA operation, it merely adds
it to the pending queue.  For this, see step 5, dma_async_issue_pending.

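A minimal submission sketch:

	dma_cookie_t cookie;

	cookie = dmaengine_submit(desc);
	if (dma_submit_error(cookie))
		/* descriptor was not accepted */
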
5. Issue pending DMA requests and wait for callback notification

The transactions in the pending queue can be activated by calling the
issue_pending API.  If the channel is idle then the first transaction
in the queue is started and subsequent ones queued up.

On completion of each DMA operation, the next in queue is started and
a tasklet triggered.  The tasklet will then call the client driver's
completion callback routine for notification, if set.

Interface:
	void dma_async_issue_pending(struct dma_chan *chan);

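Putting steps 3-5 together, a transmit path might be sketched as
follows (the buffer names, callback names and the DMA_PREP_INTERRUPT
flag choice are illustrative, not mandatory):

	nr_sg = dma_map_sg(chan->device->dev, sgl, sg_len, DMA_TO_DEVICE);
	desc = chan->device->device_prep_slave_sg(chan, sgl, nr_sg,
			DMA_TO_DEVICE, DMA_PREP_INTERRUPT);
	if (!desc)
		/* preparation failed */
	desc->callback = my_callback;		/* hypothetical */
	desc->callback_param = my_data;
	cookie = dmaengine_submit(desc);
	dma_async_issue_pending(chan);		/* DMA actually starts here */
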
Further APIs:

1. int dmaengine_terminate_all(struct dma_chan *chan)

This causes all activity for the DMA channel to be stopped, and may
discard data in the DMA FIFO which hasn't been fully transferred.
No callback functions will be called for any incomplete transfers.

2. int dmaengine_pause(struct dma_chan *chan)

This pauses activity on the DMA channel without data loss.

3. int dmaengine_resume(struct dma_chan *chan)

Resume a previously paused DMA channel.  It is invalid to resume a
channel which is not currently paused.

4. enum dma_status dma_async_is_tx_complete(struct dma_chan *chan,
        dma_cookie_t cookie, dma_cookie_t *last, dma_cookie_t *used)

This can be used to check the status of the channel.  Please see the
documentation in include/linux/dmaengine.h for a more complete
description of this API.

This can be used in conjunction with dma_async_is_complete() and the
cookie returned from dmaengine_submit() to check for completion of a
specific DMA transaction.

Note:
	Not all DMA engine drivers can return reliable information for
	a running DMA channel.  It is recommended that DMA engine users
	pause or stop (via dmaengine_terminate_all()) the channel before
	using this API.
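
For instance, polling for completion of a transaction submitted
earlier (using the cookie from dmaengine_submit()) might be sketched
as:

	enum dma_status status;
	dma_cookie_t last, used;

	status = dma_async_is_tx_complete(chan, cookie, &last, &used);
	if (dma_async_is_complete(cookie, last, used) == DMA_SUCCESS)
		/* the transaction identified by 'cookie' has finished */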