DMA Engine API Guide
====================

Vinod Koul <vinod dot koul at intel.com>

NOTE: For DMA Engine usage in async_tx please see:
	Documentation/crypto/async-tx-api.txt


Below is a guide for device driver writers on how to use the Slave-DMA API
of the DMA Engine. This is applicable for slave DMA usage only.

The slave DMA usage consists of the following steps:
1. Allocate a DMA slave channel
2. Set slave and controller specific parameters
3. Get a descriptor for transaction
4. Submit the transaction
5. Issue pending requests and wait for callback notification

1. Allocate a DMA slave channel

Channel allocation is slightly different in the slave DMA context:
client drivers typically need a channel from a particular DMA
controller only, and in some cases a specific channel is desired.
The dma_request_channel() API is used to request a channel.

Interface:
	struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
			dma_filter_fn filter_fn,
			void *filter_param);
where dma_filter_fn is defined as:
	typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);

The 'filter_fn' parameter is optional, but highly recommended for
slave and cyclic channels as they typically need to obtain a specific
DMA channel.

When the optional 'filter_fn' parameter is NULL, dma_request_channel()
simply returns the first channel that satisfies the capability mask.

Otherwise, the 'filter_fn' routine will be called once for each free
channel which has a capability in 'mask'. 'filter_fn' is expected to
return 'true' when the desired DMA channel is found; a sketch of a
typical request follows below.

A channel allocated via this interface is exclusive to the caller
until dma_release_channel() is called.
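
A minimal sketch of requesting a slave channel. The filter criterion
and all 'my_*' names are illustrative assumptions, not part of any
real driver:

	#include <linux/dmaengine.h>

	/* Hypothetical filter: match a channel of our DMA controller */
	static bool my_filter_fn(struct dma_chan *chan, void *filter_param)
	{
		struct device *my_dma_dev = filter_param;

		return chan->device->dev == my_dma_dev;
	}

	static struct dma_chan *my_request_channel(struct device *my_dma_dev)
	{
		dma_cap_mask_t mask;

		dma_cap_zero(mask);
		dma_cap_set(DMA_SLAVE, mask);

		/* May return NULL if no matching free channel exists */
		return dma_request_channel(mask, my_filter_fn, my_dma_dev);
	}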

2. Set slave and controller specific parameters

The next step is always to pass some specific information to the DMA
driver. Most of the generic information which a slave DMA can use
is in struct dma_slave_config. This allows the clients to specify
DMA direction, DMA addresses, bus widths, DMA burst lengths etc.
for the peripheral.

If a DMA controller needs additional parameters, it should embed
struct dma_slave_config in its controller-specific structure, as
sketched below. That gives the client the flexibility to pass more
parameters, if required.
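
A minimal sketch of such embedding (the 'my_*' structure and its
extra members are illustrative assumptions):

	#include <linux/dmaengine.h>

	struct my_dma_slave_config {
		struct dma_slave_config cfg;	/* generic part */
		unsigned int fifo_threshold;	/* controller-specific */
		bool use_hw_handshake;		/* controller-specific */
	};

The client fills in both the generic and the extra members and passes
&my_config.cfg to dmaengine_slave_config(); the controller driver can
then use container_of() to recover the wrapper structure.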

Interface:
	int dmaengine_slave_config(struct dma_chan *chan,
				   struct dma_slave_config *config)

Please see the dma_slave_config structure definition in dmaengine.h
for a detailed explanation of the struct members. Please note
that the 'direction' member will be going away as it duplicates the
direction given in the prepare call.
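
A minimal sketch of configuring a channel for memory-to-device
transfers. The FIFO address, bus width and burst size are
illustrative assumptions for a hypothetical peripheral:

	#include <linux/dmaengine.h>

	static int my_config_tx(struct dma_chan *chan, dma_addr_t fifo_addr)
	{
		struct dma_slave_config cfg = {
			.direction	= DMA_MEM_TO_DEV,
			.dst_addr	= fifo_addr,
			.dst_addr_width	= DMA_SLAVE_BUSWIDTH_4_BYTES,
			.dst_maxburst	= 8,
		};

		return dmaengine_slave_config(chan, &cfg);
	}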

3. Get a descriptor for transaction

For slave usage the various modes of slave transfers supported by the
DMA-engine are:

slave_sg	- DMA a list of scatter gather buffers from/to a peripheral
dma_cyclic	- Perform a cyclic DMA operation from/to a peripheral till the
		  operation is explicitly stopped.
interleaved_dma - This is common to Slave as well as M2M clients. For slave
		  channels, the address of the device's FIFO may already be
		  known to the driver. Various types of operations can be
		  expressed by setting appropriate values in the
		  'dma_interleaved_template' members.

A non-NULL return of this transfer API represents a "descriptor" for
the given transaction.
Interface:
	struct dma_async_tx_descriptor *dmaengine_prep_slave_sg(
		struct dma_chan *chan, struct scatterlist *sgl,
		unsigned int sg_len, enum dma_transfer_direction direction,
		unsigned long flags);

	struct dma_async_tx_descriptor *dmaengine_prep_dma_cyclic(
		struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
		size_t period_len, enum dma_transfer_direction direction);

	struct dma_async_tx_descriptor *dmaengine_prep_interleaved_dma(
		struct dma_chan *chan, struct dma_interleaved_template *xt,
		unsigned long flags);

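A minimal sketch of preparing a cyclic device-to-memory transfer over
a ring buffer. The helper name and period layout are illustrative
assumptions; 'buf' must already be DMA-mapped:

	#include <linux/dmaengine.h>

	static struct dma_async_tx_descriptor *
	my_prep_rx_ring(struct dma_chan *chan, dma_addr_t buf,
			size_t buf_len, size_t period_len)
	{
		/* The completion callback runs once per elapsed period */
		return dmaengine_prep_dma_cyclic(chan, buf, buf_len,
						 period_len, DMA_DEV_TO_MEM);
	}
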
The peripheral driver is expected to have mapped the scatterlist for
the DMA operation prior to calling dmaengine_prep_slave_sg(), and must
keep the scatterlist mapped until the DMA operation has completed.
The scatterlist must be mapped using the DMA struct device.
If a mapping needs to be synchronized later, dma_sync_*_for_*() must be
called using the DMA struct device, too.
So, normal setup should look like this:

	/* 'dir' is the enum dma_data_direction used for the mapping */
	nr_sg = dma_map_sg(chan->device->dev, sgl, sg_len, dir);
	if (nr_sg == 0)
		/* error */

	desc = dmaengine_prep_slave_sg(chan, sgl, nr_sg, direction, flags);

Once a descriptor has been obtained, the callback information can be
added and the descriptor must then be submitted. Some DMA engine
drivers may hold a spinlock between a successful preparation and
submission so it is important that these two operations are closely
paired.

Note:
	Although the async_tx API specifies that completion callback
	routines cannot submit any new operations, this is not the
	case for slave/cyclic DMA.

	For slave DMA, the subsequent transaction may not be available
	for submission prior to the callback function being invoked, so
	slave DMA callbacks are permitted to prepare and submit a new
	transaction, as sketched below.

	For cyclic DMA, a callback function may wish to terminate the
	DMA via dmaengine_terminate_all().

	Therefore, it is important that DMA engine drivers drop any
	locks before calling the callback function, as calling it with
	locks held may cause a deadlock.

	Note that callbacks will always be invoked from the DMA
	engine's tasklet, never from interrupt context.

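A minimal sketch of such a slave callback. All 'my_*' names, the
per-device state and the my_prep_next_transfer() helper are
illustrative assumptions:

	#include <linux/dmaengine.h>

	struct my_dev_state {
		struct dma_chan *chan;
		/* ... bookkeeping for the next buffer ... */
	};

	/* Hypothetical helper preparing the next transfer */
	static struct dma_async_tx_descriptor *
	my_prep_next_transfer(struct my_dev_state *st);

	static void my_dma_callback(void *param)
	{
		struct my_dev_state *st = param;
		struct dma_async_tx_descriptor *desc;

		desc = my_prep_next_transfer(st);
		if (!desc)
			return;

		desc->callback = my_dma_callback;
		desc->callback_param = st;
		dmaengine_submit(desc);
		dma_async_issue_pending(st->chan);
	}
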
4. Submit the transaction

Once the descriptor has been prepared and the callback information
added, it must be placed on the DMA engine driver's pending queue.

Interface:
	dma_cookie_t dmaengine_submit(struct dma_async_tx_descriptor *desc)

This returns a cookie that can be used to check the progress of DMA
engine activity via other DMA engine calls not covered in this
document.

dmaengine_submit() will not start the DMA operation, it merely adds
it to the pending queue. For this, see step 5, dma_async_issue_pending;
a sketch of a typical submission follows below.
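
A minimal sketch of pairing the callback information with submission
(my_dma_callback and my_dev_state are the illustrative assumptions
from the callback sketch above):

	#include <linux/dmaengine.h>

	static dma_cookie_t my_submit(struct dma_async_tx_descriptor *desc,
				      struct my_dev_state *st)
	{
		desc->callback = my_dma_callback;
		desc->callback_param = st;

		/* Queues the descriptor; the transfer does not start yet */
		return dmaengine_submit(desc);
	}

The returned cookie can be checked for a failed submission with
dma_submit_error().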

5. Issue pending DMA requests and wait for callback notification

The transactions in the pending queue can be activated by calling the
issue_pending API. If the channel is idle then the first transaction
in the queue is started and subsequent ones are queued up.

On completion of each DMA operation, the next one in the queue is
started and a tasklet is triggered. The tasklet will then call the
client driver completion callback routine for notification, if set.

Interface:
	void dma_async_issue_pending(struct dma_chan *chan);

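A minimal sketch of starting the queued work and waiting for the
callback. struct my_xfer is an illustrative assumption; it is assumed
that x->done was initialized with init_completion() and that
my_xfer_done() was set as the descriptor callback before submission:

	#include <linux/completion.h>
	#include <linux/dmaengine.h>
	#include <linux/errno.h>
	#include <linux/jiffies.h>

	struct my_xfer {
		struct completion done;
	};

	static void my_xfer_done(void *param)
	{
		struct my_xfer *x = param;

		complete(&x->done);
	}

	static int my_start_and_wait(struct dma_chan *chan, struct my_xfer *x)
	{
		dma_async_issue_pending(chan);

		if (!wait_for_completion_timeout(&x->done,
						 msecs_to_jiffies(1000)))
			return -ETIMEDOUT;

		return 0;
	}
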
Further APIs:

1. int dmaengine_terminate_all(struct dma_chan *chan)

   This causes all activity for the DMA channel to be stopped, and may
   discard data in the DMA FIFO which hasn't been fully transferred.
   No callback functions will be called for any incomplete transfers.

2. int dmaengine_pause(struct dma_chan *chan)

   This pauses activity on the DMA channel without data loss.

3. int dmaengine_resume(struct dma_chan *chan)

   Resume a previously paused DMA channel. It is invalid to resume a
   channel which is not currently paused.

4. enum dma_status dma_async_is_tx_complete(struct dma_chan *chan,
	dma_cookie_t cookie, dma_cookie_t *last, dma_cookie_t *used)

   This can be used to check the status of the channel. Please see
   the documentation in include/linux/dmaengine.h for a more complete
   description of this API.

   This can be used in conjunction with dma_async_is_complete() and
   the cookie returned from dmaengine_submit() to check for
   completion of a specific DMA transaction, as sketched after the
   note below.

Note:
	Not all DMA engine drivers can return reliable information for
	a running DMA channel. It is recommended that DMA engine users
	pause or stop (via dmaengine_terminate_all()) the channel before
	using this API.
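
A minimal sketch of such a status check (my_xfer_is_done() is an
illustrative assumption; DMA_SUCCESS was later renamed DMA_COMPLETE
in newer kernels):

	#include <linux/dmaengine.h>

	static bool my_xfer_is_done(struct dma_chan *chan, dma_cookie_t cookie)
	{
		dma_cookie_t last, used;

		dma_async_is_tx_complete(chan, cookie, &last, &used);

		/* Compare the cookie against the last-completed and
		 * last-used cookies reported by the driver */
		return dma_async_is_complete(cookie, last, used) == DMA_SUCCESS;
	}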