author		Linus Torvalds <torvalds@linux-foundation.org>	2011-08-01 19:46:37 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2011-08-01 19:46:37 -0400
commit		12ff47e7f5fb64c566f62e6cf6a3b291c51bd337 (patch)
tree		d9fba3780142af380ccfaf90d8b13363e3475bd7 /Documentation
parent		73bcbac130a59f236ae78ed70ef7a05b45caa19e (diff)
parent		1ae105aa7416087f2920c35c3cd16831d0d09c9c (diff)
Merge branch 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma
* 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma: (37 commits)
  Improve slave/cyclic DMA engine documentation
  dmaengine: pl08x: handle the rest of enums in pl08x_width
  DMA: PL08x: cleanup selection of burst size
  DMA: PL08x: avoid recalculating cctl at each prepare
  DMA: PL08x: cleanup selection of buswidth
  DMA: PL08x: constify plchan->cd and plat->slave_channels
  DMA: PL08x: separately store source/destination cctl
  DMA: PL08x: separately store source/destination slave address
  DMA: PL08x: clean up LLI debugging
  DMA: PL08x: select LLI bus only once per LLI setup
  DMA: PL08x: remove unused constants
  ARM: mxs-dma: reset after disable channel
  dma: intel_mid_dma: remove redundant pci_set_drvdata calls
  dma: mxs-dma: fix unterminated platform_device_id table
  dmaengine: pl330: make platform data optional
  dmaengine: imx-sdma: return proper error if kzalloc fails
  pch_dma: Fix CTL register access issue
  dmaengine: mxs-dma: skip request_irq for NO_IRQ
  dmaengine/coh901318: fix slave submission semantics
  dmaengine/ste_dma40: allow memory buswidth/burst to be configured
  ...

Fix trivial whitespace conflict in drivers/dma/mv_xor.c
Diffstat (limited to 'Documentation')
-rw-r--r--	Documentation/dmaengine.txt	234
1 file changed, 164 insertions(+), 70 deletions(-)
diff --git a/Documentation/dmaengine.txt b/Documentation/dmaengine.txt
index 5a0cb1ef6164..94b7e0f96b38 100644
--- a/Documentation/dmaengine.txt
+++ b/Documentation/dmaengine.txt
@@ -10,87 +10,181 @@ NOTE: For DMA Engine usage in async_tx please see:
Below is a guide to device driver writers on how to use the Slave-DMA API of the
DMA Engine. This is applicable only for slave DMA usage.

The slave DMA usage consists of the following steps:
1. Allocate a DMA slave channel
2. Set slave and controller specific parameters
3. Get a descriptor for transaction
4. Submit the transaction
5. Issue pending requests and wait for callback notification

1. Allocate a DMA slave channel

   Channel allocation is slightly different in the slave DMA context;
   client drivers typically need a channel from a particular DMA
   controller only, and in some cases a specific channel is desired.
   To request a channel, the dma_request_channel() API is used.

   Interface:
	struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
			dma_filter_fn filter_fn,
			void *filter_param);
   where dma_filter_fn is defined as:
	typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);

   The 'filter_fn' parameter is optional, but highly recommended for
   slave and cyclic channels as they typically need to obtain a specific
   DMA channel.

   When the optional 'filter_fn' parameter is NULL, dma_request_channel()
   simply returns the first channel that satisfies the capability mask.

   Otherwise, the 'filter_fn' routine will be called once for each free
   channel which has a capability in 'mask'. 'filter_fn' is expected to
   return 'true' when the desired DMA channel is found.

   A channel allocated via this interface is exclusive to the caller,
   until dma_release_channel() is called.

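   As an illustration, allocating a slave channel with a filter might be
   sketched as follows; the filter function and the id it matches on
   ('my_dma_filter', 'chan_id') are hypothetical names for this example:

	static bool my_dma_filter(struct dma_chan *chan, void *filter_param)
	{
		/* hypothetical: match a controller-specific channel id */
		return chan->chan_id == *(int *)filter_param;
	}

	dma_cap_mask_t mask;
	struct dma_chan *chan;
	int chan_id = 0;

	dma_cap_zero(mask);
	dma_cap_set(DMA_SLAVE, mask);
	chan = dma_request_channel(mask, my_dma_filter, &chan_id);
	if (!chan)
		/* no matching channel available */
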
2. Set slave and controller specific parameters

   The next step is always to pass some specific information to the DMA
   driver. Most of the generic information which a slave DMA can use
   is in struct dma_slave_config. This allows the clients to specify
   DMA direction, DMA addresses, bus widths, DMA burst lengths etc.
   for the peripheral.

   If some DMA controllers have more parameters to be sent then they
   should try to embed struct dma_slave_config in their controller
   specific structure. That gives flexibility to the client to pass more
   parameters, if required.

   Interface:
	int dmaengine_slave_config(struct dma_chan *chan,
				   struct dma_slave_config *config)

   Please see the dma_slave_config structure definition in dmaengine.h
   for a detailed explanation of the struct members. Please note
   that the 'direction' member will be going away as it duplicates the
   direction given in the prepare call.

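   As a sketch, configuring the channel for memory-to-peripheral
   transfers might look like this; the FIFO address ('MY_PERIPHERAL_FIFO')
   and the chosen widths are assumptions made up for the example:

	struct dma_slave_config config;
	int ret;

	memset(&config, 0, sizeof(config));
	config.direction = DMA_TO_DEVICE;
	config.dst_addr = MY_PERIPHERAL_FIFO;	/* hypothetical register */
	config.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
	config.dst_maxburst = 4;

	ret = dmaengine_slave_config(chan, &config);
	if (ret)
		/* channel configuration failed */
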
3. Get a descriptor for transaction

   For slave usage the various modes of slave transfers supported by the
   DMA-engine are:

   slave_sg	- DMA a list of scatter gather buffers from/to a peripheral
   dma_cyclic	- Perform a cyclic DMA operation from/to a peripheral till the
		  operation is explicitly stopped.

   A non-NULL return of this transfer API represents a "descriptor" for
   the given transaction.

   Interface:
	struct dma_async_tx_descriptor *(*chan->device->device_prep_slave_sg)(
		struct dma_chan *chan, struct scatterlist *sgl,
		unsigned int sg_len, enum dma_data_direction direction,
		unsigned long flags);

	struct dma_async_tx_descriptor *(*chan->device->device_prep_dma_cyclic)(
		struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
		size_t period_len, enum dma_data_direction direction);

   The peripheral driver is expected to have mapped the scatterlist for
   the DMA operation prior to calling device_prep_slave_sg, and must
   keep the scatterlist mapped until the DMA operation has completed.
   The scatterlist must be mapped using the DMA struct device. So,
   normal setup should look like this:

	nr_sg = dma_map_sg(chan->device->dev, sgl, sg_len, direction);
	if (nr_sg == 0)
		/* error */

	desc = chan->device->device_prep_slave_sg(chan, sgl, nr_sg,
			direction, flags);

   Once a descriptor has been obtained, the callback information can be
   added and the descriptor must then be submitted. Some DMA engine
   drivers may hold a spinlock between a successful preparation and
   submission, so it is important that these two operations are closely
   paired.

   Note:
	Although the async_tx API specifies that completion callback
	routines cannot submit any new operations, this is not the
	case for slave/cyclic DMA.

	For slave DMA, the subsequent transaction may not be available
	for submission prior to the callback function being invoked, so
	slave DMA callbacks are permitted to prepare and submit a new
	transaction.

	For cyclic DMA, a callback function may wish to terminate the
	DMA via dmaengine_terminate_all().

	Therefore, it is important that DMA engine drivers drop any
	locks before calling the callback function, as holding them
	there may cause a deadlock.

	Note that callbacks will always be invoked from the DMA
	engine's tasklet, never from interrupt context.

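   Concretely, the callback information is attached directly to the
   descriptor before submission; 'my_dma_complete' and 'my_dev' are
   driver-provided names used only for illustration:

	desc->callback = my_dma_complete;
	desc->callback_param = my_dev;
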
4. Submit the transaction

   Once the descriptor has been prepared and the callback information
   added, it must be placed on the DMA engine driver's pending queue.

   Interface:
	dma_cookie_t dmaengine_submit(struct dma_async_tx_descriptor *desc)

   This returns a cookie that can be used to check the progress of DMA
   engine activity via other DMA engine calls not covered in this
   document.

   dmaengine_submit() will not start the DMA operation, it merely adds
   it to the pending queue. For this, see step 5, dma_async_issue_pending.

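   For example, a submission with a basic error check might be sketched
   as:

	dma_cookie_t cookie;

	cookie = dmaengine_submit(desc);
	if (dma_submit_error(cookie))
		/* submission failed, descriptor not queued */
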
5. Issue pending DMA requests and wait for callback notification

   The transactions in the pending queue can be activated by calling the
   issue_pending API. If the channel is idle then the first transaction
   in the queue is started and subsequent ones queued up.

   On completion of each DMA operation, the next in queue is started and
   a tasklet triggered. The tasklet will then call the client driver's
   completion callback routine for notification, if set.

   Interface:
	void dma_async_issue_pending(struct dma_chan *chan);

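   A typical client kicks the engine and then waits, for instance on a
   driver-private completion signalled from the callback; the
   'my_dev->dma_done' completion here is an assumption for the sketch:

	dma_async_issue_pending(chan);
	wait_for_completion(&my_dev->dma_done);
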
Further APIs:

1. int dmaengine_terminate_all(struct dma_chan *chan)

   This causes all activity for the DMA channel to be stopped, and may
   discard data in the DMA FIFO which hasn't been fully transferred.
   No callback functions will be called for any incomplete transfers.

2. int dmaengine_pause(struct dma_chan *chan)

   This pauses activity on the DMA channel without data loss.

3. int dmaengine_resume(struct dma_chan *chan)

   Resume a previously paused DMA channel. It is invalid to resume a
   channel which is not currently paused.

4. enum dma_status dma_async_is_tx_complete(struct dma_chan *chan,
	dma_cookie_t cookie, dma_cookie_t *last, dma_cookie_t *used)

   This can be used to check the status of the channel. Please see
   the documentation in include/linux/dmaengine.h for a more complete
   description of this API.

   This can be used in conjunction with dma_async_is_complete() and
   the cookie returned from 'descriptor->submit()' to check for
   completion of a specific DMA transaction.

   Note:
	Not all DMA engine drivers can return reliable information for
	a running DMA channel. It is recommended that DMA engine users
	pause or stop (via dmaengine_terminate_all) the channel before
	using this API.
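
   A status poll using these calls might be sketched as follows, with
   'cookie' being the value returned at submission time:

	enum dma_status status;
	dma_cookie_t last, used;

	status = dma_async_is_tx_complete(chan, cookie, &last, &used);
	if (dma_async_is_complete(cookie, last, used) == DMA_SUCCESS)
		/* that specific transaction has completed */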