author     Dan Williams <dan.j.williams@intel.com>   2009-01-05 19:14:31 -0500
committer  Dan Williams <dan.j.williams@intel.com>   2009-01-05 20:10:19 -0500
commit     28405d8d9ce05f5bd869ef8b48da5086f9527d73 (patch)
tree       f3e68349cc38b4dc0a437455ba9513be78f620ae /Documentation
parent     fe0bdec68b77020281dc814805edfe594ae89e0f (diff)
async_tx, dmaengine: document channel allocation and api rework
"Wouldn't it be better if the dmaengine layer made sure it didn't pass
the same channel several times to a client?
I mean, you seem concerned that the memcpy() API should be transparent
and easy to use, but the whole registration interface is just
ridiculously complicated..."
- Haavard
The dmaengine and async_tx registration/allocation interface is indeed
needlessly complicated. This redesign has the following goals:
1/ Simplify reference counting: dma channels are not something one would
   expect to be hotplugged; removal should be an exceptional event handled
   by drivers, not something clients are mandated to handle in a callback.
   The common-case channel removal event is 'rmmod <dma driver>', which
   for simplicity should be disallowed while the channel is in use.
2/ Add an interface for requesting exclusive access to a channel,
   suitable for device-to-memory users (see the sketch after this list).
3/ Convert all memory-to-memory users over to a common allocator; the
   goal is to avoid competing channel allocation schemes. The only
   competition should be between device-to-memory exclusive allocations
   and the memory-to-memory case where channels are shared between
   multiple "clients".
Cc: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Cc: Neil Brown <neilb@suse.de>
Cc: Jeff Garzik <jeff@garzik.org>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Diffstat (limited to 'Documentation')
-rw-r--r--  Documentation/crypto/async-tx-api.txt  96
-rw-r--r--  Documentation/dmaengine.txt              1
2 files changed, 45 insertions, 52 deletions
diff --git a/Documentation/crypto/async-tx-api.txt b/Documentation/crypto/async-tx-api.txt
index c1e9545c59bd..9f59fcbf5d82 100644
--- a/Documentation/crypto/async-tx-api.txt
+++ b/Documentation/crypto/async-tx-api.txt
@@ -13,9 +13,9 @@
 3.6 Constraints
 3.7 Example

-4 DRIVER DEVELOPER NOTES
+4 DMAENGINE DRIVER DEVELOPER NOTES
 4.1 Conformance points
-4.2 "My application needs finer control of hardware channels"
+4.2 "My application needs exclusive control of hardware channels"

 5 SOURCE

@@ -150,6 +150,7 @@ ops_run_* and ops_complete_* routines in drivers/md/raid5.c for more
 implementation examples.

 4 DRIVER DEVELOPMENT NOTES
+
 4.1 Conformance points:
 There are a few conformance points required in dmaengine drivers to
 accommodate assumptions made by applications using the async_tx API:
@@ -158,58 +159,49 @@ accommodate assumptions made by applications using the async_tx API:
 3/ Use async_tx_run_dependencies() in the descriptor clean up path to
    handle submission of dependent operations

-4.2 "My application needs finer control of hardware channels"
-This requirement seems to arise from cases where a DMA engine driver is
-trying to support device-to-memory DMA. The dmaengine and async_tx
-implementations were designed for offloading memory-to-memory
-operations; however, there are some capabilities of the dmaengine layer
-that can be used for platform-specific channel management.
-Platform-specific constraints can be handled by registering the
-application as a 'dma_client' and implementing a 'dma_event_callback' to
-apply a filter to the available channels in the system. Before showing
-how to implement a custom dma_event callback some background of
-dmaengine's client support is required.
-
-The following routines in dmaengine support multiple clients requesting
-use of a channel:
-- dma_async_client_register(struct dma_client *client)
-- dma_async_client_chan_request(struct dma_client *client)
-
-dma_async_client_register takes a pointer to an initialized dma_client
-structure. It expects that the 'event_callback' and 'cap_mask' fields
-are already initialized.
-
-dma_async_client_chan_request triggers dmaengine to notify the client of
-all channels that satisfy the capability mask. It is up to the client's
-event_callback routine to track how many channels the client needs and
-how many it is currently using. The dma_event_callback routine returns a
-dma_state_client code to let dmaengine know the status of the
-allocation.
-
-Below is the example of how to extend this functionality for
-platform-specific filtering of the available channels beyond the
-standard capability mask:
-
-static enum dma_state_client
-my_dma_client_callback(struct dma_client *client,
-		struct dma_chan *chan, enum dma_state state)
-{
-	struct dma_device *dma_dev;
-	struct my_platform_specific_dma *plat_dma_dev;
-
-	dma_dev = chan->device;
-	plat_dma_dev = container_of(dma_dev,
-			struct my_platform_specific_dma,
-			dma_dev);
-
-	if (!plat_dma_dev->platform_specific_capability)
-		return DMA_DUP;
-
-	. . .
-}
+4.2 "My application needs exclusive control of hardware channels"
+Primarily this requirement arises from cases where a DMA engine driver
+is being used to support device-to-memory operations. A channel that is
+performing these operations cannot, for many platform specific reasons,
+be shared. For these cases the dma_request_channel() interface is
+provided.
+
+The interface is:
+struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
+                                     dma_filter_fn filter_fn,
+                                     void *filter_param);
+
+Where dma_filter_fn is defined as:
+typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);
+
+When the optional 'filter_fn' parameter is set to NULL
+dma_request_channel simply returns the first channel that satisfies the
+capability mask. Otherwise, when the mask parameter is insufficient for
+specifying the necessary channel, the filter_fn routine can be used to
+disposition the available channels in the system. The filter_fn routine
+is called once for each free channel in the system. Upon seeing a
+suitable channel filter_fn returns DMA_ACK which flags that channel to
+be the return value from dma_request_channel. A channel allocated via
+this interface is exclusive to the caller, until dma_release_channel()
+is called.
+
+The DMA_PRIVATE capability flag is used to tag dma devices that should
+not be used by the general-purpose allocator. It can be set at
+initialization time if it is known that a channel will always be
+private. Alternatively, it is set when dma_request_channel() finds an
+unused "public" channel.
+
+A couple caveats to note when implementing a driver and consumer:
+1/ Once a channel has been privately allocated it will no longer be
+   considered by the general-purpose allocator even after a call to
+   dma_release_channel().
+2/ Since capabilities are specified at the device level a dma_device
+   with multiple channels will either have all channels public, or all
+   channels private.

 5 SOURCE
-include/linux/dmaengine.h: core header file for DMA drivers and clients
+
+include/linux/dmaengine.h: core header file for DMA drivers and api users
 drivers/dma/dmaengine.c: offload engine channel management routines
 drivers/dma/: location for offload engine drivers
 include/linux/async_tx.h: core header file for the async_tx api
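
For comparison with the dma_event_callback example removed in the hunk
above, here is a hedged sketch of the same platform-specific check
expressed as a dma_filter_fn under the new interface; 'struct
my_platform_specific_dma' and its 'platform_specific_capability' field
are the hypothetical types carried over from the old example:

static bool my_dma_filter(struct dma_chan *chan, void *filter_param)
{
	struct my_platform_specific_dma *plat_dma_dev;

	/* recover the platform wrapper that embeds this dma_device */
	plat_dma_dev = container_of(chan->device,
				    struct my_platform_specific_dma,
				    dma_dev);

	/* returning true marks this channel as the one to hand back
	 * from dma_request_channel() */
	return plat_dma_dev->platform_specific_capability != 0;
}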
diff --git a/Documentation/dmaengine.txt b/Documentation/dmaengine.txt
new file mode 100644
index 000000000000..0c1c2f63c0a9
--- /dev/null
+++ b/Documentation/dmaengine.txt
@@ -0,0 +1 @@
+See Documentation/crypto/async-tx-api.txt
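
On the driver side, the DMA_PRIVATE paragraph in the async-tx-api.txt
hunk above notes that a device whose channels should never be visible to
the general-purpose allocator can be tagged at initialization time. A
minimal sketch, assuming a driver-private 'struct dma_device' that is
otherwise fully initialized before registration:

#include <linux/dmaengine.h>

static int my_register_private_dma(struct dma_device *my_dma_dev)
{
	/* keep every channel on this device out of the public pool */
	dma_cap_set(DMA_PRIVATE, my_dma_dev->cap_mask);

	return dma_async_device_register(my_dma_dev);
}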