author     Daniel Vetter <daniel.vetter@ffwll.ch>    2012-03-18 19:34:27 -0400
committer  Sumit Semwal <sumit.semwal@ti.com>        2012-03-26 02:03:13 -0400
commit     b0b40f24848440aa65418532fe97d8d46acceb3d (patch)
tree       905a24a4164741ebae7ed41a85c63474c1bdedd6 /Documentation
parent     fc13020e086bfedf2afb95c91c026d5af1f80107 (diff)

dma_buf: Add documentation for the new cpu access support

v2: Fix spelling issues noticed by Rob Clark.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Rob Clark <rob@ti.com>
Signed-off-by: Sumit Semwal <sumit.semwal@linaro.org>
Diffstat (limited to 'Documentation')
-rw-r--r--  Documentation/dma-buf-sharing.txt  102
1 file changed, 99 insertions, 3 deletions
diff --git a/Documentation/dma-buf-sharing.txt b/Documentation/dma-buf-sharing.txt
index 225f96d88f55..9f3aeeffd608 100644
--- a/Documentation/dma-buf-sharing.txt
+++ b/Documentation/dma-buf-sharing.txt
@@ -32,8 +32,12 @@ The buffer-user
 *IMPORTANT*: [see https://lkml.org/lkml/2011/12/20/211 for more details]
 For this first version, A buffer shared using the dma_buf sharing API:
 - *may* be exported to user space using "mmap" *ONLY* by exporter, outside of
   this framework.
-- may be used *ONLY* by importers that do not need CPU access to the buffer.
+- with this new iteration of the dma-buf api cpu access from the kernel has been
+  enabled, see below for the details.
+
+dma-buf operations for device dma only
+--------------------------------------
 
 The dma_buf buffer sharing API usage contains the following steps:
 
@@ -219,7 +223,99 @@ NOTES:
 If the exporter chooses not to allow an attach() operation once a
 map_dma_buf() API has been called, it simply returns an error.
 
-Miscellaneous notes:
+Kernel cpu access to a dma-buf buffer object
+--------------------------------------------
+
+The motivations to allow cpu access from the kernel to a dma-buf object from the
+importer's side are:
+- fallback operations, e.g. if the device is connected to a usb bus and the
+  kernel needs to shuffle the data around first before sending it away.
+- full transparency for existing users on the importer side, i.e. userspace
+  should not notice the difference between a normal object from that subsystem
+  and an imported one backed by a dma-buf. This is really important for drm
+  opengl drivers that expect to still use all the existing upload/download
+  paths.
+
+Access to a dma_buf from the kernel context involves three steps:
+
+1. Prepare access, which invalidates any necessary caches and makes the object
+   available for cpu access.
+2. Access the object page-by-page with the dma_buf map apis.
+3. Finish access, which will flush any necessary cpu caches and free reserved
+   resources.
+
+1. Prepare access
+
+   Before an importer can access a dma_buf object with the cpu from the kernel
+   context, it needs to notify the exporter of the access that is about to
+   happen.
+
+   Interface:
+      int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
+                                   size_t start, size_t len,
+                                   enum dma_data_direction direction)
+
+   This allows the exporter to ensure that the memory is actually available for
+   cpu access - the exporter might need to allocate or swap-in and pin the
+   backing storage. The exporter also needs to ensure that cpu access is
+   coherent for the given range and access direction. The range and access
+   direction can be used by the exporter to optimize the cache flushing, i.e.
+   access outside of the range or with a different direction (read instead of
+   write) might return stale or even bogus data (e.g. when the exporter needs to
+   copy the data to temporary storage).
+
+   This step might fail, e.g. in oom conditions.
+
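   For illustration only, a minimal importer-side sketch of this step - the
   helper name, the read-only direction and the zero offset are assumptions,
   and only the interface quoted above (plus <linux/dma-buf.h>) is relied on:

      #include <linux/dma-buf.h>

      /* Hypothetical importer helper: ask the exporter to make the first
       * 'len' bytes of the buffer coherent for a cpu read. */
      static int example_prepare_read(struct dma_buf *buf, size_t len)
      {
              int ret;

              /* May block and may fail, e.g. -ENOMEM in oom conditions,
               * so the return value has to be checked. */
              ret = dma_buf_begin_cpu_access(buf, 0, len, DMA_FROM_DEVICE);
              if (ret)
                      return ret;

              /* cpu access to [0, len) is now allowed until the matching
               * dma_buf_end_cpu_access() call. */
              return 0;
      }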
+2. Accessing the buffer
+
+   To support dma_buf objects residing in highmem, cpu access is page-based using
+   an api similar to kmap. Accessing a dma_buf is done in aligned chunks of
+   PAGE_SIZE size. Before accessing a chunk it needs to be mapped, which returns
+   a pointer in kernel virtual address space. Afterwards the chunk needs to be
+   unmapped again. There is no limit on how often a given chunk can be mapped
+   and unmapped, i.e. the importer does not need to call begin_cpu_access again
+   before mapping the same chunk again.
+
+   Interfaces:
+      void *dma_buf_kmap(struct dma_buf *, unsigned long);
+      void dma_buf_kunmap(struct dma_buf *, unsigned long, void *);
+
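   For illustration only, a sketch of page-by-page access with these two
   interfaces - the helper name is made up, the range is assumed to be
   PAGE_SIZE aligned, and begin_cpu_access is assumed to have already
   succeeded for it:

      #include <linux/dma-buf.h>
      #include <linux/mm.h>       /* PAGE_SIZE */
      #include <linux/string.h>   /* memcpy */

      /* Hypothetical helper: copy 'len' bytes (PAGE_SIZE aligned) out of
       * the dma_buf into 'dst', one PAGE_SIZE chunk at a time. */
      static void example_copy_from_dmabuf(struct dma_buf *buf, char *dst,
                                           size_t len)
      {
              unsigned long pg;
              void *vaddr;

              for (pg = 0; pg < len / PAGE_SIZE; pg++) {
                      vaddr = dma_buf_kmap(buf, pg);   /* map chunk 'pg' */
                      memcpy(dst + pg * PAGE_SIZE, vaddr, PAGE_SIZE);
                      dma_buf_kunmap(buf, pg, vaddr);  /* unmap it again */
              }
      }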
+   There are also atomic variants of these interfaces. As with kmap, they
+   facilitate non-blocking fast-paths. Neither the importer nor the exporter (in
+   the callback) is allowed to block when using these.
+
+   Interfaces:
+      void *dma_buf_kmap_atomic(struct dma_buf *, unsigned long);
+      void dma_buf_kunmap_atomic(struct dma_buf *, unsigned long, void *);
+
+   For importers all the restrictions of using kmap apply, like the limited
+   supply of kmap_atomic slots. Hence an importer shall only hold onto at most 2
+   atomic dma_buf kmaps at the same time (in any given process context).
+
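   For illustration only, a sketch of a non-blocking fast path with the atomic
   variants - the helper is made up and u32 comes from <linux/types.h>; note
   that nothing between the map and the unmap may block:

      /* Hypothetical fast path: peek at the first 32-bit word of chunk 0
       * while holding a single atomic kmap. */
      static u32 example_peek_first_word(struct dma_buf *buf)
      {
              void *vaddr;
              u32 val;

              vaddr = dma_buf_kmap_atomic(buf, 0);
              val = *(u32 *)vaddr;
              dma_buf_kunmap_atomic(buf, 0, vaddr);

              return val;
      }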
+   dma_buf kmap calls outside of the range specified in begin_cpu_access are
+   undefined. If the range is not PAGE_SIZE aligned, kmap needs to succeed on
+   the partial chunks at the beginning and end but may return stale or bogus
+   data outside of the range (in these partial chunks).
+
+   Note that these calls need to always succeed. The exporter needs to complete
+   any preparations that might fail in begin_cpu_access.
+
+3. Finish access
+
+   When the importer is done accessing the range specified in begin_cpu_access,
+   it needs to announce this to the exporter (to facilitate cache flushing and
+   unpinning of any pinned resources). The result of any dma_buf kmap calls
+   after end_cpu_access is undefined.
+
+   Interface:
+      void dma_buf_end_cpu_access(struct dma_buf *dma_buf,
+                                  size_t start, size_t len,
+                                  enum dma_data_direction dir);
+
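   For illustration only, a sketch tying the three steps together for reading
   out a whole buffer - the function is made up, 'len' is assumed to be
   PAGE_SIZE aligned, and the same headers as in the sketches above are
   assumed; only the interfaces quoted in this document are used:

      /* Hypothetical read path: prepare, access page-by-page, finish. */
      static int example_read_dmabuf(struct dma_buf *buf, char *dst, size_t len)
      {
              unsigned long pg;
              void *vaddr;
              int ret;

              /* 1. Prepare access: may fail, e.g. under memory pressure. */
              ret = dma_buf_begin_cpu_access(buf, 0, len, DMA_FROM_DEVICE);
              if (ret)
                      return ret;

              /* 2. Access: map, copy out and unmap each PAGE_SIZE chunk. */
              for (pg = 0; pg < len / PAGE_SIZE; pg++) {
                      vaddr = dma_buf_kmap(buf, pg);
                      memcpy(dst + pg * PAGE_SIZE, vaddr, PAGE_SIZE);
                      dma_buf_kunmap(buf, pg, vaddr);
              }

              /* 3. Finish access: no kmap result may be used past this point. */
              dma_buf_end_cpu_access(buf, 0, len, DMA_FROM_DEVICE);

              return 0;
      }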
+
+Miscellaneous notes
+-------------------
+
 - Any exporters or users of the dma-buf buffer sharing framework must have
   a 'select DMA_SHARED_BUFFER' in their respective Kconfigs.
 