author    Daniel Vetter <daniel.vetter@ffwll.ch>  2016-12-09 13:53:08 -0500
committer Sumit Semwal <sumit.semwal@linaro.org>  2016-12-13 07:23:32 -0500
commit    0959a1683d78270bab6381d498707fb8655ae11c (patch)
tree      fdadda6d04dc830b974624e5f890462a81b52365 /drivers/dma-buf
parent    2904a8c1311f02896635fd35744262413a0b2726 (diff)
dma-buf: Update cpu access documentation
- Again move the information relevant for driver writers next to the callbacks.
- Put the overview and userspace interface documentation into a DOC: section within the code.
- Remove the text that mmap needs to be coherent - since the DMA_BUF_IOCTL_SYNC landed that's no longer the case. But keep the text that for pte zapping exporters need to adjust the address space.
- Add a FIXME that kmap and the new begin/end stuff used by the SYNC ioctl don't really mix correctly. That's something I just realized while doing this doc rework.
- Augment function and structure docs like usual.

Cc: linux-doc@vger.kernel.org
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Signed-off-by: Sumit Semwal <sumit.semwal@linaro.org>
[sumits: fix cosmetic issues]
Link: http://patchwork.freedesktop.org/patch/msgid/20161209185309.1682-5-daniel.vetter@ffwll.ch
Diffstat (limited to 'drivers/dma-buf')
-rw-r--r--  drivers/dma-buf/dma-buf.c  122
1 file changed, 122 insertions, 0 deletions
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 09f948fd62ad..eae0846cbd95 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -640,6 +640,122 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
 }
 EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);
 
+/**
+ * DOC: cpu access
+ *
+ * There are multiple reasons for supporting CPU access to a dma buffer object:
+ *
+ * - Fallback operations in the kernel, for example when a device is connected
+ *   over USB and the kernel needs to shuffle the data around first before
+ *   sending it away. Cache coherency is handled by bracketing any transactions
+ *   with calls to dma_buf_begin_cpu_access() and dma_buf_end_cpu_access().
+ *
+ *   To support dma_buf objects residing in highmem, CPU access is page-based,
+ *   using an API similar to kmap. Accessing a dma_buf is done in aligned chunks
+ *   of PAGE_SIZE size. Before accessing a chunk it needs to be mapped, which
+ *   returns a pointer in kernel virtual address space. Afterwards the chunk
+ *   needs to be unmapped again. There is no limit on how often a given chunk
+ *   can be mapped and unmapped, i.e. the importer does not need to call
+ *   begin_cpu_access again before mapping the same chunk again.
+ *
+ *   Interfaces::
+ *
+ *      void *dma_buf_kmap(struct dma_buf *, unsigned long);
+ *      void dma_buf_kunmap(struct dma_buf *, unsigned long, void *);
+ *
+ *   There are also atomic variants of these interfaces. Like for kmap, they
+ *   facilitate non-blocking fast-paths. Neither the importer nor the exporter
+ *   (in the callback) is allowed to block when using these.
+ *
+ *   Interfaces::
+ *
+ *      void *dma_buf_kmap_atomic(struct dma_buf *, unsigned long);
+ *      void dma_buf_kunmap_atomic(struct dma_buf *, unsigned long, void *);
+ *
+ *   For importers all the restrictions of using kmap apply, like the limited
+ *   supply of kmap_atomic slots. Hence an importer shall only hold onto at
+ *   most two atomic dma_buf kmaps at the same time (in any given process
+ *   context).
+ *
+ *   dma_buf kmap calls outside of the range specified in begin_cpu_access are
+ *   undefined. If the range is not PAGE_SIZE aligned, kmap needs to succeed on
+ *   the partial chunks at the beginning and end but may return stale or bogus
+ *   data outside of the range (in these partial chunks).
+ *
+ *   Note that these calls need to always succeed. The exporter needs to
+ *   complete any preparations that might fail in begin_cpu_access.
+ *
+ *   For some cases the overhead of kmap can be too high, so a vmap interface
+ *   is introduced. This interface should be used very carefully, as vmalloc
+ *   space is a limited resource on many architectures.
+ *
+ *   Interfaces::
+ *
+ *      void *dma_buf_vmap(struct dma_buf *dmabuf)
+ *      void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)
+ *
+ *   The vmap call can fail if there is no vmap support in the exporter, or if
+ *   it runs out of vmalloc space. Fallback to kmap should be implemented. Note
+ *   that the dma-buf layer keeps a reference count for all vmap access and
+ *   calls down into the exporter's vmap function only when no vmapping exists,
+ *   and only unmaps it once. Protection against concurrent vmap/vunmap calls is
+ *   provided by taking the dma_buf->lock mutex.
+ *
+ * - For full compatibility on the importer side with existing userspace
+ *   interfaces, which might already support mmap'ing buffers. This is needed
+ *   in many processing pipelines (e.g. feeding a software-rendered image into
+ *   a hardware pipeline, thumbnail creation, snapshots, ...). Also, Android's
+ *   ION framework already supported this, and mmap support was needed for
+ *   DMA buffer file descriptors to replace ION buffers.
+ *
+ *   There are no special interfaces: userspace simply calls mmap on the
+ *   dma-buf fd. But like for CPU access there's a need to bracket the actual
+ *   access, which is handled by the DMA_BUF_IOCTL_SYNC ioctl. Note that
+ *   DMA_BUF_IOCTL_SYNC can fail with -EAGAIN or -EINTR, in which case it must
+ *   be restarted.
+ *
+ *   Some systems might need some form of cache coherency management, e.g. when
+ *   CPU and GPU domains are being accessed through dma-buf at the same time.
+ *   To circumvent this problem there are begin/end coherency markers that
+ *   forward directly to the existing dma-buf device drivers' vfunc hooks.
+ *   Userspace can make use of those markers through the DMA_BUF_IOCTL_SYNC
+ *   ioctl. The sequence would be used like the following:
+ *
+ *   - mmap the dma-buf fd
+ *   - for each drawing/upload cycle in the CPU: 1. SYNC_START ioctl,
+ *     2. read/write to the mmap area, 3. SYNC_END ioctl. This can be repeated
+ *     as often as you want (with the new data being consumed by, say, the GPU
+ *     or the scanout device)
+ *   - munmap once you don't need the buffer any more
+ *
+ *   For correctness and optimal performance, it is always required to use
+ *   SYNC_START and SYNC_END before and after, respectively, when accessing the
+ *   mapped address. Userspace cannot rely on coherent access, even when there
+ *   are systems where it just works without calling these ioctls.
+ *
+ * - And as a CPU fallback in userspace processing pipelines.
+ *
+ *   Similar to the motivation for kernel CPU access it is again important that
+ *   the userspace code of a given importing subsystem can use the same
+ *   interfaces with an imported dma-buf buffer object as with a native buffer
+ *   object. This is especially important for drm, where the userspace part of
+ *   contemporary OpenGL, X, and other drivers is huge, and reworking them to
+ *   use a different way to mmap a buffer would be rather invasive.
+ *
+ *   The assumption in the current dma-buf interfaces is that redirecting the
+ *   initial mmap is all that's needed. A survey of some of the existing
+ *   subsystems shows that no driver seems to do any nefarious thing like
+ *   syncing up with outstanding asynchronous processing on the device or
+ *   allocating special resources at fault time. So hopefully this is good
+ *   enough, since adding interfaces to intercept pagefaults and allow pte
+ *   shootdowns would increase the complexity quite a bit.
+ *
+ *   Interface::
+ *
+ *      int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
+ *                       unsigned long);
+ *
+ *   If the importing subsystem simply provides a special-purpose mmap call to
+ *   set up a mapping in userspace, calling do_mmap with dma_buf->file will
+ *   equally achieve that for a dma-buf object.
+ */
+
 static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
 				      enum dma_data_direction direction)
 {
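For illustration, the kmap bracketing described in the DOC section above might look as follows from an importing driver. This is a minimal sketch, not part of the patch; the function name and the fixed chunk index 0 are made up for the example::

    #include <linux/dma-buf.h>
    #include <linux/dma-direction.h>

    /* Hypothetical importer-side fallback: bracket the CPU access with
     * begin/end, and map one PAGE_SIZE chunk at a time with kmap.
     */
    static int example_read_first_chunk(struct dma_buf *dmabuf)
    {
            void *vaddr;
            int ret;

            ret = dma_buf_begin_cpu_access(dmabuf, DMA_FROM_DEVICE);
            if (ret)
                    return ret;

            /* Per the DOC text above, kmap must succeed once
             * begin_cpu_access has completed. Chunk 0 covers the first
             * PAGE_SIZE bytes of the buffer. */
            vaddr = dma_buf_kmap(dmabuf, 0);

            /* ... read at most PAGE_SIZE bytes starting at vaddr ... */

            dma_buf_kunmap(dmabuf, 0, vaddr);

            return dma_buf_end_cpu_access(dmabuf, DMA_FROM_DEVICE);
    }

The same bracketing applies to the atomic variants, with the added restriction that neither side may block between kmap_atomic and kunmap_atomic.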
@@ -665,6 +781,10 @@ static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
  * @dmabuf:	[in]	buffer to prepare cpu access for.
  * @direction:	[in]	direction of access to the buffer.
  *
+ * After the cpu access is complete the caller should call
+ * dma_buf_end_cpu_access(). Only when cpu access is bracketed by both calls
+ * is it guaranteed to be coherent with other DMA access.
+ *
  * Can return negative error values, returns 0 on success.
  */
 int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
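The vmap fallback recommended in the DOC section above could be structured along these lines. Again a sketch with a hypothetical helper name; what to do when neither path is usable is left to the driver::

    #include <linux/dma-buf.h>

    /* Hypothetical helper: prefer one contiguous vmap of the whole
     * buffer, and signal the caller to fall back to per-chunk kmap
     * (as in the previous sketch) when vmap is unavailable.
     */
    static void *example_map_whole_buffer(struct dma_buf *dmabuf)
    {
            /* Fails if the exporter has no vmap support or if vmalloc
             * space is exhausted; the dma-buf layer refcounts the
             * mapping under dma_buf->lock. */
            void *vaddr = dma_buf_vmap(dmabuf);

            if (!vaddr)
                    return NULL;    /* caller falls back to dma_buf_kmap() */

            return vaddr;           /* release with dma_buf_vunmap() */
    }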
@@ -697,6 +817,8 @@ EXPORT_SYMBOL_GPL(dma_buf_begin_cpu_access);
  * @dmabuf:	[in]	buffer to complete cpu access for.
  * @direction:	[in]	direction of access to the buffer.
  *
+ * This terminates CPU access started with dma_buf_begin_cpu_access().
+ *
  * Can return negative error values, returns 0 on success.
  */
 int dma_buf_end_cpu_access(struct dma_buf *dmabuf,
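Finally, the userspace side of the mmap plus DMA_BUF_IOCTL_SYNC sequence from the DOC section might look like the following sketch. Here fd is assumed to be a dma-buf file descriptor of size len obtained elsewhere; the restart loop covers the -EAGAIN/-EINTR cases the documentation calls out::

    #include <errno.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/dma-buf.h>

    static int dmabuf_sync(int fd, uint64_t flags)
    {
            struct dma_buf_sync sync = { .flags = flags };
            int ret;

            /* Restart the ioctl on -EAGAIN/-EINTR as required. */
            do {
                    ret = ioctl(fd, DMA_BUF_IOCTL_SYNC, &sync);
            } while (ret == -1 && (errno == EAGAIN || errno == EINTR));

            return ret;
    }

    /* One CPU write cycle: SYNC_START, touch the mapping, SYNC_END. */
    static int example_fill_buffer(int fd, size_t len)
    {
            void *map = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);

            if (map == MAP_FAILED)
                    return -1;

            if (dmabuf_sync(fd, DMA_BUF_SYNC_START | DMA_BUF_SYNC_WRITE) == 0) {
                    memset(map, 0xaa, len); /* the actual CPU access */
                    dmabuf_sync(fd, DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE);
            }

            munmap(map, len);
            return 0;
    }

Note that munmap by itself flushes nothing; it is the SYNC_END call before it that hands the data back to the device domain.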