author     Armin Reese <armin.c.reese@intel.com>    2014-07-11 13:20:07 -0400
committer  Daniel Vetter <daniel.vetter@ffwll.ch>   2014-07-23 01:05:40 -0400
commit     9490edb58865536e91baf489eff4e67709cf134c (patch)
tree       43d77a364123a4a44c56f1b8ab7b80885575da0d
parent     c17c654d18d0c14fb366bdad00e49fdf82a73aa4 (diff)
drm/i915: Do not unmap object unless no other VMAs reference it
When using an IOMMU, GEM objects are mapped by their DMA address, as the physical address is unknown. This depends on the underlying IOMMU driver to map and unmap the physical pages properly as defined in intel_iommu.c.

The current code tells the IOMMU to unmap the GEM BO's pages on the destruction of the first VMA that "maps" that BO. This is clearly wrong, as there may be other VMAs "mapping" that BO (using flink). The scanout is one such example.

The patch fixes this issue by only unmapping the DMA maps when there are no more VMAs mapping that object. This is equivalent to when an object is considered unbound, as can be seen by the code. On the first VMA that again becomes bound, we will remap.

An alternate solution would be to move the DMA mapping to object creation and destruction. I am not sure if this is considered an unfriendly thing to do.

Some notes to backporters trying to backport full PPGTT: The bug can never be hit without enabling the IOMMU. The existing code will also do the right thing when the object is shared via dmabuf. The failure should be demonstrable with flink. In cases when not using intel_iommu_strict, it is likely (likely, as defined by: off the top of my head) on current workloads to *not* hit this bug, since we often tear down all VMAs for an object shared across multiple VMs, and we also finish access to that object before the first dma_unmapping. intel_iommu_strict with flinked buffers is likely to hit this issue.

Signed-off-by: Armin Reese <armin.c.reese@intel.com>
[danvet: Add the excellent commit message provided by Ben.]
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
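To make the ownership distinction explicit: vma->unbind_vma() tears down only that one VMA's page-table entries, while i915_gem_gtt_finish_object() drops the object-wide DMA (IOMMU) mapping that every VMA of the object relies on. A condensed sketch of the post-patch tail of i915_vma_unbind(), simplified from the real function (which also handles pinning, fencing and error paths):

	vma->unbind_vma(vma);	/* per-VMA: clear this address space's PTEs */

	list_del_init(&vma->mm_list);
	/* Avoid an unnecessary call to unbind on rebind. */
	if (i915_is_ggtt(vma->vm))
		obj->map_and_fenceable = true;

	drm_mm_remove_node(&vma->node);
	i915_gem_vma_destroy(vma);

	/* Only once the last VMA is gone may the shared DMA mapping be
	 * dropped; the object is then truly unbound. The next bind will
	 * remap the pages (via i915_gem_gtt_prepare_object()). */
	if (list_empty(&obj->vma_list)) {
		i915_gem_gtt_finish_object(obj);  /* object-wide: dma-unmap the pages */
		list_move_tail(&obj->global_list, &dev_priv->mm.unbound_list);
	}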
-rw-r--r--  drivers/gpu/drm/i915/i915_gem.c  6
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index ed0b5fc4b6b0..dcd8d7b42552 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2927,8 +2927,6 @@ int i915_vma_unbind(struct i915_vma *vma)
 
 	vma->unbind_vma(vma);
 
-	i915_gem_gtt_finish_object(obj);
-
 	list_del_init(&vma->mm_list);
 	/* Avoid an unnecessary call to unbind on rebind. */
 	if (i915_is_ggtt(vma->vm))
@@ -2939,8 +2937,10 @@ int i915_vma_unbind(struct i915_vma *vma)
 
 	/* Since the unbound list is global, only move to that list if
 	 * no more VMAs exist. */
-	if (list_empty(&obj->vma_list))
+	if (list_empty(&obj->vma_list)) {
+		i915_gem_gtt_finish_object(obj);
 		list_move_tail(&obj->global_list, &dev_priv->mm.unbound_list);
+	}
 
 	/* And finally now the object is completely decoupled from this vma,
 	 * we can drop its hold on the backing storage and allow it to be
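As a reproduction aid for the flink scenario the commit message describes, here is a minimal userspace sketch of the sharing step, assuming libdrm (the helper name, file descriptors and error handling are illustrative, not from the patch). Sharing one BO between two opens of the device gives it two handles, and with full PPGTT two VMAs, so unbinding behind one handle must not tear down the DMA mapping still used by the other:

#include <stdint.h>
#include <xf86drm.h>
#include <drm/drm.h>

/* Export a GEM handle from fd_a via flink and import it on fd_b.
 * Both handles now reference the same BO and, with an IOMMU enabled,
 * the same DMA mapping. Returns 0 on success, -1 on ioctl failure. */
int share_bo(int fd_a, int fd_b, uint32_t handle_a, uint32_t *handle_b)
{
	struct drm_gem_flink flink = { .handle = handle_a };

	if (drmIoctl(fd_a, DRM_IOCTL_GEM_FLINK, &flink))
		return -1;	/* export a global name */

	struct drm_gem_open open_arg = { .name = flink.name };

	if (drmIoctl(fd_b, DRM_IOCTL_GEM_OPEN, &open_arg))
		return -1;	/* import it on the second fd */

	*handle_b = open_arg.handle;
	return 0;
}

With the pre-patch kernel, intel_iommu_strict enabled and a buffer shared this way, closing one handle could unmap the pages out from under the other user; after the patch the unmap is deferred until the object's vma_list is empty.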