author     Chris Wright <chrisw@sous-sol.org>           2011-05-28 14:15:04 -0400
committer  David Woodhouse <David.Woodhouse@intel.com>  2011-06-01 07:47:40 -0400
commit     1c9fc3d11b84fbd0c4f4aa7855702c2a1f098ebb
tree       c06f2bcc3b5aa5d0cb0869815e6247afc736ea7a  /drivers/pci
parent     cb452a4040bb051d92e85d6e7eb60c11734c1781
intel-iommu: Don't cache iova above 32bit
Mike Travis and Mike Habeck reported an issue where iova allocation
would return a range that was larger than a device's dma mask.
https://lkml.org/lkml/2011/3/29/423
The dmar initialization code will reserve all PCI MMIO regions and copy
those reservations into a domain specific iova tree. It is possible for
one of those regions to be above the dma mask of a device. It is typical
to allocate iovas with a 32bit mask (despite the device's dma mask possibly
being larger) and cache the result until the lower 32bit address space is
exhausted. Freeing an iova range that is >= the last iova in the lower 32bit
range, while an iova still exists above the 32bit range, will corrupt the
cached iova by pointing it to that higher region.
If that region is also larger than the device's dma mask, a subsequent
allocation will return an unusable iova and cause dma failure.
Simply don't cache an iova that is above the 32bit caching boundary.
Reported-by: Mike Travis <travis@sgi.com>
Reported-by: Mike Habeck <habeck@sgi.com>
Cc: stable@kernel.org
Acked-by: Mike Travis <travis@sgi.com>
Tested-by: Mike Habeck <habeck@sgi.com>
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
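
The invariant the patch enforces is easier to see in isolation. Below is a
minimal, standalone C model of it; the struct and function names here are
invented for illustration and are not the kernel's iova.c. The point it
demonstrates: the 32bit cache hint must never be left pointing at a node
whose pfn_lo is at or above dma_32bit_pfn, or a later 32bit-limited
allocation starts its search in a region the device may not be able to
address.

#include <stdio.h>

/* Simplified stand-ins for struct iova / struct iova_domain. */
struct iova_model {
	unsigned long pfn_lo;		/* start pfn of the range */
};

struct domain_model {
	struct iova_model *cached32;	/* models iovad->cached32_node */
	unsigned long dma_32bit_pfn;	/* models iovad->dma_32bit_pfn */
};

/*
 * Mirrors the patched update logic: cache the next node only if it
 * still lies below the 32bit boundary; otherwise drop the hint so the
 * next allocation falls back to a full-tree search.
 */
static void update_cache(struct domain_model *d, struct iova_model *next)
{
	if (next && next->pfn_lo < d->dma_32bit_pfn)
		d->cached32 = next;
	else
		d->cached32 = NULL;
}

int main(void)
{
	struct domain_model d = { NULL, 1UL << 20 };	/* boundary pfn */
	struct iova_model high = { 1UL << 21 };		/* above boundary */
	struct iova_model low  = { 1UL << 10 };		/* below boundary */

	update_cache(&d, &high);
	printf("high node: hint %s\n", d.cached32 ? "kept (old bug)" : "dropped");

	update_cache(&d, &low);
	printf("low node:  hint %s\n", d.cached32 ? "kept" : "dropped");
	return 0;
}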
Diffstat (limited to 'drivers/pci')
-rw-r--r--  drivers/pci/iova.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/drivers/pci/iova.c b/drivers/pci/iova.c
index 9606e599a475..c5c274ab5c5a 100644
--- a/drivers/pci/iova.c
+++ b/drivers/pci/iova.c
@@ -63,8 +63,16 @@ __cached_rbnode_delete_update(struct iova_domain *iovad, struct iova *free)
 	curr = iovad->cached32_node;
 	cached_iova = container_of(curr, struct iova, node);
 
-	if (free->pfn_lo >= cached_iova->pfn_lo)
-		iovad->cached32_node = rb_next(&free->node);
+	if (free->pfn_lo >= cached_iova->pfn_lo) {
+		struct rb_node *node = rb_next(&free->node);
+		struct iova *iova = container_of(node, struct iova, node);
+
+		/* only cache if it's below 32bit pfn */
+		if (node && iova->pfn_lo < iovad->dma_32bit_pfn)
+			iovad->cached32_node = node;
+		else
+			iovad->cached32_node = NULL;
+	}
 }
 
 /* Computes the padding size required, to make the
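
For context on how the hint is consumed: in this era's allocator
(__get_cached_rbnode() in drivers/pci/iova.c), a 32bit-limited allocation
starts its top-down search from cached32_node instead of the top of the
tree. The sketch below is again a simplified standalone model with invented
names, not the kernel code, showing why a stale hint above the boundary
makes every candidate pfn exceed the device's mask.

#include <stdio.h>

struct hint_model {
	unsigned long cached_pfn;	/* pfn_lo of the cached node, 0 = none */
	unsigned long dma_32bit_pfn;
};

/* Pick the pfn at which a 32bit-limited search would begin. */
static unsigned long search_start(const struct hint_model *h,
				  unsigned long limit_pfn)
{
	if (limit_pfn == h->dma_32bit_pfn && h->cached_pfn)
		return h->cached_pfn;	/* resume just below the hint */
	return limit_pfn;		/* full search from the limit down */
}

int main(void)
{
	/* Stale hint above the 32bit boundary, as in the reported bug. */
	struct hint_model stale = { 1UL << 21, 1UL << 20 };

	unsigned long start = search_start(&stale, stale.dma_32bit_pfn);
	printf("search starts at pfn 0x%lx, limit 0x%lx -> %s\n",
	       start, stale.dma_32bit_pfn,
	       start > stale.dma_32bit_pfn ? "unusable" : "ok");
	return 0;
}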