author     Quentin Casasnovas <quentin.casasnovas@oracle.com>   2014-10-17 16:55:59 -0400
committer  Paolo Bonzini <pbonzini@redhat.com>                  2014-10-24 07:30:37 -0400
commit     3d32e4dbe71374a6780eaf51d719d76f9a9bf22f (patch)
tree       977b69a4db7362325a5fda148ba2e43e2a8e07df /virt
parent     3f6f1480d86bf9fc16c160d803ab1d006e3058d5 (diff)
kvm: fix excessive pages un-pinning in kvm_iommu_map error path.
The third parameter of kvm_unpin_pages(), when called from
kvm_iommu_map_pages(), is wrong: it should be the number of pages to un-pin,
not the page size.
This error was facilitated by an inconsistent API: kvm_pin_pages() takes
a size, but kvm_unpin_pages() takes a number of pages, so fix the problem
by matching the two.
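
To illustrate the unit mismatch, here is a minimal user-space sketch (not the kernel code; PAGE_SHIFT is fixed at 12 and pages_unpinned() is a hypothetical stand-in for kvm_unpin_pages()'s page-count argument). Passing a byte size where a page count is expected inflates the un-pin count by a factor of PAGE_SIZE:

#include <stdio.h>

#define PAGE_SHIFT 12			/* 4 KiB pages, as on x86 */

/* Hypothetical stand-in: like kvm_unpin_pages(), it interprets its
 * argument as a number of pages. */
static unsigned long pages_unpinned(unsigned long npages)
{
	return npages;
}

int main(void)
{
	unsigned long page_size = 2UL << 20;	/* one 2 MiB huge-page mapping */

	/* Buggy call passes the size in bytes: 2097152 "pages" (8 GiB
	 * of memory) would be un-pinned instead of the 512 that were pinned. */
	printf("buggy:   %lu pages\n", pages_unpinned(page_size));

	/* Fixed call converts bytes to pages first. */
	printf("correct: %lu pages\n", pages_unpinned(page_size >> PAGE_SHIFT));
	return 0;
}
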
This was introduced by commit 350b8bd ("kvm: iommu: fix the third parameter
of kvm_iommu_put_pages (CVE-2014-3601)"), which fixed the missing
un-pinning of pages that were meant to be un-pinned (i.e. a memory leak) but
unfortunately may have increased the number of pages we un-pin that
should have stayed pinned. As far as I understand, though, the same
practical mitigations apply.
This issue was found during review of Red Hat 6.6 patches to prepare
Ksplice rebootless updates.
Thanks to Vegard for taking time on a late Friday evening to help me
understand this code.
Fixes: 350b8bd ("kvm: iommu: fix the third parameter of kvm_iommu_put_pages (CVE-2014-3601)")
Cc: stable@vger.kernel.org
Signed-off-by: Quentin Casasnovas <quentin.casasnovas@oracle.com>
Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
Signed-off-by: Jamie Iles <jamie.iles@oracle.com>
Reviewed-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Diffstat (limited to 'virt')
 virt/kvm/iommu.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/virt/kvm/iommu.c b/virt/kvm/iommu.c
index e51d9f9b995f..c1e6ae989a43 100644
--- a/virt/kvm/iommu.c
+++ b/virt/kvm/iommu.c
@@ -43,13 +43,13 @@ static void kvm_iommu_put_pages(struct kvm *kvm,
 			       gfn_t base_gfn, unsigned long npages);
 
 static pfn_t kvm_pin_pages(struct kvm_memory_slot *slot, gfn_t gfn,
-			   unsigned long size)
+			   unsigned long npages)
 {
 	gfn_t end_gfn;
 	pfn_t pfn;
 
 	pfn     = gfn_to_pfn_memslot(slot, gfn);
-	end_gfn = gfn + (size >> PAGE_SHIFT);
+	end_gfn = gfn + npages;
 	gfn    += 1;
 
 	if (is_error_noslot_pfn(pfn))
@@ -119,7 +119,7 @@ int kvm_iommu_map_pages(struct kvm *kvm, struct kvm_memory_slot *slot)
 	 * Pin all pages we are about to map in memory. This is
 	 * important because we unmap and unpin in 4kb steps later.
 	 */
-	pfn = kvm_pin_pages(slot, gfn, page_size);
+	pfn = kvm_pin_pages(slot, gfn, page_size >> PAGE_SHIFT);
 	if (is_error_noslot_pfn(pfn)) {
 		gfn += 1;
 		continue;
@@ -131,7 +131,7 @@ int kvm_iommu_map_pages(struct kvm *kvm, struct kvm_memory_slot *slot)
 	if (r) {
 		printk(KERN_ERR "kvm_iommu_map_address:"
 		       "iommu failed to map pfn=%llx\n", pfn);
-		kvm_unpin_pages(kvm, pfn, page_size);
+		kvm_unpin_pages(kvm, pfn, page_size >> PAGE_SHIFT);
 		goto unmap_pages;
 	}
 
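
After the patch, both helpers speak in pages, so a pin taken on the map path is undone by an un-pin of exactly the same count on the error path. A hedged sketch of that convention (simplified types, hypothetical names and bodies; not the kernel implementation):

#include <stdio.h>

#define PAGE_SHIFT 12

typedef unsigned long long pfn_t;

/* Simplified stand-ins for the patched helpers: both take a page count. */
static pfn_t pin_pages(unsigned long long gfn, unsigned long npages)
{
	printf("pin   %lu pages at gfn 0x%llx\n", npages, gfn);
	return gfn;			/* pretend gfn == pfn for the sketch */
}

static void unpin_pages(pfn_t pfn, unsigned long npages)
{
	printf("unpin %lu pages at pfn 0x%llx\n", npages, pfn);
}

int main(void)
{
	unsigned long page_size = 2UL << 20;	/* bytes covered by this mapping */
	pfn_t pfn;

	/* Convert bytes to pages once, at the call site, as the patch does. */
	pfn = pin_pages(0x1000, page_size >> PAGE_SHIFT);

	/* Error path: un-pin exactly what was pinned. */
	unpin_pages(pfn, page_size >> PAGE_SHIFT);
	return 0;
}
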