author    Lai Jiangshan <laijs@cn.fujitsu.com>  2010-05-26 04:48:19 -0400
committer Avi Kivity <avi@redhat.com>           2010-08-01 03:39:21 -0400
commit    3af1817a0d65e8c1317e8d23cfe8a91aa1d4a065 (patch)
tree      faa0f198bebd3496b27b091d0c6d59ba77f9306e
parent    c9fa0b3bef9a0b117b3c3f958ec553c21f609a9f (diff)
KVM: MMU: calculate correct gfn for small host pages backing large guest pages
In Documentation/kvm/mmu.txt:

  gfn:
    Either the guest page table containing the translations shadowed by this
    page, or the base page frame for linear translations. See role.direct.

But in FNAME(fetch)(), sp->gfn is incorrect when one of the following
situations occurs:

1) The guest uses 32-bit paging and the guest PDE maps a 4-MByte page
   (backed by 4k host pages): FNAME(fetch)() misses handling the quadrant.
   And if the guest uses PSE-36,
   "table_gfn = gpte_to_gfn(gw->ptes[level - delta]);" is incorrect.

2) The guest uses long-mode paging and the guest PDPTE maps a 1-GByte page
   (backed by 4k or 2M host pages).

So fix it to match the documentation and the code that requires sp->gfn to
be correct when sp->role.direct=1.

We use the goal mapping gfn (gw->gfn) to calculate the base page frame for
linear translations; it is simple and easy to understand.

Reported-by: Marcelo Tosatti <mtosatti@redhat.com>
Reported-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
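For illustration, the calculation the patch adopts can be exercised outside the
kernel. The sketch below is standalone C, not kernel code:
KVM_PAGES_PER_HPAGE_AT() and direct_sp_base_gfn() are hypothetical stand-ins
for the kernel's KVM_PAGES_PER_HPAGE(level) macro and the in-line computation
in FNAME(fetch)(), assuming the usual x86 layout of 9 translation bits per
level above the 4k level.

/*
 * Minimal standalone sketch (not kernel code).  KVM_PAGES_PER_HPAGE_AT()
 * and direct_sp_base_gfn() are hypothetical stand-ins for the kernel's
 * KVM_PAGES_PER_HPAGE(level) and the in-line calculation in FNAME(fetch)().
 */
#include <stdio.h>
#include <stdint.h>

typedef uint64_t gfn_t;

#define PT_PAGE_TABLE_LEVEL	1	/* the 4k level, as in x86 KVM */

/* Number of 4k frames covered by one mapping at @level: 1, 512, 512*512, ... */
#define KVM_PAGES_PER_HPAGE_AT(level) \
	(1ULL << (((level) - PT_PAGE_TABLE_LEVEL) * 9))

/*
 * Base page frame for linear translations: the goal gfn rounded down to
 * the start of the large guest mapping at @level.  This is the value the
 * patch stores in sp->gfn for a direct shadow page (sp->role.direct = 1).
 */
static gfn_t direct_sp_base_gfn(gfn_t goal_gfn, int level)
{
	return goal_gfn & ~(KVM_PAGES_PER_HPAGE_AT(level) - 1);
}

int main(void)
{
	gfn_t goal = 0x12345;	/* gfn the guest access resolves to */

	/* level 2 (512 frames): 0x12345 & ~0x1ff  = 0x12200 */
	printf("level 2 base gfn: 0x%llx\n",
	       (unsigned long long)direct_sp_base_gfn(goal, 2));
	/* level 3 (512*512 frames): 0x12345 & ~0x3ffff = 0x0 */
	printf("level 3 base gfn: 0x%llx\n",
	       (unsigned long long)direct_sp_base_gfn(goal, 3));
	return 0;
}

Compiled with any C compiler, this prints 0x12200 for level 2 and 0x0 for
level 3, i.e. the goal gfn rounded down to the 2M and 1G boundaries.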
-rw-r--r--  arch/x86/kvm/paging_tmpl.h | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 2ee7060a80a5..1f7f5dd8306f 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -339,10 +339,13 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 			direct = 1;
 			if (!is_dirty_gpte(gw->ptes[level - delta]))
 				access &= ~ACC_WRITE_MASK;
-			table_gfn = gpte_to_gfn(gw->ptes[level - delta]);
-			/* advance table_gfn when emulating 1gb pages with 4k */
-			if (delta == 0)
-				table_gfn += PT_INDEX(addr, level);
+			/*
+			 * It is a large guest pages backed by small host pages,
+			 * So we set @direct(@shadow_page->role.direct)=1, and
+			 * set @table_gfn(@shadow_page->gfn)=the base page frame
+			 * for linear translations.
+			 */
+			table_gfn = gw->gfn & ~(KVM_PAGES_PER_HPAGE(level) - 1);
 			access &= gw->pte_access;
 		} else {
 			direct = 0;
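Design note (summarizing the commit message): the removed code derived
table_gfn from the guest PTE via gpte_to_gfn(), with a PT_INDEX() adjustment
only for the 1GB (delta == 0) case, which breaks down for 32-bit 4M pages
(quadrant handling, PSE-36) and for 1GB PDPTEs backed by 2M host pages.
Masking the goal gfn (gw->gfn) to the large-page boundary of the current level
yields the base page frame for linear translations directly, without depending
on the guest paging mode.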