author | Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp> | 2013-03-12 04:45:30 -0400
committer | Gleb Natapov <gleb@redhat.com> | 2013-03-14 04:21:21 -0400
commit | 982b3394dd23eec6e5a2f7871238435a167b63cc (patch)
tree | 24e7cbbfdfa7500aa1e685b80aee205bf2ff17af /arch/x86/include
parent | 95b0430d1a53541076ffbaf453f8b49a547cceba (diff)
KVM: x86: Optimize mmio spte zapping when creating/moving memslot
When we create or move a memory slot, we need to zap mmio sptes.
Currently, zap_all() is used for this, which causes two problems:
- extra page faults after zapping mmu pages
- long mmu_lock hold time during zapping mmu pages
For the latter, Marcelo reported a disastrous mmu_lock hold time during
hot-plug, which made the guest unresponsive for a long time.
This patch takes a simple approach to fixing these problems: do not zap mmu
pages unless they are marked mmio cached. On our test box, this took
only 50us for a 4GB guest, and we no longer saw millisecond-long mmu_lock
hold times.
Note that we still need to do zap_all() for other cases, so further
work is also needed; Xiao's work may be the one.
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
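For context, the helper declared by this patch is meant to be called from the x86 memslot-commit path whenever a slot is created or moved. The following is only a sketch of that call site, not part of this diff; kvm_arch_commit_memory_region(), the KVM_MR_CREATE/KVM_MR_MOVE change codes, and kvm_reload_remote_mmus() are assumed from the surrounding 3.9-era KVM tree.

/*
 * Sketch of the intended call site (an assumption, not part of this
 * hunk): when a memslot is created or moved, zap only the cached mmio
 * sptes instead of calling kvm_mmu_zap_all().
 */
void kvm_arch_commit_memory_region(struct kvm *kvm,
				   struct kvm_userspace_memory_region *mem,
				   const struct kvm_memory_slot *old,
				   enum kvm_mr_change change)
{
	/* ... dirty-logging and nr_mmu_pages handling elided ... */

	/*
	 * Creating or moving a slot can change which gfns are backed by
	 * mmio, so the cached mmio sptes must be dropped; every other
	 * shadow page is left alone, keeping mmu_lock hold time short.
	 */
	if ((change == KVM_MR_CREATE) || (change == KVM_MR_MOVE)) {
		kvm_mmu_zap_mmio_sptes(kvm);
		kvm_reload_remote_mmus(kvm);
	}
}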
Diffstat (limited to 'arch/x86/include')
-rw-r--r-- | arch/x86/include/asm/kvm_host.h | 1
1 file changed, 1 insertion, 0 deletions
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 9b75cae83d10..3f205c6cde59 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -767,6 +767,7 @@ void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
 				      struct kvm_memory_slot *slot,
 				      gfn_t gfn_offset, unsigned long mask);
 void kvm_mmu_zap_all(struct kvm *kvm);
+void kvm_mmu_zap_mmio_sptes(struct kvm *kvm);
 unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm);
 void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int kvm_nr_mmu_pages);
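The hunk above only adds the declaration; the body lands in arch/x86/kvm/mmu.c, which is outside this diffstat. A minimal sketch of the walk it describes, assuming a per-shadow-page mmio_cached flag and the existing kvm_mmu_prepare_zap_page()/kvm_mmu_commit_zap_page() helpers, might look like this:

/*
 * Sketch (assumes struct kvm_mmu_page carries an mmio_cached flag set
 * when an mmio spte is installed): walk the active shadow pages under
 * mmu_lock and zap only those caching mmio sptes, leaving the rest
 * intact.  This is why the commit message reports ~50us instead of a
 * multi-millisecond zap_all().
 */
void kvm_mmu_zap_mmio_sptes(struct kvm *kvm)
{
	struct kvm_mmu_page *sp, *node;
	LIST_HEAD(invalid_list);

	spin_lock(&kvm->mmu_lock);
restart:
	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
		if (!sp->mmio_cached)
			continue;
		/* Zapping may unlink other pages; restart the walk if so. */
		if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list))
			goto restart;
	}

	kvm_mmu_commit_zap_page(kvm, &invalid_list);
	spin_unlock(&kvm->mmu_lock);
}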