path: root/include/asm-generic
author	Zachary Amsden <zach@vmware.com>	2006-10-01 02:29:33 -0400
committer	Linus Torvalds <torvalds@g5.osdl.org>	2006-10-01 03:39:33 -0400
commit	6606c3e0da5360799e07ae24b05080cc85c68e72 (patch)
tree	5072acfc3b36e48ec84fe28805d160cbc9b28900 /include/asm-generic
parent	9888a1cae3f859db38b9604e3df1c02177161bb0 (diff)
[PATCH] paravirt: lazy mmu mode hooks.patch
Implement lazy MMU update hooks which are SMP safe for both direct and shadow page tables. The idea is that PTE updates and page invalidations while in lazy mode can be batched into a single hypercall. We use this in VMI for shadow page table synchronization, and it is a win. It also can be used by PPC and for direct page tables on Xen.

For SMP, the enter / leave must happen under protection of the page table locks for page tables which are being modified. This is because otherwise, you end up with stale state in the batched hypercall, which other CPUs can race ahead of. Doing this under the protection of the locks guarantees the synchronization is correct, and also means that spurious faults which are generated during this window by remote CPUs are properly handled, as the page fault handler must re-check the PTE under protection of the same lock.

Signed-off-by: Zachary Amsden <zach@vmware.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
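For illustration only (not part of this commit), here is a minimal sketch of how a caller is expected to use the hooks. The function below and its name are hypothetical; the pattern of entering and leaving lazy mode while the page table lock is held is the one the commit message describes.

/*
 * Illustrative sketch, not part of this patch: clear a range of PTEs
 * with the updates batched.  example_clear_range() and its arguments
 * are hypothetical; arch_enter_lazy_mmu_mode(), arch_leave_lazy_mmu_mode(),
 * pte_clear() and the page table lock are existing kernel interfaces.
 */
static void example_clear_range(struct mm_struct *mm, pte_t *pte,
				unsigned long addr, unsigned long end,
				spinlock_t *ptl)
{
	spin_lock(ptl);
	arch_enter_lazy_mmu_mode();		/* begin batching PTE updates */
	do {
		pte_clear(mm, addr, pte);	/* may be queued by the hypervisor */
	} while (pte++, addr += PAGE_SIZE, addr != end);
	arch_leave_lazy_mmu_mode();		/* flush the batch before dropping the lock */
	spin_unlock(ptl);
}

Leaving lazy mode before the unlock means a remote CPU that takes a spurious fault on one of these PTEs and re-checks it under the same lock will see the fully applied batch, which is the synchronization guarantee described above.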
Diffstat (limited to 'include/asm-generic')
-rw-r--r--	include/asm-generic/pgtable.h	20
1 file changed, 20 insertions, 0 deletions
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index 78740716c9e7..56627fa453a6 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -171,6 +171,26 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addres
 #endif
 
 /*
+ * A facility to provide lazy MMU batching.  This allows PTE updates and
+ * page invalidations to be delayed until a call to leave lazy MMU mode
+ * is issued.  Some architectures may benefit from doing this, and it is
+ * beneficial for both shadow and direct mode hypervisors, which may batch
+ * the PTE updates which happen during this window.  Note that using this
+ * interface requires that read hazards be removed from the code.  A read
+ * hazard could result in the direct mode hypervisor case, since the actual
+ * write to the page tables may not yet have taken place, so reads though
+ * a raw PTE pointer after it has been modified are not guaranteed to be
+ * up to date.  This mode can only be entered and left under the protection of
+ * the page table locks for all page tables which may be modified.  In the UP
+ * case, this is required so that preemption is disabled, and in the SMP case,
+ * it must synchronize the delayed page table writes properly on other CPUs.
+ */
+#ifndef __HAVE_ARCH_ENTER_LAZY_MMU_MODE
+#define arch_enter_lazy_mmu_mode()	do {} while (0)
+#define arch_leave_lazy_mmu_mode()	do {} while (0)
+#endif
+
+/*
  * When walking page tables, get the address of the next boundary,
  * or the end address of the range if that comes earlier.  Although no
  * vma end wraps to 0, rounded up __boundary may wrap to 0 throughout.
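As a hedged sketch of the other side of the interface (again not part of this commit), an architecture or hypervisor backend that wants to batch would define the guard macro in its own asm/pgtable.h before including asm-generic/pgtable.h, so the no-op defaults above are skipped. The backend function names below are invented.

/*
 * Hypothetical arch override, e.g. in an arch's asm/pgtable.h.
 * my_hv_begin_pte_batch()/my_hv_flush_pte_batch() are made-up names
 * standing in for whatever the hypervisor backend provides.
 */
#define __HAVE_ARCH_ENTER_LAZY_MMU_MODE
#define arch_enter_lazy_mmu_mode()	my_hv_begin_pte_batch()
#define arch_leave_lazy_mmu_mode()	my_hv_flush_pte_batch()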