path: root/arch/x86_64/mm
author    Arjan van de Ven <arjan@linux.intel.com>  2006-03-25 10:30:10 -0500
committer Linus Torvalds <torvalds@g5.osdl.org>    2006-03-25 12:10:54 -0500
commit    a9ba9a3b3897561d01e04cd21433746df46548c0 (patch)
tree      d222fb0e1c522d4bd506cb8c24c498a7e1589da2 /arch/x86_64/mm
parent    4bc32c4d5cde5c57edcc9c2fe5057da8a4dd0153 (diff)
[PATCH] x86_64: prefetch the mmap_sem in the fault path
In a micro-benchmark that stresses the pagefault path, the down_read_trylock
on the mmap_sem showed up quite high on the profile. It turns out this lock
bounces between CPUs quite a bit and is therefore cache-cold much of the time.

This patch prefetches the lock (for write) as early as possible, before some
other relatively expensive operations. With this patch, the down_read_trylock
basically fell out of the top of the profile.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
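Why this helps, in standalone form: a write-intent prefetch issued early lets
the cache-line transfer overlap unrelated setup work, so the line is already
local by the time the trylock runs. Below is a minimal userspace sketch of the
pattern, not the kernel code itself: it assumes GCC's __builtin_prefetch(addr, 1)
as a stand-in for the kernel's prefetchw(), and map_lock, do_expensive_setup()
and handle_fault() are hypothetical names introduced for illustration.

	#include <pthread.h>

	static pthread_rwlock_t map_lock = PTHREAD_RWLOCK_INITIALIZER;

	static void do_expensive_setup(void) { /* stand-in for reading cr2 etc. */ }
	static void handle_fault(void)       { /* stand-in for the real fault work */ }

	void fault_path(void)
	{
		/*
		 * Pull the lock's cache line toward this CPU with write
		 * intent (ideally into exclusive state) while the setup
		 * work below hides the miss latency.
		 */
		__builtin_prefetch(&map_lock, 1);

		do_expensive_setup();

		/* By now the line is hopefully local and the trylock is cheap. */
		if (pthread_rwlock_tryrdlock(&map_lock) == 0) {
			handle_fault();
			pthread_rwlock_unlock(&map_lock);
		}
	}

The placement matters as much as the prefetch itself: issued immediately before
the trylock, there would be no independent work left to overlap the miss with.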
Diffstat (limited to 'arch/x86_64/mm')
-rw-r--r--  arch/x86_64/mm/fault.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/arch/x86_64/mm/fault.c b/arch/x86_64/mm/fault.c
index de91e17daf6f..316c53de47bd 100644
--- a/arch/x86_64/mm/fault.c
+++ b/arch/x86_64/mm/fault.c
@@ -314,11 +314,13 @@ asmlinkage void __kprobes do_page_fault(struct pt_regs *regs,
 	unsigned long flags;
 	siginfo_t info;
 
+	tsk = current;
+	mm = tsk->mm;
+	prefetchw(&mm->mmap_sem);
+
 	/* get the address */
 	__asm__("movq %%cr2,%0":"=r" (address));
 
-	tsk = current;
-	mm = tsk->mm;
 	info.si_code = SEGV_MAPERR;
 
 
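A note on why the prefetch is for write even though this path takes the lock
for read: down_read_trylock bumps the rwsem's count with an atomic
read-modify-write, which requires the cache line in exclusive state; a read
prefetch would only fetch it shared and still leave a coherence transaction on
the critical path. A toy illustration of that trylock shape follows; the names
and layout are hypothetical, and the real rwsem fast path is arch-specific:

	#include <stdatomic.h>
	#include <stdbool.h>

	/* Hypothetical, simplified rwsem count: >= 0 means no writer. */
	struct toy_rwsem {
		atomic_long count;
	};

	static bool toy_down_read_trylock(struct toy_rwsem *sem)
	{
		long c = atomic_load(&sem->count);

		/*
		 * The compare-and-swap below writes the lock word, which is
		 * why the line wants to arrive in exclusive state: even the
		 * "read" lock performs a store on acquisition.
		 */
		while (c >= 0) {
			if (atomic_compare_exchange_weak(&sem->count, &c, c + 1))
				return true;	/* reader count bumped */
		}
		return false;			/* writer holds the lock */
	}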