author     Nick Piggin <nickpiggin@yahoo.com.au>      2005-08-28 02:49:11 -0400
committer  Linus Torvalds <torvalds@g5.osdl.org>      2005-08-29 20:25:04 -0400
commit     d992895ba2b27cf5adf1ba0ad6d27662adc54c5e (patch)
tree       65a4d1f18a93a9e89d43fe0b8e0b3009675c50f0 /mm/memory.c
parent     40193713df2cdb9c233b3fc2029ecdccb40cb1e4 (diff)
[PATCH] Lazy page table copies in fork()
Defer copying of ptes until fault time when it is possible to reconstruct
the pte from backing store. Idea from Andi Kleen and Nick Piggin.
Thanks to input from Rik van Riel and Linus and to Hugh for correcting
my blundering.
Ray Fucillo <fucillo@intersystems.com> reports:
"I applied this latest patch to a 2.6.12 kernel and found that it does
resolve the problem. Prior to the patch on this machine, I was
seeing about 23ms spent in fork for every 100MB of shared memory
segment.
After applying the patch, fork is taking about 1ms regardless of the
shared memory size."
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'mm/memory.c')
-rw-r--r--   mm/memory.c   11
1 file changed, 11 insertions, 0 deletions
diff --git a/mm/memory.c b/mm/memory.c
index e046b7e4b530..a596c1172248 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -498,6 +498,17 @@ int copy_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	unsigned long addr = vma->vm_start;
 	unsigned long end = vma->vm_end;
 
+	/*
+	 * Don't copy ptes where a page fault will fill them correctly.
+	 * Fork becomes much lighter when there are big shared or private
+	 * readonly mappings. The tradeoff is that copy_page_range is more
+	 * efficient than faulting.
+	 */
+	if (!(vma->vm_flags & (VM_HUGETLB|VM_NONLINEAR|VM_RESERVED))) {
+		if (!vma->anon_vma)
+			return 0;
+	}
+
 	if (is_vm_hugetlb_page(vma))
 		return copy_hugetlb_page_range(dst_mm, src_mm, vma);
 
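For context, a minimal userspace sketch (not part of the patch) of the scenario in Ray Fucillo's report: time fork() against a large shared mapping. The 512MB size and the use of a shared anonymous mapping in place of a SysV shm segment are illustrative assumptions; with this patch applied, such a VMA has no anon_vma, so its ptes should not be copied at fork and the measured time should stay roughly flat as the mapping grows.

/*
 * Illustrative userspace sketch (assumption: not part of the kernel patch).
 * Maps a large shared region, populates it, then times fork(), the case
 * that copy_page_range() can now skip because the ptes are refilled from
 * backing store at fault time.
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	size_t len = 512UL << 20;	/* example size: 512 MB */
	struct timeval t0, t1;
	pid_t pid;
	char *p;

	/* Shared anonymous mapping: shmem-backed, so the VMA has no anon_vma. */
	p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		 MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(p, 0x5a, len);		/* populate the parent's ptes */

	gettimeofday(&t0, NULL);
	pid = fork();
	gettimeofday(&t1, NULL);

	if (pid == 0)
		_exit(0);
	waitpid(pid, NULL, 0);

	printf("fork took %ld us with %zu MB mapped\n",
	       (t1.tv_sec - t0.tv_sec) * 1000000L + (t1.tv_usec - t0.tv_usec),
	       len >> 20);
	return 0;
}

Run with different sizes to compare the pre- and post-patch behaviour; before the patch, fork time should scale with the number of mapped pages, since every pte in the shared segment was copied into the child.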