author     Michel Lespinasse <walken@google.com>	2011-01-13 18:46:07 -0500
committer  Linus Torvalds <torvalds@linux-foundation.org>	2011-01-13 20:32:35 -0500
commit     b009c024ff0059e293c1937516f2defe56263650
tree       35d71c837b954e884c429c9c36a85aaf7b033c49	/mm/memory.c
parent     212260aa07135b327752dc02625c68cf4ce04caf
do_wp_page: remove the 'reuse' flag
mlocking a shared, writable vma currently causes the corresponding pages
to be marked as dirty and queued for writeback. This seems rather
unnecessary given that the pages are not actually being modified during
mlock. It is understood that for non-shared mappings (file or anon) we
want to use a write fault in order to break COW, but there is just no such
need for shared mappings.
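For reference, the case under discussion (a writable, shared file mapping that
gets mlocked but never written to by the application) can be exercised from
userspace with a small program along the following lines; the file name and
sizes are arbitrary and purely illustrative:

/* illustrative only: mlock a writable, shared file mapping without writing
 * to it.  build: cc -o mlock_shared mlock_shared.c
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	const size_t len = 1 << 20;	/* 1 MiB, arbitrary */
	int fd = open("testfile", O_RDWR | O_CREAT, 0644);

	if (fd < 0 || ftruncate(fd, len) < 0) {
		perror("open/ftruncate");
		return 1;
	}
	/* writable, shared file mapping: the case discussed above */
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	/* mlock faults in every page but does not modify any data; avoiding
	 * the dirtying and writeback of these pages is the point of this
	 * series */
	if (mlock(p, len) < 0)
		perror("mlock");

	munlock(p, len);
	munmap(p, len);
	close(fd);
	return 0;
}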
The first two patches in this series do not introduce any behavior change.
The intent there is to make it obvious that dirtying file pages is only
done in the (writable, shared) case. I think this clarifies the code, but
I wouldn't mind dropping these two patches if there is no consensus about
them.
The last patch is where we actually avoid dirtying shared mappings during
mlock. Note that as a side effect of this, we won't call page_mkwrite()
for the mappings that define it, and won't be pre-allocating data blocks
at the FS level if the mapped file was sparsely allocated. My
understanding is that mlock does not need to provide such a guarantee, as
evidenced by the fact that it never did for the filesystems that don't
define page_mkwrite() - including some common ones like ext3. However, I
would like to gather feedback on this from filesystem people as a
precaution. If this turns out to be a showstopper, maybe block
preallocation can be added back using a different interface.
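To make "pre-allocating data blocks at the FS level" concrete, the rough
userspace check below (not part of this series; names, sizes and the hardcoded
page size are made up for illustration) shows how dirtying a sparse file
through a shared mapping causes backing blocks to be allocated on filesystems
that allocate in the write-fault/writeback path:

/* illustrative only: watch st_blocks grow after dirtying a sparse file
 * through MAP_SHARED.  build: cc -o sparse_mkwrite sparse_mkwrite.c
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

static void report(int fd, const char *when)
{
	struct stat st;

	if (fstat(fd, &st) == 0)
		printf("%s: %lld 512-byte blocks allocated\n",
		       when, (long long)st.st_blocks);
}

int main(void)
{
	const size_t len = 1 << 20;
	int fd = open("sparse_test", O_RDWR | O_CREAT | O_TRUNC, 0644);

	if (fd < 0 || ftruncate(fd, len) < 0) {
		perror("open/ftruncate");
		return 1;
	}
	report(fd, "after ftruncate (sparse)");

	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	/* a write fault on each page goes through ->page_mkwrite() where the
	 * filesystem defines it, and may allocate blocks; after this series,
	 * mlock alone would not trigger this */
	for (size_t off = 0; off < len; off += 4096)
		p[off] = 1;
	msync(p, len, MS_SYNC);	/* force writeback so allocation is visible */
	report(fd, "after dirtying via MAP_SHARED write");

	munmap(p, len);
	close(fd);
	unlink("sparse_test");
	return 0;
}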
Large shared mlocks are getting significantly (>2x) faster in my tests, as
the disk can be fully used for reading the file instead of being shared
between reads and writeback.
This patch:
Reorganize the code to remove the 'reuse' flag. No behavior changes.
Signed-off-by: Michel Lespinasse <walken@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Nick Piggin <npiggin@kernel.dk>
Cc: Theodore Tso <tytso@google.com>
Cc: Michael Rubin <mrubin@google.com>
Cc: Suleiman Souhlal <suleiman@google.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/memory.c')
 mm/memory.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 02e48aa0ed13..d0cc1c134a64 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2112,7 +2112,7 @@ static int do_wp_page(struct mm_struct *mm, struct vm_area_struct *vma,
 {
 	struct page *old_page, *new_page;
 	pte_t entry;
-	int reuse = 0, ret = 0;
+	int ret = 0;
 	int page_mkwrite = 0;
 	struct page *dirty_page = NULL;
 
@@ -2149,14 +2149,16 @@ static int do_wp_page(struct mm_struct *mm, struct vm_area_struct *vma,
 			}
 			page_cache_release(old_page);
 		}
-		reuse = reuse_swap_page(old_page);
-		if (reuse)
+		if (reuse_swap_page(old_page)) {
 			/*
 			 * The page is all ours. Move it to our anon_vma so
 			 * the rmap code will not search our parent or siblings.
 			 * Protected against the rmap code by the page lock.
 			 */
 			page_move_anon_rmap(old_page, vma, address);
+			unlock_page(old_page);
+			goto reuse;
+		}
 		unlock_page(old_page);
 	} else if (unlikely((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
 					(VM_WRITE|VM_SHARED))) {
@@ -2220,10 +2222,7 @@ static int do_wp_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		}
 		dirty_page = old_page;
 		get_page(dirty_page);
-		reuse = 1;
-	}
 
-	if (reuse) {
 reuse:
 		flush_cache_page(vma, address, pte_pfn(orig_pte));
 		entry = pte_mkyoung(orig_pte);
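As a rough summary of the control flow in do_wp_page() after this
reorganization, the sketch below mirrors the structure as standalone,
compilable C; all names and conditions are simplified stand-ins for the real
kernel logic, not actual kernel code:

/* illustrative only: simplified stand-in for the do_wp_page() flow after
 * the 'reuse' flag is removed; the real code falls into the reuse: label
 * from the shared-writable branch instead of jumping to it */
#include <stdbool.h>
#include <stdio.h>

enum wp_result { WP_REUSED, WP_COPIED };

static enum wp_result wp_fault_sketch(bool page_is_anon,
				      bool page_exclusive,
				      bool vma_shared_writable)
{
	if (page_is_anon) {
		if (page_exclusive)
			/* anonymous page with no other users: keep it */
			goto reuse;
		/* anonymous but still shared: fall through and copy */
	} else if (vma_shared_writable) {
		/* shared writable file mapping: page_mkwrite()/dirty
		 * handling would happen here, then the existing page
		 * is reused */
		goto reuse;
	}
	/* everything else: allocate a new page and copy (break COW) */
	return WP_COPIED;
reuse:
	/* make the existing PTE writable and young; no copy needed */
	return WP_REUSED;
}

int main(void)
{
	printf("anon, exclusive     -> %d\n", wp_fault_sketch(true, true, false));
	printf("anon, still shared  -> %d\n", wp_fault_sketch(true, false, false));
	printf("file, shared+write  -> %d\n", wp_fault_sketch(false, false, true));
	printf("file, private       -> %d\n", wp_fault_sketch(false, false, false));
	return 0;
}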