author     Ken Chen <kenchen@google.com>  2007-02-10 04:43:18 -0500
committer  Linus Torvalds <torvalds@woody.linux-foundation.org>  2007-02-11 13:51:19 -0500
commit     daa88c8d214ca4ab2f1764b6e503cef4b3cde9b2 (patch)
tree       f01848aa624796944b62dc67a41c7d1bdac6ac04
parent     46626296314e5679c9aaca36979a50ac20692e0b (diff)
[PATCH] do not disturb page referenced state when unmapping memory range
When the kernel unmaps an address range, it needs to transfer PTE state into the page struct. Currently, the kernel transfers the access bit via mark_page_accessed(). The call to mark_page_accessed() in the unmap path doesn't look logically correct: at unmap time, calling mark_page_accessed() bumps the page's LRU state one step closer to the most-recently-used state.

This causes quite a bit of headache in a scenario where a process creates a shmem segment, touches a whole bunch of pages, then unmaps it. The unmapping takes a long time because mark_page_accessed() starts moving pages from the inactive to the active list.

I'm not too concerned with moving the page from one LRU list to another; sooner or later it might be moved anyway because of multiple mappings from various processes. But when a user asks for a range to be unmapped, the intention is that the process is no longer interested in these pages. Moving them to the active list (or bumping their state towards more active) seems like an overreaction, and it also prolongs unmapping latency, which is the core issue I'm trying to solve.

As suggested by Peter, we should still preserve the information that the PTE was young, but no more than that.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Ken Chen <kenchen@google.com>
Cc: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-rw-r--r--  mm/memory.c  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/memory.c b/mm/memory.c
index 0047d3a4e364..0e6a402d86be 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -678,7 +678,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			if (pte_dirty(ptent))
 				set_page_dirty(page);
 			if (pte_young(ptent))
-				mark_page_accessed(page);
+				SetPageReferenced(page);
 			file_rss--;
 		}
 		page_remove_rmap(page, vma);
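For context on why this one-line switch matters, here is a minimal sketch of the distinction, modeled on the mark_page_accessed() logic in mm/swap.c of this era. The sketch function names are illustrative and the flag handling is simplified, not verbatim kernel source: mark_page_accessed() ratchets a page up the referenced-to-active LRU ladder and may call activate_page(), which moves the page onto the active list, while SetPageReferenced() merely sets the PG_referenced flag for the reclaim scanner to consult later.

/*
 * Illustrative sketch (assumes kernel-internal context: struct page,
 * the PG_referenced/PG_active/PG_lru flag accessors, activate_page()).
 * Not verbatim kernel source.
 */
static void mark_page_accessed_sketch(struct page *page)
{
	if (!PageActive(page) && PageReferenced(page) && PageLRU(page)) {
		/*
		 * Second observed reference: promote the page to the
		 * active list.  This list manipulation is the extra work
		 * (and unmap latency) the patch avoids.
		 */
		activate_page(page);
		ClearPageReferenced(page);
	} else if (!PageReferenced(page)) {
		/* First observed reference: just remember it. */
		SetPageReferenced(page);
	}
}

/*
 * What the patched unmap path does instead: record that the PTE was
 * young and leave LRU placement decisions to reclaim.
 */
static void record_pte_young_sketch(struct page *page)
{
	SetPageReferenced(page);
}

The referenced information is still preserved, so reclaim can later treat the page as recently used; the only thing given up is the immediate promotion toward the active list during unmap.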