path: root/mm/rmap.c
authorJohannes Weiner <hannes@cmpxchg.org>2010-03-05 16:42:22 -0500
committerLinus Torvalds <torvalds@linux-foundation.org>2010-03-06 14:26:27 -0500
commit645747462435d84c6c6a64269ed49cc3015f753d (patch)
tree4cbbddcddd429704dd4f205f6371bb329dcb0ff1 /mm/rmap.c
parent31c0569c3b0b6cc8a867ac6665ca081553f7984c (diff)
vmscan: detect mapped file pages used only once
The VM currently assumes that an inactive, mapped and referenced file page is in use and promotes it to the active list. However, every mapped file page starts out like this and thus a problem arises when workloads create a stream of such pages that are used only for a short time. By flooding the active list with those pages, the VM quickly gets into trouble finding eligible reclaim canditates. The result is long allocation latencies and eviction of the wrong pages. This patch reuses the PG_referenced page flag (used for unmapped file pages) to implement a usage detection that scales with the speed of LRU list cycling (i.e. memory pressure). If the scanner encounters those pages, the flag is set and the page cycled again on the inactive list. Only if it returns with another page table reference it is activated. Otherwise it is reclaimed as 'not recently used cache'. This effectively changes the minimum lifetime of a used-once mapped file page from a full memory cycle to an inactive list cycle, which allows it to occur in linear streams without affecting the stable working set of the system. Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Reviewed-by: Rik van Riel <riel@redhat.com> Cc: Minchan Kim <minchan.kim@gmail.com> Cc: OSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: Lee Schermerhorn <lee.schermerhorn@hp.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/rmap.c')
-rw-r--r--mm/rmap.c3
1 file changed, 0 insertions(+), 3 deletions(-)
diff --git a/mm/rmap.c b/mm/rmap.c
index 4d2fb93851ca..fcd593c9c997 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -601,9 +601,6 @@ int page_referenced(struct page *page,
 	int referenced = 0;
 	int we_locked = 0;
 
-	if (TestClearPageReferenced(page))
-		referenced++;
-
 	*vm_flags = 0;
 	if (page_mapped(page) && page_rmapping(page)) {
 		if (!is_locked && (!PageAnon(page) || PageKsm(page))) {