path: root/mm
author    Mel Gorman <mgorman@suse.de>    2013-07-03 18:02:03 -0400
committer Linus Torvalds <torvalds@linux-foundation.org>    2013-07-03 19:07:29 -0400
commit    d04e8acd03e5c3421ef18e3da7bc88d56179ca42
tree      7183adb0d45168268289f38751be4f508544947b /mm
parent    8e950282804558e4605401b9c79c1d34f0d73507
mm: vmscan: treat pages marked for immediate reclaim as zone congestion
Currently a zone will only be marked congested if the underlying BDI is congested, but if dirty pages are spread across zones it is possible for an individual zone to be full of dirty pages without being congested. The impact is that the zone gets scanned very quickly, potentially reclaiming really clean pages. This patch treats pages marked for immediate reclaim as congested for the purposes of marking a zone ZONE_CONGESTED and stalling in wait_iff_congested.

Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Rik van Riel <riel@redhat.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Jiri Slaby <jslaby@suse.cz>
Cc: Valdis Kletnieks <Valdis.Kletnieks@vt.edu>
Cc: Zlatko Calusic <zcalusic@bitsync.net>
Cc: dormando <dormando@rydia.net>
Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
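As a rough, self-contained illustration of the heuristic described above, the sketch below models the per-page decision in userspace C. It is not kernel code: struct page_model, page_counts_as_congested() and its fields are hypothetical stand-ins for struct page, bdi_write_congested() and PageReclaim().

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct page_model {
	bool dirty;          /* page has dirty data */
	bool writeback;      /* writeback already in flight */
	bool reclaim;        /* tagged for immediate reclaim (PG_reclaim-like) */
	bool bdi_congested;  /* backing device is write-congested */
};

static bool page_counts_as_congested(const struct page_model *p)
{
	/* Original case: the backing device itself is write-congested. */
	if (p->bdi_congested)
		return true;
	/* Case added by this patch: the page is under writeback and was
	 * already tagged for immediate reclaim, i.e. it reached the end
	 * of the LRU a second time before writeback completed. */
	if (p->writeback && p->reclaim)
		return true;
	return false;
}

int main(void)
{
	const struct page_model pages[] = {
		{ .dirty = true,  .writeback = false, .reclaim = false, .bdi_congested = false },
		{ .dirty = false, .writeback = true,  .reclaim = true,  .bdi_congested = false },
		{ .dirty = false, .writeback = true,  .reclaim = false, .bdi_congested = true  },
	};
	unsigned long nr_congested = 0, nr_unqueued_dirty = 0;
	size_t i;

	for (i = 0; i < sizeof(pages) / sizeof(pages[0]); i++) {
		/* Same bookkeeping as the loop in shrink_page_list(): count
		 * dirty pages with no writeback queued, and pages that look
		 * congested for back-off purposes. */
		if (pages[i].dirty && !pages[i].writeback)
			nr_unqueued_dirty++;
		if (page_counts_as_congested(&pages[i]))
			nr_congested++;
	}

	printf("nr_congested=%lu nr_unqueued_dirty=%lu\n",
	       nr_congested, nr_unqueued_dirty);
	return 0;
}

The extra (writeback && reclaim) case matters because such pages indicate the LRU is cycling faster than writeback can complete, which is the same back-off signal as a congested BDI.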
Diffstat (limited to 'mm')
-rw-r--r--  mm/vmscan.c  10
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4898daf074cf..bf4778479e3a 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -761,9 +761,15 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		if (dirty && !writeback)
 			nr_unqueued_dirty++;
 
-		/* Treat this page as congested if underlying BDI is */
+		/*
+		 * Treat this page as congested if the underlying BDI is or if
+		 * pages are cycling through the LRU so quickly that the
+		 * pages marked for immediate reclaim are making it to the
+		 * end of the LRU a second time.
+		 */
 		mapping = page_mapping(page);
-		if (mapping && bdi_write_congested(mapping->backing_dev_info))
+		if ((mapping && bdi_write_congested(mapping->backing_dev_info)) ||
+		    (writeback && PageReclaim(page)))
 			nr_congested++;
 
 		/*