path: root/mm
author	Mel Gorman <mgorman@suse.de>	2013-07-08 19:00:25 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2013-07-09 13:33:23 -0400
commit	918fc718c5922520c499ad60f61b8df86b998ae9 (patch)
tree	71c492881b0bcadcdd1a35e63f09cb6df59c86cd /mm
parent	5a1c9cbc1550f93335d7c03eb6c271e642deff04 (diff)
mm: vmscan: do not scale writeback pages when deciding whether to set ZONE_WRITEBACK
After the patch "mm: vmscan: Flatten kswapd priority loop" was merged, the scanning priority of kswapd changed. The priority now rises until kswapd is scanning enough pages to meet the high watermark. shrink_inactive_list sets ZONE_WRITEBACK if a number of pages were encountered under writeback, but this value is scaled based on the priority. As kswapd now frequently scans with a higher priority, it is relatively easy to set ZONE_WRITEBACK. This patch removes the scaling and treats writeback pages the same way it treats unqueued dirty pages and congested pages. The user-visible effect should be that kswapd will write back fewer pages from reclaim context.

Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r--	mm/vmscan.c	16
1 file changed, 1 insertion(+), 15 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2385663ae5e5..2cff0d491c6d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1443,25 +1443,11 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 	 * as there is no guarantee the dirtying process is throttled in the
 	 * same way balance_dirty_pages() manages.
 	 *
-	 * This scales the number of dirty pages that must be under writeback
-	 * before a zone gets flagged ZONE_WRITEBACK. It is a simple backoff
-	 * function that has the most effect in the range DEF_PRIORITY to
-	 * DEF_PRIORITY-2 which is the priority reclaim is considered to be
-	 * in trouble and reclaim is considered to be in trouble.
-	 *
-	 * DEF_PRIORITY   100% isolated pages must be PageWriteback to throttle
-	 * DEF_PRIORITY-1  50% must be PageWriteback
-	 * DEF_PRIORITY-2  25% must be PageWriteback, kswapd in trouble
-	 * ...
-	 * DEF_PRIORITY-6 For SWAP_CLUSTER_MAX isolated pages, throttle if any
-	 *                isolated page is PageWriteback
-	 *
 	 * Once a zone is flagged ZONE_WRITEBACK, kswapd will count the number
 	 * of pages under pages flagged for immediate reclaim and stall if any
 	 * are encountered in the nr_immediate check below.
 	 */
-	if (nr_writeback && nr_writeback >=
-			(nr_taken >> (DEF_PRIORITY - sc->priority)))
+	if (nr_writeback && nr_writeback == nr_taken)
 		zone_set_flag(zone, ZONE_WRITEBACK);
 
 	/*