| author | Mel Gorman <mgorman@suse.de> | 2011-07-08 18:39:38 -0400 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2011-07-09 00:14:43 -0400 |
| commit | d7868dae893c83c50c7824bc2bc75f93d114669f (patch) | |
| tree | 7c9e56513ecbbf086c81ebff77310f80e0232ecc | |
| parent | 08951e545918c1594434d000d88a7793e2452a9b (diff) | |
mm: vmscan: do not apply pressure to slab if we are not applying pressure to zone
During allocator-intensive workloads, kswapd will be woken frequently
causing free memory to oscillate between the high and min watermark. This
is expected behaviour.
When kswapd applies pressure to zones during node balancing, it checks
whether the zone is above a high+balance_gap threshold. If it is, it does
not apply pressure to the zone, but it still unconditionally shrinks slab
on a global basis, which is excessive. In the event kswapd is being kept
awake due to a high, small, unreclaimable zone, it skips zone shrinking
but still calls shrink_slab().
Once pressure has been applied, the check for whether the zone is
unreclaimable is made before the check of whether all_unreclaimable
should be set. Missing an unreclaimable zone in this way can cause
has_under_min_watermark_zone to be set due to an unreclaimable zone,
preventing kswapd from backing off in congestion_wait().
Signed-off-by: Mel Gorman <mgorman@suse.de>
Reported-by: Pádraig Brady <P@draigBrady.com>
Tested-by: Pádraig Brady <P@draigBrady.com>
Tested-by: Andrew Lutomirski <luto@mit.edu>
Acked-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
 mm/vmscan.c | 23 +++++++++++++----------
 1 file changed, 13 insertions(+), 10 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 04c49fe781fe..a0245861934a 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2510,18 +2510,18 @@ loop_again:
 					KSWAPD_ZONE_BALANCE_GAP_RATIO);
 			if (!zone_watermark_ok_safe(zone, order,
 					high_wmark_pages(zone) + balance_gap,
-					end_zone, 0))
+					end_zone, 0)) {
 				shrink_zone(priority, zone, &sc);
-			reclaim_state->reclaimed_slab = 0;
-			nr_slab = shrink_slab(&shrink, sc.nr_scanned, lru_pages);
-			sc.nr_reclaimed += reclaim_state->reclaimed_slab;
-			total_scanned += sc.nr_scanned;
 
-			if (zone->all_unreclaimable)
-				continue;
-			if (nr_slab == 0 &&
-			    !zone_reclaimable(zone))
-				zone->all_unreclaimable = 1;
+				reclaim_state->reclaimed_slab = 0;
+				nr_slab = shrink_slab(&shrink, sc.nr_scanned, lru_pages);
+				sc.nr_reclaimed += reclaim_state->reclaimed_slab;
+				total_scanned += sc.nr_scanned;
+
+				if (nr_slab == 0 && !zone_reclaimable(zone))
+					zone->all_unreclaimable = 1;
+			}
 
 			/*
 			 * If we've done a decent amount of scanning and
@@ -2531,6 +2531,9 @@ loop_again:
 			    total_scanned > sc.nr_reclaimed + sc.nr_reclaimed / 2)
 				sc.may_writepage = 1;
 
+			if (zone->all_unreclaimable)
+				continue;
+
 			if (!zone_watermark_ok_safe(zone, order,
 					high_wmark_pages(zone), end_zone, 0)) {
 				all_zones_ok = 0;