author:    Rik van Riel <riel@redhat.com>  2011-10-31 20:09:31 -0400
committer: Linus Torvalds <torvalds@linux-foundation.org>  2011-10-31 20:30:50 -0400
commit:    e0887c19b2daa140f20ca8104bdc5740f39dbb86 (patch)
tree:      86330414eb04b5989e68661c205aa52d46ca7ebf /mm
parent:    21ee9f398be209ccbb62929d35961ca1ed48eec3 (diff)
vmscan: limit direct reclaim for higher order allocations
When suffering from memory fragmentation due to unfreeable pages, THP page
faults will repeatedly try to compact memory.  Due to the unfreeable pages,
compaction fails.

Needless to say, at that point page reclaim also fails to create free
contiguous 2MB areas.  However, that doesn't stop the current code from
trying, over and over again, and freeing a minimum of 4MB (2UL << sc->order
pages) at every single invocation.

This resulted in my 12GB system having 2-3GB free memory, a corresponding
amount of used swap and very sluggish response times.

This can be avoided by having the direct reclaim code not reclaim from
zones that already have plenty of free memory available for compaction.
If compaction still fails due to unmovable memory, doing additional
reclaim will only hurt the system, not help.

[jweiner@redhat.com: change comment to explain the order check]
Signed-off-by: Rik van Riel <riel@redhat.com>
Acked-by: Johannes Weiner <jweiner@redhat.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Signed-off-by: Johannes Weiner <jweiner@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm')
 mm/vmscan.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+), 0 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index f51a33e8ed89..7e0f05797388 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2125,6 +2125,22 @@ static void shrink_zones(int priority, struct zonelist *zonelist,
 			continue;
 		if (zone->all_unreclaimable && priority != DEF_PRIORITY)
 			continue;	/* Let kswapd poll it */
+		if (COMPACTION_BUILD) {
+			/*
+			 * If we already have plenty of memory
+			 * free for compaction, don't free any
+			 * more.  Even though compaction is
+			 * invoked for any non-zero order,
+			 * only frequent costly order
+			 * reclamation is disruptive enough to
+			 * become a noticable problem, like
+			 * transparent huge page allocations.
+			 */
+			if (sc->order > PAGE_ALLOC_COSTLY_ORDER &&
+				(compaction_suitable(zone, sc->order) ||
+				 compaction_deferred(zone)))
+				continue;
+		}
 		/*
 		 * This steals pages from memory cgroups over softlimit
 		 * and returns the number of reclaimed pages and