path: root/mm/page_alloc.c
author		Vlastimil Babka <vbabka@suse.cz>	2015-02-11 18:28:21 -0500
committer	Linus Torvalds <torvalds@linux-foundation.org>	2015-02-11 20:06:06 -0500
commit		9c0415eb8cbf0c8fd043b6c0f0354308ab099df5 (patch)
tree		9bc450992755e8a54d7a35ceec93d9179a6f4b88 /mm/page_alloc.c
parent		3a1086fba92b6e2311b6a342f68bc380beb240fe (diff)
mm: more aggressive page stealing for UNMOVABLE allocations
When allocation falls back to stealing free pages of another migratetype, it can decide to steal extra pages, or even the whole pageblock, in order to reduce fragmentation, which could otherwise happen if further allocation fallbacks pick a different pageblock. In try_to_steal_freepages(), one of the situations where extra pages are stolen is when we are trying to allocate a MIGRATE_RECLAIMABLE page.

However, MIGRATE_UNMOVABLE allocations are not treated the same way, although spreading such allocations over multiple fallback pageblocks is arguably even worse than it is for RECLAIMABLE allocations. To minimize fragmentation, we should minimize the number of such fallbacks, and thus steal as much as possible from each fallback pageblock.

Note that in theory this might put more pressure on movable pageblocks and cause movable allocations to steal back from unmovable pageblocks. However, movable allocations are not as aggressive about stealing, and do not cause permanent fragmentation, so the tradeoff is reasonable, and evaluation seems to support the change.

This patch thus adds a check for MIGRATE_UNMOVABLE to the decision to steal extra free pages. When evaluating with stress-highalloc from mmtests, this reduced the number of MIGRATE_UNMOVABLE fallbacks to roughly 1/6; the number of those fallbacks stealing from MIGRATE_MOVABLE pageblocks is reduced to 1/3. No growth in the number of unmovable pageblocks over time was observed, nor any increase in movable allocation fallbacks.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Mel Gorman <mgorman@suse.de>
Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
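As a rough illustration of the decision this patch changes (not kernel code: the steal_aggressively() helper, the enum values and the PAGEBLOCK_ORDER constant below are hypothetical stand-ins), a self-contained sketch. With the patch, a fallback for an UNMOVABLE allocation steals aggressively regardless of the order of the buddy page it fell back to, just as a RECLAIMABLE fallback already did:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the kernel's migratetypes and pageblock order. */
enum migratetype { MIGRATE_UNMOVABLE, MIGRATE_RECLAIMABLE, MIGRATE_MOVABLE };
#define PAGEBLOCK_ORDER 9

/*
 * Sketch of the steal decision after this patch: steal the rest of the
 * pageblock when the buddy page we fell back to is already large, or when
 * the allocation is RECLAIMABLE or (new with this patch) UNMOVABLE.
 */
static bool steal_aggressively(int current_order, enum migratetype start_type)
{
	if (current_order >= PAGEBLOCK_ORDER / 2)
		return true;
	if (start_type == MIGRATE_RECLAIMABLE || start_type == MIGRATE_UNMOVABLE)
		return true;
	return false;
}

int main(void)
{
	/* A small order-2 fallback for an UNMOVABLE allocation now steals aggressively... */
	printf("unmovable, order 2: %d\n", steal_aggressively(2, MIGRATE_UNMOVABLE));
	/* ...while a movable fallback of the same order still does not. */
	printf("movable,   order 2: %d\n", steal_aggressively(2, MIGRATE_MOVABLE));
	return 0;
}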
Diffstat (limited to 'mm/page_alloc.c')
-rw-r--r--	mm/page_alloc.c	18
1 file changed, 14 insertions, 4 deletions
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 7c44b49cec40..8d52ab18fe0d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1123,10 +1123,19 @@ static void change_pageblock_range(struct page *pageblock_page,
 }
 
 /*
- * If breaking a large block of pages, move all free pages to the preferred
- * allocation list. If falling back for a reclaimable kernel allocation, be
- * more aggressive about taking ownership of free pages. If we claim more than
- * half of the pageblock, change pageblock's migratetype as well.
+ * When we are falling back to another migratetype during allocation, try to
+ * steal extra free pages from the same pageblocks to satisfy further
+ * allocations, instead of polluting multiple pageblocks.
+ *
+ * If we are stealing a relatively large buddy page, it is likely there will
+ * be more free pages in the pageblock, so try to steal them all. For
+ * reclaimable and unmovable allocations, we steal regardless of page size,
+ * as fragmentation caused by those allocations polluting movable pageblocks
+ * is worse than movable allocations stealing from unmovable and reclaimable
+ * pageblocks.
+ *
+ * If we claim more than half of the pageblock, change pageblock's migratetype
+ * as well.
  */
 static void try_to_steal_freepages(struct zone *zone, struct page *page,
 				  int start_type, int fallback_type)
@@ -1141,6 +1150,7 @@ static void try_to_steal_freepages(struct zone *zone, struct page *page,
 
 	if (current_order >= pageblock_order / 2 ||
 	    start_type == MIGRATE_RECLAIMABLE ||
+	    start_type == MIGRATE_UNMOVABLE ||
 	    page_group_by_mobility_disabled) {
 		int pages;
 
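For context, a sketch of how try_to_steal_freepages() might read with this hunk applied. Only the lines appearing in the diff above are taken from the patch; the rest of the body (taking ownership of whole large blocks, and the move_freepages_block()/set_pageblock_migratetype() calls that do the actual stealing and flip the pageblock's migratetype once more than half of it is claimed) is an assumption inferred from the new comment and may differ from the actual tree.

/* Sketch only: lines outside the hunks above are assumptions, not part of the diff. */
static void try_to_steal_freepages(struct zone *zone, struct page *page,
				  int start_type, int fallback_type)
{
	int current_order = page_order(page);

	/* Assumed: blocks at or above pageblock_order simply change ownership. */
	if (current_order >= pageblock_order) {
		change_pageblock_range(page, current_order, start_type);
		return;
	}

	if (current_order >= pageblock_order / 2 ||
	    start_type == MIGRATE_RECLAIMABLE ||
	    start_type == MIGRATE_UNMOVABLE ||	/* added by this patch */
	    page_group_by_mobility_disabled) {
		int pages;

		/* Assumed: move all free pages of this block to our free list. */
		pages = move_freepages_block(zone, page, start_type);

		/* Assumed: claim the whole block if over half of it was free. */
		if (pages >= (1 << (pageblock_order - 1)) ||
				page_group_by_mobility_disabled)
			set_pageblock_migratetype(page, start_type);
	}
}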