author		Rik van Riel <riel@redhat.com>	2012-07-31 19:43:12 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2012-07-31 21:42:43 -0400
commit		7db8889ab05b57200158432755af318fb68854a2 (patch)
tree		dfce0ce79909bc102465d871dc7b949fa9525e85 /mm/page_alloc.c
parent		ab2158848775c7918288f2c423d3e4dbbc7d34eb (diff)
mm: have order > 0 compaction start off where it left
Order > 0 compaction stops when enough free pages of the correct page order have been coalesced. When doing subsequent higher order allocations, it is possible for compaction to be invoked many times.

However, the compaction code always starts out looking for things to compact at the start of the zone, and for free pages to compact things to at the end of the zone. This can cause quadratic behaviour, with isolate_freepages starting at the end of the zone each time, even though previous invocations of the compaction code already filled up all free memory on that end of the zone. This can cause isolate_freepages to take enormous amounts of CPU with certain workloads on larger memory systems.

The obvious solution is to have isolate_freepages remember where it left off last time, and continue at that point the next time it gets invoked for an order > 0 compaction. This could cause compaction to fail if cc->free_pfn and cc->migrate_pfn are close together initially; in that case we restart from the end of the zone and try once more.

Forced full (order == -1) compactions are left alone.

[akpm@linux-foundation.org: checkpatch fixes]
[akpm@linux-foundation.org: s/laste/last/, use 80 cols]
Signed-off-by: Rik van Riel <riel@redhat.com>
Reported-by: Jim Schutt <jaschut@sandia.gov>
Tested-by: Jim Schutt <jaschut@sandia.gov>
Cc: Minchan Kim <minchan.kim@gmail.com>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
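The caching scheme described above can be illustrated with a simplified, self-contained sketch. This is not the kernel implementation: the struct layouts, the helper names (compaction_start, compaction_finished, compaction_restart_needed) and the PAGEBLOCK_NR_PAGES value are assumptions made for illustration only.

	/*
	 * Sketch of "remember where the free scanner left off" for
	 * order > 0 compaction. Hypothetical types and helpers.
	 */
	#include <stdbool.h>

	#define PAGEBLOCK_NR_PAGES 512UL	/* assumed pageblock size */

	struct zone_state {
		unsigned long zone_start_pfn;
		unsigned long zone_end_pfn;
		/* Where the previous order > 0 compaction stopped scanning. */
		unsigned long compact_cached_free_pfn;
	};

	struct compact_control {
		unsigned long migrate_pfn;	/* scans forward from zone start */
		unsigned long free_pfn;		/* scans backward toward migrate_pfn */
		int order;			/* -1 means forced full compaction */
	};

	/* Pick the starting points for the two scanners. */
	static void compaction_start(struct zone_state *z, struct compact_control *cc)
	{
		cc->migrate_pfn = z->zone_start_pfn;

		if (cc->order > 0) {
			/* Resume where the previous invocation left off. */
			cc->free_pfn = z->compact_cached_free_pfn;
		} else {
			/* Forced full compaction always starts at the zone end. */
			cc->free_pfn = z->zone_end_pfn & ~(PAGEBLOCK_NR_PAGES - 1);
		}
	}

	/* After a successful order > 0 compaction, remember where we stopped. */
	static void compaction_finished(struct zone_state *z, struct compact_control *cc)
	{
		if (cc->order > 0)
			z->compact_cached_free_pfn = cc->free_pfn;
	}

	/* Called when the scanners meet without satisfying the allocation. */
	static bool compaction_restart_needed(struct zone_state *z,
					      struct compact_control *cc)
	{
		if (cc->order > 0 && cc->free_pfn <= cc->migrate_pfn) {
			/*
			 * The cached position left too little room; reset the
			 * cache to the zone end and let the caller try once more.
			 */
			z->compact_cached_free_pfn =
				z->zone_end_pfn & ~(PAGEBLOCK_NR_PAGES - 1);
			return true;
		}
		return false;
	}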
Diffstat (limited to 'mm/page_alloc.c')
-rw-r--r--  mm/page_alloc.c | 5 +++++
1 file changed, 5 insertions(+), 0 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index fba2a1223f14..94fc475c3f94 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4397,6 +4397,11 @@ static void __paginginit free_area_init_core(struct pglist_data *pgdat,
 
 	zone->spanned_pages = size;
 	zone->present_pages = realsize;
+#if defined CONFIG_COMPACTION || defined CONFIG_CMA
+	zone->compact_cached_free_pfn = zone->zone_start_pfn +
+				zone->spanned_pages;
+	zone->compact_cached_free_pfn &= ~(pageblock_nr_pages-1);
+#endif
 #ifdef CONFIG_NUMA
 	zone->node = nid;
 	zone->min_unmapped_pages = (realsize*sysctl_min_unmapped_ratio)
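The masking in the added lines works because pageblock_nr_pages is a power of two, so the initial cached position is rounded down to the start of a pageblock and the free scanner always begins on a pageblock boundary. A minimal sketch of that arithmetic (the 512-page pageblock size and the helper name are assumptions for illustration):

	#include <assert.h>

	#define PAGEBLOCK_NR_PAGES 512UL	/* assumed value for this example */

	/* Round a pfn down to the start of its pageblock, as the hunk above does. */
	static unsigned long pageblock_align_down(unsigned long pfn)
	{
		return pfn & ~(PAGEBLOCK_NR_PAGES - 1);
	}

	int main(void)
	{
		/* pfn 4400 lies inside the pageblock that starts at pfn 4096. */
		assert(pageblock_align_down(4400) == 4096);
		return 0;
	}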