| author | Rik van Riel <riel@redhat.com> | 2012-07-31 19:43:12 -0400 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2012-07-31 21:42:43 -0400 |
| commit | 7db8889ab05b57200158432755af318fb68854a2 (patch) | |
| tree | dfce0ce79909bc102465d871dc7b949fa9525e85 /include/linux/mmzone.h | |
| parent | ab2158848775c7918288f2c423d3e4dbbc7d34eb (diff) | |
mm: have order > 0 compaction start off where it left
Order > 0 compaction stops when enough free pages of the correct page
order have been coalesced. When doing subsequent higher order
allocations, it is possible for compaction to be invoked many times.

However, the compaction code always starts out looking for things to
compact at the start of the zone, and for free pages to compact things to
at the end of the zone.

This can cause quadratic behaviour, with isolate_freepages starting at the
end of the zone each time, even though previous invocations of the
compaction code already filled up all free memory on that end of the zone.
This can cause isolate_freepages to take enormous amounts of CPU with
certain workloads on larger memory systems.

The obvious solution is to have isolate_freepages remember where it left
off last time, and continue at that point the next time it gets invoked
for an order > 0 compaction. This could cause compaction to fail if
cc->free_pfn and cc->migrate_pfn are close together initially; in that
case we restart from the end of the zone and try once more.

Forced full (order == -1) compactions are left alone.
[akpm@linux-foundation.org: checkpatch fixes]
[akpm@linux-foundation.org: s/laste/last/, use 80 cols]
Signed-off-by: Rik van Riel <riel@redhat.com>
Reported-by: Jim Schutt <jaschut@sandia.gov>
Tested-by: Jim Schutt <jaschut@sandia.gov>
Cc: Minchan Kim <minchan.kim@gmail.com>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'include/linux/mmzone.h')
-rw-r--r-- | include/linux/mmzone.h | 4 |
1 file changed, 4 insertions(+), 0 deletions(-)
```diff
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 1495d952e641..1aeadce4d56e 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -368,6 +368,10 @@ struct zone {
 	 */
 	spinlock_t lock;
 	int all_unreclaimable; /* All pages pinned */
+#if defined CONFIG_COMPACTION || defined CONFIG_CMA
+	/* pfn where the last incremental compaction isolated free pages */
+	unsigned long compact_cached_free_pfn;
+#endif
 #ifdef CONFIG_MEMORY_HOTPLUG
 	/* see spanned/present_pages for more description */
 	seqlock_t span_seqlock;
```