path: root/include/linux/kernel.h
author    Mel Gorman <mel@csn.ul.ie>  2011-01-13 18:45:56 -0500
committer Linus Torvalds <torvalds@linux-foundation.org>  2011-01-13 20:32:33 -0500
commit    3e7d344970673c5334cf7b5bb27c8c0942b06126 (patch)
tree      832ecb4da5fd27efa5a503df5b96bfdee2a52ffd /include/linux/kernel.h
parent    ee64fc9354e515a79c7232cfde65c88ec627308b (diff)
mm: vmscan: reclaim order-0 and use compaction instead of lumpy reclaim
Lumpy reclaim is disruptive. It reclaims a large number of pages and ignores the age of the pages it reclaims. This can incur significant stalls and potentially increase the number of major faults. Compaction has reached the point where it is considered reasonably stable (meaning it has passed a lot of testing) and is a potential candidate for displacing lumpy reclaim.

This patch introduces an alternative to lumpy reclaim, called reclaim/compaction, which is used when compaction is available. The basic operation is very simple: instead of selecting a contiguous range of pages to reclaim, a number of order-0 pages are reclaimed and compaction is later run by either kswapd (compact_zone_order()) or direct compaction (__alloc_pages_direct_compact()).

[akpm@linux-foundation.org: fix build]
[akpm@linux-foundation.org: use conventional task_struct naming]
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Rik van Riel <riel@redhat.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'include/linux/kernel.h')
 include/linux/kernel.h | 7 +++++++
 1 file changed, 7 insertions(+), 0 deletions(-)
diff --git a/include/linux/kernel.h b/include/linux/kernel.h
index 57dac7022b63..5a9d9059520b 100644
--- a/include/linux/kernel.h
+++ b/include/linux/kernel.h
@@ -600,6 +600,13 @@ struct sysinfo {
 #define NUMA_BUILD 0
 #endif
 
+/* This helps us avoid #ifdef CONFIG_COMPACTION */
+#ifdef CONFIG_COMPACTION
+#define COMPACTION_BUILD 1
+#else
+#define COMPACTION_BUILD 0
+#endif
+
 /* Rebuild everything on CONFIG_FTRACE_MCOUNT_RECORD */
 #ifdef CONFIG_FTRACE_MCOUNT_RECORD
 # define REBUILD_DUE_TO_FTRACE_MCOUNT_RECORD