author    Mel Gorman <mel@csn.ul.ie>  2009-06-16 18:31:58 -0400
committer Linus Torvalds <torvalds@linux-foundation.org>  2009-06-16 22:47:33 -0400
commit    49255c619fbd482d704289b5eb2795f8e3b7ff2e (patch)
tree      b1f36ca46bda7767fce12bc4a70360a68f7255ab /include
parent    11e33f6a55ed7847d9c8ffe185ef87faf7806abe (diff)
page allocator: move check for disabled anti-fragmentation out of fastpath
On low-memory systems, anti-fragmentation gets disabled as there is nothing it can do and it would just incur overhead shuffling pages between lists constantly. Currently the check is made in the free page fast path for every page. This patch moves it to a slow path. On machines with low memory, there will be a small amount of additional overhead as pages get shuffled between lists, but it should quickly settle.

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Dave Hansen <dave@linux.vnet.ibm.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'include')
-rw-r--r--  include/linux/mmzone.h  3
1 file changed, 0 insertions, 3 deletions
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index a47c879e1304..0aa4445b0b8a 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -50,9 +50,6 @@ extern int page_group_by_mobility_disabled;
 
 static inline int get_pageblock_migratetype(struct page *page)
 {
-	if (unlikely(page_group_by_mobility_disabled))
-		return MIGRATE_UNMOVABLE;
-
 	return get_pageblock_flags_group(page, PB_migrate, PB_migrate_end);
 }
 
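Because this view is limited to 'include', only the fast-path removal is visible above; the companion change in mm/page_alloc.c is not shown. The following is a minimal, user-space C sketch (not the kernel patch itself) of the idea the commit message describes: the page_group_by_mobility_disabled check is evaluated once when a pageblock's migratetype is set (the slow path), so that reading the migratetype while freeing a page (the fast path) needs no branch. The identifiers mirror the kernel names, but the bodies are simplified stand-ins.

/*
 * Sketch only: models moving the anti-fragmentation check out of the
 * per-page free fast path.  Not the real mm/page_alloc.c code.
 */
#include <stdio.h>

enum migratetype { MIGRATE_UNMOVABLE, MIGRATE_MOVABLE };

/* Set once on low-memory systems where grouping by mobility is pointless. */
static int page_group_by_mobility_disabled;

struct pageblock { enum migratetype mt; };

/* Slow path: runs once per pageblock, so the extra check is cheap here. */
static void set_pageblock_migratetype(struct pageblock *pb, enum migratetype mt)
{
	if (page_group_by_mobility_disabled)
		mt = MIGRATE_UNMOVABLE;
	pb->mt = mt;
}

/* Fast path: called for every freed page; now just a plain read. */
static enum migratetype get_pageblock_migratetype(const struct pageblock *pb)
{
	return pb->mt;
}

int main(void)
{
	struct pageblock pb;

	page_group_by_mobility_disabled = 1;	/* pretend this is a small machine */
	set_pageblock_migratetype(&pb, MIGRATE_MOVABLE);
	printf("migratetype seen in fast path: %d\n", get_pageblock_migratetype(&pb));
	return 0;
}

The trade-off matches the commit message: when anti-fragmentation is disabled, pages may briefly shuffle between free lists until every pageblock has been re-marked unmovable via the slow path, but the per-page free path no longer tests the global flag.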