author		Brice Goglin <Brice.Goglin@ens-lyon.org>	2009-06-16 18:32:43 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2009-06-16 22:47:41 -0400
commit		35282a2de4e5e4e173ab61aa9d7015886021a821 (patch)
tree		06d20ad1a132e15e4fb7447cd9b139a4921ecfc9 /mm
parent		7f33d49a2ed546e01f7b1d0607661810f2421859 (diff)
migration: only migrate_prep() once per move_pages()
migrate_prep() is fairly expensive (72us on 16-core barcelona 1.9GHz).
Commit 3140a2273009c01c27d316f35ab76a37e105fdd8 improved move_pages()
throughput by breaking it into chunks, but it also made migrate_prep() be
called once per chunk (every 128 pages or so) instead of once per
move_pages().

This patch reverts to calling migrate_prep() only once per move_pages()
as we did before 2.6.29.  It is also a followup to commit
0aedadf91a70a11c4a3e7c7d99b21e5528af8d5d ("mm: move migrate_prep out from
under mmap_sem").

This improves migration throughput on the above machine from 600MB/s to
750MB/s.

Signed-off-by: Brice Goglin <Brice.Goglin@inria.fr>
Acked-by: Christoph Lameter <cl@linux-foundation.org>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Cc: Rik van Riel <riel@redhat.com>
Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
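To make the restructuring easier to see in isolation, here is a minimal userspace C sketch of the same pattern: an expensive, once-is-enough preparation step is hoisted out of the per-chunk helper and into the caller that drives the chunk loop. The names prep_once(), move_chunk(), move_pages_once_prepped() and CHUNK_NR are illustrative stand-ins (for migrate_prep(), do_move_page_to_node_array(), do_pages_move() and the ~128-page chunking) and do not exist in the kernel; the sketch models only the call pattern, not page migration itself.

#include <stddef.h>
#include <stdio.h>

#define CHUNK_NR 128	/* pages handled per chunk, mirroring the ~128-page chunks */

/* Stand-in for migrate_prep(): expensive, so it should run once per call. */
static void prep_once(void)
{
	puts("expensive prep");
}

/* Stand-in for do_move_page_to_node_array(): handles one chunk.
 * Before the patch, the prep call lived here and ran for every chunk. */
static void move_chunk(const unsigned long *pages, size_t nr)
{
	/* placeholder for moving nr pages starting at pages[0] */
	(void)pages;
	(void)nr;
}

/* Stand-in for do_pages_move(): after the patch, prep runs once up front,
 * then the chunk loop proceeds without repeating it. */
static void move_pages_once_prepped(const unsigned long *pages, size_t nr_pages)
{
	prep_once();	/* hoisted: once per call, not once per chunk */

	for (size_t off = 0; off < nr_pages; off += CHUNK_NR) {
		size_t chunk = nr_pages - off < CHUNK_NR ? nr_pages - off : CHUNK_NR;
		move_chunk(pages + off, chunk);
	}
}

int main(void)
{
	unsigned long pages[1000] = { 0 };
	move_pages_once_prepped(pages, 1000);
	return 0;
}

With CHUNK_NR = 128, a 1000-page request would previously have paid the prep cost eight times; after the hoist it pays it once, which is where the 600MB/s to 750MB/s improvement quoted above comes from.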
Diffstat (limited to 'mm')
-rw-r--r--	mm/migrate.c	4
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index 5a24923e7fd7..939888f9ddab 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -820,7 +820,6 @@ static int do_move_page_to_node_array(struct mm_struct *mm,
 	struct page_to_node *pp;
 	LIST_HEAD(pagelist);
 
-	migrate_prep();
 	down_read(&mm->mmap_sem);
 
 	/*
@@ -907,6 +906,9 @@ static int do_pages_move(struct mm_struct *mm, struct task_struct *task,
 	pm = (struct page_to_node *)__get_free_page(GFP_KERNEL);
 	if (!pm)
 		goto out;
+
+	migrate_prep();
+
 	/*
 	 * Store a chunk of page_to_node array in a page,
 	 * but keep the last one as a marker
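A reading note on the placement: the removed call in do_move_page_to_node_array() sat just before down_read(&mm->mmap_sem), and the new call in do_pages_move() runs before the chunk loop, i.e. before that helper takes mmap_sem for each chunk. So the single migrate_prep() still appears to run outside mmap_sem, in keeping with commit 0aedadf91a70a11c4a3e7c7d99b21e5528af8d5d cited in the changelog.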