author		Rik van Riel <riel@redhat.com>		2014-01-27 17:03:40 -0500
committer	Ingo Molnar <mingo@kernel.org>		2014-01-28 07:17:04 -0500
commit		52bf84aa206cd2c2516dfa3e03b578edf8a3242f
tree		e8acbb2c3ce90b7aed27046c7efc5a082f6ef684 /include/linux/sched.h
parent		a57beec5d427086cdc8d75fd51164577193fa7f4
sched/numa, mm: Remove p->numa_migrate_deferred
Excessive migration of pages can hurt the performance of workloads that span multiple NUMA nodes. However, it turns out that the p->numa_migrate_deferred knob is a really big hammer: it does reduce migration rates, but it does not actually help performance. Now that the second stage of the automatic NUMA balancing code has stabilized, it is time to replace the simplistic migration deferral code with something smarter.

Signed-off-by: Rik van Riel <riel@redhat.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Chegu Vinod <chegu_vinod@hp.com>
Link: http://lkml.kernel.org/r/1390860228-21539-2-git-send-email-riel@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
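For background on the mechanism being removed: as the name suggests, numa_migrate_deferred appears to have been a per-task countdown used to rate-limit page migration, skipping a fixed number of would-be migrations before allowing one through. The snippet below is a minimal sketch of that kind of deferral counter only; the names task_ctx, MIGRATE_DEFER_COUNT and should_migrate_page are hypothetical and this is not the kernel code deleted by this patch.

/*
 * Minimal sketch of a per-task migration deferral counter.
 * All names (task_ctx, MIGRATE_DEFER_COUNT, should_migrate_page)
 * are hypothetical; not the kernel code removed by this patch.
 */
#include <stdbool.h>

#define MIGRATE_DEFER_COUNT 16	/* assumed size of the deferral window */

struct task_ctx {
	int numa_migrate_deferred;	/* migrations left to skip */
};

/* Called when a NUMA fault finds a page on the wrong node. */
static bool should_migrate_page(struct task_ctx *t)
{
	if (t->numa_migrate_deferred > 0) {
		t->numa_migrate_deferred--;	/* defer: skip this migration */
		return false;
	}
	t->numa_migrate_deferred = MIGRATE_DEFER_COUNT;	/* re-arm the window */
	return true;	/* allow this migration */
}

Because such a counter blindly skips a fixed number of migrations regardless of where the task is actually running, it lowers migration rates without improving placement, which is the behaviour the changelog calls a "really big hammer".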
Diffstat (limited to 'include/linux/sched.h')
-rw-r--r--	include/linux/sched.h	1
1 file changed, 0 insertions(+), 1 deletion(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index ffccdad050b5..d572d5ba650f 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1457,7 +1457,6 @@ struct task_struct {
 	unsigned int numa_scan_period;
 	unsigned int numa_scan_period_max;
 	int numa_preferred_nid;
-	int numa_migrate_deferred;
 	unsigned long numa_migrate_retry;
 	u64 node_stamp;			/* migration stamp */
 	struct callback_head numa_work;