author	Frederic Weisbecker <fweisbec@gmail.com>	2014-03-18 17:54:04 -0400
committer	Frederic Weisbecker <fweisbec@gmail.com>	2014-06-16 10:27:24 -0400
commit	3882ec643997757824cd5f25180cd8a787b9dbe1 (patch)
tree	82bbcc724d7dc4fc4442cc2a1699cd2ec9518725
parent	fd2ac4f4a65a7f34b0bc6433fcca1192d7ba8b8e (diff)
nohz: Use IPI implicit full barrier against rq->nr_running r/w
A full dynticks CPU is allowed to stop its tick when a single task runs.
Meanwhile when a new task gets enqueued, the CPU must be notified so that
it can restart its tick to maintain local fairness and other accounting
details.

This notification is performed by way of an IPI. Then, when the target
receives the IPI, we expect it to see the new value of rq->nr_running.

Hence the following ordering scenario:

   CPU 0                      CPU 1

   write rq->nr_running       get IPI
   smp_wmb()                  smp_rmb()
   send IPI                   read rq->nr_running

But Paul McKenney says that nowadays IPIs imply a full barrier on
all architectures. So we can safely remove this pair and rely on the
implicit barriers that come along IPI send/receive. Let's just comment
on this new assumption.

Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Kevin Hilman <khilman@linaro.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
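To make the barrier pairing above concrete, here is a minimal userspace sketch
(not the kernel code itself) that models the scenario with C11 atomics and
pthreads. The names fake_nr_running, fake_ipi_flag, waker() and target() are
invented for illustration; the release/acquire fences stand in for the
smp_wmb()/smp_rmb() pair that this patch removes, on the assumption that the
real IPI send/receive path already implies a full barrier.

/*
 * Userspace model of the ordering described in the changelog:
 * a "waker" publishes nr_running and then notifies the target,
 * and the target must observe the new value after the notification.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int fake_nr_running = 1;  /* models rq->nr_running */
static atomic_int fake_ipi_flag = 0;    /* models the IPI delivery */

static void *waker(void *arg)
{
	(void)arg;
	/* write rq->nr_running (relaxed: ordering comes from the fence) */
	atomic_store_explicit(&fake_nr_running, 2, memory_order_relaxed);
	/* models smp_wmb(): order the nr_running write before the "IPI" */
	atomic_thread_fence(memory_order_release);
	/* send the "IPI" */
	atomic_store_explicit(&fake_ipi_flag, 1, memory_order_relaxed);
	return NULL;
}

static void *target(void *arg)
{
	(void)arg;
	/* get the "IPI": spin until the notification arrives */
	while (!atomic_load_explicit(&fake_ipi_flag, memory_order_relaxed))
		;
	/* models smp_rmb(): order the "IPI" read before the nr_running read */
	atomic_thread_fence(memory_order_acquire);
	/* guaranteed to print 2, never the stale 1 */
	printf("nr_running = %d\n",
	       atomic_load_explicit(&fake_nr_running, memory_order_relaxed));
	return NULL;
}

int main(void)
{
	pthread_t w, t;

	pthread_create(&t, NULL, target, NULL);
	pthread_create(&w, NULL, waker, NULL);
	pthread_join(w, NULL);
	pthread_join(t, NULL);
	return 0;
}

The patch deletes the kernel equivalent of the two fences because, per the
changelog, the IPI machinery itself provides full-barrier semantics on both
the send and receive sides, making the explicit pair redundant.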
-rw-r--r--	kernel/sched/core.c	9
-rw-r--r--	kernel/sched/sched.h	10
2 files changed, 13 insertions, 6 deletions
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 13f5857a15ba..7f3063c153d8 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -740,10 +740,11 @@ bool sched_can_stop_tick(void)
 
 	rq = this_rq();
 
-	/* Make sure rq->nr_running update is visible after the IPI */
-	smp_rmb();
-
-	/* More than one running task need preemption */
+	/*
+	 * More than one running task need preemption.
+	 * nr_running update is assumed to be visible
+	 * after IPI is sent from wakers.
+	 */
 	if (rq->nr_running > 1)
 		return false;
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 599a72aff5ea..eb8567610295 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1221,8 +1221,14 @@ static inline void add_nr_running(struct rq *rq, unsigned count)
 #ifdef CONFIG_NO_HZ_FULL
 	if (prev_nr < 2 && rq->nr_running >= 2) {
 		if (tick_nohz_full_cpu(rq->cpu)) {
-			/* Order rq->nr_running write against the IPI */
-			smp_wmb();
+			/*
+			 * Tick is needed if more than one task runs on a CPU.
+			 * Send the target an IPI to kick it out of nohz mode.
+			 *
+			 * We assume that IPI implies full memory barrier and the
+			 * new value of rq->nr_running is visible on reception
+			 * from the target.
+			 */
 			tick_nohz_full_kick_cpu(rq->cpu);
 		}
 	}