author     Peter Zijlstra <peterz@infradead.org>     2013-10-15 06:35:07 -0400
committer  Ingo Molnar <mingo@kernel.org>            2013-10-16 08:22:13 -0400
commit     7c3f2ab7b844f1a859afbc3d41925e8a0faba5fa (patch)
tree       134aee5c31160aeafef83b211eeed5db2b2ce61c /kernel
parent     ed1b7732868035990f07aeb532b1d86272ea909e (diff)
sched/rt: Add missing rmb()
While discussing the proposed SCHED_DEADLINE patches which in parts
mimic the existing FIFO code it was noticed that the wmb in
rt_set_overload() didn't have a matching barrier.
The only site using rt_overloaded() to test the rto_count is
pull_rt_task(), and we should issue a matching rmb there before assuming
there's an rto_mask bit set.
Without that smp_rmb() in there we could actually miss seeing the
rto_mask bit.
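For illustration only, here is a minimal userspace sketch of the publish/consume
pattern this barrier pairing establishes. It uses C11 atomic_thread_fence() as a
stand-in for the kernel's smp_wmb()/smp_rmb(); the mask/count variables and the
set_overload()/pull_task() helpers are simplified stand-ins for the rto_mask /
rto_count machinery, not the actual scheduler code.

#include <stdatomic.h>
#include <stdio.h>

static atomic_int mask;   /* stands in for rd->rto_mask (reduced to one bit) */
static atomic_int count;  /* stands in for rd->rto_count */

/* Writer side, cf. rt_set_overload(): publish the mask bit, then the count. */
static void set_overload(void)
{
        atomic_store_explicit(&mask, 1, memory_order_relaxed);
        /* Make the mask store visible before the count store, ~ smp_wmb(). */
        atomic_thread_fence(memory_order_release);
        atomic_fetch_add_explicit(&count, 1, memory_order_relaxed);
}

/* Reader side, cf. pull_rt_task(): if we see the count, we must see the bit. */
static int pull_task(void)
{
        if (!atomic_load_explicit(&count, memory_order_relaxed))
                return 0;
        /* Order the count load before the mask load, ~ smp_rmb(). */
        atomic_thread_fence(memory_order_acquire);
        return atomic_load_explicit(&mask, memory_order_relaxed);
}

int main(void)
{
        set_overload();
        printf("rto bit visible after seeing the count: %d\n", pull_task());
        return 0;
}

With the release fence ordered before the count update and the acquire fence
ordered after reading it, a reader that observes the non-zero count is also
guaranteed to observe the mask bit, which is exactly the property the missing
rmb() was needed for.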
Also, change to using smp_[wr]mb(), even though this is SMP only code;
memory barriers without smp_ always make me think they're against
hardware of some sort.
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: vincent.guittot@linaro.org
Cc: luca.abeni@unitn.it
Cc: bruce.ashfield@windriver.com
Cc: dhaval.giani@gmail.com
Cc: rostedt@goodmis.org
Cc: hgu1972@gmail.com
Cc: oleg@redhat.com
Cc: fweisbec@gmail.com
Cc: darren@dvhart.com
Cc: johan.eker@ericsson.com
Cc: p.faure@akatech.ch
Cc: paulmck@linux.vnet.ibm.com
Cc: raistlin@linux.it
Cc: claudio@evidence.eu.com
Cc: insop.song@gmail.com
Cc: michael@amarulasolutions.com
Cc: liming.wang@windriver.com
Cc: fchecconi@gmail.com
Cc: jkacur@redhat.com
Cc: tommaso.cucinotta@sssup.it
Cc: Juri Lelli <juri.lelli@gmail.com>
Cc: harald.gustafsson@ericsson.com
Cc: nicola.manica@disi.unitn.it
Cc: tglx@linutronix.de
Link: http://lkml.kernel.org/r/20131015103507.GF10651@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel')
-rw-r--r--  kernel/sched/rt.c  10
1 file changed, 9 insertions, 1 deletion
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index e9304cdc26fe..a848f526b941 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -246,8 +246,10 @@ static inline void rt_set_overload(struct rq *rq)
 	 * if we should look at the mask. It would be a shame
 	 * if we looked at the mask, but the mask was not
 	 * updated yet.
+	 *
+	 * Matched by the barrier in pull_rt_task().
 	 */
-	wmb();
+	smp_wmb();
 	atomic_inc(&rq->rd->rto_count);
 }
 
@@ -1626,6 +1628,12 @@ static int pull_rt_task(struct rq *this_rq)
 	if (likely(!rt_overloaded(this_rq)))
 		return 0;
 
+	/*
+	 * Match the barrier from rt_set_overloaded; this guarantees that if we
+	 * see overloaded we must also see the rto_mask bit.
+	 */
+	smp_rmb();
+
 	for_each_cpu(cpu, this_rq->rd->rto_mask) {
 		if (this_cpu == cpu)
 			continue;