author    Kirill Tkhai <tkhai@yandex.ru>    2014-05-20 05:33:42 -0400
committer Ingo Molnar <mingo@kernel.org>    2014-06-05 05:51:12 -0400
commit    0f397f2c90ce68821ee864c2c53baafe78de765d
tree      3042e7df0706945070b7ebccd25b673ed0a9d73d /kernel/sched
parent    b14ed2c273f8ab872ae4e6735fe5ab09cb14b8c3
sched/dl: Fix race in dl_task_timer()
A throttled task is still on its rq, and it may be moved to another CPU
if the user is playing with sched_setaffinity(). Therefore, the unlocked
task_rq() access creates a race.
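Concretely, the window looks roughly like this (an illustrative
interleaving; the CPU numbers and the exact migration call are
hypothetical, the real path goes through the migration machinery):

	dl_task_timer()				sched_setaffinity() path
	---------------				------------------------
	rq = task_rq(p);	/* CPU0's rq */
						set_task_cpu(p, 2);
						/* task_rq(p) is now CPU2's rq */
	raw_spin_lock(&rq->lock);
	/* CPU0's rq->lock is held, but p is
	 * enqueued on CPU2's rq: the timer
	 * handler operates on the wrong rq */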
Juri Lelli reported that he hit this race when dl_bandwidth_enabled()
was not set.
Another case was pointed out by Peter Zijlstra:

"Now I suppose the problem can still actually happen when
you change the root domain and trigger an effective affinity
change that way".
To fix this we do the same as is done in __task_rq_lock(). We do not
use __task_rq_lock() itself, because it has a useful lockdep check,
which is not correct in the case of dl_task_timer(): we do not need
pi_lock locked here. This case is an exception (PeterZ):

"The only reason we don't strictly need ->pi_lock now is because
we're guaranteed to have p->state == TASK_RUNNING here and are
thus free of ttwu races".
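For reference, the lock-then-recheck loop this patch mirrors is the one
in __task_rq_lock() (kernel/sched/core.c); a sketch of its shape in
kernels of this era, with details possibly differing:

static struct rq *__task_rq_lock(struct task_struct *p)
	__acquires(rq->lock)
{
	struct rq *rq;

	/* This is the lockdep check dl_task_timer() cannot satisfy: */
	lockdep_assert_held(&p->pi_lock);

	for (;;) {
		rq = task_rq(p);
		raw_spin_lock(&rq->lock);
		if (likely(rq == task_rq(p)))
			return rq;	/* p did not migrate; rq is stable */
		raw_spin_unlock(&rq->lock);	/* p moved; retry on its new rq */
	}
}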
Signed-off-by: Kirill Tkhai <tkhai@yandex.ru>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: <stable@vger.kernel.org> # v3.14+
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/3056991400578422@web14g.yandex.ru
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel/sched')

 kernel/sched/deadline.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 800e99b99075..14bc348ba3b4 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -513,9 +513,17 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
 						     struct sched_dl_entity,
 						     dl_timer);
 	struct task_struct *p = dl_task_of(dl_se);
-	struct rq *rq = task_rq(p);
+	struct rq *rq;
+again:
+	rq = task_rq(p);
 	raw_spin_lock(&rq->lock);
 
+	if (rq != task_rq(p)) {
+		/* Task was moved, retrying. */
+		raw_spin_unlock(&rq->lock);
+		goto again;
+	}
+
 	/*
 	 * We need to take care of a possible races here. In fact, the
 	 * task might have changed its scheduling policy to something