| author | Peter Zijlstra <a.p.zijlstra@chello.nl> | 2009-12-17 07:16:31 -0500 |
|---|---|---|
| committer | Ingo Molnar <mingo@elte.hu> | 2009-12-17 07:22:46 -0500 |
| commit | 077614ee1e93245a3b9a4e1213659405dbeb0ba6 | |
| tree | 246e441967d7973d9e3addc6bade207db86d2575 | |
| parent | e1781538cf5c870ab696e9b8f0a5c498d3900f2f | |
sched: Fix broken assertion
There's a preemption race in the set_task_cpu() debug check: if we get
preempted after setting task->state, the task is still on the rq proper,
yet it fails the test.
Check for preempted tasks, since those are always on the RQ.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20091217121830.137155561@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
 kernel/sched.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
```diff
diff --git a/kernel/sched.c b/kernel/sched.c
index 7be88a7be047..720df108a2d6 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2041,7 +2041,8 @@ void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
 	 * We should never call set_task_cpu() on a blocked task,
 	 * ttwu() will sort out the placement.
 	 */
-	WARN_ON_ONCE(p->state != TASK_RUNNING && p->state != TASK_WAKING);
+	WARN_ON_ONCE(p->state != TASK_RUNNING && p->state != TASK_WAKING &&
+		     !(task_thread_info(p)->preempt_count & PREEMPT_ACTIVE));
 #endif
 
 	trace_sched_migrate_task(p, new_cpu);
```
