author     Andy Lutomirski <luto@amacapital.net>  2014-06-04 13:31:17 -0400
committer  Ingo Molnar <mingo@kernel.org>         2014-06-05 06:09:52 -0400
commit     67b9ca70c3030e832999e8d1cdba2984c7bb5bfc (patch)
tree       c7a2d408297c74954e284895cf4ae74172c9747b
parent     82c65d60d64401aedc1006d6572469bbfdf148de (diff)
sched/idle: Simplify wake_up_idle_cpu()
Now that rq->idle's polling bit is a reliable indication that the cpu is
polling, use it.
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: nicolas.pitre@linaro.org
Cc: daniel.lezcano@linaro.org
Cc: umgwanakikbuti@gmail.com
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org
Link: http://lkml.kernel.org/r/922f00761445a830ebb23d058e2ae53956ce2d73.1401902905.git.luto@amacapital.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-rw-r--r--  kernel/sched/core.c  21
1 file changed, 1 insertion(+), 20 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index e4c0ddd3db8e..6afbfeefddd6 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -628,26 +628,7 @@ static void wake_up_idle_cpu(int cpu)
 	if (cpu == smp_processor_id())
 		return;
 
-	/*
-	 * This is safe, as this function is called with the timer
-	 * wheel base lock of (cpu) held. When the CPU is on the way
-	 * to idle and has not yet set rq->curr to idle then it will
-	 * be serialized on the timer wheel base lock and take the new
-	 * timer into account automatically.
-	 */
-	if (rq->curr != rq->idle)
-		return;
-
-	/*
-	 * We can set TIF_RESCHED on the idle task of the other CPU
-	 * lockless. The worst case is that the other CPU runs the
-	 * idle task through an additional NOOP schedule()
-	 */
-	set_tsk_need_resched(rq->idle);
-
-	/* NEED_RESCHED must be visible before we test polling */
-	smp_mb();
-	if (!tsk_is_polling(rq->idle))
+	if (set_nr_and_not_polling(rq->idle))
 		smp_send_reschedule(cpu);
 	else
 		trace_sched_wake_idle_without_ipi(cpu);
