author		Frederic Weisbecker <fweisbec@gmail.com>	2013-02-20 10:15:36 -0500
committer	Thomas Gleixner <tglx@linutronix.de>	2013-02-21 14:52:24 -0500
commit		e5ab012c3271990e8457055c25cafddc1ae8aa6b (patch)
tree		6c08a944d66fa68928a05b0792ad013c5e2a3da6 /kernel
parent		1a13c0b181f218bf56a1a6b8edbaf2876b22314b (diff)
nohz: Make tick_nohz_irq_exit() irq safe
As it stands, irq_exit() may or may not be called with irqs disabled,
depending on whether the architecture defines
__ARCH_IRQ_EXIT_IRQS_DISABLED.

This makes tick_nohz_irq_exit() unsafe. For example, two interrupts can
race in tick_nohz_stop_sched_tick(): the innermost one computes the
expiry time from the timer list, then gets interrupted right before
reprogramming the clock. The new interrupt enqueues a new timer list
timer, reprograms the clock to take it into account and exits. The CPU
then resumes the innermost interrupt and performs the clock
reprogramming without considering the new timer list timer.

This regression was introduced by commit
280f06774afedf849f0b34248ed6aff57d0f6908
("nohz: Separate out irq exit and idle loop dyntick logic").

Let's fix it right now by disabling irqs around the critical section in
tick_nohz_irq_exit(). A saner long-term solution will be to remove
__ARCH_IRQ_EXIT_IRQS_DISABLED and mandate that irq_exit() is called
with interrupts disabled.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Linus Torvalds <torvalds@linuxfoundation.org>
Cc: <stable@vger.kernel.org> #v3.2+
Link: http://lkml.kernel.org/r/1361373336-11337-1-git-send-email-fweisbec@gmail.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Diffstat (limited to 'kernel')
-rw-r--r--	kernel/time/tick-sched.c	7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index 314b9ee07edf..520592ab6aa4 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -565,14 +565,19 @@ void tick_nohz_idle_enter(void)
  */
 void tick_nohz_irq_exit(void)
 {
+	unsigned long flags;
 	struct tick_sched *ts = &__get_cpu_var(tick_cpu_sched);
 
 	if (!ts->inidle)
 		return;
 
-	/* Cancel the timer because CPU already waken up from the C-states*/
+	local_irq_save(flags);
+
+	/* Cancel the timer because CPU already waken up from the C-states */
 	menu_hrtimer_cancel();
 	__tick_nohz_idle_enter(ts);
+
+	local_irq_restore(flags);
 }
 
 /**
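For readers unfamiliar with the pattern, below is a minimal userspace sketch
(editorial, not part of the patch and not kernel code) of what the
local_irq_save()/local_irq_restore() pair buys here: an asynchronous handler,
standing in for the nested hardirq, is masked across the compute-then-program
sequence so it cannot reprogram the "clock" in between. All function and
variable names in the sketch are hypothetical.

/*
 * Userspace analogue of the fix: block SIGALRM (the stand-in for a nested
 * hardirq) around the compute-then-program sequence, the way the patch
 * wraps tick_nohz_irq_exit()'s tick reprogramming in
 * local_irq_save()/local_irq_restore().
 */
#include <signal.h>
#include <stdio.h>

static volatile sig_atomic_t programmed_expiry;	/* last value written to the "clock" */

static void program_clock(long expiry)
{
	programmed_expiry = (sig_atomic_t)expiry;
}

static long compute_next_expiry(void)
{
	return 42;		/* placeholder for walking the timer list */
}

static void nested_timer_signal(int sig)
{
	(void)sig;
	program_clock(7);	/* a nested path enqueues a timer and reprograms */
}

static void stop_sched_tick(void)
{
	sigset_t block, saved;
	long expiry;

	sigemptyset(&block);
	sigaddset(&block, SIGALRM);

	/* analogue of local_irq_save(flags): the handler cannot run in here */
	sigprocmask(SIG_BLOCK, &block, &saved);

	expiry = compute_next_expiry();
	program_clock(expiry);	/* cannot be interleaved with nested_timer_signal() */

	/* analogue of local_irq_restore(flags): pending signals are delivered now */
	sigprocmask(SIG_SETMASK, &saved, NULL);
}

int main(void)
{
	signal(SIGALRM, nested_timer_signal);
	stop_sched_tick();
	printf("clock programmed for %ld\n", (long)programmed_expiry);
	return 0;
}

As in the kernel case, a signal raised while blocked is not lost; it is
delivered once the mask is restored, just as a hardirq pends until
local_irq_restore() re-enables interrupts.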