| author | Venkatesh Pallipadi <venki@google.com> | 2010-10-04 20:03:23 -0400 |
|---|---|---|
| committer | Ingo Molnar <mingo@elte.hu> | 2010-10-18 14:52:29 -0400 |
| commit | d267f87fb8179c6dba03d08b91952e81bc3723c7 (patch) | |
| tree | ddcd2f7429964367fcb4ab1dd230a970452d72da /kernel | |
| parent | aa483808516ca5cacfa0e5849691f64fec25828e (diff) | |
sched: Call tick_check_idle before __irq_enter
When the CPU is idle, the first interrupt's irq_enter calls tick_check_idle()
to signal the interruption of idle. But there is a problem if this call is
made after __irq_enter, as all the routines in __irq_enter may then see stale
time, since tick_check_idle has not yet run.
Specifically, this affects trace calls made in __irq_enter when they use the
global clock, and also the account_system_vtime change in this patch, which
wants to use sched_clock_cpu() to do proper irq timing.
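
To make the ordering concrete, here is a minimal sketch of the pre-patch
irq_enter(), reconstructed from the kernel/softirq.c hunk below (condensed,
with editorial comments; not verbatim kernel code):

void irq_enter(void)
{
	int cpu = smp_processor_id();

	rcu_irq_enter();
	if (idle_cpu(cpu) && !in_interrupt()) {
		__irq_enter();		/* account_system_vtime() and any
					 * tracing read the clock here...  */
		tick_check_idle(cpu);	/* ...but it only catches up here. */
	} else
		__irq_enter();
}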
However, tick_check_idle was intentionally moved after __irq_enter by
commit ee5f80a to prevent unneeded ksoftirqd wakeups:

    irq: call __irq_enter() before calling the tick_idle_check
    Impact: avoid spurious ksoftirqd wakeups
Moving tick_check_idle() before __irq_enter and wrapping it in
local_bh_disable/_local_bh_enable solves both problems.
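
For context on the ksoftirqd half of the fix: raise_softirq only wakes
ksoftirqd when it is not called from interrupt (or bh-disabled) context,
which is exactly what the local_bh_disable()/_local_bh_enable() pair
arranges. A condensed sketch of raise_softirq_irqoff() from kernel/softirq.c
of this era (comments are editorial, not verbatim):

inline void raise_softirq_irqoff(unsigned int nr)
{
	__raise_softirq_irqoff(nr);	/* mark the softirq pending */

	/*
	 * local_bh_disable() adds SOFTIRQ_OFFSET to the preempt count,
	 * so in_interrupt() is non-zero while tick_check_idle() runs and
	 * ksoftirqd is left alone; the pending softirq is serviced on
	 * the return-from-interrupt path instead.  _local_bh_enable()
	 * then drops the count without running softirqs.
	 */
	if (!in_interrupt())
		wakeup_softirqd();
}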
Fixed-by: Yong Zhang <yong.zhang0@gmail.com>
Signed-off-by: Venkatesh Pallipadi <venki@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1286237003-12406-9-git-send-email-venki@google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'kernel')
 -rw-r--r--  kernel/sched.c   |  2 +-
 -rw-r--r--  kernel/softirq.c | 12 +++++++++---
 2 files changed, 10 insertions(+), 4 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index bff9ef537df0..567f5cb9808c 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -1974,8 +1974,8 @@ void account_system_vtime(struct task_struct *curr)
 
 	local_irq_save(flags);
 
-	now = sched_clock();
 	cpu = smp_processor_id();
+	now = sched_clock_cpu(cpu);
 	delta = now - per_cpu(irq_start_time, cpu);
 	per_cpu(irq_start_time, cpu) = now;
 	/*
diff --git a/kernel/softirq.c b/kernel/softirq.c
index 267f7b763ebb..79ee8f1fc0e7 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -296,10 +296,16 @@ void irq_enter(void)
 
 	rcu_irq_enter();
 	if (idle_cpu(cpu) && !in_interrupt()) {
-		__irq_enter();
+		/*
+		 * Prevent raise_softirq from needlessly waking up ksoftirqd
+		 * here, as softirq will be serviced on return from interrupt.
+		 */
+		local_bh_disable();
 		tick_check_idle(cpu);
-	} else
-		__irq_enter();
+		_local_bh_enable();
+	}
+
+	__irq_enter();
 }
 
 #ifdef __ARCH_IRQ_EXIT_IRQS_DISABLED