author    Arjan van de Ven <arjan@infradead.org>    2009-10-08 09:40:41 -0400
committer Ingo Molnar <mingo@elte.hu>               2009-10-08 11:27:27 -0400
commit    9bcbdd9c58617f1301dd4f17c738bb9bc73aca70 (patch)
tree      26c4e1faae64c3352c909f13a6c04ee3c68a99ed /arch
parent    fdc6f192e7e1ae80565af23cc33dc88e3dcdf184 (diff)
x86, timers: Check for pending timers after (device) interrupts
Now that range timers and deferred timers are common, I found a problem with
these using the "perf timechart" tool. Frans Pop also reported high scheduler
latencies via LatencyTop when using iwlagn.

It turns out that on x86, these two 'opportunistic' timers only get checked
when another "real" timer happens. These opportunistic timers aim to save
power by hitchhiking on other wakeups, so as to avoid waking the CPU by
themselves as much as possible.

The change in this patch runs this check not only at timer interrupts, but at
all (device) interrupts. The effect is that:

1) deferred timers/range timers get delayed less

2) range timers cause fewer wakeups by themselves, because the percentage of
   hitchhiking on existing wakeup events goes up

I've verified the patch using "perf timechart"; the originally exposed bug is
gone with this patch. Frans also reported success: the latencies are now down
in the expected ~10 msec range.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Tested-by: Frans Pop <elendil@planet.nl>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Mike Galbraith <efault@gmx.de>
LKML-Reference: <20091008064041.67219b13@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
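For illustration only, here is a minimal sketch of the kind of 'opportunistic'
(deferrable) timer the message describes, using the 2.6.32-era timer API. The
timer name, interval, and callback are hypothetical and not part of this
patch; the point is that such a timer never wakes an idle CPU on its own, so
it is only serviced when run_local_timers() is reached on some other wakeup.

#include <linux/timer.h>
#include <linux/jiffies.h>

/* Hypothetical example, not from this patch: periodic, non-critical
 * housekeeping that is happy to run "whenever the CPU is awake anyway"
 * instead of forcing a wakeup of its own. */
static struct timer_list example_poll_timer;

static void example_poll_fn(unsigned long data)
{
	/* ... low-priority housekeeping work ... */

	/* Re-arm roughly 10 seconds out; round_jiffies() nudges the expiry
	 * onto a whole second so it can coalesce with other timers. */
	mod_timer(&example_poll_timer, round_jiffies(jiffies + 10 * HZ));
}

static void example_poll_start(void)
{
	/* A deferrable timer is not serviced while the CPU is idle; it is
	 * only noticed the next time run_local_timers() runs on this CPU.
	 * Before this patch that effectively meant the next timer tick;
	 * with run_local_timers() called from the device-interrupt paths
	 * patched below, any interrupt-driven wakeup can service it. */
	init_timer_deferrable(&example_poll_timer);
	example_poll_timer.function = example_poll_fn;
	example_poll_timer.data = 0;
	mod_timer(&example_poll_timer, round_jiffies(jiffies + 10 * HZ));
}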
Diffstat (limited to 'arch')
 arch/x86/kernel/irq.c | 2 ++
 arch/x86/kernel/smp.c | 1 +
 2 files changed, 3 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
index 74656d1d4e30..391206199515 100644
--- a/arch/x86/kernel/irq.c
+++ b/arch/x86/kernel/irq.c
@@ -244,6 +244,7 @@ unsigned int __irq_entry do_IRQ(struct pt_regs *regs)
 			__func__, smp_processor_id(), vector, irq);
 	}
 
+	run_local_timers();
 	irq_exit();
 
 	set_irq_regs(old_regs);
@@ -268,6 +269,7 @@ void smp_generic_interrupt(struct pt_regs *regs)
 	if (generic_interrupt_extension)
 		generic_interrupt_extension();
 
+	run_local_timers();
 	irq_exit();
 
 	set_irq_regs(old_regs);
diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
index ec1de97600e7..d915d956e66d 100644
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -198,6 +198,7 @@ void smp_reschedule_interrupt(struct pt_regs *regs)
 {
 	ack_APIC_irq();
 	inc_irq_stat(irq_resched_count);
+	run_local_timers();
 	/*
 	 * KVM uses this interrupt to force a cpu out of guest mode
 	 */