author     Suresh Siddha <suresh.b.siddha@intel.com>    2009-03-16 20:05:04 -0400
committer  H. Peter Anvin <hpa@linux.intel.com>         2009-03-17 19:49:30 -0400
commit     68a8ca593fac82e336a792226272455901fa83df (patch)
tree       33bd7743ed54993b645852dccc7527d1a95d4533
parent     05c3dc2c4b60387769cbe73174347de4cf85f0c9 (diff)
x86: fix broken irq migration logic while cleaning up multiple vectors
Impact: fix spurious IRQs
During irq migration, we send a low priority interrupt to the previous
irq destination. This happens in the non-interrupt-remapping case after
interrupts start arriving at the new destination, and in the
interrupt-remapping case after modifying and flushing the
interrupt-remapping table entry caches.
This low priority irq cleanup handler can clean up multiple vectors, as
multiple irqs can be migrated at almost the same time. While
there will be multiple invocations of the irq cleanup handler (one cleanup
IPI for each irq migration), the first invocation of the cleanup handler
can potentially clean up more than one vector (as the first invocation can
see the requests for more than one vector cleanup). When we clean up multiple
vectors during the first invocation of smp_irq_move_cleanup_interrupt(),
other vectors that are to be cleaned up can still be pending in the local
cpu's IRR (as smp_irq_move_cleanup_interrupt() runs with interrupts disabled).
When we are ready to unhook a vector corresponding to an irq, check whether
that vector is registered in the local cpu's IRR. If so, skip that cleanup
and do a self IPI with the cleanup vector, so that we give a chance to
service the pending vector interrupt and then clean up that vector
allocation once we execute the lowest priority handler.
This fixes spurious interrupts seen when migrating multiple vectors
at the same time.
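To make the IRR check in the hunk below easier to follow, here is a small
user-space sketch of the same index arithmetic: the xAPIC IRR is 256 bits
wide, exposed as eight 32-bit registers spaced 0x10 apart, so vector v is
found at register offset APIC_IRR + (v / 32) * 0x10, bit (v % 32). This is
purely illustrative; the 0x200 base offset follows the xAPIC register
layout, and the example vector and IRR snapshot below are made up.

#include <stdio.h>

/* Base offset of the Interrupt Request Register (IRR) in the xAPIC page. */
#define APIC_IRR 0x200

/* 32-bit APIC register that holds the IRR bit for this vector. */
static unsigned int irr_reg(unsigned int vector)
{
	return APIC_IRR + (vector / 32) * 0x10;
}

/* Bit position of this vector within that register. */
static unsigned int irr_bit(unsigned int vector)
{
	return vector % 32;
}

int main(void)
{
	unsigned int vector = 0x61;		/* hypothetical vector number */
	unsigned int irr = 1u << (0x61 % 32);	/* pretend snapshot of that IRR word */

	printf("vector 0x%02x -> reg 0x%03x, bit %u, pending=%d\n",
	       vector, irr_reg(vector), irr_bit(vector),
	       !!(irr & (1u << irr_bit(vector))));
	return 0;
}

If the bit is set, the vector's interrupt is still pending on this cpu,
which is exactly the situation in which the patch defers the cleanup with
a self IPI.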
[ This is apparently possible even on conventional xapic, although to
the best of our knowledge it has never been seen. The stable
maintainers may wish to consider this one for -stable. ]
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: stable@kernel.org
-rw-r--r--  arch/x86/kernel/apic/io_apic.c | 13 +++++++++++++
1 file changed, 13 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
index ff1759a1128e..42cdc78427a2 100644
--- a/arch/x86/kernel/apic/io_apic.c
+++ b/arch/x86/kernel/apic/io_apic.c
@@ -2414,6 +2414,7 @@ asmlinkage void smp_irq_move_cleanup_interrupt(void)
 	me = smp_processor_id();
 	for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS; vector++) {
 		unsigned int irq;
+		unsigned int irr;
 		struct irq_desc *desc;
 		struct irq_cfg *cfg;
 		irq = __get_cpu_var(vector_irq)[vector];
@@ -2433,6 +2434,18 @@ asmlinkage void smp_irq_move_cleanup_interrupt(void)
 		if (vector == cfg->vector && cpumask_test_cpu(me, cfg->domain))
 			goto unlock;
 
+		irr = apic_read(APIC_IRR + (vector / 32 * 0x10));
+		/*
+		 * Check if the vector that needs to be cleanedup is
+		 * registered at the cpu's IRR. If so, then this is not
+		 * the best time to clean it up. Lets clean it up in the
+		 * next attempt by sending another IRQ_MOVE_CLEANUP_VECTOR
+		 * to myself.
+		 */
+		if (irr & (1 << (vector % 32))) {
+			apic->send_IPI_self(IRQ_MOVE_CLEANUP_VECTOR);
+			goto unlock;
+		}
 		__get_cpu_var(vector_irq)[vector] = -1;
 		cfg->move_cleanup_count--;
 unlock: