author	Jeremy Fitzhardinge <jeremy@goop.org>	2008-07-07 15:07:53 -0400
committer	Ingo Molnar <mingo@elte.hu>	2008-07-16 05:15:53 -0400
commit	2d9e1e2f58b5612aa4eab0ab54c84308a29dbd79 (patch)
tree	08e91f7f485a8fa8ec2515f8c706870c0d44044b /drivers/xen/events.c
parent	56397f8dadb40055479a8ffff23f21a890098a31 (diff)
xen: implement Xen-specific spinlocks
The standard ticket spinlocks are very expensive in a virtual
environment, because their performance depends on Xen's scheduler
giving vcpus time in the order that they're supposed to take the
spinlock.
This implements a Xen-specific spinlock, which should be much more
efficient.
The fast-path is essentially the old Linux-x86 locks, using a single
lock byte. The locker decrements the byte; if the result is 0, it has
the lock. If the result is negative, the locker must spin until the
lock byte becomes positive again.
When there's contention, the locker spins for 2^16[*] iterations, waiting
to get the lock. If it fails to get the lock in that time, it adds
itself to the contention count in the lock and blocks on a per-cpu
event channel.
When unlocking the spinlock, the locker looks to see if there's anyone
blocked waiting for the lock by checking for a non-zero waiter count.
If there's a waiter, it traverses the per-cpu "lock_spinners"
variable, which contains which lock each CPU is waiting on. It picks
one CPU waiting on the lock and sends it an event to wake it up.
This allows efficient fast-path spinlock operation, while allowing
spinning vcpus to give up their processor time while waiting for a
contended lock.
[*] 2^16 iterations is the threshold at which 98% of locks have been
taken, according to Thomas Friebel's Xen Summit talk "Preventing Guests
from Spinning Around". Therefore, we'd expect the lock and unlock slow
paths to be entered only 2% of the time.
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Christoph Lameter <clameter@linux-foundation.org>
Cc: Petr Tesarik <ptesarik@suse.cz>
Cc: Virtualization <virtualization@lists.linux-foundation.org>
Cc: Xen devel <xen-devel@lists.xensource.com>
Cc: Thomas Friebel <thomas.friebel@amd.com>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'drivers/xen/events.c')
-rw-r--r--	drivers/xen/events.c	27
1 files changed, 27 insertions, 0 deletions

diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index 332dd63750a0..0e0c28574af8 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -734,6 +734,33 @@ static void restore_cpu_ipis(unsigned int cpu)
 	}
 }
 
+/* Clear an irq's pending state, in preparation for polling on it */
+void xen_clear_irq_pending(int irq)
+{
+	int evtchn = evtchn_from_irq(irq);
+
+	if (VALID_EVTCHN(evtchn))
+		clear_evtchn(evtchn);
+}
+
+/* Poll waiting for an irq to become pending.  In the usual case, the
+   irq will be disabled so it won't deliver an interrupt. */
+void xen_poll_irq(int irq)
+{
+	evtchn_port_t evtchn = evtchn_from_irq(irq);
+
+	if (VALID_EVTCHN(evtchn)) {
+		struct sched_poll poll;
+
+		poll.nr_ports = 1;
+		poll.timeout = 0;
+		poll.ports = &evtchn;
+
+		if (HYPERVISOR_sched_op(SCHEDOP_poll, &poll) != 0)
+			BUG();
+	}
+}
+
 void xen_irq_resume(void)
 {
 	unsigned int cpu, irq, evtchn;