author     Yang Xiaowei <xiaowei.yang@intel.com>    2009-09-09 15:44:52 -0400
committer  Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>    2009-09-09 19:38:44 -0400
commit     2496afbf1e50c70f80992656bcb730c8583ddac3 (patch)
tree       fd701f588d8194cb1b296eca9c6288b6b8c342a1
parent     4d576b57b50a92801e6493e76e5243d6cff193d2 (diff)
xen: use stronger barrier after unlocking lock
We need a stronger barrier between releasing the lock and checking for any waiting spinners. A compiler barrier is not sufficient because the CPU's ordering rules do not prevent the read of xl->spinners from happening before the unlock assignment, as they are different memory locations. We need an explicit barrier to enforce the write-read ordering between them.

Without it, I cannot bring up more than 4 HVM guests on one SMP machine.

[ Code and commit comments expanded -J ]

[ Impact: avoid deadlock when using Xen PV spinlocks ]

Signed-off-by: Yang Xiaowei <xiaowei.yang@intel.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
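The reordering described above is the classic store-buffer (StoreLoad) case, and it can be reproduced outside the kernel. What follows is a minimal sketch, not part of the patch: a userspace C11 litmus test (the file name storeload.c and thread names t1/t2 are hypothetical) in which each thread performs the same store-then-load-to-a-different-location pattern as the unlock path. On x86, without a full fence between the two accesses (the userspace analogue of mb()), both threads can observe 0 in the same round.

/*
 * Not part of the patch: userspace litmus test for StoreLoad reordering.
 * Each thread stores to its own flag, then loads the other's. The relaxed
 * atomics compile to plain moves on x86, so the hardware may satisfy the
 * load before the store drains from the store buffer, letting both
 * threads read 0 in the same round. Build: gcc -O2 -pthread storeload.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int x, y, r1, r2;
static pthread_barrier_t start;

static void *t1(void *arg)
{
	(void)arg;
	pthread_barrier_wait(&start);
	atomic_store_explicit(&x, 1, memory_order_relaxed);
	/* atomic_thread_fence(memory_order_seq_cst);  <- the fence that
	 * plays the role mb() plays in the patch; commented out here to
	 * let the reordering show. */
	r1 = atomic_load_explicit(&y, memory_order_relaxed);
	return NULL;
}

static void *t2(void *arg)
{
	(void)arg;
	pthread_barrier_wait(&start);
	atomic_store_explicit(&y, 1, memory_order_relaxed);
	/* atomic_thread_fence(memory_order_seq_cst); */
	r2 = atomic_load_explicit(&x, memory_order_relaxed);
	return NULL;
}

int main(void)
{
	pthread_barrier_init(&start, NULL, 2);

	for (int i = 0; i < 1000000; i++) {
		pthread_t a, b;

		x = 0;
		y = 0;
		pthread_create(&a, NULL, t1, NULL);
		pthread_create(&b, NULL, t2, NULL);
		pthread_join(a, NULL);
		pthread_join(b, NULL);

		if (r1 == 0 && r2 == 0) {
			printf("both threads read 0 at iteration %d\n", i);
			return 0;
		}
	}
	printf("reordering not observed (it remains permitted)\n");
	return 0;
}

Enabling both commented fences makes the both-zero outcome impossible, which mirrors replacing barrier() with mb() in xen_spin_unlock() below.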
-rw-r--r--    arch/x86/xen/spinlock.c    9
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 2f91e5651926..36a5141108df 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -326,8 +326,13 @@ static void xen_spin_unlock(struct raw_spinlock *lock)
 	smp_wmb();		/* make sure no writes get moved after unlock */
 	xl->lock = 0;		/* release lock */
 
-	/* make sure unlock happens before kick */
-	barrier();
+	/*
+	 * Make sure unlock happens before checking for waiting
+	 * spinners. We need a strong barrier to enforce the
+	 * write-read ordering to different memory locations, as the
+	 * CPU makes no implied guarantees about their ordering.
+	 */
+	mb();
 
 	if (unlikely(xl->spinners))
 		xen_spin_unlock_slow(xl);
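For context, a sketch of the deadlock window this closes (the slow-path details are inferred from this file, not spelled out in the commit message):

1. The unlocker's store xl->lock = 0 sits in its store buffer, not yet globally visible.
2. With only barrier(), the unlocker's load of xl->spinners is satisfied before that store drains (the StoreLoad reordering a compiler barrier cannot prevent) and sees 0, missing the waiter's increment.
3. The waiter increments xl->spinners (spinning_lock()) and re-checks xl->lock; the unlock store is still buffered, so it reads 1 and blocks in xen_poll_irq() waiting for a kick.
4. The unlocker, having seen no spinners, skips xen_spin_unlock_slow(); the kick never arrives and the waiter sleeps forever.

mb() forces the load in step 2 to happen only after the unlock store is visible, so at least one side must observe the other: either the unlocker sees the spinner and kicks it, or the waiter sees the free lock and takes it.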