author	Oleg Nesterov <oleg@redhat.com>	2014-12-01 16:34:17 -0500
committer	Ingo Molnar <mingo@kernel.org>	2014-12-08 05:36:44 -0500
commit	78bff1c8684fb94f1ae7283688f90188b53fc433 (patch)
tree	48795bcb348947f7e213980570e7413dfc10fc3d /arch/x86
parent	9fd7fc34cfcaf9f6c932ee1710cce83da3b7bd59 (diff)
x86/ticketlock: Fix spin_unlock_wait() livelock
arch_spin_unlock_wait() looks very suboptimal, to the point that I
think it is just wrong and can lead to livelock: if the lock is
heavily contended, we may never see head == tail.

But we do not need to wait for arch_spin_is_locked() == false. If the
lock is held, we only need to wait until the current owner drops it.
So we could simply spin until old_head != lock->tickets.head; however,
.head can overflow, and thus we cannot check "unlocked" only once
before the main loop.

Also, the "unlocked" check can ignore the TICKET_SLOWPATH_FLAG bit.
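The exit condition described above can be illustrated with a small
user-space sketch. This is not the kernel code: `struct tickets`,
`done_waiting`, and the 8-bit ticket width are simplifications invented
here to show the two ways the waiter may stop spinning, assuming (as in
the patch) that TICKET_SLOWPATH_FLAG lives in the low bit of .tail.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical, simplified model of the x86 ticket lock: 8-bit head and
 * tail tickets, with bit 0 of .tail used as TICKET_SLOWPATH_FLAG. */
#define TICKET_SLOWPATH_FLAG ((uint8_t)1)

struct tickets { uint8_t head, tail; };

/* Returns 1 when the waiter may stop spinning: either the lock is
 * unlocked (head == tail, ignoring the slowpath bit), or .head has
 * moved past the snapshot taken on entry, meaning the owner that held
 * the lock when we started waiting has since released it. */
static int done_waiting(struct tickets t, uint8_t old_head)
{
	return t.head == (t.tail & ~TICKET_SLOWPATH_FLAG) ||
	       t.head != old_head;
}
```

Because .head can wrap around, `t.head == old_head` alone could be a
false positive after 256 hand-offs, which is why the real code
re-evaluates the "unlocked" condition on every iteration of the loop
rather than checking it once up front.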
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <Waiman.Long@hp.com>
Link: http://lkml.kernel.org/r/20141201213417.GA5842@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'arch/x86')
-rw-r--r--	arch/x86/include/asm/spinlock.h	14
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index bf156ded74b5..abc34e95398d 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -184,8 +184,20 @@ static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
 
 static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
 {
-	while (arch_spin_is_locked(lock))
+	__ticket_t head = ACCESS_ONCE(lock->tickets.head);
+
+	for (;;) {
+		struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
+		/*
+		 * We need to check "unlocked" in a loop, tmp.head == head
+		 * can be false positive because of overflow.
+		 */
+		if (tmp.head == (tmp.tail & ~TICKET_SLOWPATH_FLAG) ||
+		    tmp.head != head)
+			break;
+
 		cpu_relax();
+	}
 }
 
 /*