path: root/kernel/locking
author	Jason Low <jason.low2@hp.com>	2014-07-14 13:27:51 -0400
committer	Ingo Molnar <mingo@kernel.org>	2014-07-16 07:28:06 -0400
commit	33ecd2083a9560fbc1ef1b1279ef3ecb4c012a4f (patch)
tree	d3c7834d2f4140fb64d80c7560513374ac1d5ca8 /kernel/locking
parent	4d9d951e6b5df85ccfca2c5bd8b4f5c71d256b65 (diff)
locking/spinlocks/mcs: Micro-optimize osq_unlock()
In the unlock function of the cancellable MCS spinlock, the first thing we do is retrieve the current CPU's osq node. However, due to the changes made in the previous patch, in the common case where the lock is not contended, we no longer need to access the current CPU's osq node at all.

This patch optimizes for that by retrieving this CPU's osq node only after the initial cmpxchg to unlock the osq has been attempted and found it to be contended.

Signed-off-by: Jason Low <jason.low2@hp.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Scott Norton <scott.norton@hp.com>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Waiman Long <waiman.long@hp.com>
Cc: Davidlohr Bueso <davidlohr@hp.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Aswin Chandramouleeswaran <aswin@hp.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1405358872-3732-5-git-send-email-jason.low2@hp.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
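For illustration, a rough sketch of how osq_unlock() reads once this change is applied, assuming the atomic tail / encode_cpu() encoding introduced by the previous patch in this series (OSQ_UNLOCKED_VAL names the "no tail" value) and the definitions in kernel/locking/mcs_spinlock.h; the uncontended fast path returns before the per-CPU node is ever looked up:

	void osq_unlock(struct optimistic_spin_queue *lock)
	{
		struct optimistic_spin_node *node, *next;
		int curr = encode_cpu(smp_processor_id());

		/*
		 * Fast path: we are still the tail, so nobody is queued behind
		 * us and this CPU's osq node never needs to be touched.
		 */
		if (likely(atomic_cmpxchg(&lock->tail, curr, OSQ_UNLOCKED_VAL) == curr))
			return;

		/*
		 * Contended: only now retrieve this CPU's node so we can hand
		 * the lock to a successor.
		 */
		node = this_cpu_ptr(&osq_node);
		next = xchg(&node->next, NULL);
		if (next) {
			ACCESS_ONCE(next->locked) = 1;
			return;
		}

		/* Slow path (osq_wait_next()) is unchanged by this patch and omitted here. */
	}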
Diffstat (limited to 'kernel/locking')
-rw-r--r--	kernel/locking/mcs_spinlock.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/locking/mcs_spinlock.c b/kernel/locking/mcs_spinlock.c
index 32fc16c0a545..be9ee1559fca 100644
--- a/kernel/locking/mcs_spinlock.c
+++ b/kernel/locking/mcs_spinlock.c
@@ -182,8 +182,7 @@ unqueue:
 
 void osq_unlock(struct optimistic_spin_queue *lock)
 {
-	struct optimistic_spin_node *node = this_cpu_ptr(&osq_node);
-	struct optimistic_spin_node *next;
+	struct optimistic_spin_node *node, *next;
 	int curr = encode_cpu(smp_processor_id());
 
 	/*
@@ -195,6 +194,7 @@ void osq_unlock(struct optimistic_spin_queue *lock)
 	/*
 	 * Second most likely case.
 	 */
+	node = this_cpu_ptr(&osq_node);
 	next = xchg(&node->next, NULL);
 	if (next) {
 		ACCESS_ONCE(next->locked) = 1;