-rw-r--r--   Documentation/locking/mutex-design.txt   49
1 file changed, 17 insertions(+), 32 deletions(-)
diff --git a/Documentation/locking/mutex-design.txt b/Documentation/locking/mutex-design.txt
index 60c482df1a38..818aca19612f 100644
--- a/Documentation/locking/mutex-design.txt
+++ b/Documentation/locking/mutex-design.txt
@@ -21,37 +21,23 @@ Implementation
 --------------
 
 Mutexes are represented by 'struct mutex', defined in include/linux/mutex.h
-and implemented in kernel/locking/mutex.c. These locks use a three
-state atomic counter (->count) to represent the different possible
-transitions that can occur during the lifetime of a lock:
-
-	  1: unlocked
-	  0: locked, no waiters
-   negative: locked, with potential waiters
-
-In its most basic form it also includes a wait-queue and a spinlock
-that serializes access to it. CONFIG_SMP systems can also include
-a pointer to the lock task owner (->owner) as well as a spinner MCS
-lock (->osq), both described below in (ii).
+and implemented in kernel/locking/mutex.c. These locks use an atomic variable
+(->owner) to keep track of the lock state during its lifetime. Field owner
+actually contains 'struct task_struct *' to the current lock owner and it is
+therefore NULL if not currently owned. Since task_struct pointers are aligned
+at at least L1_CACHE_BYTES, low bits (3) are used to store extra state (e.g.,
+if waiter list is non-empty). In its most basic form it also includes a
+wait-queue and a spinlock that serializes access to it. Furthermore,
+CONFIG_MUTEX_SPIN_ON_OWNER=y systems use a spinner MCS lock (->osq), described
+below in (ii).
 
 When acquiring a mutex, there are three possible paths that can be
 taken, depending on the state of the lock:
 
-(i) fastpath: tries to atomically acquire the lock by decrementing the
-    counter. If it was already taken by another task it goes to the next
-    possible path. This logic is architecture specific. On x86-64, the
-    locking fastpath is 2 instructions:
-
-    0000000000000e10 <mutex_lock>:
-    e21:   f0 ff 0b                lock decl (%rbx)
-    e24:   79 08                   jns    e2e <mutex_lock+0x1e>
-
-   the unlocking fastpath is equally tight:
-
-    0000000000000bc0 <mutex_unlock>:
-    bc8:   f0 ff 07                lock incl (%rdi)
-    bcb:   7f 0a                   jg     bd7 <mutex_unlock+0x17>
-
+(i) fastpath: tries to atomically acquire the lock by cmpxchg()ing the owner with
+    the current task. This only works in the uncontended case (cmpxchg() checks
+    against 0UL, so all 3 state bits above have to be 0). If the lock is
+    contended it goes to the next possible path.
 
 (ii) midpath: aka optimistic spinning, tries to spin for acquisition
      while the lock owner is running and there are no other tasks ready
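
The ->owner scheme and the cmpxchg() fastpath introduced by the hunk above can
be illustrated with a small, self-contained userspace sketch. It is not the
kernel's implementation (that lives in include/linux/mutex.h and
kernel/locking/mutex.c): demo_task, demo_mutex, OWNER_FLAGS and
demo_mutex_trylock_fastpath are hypothetical names, 64 merely stands in for
L1_CACHE_BYTES, and C11 atomics stand in for the kernel's atomic_long_t
helpers. It only shows how an aligned owner pointer and a few low state bits
can share one word, and why a cmpxchg() against 0UL succeeds only when the
lock is free and every flag bit is clear.

/*
 * Hypothetical userspace sketch of the ->owner encoding; not kernel code.
 */
#include <stdatomic.h>
#include <stdalign.h>
#include <stdio.h>

/* Stand-in for 'struct task_struct'; the over-alignment frees the low bits. */
struct demo_task {
	alignas(64) int pid;		/* 64 stands in for L1_CACHE_BYTES */
};

#define OWNER_FLAGS	0x07UL		/* low 3 bits carry extra state	   */
#define FLAG_WAITERS	0x01UL		/* e.g. "waiter list is non-empty" */

struct demo_mutex {
	atomic_ulong owner;		/* task pointer | flag bits, 0 == unlocked */
};

/* Fastpath: cmpxchg() the owner word from 0 (unlocked, no flags set) to us. */
static int demo_mutex_trylock_fastpath(struct demo_mutex *m, struct demo_task *curr)
{
	unsigned long expected = 0UL;	/* succeeds only if every bit is clear */

	return atomic_compare_exchange_strong(&m->owner, &expected,
					      (unsigned long)curr);
}

/* Decode the owner word back into a task pointer by masking off the flags. */
static struct demo_task *demo_mutex_owner(struct demo_mutex *m)
{
	return (struct demo_task *)(atomic_load(&m->owner) & ~OWNER_FLAGS);
}

int main(void)
{
	static struct demo_task me = { .pid = 42 };
	struct demo_mutex m = { .owner = 0 };

	printf("fastpath acquired: %d\n", demo_mutex_trylock_fastpath(&m, &me));
	printf("owner pid:         %d\n", demo_mutex_owner(&m)->pid);
	printf("contended attempt: %d\n", demo_mutex_trylock_fastpath(&m, &me));
	return 0;
}

Compiled with e.g. 'gcc -std=c11 -Wall', the first trylock succeeds, the
decoded owner is the acquiring task, and the second attempt fails, which in
the real code is where the mid- and slowpaths take over.
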
@@ -143,11 +129,10 @@ Test if the mutex is taken:
 Disadvantages
 -------------
 
-Unlike its original design and purpose, 'struct mutex' is larger than
-most locks in the kernel. E.g: on x86-64 it is 40 bytes, almost twice
-as large as 'struct semaphore' (24 bytes) and tied, along with rwsems,
-for the largest lock in the kernel. Larger structure sizes mean more
-CPU cache and memory footprint.
+Unlike its original design and purpose, 'struct mutex' is among the largest
+locks in the kernel. E.g: on x86-64 it is 32 bytes, where 'struct semaphore'
+is 24 bytes and rw_semaphore is 40 bytes. Larger structure sizes mean more CPU
+cache and memory footprint.
 
 When to use mutexes
 -------------------
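
For reference, the 32-byte figure in the hunk above roughly decomposes as
below on x86-64, assuming a non-debug configuration with
CONFIG_MUTEX_SPIN_ON_OWNER=y (the authoritative layout is 'struct mutex' in
include/linux/mutex.h):

    owner     (atomic_long_t)                    8 bytes
    wait_lock (spinlock_t)                       4 bytes
    osq       (struct optimistic_spin_queue)     4 bytes
    wait_list (struct list_head, two pointers)  16 bytes
                                                --------
                                                32 bytes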