-rw-r--r--  Documentation/robust-futex-ABI.txt   184
-rw-r--r--  Documentation/robust-futexes.txt     218
2 files changed, 402 insertions, 0 deletions
diff --git a/Documentation/robust-futex-ABI.txt b/Documentation/robust-futex-ABI.txt
new file mode 100644
index 000000000000..def5d8735286
--- /dev/null
+++ b/Documentation/robust-futex-ABI.txt
@@ -0,0 +1,184 @@
Started by Paul Jackson <pj@sgi.com>

The robust futex ABI
--------------------

Robust_futexes provide a mechanism, used in addition to normal futexes,
for kernel-assisted cleanup of held locks on task exit.

The interesting data as to what futexes a thread is holding is kept on a
linked list in user space, where it can be updated efficiently as locks
are taken and dropped, without kernel intervention. The only additional
kernel intervention required for robust_futexes above and beyond what is
required for futexes is:

 1) a one time call, per thread, to tell the kernel where its list of
    held robust_futexes begins, and
 2) internal kernel code at exit, to handle any listed locks held
    by the exiting thread.

The existing normal futexes already provide a "Fast Userspace Locking"
mechanism, which handles uncontested locking without needing a system
call, and handles contested locking by maintaining a list of waiting
threads in the kernel. Options on the sys_futex(2) system call support
waiting on a particular futex, and waking up the next waiter on a
particular futex.

For robust_futexes to work, the user code (typically in a library such
as glibc linked with the application) has to manage and place the
necessary list elements exactly as the kernel expects them. If it fails
to do so, then improperly listed locks will not be cleaned up on exit,
probably causing deadlock or other such failure of the other threads
waiting on the same locks.

A thread that anticipates possibly using robust_futexes should first
issue the system call:

    asmlinkage long
    sys_set_robust_list(struct robust_list_head __user *head, size_t len);

The pointer 'head' points to a structure in the thread's address space
consisting of three words. Each word is 32 bits on 32 bit architectures,
or 64 bits on 64 bit architectures, in local byte order. Each thread
should have its own thread-private 'head'.

If a thread is running in 32 bit compatibility mode on a 64 bit native
arch kernel, then it can actually have two such structures - one using
32 bit words for 32 bit compatibility mode, and one using 64 bit words
for 64 bit native mode. The kernel, if it is a 64 bit kernel supporting
32 bit compatibility mode, will attempt to process both lists on each
task exit, if the corresponding sys_set_robust_list() call has been made
to set up that list.

    The first word in the memory structure at 'head' contains a
    pointer to a singly linked list of 'lock entries', one per lock,
    as described below. If the list is empty, the pointer will point
    to itself, 'head'. The last 'lock entry' points back to the 'head'.

    The second word, called 'offset', specifies the offset, positive
    or negative, from the address of each 'lock entry' to its
    associated 'lock word'. The 'lock word' is always a 32 bit word,
    unlike the other words above. The 'lock word' holds 3 flag bits in
    the upper 3 bits, and the thread id (TID) of the thread holding
    the lock in the bottom 29 bits. See further below for a
    description of the flag bits.

    The third word, called 'list_op_pending', contains a transient
    copy of the address of the 'lock entry', during list insertion and
    removal, and is needed to correctly resolve races should a thread
    exit while in the middle of a locking or unlocking operation.

Each 'lock entry' on the singly linked list starting at 'head' consists
of just a single word, pointing to the next 'lock entry', or back to
'head' if there are no more entries. In addition, nearby to each 'lock
entry', at an offset from the 'lock entry' specified by the 'offset'
word, is one 'lock word'.

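In C, these three words correspond to the following declarations from
the kernel's include/linux/futex.h (shown here for reference; user space
must lay out its 'head' identically):

    struct robust_list {
        struct robust_list __user *next;  /* a 'lock entry' is just this */
    };

    struct robust_list_head {
        struct robust_list list;          /* first word: the list head */
        long futex_offset;                /* second word: the 'offset' */
        struct robust_list __user *list_op_pending;  /* third word */
    };
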
The 'lock word' is always 32 bits, and is intended to be the same 32 bit
lock variable used by the futex mechanism, in conjunction with
robust_futexes. The kernel will only be able to wake up the next thread
waiting for a lock on a thread's exit if that next thread used the futex
mechanism to register the address of that 'lock word' with the kernel.

For each futex lock currently held by a thread, if it wants this
robust_futex support for exit cleanup of that lock, it should have one
'lock entry' on this list, with its associated 'lock word' at the
specified 'offset'. Should a thread die while holding any such locks,
the kernel will walk this list, mark any such locks with a bit
indicating their holder died, and wake up the next thread waiting for
that lock using the futex mechanism.

When a thread has invoked the above system call to indicate it
anticipates using robust_futexes, the kernel stores the passed in 'head'
pointer for that task. The task may retrieve that value later on by
using the system call:

    asmlinkage long
    sys_get_robust_list(int pid, struct robust_list_head __user **head_ptr,
                        size_t __user *len_ptr);

It is anticipated that threads will use robust_futexes embedded in
larger, user level locking structures, one per lock. The kernel
robust_futex mechanism doesn't care what else is in that structure, so
long as the 'offset' to the 'lock word' is the same for all
robust_futexes used by that thread. The thread should link those locks
it currently holds using the 'lock entry' pointers. It may also have
other links between the locks, such as the reverse side of a doubly
linked list, but that doesn't matter to the kernel.

By keeping its locks linked this way, on a list starting with a 'head'
pointer known to the kernel, the kernel can provide to a thread the
essential service available for robust_futexes, which is to help clean
up locks held at the time of a (perhaps unexpected) exit.

Actual locking and unlocking, during normal operations, is handled
entirely by user level code in the contending threads, and by the
existing futex mechanism to wait for, and wake up, locks. The kernel's
only essential involvement in robust_futexes is to remember where the
list 'head' is, and to walk the list on thread exit, handling locks
still held by the departing thread, as described below.

There may exist thousands of futex lock structures in a thread's shared
memory, on various data structures, at a given point in time. Only those
lock structures for locks currently held by that thread should be on
that thread's robust_futex linked lock list at any given time.

A given futex lock structure in a user shared memory region may be held
at different times by any of the threads with access to that region. The
thread currently holding such a lock, if any, is marked with the
thread's TID in the lower 29 bits of the 'lock word'.

When adding or removing a lock from its list of held locks, in order for
the kernel to correctly handle lock cleanup regardless of when the task
exits (perhaps it gets an unexpected signal 9 in the middle of
manipulating this list), the user code must observe the following
protocol on 'lock entry' insertion and removal (both sequences are
sketched in code after the lists below):

On insertion:
 1) set the 'list_op_pending' word to the address of the 'lock entry'
    to be inserted,
 2) acquire the futex lock,
 3) add the lock entry, with its thread id (TID) in the bottom 29 bits
    of the 'lock word', to the linked list starting at 'head', and
 4) clear the 'list_op_pending' word.

    XXX I am particularly unsure of the following -pj XXX

On removal:
 1) set the 'list_op_pending' word to the address of the 'lock entry'
    to be removed,
 2) remove the lock entry for this lock from the 'head' list,
 3) release the futex lock, and
 4) clear the 'list_op_pending' word.

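A rough C sketch of these two sequences follows, using the structures
shown earlier (the __user annotations are kernel-only and vanish in user
space). Everything here other than the ordering of the 'list_op_pending'
stores is hypothetical: acquire_futex(), release_futex() and
unlink_entry() stand in for a real futex lock implementation and list
walk, and exist only to illustrate the protocol:

    static void robust_list_insert(struct robust_list_head *head,
                                   struct robust_list *entry)
    {
        head->list_op_pending = entry;      /* step 1 */
        acquire_futex(entry);               /* step 2: TID into lock word */
        entry->next = head->list.next;      /* step 3: link in at 'head' */
        head->list.next = entry;
        head->list_op_pending = NULL;       /* step 4 */
    }

    static void robust_list_remove(struct robust_list_head *head,
                                   struct robust_list *entry)
    {
        head->list_op_pending = entry;      /* step 1 */
        unlink_entry(&head->list, entry);   /* step 2: walk and unlink */
        release_futex(entry);               /* step 3 */
        head->list_op_pending = NULL;       /* step 4 */
    }

Only the ordering matters: 'list_op_pending' is set before the list and
lock word are touched, and cleared only after both are consistent again,
so the kernel can always finish or safely ignore a half-done operation.
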
On exit, the kernel will consider the address stored in
'list_op_pending' and the address of each 'lock entry' found by walking
the list starting at 'head'. For each such address, if the bottom 29
bits of the 'lock word' at offset 'offset' from that address equal the
exiting thread's TID, then the kernel will do two things:

 1) if bit 31 (0x80000000) is set in that word, then attempt a futex
    wakeup on that address, which will wake the next thread that has
    used the futex mechanism to wait on that address, and
 2) atomically set bit 30 (0x40000000) in the 'lock word'.

In the above, bit 31 was set by futex waiters on that lock to indicate
they were waiting, and bit 30 is set by the kernel to indicate that the
lock owner died holding the lock. A sketch of this per-entry check
follows.

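In C terms, the check might look like the following sketch. The mask
names are illustrative (they follow the bit layout described above, not
necessarily the kernel's identifiers), futex_wake_one() is a stand-in
for the kernel's internal wakeup of one waiter, and the two steps may be
performed in either order:

    #define LOCK_WAITERS    0x80000000  /* bit 31: a waiter is pending */
    #define LOCK_OWNER_DIED 0x40000000  /* bit 30: holder died */
    #define LOCK_TID_MASK   0x1fffffff  /* bottom 29 bits: holder's TID */

    static void handle_dead_lock_word(uint32_t *lock_word, uint32_t tid)
    {
        uint32_t val;

        if ((*lock_word & LOCK_TID_MASK) != tid)
            return;                      /* not held by the exiting thread */
        /* step 2 of the list above: atomically mark the owner as dead */
        val = __sync_or_and_fetch(lock_word, LOCK_OWNER_DIED);
        /* step 1: wake the next waiter, if bit 31 says one is pending */
        if (val & LOCK_WAITERS)
            futex_wake_one(lock_word);
    }
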
The kernel exit code will silently stop scanning the list further if at
any point:

 1) the 'head' pointer or a subsequent linked list pointer
    is not a valid address of a user space word
 2) the calculated location of the 'lock word' (address plus
    'offset') is not the valid address of a 32 bit user space
    word
 3) the list contains more than 1 million elements (a limit
    subject to future kernel configuration changes)

When the kernel sees a list entry whose 'lock word' doesn't have the
current thread's TID in the lower 29 bits, it does nothing with that
entry, and goes on to the next entry.

Bit 29 (0x20000000) of the 'lock word' is reserved for future use.
diff --git a/Documentation/robust-futexes.txt b/Documentation/robust-futexes.txt
new file mode 100644
index 000000000000..7aecc67b1361
--- /dev/null
+++ b/Documentation/robust-futexes.txt
@@ -0,0 +1,218 @@
Started by: Ingo Molnar <mingo@redhat.com>

Background
----------

What are robust futexes? To answer that, we first need to understand
what futexes are: normal futexes are special types of locks that in the
noncontended case can be acquired/released from userspace without having
to enter the kernel.

A futex is in essence a user-space address, e.g. a 32-bit lock variable
field. If userspace notices contention (the lock is already owned and
someone else wants to grab it too) then the lock is marked with a value
that says "there's a waiter pending", and the sys_futex(FUTEX_WAIT)
syscall is used to wait for the owner to release it. The kernel
creates a 'futex queue' internally, so that it can later on match up the
waiter with the waker - without them having to know about each other.
When the owner thread releases the futex, it notices (via the variable
value) that there were waiter(s) pending, and does the
sys_futex(FUTEX_WAKE) syscall to wake them up. Once all waiters have
taken and released the lock, the futex is back to the 'uncontended'
state, and there's no in-kernel state associated with it. The kernel
completely forgets that there ever was a futex at that address. This
method makes futexes very lightweight and scalable.

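As a concrete illustration, a minimal futex based lock - in the spirit
of Ulrich Drepper's "Futexes Are Tricky" paper, with all error handling
omitted - might look like the sketch below. This is not part of the
patch; it only shows the FUTEX_WAIT/FUTEX_WAKE flow described above:

    #include <stdint.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/futex.h>

    static long futex(uint32_t *uaddr, int op, uint32_t val)
    {
        return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
    }

    /* lock states: 0 = unlocked, 1 = locked, 2 = locked with waiters */
    static void lock(uint32_t *f)
    {
        uint32_t c = __sync_val_compare_and_swap(f, 0, 1);
        while (c != 0) {
            /* mark 'waiter pending' and sleep while the lock is held */
            if (c == 2 || __sync_val_compare_and_swap(f, 1, 2) != 0)
                futex(f, FUTEX_WAIT, 2);
            c = __sync_val_compare_and_swap(f, 0, 2);
        }
    }

    static void unlock(uint32_t *f)
    {
        /* only enter the kernel if a waiter was marked pending */
        if (__sync_fetch_and_sub(f, 1) != 1) {
            *f = 0;
            futex(f, FUTEX_WAKE, 1);
        }
    }
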
"Robustness" is about dealing with crashes while holding a lock: if a
process exits prematurely while holding a pthread_mutex_t lock that is
also shared with some other process (e.g. yum segfaults while holding a
pthread_mutex_t, or yum is kill -9-ed), then waiters for that lock need
to be notified that the last owner of the lock exited in some irregular
way.

To solve such types of problems, "robust mutex" userspace APIs were
created: pthread_mutex_lock() returns an error value if the owner exits
prematurely - and the new owner can decide whether the data protected by
the lock can be recovered safely.

There is a big conceptual problem with futex based mutexes though: it is
the kernel that destroys the owner task (e.g. due to a SEGFAULT), but
the kernel cannot help with the cleanup: if there is no 'futex queue'
(and in most cases there is none, futexes being fast lightweight locks)
then the kernel has no information to clean up after the held lock!
Userspace has no chance to clean up after the lock either - userspace is
the one that crashes, so it has no opportunity to clean up. Catch-22.

In practice, when e.g. yum is kill -9-ed (or segfaults), a system reboot
is needed to release that futex based lock. This is one of the leading
bugreports against yum.

To solve this problem, the traditional approach was to extend the vma
(virtual memory area descriptor) concept to have a notion of 'pending
robust futexes attached to this area'. This approach requires 3 new
syscall variants to sys_futex(): FUTEX_REGISTER, FUTEX_DEREGISTER and
FUTEX_RECOVER. At do_exit() time, all vmas are searched to see whether
they have a robust_head set. This approach has two fundamental
problems:

 - it has quite complex locking and race scenarios. The vma-based
   patches had been pending for years, but they were still not
   completely reliable.

 - it has to scan _every_ vma at sys_exit() time, per thread!

The second disadvantage is a real killer: pthread_exit() takes around 1
microsecond on Linux, but with thousands (or tens of thousands) of vmas
every pthread_exit() takes a millisecond or more, also totally
destroying the CPU's L1 and L2 caches!

This is very much noticeable even for normal process sys_exit_group()
calls: the kernel has to do the vma scanning unconditionally! (this is
because the kernel has no knowledge about how many robust futexes there
are to be cleaned up, because a robust futex might have been registered
in another task, and the futex variable might have been simply mmap()-ed
into this process's address space).

This huge overhead forced the creation of CONFIG_FUTEX_ROBUST so that
normal kernels can turn it off, but worse than that: the overhead makes
robust futexes impractical for any type of generic Linux distribution.

So something had to be done.

New approach to robust futexes
------------------------------

At the heart of this new approach there is a per-thread private list of
robust locks that userspace is holding (maintained by glibc) - this
userspace list is registered with the kernel via a new syscall [the
registration happens at most once per thread lifetime]. At do_exit()
time, the kernel checks this user-space list: are there any robust futex
locks to be cleaned up?

In the common case, at do_exit() time, there is no list registered, so
the cost of robust futexes is just a simple current->robust_list != NULL
comparison. If the thread has registered a list, then normally the list
is empty. If the thread/process crashed or terminated in some incorrect
way then the list might be non-empty: in this case the kernel carefully
walks the list [not trusting it], and marks all locks that are owned by
this thread with the FUTEX_OWNER_DIED bit, and wakes up one waiter (if
any).

The list is guaranteed to be private and per-thread at do_exit() time,
so it can be accessed by the kernel in a lockless way.

There is one race possible though: since adding to and removing from the
list is done after the futex is acquired by glibc, there is a window of
a few instructions in which the thread (or process) could die, leaving
the futex hung. To protect against this possibility, userspace (glibc)
also maintains a simple per-thread 'list_op_pending' field, to allow the
kernel to clean up if the thread dies after acquiring the lock, but just
before it could have added itself to the list. Glibc sets this
list_op_pending field before it tries to acquire the futex, and clears
it after the list-add (or list-remove) has finished.

That's all that is needed - all the rest of robust-futex cleanup is done
in userspace [just like with the previous patches].

Ulrich Drepper has implemented the necessary glibc support for this new
mechanism, which fully enables robust mutexes.

Key differences of this userspace-list based approach, compared to the
vma based method:

 - it's much, much faster: at thread exit time, there's no need to loop
   over every vma (!), which the VM-based method has to do. Only a very
   simple 'is the list empty' op is done.

 - no VM changes are needed - 'struct address_space' is left alone.

 - no registration of individual locks is needed: robust mutexes don't
   need any extra per-lock syscalls. Robust mutexes thus become a very
   lightweight primitive - so they don't force the application designer
   to make a hard choice between performance and robustness - robust
   mutexes are just as fast.

 - no per-lock kernel allocation happens.

 - no resource limits are needed.

 - no kernel-space recovery call (FUTEX_RECOVER) is needed.

 - the implementation and the locking is "obvious", and there are no
   interactions with the VM.

Performance
-----------

I have benchmarked the time needed for the kernel to process a list of 1
million (!) held locks, using the new method [on a 2GHz CPU]:

 - with FUTEX_WAITERS set [contended mutex]: 130 msecs
 - without FUTEX_WAITERS set [uncontended mutex]: 30 msecs

I have also measured an approach where glibc does the lock notification
[which it currently does for !pshared robust mutexes], and that took 256
msecs - clearly slower, due to the 1 million FUTEX_WAKE syscalls
userspace had to do.

(1 million held locks are unheard of - we expect at most a handful of
locks to be held at a time. Nevertheless it's nice to know that this
approach scales nicely.)

Implementation details
----------------------

The patch adds two new syscalls: one to register the userspace list, and
one to query the registered list pointer:

    asmlinkage long
    sys_set_robust_list(struct robust_list_head __user *head,
                        size_t len);

    asmlinkage long
    sys_get_robust_list(int pid, struct robust_list_head __user **head_ptr,
                        size_t __user *len_ptr);

List registration is very fast: the pointer is simply stored in
current->robust_list. [Note that in the future, if robust futexes become
widespread, we could extend sys_clone() to register a robust-list head
for new threads, without the need for another syscall.]

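Before glibc wrappers exist, the registration can be exercised directly
via syscall(2); a minimal standalone sketch (an empty list points back
at 'head' itself, as the ABI document describes):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/futex.h>

    int main(void)
    {
        static struct robust_list_head head;

        head.list.next = &head.list;    /* empty list: points to itself */
        head.futex_offset = 0;          /* lock word at the entry itself */
        head.list_op_pending = NULL;

        if (syscall(SYS_set_robust_list, &head, sizeof(head)) != 0)
            perror("set_robust_list");
        return 0;
    }
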
So there is virtually zero overhead for tasks not using robust futexes,
and even for robust futex users, there is only one extra syscall per
thread lifetime, and the cleanup operation, if it happens, is fast and
straightforward. The kernel doesn't have any internal distinction
between robust and normal futexes.

If a futex is found to be held at exit time, the kernel sets the
following bit of the futex word:

    #define FUTEX_OWNER_DIED 0x40000000

and wakes up the next futex waiter (if any). User-space does the rest of
the cleanup.

Otherwise, robust futexes are acquired by glibc by putting the TID into
the futex field atomically. Waiters set the FUTEX_WAITERS bit:

    #define FUTEX_WAITERS 0x80000000

and the remaining bits are for the TID.

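Put together, acquiring a robust futex in userspace boils down to one
atomic compare-and-swap of the TID into the futex word. The fragment
below is an illustrative sketch only - robust_trylock() is not glibc's
actual code, and real implementations preserve the waiter bits and
report EOWNERDEAD through the pthread API:

    #include <stdint.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    #define FUTEX_WAITERS    0x80000000
    #define FUTEX_OWNER_DIED 0x40000000

    /* 1: acquired; -1: previous owner died; 0: held, use FUTEX_WAIT */
    static int robust_trylock(uint32_t *futex_word)
    {
        uint32_t tid = (uint32_t)syscall(SYS_gettid);
        uint32_t old = __sync_val_compare_and_swap(futex_word, 0, tid);

        if (old == 0)
            return 1;               /* uncontended: we own the lock now */
        if (old & FUTEX_OWNER_DIED)
            return -1;              /* owner died: recovery is possible */
        return 0;                   /* held by someone else: must wait */
    }
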
Testing, architecture support
-----------------------------

I've tested the new syscalls on x86 and x86_64, and have made sure the
parsing of the userspace list is robust [ ;-) ] even if the list is
deliberately corrupted.

i386 and x86_64 syscalls are wired up at the moment, and Ulrich has
tested the new glibc code (on x86_64 and i386), and it works for his
robust-mutex testcases.

All other architectures should build just fine too - but they won't have
the new syscalls yet.

Architectures need to implement the new futex_atomic_cmpxchg_inatomic()
inline function before wiring up the syscalls (that function returns
-ENOSYS right now).