author     Mauro Carvalho Chehab <mchehab@s-opensource.com>   2017-05-11 08:55:30 -0400
committer  Mauro Carvalho Chehab <mchehab@s-opensource.com>   2017-05-16 07:00:58 -0400
commit     e548cdeffcd8ab8d3551a539890e682c08ab7828 (patch)
tree       6749befde851c8656e6c240073f0a283b8280b76 /Documentation/kernel-hacking/locking.rst
parent     dca1e58e3f1c82a840abaafb9328f84ae69a9926 (diff)
docs-rst: convert kernel-locking to ReST
Use pandoc to convert the documentation to ReST by calling the
Documentation/sphinx/tmplcvt script.
- Manually adjust the tables which got broken by the conversion
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Diffstat (limited to 'Documentation/kernel-hacking/locking.rst')
 -rw-r--r--   Documentation/kernel-hacking/locking.rst   1453
 1 file changed, 1453 insertions, 0 deletions
diff --git a/Documentation/kernel-hacking/locking.rst b/Documentation/kernel-hacking/locking.rst
new file mode 100644
index 000000000000..976b2703df75
--- /dev/null
+++ b/Documentation/kernel-hacking/locking.rst
@@ -0,0 +1,1453 @@
1 | =========================== | ||
2 | Unreliable Guide To Locking | ||
3 | =========================== | ||
4 | |||
5 | :Author: Rusty Russell | ||
6 | |||
7 | Introduction | ||
8 | ============ | ||
9 | |||
10 | Welcome to Rusty's Remarkably Unreliable Guide to Kernel Locking | ||
11 | issues. This document describes the locking systems in the Linux Kernel | ||
12 | in 2.6. | ||
13 | |||
14 | With the wide availability of HyperThreading, and preemption in the | ||
15 | Linux Kernel, everyone hacking on the kernel needs to know the | ||
16 | fundamentals of concurrency and locking for SMP. | ||
17 | |||
18 | The Problem With Concurrency | ||
19 | ============================ | ||
20 | |||
21 | (Skip this if you know what a Race Condition is). | ||
22 | |||
23 | In a normal program, you can increment a counter like so: | ||
24 | |||
25 | :: | ||
26 | |||
27 | very_important_count++; | ||
28 | |||
29 | |||
30 | This is what you would expect to happen: | ||
31 | |||
32 | +------------------------------------+------------------------------------+ | ||
33 | | Instance 1 | Instance 2 | | ||
34 | +====================================+====================================+ | ||
35 | | read very_important_count (5) | | | ||
36 | +------------------------------------+------------------------------------+ | ||
37 | | add 1 (6) | | | ||
38 | +------------------------------------+------------------------------------+ | ||
39 | | write very_important_count (6) | | | ||
40 | +------------------------------------+------------------------------------+ | ||
41 | | | read very_important_count (6) | | ||
42 | +------------------------------------+------------------------------------+ | ||
43 | | | add 1 (7) | | ||
44 | +------------------------------------+------------------------------------+ | ||
45 | | | write very_important_count (7) | | ||
46 | +------------------------------------+------------------------------------+ | ||
47 | |||
48 | Table: Expected Results | ||
49 | |||
50 | This is what might happen: | ||
51 | |||
52 | +------------------------------------+------------------------------------+ | ||
53 | | Instance 1 | Instance 2 | | ||
54 | +====================================+====================================+ | ||
55 | | read very_important_count (5) | | | ||
56 | +------------------------------------+------------------------------------+ | ||
57 | | | read very_important_count (5) | | ||
58 | +------------------------------------+------------------------------------+ | ||
59 | | add 1 (6) | | | ||
60 | +------------------------------------+------------------------------------+ | ||
61 | | | add 1 (6) | | ||
62 | +------------------------------------+------------------------------------+ | ||
63 | | write very_important_count (6) | | | ||
64 | +------------------------------------+------------------------------------+ | ||
65 | | | write very_important_count (6) | | ||
66 | +------------------------------------+------------------------------------+ | ||
67 | |||
68 | Table: Possible Results | ||
69 | |||
70 | Race Conditions and Critical Regions | ||
71 | ------------------------------------ | ||
72 | |||
73 | This overlap, where the result depends on the relative timing of | ||
74 | multiple tasks, is called a race condition. The piece of code containing | ||
75 | the concurrency issue is called a critical region. And especially since | ||
76 | Linux started running on SMP machines, race conditions became one of the | ||
77 | major issues in kernel design and implementation. | ||
78 | |||
79 | Preemption can have the same effect, even if there is only one CPU: by | ||
80 | preempting one task during the critical region, we have exactly the same | ||
81 | race condition. In this case the thread which preempts might run the | ||
82 | critical region itself. | ||
83 | |||
84 | The solution is to recognize when these simultaneous accesses occur, and | ||
85 | use locks to make sure that only one instance can enter the critical | ||
86 | region at any time. There are many friendly primitives in the Linux | ||
87 | kernel to help you do this. And then there are the unfriendly | ||
88 | primitives, but I'll pretend they don't exist. | ||
89 | |||
90 | Locking in the Linux Kernel | ||
91 | =========================== | ||
92 | |||
93 | If I could give you one piece of advice: never sleep with anyone crazier | ||
94 | than yourself. But if I had to give you advice on locking: *keep it | ||
95 | simple*. | ||
96 | |||
97 | Be reluctant to introduce new locks. | ||
98 | |||
99 | Strangely enough, this last one is the exact reverse of my advice when | ||
100 | you *have* slept with someone crazier than yourself. And you should | ||
101 | think about getting a big dog. | ||
102 | |||
103 | Two Main Types of Kernel Locks: Spinlocks and Mutexes | ||
104 | ----------------------------------------------------- | ||
105 | |||
106 | There are two main types of kernel locks. The fundamental type is the | ||
107 | spinlock (``include/asm/spinlock.h``), which is a very simple | ||
108 | single-holder lock: if you can't get the spinlock, you keep trying | ||
109 | (spinning) until you can. Spinlocks are very small and fast, and can be | ||
110 | used anywhere. | ||
111 | |||
112 | The second type is a mutex (``include/linux/mutex.h``): it is like a | ||
113 | spinlock, but you may block holding a mutex. If you can't lock a mutex, | ||
114 | your task will suspend itself, and be woken up when the mutex is | ||
115 | released. This means the CPU can do something else while you are | ||
116 | waiting. There are many cases when you simply can't sleep (see | ||
117 | `What Functions Are Safe To Call From Interrupts? <#sleeping-things>`__), | ||
118 | and so have to use a spinlock instead. | ||
119 | |||
120 | Neither type of lock is recursive: see | ||
121 | `Deadlock: Simple and Advanced <#deadlock>`__. | ||
122 | |||
123 | Locks and Uniprocessor Kernels | ||
124 | ------------------------------ | ||
125 | |||
126 | For kernels compiled without ``CONFIG_SMP``, and without | ||
127 | ``CONFIG_PREEMPT``, spinlocks do not exist at all. This is an excellent | ||
128 | design decision: when no-one else can run at the same time, there is no | ||
129 | reason to have a lock. | ||
130 | |||
131 | If the kernel is compiled without ``CONFIG_SMP``, but ``CONFIG_PREEMPT`` | ||
132 | is set, then spinlocks simply disable preemption, which is sufficient to | ||
133 | prevent any races. For most purposes, we can think of preemption as | ||
134 | equivalent to SMP, and not worry about it separately. | ||
135 | |||
136 | You should always test your locking code with ``CONFIG_SMP`` and | ||
137 | ``CONFIG_PREEMPT`` enabled, even if you don't have an SMP test box, | ||
138 | because it will still catch some kinds of locking bugs. | ||
139 | |||
140 | Mutexes still exist, because they are required for synchronization | ||
141 | between user contexts, as we will see below. | ||
142 | |||
143 | Locking Only In User Context | ||
144 | ---------------------------- | ||
145 | |||
146 | If you have a data structure which is only ever accessed from user | ||
147 | context, then you can use a simple mutex (``include/linux/mutex.h``) to | ||
148 | protect it. This is the most trivial case: you initialize the mutex. | ||
149 | Then you can call :c:func:`mutex_lock_interruptible()` to grab the | ||
150 | mutex, and :c:func:`mutex_unlock()` to release it. There is also a | ||
151 | :c:func:`mutex_lock()`, which should be avoided, because it will | ||
152 | not return if a signal is received. | ||
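
A minimal sketch of that pattern (the ``frob_count`` data and the
``frob_increment()`` helper are invented for illustration, not taken from
the kernel)::

    #include <linux/mutex.h>
    #include <linux/errno.h>

    static DEFINE_MUTEX(frob_mutex);        /* protects frob_count */
    static unsigned long frob_count;

    /* Called only from user context (a syscall), so sleeping is fine. */
    int frob_increment(void)
    {
            if (mutex_lock_interruptible(&frob_mutex))
                    return -EINTR;          /* interrupted by a signal */
            frob_count++;
            mutex_unlock(&frob_mutex);
            return 0;
    }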
153 | |||
154 | Example: ``net/netfilter/nf_sockopt.c`` allows registration of new | ||
155 | :c:func:`setsockopt()` and :c:func:`getsockopt()` calls, with | ||
156 | :c:func:`nf_register_sockopt()`. Registration and de-registration | ||
157 | are only done on module load and unload (and boot time, where there is | ||
158 | no concurrency), and the list of registrations is only consulted for an | ||
159 | unknown :c:func:`setsockopt()` or :c:func:`getsockopt()` system | ||
160 | call. The ``nf_sockopt_mutex`` is perfect to protect this, especially | ||
161 | since the setsockopt and getsockopt calls may well sleep. | ||
162 | |||
163 | Locking Between User Context and Softirqs | ||
164 | ----------------------------------------- | ||
165 | |||
166 | If a softirq shares data with user context, you have two problems. | ||
167 | Firstly, the current user context can be interrupted by a softirq, and | ||
168 | secondly, the critical region could be entered from another CPU. This is | ||
169 | where :c:func:`spin_lock_bh()` (``include/linux/spinlock.h``) is | ||
170 | used. It disables softirqs on that CPU, then grabs the lock. | ||
171 | :c:func:`spin_unlock_bh()` does the reverse. (The '_bh' suffix is | ||
172 | a historical reference to "Bottom Halves", the old name for software | ||
173 | interrupts. It should really be called spin_lock_softirq() in a | ||
174 | perfect world). | ||
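
Here is a small sketch of that split, with invented names (``pkt_count``
and the two helpers); the user-context side uses the ``_bh`` variant,
while the softirq side, already running in BH context, takes the plain
lock::

    static DEFINE_SPINLOCK(stats_lock);     /* protects pkt_count */
    static unsigned long pkt_count;

    /* User context (e.g. a read from a /proc file): block softirqs on
     * this CPU and lock out other CPUs before touching the counter. */
    unsigned long stats_read_and_clear(void)
    {
            unsigned long n;

            spin_lock_bh(&stats_lock);
            n = pkt_count;
            pkt_count = 0;
            spin_unlock_bh(&stats_lock);
            return n;
    }

    /* Softirq/tasklet side: already in BH context, so plain spin_lock()
     * is sufficient here. */
    void stats_account_packet(void)
    {
            spin_lock(&stats_lock);
            pkt_count++;
            spin_unlock(&stats_lock);
    }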
175 | |||
176 | Note that you can also use :c:func:`spin_lock_irq()` or | ||
177 | :c:func:`spin_lock_irqsave()` here, which stop hardware interrupts | ||
178 | as well: see `Hard IRQ Context <#hardirq-context>`__. | ||
179 | |||
180 | This works perfectly for UP as well: the spin lock vanishes, and this | ||
181 | macro simply becomes :c:func:`local_bh_disable()` | ||
182 | (``include/linux/interrupt.h``), which protects you from the softirq | ||
183 | being run. | ||
184 | |||
185 | Locking Between User Context and Tasklets | ||
186 | ----------------------------------------- | ||
187 | |||
188 | This is exactly the same as above, because tasklets are actually run | ||
189 | from a softirq. | ||
190 | |||
191 | Locking Between User Context and Timers | ||
192 | --------------------------------------- | ||
193 | |||
194 | This, too, is exactly the same as above, because timers are actually run | ||
195 | from a softirq. From a locking point of view, tasklets and timers are | ||
196 | identical. | ||
197 | |||
198 | Locking Between Tasklets/Timers | ||
199 | ------------------------------- | ||
200 | |||
201 | Sometimes a tasklet or timer might want to share data with another | ||
202 | tasklet or timer. | ||
203 | |||
204 | The Same Tasklet/Timer | ||
205 | ~~~~~~~~~~~~~~~~~~~~~~ | ||
206 | |||
207 | Since a tasklet is never run on two CPUs at once, you don't need to | ||
208 | worry about your tasklet being reentrant (running twice at once), even | ||
209 | on SMP. | ||
210 | |||
211 | Different Tasklets/Timers | ||
212 | ~~~~~~~~~~~~~~~~~~~~~~~~~ | ||
213 | |||
214 | If another tasklet/timer wants to share data with your tasklet or | ||
215 | timer, you will both need to use :c:func:`spin_lock()` and | ||
216 | :c:func:`spin_unlock()` calls. :c:func:`spin_lock_bh()` is | ||
217 | unnecessary here, as you are already in a tasklet, and none will be run | ||
218 | on the same CPU. | ||
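
As a sketch, two tasklets bumping a shared counter might look like this
(the names are invented, and this assumes the classic tasklet API of this
era, where the handler takes an ``unsigned long`` argument)::

    #include <linux/interrupt.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(shared_lock);    /* protects shared_events */
    static unsigned long shared_events;

    /* Both handlers run in BH context, possibly on different CPUs, so a
     * plain spin_lock() is all that is needed. */
    static void first_tasklet_fn(unsigned long data)
    {
            spin_lock(&shared_lock);
            shared_events++;
            spin_unlock(&shared_lock);
    }

    static void second_tasklet_fn(unsigned long data)
    {
            spin_lock(&shared_lock);
            shared_events += 2;
            spin_unlock(&shared_lock);
    }

    static DECLARE_TASKLET(first_tasklet, first_tasklet_fn, 0);
    static DECLARE_TASKLET(second_tasklet, second_tasklet_fn, 0);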
219 | |||
220 | Locking Between Softirqs | ||
221 | ------------------------ | ||
222 | |||
223 | Often a softirq might want to share data with itself or a tasklet/timer. | ||
224 | |||
225 | The Same Softirq | ||
226 | ~~~~~~~~~~~~~~~~ | ||
227 | |||
228 | The same softirq can run on the other CPUs: you can use a per-CPU array | ||
229 | (see `Per-CPU Data <#per-cpu>`__) for better performance. If you're | ||
230 | going so far as to use a softirq, you probably care about scalable | ||
231 | performance enough to justify the extra complexity. | ||
232 | |||
233 | You'll need to use :c:func:`spin_lock()` and | ||
234 | :c:func:`spin_unlock()` for shared data. | ||
235 | |||
236 | Different Softirqs | ||
237 | ~~~~~~~~~~~~~~~~~~ | ||
238 | |||
239 | You'll need to use :c:func:`spin_lock()` and | ||
240 | :c:func:`spin_unlock()` for shared data, whether it be a timer, | ||
241 | tasklet, different softirq or the same or another softirq: any of them | ||
242 | could be running on a different CPU. | ||
243 | |||
244 | Hard IRQ Context | ||
245 | ================ | ||
246 | |||
247 | Hardware interrupts usually communicate with a tasklet or softirq. | ||
248 | Frequently this involves putting work in a queue, which the softirq will | ||
249 | take out. | ||
250 | |||
251 | Locking Between Hard IRQ and Softirqs/Tasklets | ||
252 | ---------------------------------------------- | ||
253 | |||
254 | If a hardware irq handler shares data with a softirq, you have two | ||
255 | concerns. Firstly, the softirq processing can be interrupted by a | ||
256 | hardware interrupt, and secondly, the critical region could be entered | ||
257 | by a hardware interrupt on another CPU. This is where | ||
258 | :c:func:`spin_lock_irq()` is used. It is defined to disable | ||
259 | interrupts on that cpu, then grab the lock. | ||
260 | :c:func:`spin_unlock_irq()` does the reverse. | ||
261 | |||
262 | The irq handler does not need to use :c:func:`spin_lock_irq()`, because | ||
263 | the softirq cannot run while the irq handler is running: it can use | ||
264 | :c:func:`spin_lock()`, which is slightly faster. The only exception | ||
265 | would be if a different hardware irq handler uses the same lock: | ||
266 | :c:func:`spin_lock_irq()` will stop that from interrupting us. | ||
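
A sketch of that pairing (the device, counter and tasklet names are
invented, and the classic ``unsigned long`` tasklet handler signature is
assumed)::

    static DEFINE_SPINLOCK(mydev_lock);     /* protects mydev_pending */
    static unsigned int mydev_pending;
    static struct tasklet_struct mydev_tasklet;    /* set up elsewhere */

    /* Hard irq handler: no other irq handler uses mydev_lock, so a
     * plain spin_lock() is enough here. */
    static irqreturn_t mydev_interrupt(int irq, void *dev_id)
    {
            spin_lock(&mydev_lock);
            mydev_pending++;
            spin_unlock(&mydev_lock);
            tasklet_schedule(&mydev_tasklet);
            return IRQ_HANDLED;
    }

    /* Tasklet (softirq context): keep the irq handler off this CPU
     * while the lock is held, hence the _irq variant. */
    static void mydev_do_tasklet(unsigned long data)
    {
            unsigned int todo;

            spin_lock_irq(&mydev_lock);
            todo = mydev_pending;
            mydev_pending = 0;
            spin_unlock_irq(&mydev_lock);

            while (todo--)
                    process_one_event();    /* hypothetical helper */
    }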
267 | |||
268 | This works perfectly for UP as well: the spin lock vanishes, and this | ||
269 | macro simply becomes :c:func:`local_irq_disable()` | ||
270 | (``include/asm/smp.h``), which protects you from the softirq/tasklet/BH | ||
271 | being run. | ||
272 | |||
273 | :c:func:`spin_lock_irqsave()` (``include/linux/spinlock.h``) is a | ||
274 | variant which saves whether interrupts were on or off in a flags word, | ||
275 | which is passed to :c:func:`spin_unlock_irqrestore()`. This means | ||
276 | that the same code can be used inside a hard irq handler (where | ||
277 | interrupts are already off) and in softirqs (where the irq disabling is | ||
278 | required). | ||
279 | |||
280 | Note that softirqs (and hence tasklets and timers) are run on return | ||
281 | from hardware interrupts, so :c:func:`spin_lock_irq()` also stops | ||
282 | these. In that sense, :c:func:`spin_lock_irqsave()` is the most | ||
283 | general and powerful locking function. | ||
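
For instance, a helper which may be called from user context, softirq or
a hard irq handler alike (``event_seq`` and the function name are made up
for the example)::

    static DEFINE_SPINLOCK(event_lock);     /* protects event_seq */
    static u64 event_seq;

    u64 next_event_seq(void)
    {
            unsigned long flags;
            u64 seq;

            /* Saves the current interrupt state, disables interrupts and
             * takes the lock; the unlock restores exactly that state. */
            spin_lock_irqsave(&event_lock, flags);
            seq = ++event_seq;
            spin_unlock_irqrestore(&event_lock, flags);
            return seq;
    }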
284 | |||
285 | Locking Between Two Hard IRQ Handlers | ||
286 | ------------------------------------- | ||
287 | |||
288 | It is rare to have to share data between two IRQ handlers, but if you | ||
289 | do, :c:func:`spin_lock_irqsave()` should be used: it is | ||
290 | architecture-specific whether all interrupts are disabled inside irq | ||
291 | handlers themselves. | ||
292 | |||
293 | Cheat Sheet For Locking | ||
294 | ======================= | ||
295 | |||
296 | Pete Zaitcev gives the following summary: | ||
297 | |||
298 | - If you are in a process context (any syscall) and want to lock other | ||
299 | processes out, use a mutex. You can take a mutex and sleep | ||
300 | (``copy_from_user*()`` or ``kmalloc(x,GFP_KERNEL)``). | ||
301 | |||
302 | - Otherwise (== data can be touched in an interrupt), use | ||
303 | :c:func:`spin_lock_irqsave()` and | ||
304 | :c:func:`spin_unlock_irqrestore()`. | ||
305 | |||
306 | - Avoid holding a spinlock for more than 5 lines of code and across any | ||
307 | function call (except accessors like :c:func:`readb()`). | ||
308 | |||
309 | Table of Minimum Requirements | ||
310 | ----------------------------- | ||
311 | |||
312 | The following table lists the *minimum* locking requirements between | ||
313 | various contexts. In some cases, the same context can only be running on | ||
314 | one CPU at a time, so no locking is required for that context (eg. a | ||
315 | particular thread can only run on one CPU at a time, but if it needs | ||
316 | to share data with another thread, locking is required). | ||
317 | |||
318 | Remember the advice above: you can always use | ||
319 | :c:func:`spin_lock_irqsave()`, which is a superset of all other | ||
320 | spinlock primitives. | ||
321 | |||
322 | +------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+ | ||
323 | | | IRQ Handler A | IRQ Handler B | Softirq A | Softirq B | Tasklet A | Tasklet B | Timer A | Timer B | User Context A | User Context B | | ||
324 | +------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+ | ||
325 | | IRQ Handler A | None | | | | | | | | | | | ||
326 | +------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+ | ||
327 | | IRQ Handler B | SLIS | None | | | | | | | | | | ||
328 | +------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+ | ||
329 | | Softirq A | SLI | SLI | SL | | | | | | | | | ||
330 | +------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+ | ||
331 | | Softirq B | SLI | SLI | SL | SL | | | | | | | | ||
332 | +------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+ | ||
333 | | Tasklet A | SLI | SLI | SL | SL | None | | | | | | | ||
334 | +------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+ | ||
335 | | Tasklet B | SLI | SLI | SL | SL | SL | None | | | | | | ||
336 | +------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+ | ||
337 | | Timer A | SLI | SLI | SL | SL | SL | SL | None | | | | | ||
338 | +------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+ | ||
339 | | Timer B | SLI | SLI | SL | SL | SL | SL | SL | None | | | | ||
340 | +------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+ | ||
341 | | User Context A | SLI | SLI | SLBH | SLBH | SLBH | SLBH | SLBH | SLBH | None | | | ||
342 | +------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+ | ||
343 | | User Context B | SLI | SLI | SLBH | SLBH | SLBH | SLBH | SLBH | SLBH | MLI | None | | ||
344 | +------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+ | ||
345 | |||
346 | Table: Table of Locking Requirements | ||
347 | |||
348 | +--------+----------------------------+ | ||
349 | | SLIS | spin_lock_irqsave | | ||
350 | +--------+----------------------------+ | ||
351 | | SLI | spin_lock_irq | | ||
352 | +--------+----------------------------+ | ||
353 | | SL | spin_lock | | ||
354 | +--------+----------------------------+ | ||
355 | | SLBH | spin_lock_bh | | ||
356 | +--------+----------------------------+ | ||
357 | | MLI | mutex_lock_interruptible | | ||
358 | +--------+----------------------------+ | ||
359 | |||
360 | Table: Legend for Locking Requirements Table | ||
361 | |||
362 | The trylock Functions | ||
363 | ===================== | ||
364 | |||
365 | There are functions that try to acquire a lock only once and immediately | ||
366 | return a value indicating whether the lock was acquired. | ||
367 | They can be used if you need no access to the data protected with the | ||
368 | lock when some other thread is holding the lock. You should acquire the | ||
369 | lock later if you then need access to the data protected with the lock. | ||
370 | |||
371 | :c:func:`spin_trylock()` does not spin but returns non-zero if it | ||
372 | acquires the spinlock on the first try or 0 if not. This function can be | ||
373 | used in all contexts like :c:func:`spin_lock()`: you must have | ||
374 | disabled the contexts that might interrupt you and acquire the spin | ||
375 | lock. | ||
376 | |||
377 | :c:func:`mutex_trylock()` does not suspend your task but returns | ||
378 | non-zero if it could lock the mutex on the first try or 0 if not. This | ||
379 | function cannot be safely used in hardware or software interrupt | ||
380 | contexts despite not sleeping. | ||
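
A brief sketch of both, reusing invented names (``stats_lock``,
``cache_mutex`` and the two helpers are hypothetical)::

    /* Opportunistic: if the lock is contended, skip the work rather
     * than spin for it. */
    if (spin_trylock(&stats_lock)) {
            stats_flush();                  /* hypothetical helper */
            spin_unlock(&stats_lock);
    }

    /* mutex_trylock() never sleeps, but may still only be used where
     * sleeping would have been allowed, i.e. not in interrupt context. */
    if (mutex_trylock(&cache_mutex)) {
            shrink_cache();                 /* hypothetical helper */
            mutex_unlock(&cache_mutex);
    }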
381 | |||
382 | Common Examples | ||
383 | =============== | ||
384 | |||
385 | Let's step through a simple example: a cache of number to name mappings. | ||
386 | The cache keeps a count of how often each of the objects is used, and | ||
387 | when it gets full, throws out the least used one. | ||
388 | |||
389 | All In User Context | ||
390 | ------------------- | ||
391 | |||
392 | For our first example, we assume that all operations are in user context | ||
393 | (ie. from system calls), so we can sleep. This means we can use a mutex | ||
394 | to protect the cache and all the objects within it. Here's the code:: | ||
395 | |||
396 | #include <linux/list.h> | ||
397 | #include <linux/slab.h> | ||
398 | #include <linux/string.h> | ||
399 | #include <linux/mutex.h> | ||
400 | #include <asm/errno.h> | ||
401 | |||
402 | struct object | ||
403 | { | ||
404 | struct list_head list; | ||
405 | int id; | ||
406 | char name[32]; | ||
407 | int popularity; | ||
408 | }; | ||
409 | |||
410 | /* Protects the cache, cache_num, and the objects within it */ | ||
411 | static DEFINE_MUTEX(cache_lock); | ||
412 | static LIST_HEAD(cache); | ||
413 | static unsigned int cache_num = 0; | ||
414 | #define MAX_CACHE_SIZE 10 | ||
415 | |||
416 | /* Must be holding cache_lock */ | ||
417 | static struct object *__cache_find(int id) | ||
418 | { | ||
419 | struct object *i; | ||
420 | |||
421 | list_for_each_entry(i, &cache, list) | ||
422 | if (i->id == id) { | ||
423 | i->popularity++; | ||
424 | return i; | ||
425 | } | ||
426 | return NULL; | ||
427 | } | ||
428 | |||
429 | /* Must be holding cache_lock */ | ||
430 | static void __cache_delete(struct object *obj) | ||
431 | { | ||
432 | BUG_ON(!obj); | ||
433 | list_del(&obj->list); | ||
434 | kfree(obj); | ||
435 | cache_num--; | ||
436 | } | ||
437 | |||
438 | /* Must be holding cache_lock */ | ||
439 | static void __cache_add(struct object *obj) | ||
440 | { | ||
441 | list_add(&obj->list, &cache); | ||
442 | if (++cache_num > MAX_CACHE_SIZE) { | ||
443 | struct object *i, *outcast = NULL; | ||
444 | list_for_each_entry(i, &cache, list) { | ||
445 | if (!outcast || i->popularity < outcast->popularity) | ||
446 | outcast = i; | ||
447 | } | ||
448 | __cache_delete(outcast); | ||
449 | } | ||
450 | } | ||
451 | |||
452 | int cache_add(int id, const char *name) | ||
453 | { | ||
454 | struct object *obj; | ||
455 | |||
456 | if ((obj = kmalloc(sizeof(*obj), GFP_KERNEL)) == NULL) | ||
457 | return -ENOMEM; | ||
458 | |||
459 | strlcpy(obj->name, name, sizeof(obj->name)); | ||
460 | obj->id = id; | ||
461 | obj->popularity = 0; | ||
462 | |||
463 | mutex_lock(&cache_lock); | ||
464 | __cache_add(obj); | ||
465 | mutex_unlock(&cache_lock); | ||
466 | return 0; | ||
467 | } | ||
468 | |||
469 | void cache_delete(int id) | ||
470 | { | ||
471 | mutex_lock(&cache_lock); | ||
472 | __cache_delete(__cache_find(id)); | ||
473 | mutex_unlock(&cache_lock); | ||
474 | } | ||
475 | |||
476 | int cache_find(int id, char *name) | ||
477 | { | ||
478 | struct object *obj; | ||
479 | int ret = -ENOENT; | ||
480 | |||
481 | mutex_lock(&cache_lock); | ||
482 | obj = __cache_find(id); | ||
483 | if (obj) { | ||
484 | ret = 0; | ||
485 | strcpy(name, obj->name); | ||
486 | } | ||
487 | mutex_unlock(&cache_lock); | ||
488 | return ret; | ||
489 | } | ||
490 | |||
491 | Note that we always make sure we have the cache_lock when we add, | ||
492 | delete, or look up the cache: both the cache infrastructure itself and | ||
493 | the contents of the objects are protected by the lock. In this case it's | ||
494 | easy, since we copy the data for the user, and never let them access the | ||
495 | objects directly. | ||
496 | |||
497 | There is a slight (and common) optimization here: in | ||
498 | :c:func:`cache_add()` we set up the fields of the object before | ||
499 | grabbing the lock. This is safe, as no-one else can access it until we | ||
500 | put it in cache. | ||
501 | |||
502 | Accessing From Interrupt Context | ||
503 | -------------------------------- | ||
504 | |||
505 | Now consider the case where :c:func:`cache_find()` can be called | ||
506 | from interrupt context: either a hardware interrupt or a softirq. An | ||
507 | example would be a timer which deletes objects from the cache. | ||
508 | |||
509 | The change is shown below, in standard patch format: the ``-`` are lines | ||
510 | which are taken away, and the ``+`` are lines which are added. | ||
511 | |||
512 | :: | ||
513 | |||
514 | --- cache.c.usercontext 2003-12-09 13:58:54.000000000 +1100 | ||
515 | +++ cache.c.interrupt 2003-12-09 14:07:49.000000000 +1100 | ||
516 | @@ -12,7 +12,7 @@ | ||
517 | int popularity; | ||
518 | }; | ||
519 | |||
520 | -static DEFINE_MUTEX(cache_lock); | ||
521 | +static DEFINE_SPINLOCK(cache_lock); | ||
522 | static LIST_HEAD(cache); | ||
523 | static unsigned int cache_num = 0; | ||
524 | #define MAX_CACHE_SIZE 10 | ||
525 | @@ -55,6 +55,7 @@ | ||
526 | int cache_add(int id, const char *name) | ||
527 | { | ||
528 | struct object *obj; | ||
529 | + unsigned long flags; | ||
530 | |||
531 | if ((obj = kmalloc(sizeof(*obj), GFP_KERNEL)) == NULL) | ||
532 | return -ENOMEM; | ||
533 | @@ -63,30 +64,33 @@ | ||
534 | obj->id = id; | ||
535 | obj->popularity = 0; | ||
536 | |||
537 | - mutex_lock(&cache_lock); | ||
538 | + spin_lock_irqsave(&cache_lock, flags); | ||
539 | __cache_add(obj); | ||
540 | - mutex_unlock(&cache_lock); | ||
541 | + spin_unlock_irqrestore(&cache_lock, flags); | ||
542 | return 0; | ||
543 | } | ||
544 | |||
545 | void cache_delete(int id) | ||
546 | { | ||
547 | - mutex_lock(&cache_lock); | ||
548 | + unsigned long flags; | ||
549 | + | ||
550 | + spin_lock_irqsave(&cache_lock, flags); | ||
551 | __cache_delete(__cache_find(id)); | ||
552 | - mutex_unlock(&cache_lock); | ||
553 | + spin_unlock_irqrestore(&cache_lock, flags); | ||
554 | } | ||
555 | |||
556 | int cache_find(int id, char *name) | ||
557 | { | ||
558 | struct object *obj; | ||
559 | int ret = -ENOENT; | ||
560 | + unsigned long flags; | ||
561 | |||
562 | - mutex_lock(&cache_lock); | ||
563 | + spin_lock_irqsave(&cache_lock, flags); | ||
564 | obj = __cache_find(id); | ||
565 | if (obj) { | ||
566 | ret = 0; | ||
567 | strcpy(name, obj->name); | ||
568 | } | ||
569 | - mutex_unlock(&cache_lock); | ||
570 | + spin_unlock_irqrestore(&cache_lock, flags); | ||
571 | return ret; | ||
572 | } | ||
573 | |||
574 | Note that :c:func:`spin_lock_irqsave()` will turn off | ||
575 | interrupts if they are on, and does nothing if they are already off | ||
576 | (e.g. if we are already in an interrupt handler), hence these | ||
577 | functions are safe to call from any context. | ||
578 | |||
579 | Unfortunately, :c:func:`cache_add()` calls :c:func:`kmalloc()` | ||
580 | with the ``GFP_KERNEL`` flag, which is only legal in user context. I | ||
581 | have assumed that :c:func:`cache_add()` is still only called in | ||
582 | user context, otherwise this should become a parameter to | ||
583 | :c:func:`cache_add()`. | ||
584 | |||
585 | Exposing Objects Outside This File | ||
586 | ---------------------------------- | ||
587 | |||
588 | If our objects contained more information, it might not be sufficient to | ||
589 | copy the information in and out: other parts of the code might want to | ||
590 | keep pointers to these objects, for example, rather than looking up the | ||
591 | id every time. This produces two problems. | ||
592 | |||
593 | The first problem is that we use the ``cache_lock`` to protect objects: | ||
594 | we'd need to make this non-static so the rest of the code can use it. | ||
595 | This makes locking trickier, as it is no longer all in one place. | ||
596 | |||
597 | The second problem is the lifetime problem: if another structure keeps a | ||
598 | pointer to an object, it presumably expects that pointer to remain | ||
599 | valid. Unfortunately, this is only guaranteed while you hold the lock, | ||
600 | otherwise someone might call :c:func:`cache_delete()` and even | ||
601 | worse, add another object, re-using the same address. | ||
602 | |||
603 | As there is only one lock, you can't hold it forever: no-one else would | ||
604 | get any work done. | ||
605 | |||
606 | The solution to this problem is to use a reference count: everyone who | ||
607 | has a pointer to the object increases it when they first get the object, | ||
608 | and drops the reference count when they're finished with it. Whoever | ||
609 | drops it to zero knows it is unused, and can actually delete it. | ||
610 | |||
611 | Here is the code:: | ||
612 | |||
613 | --- cache.c.interrupt 2003-12-09 14:25:43.000000000 +1100 | ||
614 | +++ cache.c.refcnt 2003-12-09 14:33:05.000000000 +1100 | ||
615 | @@ -7,6 +7,7 @@ | ||
616 | struct object | ||
617 | { | ||
618 | struct list_head list; | ||
619 | + unsigned int refcnt; | ||
620 | int id; | ||
621 | char name[32]; | ||
622 | int popularity; | ||
623 | @@ -17,6 +18,35 @@ | ||
624 | static unsigned int cache_num = 0; | ||
625 | #define MAX_CACHE_SIZE 10 | ||
626 | |||
627 | +static void __object_put(struct object *obj) | ||
628 | +{ | ||
629 | + if (--obj->refcnt == 0) | ||
630 | + kfree(obj); | ||
631 | +} | ||
632 | + | ||
633 | +static void __object_get(struct object *obj) | ||
634 | +{ | ||
635 | + obj->refcnt++; | ||
636 | +} | ||
637 | + | ||
638 | +void object_put(struct object *obj) | ||
639 | +{ | ||
640 | + unsigned long flags; | ||
641 | + | ||
642 | + spin_lock_irqsave(&cache_lock, flags); | ||
643 | + __object_put(obj); | ||
644 | + spin_unlock_irqrestore(&cache_lock, flags); | ||
645 | +} | ||
646 | + | ||
647 | +void object_get(struct object *obj) | ||
648 | +{ | ||
649 | + unsigned long flags; | ||
650 | + | ||
651 | + spin_lock_irqsave(&cache_lock, flags); | ||
652 | + __object_get(obj); | ||
653 | + spin_unlock_irqrestore(&cache_lock, flags); | ||
654 | +} | ||
655 | + | ||
656 | /* Must be holding cache_lock */ | ||
657 | static struct object *__cache_find(int id) | ||
658 | { | ||
659 | @@ -35,6 +65,7 @@ | ||
660 | { | ||
661 | BUG_ON(!obj); | ||
662 | list_del(&obj->list); | ||
663 | + __object_put(obj); | ||
664 | cache_num--; | ||
665 | } | ||
666 | |||
667 | @@ -63,6 +94,7 @@ | ||
668 | strlcpy(obj->name, name, sizeof(obj->name)); | ||
669 | obj->id = id; | ||
670 | obj->popularity = 0; | ||
671 | + obj->refcnt = 1; /* The cache holds a reference */ | ||
672 | |||
673 | spin_lock_irqsave(&cache_lock, flags); | ||
674 | __cache_add(obj); | ||
675 | @@ -79,18 +111,15 @@ | ||
676 | spin_unlock_irqrestore(&cache_lock, flags); | ||
677 | } | ||
678 | |||
679 | -int cache_find(int id, char *name) | ||
680 | +struct object *cache_find(int id) | ||
681 | { | ||
682 | struct object *obj; | ||
683 | - int ret = -ENOENT; | ||
684 | unsigned long flags; | ||
685 | |||
686 | spin_lock_irqsave(&cache_lock, flags); | ||
687 | obj = __cache_find(id); | ||
688 | - if (obj) { | ||
689 | - ret = 0; | ||
690 | - strcpy(name, obj->name); | ||
691 | - } | ||
692 | + if (obj) | ||
693 | + __object_get(obj); | ||
694 | spin_unlock_irqrestore(&cache_lock, flags); | ||
695 | - return ret; | ||
696 | + return obj; | ||
697 | } | ||
698 | |||
699 | We encapsulate the reference counting in the standard 'get' and 'put' | ||
700 | functions. Now we can return the object itself from | ||
701 | :c:func:`cache_find()` which has the advantage that the user can | ||
702 | now sleep holding the object (eg. to :c:func:`copy_to_user()` the | ||
703 | name to userspace). | ||
704 | |||
705 | The other point to note is that I said a reference should be held for | ||
706 | every pointer to the object: thus the reference count is 1 when first | ||
707 | inserted into the cache. In some versions the framework does not hold a | ||
708 | reference count, but they are more complicated. | ||
709 | |||
710 | Using Atomic Operations For The Reference Count | ||
711 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | ||
712 | |||
713 | In practice, ``atomic_t`` would usually be used for refcnt. There are a | ||
714 | number of atomic operations defined in ``include/asm/atomic.h``: these | ||
715 | are guaranteed to be seen atomically from all CPUs in the system, so no | ||
716 | lock is required. In this case, it is simpler than using spinlocks, | ||
717 | although for anything non-trivial using spinlocks is clearer. The | ||
718 | :c:func:`atomic_inc()` and :c:func:`atomic_dec_and_test()` | ||
719 | are used instead of the standard increment and decrement operators, and | ||
720 | the lock is no longer used to protect the reference count itself. | ||
721 | |||
722 | :: | ||
723 | |||
724 | --- cache.c.refcnt 2003-12-09 15:00:35.000000000 +1100 | ||
725 | +++ cache.c.refcnt-atomic 2003-12-11 15:49:42.000000000 +1100 | ||
726 | @@ -7,7 +7,7 @@ | ||
727 | struct object | ||
728 | { | ||
729 | struct list_head list; | ||
730 | - unsigned int refcnt; | ||
731 | + atomic_t refcnt; | ||
732 | int id; | ||
733 | char name[32]; | ||
734 | int popularity; | ||
735 | @@ -18,33 +18,15 @@ | ||
736 | static unsigned int cache_num = 0; | ||
737 | #define MAX_CACHE_SIZE 10 | ||
738 | |||
739 | -static void __object_put(struct object *obj) | ||
740 | -{ | ||
741 | - if (--obj->refcnt == 0) | ||
742 | - kfree(obj); | ||
743 | -} | ||
744 | - | ||
745 | -static void __object_get(struct object *obj) | ||
746 | -{ | ||
747 | - obj->refcnt++; | ||
748 | -} | ||
749 | - | ||
750 | void object_put(struct object *obj) | ||
751 | { | ||
752 | - unsigned long flags; | ||
753 | - | ||
754 | - spin_lock_irqsave(&cache_lock, flags); | ||
755 | - __object_put(obj); | ||
756 | - spin_unlock_irqrestore(&cache_lock, flags); | ||
757 | + if (atomic_dec_and_test(&obj->refcnt)) | ||
758 | + kfree(obj); | ||
759 | } | ||
760 | |||
761 | void object_get(struct object *obj) | ||
762 | { | ||
763 | - unsigned long flags; | ||
764 | - | ||
765 | - spin_lock_irqsave(&cache_lock, flags); | ||
766 | - __object_get(obj); | ||
767 | - spin_unlock_irqrestore(&cache_lock, flags); | ||
768 | + atomic_inc(&obj->refcnt); | ||
769 | } | ||
770 | |||
771 | /* Must be holding cache_lock */ | ||
772 | @@ -65,7 +47,7 @@ | ||
773 | { | ||
774 | BUG_ON(!obj); | ||
775 | list_del(&obj->list); | ||
776 | - __object_put(obj); | ||
777 | + object_put(obj); | ||
778 | cache_num--; | ||
779 | } | ||
780 | |||
781 | @@ -94,7 +76,7 @@ | ||
782 | strlcpy(obj->name, name, sizeof(obj->name)); | ||
783 | obj->id = id; | ||
784 | obj->popularity = 0; | ||
785 | - obj->refcnt = 1; /* The cache holds a reference */ | ||
786 | + atomic_set(&obj->refcnt, 1); /* The cache holds a reference */ | ||
787 | |||
788 | spin_lock_irqsave(&cache_lock, flags); | ||
789 | __cache_add(obj); | ||
790 | @@ -119,7 +101,7 @@ | ||
791 | spin_lock_irqsave(&cache_lock, flags); | ||
792 | obj = __cache_find(id); | ||
793 | if (obj) | ||
794 | - __object_get(obj); | ||
795 | + object_get(obj); | ||
796 | spin_unlock_irqrestore(&cache_lock, flags); | ||
797 | return obj; | ||
798 | } | ||
799 | |||
800 | Protecting The Objects Themselves | ||
801 | --------------------------------- | ||
802 | |||
803 | In these examples, we assumed that the objects (except the reference | ||
804 | counts) never changed once they are created. If we wanted to allow the | ||
805 | name to change, there are three possibilities: | ||
806 | |||
807 | - You can make ``cache_lock`` non-static, and tell people to grab that | ||
808 | lock before changing the name in any object. | ||
809 | |||
810 | - You can provide a :c:func:`cache_obj_rename()` which grabs this | ||
811 | lock and changes the name for the caller, and tell everyone to use | ||
812 | that function. | ||
813 | |||
814 | - You can make the ``cache_lock`` protect only the cache itself, and | ||
815 | use another lock to protect the name. | ||
816 | |||
817 | Theoretically, you can make the locks as fine-grained as one lock for | ||
818 | every field, for every object. In practice, the most common variants | ||
819 | are: | ||
820 | |||
821 | - One lock which protects the infrastructure (the ``cache`` list in | ||
822 | this example) and all the objects. This is what we have done so far. | ||
823 | |||
824 | - One lock which protects the infrastructure (including the list | ||
825 | pointers inside the objects), and one lock inside the object which | ||
826 | protects the rest of that object. | ||
827 | |||
828 | - Multiple locks to protect the infrastructure (eg. one lock per hash | ||
829 | chain), possibly with a separate per-object lock. | ||
830 | |||
831 | Here is the "lock-per-object" implementation: | ||
832 | |||
833 | :: | ||
834 | |||
835 | --- cache.c.refcnt-atomic 2003-12-11 15:50:54.000000000 +1100 | ||
836 | +++ cache.c.perobjectlock 2003-12-11 17:15:03.000000000 +1100 | ||
837 | @@ -6,11 +6,17 @@ | ||
838 | |||
839 | struct object | ||
840 | { | ||
841 | + /* These two protected by cache_lock. */ | ||
842 | struct list_head list; | ||
843 | + int popularity; | ||
844 | + | ||
845 | atomic_t refcnt; | ||
846 | + | ||
847 | + /* Doesn't change once created. */ | ||
848 | int id; | ||
849 | + | ||
850 | + spinlock_t lock; /* Protects the name */ | ||
851 | char name[32]; | ||
852 | - int popularity; | ||
853 | }; | ||
854 | |||
855 | static DEFINE_SPINLOCK(cache_lock); | ||
856 | @@ -77,6 +84,7 @@ | ||
857 | obj->id = id; | ||
858 | obj->popularity = 0; | ||
859 | atomic_set(&obj->refcnt, 1); /* The cache holds a reference */ | ||
860 | + spin_lock_init(&obj->lock); | ||
861 | |||
862 | spin_lock_irqsave(&cache_lock, flags); | ||
863 | __cache_add(obj); | ||
864 | |||
865 | Note that I decide that the popularity count should be protected by the | ||
866 | ``cache_lock`` rather than the per-object lock: this is because it (like | ||
867 | the :c:type:`struct list_head <list_head>` inside the object) | ||
868 | is logically part of the infrastructure. This way, I don't need to grab | ||
869 | the lock of every object in :c:func:`__cache_add()` when seeking | ||
870 | the least popular. | ||
871 | |||
872 | I also decided that the id member is unchangeable, so I don't need to | ||
873 | grab each object lock in :c:func:`__cache_find()` to examine the | ||
874 | id: the object lock is only used by a caller who wants to read or write | ||
875 | the name field. | ||
876 | |||
877 | Note also that I added a comment describing what data was protected by | ||
878 | which locks. This is extremely important, as it describes the runtime | ||
879 | behavior of the code, and can be hard to deduce from just reading. And as | ||
880 | Alan Cox says, “Lock data, not code”. | ||
881 | |||
882 | Common Problems | ||
883 | =============== | ||
884 | |||
885 | Deadlock: Simple and Advanced | ||
886 | ----------------------------- | ||
887 | |||
888 | There is a coding bug where a piece of code tries to grab a spinlock | ||
889 | twice: it will spin forever, waiting for the lock to be released | ||
890 | (spinlocks, rwlocks and mutexes are not recursive in Linux). This is | ||
891 | trivial to diagnose: not a | ||
892 | stay-up-five-nights-talk-to-fluffy-code-bunnies kind of problem. | ||
893 | |||
894 | For a slightly more complex case, imagine you have a region shared by a | ||
895 | softirq and user context. If you use a :c:func:`spin_lock()` call | ||
896 | to protect it, it is possible that the user context will be interrupted | ||
897 | by the softirq while it holds the lock, and the softirq will then spin | ||
898 | forever trying to get the same lock. | ||
899 | |||
900 | Both of these are called deadlock, and as shown above, it can occur even | ||
901 | with a single CPU (although not on UP compiles, since spinlocks vanish | ||
902 | on kernel compiles with ``CONFIG_SMP``\ =n. You'll still get data | ||
903 | corruption in the second example). | ||
904 | |||
905 | This complete lockup is easy to diagnose: on SMP boxes the watchdog | ||
906 | timer or compiling with ``DEBUG_SPINLOCK`` set | ||
907 | (``include/linux/spinlock.h``) will show this up immediately when it | ||
908 | happens. | ||
909 | |||
910 | A more complex problem is the so-called 'deadly embrace', involving two | ||
911 | or more locks. Say you have a hash table: each entry in the table is a | ||
912 | spinlock, and a chain of hashed objects. Inside a softirq handler, you | ||
913 | sometimes want to alter an object from one place in the hash to another: | ||
914 | you grab the spinlock of the old hash chain and the spinlock of the new | ||
915 | hash chain, and delete the object from the old one, and insert it in the | ||
916 | new one. | ||
917 | |||
918 | There are two problems here. First, if your code ever tries to move the | ||
919 | object to the same chain, it will deadlock with itself as it tries to | ||
920 | lock it twice. Secondly, if the same softirq on another CPU is trying to | ||
921 | move another object in the reverse direction, the following could | ||
922 | happen: | ||
923 | |||
924 | +-----------------------+-----------------------+ | ||
925 | | CPU 1 | CPU 2 | | ||
926 | +=======================+=======================+ | ||
927 | | Grab lock A -> OK | Grab lock B -> OK | | ||
928 | +-----------------------+-----------------------+ | ||
929 | | Grab lock B -> spin | Grab lock A -> spin | | ||
930 | +-----------------------+-----------------------+ | ||
931 | |||
932 | Table: Consequences | ||
933 | |||
934 | The two CPUs will spin forever, waiting for the other to give up their | ||
935 | lock. It will look, smell, and feel like a crash. | ||
936 | |||
937 | Preventing Deadlock | ||
938 | ------------------- | ||
939 | |||
940 | Textbooks will tell you that if you always lock in the same order, you | ||
941 | will never get this kind of deadlock. Practice will tell you that this | ||
942 | approach doesn't scale: when I create a new lock, I don't understand | ||
943 | enough of the kernel to figure out where in the 5000 lock hierarchy it | ||
944 | will fit. | ||
945 | |||
946 | The best locks are encapsulated: they never get exposed in headers, and | ||
947 | are never held around calls to non-trivial functions outside the same | ||
948 | file. You can read through this code and see that it will never | ||
949 | deadlock, because it never tries to grab another lock while it has that | ||
950 | one. People using your code don't even need to know you are using a | ||
951 | lock. | ||
952 | |||
953 | A classic problem here is when you provide callbacks or hooks: if you | ||
954 | call these with the lock held, you risk simple deadlock, or a deadly | ||
955 | embrace (who knows what the callback will do?). Remember, the other | ||
956 | programmers are out to get you, so don't do this. | ||
957 | |||
958 | Overzealous Prevention Of Deadlocks | ||
959 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | ||
960 | |||
961 | Deadlocks are problematic, but not as bad as data corruption. Code which | ||
962 | grabs a read lock, searches a list, fails to find what it wants, drops | ||
963 | the read lock, grabs a write lock and inserts the object has a race | ||
964 | condition. | ||
965 | |||
966 | If you don't see why, please stay the fuck away from my code. | ||
967 | |||
968 | Racing Timers: A Kernel Pastime | ||
969 | ------------------------------- | ||
970 | |||
971 | Timers can produce their own special problems with races. Consider a | ||
972 | collection of objects (list, hash, etc) where each object has a timer | ||
973 | which is due to destroy it. | ||
974 | |||
975 | If you want to destroy the entire collection (say on module removal), | ||
976 | you might do the following:: | ||
977 | |||
978 | /* THIS CODE BAD BAD BAD BAD: IF IT WAS ANY WORSE IT WOULD USE | ||
979 | HUNGARIAN NOTATION */ | ||
980 | spin_lock_bh(&list_lock); | ||
981 | |||
982 | while (list) { | ||
983 | struct foo *next = list->next; | ||
984 | del_timer(&list->timer); | ||
985 | kfree(list); | ||
986 | list = next; | ||
987 | } | ||
988 | |||
989 | spin_unlock_bh(&list_lock); | ||
990 | |||
991 | |||
992 | Sooner or later, this will crash on SMP, because a timer can have just | ||
993 | gone off before the :c:func:`spin_lock_bh()`, and it will only get | ||
994 | the lock after we :c:func:`spin_unlock_bh()`, and then try to free | ||
995 | the element (which has already been freed!). | ||
996 | |||
997 | This can be avoided by checking the result of | ||
998 | :c:func:`del_timer()`: if it returns 1, the timer has been deleted. | ||
999 | If 0, it means (in this case) that it is currently running, so we can | ||
1000 | do:: | ||
1001 | |||
1002 | retry: | ||
1003 | spin_lock_bh(&list_lock); | ||
1004 | |||
1005 | while (list) { | ||
1006 | struct foo *next = list->next; | ||
1007 | if (!del_timer(&list->timer)) { | ||
1008 | /* Give timer a chance to delete this */ | ||
1009 | spin_unlock_bh(&list_lock); | ||
1010 | goto retry; | ||
1011 | } | ||
1012 | kfree(list); | ||
1013 | list = next; | ||
1014 | } | ||
1015 | |||
1016 | spin_unlock_bh(&list_lock); | ||
1017 | |||
1018 | |||
1019 | Another common problem is deleting timers which restart themselves (by | ||
1020 | calling :c:func:`add_timer()` at the end of their timer function). | ||
1021 | Because this is a fairly common case which is prone to races, you should | ||
1022 | use :c:func:`del_timer_sync()` (``include/linux/timer.h``) to | ||
1023 | handle this case. It returns the number of times the timer had to be | ||
1024 | deleted before we finally stopped it from adding itself back in. | ||
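
A sketch of the tear-down of such a self-rearming timer (the ``struct
foo`` names are invented, and the classic pre-``timer_setup()`` callback
signature is assumed, matching the examples above)::

    static void foo_timer_fn(unsigned long data)
    {
            struct foo *f = (struct foo *)data;

            foo_poll_hardware(f);                   /* hypothetical */
            mod_timer(&f->timer, jiffies + HZ);     /* re-arm in 1 second */
    }

    static void foo_destroy(struct foo *f)
    {
            /* Returns only when the handler has finished and the timer
             * is no longer pending.  Don't call this while holding a
             * lock which foo_timer_fn() also takes, or it can deadlock. */
            del_timer_sync(&f->timer);
            kfree(f);
    }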
1025 | |||
1026 | Locking Speed | ||
1027 | ============= | ||
1028 | |||
1029 | There are three main things to worry about when considering speed of | ||
1030 | some code which does locking. First is concurrency: how many things are | ||
1031 | going to be waiting while someone else is holding a lock. Second is the | ||
1032 | time taken to actually acquire and release an uncontended lock. Third is | ||
1033 | using fewer, or smarter locks. I'm assuming that the lock is used fairly | ||
1034 | often: otherwise, you wouldn't be concerned about efficiency. | ||
1035 | |||
1036 | Concurrency depends on how long the lock is usually held: you should | ||
1037 | hold the lock for as long as needed, but no longer. In the cache | ||
1038 | example, we always create the object without the lock held, and then | ||
1039 | grab the lock only when we are ready to insert it in the list. | ||
1040 | |||
1041 | Acquisition times depend on how much damage the lock operations do to | ||
1042 | the pipeline (pipeline stalls) and how likely it is that this CPU was | ||
1043 | the last one to grab the lock (ie. is the lock cache-hot for this CPU): | ||
1044 | on a machine with more CPUs, this likelihood drops fast. Consider a | ||
1045 | 700MHz Intel Pentium III: an instruction takes about 0.7ns, an atomic | ||
1046 | increment takes about 58ns, a lock which is cache-hot on this CPU takes | ||
1047 | 160ns, and a cacheline transfer from another CPU takes an additional 170 | ||
1048 | to 360ns. (These figures from Paul McKenney's `Linux Journal RCU | ||
1049 | article <http://www.linuxjournal.com/article.php?sid=6993>`__). | ||
1050 | |||
1051 | These two aims conflict: holding a lock for a short time might be done | ||
1052 | by splitting locks into parts (such as in our final per-object-lock | ||
1053 | example), but this increases the number of lock acquisitions, and the | ||
1054 | results are often slower than having a single lock. This is another | ||
1055 | reason to advocate locking simplicity. | ||
1056 | |||
1057 | The third concern is addressed below: there are some methods to reduce | ||
1058 | the amount of locking which needs to be done. | ||
1059 | |||
1060 | Read/Write Lock Variants | ||
1061 | ------------------------ | ||
1062 | |||
1063 | Both spinlocks and mutexes have read/write variants: ``rwlock_t`` and | ||
1064 | :c:type:`struct rw_semaphore <rw_semaphore>`. These divide | ||
1065 | users into two classes: the readers and the writers. If you are only | ||
1066 | reading the data, you can get a read lock, but to write to the data you | ||
1067 | need the write lock. Many people can hold a read lock, but a writer must | ||
1068 | be sole holder. | ||
1069 | |||
1070 | If your code divides neatly along reader/writer lines (as our cache code | ||
1071 | does), and the lock is held by readers for significant lengths of time, | ||
1072 | using these locks can help. They are slightly slower than the normal | ||
1073 | locks though, so in practice ``rwlock_t`` is not usually worthwhile. | ||
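
A short sketch of the busy-waiting variant (``struct entry`` and the two
functions are invented; the sleeping analogue, struct rw_semaphore, uses
down_read()/up_read() and down_write()/up_write() instead)::

    struct entry {
            struct list_head list;
            int id;
    };

    static DEFINE_RWLOCK(map_lock);         /* many readers or one writer */
    static LIST_HEAD(map_entries);

    int map_lookup(int id)
    {
            struct entry *e;
            int found = 0;

            read_lock(&map_lock);           /* other readers may hold it too */
            list_for_each_entry(e, &map_entries, list) {
                    if (e->id == id) {
                            found = 1;
                            break;
                    }
            }
            read_unlock(&map_lock);
            return found;
    }

    void map_insert(struct entry *e)
    {
            write_lock(&map_lock);          /* excludes readers and writers */
            list_add(&e->list, &map_entries);
            write_unlock(&map_lock);
    }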
1074 | |||
1075 | Avoiding Locks: Read Copy Update | ||
1076 | -------------------------------- | ||
1077 | |||
1078 | There is a special method of read/write locking called Read Copy Update. | ||
1079 | Using RCU, the readers can avoid taking a lock altogether: as we expect | ||
1080 | our cache to be read more often than updated (otherwise the cache is a | ||
1081 | waste of time), it is a candidate for this optimization. | ||
1082 | |||
1083 | How do we get rid of read locks? Getting rid of read locks means that | ||
1084 | writers may be changing the list underneath the readers. That is | ||
1085 | actually quite simple: we can read a linked list while an element is | ||
1086 | being added if the writer adds the element very carefully. For example, | ||
1087 | adding ``new`` to a singly linked list called ``list``:: | ||
1088 | |||
1089 | new->next = list->next; | ||
1090 | wmb(); | ||
1091 | list->next = new; | ||
1092 | |||
1093 | |||
1094 | The :c:func:`wmb()` is a write memory barrier. It ensures that the | ||
1095 | first operation (setting the new element's ``next`` pointer) is complete | ||
1096 | and will be seen by all CPUs, before the second operation (putting | ||
1097 | the new element into the list) happens. This is important, since modern | ||
1098 | compilers and modern CPUs can both reorder instructions unless told | ||
1099 | otherwise: we want a reader to either not see the new element at all, or | ||
1100 | see the new element with the ``next`` pointer correctly pointing at the | ||
1101 | rest of the list. | ||
1102 | |||
1103 | Fortunately, there is a function to do this for standard | ||
1104 | :c:type:`struct list_head <list_head>` lists: | ||
1105 | :c:func:`list_add_rcu()` (``include/linux/list.h``). | ||
1106 | |||
1107 | Removing an element from the list is even simpler: we replace the | ||
1108 | pointer to the old element with a pointer to its successor, and readers | ||
1109 | will either see it, or skip over it. | ||
1110 | |||
1111 | :: | ||
1112 | |||
1113 | list->next = old->next; | ||
1114 | |||
1115 | |||
1116 | There is :c:func:`list_del_rcu()` (``include/linux/list.h``) which | ||
1117 | does this (the normal version poisons the old object, which we don't | ||
1118 | want). | ||
1119 | |||
1120 | The reader must also be careful: some CPUs can look through the ``next`` | ||
1121 | pointer to start reading the contents of the next element early, but | ||
1122 | don't realize that the pre-fetched contents are wrong when the ``next`` | ||
1123 | pointer changes underneath them. Once again, there is a | ||
1124 | :c:func:`list_for_each_entry_rcu()` (``include/linux/list.h``) | ||
1125 | to help you. Of course, writers can just use | ||
1126 | :c:func:`list_for_each_entry()`, since there cannot be two | ||
1127 | simultaneous writers. | ||
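
Here is a hedged sketch of that reader/writer split for a hypothetical
``struct item`` list (freeing a removed item still needs
:c:func:`call_rcu()` or :c:func:`synchronize_rcu()`, as described below)::

    #include <linux/rculist.h>
    #include <linux/spinlock.h>

    struct item {
            struct list_head list;
            int key;
    };

    static LIST_HEAD(items);
    static DEFINE_SPINLOCK(items_lock);     /* serializes writers only */

    /* Reader: no lock taken, just an RCU read-side critical section. */
    int item_exists(int key)
    {
            struct item *i;
            int found = 0;

            rcu_read_lock();
            list_for_each_entry_rcu(i, &items, list) {
                    if (i->key == key) {
                            found = 1;
                            break;
                    }
            }
            rcu_read_unlock();
            return found;
    }

    /* Writer: writers still exclude each other with an ordinary lock. */
    void item_add(struct item *new)
    {
            spin_lock(&items_lock);
            list_add_rcu(&new->list, &items);
            spin_unlock(&items_lock);
    }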
1128 | |||
1129 | Our final dilemma is this: when can we actually destroy the removed | ||
1130 | element? Remember, a reader might be stepping through this element in | ||
1131 | the list right now: if we free this element and the ``next`` pointer | ||
1132 | changes, the reader will jump off into garbage and crash. We need to | ||
1133 | wait until we know that all the readers who were traversing the list | ||
1134 | when we deleted the element are finished. We use | ||
1135 | :c:func:`call_rcu()` to register a callback which will actually | ||
1136 | destroy the object once all pre-existing readers are finished. | ||
1137 | Alternatively, :c:func:`synchronize_rcu()` may be used to block | ||
1138 | until all pre-existing readers are finished. | ||
1139 | |||
1140 | But how does Read Copy Update know when the readers are finished? The | ||
1141 | method is this: firstly, the readers always traverse the list inside | ||
1142 | :c:func:`rcu_read_lock()`/:c:func:`rcu_read_unlock()` pairs: | ||
1143 | these simply disable preemption so the reader won't go to sleep while | ||
1144 | reading the list. | ||
1145 | |||
1146 | RCU then waits until every other CPU has slept at least once: since | ||
1147 | readers cannot sleep, we know that any readers which were traversing the | ||
1148 | list during the deletion are finished, and the callback is triggered. | ||
1149 | The real Read Copy Update code is a little more optimized than this, but | ||
1150 | this is the fundamental idea. | ||
1151 | |||
1152 | :: | ||
1153 | |||
1154 | --- cache.c.perobjectlock 2003-12-11 17:15:03.000000000 +1100 | ||
1155 | +++ cache.c.rcupdate 2003-12-11 17:55:14.000000000 +1100 | ||
1156 | @@ -1,15 +1,18 @@ | ||
1157 | #include <linux/list.h> | ||
1158 | #include <linux/slab.h> | ||
1159 | #include <linux/string.h> | ||
1160 | +#include <linux/rcupdate.h> | ||
1161 | #include <linux/mutex.h> | ||
1162 | #include <asm/errno.h> | ||
1163 | |||
1164 | struct object | ||
1165 | { | ||
1166 | - /* These two protected by cache_lock. */ | ||
1167 | + /* This is protected by RCU */ | ||
1168 | struct list_head list; | ||
1169 | int popularity; | ||
1170 | |||
1171 | + struct rcu_head rcu; | ||
1172 | + | ||
1173 | atomic_t refcnt; | ||
1174 | |||
1175 | /* Doesn't change once created. */ | ||
1176 | @@ -40,7 +43,7 @@ | ||
1177 | { | ||
1178 | struct object *i; | ||
1179 | |||
1180 | - list_for_each_entry(i, &cache, list) { | ||
1181 | + list_for_each_entry_rcu(i, &cache, list) { | ||
1182 | if (i->id == id) { | ||
1183 | i->popularity++; | ||
1184 | return i; | ||
1185 | @@ -49,19 +52,25 @@ | ||
1186 | return NULL; | ||
1187 | } | ||
1188 | |||
1189 | +/* Final discard done once we know no readers are looking. */ | ||
1190 | +static void cache_delete_rcu(void *arg) | ||
1191 | +{ | ||
1192 | + object_put(arg); | ||
1193 | +} | ||
1194 | + | ||
1195 | /* Must be holding cache_lock */ | ||
1196 | static void __cache_delete(struct object *obj) | ||
1197 | { | ||
1198 | BUG_ON(!obj); | ||
1199 | - list_del(&obj->list); | ||
1200 | - object_put(obj); | ||
1201 | + list_del_rcu(&obj->list); | ||
1202 | cache_num--; | ||
1203 | + call_rcu(&obj->rcu, cache_delete_rcu); | ||
1204 | } | ||
1205 | |||
1206 | /* Must be holding cache_lock */ | ||
1207 | static void __cache_add(struct object *obj) | ||
1208 | { | ||
1209 | - list_add(&obj->list, &cache); | ||
1210 | + list_add_rcu(&obj->list, &cache); | ||
1211 | if (++cache_num > MAX_CACHE_SIZE) { | ||
1212 | struct object *i, *outcast = NULL; | ||
1213 | list_for_each_entry(i, &cache, list) { | ||
1214 | @@ -104,12 +114,11 @@ | ||
1215 | struct object *cache_find(int id) | ||
1216 | { | ||
1217 | struct object *obj; | ||
1218 | - unsigned long flags; | ||
1219 | |||
1220 | - spin_lock_irqsave(&cache_lock, flags); | ||
1221 | + rcu_read_lock(); | ||
1222 | obj = __cache_find(id); | ||
1223 | if (obj) | ||
1224 | object_get(obj); | ||
1225 | - spin_unlock_irqrestore(&cache_lock, flags); | ||
1226 | + rcu_read_unlock(); | ||
1227 | return obj; | ||
1228 | } | ||
1229 | |||
1230 | Note that the reader will alter the popularity member in | ||
1231 | :c:func:`__cache_find()`, and it now does so without holding a lock. One | ||
1232 | solution would be to make it an ``atomic_t``, but for this usage, we | ||
1233 | don't really care about races: an approximate result is good enough, so | ||
1234 | I didn't change it. | ||
1235 | |||
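If an exact count ever did matter, a minimal sketch of that change
would be to declare the field as ``atomic_t popularity;`` and replace
the increment in :c:func:`__cache_find()` with::

    atomic_inc(&i->popularity);
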
1236 | The result is that :c:func:`cache_find()` requires no | ||
1237 | synchronization with any other functions, so it is almost as fast on SMP as | ||
1238 | it would be on UP. | ||
1239 | |||
1240 | There is a further optimization possible here: remember our original | ||
1241 | cache code, where there were no reference counts and the caller simply | ||
1242 | held the lock whenever using the object? This is still possible: if you | ||
1243 | hold the lock, no one can delete the object, so you don't need to get | ||
1244 | and put the reference count. | ||
1245 | |||
1246 | Now, because the 'read lock' in RCU is simply disabling preemption, a | ||
1247 | caller which always has preemption disabled between calling | ||
1248 | :c:func:`cache_find()` and :c:func:`object_put()` does not | ||
1249 | need to actually get and put the reference count: we could expose | ||
1250 | :c:func:`__cache_find()` by making it non-static, and such | ||
1251 | callers could simply call that. | ||
1252 | |||
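Such a caller might look roughly as follows (a sketch only; it assumes
:c:func:`__cache_find()` has been made non-static, and
``use_object()`` is a made-up helper)::

    struct object *obj;

    rcu_read_lock();        /* Disables preemption. */
    obj = __cache_find(id);
    if (obj)
        use_object(obj);    /* Must not sleep or re-enable preemption. */
    rcu_read_unlock();
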
1253 | The benefit here is that the reference count is not written to: the | ||
1254 | object is not altered in any way, which is much faster on SMP machines | ||
1255 | due to caching. | ||
1256 | |||
1257 | Per-CPU Data | ||
1258 | ------------ | ||
1259 | |||
1260 | Another technique for avoiding locking which is used fairly widely is to | ||
1261 | duplicate information for each CPU. For example, if you wanted to keep a | ||
1262 | count of a common condition, you could use a spin lock and a single | ||
1263 | counter. Nice and simple. | ||
1264 | |||
1265 | If that was too slow (it's usually not, but if you've got a really big | ||
1266 | machine to test on and can show that it is), you could instead use a | ||
1267 | counter for each CPU; then none of them needs an exclusive lock. See | ||
1268 | :c:func:`DEFINE_PER_CPU()`, :c:func:`get_cpu_var()` and | ||
1269 | :c:func:`put_cpu_var()` (``include/linux/percpu.h``). | ||
1270 | |||
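For example, a simple per-CPU event counter might look like this (a
rough sketch; ``my_counter`` and ``count_event()`` are made-up names)::

    #include <linux/percpu.h>

    static DEFINE_PER_CPU(unsigned long, my_counter);

    static void count_event(void)
    {
        /* get_cpu_var() disables preemption and returns this
           CPU's copy; put_cpu_var() re-enables preemption. */
        get_cpu_var(my_counter)++;
        put_cpu_var(my_counter);
    }
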
1271 | Of particular use for simple per-cpu counters is the ``local_t`` type, | ||
1272 | and the :c:func:`cpu_local_inc()` and related functions, which are | ||
1273 | more efficient than simple code on some architectures | ||
1274 | (``include/asm/local.h``). | ||
1275 | |||
1276 | Note that there is no simple, reliable way of getting an exact value of | ||
1277 | such a counter, without introducing more locks. This is not a problem | ||
1278 | for some uses. | ||
1279 | |||
1280 | Data Which Is Mostly Used By An IRQ Handler | ||
1281 | -------------------------------------------- | ||
1282 | |||
1283 | If data is always accessed from within the same IRQ handler, you don't | ||
1284 | need a lock at all: the kernel already guarantees that the irq handler | ||
1285 | will not run simultaneously on multiple CPUs. | ||
1286 | |||
1287 | Manfred Spraul points out that you can still do this, even if the data | ||
1288 | is very occasionally accessed in user context or softirqs/tasklets. The | ||
1289 | irq handler doesn't use a lock, and all other accesses are done like this:: | ||
1290 | |||
1291 | spin_lock(&lock); | ||
1292 | disable_irq(irq); | ||
1293 | ... | ||
1294 | enable_irq(irq); | ||
1295 | spin_unlock(&lock); | ||
1296 | |||
1297 | The :c:func:`disable_irq()` prevents the irq handler from running | ||
1298 | (and waits for it to finish if it's currently running on other CPUs). | ||
1299 | The spinlock prevents any other accesses happening at the same time. | ||
1300 | Naturally, this is slower than just a :c:func:`spin_lock_irq()` | ||
1301 | call, so it only makes sense if this type of access happens extremely | ||
1302 | rarely. | ||
1303 | |||
1304 | What Functions Are Safe To Call From Interrupts? | ||
1305 | ================================================ | ||
1306 | |||
1307 | Many functions in the kernel sleep (ie. call schedule()) directly or | ||
1308 | indirectly: you can never call them while holding a spinlock, or with | ||
1309 | preemption disabled. This also means you need to be in user context: | ||
1310 | calling them from an interrupt is illegal. | ||
1311 | |||
1312 | Some Functions Which Sleep | ||
1313 | -------------------------- | ||
1314 | |||
1315 | The most common ones are listed below, but you usually have to read the | ||
1316 | code to find out if other calls are safe. If everyone else who calls it | ||
1317 | can sleep, you probably need to be able to sleep, too. In particular, | ||
1318 | registration and deregistration functions usually expect to be called | ||
1319 | from user context, and can sleep. | ||
1320 | |||
1321 | - Accesses to userspace: | ||
1322 | |||
1323 | - :c:func:`copy_from_user()` | ||
1324 | |||
1325 | - :c:func:`copy_to_user()` | ||
1326 | |||
1327 | - :c:func:`get_user()` | ||
1328 | |||
1329 | - :c:func:`put_user()` | ||
1330 | |||
1331 | - ``kmalloc(GFP_KERNEL)`` | ||
1332 | |||
1333 | - :c:func:`mutex_lock_interruptible()` and | ||
1334 | :c:func:`mutex_lock()` | ||
1335 | |||
1336 | There is a :c:func:`mutex_trylock()` which does not sleep. | ||
1337 | Still, it must not be used inside interrupt context since its | ||
1338 | implementation is not safe for that. :c:func:`mutex_unlock()` | ||
1339 | will also never sleep. It cannot be used in interrupt context either | ||
1340 | since a mutex must be released by the same task that acquired it. | ||
1341 | |||
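For example, an allocation made while holding a spinlock must not use
``GFP_KERNEL``, which can sleep; a sketch of the non-sleeping
alternative (assuming a hypothetical ``mylock`` and ``obj``)::

    spin_lock(&mylock);
    obj = kmalloc(sizeof(*obj), GFP_ATOMIC);    /* Never sleeps. */
    if (!obj) {
        /* GFP_ATOMIC is more likely to fail: handle that here,
           without sleeping. */
    }
    spin_unlock(&mylock);
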
1342 | Some Functions Which Don't Sleep | ||
1343 | -------------------------------- | ||
1344 | |||
1345 | Some functions are safe to call from any context, or while holding | ||
1346 | almost any lock. | ||
1347 | |||
1348 | - :c:func:`printk()` | ||
1349 | |||
1350 | - :c:func:`kfree()` | ||
1351 | |||
1352 | - :c:func:`add_timer()` and :c:func:`del_timer()` | ||
1353 | |||
1354 | Mutex API reference | ||
1355 | =================== | ||
1356 | |||
1357 | .. kernel-doc:: include/linux/mutex.h | ||
1358 | :internal: | ||
1359 | |||
1360 | .. kernel-doc:: kernel/locking/mutex.c | ||
1361 | :export: | ||
1362 | |||
1363 | Futex API reference | ||
1364 | =================== | ||
1365 | |||
1366 | .. kernel-doc:: kernel/futex.c | ||
1367 | :internal: | ||
1368 | |||
1369 | Further reading | ||
1370 | =============== | ||
1371 | |||
1372 | - ``Documentation/locking/spinlocks.txt``: Linus Torvalds' spinlocking | ||
1373 | tutorial in the kernel sources. | ||
1374 | |||
1375 | - Unix Systems for Modern Architectures: Symmetric Multiprocessing and | ||
1376 | Caching for Kernel Programmers: | ||
1377 | |||
1378 | Curt Schimmel's very good introduction to kernel level locking (not | ||
1379 | written for Linux, but nearly everything applies). The book is | ||
1380 | expensive, but really worth every penny to understand SMP locking. | ||
1381 | [ISBN: 0201633388] | ||
1382 | |||
1383 | Thanks | ||
1384 | ====== | ||
1385 | |||
1386 | Thanks to Telsa Gwynne for DocBooking, neatening and adding style. | ||
1387 | |||
1388 | Thanks to Martin Pool, Philipp Rumpf, Stephen Rothwell, Paul Mackerras, | ||
1389 | Ruedi Aschwanden, Alan Cox, Manfred Spraul, Tim Waugh, Pete Zaitcev, | ||
1390 | James Morris, Robert Love, Paul McKenney, John Ashby for proofreading, | ||
1391 | correcting, flaming, commenting. | ||
1392 | |||
1393 | Thanks to the cabal for having no influence on this document. | ||
1394 | |||
1395 | Glossary | ||
1396 | ======== | ||
1397 | |||
1398 | preemption | ||
1399 | Prior to 2.5, or when ``CONFIG_PREEMPT`` is unset, processes in user | ||
1400 | context inside the kernel would not preempt each other (ie. you had that | ||
1401 | CPU until you gave it up, except for interrupts). With the addition of | ||
1402 | ``CONFIG_PREEMPT`` in 2.5.4, this changed: when in user context, higher | ||
1403 | priority tasks can "cut in": spinlocks were changed to disable | ||
1404 | preemption, even on UP. | ||
1405 | |||
1406 | bh | ||
1407 | Bottom Half: for historical reasons, functions with '_bh' in them often | ||
1408 | now refer to any software interrupt, e.g. :c:func:`spin_lock_bh()` | ||
1409 | blocks any software interrupt on the current CPU. Bottom halves are | ||
1410 | deprecated, and will eventually be replaced by tasklets. Only one bottom | ||
1411 | half will be running at any time. | ||
1412 | |||
1413 | Hardware Interrupt / Hardware IRQ | ||
1414 | Hardware interrupt request. :c:func:`in_irq()` returns true in a | ||
1415 | hardware interrupt handler. | ||
1416 | |||
1417 | Interrupt Context | ||
1418 | Not user context: processing a hardware irq or software irq. Indicated | ||
1419 | by the :c:func:`in_interrupt()` macro returning true. | ||
1420 | |||
1421 | SMP | ||
1422 | Symmetric Multi-Processor: kernels compiled for multiple-CPU machines. | ||
1423 | (``CONFIG_SMP=y``). | ||
1424 | |||
1425 | Software Interrupt / softirq | ||
1426 | Software interrupt handler. :c:func:`in_irq()` returns false; | ||
1427 | :c:func:`in_softirq()` returns true. Tasklets and softirqs both | ||
1428 | fall into the category of 'software interrupts'. | ||
1429 | |||
1430 | Strictly speaking a softirq is one of up to 32 enumerated software | ||
1431 | interrupts which can run on multiple CPUs at once. Sometimes used to | ||
1432 | refer to tasklets as well (ie. all software interrupts). | ||
1433 | |||
1434 | tasklet | ||
1435 | A dynamically-registrable software interrupt, which is guaranteed to | ||
1436 | only run on one CPU at a time. | ||
1437 | |||
1438 | timer | ||
1439 | A dynamically-registrable software interrupt, which is run at (or close | ||
1440 | to) a given time. When running, it is just like a tasklet (in fact, they | ||
1441 | are called from the TIMER_SOFTIRQ). | ||
1442 | |||
1443 | UP | ||
1444 | Uni-Processor: Non-SMP. (``CONFIG_SMP=n``). | ||
1445 | |||
1446 | User Context | ||
1447 | The kernel executing on behalf of a particular process (ie. a system | ||
1448 | call or trap) or kernel thread. You can tell which process with the | ||
1449 | ``current`` macro. Not to be confused with userspace. Can be | ||
1450 | interrupted by software or hardware interrupts. | ||
1451 | |||
1452 | Userspace | ||
1453 | A process executing its own code outside the kernel. | ||