author	Mauro Carvalho Chehab <mchehab@s-opensource.com>	2017-05-11 08:55:30 -0400
committer	Mauro Carvalho Chehab <mchehab@s-opensource.com>	2017-05-16 07:00:58 -0400
commit	e548cdeffcd8ab8d3551a539890e682c08ab7828 (patch)
tree	6749befde851c8656e6c240073f0a283b8280b76
parent	dca1e58e3f1c82a840abaafb9328f84ae69a9926 (diff)
docs-rst: convert kernel-locking to ReST
Use pandoc to convert documentation to ReST by calling the Documentation/sphinx/tmplcvt script.

- Manually adjust tables that got broken by the conversion

Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
-rw-r--r--	Documentation/DocBook/Makefile	1
-rw-r--r--	Documentation/DocBook/kernel-locking.tmpl	2151
-rw-r--r--	Documentation/kernel-hacking/index.rst	1
-rw-r--r--	Documentation/kernel-hacking/locking.rst	1453
4 files changed, 1454 insertions, 2152 deletions
diff --git a/Documentation/DocBook/Makefile b/Documentation/DocBook/Makefile
index 7d7482b5ad92..9df94f7c2003 100644
--- a/Documentation/DocBook/Makefile
+++ b/Documentation/DocBook/Makefile
@@ -7,7 +7,6 @@
 # list of DOCBOOKS.
 
 DOCBOOKS := z8530book.xml \
-	    kernel-locking.xml \
 	    networking.xml \
 	    filesystems.xml lsm.xml kgdb.xml \
 	    libata.xml mtdnand.xml librs.xml rapidio.xml \
diff --git a/Documentation/DocBook/kernel-locking.tmpl b/Documentation/DocBook/kernel-locking.tmpl
deleted file mode 100644
index 7c9cc4846cb6..000000000000
--- a/Documentation/DocBook/kernel-locking.tmpl
+++ /dev/null
@@ -1,2151 +0,0 @@
1<?xml version="1.0" encoding="UTF-8"?>
2<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN"
3 "http://www.oasis-open.org/docbook/xml/4.1.2/docbookx.dtd" []>
4
5<book id="LKLockingGuide">
6 <bookinfo>
7 <title>Unreliable Guide To Locking</title>
8
9 <authorgroup>
10 <author>
11 <firstname>Rusty</firstname>
12 <surname>Russell</surname>
13 <affiliation>
14 <address>
15 <email>rusty@rustcorp.com.au</email>
16 </address>
17 </affiliation>
18 </author>
19 </authorgroup>
20
21 <copyright>
22 <year>2003</year>
23 <holder>Rusty Russell</holder>
24 </copyright>
25
26 <legalnotice>
27 <para>
28 This documentation is free software; you can redistribute
29 it and/or modify it under the terms of the GNU General Public
30 License as published by the Free Software Foundation; either
31 version 2 of the License, or (at your option) any later
32 version.
33 </para>
34
35 <para>
36 This program is distributed in the hope that it will be
37 useful, but WITHOUT ANY WARRANTY; without even the implied
38 warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
39 See the GNU General Public License for more details.
40 </para>
41
42 <para>
43 You should have received a copy of the GNU General Public
44 License along with this program; if not, write to the Free
45 Software Foundation, Inc., 59 Temple Place, Suite 330, Boston,
46 MA 02111-1307 USA
47 </para>
48
49 <para>
50 For more details see the file COPYING in the source
51 distribution of Linux.
52 </para>
53 </legalnotice>
54 </bookinfo>
55
56 <toc></toc>
57 <chapter id="intro">
58 <title>Introduction</title>
59 <para>
 60 Welcome to Rusty's Remarkably Unreliable Guide to Kernel
61 Locking issues. This document describes the locking systems in
62 the Linux Kernel in 2.6.
63 </para>
64 <para>
65 With the wide availability of HyperThreading, and <firstterm
 66 linkend="gloss-preemption">preemption</firstterm> in the Linux
67 Kernel, everyone hacking on the kernel needs to know the
68 fundamentals of concurrency and locking for
69 <firstterm linkend="gloss-smp"><acronym>SMP</acronym></firstterm>.
70 </para>
71 </chapter>
72
73 <chapter id="races">
74 <title>The Problem With Concurrency</title>
75 <para>
76 (Skip this if you know what a Race Condition is).
77 </para>
78 <para>
79 In a normal program, you can increment a counter like so:
80 </para>
81 <programlisting>
82 very_important_count++;
83 </programlisting>
84
85 <para>
 86 This is what you would expect to happen:
87 </para>
88
89 <table>
90 <title>Expected Results</title>
91
92 <tgroup cols="2" align="left">
93
94 <thead>
95 <row>
96 <entry>Instance 1</entry>
97 <entry>Instance 2</entry>
98 </row>
99 </thead>
100
101 <tbody>
102 <row>
103 <entry>read very_important_count (5)</entry>
104 <entry></entry>
105 </row>
106 <row>
107 <entry>add 1 (6)</entry>
108 <entry></entry>
109 </row>
110 <row>
111 <entry>write very_important_count (6)</entry>
112 <entry></entry>
113 </row>
114 <row>
115 <entry></entry>
116 <entry>read very_important_count (6)</entry>
117 </row>
118 <row>
119 <entry></entry>
120 <entry>add 1 (7)</entry>
121 </row>
122 <row>
123 <entry></entry>
124 <entry>write very_important_count (7)</entry>
125 </row>
126 </tbody>
127
128 </tgroup>
129 </table>
130
131 <para>
132 This is what might happen:
133 </para>
134
135 <table>
136 <title>Possible Results</title>
137
138 <tgroup cols="2" align="left">
139 <thead>
140 <row>
141 <entry>Instance 1</entry>
142 <entry>Instance 2</entry>
143 </row>
144 </thead>
145
146 <tbody>
147 <row>
148 <entry>read very_important_count (5)</entry>
149 <entry></entry>
150 </row>
151 <row>
152 <entry></entry>
153 <entry>read very_important_count (5)</entry>
154 </row>
155 <row>
156 <entry>add 1 (6)</entry>
157 <entry></entry>
158 </row>
159 <row>
160 <entry></entry>
161 <entry>add 1 (6)</entry>
162 </row>
163 <row>
164 <entry>write very_important_count (6)</entry>
165 <entry></entry>
166 </row>
167 <row>
168 <entry></entry>
169 <entry>write very_important_count (6)</entry>
170 </row>
171 </tbody>
172 </tgroup>
173 </table>
174
175 <sect1 id="race-condition">
176 <title>Race Conditions and Critical Regions</title>
177 <para>
178 This overlap, where the result depends on the
179 relative timing of multiple tasks, is called a <firstterm>race condition</firstterm>.
180 The piece of code containing the concurrency issue is called a
 181 <firstterm>critical region</firstterm>. Especially since Linux started running
 182 on SMP machines, race conditions have become one of the major issues in kernel
183 design and implementation.
184 </para>
185 <para>
186 Preemption can have the same effect, even if there is only one
187 CPU: by preempting one task during the critical region, we have
188 exactly the same race condition. In this case the thread which
189 preempts might run the critical region itself.
190 </para>
191 <para>
192 The solution is to recognize when these simultaneous accesses
193 occur, and use locks to make sure that only one instance can
194 enter the critical region at any time. There are many
195 friendly primitives in the Linux kernel to help you do this.
196 And then there are the unfriendly primitives, but I'll pretend
197 they don't exist.
198 </para>
199 </sect1>
200 </chapter>
201
202 <chapter id="locks">
203 <title>Locking in the Linux Kernel</title>
204
205 <para>
206 If I could give you one piece of advice: never sleep with anyone
207 crazier than yourself. But if I had to give you advice on
208 locking: <emphasis>keep it simple</emphasis>.
209 </para>
210
211 <para>
212 Be reluctant to introduce new locks.
213 </para>
214
215 <para>
216 Strangely enough, this last one is the exact reverse of my advice when
217 you <emphasis>have</emphasis> slept with someone crazier than yourself.
218 And you should think about getting a big dog.
219 </para>
220
221 <sect1 id="lock-intro">
222 <title>Two Main Types of Kernel Locks: Spinlocks and Mutexes</title>
223
224 <para>
225 There are two main types of kernel locks. The fundamental type
226 is the spinlock
227 (<filename class="headerfile">include/asm/spinlock.h</filename>),
228 which is a very simple single-holder lock: if you can't get the
229 spinlock, you keep trying (spinning) until you can. Spinlocks are
230 very small and fast, and can be used anywhere.
231 </para>
232 <para>
233 The second type is a mutex
234 (<filename class="headerfile">include/linux/mutex.h</filename>): it
235 is like a spinlock, but you may block holding a mutex.
236 If you can't lock a mutex, your task will suspend itself, and be woken
237 up when the mutex is released. This means the CPU can do something
238 else while you are waiting. There are many cases when you simply
239 can't sleep (see <xref linkend="sleeping-things"/>), and so have to
240 use a spinlock instead.
241 </para>
242 <para>
243 Neither type of lock is recursive: see
244 <xref linkend="deadlock"/>.
245 </para>
246 </sect1>
247
248 <sect1 id="uniprocessor">
249 <title>Locks and Uniprocessor Kernels</title>
250
251 <para>
252 For kernels compiled without <symbol>CONFIG_SMP</symbol>, and
 253 without <symbol>CONFIG_PREEMPT</symbol>, spinlocks do not exist at
254 all. This is an excellent design decision: when no-one else can
255 run at the same time, there is no reason to have a lock.
256 </para>
257
258 <para>
259 If the kernel is compiled without <symbol>CONFIG_SMP</symbol>,
260 but <symbol>CONFIG_PREEMPT</symbol> is set, then spinlocks
261 simply disable preemption, which is sufficient to prevent any
262 races. For most purposes, we can think of preemption as
263 equivalent to SMP, and not worry about it separately.
264 </para>
265
266 <para>
267 You should always test your locking code with <symbol>CONFIG_SMP</symbol>
268 and <symbol>CONFIG_PREEMPT</symbol> enabled, even if you don't have an SMP test box, because it
269 will still catch some kinds of locking bugs.
270 </para>
271
272 <para>
273 Mutexes still exist, because they are required for
274 synchronization between <firstterm linkend="gloss-usercontext">user
275 contexts</firstterm>, as we will see below.
276 </para>
277 </sect1>
278
279 <sect1 id="usercontextlocking">
280 <title>Locking Only In User Context</title>
281
282 <para>
283 If you have a data structure which is only ever accessed from
284 user context, then you can use a simple mutex
285 (<filename>include/linux/mutex.h</filename>) to protect it. This
286 is the most trivial case: you initialize the mutex. Then you can
287 call <function>mutex_lock_interruptible()</function> to grab the mutex,
288 and <function>mutex_unlock()</function> to release it. There is also a
289 <function>mutex_lock()</function>, which should be avoided, because it
290 will not return if a signal is received.
291 </para>
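      <para>
	As a hedged illustration only (the variable and function names
	here are hypothetical, not part of any kernel API), the pattern
	looks like this:
      </para>
      <programlisting>
static DEFINE_MUTEX(my_mutex);  /* statically initialized mutex */
static int protected_value;     /* only touched while holding my_mutex */

int set_value(int v)
{
        /* Returns nonzero if a signal arrived while we were waiting. */
        if (mutex_lock_interruptible(&amp;my_mutex))
                return -EINTR;
        protected_value = v;
        mutex_unlock(&amp;my_mutex);
        return 0;
}
      </programlisting>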
292
293 <para>
294 Example: <filename>net/netfilter/nf_sockopt.c</filename> allows
295 registration of new <function>setsockopt()</function> and
296 <function>getsockopt()</function> calls, with
297 <function>nf_register_sockopt()</function>. Registration and
298 de-registration are only done on module load and unload (and boot
299 time, where there is no concurrency), and the list of registrations
300 is only consulted for an unknown <function>setsockopt()</function>
301 or <function>getsockopt()</function> system call. The
302 <varname>nf_sockopt_mutex</varname> is perfect to protect this,
303 especially since the setsockopt and getsockopt calls may well
304 sleep.
305 </para>
306 </sect1>
307
308 <sect1 id="lock-user-bh">
309 <title>Locking Between User Context and Softirqs</title>
310
311 <para>
312 If a <firstterm linkend="gloss-softirq">softirq</firstterm> shares
313 data with user context, you have two problems. Firstly, the current
314 user context can be interrupted by a softirq, and secondly, the
315 critical region could be entered from another CPU. This is where
316 <function>spin_lock_bh()</function>
317 (<filename class="headerfile">include/linux/spinlock.h</filename>) is
318 used. It disables softirqs on that CPU, then grabs the lock.
319 <function>spin_unlock_bh()</function> does the reverse. (The
320 '_bh' suffix is a historical reference to "Bottom Halves", the
321 old name for software interrupts. It should really be
 322 called 'spin_lock_softirq()' in a perfect world).
323 </para>
324
325 <para>
326 Note that you can also use <function>spin_lock_irq()</function>
327 or <function>spin_lock_irqsave()</function> here, which stop
328 hardware interrupts as well: see <xref linkend="hardirq-context"/>.
329 </para>
330
331 <para>
332 This works perfectly for <firstterm linkend="gloss-up"><acronym>UP
333 </acronym></firstterm> as well: the spin lock vanishes, and this macro
334 simply becomes <function>local_bh_disable()</function>
335 (<filename class="headerfile">include/linux/interrupt.h</filename>), which
336 protects you from the softirq being run.
337 </para>
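      <para>
	A minimal sketch of the pattern, assuming a counter shared
	between user context and a softirq (the names are hypothetical):
      </para>
      <programlisting>
static DEFINE_SPINLOCK(counter_lock);
static unsigned long shared_counter;   /* also touched by a softirq */

void bump_counter(void)                /* called from user context */
{
        spin_lock_bh(&amp;counter_lock);   /* blocks softirqs on this CPU */
        shared_counter++;
        spin_unlock_bh(&amp;counter_lock);
}
      </programlisting>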
338 </sect1>
339
340 <sect1 id="lock-user-tasklet">
341 <title>Locking Between User Context and Tasklets</title>
342
343 <para>
344 This is exactly the same as above, because <firstterm
345 linkend="gloss-tasklet">tasklets</firstterm> are actually run
346 from a softirq.
347 </para>
348 </sect1>
349
350 <sect1 id="lock-user-timers">
351 <title>Locking Between User Context and Timers</title>
352
353 <para>
354 This, too, is exactly the same as above, because <firstterm
355 linkend="gloss-timers">timers</firstterm> are actually run from
356 a softirq. From a locking point of view, tasklets and timers
357 are identical.
358 </para>
359 </sect1>
360
361 <sect1 id="lock-tasklets">
362 <title>Locking Between Tasklets/Timers</title>
363
364 <para>
365 Sometimes a tasklet or timer might want to share data with
366 another tasklet or timer.
367 </para>
368
369 <sect2 id="lock-tasklets-same">
370 <title>The Same Tasklet/Timer</title>
371 <para>
372 Since a tasklet is never run on two CPUs at once, you don't
373 need to worry about your tasklet being reentrant (running
374 twice at once), even on SMP.
375 </para>
376 </sect2>
377
378 <sect2 id="lock-tasklets-different">
379 <title>Different Tasklets/Timers</title>
380 <para>
381 If another tasklet/timer wants
 382 to share data with your tasklet or timer, you will both need to use
383 <function>spin_lock()</function> and
384 <function>spin_unlock()</function> calls.
385 <function>spin_lock_bh()</function> is
386 unnecessary here, as you are already in a tasklet, and
387 none will be run on the same CPU.
388 </para>
389 </sect2>
390 </sect1>
391
392 <sect1 id="lock-softirqs">
393 <title>Locking Between Softirqs</title>
394
395 <para>
396 Often a softirq might
397 want to share data with itself or a tasklet/timer.
398 </para>
399
400 <sect2 id="lock-softirqs-same">
401 <title>The Same Softirq</title>
402
403 <para>
404 The same softirq can run on the other CPUs: you can use a
405 per-CPU array (see <xref linkend="per-cpu"/>) for better
406 performance. If you're going so far as to use a softirq,
407 you probably care about scalable performance enough
408 to justify the extra complexity.
409 </para>
410
411 <para>
412 You'll need to use <function>spin_lock()</function> and
413 <function>spin_unlock()</function> for shared data.
414 </para>
415 </sect2>
416
417 <sect2 id="lock-softirqs-different">
418 <title>Different Softirqs</title>
419
420 <para>
421 You'll need to use <function>spin_lock()</function> and
422 <function>spin_unlock()</function> for shared data, whether it
423 be a timer, tasklet, different softirq or the same or another
424 softirq: any of them could be running on a different CPU.
425 </para>
426 </sect2>
427 </sect1>
428 </chapter>
429
430 <chapter id="hardirq-context">
431 <title>Hard IRQ Context</title>
432
433 <para>
434 Hardware interrupts usually communicate with a
435 tasklet or softirq. Frequently this involves putting work in a
436 queue, which the softirq will take out.
437 </para>
438
439 <sect1 id="hardirq-softirq">
440 <title>Locking Between Hard IRQ and Softirqs/Tasklets</title>
441
442 <para>
443 If a hardware irq handler shares data with a softirq, you have
444 two concerns. Firstly, the softirq processing can be
445 interrupted by a hardware interrupt, and secondly, the
446 critical region could be entered by a hardware interrupt on
447 another CPU. This is where <function>spin_lock_irq()</function> is
 448 used. It is defined to disable interrupts on that CPU, then grab
449 the lock. <function>spin_unlock_irq()</function> does the reverse.
450 </para>
451
452 <para>
 453 The irq handler does not need to use
454 <function>spin_lock_irq()</function>, because the softirq cannot
455 run while the irq handler is running: it can use
456 <function>spin_lock()</function>, which is slightly faster. The
457 only exception would be if a different hardware irq handler uses
458 the same lock: <function>spin_lock_irq()</function> will stop
459 that from interrupting us.
460 </para>
461
462 <para>
463 This works perfectly for UP as well: the spin lock vanishes,
464 and this macro simply becomes <function>local_irq_disable()</function>
465 (<filename class="headerfile">include/asm/smp.h</filename>), which
466 protects you from the softirq/tasklet/BH being run.
467 </para>
468
469 <para>
470 <function>spin_lock_irqsave()</function>
471 (<filename>include/linux/spinlock.h</filename>) is a variant
472 which saves whether interrupts were on or off in a flags word,
473 which is passed to <function>spin_unlock_irqrestore()</function>. This
 474 means that the same code can be used inside a hard irq handler (where
475 interrupts are already off) and in softirqs (where the irq
476 disabling is required).
477 </para>
478
479 <para>
480 Note that softirqs (and hence tasklets and timers) are run on
481 return from hardware interrupts, so
482 <function>spin_lock_irq()</function> also stops these. In that
483 sense, <function>spin_lock_irqsave()</function> is the most
484 general and powerful locking function.
485 </para>
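    <para>
	A small sketch of the save/restore idiom (hypothetical names;
	the cache example later in this document uses the same calls):
    </para>
    <programlisting>
static DEFINE_SPINLOCK(dev_lock);

void touch_device_state(void)
{
        unsigned long flags;

        spin_lock_irqsave(&amp;dev_lock, flags);   /* irq state saved in flags */
        /* ... modify data shared with the irq handler ... */
        spin_unlock_irqrestore(&amp;dev_lock, flags);
}
    </programlisting>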
486
487 </sect1>
488 <sect1 id="hardirq-hardirq">
489 <title>Locking Between Two Hard IRQ Handlers</title>
490 <para>
491 It is rare to have to share data between two IRQ handlers, but
492 if you do, <function>spin_lock_irqsave()</function> should be
493 used: it is architecture-specific whether all interrupts are
494 disabled inside irq handlers themselves.
495 </para>
496 </sect1>
497
498 </chapter>
499
500 <chapter id="cheatsheet">
501 <title>Cheat Sheet For Locking</title>
502 <para>
503 Pete Zaitcev gives the following summary:
504 </para>
505 <itemizedlist>
506 <listitem>
507 <para>
508 If you are in a process context (any syscall) and want to
 509 lock other processes out, use a mutex. You can take a mutex
 510 and sleep (<function>copy_from_user()</function> or
511 <function>kmalloc(x,GFP_KERNEL)</function>).
512 </para>
513 </listitem>
514 <listitem>
515 <para>
516 Otherwise (== data can be touched in an interrupt), use
517 <function>spin_lock_irqsave()</function> and
518 <function>spin_unlock_irqrestore()</function>.
519 </para>
520 </listitem>
521 <listitem>
522 <para>
 523 Avoid holding a spinlock for more than 5 lines of code and
524 across any function call (except accessors like
525 <function>readb</function>).
526 </para>
527 </listitem>
528 </itemizedlist>
529
 530 <sect1 id="minimum-lock-requirements">
531 <title>Table of Minimum Requirements</title>
532
533 <para> The following table lists the <emphasis>minimum</emphasis>
534 locking requirements between various contexts. In some cases,
535 the same context can only be running on one CPU at a time, so
536 no locking is required for that context (eg. a particular
 537 thread can only run on one CPU at a time, but if it needs
 538 to share data with another thread, locking is required).
539 </para>
540 <para>
541 Remember the advice above: you can always use
542 <function>spin_lock_irqsave()</function>, which is a superset
543 of all other spinlock primitives.
544 </para>
545
546 <table>
547<title>Table of Locking Requirements</title>
548<tgroup cols="11">
549<tbody>
550
551<row>
552<entry></entry>
553<entry>IRQ Handler A</entry>
554<entry>IRQ Handler B</entry>
555<entry>Softirq A</entry>
556<entry>Softirq B</entry>
557<entry>Tasklet A</entry>
558<entry>Tasklet B</entry>
559<entry>Timer A</entry>
560<entry>Timer B</entry>
561<entry>User Context A</entry>
562<entry>User Context B</entry>
563</row>
564
565<row>
566<entry>IRQ Handler A</entry>
567<entry>None</entry>
568</row>
569
570<row>
571<entry>IRQ Handler B</entry>
572<entry>SLIS</entry>
573<entry>None</entry>
574</row>
575
576<row>
577<entry>Softirq A</entry>
578<entry>SLI</entry>
579<entry>SLI</entry>
580<entry>SL</entry>
581</row>
582
583<row>
584<entry>Softirq B</entry>
585<entry>SLI</entry>
586<entry>SLI</entry>
587<entry>SL</entry>
588<entry>SL</entry>
589</row>
590
591<row>
592<entry>Tasklet A</entry>
593<entry>SLI</entry>
594<entry>SLI</entry>
595<entry>SL</entry>
596<entry>SL</entry>
597<entry>None</entry>
598</row>
599
600<row>
601<entry>Tasklet B</entry>
602<entry>SLI</entry>
603<entry>SLI</entry>
604<entry>SL</entry>
605<entry>SL</entry>
606<entry>SL</entry>
607<entry>None</entry>
608</row>
609
610<row>
611<entry>Timer A</entry>
612<entry>SLI</entry>
613<entry>SLI</entry>
614<entry>SL</entry>
615<entry>SL</entry>
616<entry>SL</entry>
617<entry>SL</entry>
618<entry>None</entry>
619</row>
620
621<row>
622<entry>Timer B</entry>
623<entry>SLI</entry>
624<entry>SLI</entry>
625<entry>SL</entry>
626<entry>SL</entry>
627<entry>SL</entry>
628<entry>SL</entry>
629<entry>SL</entry>
630<entry>None</entry>
631</row>
632
633<row>
634<entry>User Context A</entry>
635<entry>SLI</entry>
636<entry>SLI</entry>
637<entry>SLBH</entry>
638<entry>SLBH</entry>
639<entry>SLBH</entry>
640<entry>SLBH</entry>
641<entry>SLBH</entry>
642<entry>SLBH</entry>
643<entry>None</entry>
644</row>
645
646<row>
647<entry>User Context B</entry>
648<entry>SLI</entry>
649<entry>SLI</entry>
650<entry>SLBH</entry>
651<entry>SLBH</entry>
652<entry>SLBH</entry>
653<entry>SLBH</entry>
654<entry>SLBH</entry>
655<entry>SLBH</entry>
656<entry>MLI</entry>
657<entry>None</entry>
658</row>
659
660</tbody>
661</tgroup>
662</table>
663
664 <table>
665<title>Legend for Locking Requirements Table</title>
666<tgroup cols="2">
667<tbody>
668
669<row>
670<entry>SLIS</entry>
671<entry>spin_lock_irqsave</entry>
672</row>
673<row>
674<entry>SLI</entry>
675<entry>spin_lock_irq</entry>
676</row>
677<row>
678<entry>SL</entry>
679<entry>spin_lock</entry>
680</row>
681<row>
682<entry>SLBH</entry>
683<entry>spin_lock_bh</entry>
684</row>
685<row>
686<entry>MLI</entry>
687<entry>mutex_lock_interruptible</entry>
688</row>
689
690</tbody>
691</tgroup>
692</table>
693
694</sect1>
695</chapter>
696
697<chapter id="trylock-functions">
698 <title>The trylock Functions</title>
699 <para>
 700 There are functions that try to acquire a lock only once and immediately
 701 return a value indicating success or failure to acquire the lock.
 702 They can be used if you do not need access to the data protected by the lock
 703 while some other thread is holding it. You should acquire the lock
 704 later if you then need access to the protected data.
705 </para>
706
707 <para>
708 <function>spin_trylock()</function> does not spin but returns non-zero if
709 it acquires the spinlock on the first try or 0 if not. This function can
710 be used in all contexts like <function>spin_lock</function>: you must have
 711 disabled the contexts that might interrupt you and acquire the same spinlock.
712 </para>
713
714 <para>
715 <function>mutex_trylock()</function> does not suspend your task
716 but returns non-zero if it could lock the mutex on the first try
717 or 0 if not. This function cannot be safely used in hardware or software
718 interrupt contexts despite not sleeping.
719 </para>
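 <para>
 A hedged sketch of the trylock pattern (hypothetical names):
 rather than waiting, do something else when the lock is contended.
 </para>
 <programlisting>
static DEFINE_SPINLOCK(stats_lock);

void update_stats_opportunistically(void)
{
        if (!spin_trylock(&amp;stats_lock))
                return;         /* contended: skip this update */
        /* ... update the statistics ... */
        spin_unlock(&amp;stats_lock);
}
 </programlisting>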
720</chapter>
721
722 <chapter id="Examples">
723 <title>Common Examples</title>
724 <para>
725Let's step through a simple example: a cache of number to name
726mappings. The cache keeps a count of how often each of the objects is
727used, and when it gets full, throws out the least used one.
728
729 </para>
730
731 <sect1 id="examples-usercontext">
732 <title>All In User Context</title>
733 <para>
734For our first example, we assume that all operations are in user
735context (ie. from system calls), so we can sleep. This means we can
736use a mutex to protect the cache and all the objects within
737it. Here's the code:
738 </para>
739
740 <programlisting>
741#include &lt;linux/list.h&gt;
742#include &lt;linux/slab.h&gt;
743#include &lt;linux/string.h&gt;
744#include &lt;linux/mutex.h&gt;
745#include &lt;asm/errno.h&gt;
746
747struct object
748{
749 struct list_head list;
750 int id;
751 char name[32];
752 int popularity;
753};
754
755/* Protects the cache, cache_num, and the objects within it */
756static DEFINE_MUTEX(cache_lock);
757static LIST_HEAD(cache);
758static unsigned int cache_num = 0;
759#define MAX_CACHE_SIZE 10
760
761/* Must be holding cache_lock */
762static struct object *__cache_find(int id)
763{
764 struct object *i;
765
766 list_for_each_entry(i, &amp;cache, list)
767 if (i-&gt;id == id) {
768 i-&gt;popularity++;
769 return i;
770 }
771 return NULL;
772}
773
774/* Must be holding cache_lock */
775static void __cache_delete(struct object *obj)
776{
777 BUG_ON(!obj);
778 list_del(&amp;obj-&gt;list);
779 kfree(obj);
780 cache_num--;
781}
782
783/* Must be holding cache_lock */
784static void __cache_add(struct object *obj)
785{
786 list_add(&amp;obj-&gt;list, &amp;cache);
787 if (++cache_num > MAX_CACHE_SIZE) {
788 struct object *i, *outcast = NULL;
789 list_for_each_entry(i, &amp;cache, list) {
790 if (!outcast || i-&gt;popularity &lt; outcast-&gt;popularity)
791 outcast = i;
792 }
793 __cache_delete(outcast);
794 }
795}
796
797int cache_add(int id, const char *name)
798{
799 struct object *obj;
800
801 if ((obj = kmalloc(sizeof(*obj), GFP_KERNEL)) == NULL)
802 return -ENOMEM;
803
804 strlcpy(obj-&gt;name, name, sizeof(obj-&gt;name));
805 obj-&gt;id = id;
806 obj-&gt;popularity = 0;
807
808 mutex_lock(&amp;cache_lock);
809 __cache_add(obj);
810 mutex_unlock(&amp;cache_lock);
811 return 0;
812}
813
814void cache_delete(int id)
815{
816 mutex_lock(&amp;cache_lock);
817 __cache_delete(__cache_find(id));
818 mutex_unlock(&amp;cache_lock);
819}
820
821int cache_find(int id, char *name)
822{
823 struct object *obj;
824 int ret = -ENOENT;
825
826 mutex_lock(&amp;cache_lock);
827 obj = __cache_find(id);
828 if (obj) {
829 ret = 0;
830 strcpy(name, obj-&gt;name);
831 }
832 mutex_unlock(&amp;cache_lock);
833 return ret;
834}
835</programlisting>
836
837 <para>
838Note that we always make sure we have the cache_lock when we add,
839delete, or look up the cache: both the cache infrastructure itself and
840the contents of the objects are protected by the lock. In this case
841it's easy, since we copy the data for the user, and never let them
842access the objects directly.
843 </para>
844 <para>
845There is a slight (and common) optimization here: in
846<function>cache_add</function> we set up the fields of the object
847before grabbing the lock. This is safe, as no-one else can access it
848until we put it in cache.
849 </para>
850 </sect1>
851
852 <sect1 id="examples-interrupt">
853 <title>Accessing From Interrupt Context</title>
854 <para>
855Now consider the case where <function>cache_find</function> can be
856called from interrupt context: either a hardware interrupt or a
 857 softirq. An example would be a timer which deletes objects from the
858cache.
859 </para>
860 <para>
861The change is shown below, in standard patch format: the
862<symbol>-</symbol> are lines which are taken away, and the
863<symbol>+</symbol> are lines which are added.
864 </para>
865<programlisting>
866--- cache.c.usercontext 2003-12-09 13:58:54.000000000 +1100
867+++ cache.c.interrupt 2003-12-09 14:07:49.000000000 +1100
868@@ -12,7 +12,7 @@
869 int popularity;
870 };
871
872-static DEFINE_MUTEX(cache_lock);
873+static DEFINE_SPINLOCK(cache_lock);
874 static LIST_HEAD(cache);
875 static unsigned int cache_num = 0;
876 #define MAX_CACHE_SIZE 10
877@@ -55,6 +55,7 @@
878 int cache_add(int id, const char *name)
879 {
880 struct object *obj;
881+ unsigned long flags;
882
883 if ((obj = kmalloc(sizeof(*obj), GFP_KERNEL)) == NULL)
884 return -ENOMEM;
885@@ -63,30 +64,33 @@
886 obj-&gt;id = id;
887 obj-&gt;popularity = 0;
888
889- mutex_lock(&amp;cache_lock);
890+ spin_lock_irqsave(&amp;cache_lock, flags);
891 __cache_add(obj);
892- mutex_unlock(&amp;cache_lock);
893+ spin_unlock_irqrestore(&amp;cache_lock, flags);
894 return 0;
895 }
896
897 void cache_delete(int id)
898 {
899- mutex_lock(&amp;cache_lock);
900+ unsigned long flags;
901+
902+ spin_lock_irqsave(&amp;cache_lock, flags);
903 __cache_delete(__cache_find(id));
904- mutex_unlock(&amp;cache_lock);
905+ spin_unlock_irqrestore(&amp;cache_lock, flags);
906 }
907
908 int cache_find(int id, char *name)
909 {
910 struct object *obj;
911 int ret = -ENOENT;
912+ unsigned long flags;
913
914- mutex_lock(&amp;cache_lock);
915+ spin_lock_irqsave(&amp;cache_lock, flags);
916 obj = __cache_find(id);
917 if (obj) {
918 ret = 0;
919 strcpy(name, obj-&gt;name);
920 }
921- mutex_unlock(&amp;cache_lock);
922+ spin_unlock_irqrestore(&amp;cache_lock, flags);
923 return ret;
924 }
925</programlisting>
926
927 <para>
 928 Note that <function>spin_lock_irqsave</function> will turn off
 929 interrupts if they are on, and does nothing otherwise (if we are already
 930 in an interrupt handler); hence these functions are safe to call from
 931 any context.
932 </para>
933 <para>
934Unfortunately, <function>cache_add</function> calls
935<function>kmalloc</function> with the <symbol>GFP_KERNEL</symbol>
936flag, which is only legal in user context. I have assumed that
937<function>cache_add</function> is still only called in user context,
938otherwise this should become a parameter to
939<function>cache_add</function>.
940 </para>
941 </sect1>
942 <sect1 id="examples-refcnt">
943 <title>Exposing Objects Outside This File</title>
944 <para>
945If our objects contained more information, it might not be sufficient
946to copy the information in and out: other parts of the code might want
947to keep pointers to these objects, for example, rather than looking up
948the id every time. This produces two problems.
949 </para>
950 <para>
951The first problem is that we use the <symbol>cache_lock</symbol> to
952protect objects: we'd need to make this non-static so the rest of the
953code can use it. This makes locking trickier, as it is no longer all
954in one place.
955 </para>
956 <para>
957The second problem is the lifetime problem: if another structure keeps
958a pointer to an object, it presumably expects that pointer to remain
959valid. Unfortunately, this is only guaranteed while you hold the
960lock, otherwise someone might call <function>cache_delete</function>
961and even worse, add another object, re-using the same address.
962 </para>
963 <para>
964As there is only one lock, you can't hold it forever: no-one else would
965get any work done.
966 </para>
967 <para>
968The solution to this problem is to use a reference count: everyone who
969has a pointer to the object increases it when they first get the
970object, and drops the reference count when they're finished with it.
971Whoever drops it to zero knows it is unused, and can actually delete it.
972 </para>
973 <para>
974Here is the code:
975 </para>
976
977<programlisting>
978--- cache.c.interrupt 2003-12-09 14:25:43.000000000 +1100
979+++ cache.c.refcnt 2003-12-09 14:33:05.000000000 +1100
980@@ -7,6 +7,7 @@
981 struct object
982 {
983 struct list_head list;
984+ unsigned int refcnt;
985 int id;
986 char name[32];
987 int popularity;
988@@ -17,6 +18,35 @@
989 static unsigned int cache_num = 0;
990 #define MAX_CACHE_SIZE 10
991
992+static void __object_put(struct object *obj)
993+{
994+ if (--obj-&gt;refcnt == 0)
995+ kfree(obj);
996+}
997+
998+static void __object_get(struct object *obj)
999+{
1000+ obj-&gt;refcnt++;
1001+}
1002+
1003+void object_put(struct object *obj)
1004+{
1005+ unsigned long flags;
1006+
1007+ spin_lock_irqsave(&amp;cache_lock, flags);
1008+ __object_put(obj);
1009+ spin_unlock_irqrestore(&amp;cache_lock, flags);
1010+}
1011+
1012+void object_get(struct object *obj)
1013+{
1014+ unsigned long flags;
1015+
1016+ spin_lock_irqsave(&amp;cache_lock, flags);
1017+ __object_get(obj);
1018+ spin_unlock_irqrestore(&amp;cache_lock, flags);
1019+}
1020+
1021 /* Must be holding cache_lock */
1022 static struct object *__cache_find(int id)
1023 {
1024@@ -35,6 +65,7 @@
1025 {
1026 BUG_ON(!obj);
1027 list_del(&amp;obj-&gt;list);
1028+ __object_put(obj);
1029 cache_num--;
1030 }
1031
1032@@ -63,6 +94,7 @@
1033 strlcpy(obj-&gt;name, name, sizeof(obj-&gt;name));
1034 obj-&gt;id = id;
1035 obj-&gt;popularity = 0;
1036+ obj-&gt;refcnt = 1; /* The cache holds a reference */
1037
1038 spin_lock_irqsave(&amp;cache_lock, flags);
1039 __cache_add(obj);
1040@@ -79,18 +111,15 @@
1041 spin_unlock_irqrestore(&amp;cache_lock, flags);
1042 }
1043
1044-int cache_find(int id, char *name)
1045+struct object *cache_find(int id)
1046 {
1047 struct object *obj;
1048- int ret = -ENOENT;
1049 unsigned long flags;
1050
1051 spin_lock_irqsave(&amp;cache_lock, flags);
1052 obj = __cache_find(id);
1053- if (obj) {
1054- ret = 0;
1055- strcpy(name, obj-&gt;name);
1056- }
1057+ if (obj)
1058+ __object_get(obj);
1059 spin_unlock_irqrestore(&amp;cache_lock, flags);
1060- return ret;
1061+ return obj;
1062 }
1063</programlisting>
1064
1065<para>
1066We encapsulate the reference counting in the standard 'get' and 'put'
1067functions. Now we can return the object itself from
1068<function>cache_find</function> which has the advantage that the user
1069can now sleep holding the object (eg. to
 1070 <function>copy_to_user</function> the name to userspace).
1071</para>
1072<para>
1073The other point to note is that I said a reference should be held for
1074every pointer to the object: thus the reference count is 1 when first
 1075 inserted into the cache. In some variants of this pattern the cache
 1076 itself does not hold a reference count, but such schemes are more complicated.
1077</para>
1078
1079 <sect2 id="examples-refcnt-atomic">
1080 <title>Using Atomic Operations For The Reference Count</title>
1081<para>
1082In practice, <type>atomic_t</type> would usually be used for
1083<structfield>refcnt</structfield>. There are a number of atomic
1084operations defined in
1085
1086<filename class="headerfile">include/asm/atomic.h</filename>: these are
1087guaranteed to be seen atomically from all CPUs in the system, so no
1088lock is required. In this case, it is simpler than using spinlocks,
1089although for anything non-trivial using spinlocks is clearer. The
1090<function>atomic_inc</function> and
1091<function>atomic_dec_and_test</function> are used instead of the
1092standard increment and decrement operators, and the lock is no longer
1093used to protect the reference count itself.
1094</para>
1095
1096<programlisting>
1097--- cache.c.refcnt 2003-12-09 15:00:35.000000000 +1100
1098+++ cache.c.refcnt-atomic 2003-12-11 15:49:42.000000000 +1100
1099@@ -7,7 +7,7 @@
1100 struct object
1101 {
1102 struct list_head list;
1103- unsigned int refcnt;
1104+ atomic_t refcnt;
1105 int id;
1106 char name[32];
1107 int popularity;
1108@@ -18,33 +18,15 @@
1109 static unsigned int cache_num = 0;
1110 #define MAX_CACHE_SIZE 10
1111
1112-static void __object_put(struct object *obj)
1113-{
1114- if (--obj-&gt;refcnt == 0)
1115- kfree(obj);
1116-}
1117-
1118-static void __object_get(struct object *obj)
1119-{
1120- obj-&gt;refcnt++;
1121-}
1122-
1123 void object_put(struct object *obj)
1124 {
1125- unsigned long flags;
1126-
1127- spin_lock_irqsave(&amp;cache_lock, flags);
1128- __object_put(obj);
1129- spin_unlock_irqrestore(&amp;cache_lock, flags);
1130+ if (atomic_dec_and_test(&amp;obj-&gt;refcnt))
1131+ kfree(obj);
1132 }
1133
1134 void object_get(struct object *obj)
1135 {
1136- unsigned long flags;
1137-
1138- spin_lock_irqsave(&amp;cache_lock, flags);
1139- __object_get(obj);
1140- spin_unlock_irqrestore(&amp;cache_lock, flags);
1141+ atomic_inc(&amp;obj-&gt;refcnt);
1142 }
1143
1144 /* Must be holding cache_lock */
1145@@ -65,7 +47,7 @@
1146 {
1147 BUG_ON(!obj);
1148 list_del(&amp;obj-&gt;list);
1149- __object_put(obj);
1150+ object_put(obj);
1151 cache_num--;
1152 }
1153
1154@@ -94,7 +76,7 @@
1155 strlcpy(obj-&gt;name, name, sizeof(obj-&gt;name));
1156 obj-&gt;id = id;
1157 obj-&gt;popularity = 0;
1158- obj-&gt;refcnt = 1; /* The cache holds a reference */
1159+ atomic_set(&amp;obj-&gt;refcnt, 1); /* The cache holds a reference */
1160
1161 spin_lock_irqsave(&amp;cache_lock, flags);
1162 __cache_add(obj);
1163@@ -119,7 +101,7 @@
1164 spin_lock_irqsave(&amp;cache_lock, flags);
1165 obj = __cache_find(id);
1166 if (obj)
1167- __object_get(obj);
1168+ object_get(obj);
1169 spin_unlock_irqrestore(&amp;cache_lock, flags);
1170 return obj;
1171 }
1172</programlisting>
1173</sect2>
1174</sect1>
1175
1176 <sect1 id="examples-lock-per-obj">
1177 <title>Protecting The Objects Themselves</title>
1178 <para>
1179In these examples, we assumed that the objects (except the reference
1180counts) never changed once they are created. If we wanted to allow
1181the name to change, there are three possibilities:
1182 </para>
1183 <itemizedlist>
1184 <listitem>
1185 <para>
1186You can make <symbol>cache_lock</symbol> non-static, and tell people
1187to grab that lock before changing the name in any object.
1188 </para>
1189 </listitem>
1190 <listitem>
1191 <para>
1192You can provide a <function>cache_obj_rename</function> which grabs
1193this lock and changes the name for the caller, and tell everyone to
1194use that function.
1195 </para>
1196 </listitem>
1197 <listitem>
1198 <para>
1199You can make the <symbol>cache_lock</symbol> protect only the cache
1200itself, and use another lock to protect the name.
1201 </para>
1202 </listitem>
1203 </itemizedlist>
1204
1205 <para>
1206Theoretically, you can make the locks as fine-grained as one lock for
1207every field, for every object. In practice, the most common variants
1208are:
1209</para>
1210 <itemizedlist>
1211 <listitem>
1212 <para>
1213One lock which protects the infrastructure (the <symbol>cache</symbol>
1214list in this example) and all the objects. This is what we have done
1215so far.
1216 </para>
1217 </listitem>
1218 <listitem>
1219 <para>
1220One lock which protects the infrastructure (including the list
1221pointers inside the objects), and one lock inside the object which
1222protects the rest of that object.
1223 </para>
1224 </listitem>
1225 <listitem>
1226 <para>
1227Multiple locks to protect the infrastructure (eg. one lock per hash
1228chain), possibly with a separate per-object lock.
1229 </para>
1230 </listitem>
1231 </itemizedlist>
1232
1233<para>
1234Here is the "lock-per-object" implementation:
1235</para>
1236<programlisting>
1237--- cache.c.refcnt-atomic 2003-12-11 15:50:54.000000000 +1100
1238+++ cache.c.perobjectlock 2003-12-11 17:15:03.000000000 +1100
1239@@ -6,11 +6,17 @@
1240
1241 struct object
1242 {
1243+ /* These two protected by cache_lock. */
1244 struct list_head list;
1245+ int popularity;
1246+
1247 atomic_t refcnt;
1248+
1249+ /* Doesn't change once created. */
1250 int id;
1251+
1252+ spinlock_t lock; /* Protects the name */
1253 char name[32];
1254- int popularity;
1255 };
1256
1257 static DEFINE_SPINLOCK(cache_lock);
1258@@ -77,6 +84,7 @@
1259 obj-&gt;id = id;
1260 obj-&gt;popularity = 0;
1261 atomic_set(&amp;obj-&gt;refcnt, 1); /* The cache holds a reference */
1262+ spin_lock_init(&amp;obj-&gt;lock);
1263
1264 spin_lock_irqsave(&amp;cache_lock, flags);
1265 __cache_add(obj);
1266</programlisting>
1267
1268<para>
 1269 Note that I decided that the <structfield>popularity</structfield>
1270count should be protected by the <symbol>cache_lock</symbol> rather
1271than the per-object lock: this is because it (like the
1272<structname>struct list_head</structname> inside the object) is
1273logically part of the infrastructure. This way, I don't need to grab
1274the lock of every object in <function>__cache_add</function> when
1275seeking the least popular.
1276</para>
1277
1278<para>
1279I also decided that the <structfield>id</structfield> member is
1280unchangeable, so I don't need to grab each object lock in
1281<function>__cache_find()</function> to examine the
1282<structfield>id</structfield>: the object lock is only used by a
1283caller who wants to read or write the <structfield>name</structfield>
1284field.
1285</para>
1286
1287<para>
1288Note also that I added a comment describing what data was protected by
1289which locks. This is extremely important, as it describes the runtime
 1290 behavior of the code, which can be hard to deduce from just reading it. And
1291as Alan Cox says, <quote>Lock data, not code</quote>.
1292</para>
1293</sect1>
1294</chapter>
1295
1296 <chapter id="common-problems">
1297 <title>Common Problems</title>
1298 <sect1 id="deadlock">
1299 <title>Deadlock: Simple and Advanced</title>
1300
1301 <para>
1302 There is a coding bug where a piece of code tries to grab a
1303 spinlock twice: it will spin forever, waiting for the lock to
1304 be released (spinlocks, rwlocks and mutexes are not
1305 recursive in Linux). This is trivial to diagnose: not a
1306 stay-up-five-nights-talk-to-fluffy-code-bunnies kind of
1307 problem.
1308 </para>
1309
1310 <para>
1311 For a slightly more complex case, imagine you have a region
1312 shared by a softirq and user context. If you use a
1313 <function>spin_lock()</function> call to protect it, it is
1314 possible that the user context will be interrupted by the softirq
1315 while it holds the lock, and the softirq will then spin
1316 forever trying to get the same lock.
1317 </para>
1318
1319 <para>
1320 Both of these are called deadlock, and as shown above, it can
1321 occur even with a single CPU (although not on UP compiles,
1322 since spinlocks vanish on kernel compiles with
1323 <symbol>CONFIG_SMP</symbol>=n. You'll still get data corruption
1324 in the second example).
1325 </para>
1326
1327 <para>
1328 This complete lockup is easy to diagnose: on SMP boxes the
1329 watchdog timer or compiling with <symbol>DEBUG_SPINLOCK</symbol> set
1330 (<filename>include/linux/spinlock.h</filename>) will show this up
1331 immediately when it happens.
1332 </para>
1333
1334 <para>
1335 A more complex problem is the so-called 'deadly embrace',
1336 involving two or more locks. Say you have a hash table: each
1337 entry in the table is a spinlock, and a chain of hashed
1338 objects. Inside a softirq handler, you sometimes want to
1339 alter an object from one place in the hash to another: you
1340 grab the spinlock of the old hash chain and the spinlock of
1341 the new hash chain, and delete the object from the old one,
1342 and insert it in the new one.
1343 </para>
1344
1345 <para>
1346 There are two problems here. First, if your code ever
1347 tries to move the object to the same chain, it will deadlock
1348 with itself as it tries to lock it twice. Secondly, if the
1349 same softirq on another CPU is trying to move another object
1350 in the reverse direction, the following could happen:
1351 </para>
1352
1353 <table>
1354 <title>Consequences</title>
1355
1356 <tgroup cols="2" align="left">
1357
1358 <thead>
1359 <row>
1360 <entry>CPU 1</entry>
1361 <entry>CPU 2</entry>
1362 </row>
1363 </thead>
1364
1365 <tbody>
1366 <row>
1367 <entry>Grab lock A -&gt; OK</entry>
1368 <entry>Grab lock B -&gt; OK</entry>
1369 </row>
1370 <row>
1371 <entry>Grab lock B -&gt; spin</entry>
1372 <entry>Grab lock A -&gt; spin</entry>
1373 </row>
1374 </tbody>
1375 </tgroup>
1376 </table>
1377
1378 <para>
1379 The two CPUs will spin forever, waiting for the other to give up
1380 their lock. It will look, smell, and feel like a crash.
1381 </para>
1382 </sect1>
1383
1384 <sect1 id="techs-deadlock-prevent">
1385 <title>Preventing Deadlock</title>
1386
1387 <para>
1388 Textbooks will tell you that if you always lock in the same
1389 order, you will never get this kind of deadlock. Practice
1390 will tell you that this approach doesn't scale: when I
1391 create a new lock, I don't understand enough of the kernel
1392 to figure out where in the 5000 lock hierarchy it will fit.
1393 </para>
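    <para>
      For completeness, the textbook remedy usually means imposing an
      artificial global order, for example by lock address. This is
      only a sketch of that idea, not code from this document:
    </para>
    <programlisting>
/* Take two hash-chain locks in a globally consistent order. */
static void lock_two_chains(spinlock_t *a, spinlock_t *b)
{
        BUG_ON(a == b);         /* same chain: caller must handle that */
        if (a &lt; b) {
                spin_lock(a);
                spin_lock(b);
        } else {
                spin_lock(b);
                spin_lock(a);
        }
}
    </programlisting>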
1394
1395 <para>
1396 The best locks are encapsulated: they never get exposed in
1397 headers, and are never held around calls to non-trivial
1398 functions outside the same file. You can read through this
1399 code and see that it will never deadlock, because it never
1400 tries to grab another lock while it has that one. People
1401 using your code don't even need to know you are using a
1402 lock.
1403 </para>
1404
1405 <para>
1406 A classic problem here is when you provide callbacks or
1407 hooks: if you call these with the lock held, you risk simple
1408 deadlock, or a deadly embrace (who knows what the callback
1409 will do?). Remember, the other programmers are out to get
1410 you, so don't do this.
1411 </para>
1412
1413 <sect2 id="techs-deadlock-overprevent">
1414 <title>Overzealous Prevention Of Deadlocks</title>
1415
1416 <para>
1417 Deadlocks are problematic, but not as bad as data
1418 corruption. Code which grabs a read lock, searches a list,
1419 fails to find what it wants, drops the read lock, grabs a
1420 write lock and inserts the object has a race condition.
1421 </para>
1422
1423 <para>
1424 If you don't see why, please stay the fuck away from my code.
1425 </para>
1426 </sect2>
1427 </sect1>
1428
1429 <sect1 id="racing-timers">
1430 <title>Racing Timers: A Kernel Pastime</title>
1431
1432 <para>
1433 Timers can produce their own special problems with races.
1434 Consider a collection of objects (list, hash, etc) where each
1435 object has a timer which is due to destroy it.
1436 </para>
1437
1438 <para>
1439 If you want to destroy the entire collection (say on module
1440 removal), you might do the following:
1441 </para>
1442
1443 <programlisting>
1444 /* THIS CODE BAD BAD BAD BAD: IF IT WAS ANY WORSE IT WOULD USE
1445 HUNGARIAN NOTATION */
1446 spin_lock_bh(&amp;list_lock);
1447
1448 while (list) {
1449 struct foo *next = list-&gt;next;
1450 del_timer(&amp;list-&gt;timer);
1451 kfree(list);
1452 list = next;
1453 }
1454
1455 spin_unlock_bh(&amp;list_lock);
1456 </programlisting>
1457
1458 <para>
1459 Sooner or later, this will crash on SMP, because a timer can
1460 have just gone off before the <function>spin_lock_bh()</function>,
1461 and it will only get the lock after we
1462 <function>spin_unlock_bh()</function>, and then try to free
1463 the element (which has already been freed!).
1464 </para>
1465
1466 <para>
1467 This can be avoided by checking the result of
1468 <function>del_timer()</function>: if it returns
1469 <returnvalue>1</returnvalue>, the timer has been deleted.
1470 If <returnvalue>0</returnvalue>, it means (in this
1471 case) that it is currently running, so we can do:
1472 </para>
1473
1474 <programlisting>
1475 retry:
1476 spin_lock_bh(&amp;list_lock);
1477
1478 while (list) {
1479 struct foo *next = list-&gt;next;
1480 if (!del_timer(&amp;list-&gt;timer)) {
1481 /* Give timer a chance to delete this */
1482 spin_unlock_bh(&amp;list_lock);
1483 goto retry;
1484 }
1485 kfree(list);
1486 list = next;
1487 }
1488
1489 spin_unlock_bh(&amp;list_lock);
1490 </programlisting>
1491
1492 <para>
1493 Another common problem is deleting timers which restart
1494 themselves (by calling <function>add_timer()</function> at the end
1495 of their timer function). Because this is a fairly common case
1496 which is prone to races, you should use <function>del_timer_sync()</function>
1497 (<filename class="headerfile">include/linux/timer.h</filename>)
1498 to handle this case. It returns the number of times the timer
1499 had to be deleted before we finally stopped it from adding itself back
1500 in.
1501 </para>
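    <para>
      A hedged sketch of using <function>del_timer_sync()</function>
      for the cleanup above: assuming the timer function takes
      <symbol>list_lock</symbol>, the lock must not be held across the
      <function>del_timer_sync()</function> call, or we would deadlock.
    </para>
    <programlisting>
        struct foo *doomed;

        /* Detach the whole list under the lock... */
        spin_lock_bh(&amp;list_lock);
        doomed = list;
        list = NULL;
        spin_unlock_bh(&amp;list_lock);

        /* ...then stop each timer with the lock dropped. */
        while (doomed) {
                struct foo *next = doomed-&gt;next;
                del_timer_sync(&amp;doomed-&gt;timer);
                kfree(doomed);
                doomed = next;
        }
    </programlisting>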
1502 </sect1>
1503
1504 </chapter>
1505
1506 <chapter id="Efficiency">
1507 <title>Locking Speed</title>
1508
1509 <para>
 1510 There are three main things to worry about when considering the speed of
1511some code which does locking. First is concurrency: how many things
1512are going to be waiting while someone else is holding a lock. Second
1513is the time taken to actually acquire and release an uncontended lock.
 1514 Third is using fewer, or smarter, locks. I'm assuming that the lock is
1515used fairly often: otherwise, you wouldn't be concerned about
1516efficiency.
1517</para>
1518 <para>
1519Concurrency depends on how long the lock is usually held: you should
1520hold the lock for as long as needed, but no longer. In the cache
1521example, we always create the object without the lock held, and then
1522grab the lock only when we are ready to insert it in the list.
1523</para>
1524 <para>
1525Acquisition times depend on how much damage the lock operations do to
1526the pipeline (pipeline stalls) and how likely it is that this CPU was
1527the last one to grab the lock (ie. is the lock cache-hot for this
1528CPU): on a machine with more CPUs, this likelihood drops fast.
1529Consider a 700MHz Intel Pentium III: an instruction takes about 0.7ns,
1530an atomic increment takes about 58ns, a lock which is cache-hot on
1531this CPU takes 160ns, and a cacheline transfer from another CPU takes
1532an additional 170 to 360ns. (These figures from Paul McKenney's
1533<ulink url="http://www.linuxjournal.com/article.php?sid=6993"> Linux
1534Journal RCU article</ulink>).
1535</para>
1536 <para>
1537These two aims conflict: holding a lock for a short time might be done
1538by splitting locks into parts (such as in our final per-object-lock
1539example), but this increases the number of lock acquisitions, and the
1540results are often slower than having a single lock. This is another
1541reason to advocate locking simplicity.
1542</para>
1543 <para>
1544The third concern is addressed below: there are some methods to reduce
1545the amount of locking which needs to be done.
1546</para>
1547
1548 <sect1 id="efficiency-rwlocks">
1549 <title>Read/Write Lock Variants</title>
1550
1551 <para>
1552 Both spinlocks and mutexes have read/write variants:
1553 <type>rwlock_t</type> and <structname>struct rw_semaphore</structname>.
1554 These divide users into two classes: the readers and the writers. If
1555 you are only reading the data, you can get a read lock, but to write to
1556 the data you need the write lock. Many people can hold a read lock,
 1557 but a writer must be the sole holder.
1558 </para>
1559
1560 <para>
1561 If your code divides neatly along reader/writer lines (as our
1562 cache code does), and the lock is held by readers for
1563 significant lengths of time, using these locks can help. They
1564 are slightly slower than the normal locks though, so in practice
1565 <type>rwlock_t</type> is not usually worthwhile.
1566 </para>
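    <para>
      As a sketch, the spinlock variant looks like this (hypothetical
      names; the semaphore variant uses <function>down_read()</function>,
      <function>up_read()</function> and friends):
    </para>
    <programlisting>
static DEFINE_RWLOCK(config_lock);
static int config_value;

int read_config(void)            /* many readers may hold this at once */
{
        int v;

        read_lock(&amp;config_lock);
        v = config_value;
        read_unlock(&amp;config_lock);
        return v;
}

void write_config(int v)         /* a writer excludes everyone else */
{
        write_lock(&amp;config_lock);
        config_value = v;
        write_unlock(&amp;config_lock);
}
    </programlisting>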
1567 </sect1>
1568
1569 <sect1 id="efficiency-read-copy-update">
1570 <title>Avoiding Locks: Read Copy Update</title>
1571
1572 <para>
1573 There is a special method of read/write locking called Read Copy
1574 Update. Using RCU, the readers can avoid taking a lock
1575 altogether: as we expect our cache to be read more often than
1576 updated (otherwise the cache is a waste of time), it is a
1577 candidate for this optimization.
1578 </para>
1579
1580 <para>
1581 How do we get rid of read locks? Getting rid of read locks
1582 means that writers may be changing the list underneath the
1583 readers. That is actually quite simple: we can read a linked
1584 list while an element is being added if the writer adds the
1585 element very carefully. For example, adding
1586 <symbol>new</symbol> to a single linked list called
1587 <symbol>list</symbol>:
1588 </para>
1589
1590 <programlisting>
1591 new-&gt;next = list-&gt;next;
1592 wmb();
1593 list-&gt;next = new;
1594 </programlisting>
1595
1596 <para>
1597 The <function>wmb()</function> is a write memory barrier. It
1598 ensures that the first operation (setting the new element's
1599 <symbol>next</symbol> pointer) is complete and will be seen by
1600 all CPUs, before the second operation is (putting the new
1601 element into the list). This is important, since modern
1602 compilers and modern CPUs can both reorder instructions unless
1603 told otherwise: we want a reader to either not see the new
1604 element at all, or see the new element with the
1605 <symbol>next</symbol> pointer correctly pointing at the rest of
1606 the list.
1607 </para>
1608 <para>
1609 Fortunately, there is a function to do this for standard
1610 <structname>struct list_head</structname> lists:
1611 <function>list_add_rcu()</function>
1612 (<filename>include/linux/list.h</filename>).
1613 </para>
1614 <para>
1615 Removing an element from the list is even simpler: we replace
1616 the pointer to the old element with a pointer to its successor,
1617 and readers will either see it, or skip over it.
1618 </para>
1619 <programlisting>
1620 list-&gt;next = old-&gt;next;
1621 </programlisting>
1622 <para>
1623 There is <function>list_del_rcu()</function>
1624 (<filename>include/linux/list.h</filename>) which does this (the
1625 normal version poisons the old object, which we don't want).
1626 </para>
1627 <para>
1628 The reader must also be careful: some CPUs can look through the
1629 <symbol>next</symbol> pointer to start reading the contents of
1630 the next element early, but don't realize that the pre-fetched
 1631 contents are wrong when the <symbol>next</symbol> pointer changes
1632 underneath them. Once again, there is a
1633 <function>list_for_each_entry_rcu()</function>
1634 (<filename>include/linux/list.h</filename>) to help you. Of
1635 course, writers can just use
1636 <function>list_for_each_entry()</function>, since there cannot
1637 be two simultaneous writers.
1638 </para>
1639 <para>
1640 Our final dilemma is this: when can we actually destroy the
1641 removed element? Remember, a reader might be stepping through
1642 this element in the list right now: if we free this element and
1643 the <symbol>next</symbol> pointer changes, the reader will jump
1644 off into garbage and crash. We need to wait until we know that
1645 all the readers who were traversing the list when we deleted the
1646 element are finished. We use <function>call_rcu()</function> to
1647 register a callback which will actually destroy the object once
1648 all pre-existing readers are finished. Alternatively,
1649 <function>synchronize_rcu()</function> may be used to block until
 1650 all pre-existing readers are finished.
1651 </para>
1652 <para>
1653 But how does Read Copy Update know when the readers are
1654 finished? The method is this: firstly, the readers always
1655 traverse the list inside
1656 <function>rcu_read_lock()</function>/<function>rcu_read_unlock()</function>
1657 pairs: these simply disable preemption so the reader won't go to
1658 sleep while reading the list.
1659 </para>
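    <para>
      A hedged sketch of such a reader, using the names from the cache
      example (this mirrors what the patch below does):
    </para>
    <programlisting>
        struct object *i;

        rcu_read_lock();         /* no sleeping until rcu_read_unlock() */
        list_for_each_entry_rcu(i, &amp;cache, list) {
                if (i-&gt;id == id) {
                        /* ... use i here, but do not sleep ... */
                        break;
                }
        }
        rcu_read_unlock();
    </programlisting>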
1660 <para>
1661 RCU then waits until every other CPU has slept at least once:
1662 since readers cannot sleep, we know that any readers which were
1663 traversing the list during the deletion are finished, and the
1664 callback is triggered. The real Read Copy Update code is a
1665 little more optimized than this, but this is the fundamental
1666 idea.
1667 </para>
1668
1669<programlisting>
1670--- cache.c.perobjectlock 2003-12-11 17:15:03.000000000 +1100
1671+++ cache.c.rcupdate 2003-12-11 17:55:14.000000000 +1100
1672@@ -1,15 +1,18 @@
1673 #include &lt;linux/list.h&gt;
1674 #include &lt;linux/slab.h&gt;
1675 #include &lt;linux/string.h&gt;
1676+#include &lt;linux/rcupdate.h&gt;
1677 #include &lt;linux/mutex.h&gt;
1678 #include &lt;asm/errno.h&gt;
1679
1680 struct object
1681 {
1682- /* These two protected by cache_lock. */
1683+ /* This is protected by RCU */
1684 struct list_head list;
1685 int popularity;
1686
1687+ struct rcu_head rcu;
1688+
1689 atomic_t refcnt;
1690
1691 /* Doesn't change once created. */
1692@@ -40,7 +43,7 @@
1693 {
1694 struct object *i;
1695
1696- list_for_each_entry(i, &amp;cache, list) {
1697+ list_for_each_entry_rcu(i, &amp;cache, list) {
1698 if (i-&gt;id == id) {
1699 i-&gt;popularity++;
1700 return i;
1701@@ -49,19 +52,25 @@
1702 return NULL;
1703 }
1704
1705+/* Final discard done once we know no readers are looking. */
1706+static void cache_delete_rcu(void *arg)
1707+{
1708+ object_put(arg);
1709+}
1710+
1711 /* Must be holding cache_lock */
1712 static void __cache_delete(struct object *obj)
1713 {
1714 BUG_ON(!obj);
1715- list_del(&amp;obj-&gt;list);
1716- object_put(obj);
1717+ list_del_rcu(&amp;obj-&gt;list);
1718 cache_num--;
1719+ call_rcu(&amp;obj-&gt;rcu, cache_delete_rcu);
1720 }
1721
1722 /* Must be holding cache_lock */
1723 static void __cache_add(struct object *obj)
1724 {
1725- list_add(&amp;obj-&gt;list, &amp;cache);
1726+ list_add_rcu(&amp;obj-&gt;list, &amp;cache);
1727 if (++cache_num > MAX_CACHE_SIZE) {
1728 struct object *i, *outcast = NULL;
1729 list_for_each_entry(i, &amp;cache, list) {
1730@@ -104,12 +114,11 @@
1731 struct object *cache_find(int id)
1732 {
1733 struct object *obj;
1734- unsigned long flags;
1735
1736- spin_lock_irqsave(&amp;cache_lock, flags);
1737+ rcu_read_lock();
1738 obj = __cache_find(id);
1739 if (obj)
1740 object_get(obj);
1741- spin_unlock_irqrestore(&amp;cache_lock, flags);
1742+ rcu_read_unlock();
1743 return obj;
1744 }
1745</programlisting>
1746
1747<para>
1748Note that the reader will alter the
1749<structfield>popularity</structfield> member in
1750<function>__cache_find()</function>, and now it doesn't hold a lock.
1751One solution would be to make it an <type>atomic_t</type>, but for
1752this usage, we don't really care about races: an approximate result is
1753good enough, so I didn't change it.
1754</para>
1755
1756<para>
1757The result is that <function>cache_find()</function> requires no
1758synchronization with any other functions, so is almost as fast on SMP
1759as it would be on UP.
1760</para>
1761
1762<para>
1763There is a further optimization possible here: remember our original
1764cache code, where there were no reference counts and the caller simply
1765held the lock whenever using the object? This is still possible: if
1766you hold the lock, no one can delete the object, so you don't need to
1767get and put the reference count.
1768</para>
1769
1770<para>
1771Now, because the 'read lock' in RCU is simply disabling preemption, a
1772caller which always has preemption disabled between calling
1773<function>cache_find()</function> and
1774<function>object_put()</function> does not need to actually get and
1775put the reference count: we could expose
1776<function>__cache_find()</function> by making it non-static, and
1777such callers could simply call that.
1778</para>
1779<para>
1780The benefit here is that the reference count is not written to: the
1781object is not altered in any way, which is much faster on SMP
1782machines due to caching.
1783</para>
1784 </sect1>
1785
1786 <sect1 id="per-cpu">
1787 <title>Per-CPU Data</title>
1788
1789 <para>
1790 Another technique for avoiding locking which is used fairly
1791 widely is to duplicate information for each CPU. For example,
1792 if you wanted to keep a count of a common condition, you could
1793 use a spin lock and a single counter. Nice and simple.
1794 </para>
1795
1796 <para>
1797 If that was too slow (it's usually not, but if you've got a
1798 really big machine to test on and can show that it is), you
1799 could instead use a counter for each CPU, then none of them need
1800 an exclusive lock. See <function>DEFINE_PER_CPU()</function>,
1801 <function>get_cpu_var()</function> and
1802 <function>put_cpu_var()</function>
1803 (<filename class="headerfile">include/linux/percpu.h</filename>).
1804 </para>
1805
1806 <para>
1807 Of particular use for simple per-cpu counters is the
1808 <type>local_t</type> type, and the
1809 <function>cpu_local_inc()</function> and related functions,
1810 which are more efficient than simple code on some architectures
1811 (<filename class="headerfile">include/asm/local.h</filename>).
1812 </para>
1813
1814 <para>
1815 Note that there is no simple, reliable way of getting an exact
1816 value of such a counter, without introducing more locks. This
1817 is not a problem for some uses.
1818 </para>
1819 </sect1>
1820
1821 <sect1 id="mostly-hardirq">
1822 <title>Data Which Is Mostly Used By An IRQ Handler</title>
1823
1824 <para>
1825 If data is always accessed from within the same IRQ handler, you
1826 don't need a lock at all: the kernel already guarantees that the
1827 irq handler will not run simultaneously on multiple CPUs.
1828 </para>
1829 <para>
1830 Manfred Spraul points out that you can still do this, even if
1831 the data is very occasionally accessed in user context or
1832 softirqs/tasklets. The irq handler doesn't use a lock, and
1833 all other accesses are done like so:
1834 </para>
1835
1836<programlisting>
1837 spin_lock(&amp;lock);
1838 disable_irq(irq);
1839 ...
1840 enable_irq(irq);
1841 spin_unlock(&amp;lock);
1842</programlisting>
1843 <para>
1844 The <function>disable_irq()</function> prevents the irq handler
1845 from running (and waits for it to finish if it's currently
1846 running on other CPUs). The spinlock prevents any other
1847 accesses happening at the same time. Naturally, this is slower
1848 than just a <function>spin_lock_irq()</function> call, so it
1849 only makes sense if this type of access happens extremely
1850 rarely.
1851 </para>
1852 </sect1>
1853 </chapter>
1854
1855 <chapter id="sleeping-things">
1856 <title>What Functions Are Safe To Call From Interrupts?</title>
1857
1858 <para>
1859 Many functions in the kernel sleep (ie. call schedule())
1860 directly or indirectly: you can never call them while holding a
1861 spinlock, or with preemption disabled. This also means you need
1862 to be in user context: calling them from an interrupt is illegal.
1863 </para>
1864
1865 <sect1 id="sleeping">
1866 <title>Some Functions Which Sleep</title>
1867
1868 <para>
1869 The most common ones are listed below, but you usually have to
1870 read the code to find out if other calls are safe. If everyone
1871 else who calls it can sleep, you probably need to be able to
1872 sleep, too. In particular, registration and deregistration
1873 functions usually expect to be called from user context, and can
1874 sleep.
1875 </para>
1876
1877 <itemizedlist>
1878 <listitem>
1879 <para>
1880 Accesses to
1881 <firstterm linkend="gloss-userspace">userspace</firstterm>:
1882 </para>
1883 <itemizedlist>
1884 <listitem>
1885 <para>
1886 <function>copy_from_user()</function>
1887 </para>
1888 </listitem>
1889 <listitem>
1890 <para>
1891 <function>copy_to_user()</function>
1892 </para>
1893 </listitem>
1894 <listitem>
1895 <para>
1896 <function>get_user()</function>
1897 </para>
1898 </listitem>
1899 <listitem>
1900 <para>
1901 <function>put_user()</function>
1902 </para>
1903 </listitem>
1904 </itemizedlist>
1905 </listitem>
1906
1907 <listitem>
1908 <para>
1909 <function>kmalloc(GFP_KERNEL)</function>
1910 </para>
1911 </listitem>
1912
1913 <listitem>
1914 <para>
1915 <function>mutex_lock_interruptible()</function> and
1916 <function>mutex_lock()</function>
1917 </para>
1918 <para>
1919 There is a <function>mutex_trylock()</function> which does not
1920 sleep. Still, it must not be used inside interrupt context since
1921 its implementation is not safe for that.
1922 <function>mutex_unlock()</function> will also never sleep.
1923 It cannot be used in interrupt context either since a mutex
1924 must be released by the same task that acquired it.
1925 </para>
1926 </listitem>
1927 </itemizedlist>
1928 </sect1>
1929
1930 <sect1 id="dont-sleep">
1931 <title>Some Functions Which Don't Sleep</title>
1932
1933 <para>
1934 Some functions are safe to call from any context, or holding
1935 almost any lock.
1936 </para>
1937
1938 <itemizedlist>
1939 <listitem>
1940 <para>
1941 <function>printk()</function>
1942 </para>
1943 </listitem>
1944 <listitem>
1945 <para>
1946 <function>kfree()</function>
1947 </para>
1948 </listitem>
1949 <listitem>
1950 <para>
1951 <function>add_timer()</function> and <function>del_timer()</function>
1952 </para>
1953 </listitem>
1954 </itemizedlist>
1955 </sect1>
1956 </chapter>
1957
1958 <chapter id="apiref-mutex">
1959 <title>Mutex API reference</title>
1960!Iinclude/linux/mutex.h
1961!Ekernel/locking/mutex.c
1962 </chapter>
1963
1964 <chapter id="apiref-futex">
1965 <title>Futex API reference</title>
1966!Ikernel/futex.c
1967 </chapter>
1968
1969 <chapter id="references">
1970 <title>Further reading</title>
1971
1972 <itemizedlist>
1973 <listitem>
1974 <para>
1975 <filename>Documentation/locking/spinlocks.txt</filename>:
1976 Linus Torvalds' spinlocking tutorial in the kernel sources.
1977 </para>
1978 </listitem>
1979
1980 <listitem>
1981 <para>
1982 Unix Systems for Modern Architectures: Symmetric
1983 Multiprocessing and Caching for Kernel Programmers:
1984 </para>
1985
1986 <para>
1987 Curt Schimmel's very good introduction to kernel level
1988 locking (not written for Linux, but nearly everything
1989 applies). The book is expensive, but really worth every
1990 penny to understand SMP locking. [ISBN: 0201633388]
1991 </para>
1992 </listitem>
1993 </itemizedlist>
1994 </chapter>
1995
1996 <chapter id="thanks">
1997 <title>Thanks</title>
1998
1999 <para>
2000 Thanks to Telsa Gwynne for DocBooking, neatening and adding
2001 style.
2002 </para>
2003
2004 <para>
2005 Thanks to Martin Pool, Philipp Rumpf, Stephen Rothwell, Paul
2006 Mackerras, Ruedi Aschwanden, Alan Cox, Manfred Spraul, Tim
2007 Waugh, Pete Zaitcev, James Morris, Robert Love, Paul McKenney,
2008 John Ashby for proofreading, correcting, flaming, commenting.
2009 </para>
2010
2011 <para>
2012 Thanks to the cabal for having no influence on this document.
2013 </para>
2014 </chapter>
2015
2016 <glossary id="glossary">
2017 <title>Glossary</title>
2018
2019 <glossentry id="gloss-preemption">
2020 <glossterm>preemption</glossterm>
2021 <glossdef>
2022 <para>
2023 Prior to 2.5, or when <symbol>CONFIG_PREEMPT</symbol> is
2024 unset, processes in user context inside the kernel would not
2025 preempt each other (ie. you had that CPU until you gave it up,
2026 except for interrupts). With the addition of
2027 <symbol>CONFIG_PREEMPT</symbol> in 2.5.4, this changed: when
2028 in user context, higher priority tasks can "cut in": spinlocks
2029 were changed to disable preemption, even on UP.
2030 </para>
2031 </glossdef>
2032 </glossentry>
2033
2034 <glossentry id="gloss-bh">
2035 <glossterm>bh</glossterm>
2036 <glossdef>
2037 <para>
2038 Bottom Half: for historical reasons, functions with
2039 '_bh' in them often now refer to any software interrupt, e.g.
2040 <function>spin_lock_bh()</function> blocks any software interrupt
2041 on the current CPU. Bottom halves are deprecated, and will
2042 eventually be replaced by tasklets. Only one bottom half will be
2043 running at any time.
2044 </para>
2045 </glossdef>
2046 </glossentry>
2047
2048 <glossentry id="gloss-hwinterrupt">
2049 <glossterm>Hardware Interrupt / Hardware IRQ</glossterm>
2050 <glossdef>
2051 <para>
2052 Hardware interrupt request. <function>in_irq()</function> returns
2053 <returnvalue>true</returnvalue> in a hardware interrupt handler.
2054 </para>
2055 </glossdef>
2056 </glossentry>
2057
2058 <glossentry id="gloss-interruptcontext">
2059 <glossterm>Interrupt Context</glossterm>
2060 <glossdef>
2061 <para>
2062 Not user context: processing a hardware irq or software irq.
2063 Indicated by the <function>in_interrupt()</function> macro
2064 returning <returnvalue>true</returnvalue>.
2065 </para>
2066 </glossdef>
2067 </glossentry>
2068
2069 <glossentry id="gloss-smp">
2070 <glossterm><acronym>SMP</acronym></glossterm>
2071 <glossdef>
2072 <para>
2073 Symmetric Multi-Processor: kernels compiled for multiple-CPU
2074 machines. (CONFIG_SMP=y).
2075 </para>
2076 </glossdef>
2077 </glossentry>
2078
2079 <glossentry id="gloss-softirq">
2080 <glossterm>Software Interrupt / softirq</glossterm>
2081 <glossdef>
2082 <para>
2083 Software interrupt handler. <function>in_irq()</function> returns
2084 <returnvalue>false</returnvalue>; <function>in_softirq()</function>
2085 returns <returnvalue>true</returnvalue>. Tasklets and softirqs
2086 both fall into the category of 'software interrupts'.
2087 </para>
2088 <para>
2089 Strictly speaking a softirq is one of up to 32 enumerated software
2090 interrupts which can run on multiple CPUs at once.
2091 Sometimes used to refer to tasklets as
2092 well (ie. all software interrupts).
2093 </para>
2094 </glossdef>
2095 </glossentry>
2096
2097 <glossentry id="gloss-tasklet">
2098 <glossterm>tasklet</glossterm>
2099 <glossdef>
2100 <para>
2101 A dynamically-registrable software interrupt,
2102 which is guaranteed to only run on one CPU at a time.
2103 </para>
2104 </glossdef>
2105 </glossentry>
2106
2107 <glossentry id="gloss-timers">
2108 <glossterm>timer</glossterm>
2109 <glossdef>
2110 <para>
2111 A dynamically-registrable software interrupt, which is run at
2112 (or close to) a given time. When running, it is just like a
2113 tasklet (in fact, they are called from the TIMER_SOFTIRQ).
2114 </para>
2115 </glossdef>
2116 </glossentry>
2117
2118 <glossentry id="gloss-up">
2119 <glossterm><acronym>UP</acronym></glossterm>
2120 <glossdef>
2121 <para>
2122 Uni-Processor: Non-SMP. (CONFIG_SMP=n).
2123 </para>
2124 </glossdef>
2125 </glossentry>
2126
2127 <glossentry id="gloss-usercontext">
2128 <glossterm>User Context</glossterm>
2129 <glossdef>
2130 <para>
2131 The kernel executing on behalf of a particular process (ie. a
2132 system call or trap) or kernel thread. You can tell which
2133 process with the <symbol>current</symbol> macro. Not to
2134 be confused with userspace. Can be interrupted by software or
2135 hardware interrupts.
2136 </para>
2137 </glossdef>
2138 </glossentry>
2139
2140 <glossentry id="gloss-userspace">
2141 <glossterm>Userspace</glossterm>
2142 <glossdef>
2143 <para>
2144 A process executing its own code outside the kernel.
2145 </para>
2146 </glossdef>
2147 </glossentry>
2148
2149 </glossary>
2150</book>
2151
diff --git a/Documentation/kernel-hacking/index.rst b/Documentation/kernel-hacking/index.rst
index ba208bf03ce9..fcb0eda3cca3 100644
--- a/Documentation/kernel-hacking/index.rst
+++ b/Documentation/kernel-hacking/index.rst
@@ -6,3 +6,4 @@ Kernel Hacking Guides
6 :maxdepth: 2 6 :maxdepth: 2
7 7
8 hacking 8 hacking
9 locking
diff --git a/Documentation/kernel-hacking/locking.rst b/Documentation/kernel-hacking/locking.rst
new file mode 100644
index 000000000000..976b2703df75
--- /dev/null
+++ b/Documentation/kernel-hacking/locking.rst
@@ -0,0 +1,1453 @@
1===========================
2Unreliable Guide To Locking
3===========================
4
5:Author: Rusty Russell
6
7Introduction
8============
9
10Welcome, to Rusty's Remarkably Unreliable Guide to Kernel Locking
11issues. This document describes the locking systems in the Linux Kernel
12in 2.6.
13
14With the wide availability of HyperThreading, and preemption in the
15Linux Kernel, everyone hacking on the kernel needs to know the
16fundamentals of concurrency and locking for SMP.
17
18The Problem With Concurrency
19============================
20
21(Skip this if you know what a Race Condition is).
22
23In a normal program, you can increment a counter like so:
24
25::
26
27 very_important_count++;
28
29
30This is what you would expect to happen:
31
32+------------------------------------+------------------------------------+
33| Instance 1 | Instance 2 |
34+====================================+====================================+
35| read very_important_count (5) | |
36+------------------------------------+------------------------------------+
37| add 1 (6) | |
38+------------------------------------+------------------------------------+
39| write very_important_count (6) | |
40+------------------------------------+------------------------------------+
41| | read very_important_count (6) |
42+------------------------------------+------------------------------------+
43| | add 1 (7) |
44+------------------------------------+------------------------------------+
45| | write very_important_count (7) |
46+------------------------------------+------------------------------------+
47
48Table: Expected Results
49
50This is what might happen:
51
52+------------------------------------+------------------------------------+
53| Instance 1 | Instance 2 |
54+====================================+====================================+
55| read very_important_count (5) | |
56+------------------------------------+------------------------------------+
57| | read very_important_count (5) |
58+------------------------------------+------------------------------------+
59| add 1 (6) | |
60+------------------------------------+------------------------------------+
61| | add 1 (6) |
62+------------------------------------+------------------------------------+
63| write very_important_count (6) | |
64+------------------------------------+------------------------------------+
65| | write very_important_count (6) |
66+------------------------------------+------------------------------------+
67
68Table: Possible Results
69
70Race Conditions and Critical Regions
71------------------------------------
72
73This overlap, where the result depends on the relative timing of
74multiple tasks, is called a race condition. The piece of code containing
75the concurrency issue is called a critical region. Especially since
76Linux started running on SMP machines, race conditions have become one
77of the major issues in kernel design and implementation.
78
79Preemption can have the same effect, even if there is only one CPU: by
80preempting one task during the critical region, we have exactly the same
81race condition. In this case the thread which preempts might run the
82critical region itself.
83
84The solution is to recognize when these simultaneous accesses occur, and
85use locks to make sure that only one instance can enter the critical
86region at any time. There are many friendly primitives in the Linux
87kernel to help you do this. And then there are the unfriendly
88primitives, but I'll pretend they don't exist.
89
90Locking in the Linux Kernel
91===========================
92
93If I could give you one piece of advice: never sleep with anyone crazier
94than yourself. But if I had to give you advice on locking: *keep it
95simple*.
96
97Be reluctant to introduce new locks.
98
99Strangely enough, this last one is the exact reverse of my advice when
100you *have* slept with someone crazier than yourself. And you should
101think about getting a big dog.
102
103Two Main Types of Kernel Locks: Spinlocks and Mutexes
104-----------------------------------------------------
105
106There are two main types of kernel locks. The fundamental type is the
107spinlock (``include/asm/spinlock.h``), which is a very simple
108single-holder lock: if you can't get the spinlock, you keep trying
109(spinning) until you can. Spinlocks are very small and fast, and can be
110used anywhere.
111
112The second type is a mutex (``include/linux/mutex.h``): it is like a
113spinlock, but you may block holding a mutex. If you can't lock a mutex,
114your task will suspend itself, and be woken up when the mutex is
115released. This means the CPU can do something else while you are
116waiting. There are many cases when you simply can't sleep (see
117`What Functions Are Safe To Call From Interrupts? <#sleeping-things>`__),
118and so have to use a spinlock instead.
119
120Neither type of lock is recursive: see
121`Deadlock: Simple and Advanced <#deadlock>`__.
122
123Locks and Uniprocessor Kernels
124------------------------------
125
126For kernels compiled without ``CONFIG_SMP``, and without
127``CONFIG_PREEMPT``, spinlocks do not exist at all. This is an excellent
128design decision: when no-one else can run at the same time, there is no
129reason to have a lock.
130
131If the kernel is compiled without ``CONFIG_SMP``, but ``CONFIG_PREEMPT``
132is set, then spinlocks simply disable preemption, which is sufficient to
133prevent any races. For most purposes, we can think of preemption as
134equivalent to SMP, and not worry about it separately.
135
136You should always test your locking code with ``CONFIG_SMP`` and
137``CONFIG_PREEMPT`` enabled, even if you don't have an SMP test box,
138because it will still catch some kinds of locking bugs.
139
140Mutexes still exist, because they are required for synchronization
141between user contexts, as we will see below.
142
143Locking Only In User Context
144----------------------------
145
146If you have a data structure which is only ever accessed from user
147context, then you can use a simple mutex (``include/linux/mutex.h``) to
148protect it. This is the most trivial case: you initialize the mutex.
149Then you can call :c:func:`mutex_lock_interruptible()` to grab the
150mutex, and :c:func:`mutex_unlock()` to release it. There is also a
151:c:func:`mutex_lock()`, which should be avoided, because it will
152not return if a signal is received.
153
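As a minimal sketch of this pattern (the lock, counter and function names
are invented for illustration, not taken from real kernel code)::

    #include <linux/mutex.h>

    static DEFINE_MUTEX(counter_lock);      /* protects event_count */
    static unsigned long event_count;

    int bump_event_count(void)
    {
            /* May sleep; fails with -EINTR if a signal arrives. */
            if (mutex_lock_interruptible(&counter_lock))
                    return -EINTR;
            event_count++;
            mutex_unlock(&counter_lock);
            return 0;
    }
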
154Example: ``net/netfilter/nf_sockopt.c`` allows registration of new
155:c:func:`setsockopt()` and :c:func:`getsockopt()` calls, with
156:c:func:`nf_register_sockopt()`. Registration and de-registration
157are only done on module load and unload (and boot time, where there is
158no concurrency), and the list of registrations is only consulted for an
159unknown :c:func:`setsockopt()` or :c:func:`getsockopt()` system
160call. The ``nf_sockopt_mutex`` is perfect to protect this, especially
161since the setsockopt and getsockopt calls may well sleep.
162
163Locking Between User Context and Softirqs
164-----------------------------------------
165
166If a softirq shares data with user context, you have two problems.
167Firstly, the current user context can be interrupted by a softirq, and
168secondly, the critical region could be entered from another CPU. This is
169where :c:func:`spin_lock_bh()` (``include/linux/spinlock.h``) is
170used. It disables softirqs on that CPU, then grabs the lock.
171:c:func:`spin_unlock_bh()` does the reverse. (The '_bh' suffix is
172a historical reference to "Bottom Halves", the old name for software
173interrupts. It should really be called 'spin_lock_softirq()' in a
174perfect world).
175
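Here is a sketch of both sides (``struct request_node``, the lock and the
functions are invented names for illustration)::

    struct request_node {
            struct list_head list;
            int data;
    };

    static DEFINE_SPINLOCK(pending_lock);   /* protects pending_list */
    static LIST_HEAD(pending_list);

    /* User context: block softirqs on this CPU while holding the lock,
       or the softirq could interrupt us and spin on the lock forever. */
    void queue_request(struct request_node *req)
    {
            spin_lock_bh(&pending_lock);
            list_add_tail(&req->list, &pending_list);
            spin_unlock_bh(&pending_lock);
    }

    /* Softirq side: user context cannot interrupt us here, so plain
       spin_lock() is enough to keep other CPUs out. */
    void process_requests(void)
    {
            spin_lock(&pending_lock);
            /* ... drain pending_list ... */
            spin_unlock(&pending_lock);
    }
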
176Note that you can also use :c:func:`spin_lock_irq()` or
177:c:func:`spin_lock_irqsave()` here, which stop hardware interrupts
178as well: see `Hard IRQ Context <#hardirq-context>`__.
179
180This works perfectly for UP as well: the spin lock vanishes, and this
181macro simply becomes :c:func:`local_bh_disable()`
182(``include/linux/interrupt.h``), which protects you from the softirq
183being run.
184
185Locking Between User Context and Tasklets
186-----------------------------------------
187
188This is exactly the same as above, because tasklets are actually run
189from a softirq.
190
191Locking Between User Context and Timers
192---------------------------------------
193
194This, too, is exactly the same as above, because timers are actually run
195from a softirq. From a locking point of view, tasklets and timers are
196identical.
197
198Locking Between Tasklets/Timers
199-------------------------------
200
201Sometimes a tasklet or timer might want to share data with another
202tasklet or timer.
203
204The Same Tasklet/Timer
205~~~~~~~~~~~~~~~~~~~~~~
206
207Since a tasklet is never run on two CPUs at once, you don't need to
208worry about your tasklet being reentrant (running twice at once), even
209on SMP.
210
211Different Tasklets/Timers
212~~~~~~~~~~~~~~~~~~~~~~~~~
213
214If another tasklet/timer wants to share data with your tasklet or
215timer, you will both need to use :c:func:`spin_lock()` and
216:c:func:`spin_unlock()` calls. :c:func:`spin_lock_bh()` is
217unnecessary here, as you are already in a tasklet, and none will be run
218on the same CPU.
219
220Locking Between Softirqs
221------------------------
222
223Often a softirq might want to share data with itself or a tasklet/timer.
224
225The Same Softirq
226~~~~~~~~~~~~~~~~
227
228The same softirq can run on the other CPUs: you can use a per-CPU array
229(see `Per-CPU Data <#per-cpu>`__) for better performance. If you're
230going so far as to use a softirq, you probably care about scalable
231performance enough to justify the extra complexity.
232
233You'll need to use :c:func:`spin_lock()` and
234:c:func:`spin_unlock()` for shared data.
235
236Different Softirqs
237~~~~~~~~~~~~~~~~~~
238
239You'll need to use :c:func:`spin_lock()` and
240:c:func:`spin_unlock()` for shared data, whether it be a timer,
241tasklet, different softirq or the same or another softirq: any of them
242could be running on a different CPU.
243
244Hard IRQ Context
245================
246
247Hardware interrupts usually communicate with a tasklet or softirq.
248Frequently this involves putting work in a queue, which the softirq will
249take out.
250
251Locking Between Hard IRQ and Softirqs/Tasklets
252----------------------------------------------
253
254If a hardware irq handler shares data with a softirq, you have two
255concerns. Firstly, the softirq processing can be interrupted by a
256hardware interrupt, and secondly, the critical region could be entered
257by a hardware interrupt on another CPU. This is where
258:c:func:`spin_lock_irq()` is used. It is defined to disable
259interrupts on that CPU, then grab the lock.
260:c:func:`spin_unlock_irq()` does the reverse.
261
262The irq handler does not need to use :c:func:`spin_lock_irq()`, because
263the softirq cannot run while the irq handler is running: it can use
264:c:func:`spin_lock()`, which is slightly faster. The only exception
265would be if a different hardware irq handler uses the same lock:
266:c:func:`spin_lock_irq()` will stop that from interrupting us.
267
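A sketch of that division of labour (the lock, the fifo helpers and the
handler names here are all hypothetical)::

    static DEFINE_SPINLOCK(fifo_lock);      /* protects the card's fifo */

    /* Hardware irq handler: the softirq cannot run on this CPU while we
       do, so plain spin_lock() is sufficient (and slightly faster). */
    static irqreturn_t card_interrupt(int irq, void *dev_id)
    {
            spin_lock(&fifo_lock);
            __fifo_push(dev_id);
            spin_unlock(&fifo_lock);
            return IRQ_HANDLED;
    }

    /* Softirq/tasklet side: must stop the irq handler interrupting us. */
    static void card_tasklet(unsigned long data)
    {
            spin_lock_irq(&fifo_lock);
            __fifo_drain();
            spin_unlock_irq(&fifo_lock);
    }
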
268This works perfectly for UP as well: the spin lock vanishes, and this
269macro simply becomes :c:func:`local_irq_disable()`
270(``include/asm/smp.h``), which protects you from the softirq/tasklet/BH
271being run.
272
273:c:func:`spin_lock_irqsave()` (``include/linux/spinlock.h``) is a
274variant which saves whether interrupts were on or off in a flags word,
275which is passed to :c:func:`spin_unlock_irqrestore()`. This means
276that the same code can be used inside a hard irq handler (where
277interrupts are already off) and in softirqs (where the irq disabling is
278required).
279
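A minimal sketch of the ``flags`` idiom (the device structure and function
are invented for illustration)::

    struct my_device {
            int state;
    };

    static DEFINE_SPINLOCK(state_lock);     /* protects my_device.state */

    /* Safe to call from user context, softirq or hard irq alike. */
    void set_device_state(struct my_device *dev, int state)
    {
            unsigned long flags;

            spin_lock_irqsave(&state_lock, flags);
            dev->state = state;
            spin_unlock_irqrestore(&state_lock, flags);
    }
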
280Note that softirqs (and hence tasklets and timers) are run on return
281from hardware interrupts, so :c:func:`spin_lock_irq()` also stops
282these. In that sense, :c:func:`spin_lock_irqsave()` is the most
283general and powerful locking function.
284
285Locking Between Two Hard IRQ Handlers
286-------------------------------------
287
288It is rare to have to share data between two IRQ handlers, but if you
289do, :c:func:`spin_lock_irqsave()` should be used: it is
290architecture-specific whether all interrupts are disabled inside irq
291handlers themselves.
292
293Cheat Sheet For Locking
294=======================
295
296Pete Zaitcev gives the following summary:
297
298- If you are in a process context (any syscall) and want to lock other
299  processes out, use a mutex. You can take a mutex and sleep
300  (``copy_from_user*()`` or ``kmalloc(x, GFP_KERNEL)``).
301
302- Otherwise (== data can be touched in an interrupt), use
303 :c:func:`spin_lock_irqsave()` and
304 :c:func:`spin_unlock_irqrestore()`.
305
306- Avoid holding spinlock for more than 5 lines of code and across any
307 function call (except accessors like :c:func:`readb()`).
308
309Table of Minimum Requirements
310-----------------------------
311
312The following table lists the *minimum* locking requirements between
313various contexts. In some cases, the same context can only be running on
314one CPU at a time, so no locking is required for that context (eg. a
315particular thread can only run on one CPU at a time, but if it needs to
316share data with another thread, locking is required).
317
318Remember the advice above: you can always use
319:c:func:`spin_lock_irqsave()`, which is a superset of all other
320spinlock primitives.
321
322+------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+
323| | IRQ Handler A | IRQ Handler B | Softirq A | Softirq B | Tasklet A | Tasklet B | Timer A | Timer B | User Context A | User Context B |
324+------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+
325| IRQ Handler A | None | | | | | | | | | |
326+------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+
327| IRQ Handler B | SLIS | None | | | | | | | | |
328+------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+
329| Softirq A | SLI | SLI | SL | | | | | | | |
330+------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+
331| Softirq B | SLI | SLI | SL | SL | | | | | | |
332+------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+
333| Tasklet A | SLI | SLI | SL | SL | None | | | | | |
334+------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+
335| Tasklet B | SLI | SLI | SL | SL | SL | None | | | | |
336+------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+
337| Timer A | SLI | SLI | SL | SL | SL | SL | None | | | |
338+------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+
339| Timer B | SLI | SLI | SL | SL | SL | SL | SL | None | | |
340+------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+
341| User Context A | SLI | SLI | SLBH | SLBH | SLBH | SLBH | SLBH | SLBH | None | |
342+------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+
343| User Context B | SLI | SLI | SLBH | SLBH | SLBH | SLBH | SLBH | SLBH | MLI | None |
344+------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+
345
346Table: Table of Locking Requirements
347
348+--------+----------------------------+
349| SLIS | spin_lock_irqsave |
350+--------+----------------------------+
351| SLI | spin_lock_irq |
352+--------+----------------------------+
353| SL | spin_lock |
354+--------+----------------------------+
355| SLBH | spin_lock_bh |
356+--------+----------------------------+
357| MLI | mutex_lock_interruptible |
358+--------+----------------------------+
359
360Table: Legend for Locking Requirements Table
361
362The trylock Functions
363=====================
364
365There are functions that try to acquire a lock only once and immediately
366return a value telling you whether they succeeded.
367They can be used if you need no access to the data protected by the
368lock while some other thread is holding it. You should acquire the
369lock properly later if you then need access to the data.
370
371:c:func:`spin_trylock()` does not spin but returns non-zero if it
372acquires the spinlock on the first try or 0 if not. This function can be
373used in all contexts like :c:func:`spin_lock()`: you must have
374disabled any contexts that might interrupt you and themselves acquire
375the spinlock.
376
377:c:func:`mutex_trylock()` does not suspend your task but returns
378non-zero if it could lock the mutex on the first try or 0 if not. This
379function cannot be safely used in hardware or software interrupt
380contexts despite not sleeping.
381
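For example, a sketch of an opportunistic update which simply gives up
when the lock is contended (the statistics counter is invented)::

    static DEFINE_SPINLOCK(stats_lock);     /* protects stats_hits */
    static unsigned long stats_hits;

    void stats_note_hit(void)
    {
            /* Non-zero return means we now hold the lock. */
            if (!spin_trylock(&stats_lock))
                    return;         /* busy: this update can be dropped */
            stats_hits++;
            spin_unlock(&stats_lock);
    }
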
382Common Examples
383===============
384
385Let's step through a simple example: a cache of number to name mappings.
386The cache keeps a count of how often each of the objects is used, and
387when it gets full, throws out the least used one.
388
389All In User Context
390-------------------
391
392For our first example, we assume that all operations are in user context
393(ie. from system calls), so we can sleep. This means we can use a mutex
394to protect the cache and all the objects within it. Here's the code::
395
396 #include <linux/list.h>
397 #include <linux/slab.h>
398 #include <linux/string.h>
399 #include <linux/mutex.h>
400 #include <asm/errno.h>
401
402 struct object
403 {
404 struct list_head list;
405 int id;
406 char name[32];
407 int popularity;
408 };
409
410 /* Protects the cache, cache_num, and the objects within it */
411 static DEFINE_MUTEX(cache_lock);
412 static LIST_HEAD(cache);
413 static unsigned int cache_num = 0;
414 #define MAX_CACHE_SIZE 10
415
416 /* Must be holding cache_lock */
417 static struct object *__cache_find(int id)
418 {
419 struct object *i;
420
421 list_for_each_entry(i, &cache, list)
422 if (i->id == id) {
423 i->popularity++;
424 return i;
425 }
426 return NULL;
427 }
428
429 /* Must be holding cache_lock */
430 static void __cache_delete(struct object *obj)
431 {
432 BUG_ON(!obj);
433 list_del(&obj->list);
434 kfree(obj);
435 cache_num--;
436 }
437
438 /* Must be holding cache_lock */
439 static void __cache_add(struct object *obj)
440 {
441 list_add(&obj->list, &cache);
442 if (++cache_num > MAX_CACHE_SIZE) {
443 struct object *i, *outcast = NULL;
444 list_for_each_entry(i, &cache, list) {
445 if (!outcast || i->popularity < outcast->popularity)
446 outcast = i;
447 }
448 __cache_delete(outcast);
449 }
450 }
451
452 int cache_add(int id, const char *name)
453 {
454 struct object *obj;
455
456 if ((obj = kmalloc(sizeof(*obj), GFP_KERNEL)) == NULL)
457 return -ENOMEM;
458
459 strlcpy(obj->name, name, sizeof(obj->name));
460 obj->id = id;
461 obj->popularity = 0;
462
463 mutex_lock(&cache_lock);
464 __cache_add(obj);
465 mutex_unlock(&cache_lock);
466 return 0;
467 }
468
469 void cache_delete(int id)
470 {
471 mutex_lock(&cache_lock);
472 __cache_delete(__cache_find(id));
473 mutex_unlock(&cache_lock);
474 }
475
476 int cache_find(int id, char *name)
477 {
478 struct object *obj;
479 int ret = -ENOENT;
480
481 mutex_lock(&cache_lock);
482 obj = __cache_find(id);
483 if (obj) {
484 ret = 0;
485 strcpy(name, obj->name);
486 }
487 mutex_unlock(&cache_lock);
488 return ret;
489 }
490
491Note that we always make sure we have the cache_lock when we add,
492delete, or look up the cache: both the cache infrastructure itself and
493the contents of the objects are protected by the lock. In this case it's
494easy, since we copy the data for the user, and never let them access the
495objects directly.
496
497There is a slight (and common) optimization here: in
498:c:func:`cache_add()` we set up the fields of the object before
499grabbing the lock. This is safe, as no-one else can access it until we
500put it in the cache.
501
502Accessing From Interrupt Context
503--------------------------------
504
505Now consider the case where :c:func:`cache_find()` can be called
506from interrupt context: either a hardware interrupt or a softirq. An
507example would be a timer which deletes objects from the cache.
508
509The change is shown below, in standard patch format: the ``-`` are lines
510which are taken away, and the ``+`` are lines which are added.
511
512::
513
514 --- cache.c.usercontext 2003-12-09 13:58:54.000000000 +1100
515 +++ cache.c.interrupt 2003-12-09 14:07:49.000000000 +1100
516 @@ -12,7 +12,7 @@
517 int popularity;
518 };
519
520 -static DEFINE_MUTEX(cache_lock);
521 +static DEFINE_SPINLOCK(cache_lock);
522 static LIST_HEAD(cache);
523 static unsigned int cache_num = 0;
524 #define MAX_CACHE_SIZE 10
525 @@ -55,6 +55,7 @@
526 int cache_add(int id, const char *name)
527 {
528 struct object *obj;
529 + unsigned long flags;
530
531 if ((obj = kmalloc(sizeof(*obj), GFP_KERNEL)) == NULL)
532 return -ENOMEM;
533 @@ -63,30 +64,33 @@
534 obj->id = id;
535 obj->popularity = 0;
536
537 - mutex_lock(&cache_lock);
538 + spin_lock_irqsave(&cache_lock, flags);
539 __cache_add(obj);
540 - mutex_unlock(&cache_lock);
541 + spin_unlock_irqrestore(&cache_lock, flags);
542 return 0;
543 }
544
545 void cache_delete(int id)
546 {
547 - mutex_lock(&cache_lock);
548 + unsigned long flags;
549 +
550 + spin_lock_irqsave(&cache_lock, flags);
551 __cache_delete(__cache_find(id));
552 - mutex_unlock(&cache_lock);
553 + spin_unlock_irqrestore(&cache_lock, flags);
554 }
555
556 int cache_find(int id, char *name)
557 {
558 struct object *obj;
559 int ret = -ENOENT;
560 + unsigned long flags;
561
562 - mutex_lock(&cache_lock);
563 + spin_lock_irqsave(&cache_lock, flags);
564 obj = __cache_find(id);
565 if (obj) {
566 ret = 0;
567 strcpy(name, obj->name);
568 }
569 - mutex_unlock(&cache_lock);
570 + spin_unlock_irqrestore(&cache_lock, flags);
571 return ret;
572 }
573
574Note that :c:func:`spin_lock_irqsave()` will turn off
575interrupts if they are on, and do nothing otherwise (if we are already
576in an interrupt handler); hence these functions are safe to call from
577any context.
578
579Unfortunately, :c:func:`cache_add()` calls :c:func:`kmalloc()`
580with the ``GFP_KERNEL`` flag, which is only legal in user context. I
581have assumed that :c:func:`cache_add()` is still only called in
582user context, otherwise this should become a parameter to
583:c:func:`cache_add()`.
584
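A sketch of what that might look like, rewriting the
:c:func:`cache_add()` above to take a ``gfp_t`` argument (this variant is
illustrative, not part of the original example code)::

    int cache_add(int id, const char *name, gfp_t gfp)
    {
            struct object *obj;
            unsigned long flags;

            /* The caller says whether we may sleep: GFP_KERNEL from
               user context, GFP_ATOMIC from interrupt context. */
            if ((obj = kmalloc(sizeof(*obj), gfp)) == NULL)
                    return -ENOMEM;

            strlcpy(obj->name, name, sizeof(obj->name));
            obj->id = id;
            obj->popularity = 0;

            spin_lock_irqsave(&cache_lock, flags);
            __cache_add(obj);
            spin_unlock_irqrestore(&cache_lock, flags);
            return 0;
    }
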
585Exposing Objects Outside This File
586----------------------------------
587
588If our objects contained more information, it might not be sufficient to
589copy the information in and out: other parts of the code might want to
590keep pointers to these objects, for example, rather than looking up the
591id every time. This produces two problems.
592
593The first problem is that we use the ``cache_lock`` to protect objects:
594we'd need to make this non-static so the rest of the code can use it.
595This makes locking trickier, as it is no longer all in one place.
596
597The second problem is the lifetime problem: if another structure keeps a
598pointer to an object, it presumably expects that pointer to remain
599valid. Unfortunately, this is only guaranteed while you hold the lock,
600otherwise someone might call :c:func:`cache_delete()` and even
601worse, add another object, re-using the same address.
602
603As there is only one lock, you can't hold it forever: no-one else would
604get any work done.
605
606The solution to this problem is to use a reference count: everyone who
607has a pointer to the object increases it when they first get the object,
608and drops the reference count when they're finished with it. Whoever
609drops it to zero knows it is unused, and can actually delete it.
610
611Here is the code::
612
613 --- cache.c.interrupt 2003-12-09 14:25:43.000000000 +1100
614 +++ cache.c.refcnt 2003-12-09 14:33:05.000000000 +1100
615 @@ -7,6 +7,7 @@
616 struct object
617 {
618 struct list_head list;
619 + unsigned int refcnt;
620 int id;
621 char name[32];
622 int popularity;
623 @@ -17,6 +18,35 @@
624 static unsigned int cache_num = 0;
625 #define MAX_CACHE_SIZE 10
626
627 +static void __object_put(struct object *obj)
628 +{
629 + if (--obj->refcnt == 0)
630 + kfree(obj);
631 +}
632 +
633 +static void __object_get(struct object *obj)
634 +{
635 + obj->refcnt++;
636 +}
637 +
638 +void object_put(struct object *obj)
639 +{
640 + unsigned long flags;
641 +
642 + spin_lock_irqsave(&cache_lock, flags);
643 + __object_put(obj);
644 + spin_unlock_irqrestore(&cache_lock, flags);
645 +}
646 +
647 +void object_get(struct object *obj)
648 +{
649 + unsigned long flags;
650 +
651 + spin_lock_irqsave(&cache_lock, flags);
652 + __object_get(obj);
653 + spin_unlock_irqrestore(&cache_lock, flags);
654 +}
655 +
656 /* Must be holding cache_lock */
657 static struct object *__cache_find(int id)
658 {
659 @@ -35,6 +65,7 @@
660 {
661 BUG_ON(!obj);
662 list_del(&obj->list);
663 + __object_put(obj);
664 cache_num--;
665 }
666
667 @@ -63,6 +94,7 @@
668 strlcpy(obj->name, name, sizeof(obj->name));
669 obj->id = id;
670 obj->popularity = 0;
671 + obj->refcnt = 1; /* The cache holds a reference */
672
673 spin_lock_irqsave(&cache_lock, flags);
674 __cache_add(obj);
675 @@ -79,18 +111,15 @@
676 spin_unlock_irqrestore(&cache_lock, flags);
677 }
678
679 -int cache_find(int id, char *name)
680 +struct object *cache_find(int id)
681 {
682 struct object *obj;
683 - int ret = -ENOENT;
684 unsigned long flags;
685
686 spin_lock_irqsave(&cache_lock, flags);
687 obj = __cache_find(id);
688 - if (obj) {
689 - ret = 0;
690 - strcpy(name, obj->name);
691 - }
692 + if (obj)
693 + __object_get(obj);
694 spin_unlock_irqrestore(&cache_lock, flags);
695 - return ret;
696 + return obj;
697 }
698
699We encapsulate the reference counting in the standard 'get' and 'put'
700functions. Now we can return the object itself from
701:c:func:`cache_find()` which has the advantage that the user can
702now sleep holding the object (eg. to :c:func:`copy_to_user()` the
703name to userspace).
704
705The other point to note is that I said a reference should be held for
706every pointer to the object: thus the reference count is 1 when first
707inserted into the cache. In some versions the framework does not hold a
708reference count, but they are more complicated.
709
710Using Atomic Operations For The Reference Count
711~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
712
713In practice, ``atomic_t`` would usually be used for refcnt. There are a
714number of atomic operations defined in ``include/asm/atomic.h``: these
715are guaranteed to be seen atomically from all CPUs in the system, so no
716lock is required. In this case, it is simpler than using spinlocks,
717although for anything non-trivial using spinlocks is clearer. The
718:c:func:`atomic_inc()` and :c:func:`atomic_dec_and_test()`
719are used instead of the standard increment and decrement operators, and
720the lock is no longer used to protect the reference count itself.
721
722::
723
724 --- cache.c.refcnt 2003-12-09 15:00:35.000000000 +1100
725 +++ cache.c.refcnt-atomic 2003-12-11 15:49:42.000000000 +1100
726 @@ -7,7 +7,7 @@
727 struct object
728 {
729 struct list_head list;
730 - unsigned int refcnt;
731 + atomic_t refcnt;
732 int id;
733 char name[32];
734 int popularity;
735 @@ -18,33 +18,15 @@
736 static unsigned int cache_num = 0;
737 #define MAX_CACHE_SIZE 10
738
739 -static void __object_put(struct object *obj)
740 -{
741 - if (--obj->refcnt == 0)
742 - kfree(obj);
743 -}
744 -
745 -static void __object_get(struct object *obj)
746 -{
747 - obj->refcnt++;
748 -}
749 -
750 void object_put(struct object *obj)
751 {
752 - unsigned long flags;
753 -
754 - spin_lock_irqsave(&cache_lock, flags);
755 - __object_put(obj);
756 - spin_unlock_irqrestore(&cache_lock, flags);
757 + if (atomic_dec_and_test(&obj->refcnt))
758 + kfree(obj);
759 }
760
761 void object_get(struct object *obj)
762 {
763 - unsigned long flags;
764 -
765 - spin_lock_irqsave(&cache_lock, flags);
766 - __object_get(obj);
767 - spin_unlock_irqrestore(&cache_lock, flags);
768 + atomic_inc(&obj->refcnt);
769 }
770
771 /* Must be holding cache_lock */
772 @@ -65,7 +47,7 @@
773 {
774 BUG_ON(!obj);
775 list_del(&obj->list);
776 - __object_put(obj);
777 + object_put(obj);
778 cache_num--;
779 }
780
781 @@ -94,7 +76,7 @@
782 strlcpy(obj->name, name, sizeof(obj->name));
783 obj->id = id;
784 obj->popularity = 0;
785 - obj->refcnt = 1; /* The cache holds a reference */
786 + atomic_set(&obj->refcnt, 1); /* The cache holds a reference */
787
788 spin_lock_irqsave(&cache_lock, flags);
789 __cache_add(obj);
790 @@ -119,7 +101,7 @@
791 spin_lock_irqsave(&cache_lock, flags);
792 obj = __cache_find(id);
793 if (obj)
794 - __object_get(obj);
795 + object_get(obj);
796 spin_unlock_irqrestore(&cache_lock, flags);
797 return obj;
798 }
799
800Protecting The Objects Themselves
801---------------------------------
802
803In these examples, we assumed that the objects (except the reference
804counts) never changed once they are created. If we wanted to allow the
805name to change, there are three possibilities:
806
807- You can make ``cache_lock`` non-static, and tell people to grab that
808 lock before changing the name in any object.
809
810- You can provide a :c:func:`cache_obj_rename()` which grabs this
811 lock and changes the name for the caller, and tell everyone to use
812 that function.
813
814- You can make the ``cache_lock`` protect only the cache itself, and
815 use another lock to protect the name.
816
817Theoretically, you can make the locks as fine-grained as one lock for
818every field, for every object. In practice, the most common variants
819are:
820
821- One lock which protects the infrastructure (the ``cache`` list in
822 this example) and all the objects. This is what we have done so far.
823
824- One lock which protects the infrastructure (including the list
825 pointers inside the objects), and one lock inside the object which
826 protects the rest of that object.
827
828- Multiple locks to protect the infrastructure (eg. one lock per hash
829 chain), possibly with a separate per-object lock.
830
831Here is the "lock-per-object" implementation:
832
833::
834
835 --- cache.c.refcnt-atomic 2003-12-11 15:50:54.000000000 +1100
836 +++ cache.c.perobjectlock 2003-12-11 17:15:03.000000000 +1100
837 @@ -6,11 +6,17 @@
838
839 struct object
840 {
841 + /* These two protected by cache_lock. */
842 struct list_head list;
843 + int popularity;
844 +
845 atomic_t refcnt;
846 +
847 + /* Doesn't change once created. */
848 int id;
849 +
850 + spinlock_t lock; /* Protects the name */
851 char name[32];
852 - int popularity;
853 };
854
855 static DEFINE_SPINLOCK(cache_lock);
856 @@ -77,6 +84,7 @@
857 obj->id = id;
858 obj->popularity = 0;
859 atomic_set(&obj->refcnt, 1); /* The cache holds a reference */
860 + spin_lock_init(&obj->lock);
861
862 spin_lock_irqsave(&cache_lock, flags);
863 __cache_add(obj);
864
865Note that I decided that the popularity count should be protected by the
866``cache_lock`` rather than the per-object lock: this is because it (like
867the :c:type:`struct list_head <list_head>` inside the object)
868is logically part of the infrastructure. This way, I don't need to grab
869the lock of every object in :c:func:`__cache_add()` when seeking
870the least popular.
871
872I also decided that the id member is unchangeable, so I don't need to
873grab each object lock in :c:func:`__cache_find()` to examine the
874id: the object lock is only used by a caller who wants to read or write
875the name field.
876
877Note also that I added a comment describing what data was protected by
878which locks. This is extremely important, as it describes the runtime
879behavior of the code, and can be hard to glean just from reading. And as
880Alan Cox says, “Lock data, not code”.
881
882Common Problems
883===============
884
885Deadlock: Simple and Advanced
886-----------------------------
887
888There is a coding bug where a piece of code tries to grab a spinlock
889twice: it will spin forever, waiting for the lock to be released
890(spinlocks, rwlocks and mutexes are not recursive in Linux). This is
891trivial to diagnose: not a
892stay-up-five-nights-talk-to-fluffy-code-bunnies kind of problem.
893
894For a slightly more complex case, imagine you have a region shared by a
895softirq and user context. If you use a :c:func:`spin_lock()` call
896to protect it, it is possible that the user context will be interrupted
897by the softirq while it holds the lock, and the softirq will then spin
898forever trying to get the same lock.
899
900Both of these are called deadlock, and as shown above, it can occur even
901with a single CPU (although not on UP compiles, since spinlocks vanish
902on kernel compiles with ``CONFIG_SMP``\ =n; you'll still get data
903corruption in the second example).
904
905This complete lockup is easy to diagnose: on SMP boxes the watchdog
906timer or compiling with ``DEBUG_SPINLOCK`` set
907(``include/linux/spinlock.h``) will show this up immediately when it
908happens.
909
910A more complex problem is the so-called 'deadly embrace', involving two
911or more locks. Say you have a hash table: each entry in the table is a
912spinlock, and a chain of hashed objects. Inside a softirq handler, you
913sometimes want to alter an object from one place in the hash to another:
914you grab the spinlock of the old hash chain and the spinlock of the new
915hash chain, and delete the object from the old one, and insert it in the
916new one.
917
918There are two problems here. First, if your code ever tries to move the
919object to the same chain, it will deadlock with itself as it tries to
920lock it twice. Secondly, if the same softirq on another CPU is trying to
921move another object in the reverse direction, the following could
922happen:
923
924+-----------------------+-----------------------+
925| CPU 1 | CPU 2 |
926+=======================+=======================+
927| Grab lock A -> OK | Grab lock B -> OK |
928+-----------------------+-----------------------+
929| Grab lock B -> spin | Grab lock A -> spin |
930+-----------------------+-----------------------+
931
932Table: Consequences
933
934The two CPUs will spin forever, waiting for the other to give up their
935lock. It will look, smell, and feel like a crash.
936
937Preventing Deadlock
938-------------------
939
940Textbooks will tell you that if you always lock in the same order, you
941will never get this kind of deadlock. Practice will tell you that this
942approach doesn't scale: when I create a new lock, I don't understand
943enough of the kernel to figure out where in the 5000 lock hierarchy it
944will fit.
945
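If you do have to take two locks of the same kind, as in the hash-chain
example above, one common way to impose a consistent order is to compare
addresses (a sketch, not taken from real kernel code)::

    static void lock_two_chains(spinlock_t *a, spinlock_t *b)
    {
            if (a == b) {           /* same chain: only one lock to take */
                    spin_lock(a);
            } else if (a < b) {     /* always lock the lower address first */
                    spin_lock(a);
                    spin_lock(b);
            } else {
                    spin_lock(b);
                    spin_lock(a);
            }
    }
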
946The best locks are encapsulated: they never get exposed in headers, and
947are never held around calls to non-trivial functions outside the same
948file. You can read through this code and see that it will never
949deadlock, because it never tries to grab another lock while it has that
950one. People using your code don't even need to know you are using a
951lock.
952
953A classic problem here is when you provide callbacks or hooks: if you
954call these with the lock held, you risk simple deadlock, or a deadly
955embrace (who knows what the callback will do?). Remember, the other
956programmers are out to get you, so don't do this.
957
958Overzealous Prevention Of Deadlocks
959~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
960
961Deadlocks are problematic, but not as bad as data corruption. Code which
962grabs a read lock, searches a list, fails to find what it wants, drops
963the read lock, grabs a write lock and inserts the object has a race
964condition.
965
966If you don't see why, please stay the fuck away from my code.
967
968Racing Timers: A Kernel Pastime
969-------------------------------
970
971Timers can produce their own special problems with races. Consider a
972collection of objects (list, hash, etc) where each object has a timer
973which is due to destroy it.
974
975If you want to destroy the entire collection (say on module removal),
976you might do the following::
977
978 /* THIS CODE BAD BAD BAD BAD: IF IT WAS ANY WORSE IT WOULD USE
979 HUNGARIAN NOTATION */
980 spin_lock_bh(&list_lock);
981
982 while (list) {
983 struct foo *next = list->next;
984 del_timer(&list->timer);
985 kfree(list);
986 list = next;
987 }
988
989 spin_unlock_bh(&list_lock);
990
991
992Sooner or later, this will crash on SMP, because a timer can have just
993gone off before the :c:func:`spin_lock_bh()`, and it will only get
994the lock after we :c:func:`spin_unlock_bh()`, and then try to free
995the element (which has already been freed!).
996
997This can be avoided by checking the result of
998:c:func:`del_timer()`: if it returns 1, the timer has been deleted.
999If 0, it means (in this case) that it is currently running, so we can
1000do::
1001
1002 retry:
1003 spin_lock_bh(&list_lock);
1004
1005 while (list) {
1006 struct foo *next = list->next;
1007 if (!del_timer(&list->timer)) {
1008 /* Give timer a chance to delete this */
1009 spin_unlock_bh(&list_lock);
1010 goto retry;
1011 }
1012 kfree(list);
1013 list = next;
1014 }
1015
1016 spin_unlock_bh(&list_lock);
1017
1018
1019Another common problem is deleting timers which restart themselves (by
1020calling :c:func:`add_timer()` at the end of their timer function).
1021Because this is a fairly common case which is prone to races, you should
1022use :c:func:`del_timer_sync()` (``include/linux/timer.h``) to
1023handle this case. It returns the number of times the timer had to be
1024deleted before we finally stopped it from adding itself back in.
1025
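A minimal teardown sketch, assuming a hypothetical ``struct foo`` whose
timer function re-arms itself::

    struct foo {
            struct timer_list timer;        /* re-armed by its own handler */
            /* ... */
    };

    void foo_destroy(struct foo *f)
    {
            /* Waits for a running handler to finish and stops it from
               re-adding itself; afterwards it is safe to free f. */
            del_timer_sync(&f->timer);
            kfree(f);
    }
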
1026Locking Speed
1027=============
1028
1029There are three main things to worry about when considering the speed of
1030some code which does locking. First is concurrency: how many things are
1031going to be waiting while someone else is holding a lock. Second is the
1032time taken to actually acquire and release an uncontended lock. Third is
1033using fewer, or smarter locks. I'm assuming that the lock is used fairly
1034often: otherwise, you wouldn't be concerned about efficiency.
1035
1036Concurrency depends on how long the lock is usually held: you should
1037hold the lock for as long as needed, but no longer. In the cache
1038example, we always create the object without the lock held, and then
1039grab the lock only when we are ready to insert it in the list.
1040
1041Acquisition times depend on how much damage the lock operations do to
1042the pipeline (pipeline stalls) and how likely it is that this CPU was
1043the last one to grab the lock (ie. is the lock cache-hot for this CPU):
1044on a machine with more CPUs, this likelihood drops fast. Consider a
1045700MHz Intel Pentium III: an instruction takes about 0.7ns, an atomic
1046increment takes about 58ns, a lock which is cache-hot on this CPU takes
1047160ns, and a cacheline transfer from another CPU takes an additional 170
1048to 360ns. (These figures from Paul McKenney's `Linux Journal RCU
1049article <http://www.linuxjournal.com/article.php?sid=6993>`__).
1050
1051These two aims conflict: holding a lock for a short time might be done
1052by splitting locks into parts (such as in our final per-object-lock
1053example), but this increases the number of lock acquisitions, and the
1054results are often slower than having a single lock. This is another
1055reason to advocate locking simplicity.
1056
1057The third concern is addressed below: there are some methods to reduce
1058the amount of locking which needs to be done.
1059
1060Read/Write Lock Variants
1061------------------------
1062
1063Both spinlocks and mutexes have read/write variants: ``rwlock_t`` and
1064:c:type:`struct rw_semaphore <rw_semaphore>`. These divide
1065users into two classes: the readers and the writers. If you are only
1066reading the data, you can get a read lock, but to write to the data you
1067need the write lock. Many people can hold a read lock, but a writer must
1068be the sole holder.
1069
1070If your code divides neatly along reader/writer lines (as our cache code
1071does), and the lock is held by readers for significant lengths of time,
1072using these locks can help. They are slightly slower than the normal
1073locks though, so in practice ``rwlock_t`` is not usually worthwhile.
1074
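A sketch of the reader/writer split (the table and its helpers are
invented names for illustration)::

    static DEFINE_RWLOCK(table_lock);

    int table_lookup(int key)
    {
            int val;

            read_lock(&table_lock);         /* many readers may hold this */
            val = __table_lookup(key);
            read_unlock(&table_lock);
            return val;
    }

    void table_update(int key, int val)
    {
            write_lock(&table_lock);        /* writer must be the sole holder */
            __table_update(key, val);
            write_unlock(&table_lock);
    }
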
1075Avoiding Locks: Read Copy Update
1076--------------------------------
1077
1078There is a special method of read/write locking called Read Copy Update.
1079Using RCU, the readers can avoid taking a lock altogether: as we expect
1080our cache to be read more often than updated (otherwise the cache is a
1081waste of time), it is a candidate for this optimization.
1082
1083How do we get rid of read locks? Getting rid of read locks means that
1084writers may be changing the list underneath the readers. That is
1085actually quite simple: we can read a linked list while an element is
1086being added if the writer adds the element very carefully. For example,
1087adding ``new`` to a single linked list called ``list``::
1088
1089 new->next = list->next;
1090 wmb();
1091 list->next = new;
1092
1093
1094The :c:func:`wmb()` is a write memory barrier. It ensures that the
1095first operation (setting the new element's ``next`` pointer) is complete
1096and will be seen by all CPUs, before the second operation is (putting
1097the new element into the list). This is important, since modern
1098compilers and modern CPUs can both reorder instructions unless told
1099otherwise: we want a reader to either not see the new element at all, or
1100see the new element with the ``next`` pointer correctly pointing at the
1101rest of the list.
1102
1103Fortunately, there is a function to do this for standard
1104:c:type:`struct list_head <list_head>` lists:
1105:c:func:`list_add_rcu()` (``include/linux/list.h``).
1106
1107Removing an element from the list is even simpler: we replace the
1108pointer to the old element with a pointer to its successor, and readers
1109will either see it, or skip over it.
1110
1111::
1112
1113 list->next = old->next;
1114
1115
1116There is :c:func:`list_del_rcu()` (``include/linux/list.h``) which
1117does this (the normal version poisons the old object, which we don't
1118want).
1119
The reader must also be careful: some CPUs can look through the ``next``
pointer to start reading the contents of the next element early, but
don't realize that the pre-fetched contents are wrong when the ``next``
pointer changes underneath them. Once again, there is a
:c:func:`list_for_each_entry_rcu()` (``include/linux/list.h``)
to help you. Of course, writers can just use
:c:func:`list_for_each_entry()`, since there cannot be two
simultaneous writers.

Our final dilemma is this: when can we actually destroy the removed
element? Remember, a reader might be stepping through this element in
the list right now: if we free this element and the ``next`` pointer
changes, the reader will jump off into garbage and crash. We need to
wait until we know that all the readers who were traversing the list
when we deleted the element are finished. We use
:c:func:`call_rcu()` to register a callback which will actually
destroy the object once all pre-existing readers are finished.
Alternatively, :c:func:`synchronize_rcu()` may be used to block
until all pre-existing readers are finished.

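A sketch of the blocking variant, for a caller which is allowed to
sleep (again using the names from our running example)::

    spin_lock(&cache_lock);
    list_del_rcu(&obj->list);
    cache_num--;
    spin_unlock(&cache_lock);

    synchronize_rcu();  /* may sleep: waits for pre-existing readers */
    object_put(obj);    /* no reader can still be traversing obj */
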
But how does Read Copy Update know when the readers are finished? The
method is this: firstly, the readers always traverse the list inside
:c:func:`rcu_read_lock()`/:c:func:`rcu_read_unlock()` pairs:
these simply disable preemption so the reader won't go to sleep while
reading the list.

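Put together, a reader's traversal of our cache looks like this sketch
(``cache`` and ``struct object`` as in the running example)::

    struct object *i;

    rcu_read_lock();
    list_for_each_entry_rcu(i, &cache, list) {
            if (i->id == id) {
                    /* use i, or take a reference before unlocking */
                    break;
            }
    }
    rcu_read_unlock();
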
RCU then waits until every other CPU has slept at least once: since
readers cannot sleep, we know that any readers which were traversing the
list during the deletion are finished, and the callback is triggered.
The real Read Copy Update code is a little more optimized than this, but
this is the fundamental idea.

::

    --- cache.c.perobjectlock   2003-12-11 17:15:03.000000000 +1100
    +++ cache.c.rcupdate        2003-12-11 17:55:14.000000000 +1100
    @@ -1,15 +1,18 @@
     #include <linux/list.h>
     #include <linux/slab.h>
     #include <linux/string.h>
    +#include <linux/rcupdate.h>
     #include <linux/mutex.h>
     #include <asm/errno.h>

     struct object
     {
    -        /* These two protected by cache_lock. */
    +        /* This is protected by RCU */
             struct list_head list;
             int popularity;

    +        struct rcu_head rcu;
    +
             atomic_t refcnt;

             /* Doesn't change once created. */
    @@ -40,7 +43,7 @@
     {
             struct object *i;

    -        list_for_each_entry(i, &cache, list) {
    +        list_for_each_entry_rcu(i, &cache, list) {
                     if (i->id == id) {
                             i->popularity++;
                             return i;
    @@ -49,19 +52,25 @@
             return NULL;
     }

    +/* Final discard done once we know no readers are looking. */
    +static void cache_delete_rcu(void *arg)
    +{
    +        object_put(arg);
    +}
    +
     /* Must be holding cache_lock */
     static void __cache_delete(struct object *obj)
     {
             BUG_ON(!obj);
    -        list_del(&obj->list);
    -        object_put(obj);
    +        list_del_rcu(&obj->list);
             cache_num--;
    +        call_rcu(&obj->rcu, cache_delete_rcu);
     }

     /* Must be holding cache_lock */
     static void __cache_add(struct object *obj)
     {
    -        list_add(&obj->list, &cache);
    +        list_add_rcu(&obj->list, &cache);
             if (++cache_num > MAX_CACHE_SIZE) {
                     struct object *i, *outcast = NULL;
                     list_for_each_entry(i, &cache, list) {
    @@ -104,12 +114,11 @@
     struct object *cache_find(int id)
     {
             struct object *obj;
    -        unsigned long flags;

    -        spin_lock_irqsave(&cache_lock, flags);
    +        rcu_read_lock();
             obj = __cache_find(id);
             if (obj)
                     object_get(obj);
    -        spin_unlock_irqrestore(&cache_lock, flags);
    +        rcu_read_unlock();
             return obj;
     }

Note that the reader will alter the ``popularity`` member in
:c:func:`__cache_find()`, and now it doesn't hold a lock. One
solution would be to make it an ``atomic_t``, but for this usage, we
don't really care about races: an approximate result is good enough, so
I didn't change it.

The result is that :c:func:`cache_find()` requires no
synchronization with any other functions, so it is almost as fast on
SMP as it would be on UP.

There is a further optimization possible here: remember our original
cache code, where there were no reference counts and the caller simply
held the lock whenever using the object? This is still possible: if you
hold the lock, no one can delete the object, so you don't need to get
and put the reference count.

Now, because the 'read lock' in RCU is simply disabling preemption, a
caller which always has preemption disabled between calling
:c:func:`cache_find()` and :c:func:`object_put()` does not
need to actually get and put the reference count: we could expose
:c:func:`__cache_find()` by making it non-static, and such
callers could simply call that.

The benefit here is that the reference count is not written to: the
object is not altered in any way, which is much faster on SMP machines
due to caching.

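Such a caller might look like this sketch (it assumes
:c:func:`__cache_find()` has been made non-static, and
``use_object()`` is a hypothetical helper)::

    struct object *obj;

    rcu_read_lock();            /* disables preemption */
    obj = __cache_find(id);
    if (obj)
            use_object(obj);    /* no object_get()/object_put() needed */
    rcu_read_unlock();
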
Per-CPU Data
------------

Another technique for avoiding locking which is used fairly widely is to
duplicate information for each CPU. For example, if you wanted to keep a
count of a common condition, you could use a spin lock and a single
counter. Nice and simple.

If that was too slow (it's usually not, but if you've got a really big
machine to test on and can show that it is), you could instead use a
counter for each CPU, then none of them need an exclusive lock. See
:c:func:`DEFINE_PER_CPU()`, :c:func:`get_cpu_var()` and
:c:func:`put_cpu_var()` (``include/linux/percpu.h``).

Of particular use for simple per-cpu counters is the ``local_t`` type,
and the :c:func:`cpu_local_inc()` and related functions, which are
more efficient than simple code on some architectures
(``include/asm/local.h``).

Note that there is no simple, reliable way of getting an exact value of
such a counter, without introducing more locks. This is not a problem
for some uses.

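A sketch of the per-CPU counter approach (the counter name is
hypothetical; the total is only approximate, as noted above)::

    #include <linux/percpu.h>

    static DEFINE_PER_CPU(unsigned long, hit_count);

    static void note_hit(void)
    {
            get_cpu_var(hit_count)++;       /* disables preemption */
            put_cpu_var(hit_count);
    }

    /* Approximate sum: concurrent increments may be missed. */
    static unsigned long total_hits(void)
    {
            unsigned long sum = 0;
            int cpu;

            for_each_possible_cpu(cpu)
                    sum += per_cpu(hit_count, cpu);
            return sum;
    }
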
Data Which Is Mostly Used By An IRQ Handler
-------------------------------------------

If data is always accessed from within the same IRQ handler, you don't
need a lock at all: the kernel already guarantees that the irq handler
will not run simultaneously on multiple CPUs.

Manfred Spraul points out that you can still do this, even if the data
is very occasionally accessed in user context or softirqs/tasklets. The
irq handler doesn't use a lock, and all other accesses are done as
follows::

    spin_lock(&lock);
    disable_irq(irq);
    ...
    enable_irq(irq);
    spin_unlock(&lock);

The :c:func:`disable_irq()` prevents the irq handler from running
(and waits for it to finish if it's currently running on other CPUs).
The spinlock prevents any other accesses happening at the same time.
Naturally, this is slower than just a :c:func:`spin_lock_irq()`
call, so it only makes sense if this type of access happens extremely
rarely.

What Functions Are Safe To Call From Interrupts?
================================================

Many functions in the kernel sleep (ie. call schedule()) directly or
indirectly: you can never call them while holding a spinlock, or with
preemption disabled. This also means you need to be in user context:
calling them from an interrupt is illegal.

Some Functions Which Sleep
--------------------------

The most common ones are listed below, but you usually have to read the
code to find out if other calls are safe. If everyone else who calls it
can sleep, you probably need to be able to sleep, too. In particular,
registration and deregistration functions usually expect to be called
from user context, and can sleep. (A sketch of a typical user-context
caller follows the list below.)

- Accesses to userspace:

  - :c:func:`copy_from_user()`

  - :c:func:`copy_to_user()`

  - :c:func:`get_user()`

  - :c:func:`put_user()`

- ``kmalloc(GFP_KERNEL)``

- :c:func:`mutex_lock_interruptible()` and
  :c:func:`mutex_lock()`

  There is a :c:func:`mutex_trylock()` which does not sleep.
  Still, it must not be used inside interrupt context since its
  implementation is not safe for that. :c:func:`mutex_unlock()`
  will also never sleep. It cannot be used in interrupt context either
  since a mutex must be released by the same task that acquired it.

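For instance, a typical user-context path may combine several of these
(a sketch: ``my_mutex`` and the whole handler are hypothetical)::

    static DEFINE_MUTEX(my_mutex);

    static ssize_t my_write(const char __user *ubuf, size_t len)
    {
            char *buf;

            buf = kmalloc(len, GFP_KERNEL);         /* may sleep */
            if (!buf)
                    return -ENOMEM;
            if (copy_from_user(buf, ubuf, len)) {   /* may sleep */
                    kfree(buf);
                    return -EFAULT;
            }
            mutex_lock(&my_mutex);                  /* may sleep */
            /* ... consume buf ... */
            mutex_unlock(&my_mutex);
            kfree(buf);
            return len;
    }
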
Some Functions Which Don't Sleep
--------------------------------

Some functions are safe to call from any context, or holding almost any
lock.

- :c:func:`printk()`

- :c:func:`kfree()`

- :c:func:`add_timer()` and :c:func:`del_timer()`

Mutex API reference
===================

.. kernel-doc:: include/linux/mutex.h
   :internal:

.. kernel-doc:: kernel/locking/mutex.c
   :export:

Futex API reference
===================

.. kernel-doc:: kernel/futex.c
   :internal:

Further reading
===============

- ``Documentation/locking/spinlocks.txt``: Linus Torvalds' spinlocking
  tutorial in the kernel sources.

- Unix Systems for Modern Architectures: Symmetric Multiprocessing and
  Caching for Kernel Programmers:

  Curt Schimmel's very good introduction to kernel level locking (not
  written for Linux, but nearly everything applies). The book is
  expensive, but really worth every penny to understand SMP locking.
  [ISBN: 0201633388]

Thanks
======

Thanks to Telsa Gwynne for DocBooking, neatening and adding style.

Thanks to Martin Pool, Philipp Rumpf, Stephen Rothwell, Paul Mackerras,
Ruedi Aschwanden, Alan Cox, Manfred Spraul, Tim Waugh, Pete Zaitcev,
James Morris, Robert Love, Paul McKenney and John Ashby for
proofreading, correcting, flaming, commenting.

Thanks to the cabal for having no influence on this document.

Glossary
========

preemption
  Prior to 2.5, or when ``CONFIG_PREEMPT`` is unset, processes in user
  context inside the kernel would not preempt each other (ie. you had that
  CPU until you gave it up, except for interrupts). With the addition of
  ``CONFIG_PREEMPT`` in 2.5.4, this changed: when in user context, higher
  priority tasks can "cut in": spinlocks were changed to disable
  preemption, even on UP.

bh
  Bottom Half: for historical reasons, functions with '_bh' in them often
  now refer to any software interrupt, e.g. :c:func:`spin_lock_bh()`
  blocks any software interrupt on the current CPU. Bottom halves are
  deprecated, and will eventually be replaced by tasklets. Only one bottom
  half will be running at any time.

Hardware Interrupt / Hardware IRQ
  Hardware interrupt request. :c:func:`in_irq()` returns true in a
  hardware interrupt handler.

Interrupt Context
  Not user context: processing a hardware irq or software irq. Indicated
  by the :c:func:`in_interrupt()` macro returning true.

SMP
  Symmetric Multi-Processor: kernels compiled for multiple-CPU machines
  (``CONFIG_SMP=y``).

Software Interrupt / softirq
  Software interrupt handler. :c:func:`in_irq()` returns false;
  :c:func:`in_softirq()` returns true. Tasklets and softirqs both
  fall into the category of 'software interrupts'.

  Strictly speaking a softirq is one of up to 32 enumerated software
  interrupts which can run on multiple CPUs at once. Sometimes used to
  refer to tasklets as well (ie. all software interrupts).

tasklet
  A dynamically-registrable software interrupt, which is guaranteed to
  only run on one CPU at a time.

timer
  A dynamically-registrable software interrupt, which is run at (or close
  to) a given time. When running, it is just like a tasklet (in fact, they
  are called from the ``TIMER_SOFTIRQ``).

UP
  Uni-Processor: Non-SMP (``CONFIG_SMP=n``).

User Context
  The kernel executing on behalf of a particular process (ie. a system
  call or trap) or kernel thread. You can tell which process with the
  ``current`` macro. Not to be confused with userspace. Can be
  interrupted by software or hardware interrupts.

Userspace
  A process executing its own code outside the kernel.