author	Paul E. McKenney <paulmck@linux.ibm.com>	2019-01-09 17:48:09 -0500
committer	Paul E. McKenney <paulmck@linux.ibm.com>	2019-03-26 17:37:06 -0400
commit	4fea6ef0b219d66b8a901fea1744745a1ed2f79b (patch)
tree	a0ce6014446c81420aa66b8ca36e69ee456f9f4a
parent	9e98c678c2d6ae3a17cb2de55d17f69dddaa231b (diff)
doc: Remove obsolete RCU update functions from RCU documentation
Now that synchronize_rcu_bh, synchronize_rcu_bh_expedited, call_rcu_bh,
rcu_barrier_bh, synchronize_sched, synchronize_sched_expedited,
call_rcu_sched, rcu_barrier_sched, get_state_synchronize_sched, and
cond_synchronize_sched are obsolete, let's remove them from the
documentation aside from a small historical section.

Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
-rw-r--r--	Documentation/RCU/Design/Data-Structures/Data-Structures.html	3
-rw-r--r--	Documentation/RCU/Design/Expedited-Grace-Periods/Expedited-Grace-Periods.html	4
-rw-r--r--	Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.html	5
-rw-r--r--	Documentation/RCU/NMI-RCU.txt	13
-rw-r--r--	Documentation/RCU/UP.txt	6
-rw-r--r--	Documentation/RCU/checklist.txt	76
-rw-r--r--	Documentation/RCU/rcu.txt	8
-rw-r--r--	Documentation/RCU/rcubarrier.txt	27
8 files changed, 66 insertions(+), 76 deletions(-)
diff --git a/Documentation/RCU/Design/Data-Structures/Data-Structures.html b/Documentation/RCU/Design/Data-Structures/Data-Structures.html
index 18f179807563..c30c1957c7e6 100644
--- a/Documentation/RCU/Design/Data-Structures/Data-Structures.html
+++ b/Documentation/RCU/Design/Data-Structures/Data-Structures.html
@@ -155,8 +155,7 @@ keeping lock contention under control at all tree levels regardless
 of the level of loading on the system.
 
 </p><p>RCU updaters wait for normal grace periods by registering
-RCU callbacks, either directly via <tt>call_rcu()</tt> and
-friends (namely <tt>call_rcu_bh()</tt> and <tt>call_rcu_sched()</tt>),
+RCU callbacks, either directly via <tt>call_rcu()</tt>
 or indirectly via <tt>synchronize_rcu()</tt> and friends.
 RCU callbacks are represented by <tt>rcu_head</tt> structures,
 which are queued on <tt>rcu_data</tt> structures while they are
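
For readers following along, here is a minimal sketch of how an updater
registers an RCU callback; struct foo, free_foo_rcu(), and retire_foo()
are hypothetical names, not part of this patch:

	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct foo {				/* hypothetical structure */
		int key;
		struct rcu_head rcu;		/* queued on rcu_data */
	};

	static void free_foo_rcu(struct rcu_head *head)
	{
		/* Invoked only after a grace period has elapsed. */
		kfree(container_of(head, struct foo, rcu));
	}

	static void retire_foo(struct foo *fp)
	{
		/* ... unlink fp from the enclosing data structure ... */
		call_rcu(&fp->rcu, free_foo_rcu);
	}
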
diff --git a/Documentation/RCU/Design/Expedited-Grace-Periods/Expedited-Grace-Periods.html b/Documentation/RCU/Design/Expedited-Grace-Periods/Expedited-Grace-Periods.html
index 19e7a5fb6b73..57300db4b5ff 100644
--- a/Documentation/RCU/Design/Expedited-Grace-Periods/Expedited-Grace-Periods.html
+++ b/Documentation/RCU/Design/Expedited-Grace-Periods/Expedited-Grace-Periods.html
@@ -56,6 +56,7 @@ sections.
 RCU-preempt Expedited Grace Periods</a></h2>
 
 <p>
+<tt>CONFIG_PREEMPT=y</tt> kernels implement RCU-preempt.
 The overall flow of the handling of a given CPU by an RCU-preempt
 expedited grace period is shown in the following diagram:
 
@@ -139,6 +140,7 @@ or offline, among other things.
 RCU-sched Expedited Grace Periods</a></h2>
 
 <p>
+<tt>CONFIG_PREEMPT=n</tt> kernels implement RCU-sched.
 The overall flow of the handling of a given CPU by an RCU-sched
 expedited grace period is shown in the following diagram:
 
@@ -146,7 +148,7 @@ expedited grace period is shown in the following diagram:
 
 <p>
 As with RCU-preempt, RCU-sched's
-<tt>synchronize_sched_expedited()</tt> ignores offline and
+<tt>synchronize_rcu_expedited()</tt> ignores offline and
 idle CPUs, again because they are in remotely detectable
 quiescent states.
 However, because the
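
As a hedged usage sketch (struct foo, retire_foo(), and the urgency flag
are hypothetical), an updater might choose between the normal and
expedited forms like this:

	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct foo;		/* hypothetical RCU-protected type */

	static void retire_foo(struct foo *old, bool urgent)
	{
		if (urgent)
			synchronize_rcu_expedited();	/* low latency, but IPIs CPUs */
		else
			synchronize_rcu();		/* kinder to the rest of the system */
		kfree(old);	/* no readers can still hold a reference */
	}
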
diff --git a/Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.html b/Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.html
index 8d21af02b1f0..c64f8d26609f 100644
--- a/Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.html
+++ b/Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.html
@@ -34,12 +34,11 @@ Similarly, any code that happens before the beginning of a given RCU grace
 period is guaranteed to see the effects of all accesses following the end
 of that grace period that are within RCU read-side critical sections.
 
-<p>This guarantee is particularly pervasive for <tt>synchronize_sched()</tt>,
-for which RCU-sched read-side critical sections include any region
+<p>Note well that RCU-sched read-side critical sections include any region
 of code for which preemption is disabled.
 Given that each individual machine instruction can be thought of as
 an extremely small region of preemption-disabled code, one can think of
-<tt>synchronize_sched()</tt> as <tt>smp_mb()</tt> on steroids.
+<tt>synchronize_rcu()</tt> as <tt>smp_mb()</tt> on steroids.
 
 <p>RCU updaters use this guarantee by splitting their updates into
 two phases, one of which is executed before the grace period and
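
The two-phase pattern just described can be sketched as follows, assuming
a hypothetical RCU-protected global pointer gp:

	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct foo;			/* hypothetical protected type */
	struct foo __rcu *gp;		/* hypothetical protected pointer */

	static void update_foo(struct foo *newp)
	{
		struct foo *oldp = rcu_dereference_protected(gp, 1);

		rcu_assign_pointer(gp, newp);	/* phase one: publish */
		synchronize_rcu();		/* wait out pre-existing readers */
		kfree(oldp);			/* phase two: reclaim */
	}
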
diff --git a/Documentation/RCU/NMI-RCU.txt b/Documentation/RCU/NMI-RCU.txt
index 687777f83b23..881353fd5bff 100644
--- a/Documentation/RCU/NMI-RCU.txt
+++ b/Documentation/RCU/NMI-RCU.txt
@@ -81,18 +81,19 @@ currently executing on some other CPU. We therefore cannot free
 up any data structures used by the old NMI handler until execution
 of it completes on all other CPUs.
 
-One way to accomplish this is via synchronize_sched(), perhaps as
+One way to accomplish this is via synchronize_rcu(), perhaps as
 follows:
 
 	unset_nmi_callback();
-	synchronize_sched();
+	synchronize_rcu();
 	kfree(my_nmi_data);
 
-This works because synchronize_sched() blocks until all CPUs complete
-any preemption-disabled segments of code that they were executing.
-Since NMI handlers disable preemption, synchronize_sched() is guaranteed
+This works because (as of v4.20) synchronize_rcu() blocks until all
+CPUs complete any preemption-disabled segments of code that they were
+executing.
+Since NMI handlers disable preemption, synchronize_rcu() is guaranteed
 not to return until all ongoing NMI handlers exit.  It is therefore safe
-to free up the handler's data as soon as synchronize_sched() returns.
+to free up the handler's data as soon as synchronize_rcu() returns.
 
 Important note: for this to work, the architecture in question must
 invoke nmi_enter() and nmi_exit() on NMI entry and exit, respectively.
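
Fleshing the document's three-line snippet out into a hedged sketch
(my_nmi_data as in the text above; the surrounding function is
hypothetical):

	static void *my_nmi_data;	/* as in the example above */

	static void teardown_nmi_handler(void)
	{
		unset_nmi_callback();	/* no new NMI handler invocations */
		synchronize_rcu();	/* wait out in-flight NMI handlers */
		kfree(my_nmi_data);	/* now provably unreferenced */
		my_nmi_data = NULL;
	}
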
diff --git a/Documentation/RCU/UP.txt b/Documentation/RCU/UP.txt
index 90ec5341ee98..53bde717017b 100644
--- a/Documentation/RCU/UP.txt
+++ b/Documentation/RCU/UP.txt
@@ -86,10 +86,8 @@ even on a UP system. So do not do it! Even on a UP system, the RCU
 infrastructure -must- respect grace periods, and -must- invoke callbacks
 from a known environment in which no locks are held.
 
-It -is- safe for synchronize_sched() and synchronize_rcu_bh() to return
-immediately on an UP system.  It is also safe for synchronize_rcu()
-to return immediately on UP systems, except when running preemptable
-RCU.
+Note that it -is- safe for synchronize_rcu() to return immediately on
+UP systems, including !PREEMPT SMP builds running on UP systems.
 
 Quick Quiz #3: Why can't synchronize_rcu() return immediately on
 	UP systems running preemptable RCU?
diff --git a/Documentation/RCU/checklist.txt b/Documentation/RCU/checklist.txt
index 6f469864d9f5..fcc59fea5cd4 100644
--- a/Documentation/RCU/checklist.txt
+++ b/Documentation/RCU/checklist.txt
@@ -182,16 +182,13 @@ over a rather long period of time, but improvements are always welcome!
 	when publicizing a pointer to a structure that can
 	be traversed by an RCU read-side critical section.
 
-5.	If call_rcu(), or a related primitive such as call_rcu_bh(),
-	call_rcu_sched(), or call_srcu() is used, the callback function
-	will be called from softirq context.  In particular, it cannot
-	block.
+5.	If call_rcu() or call_srcu() is used, the callback function will
+	be called from softirq context.  In particular, it cannot block.
 
-6.	Since synchronize_rcu() can block, it cannot be called from
-	any sort of irq context.  The same rule applies for
-	synchronize_rcu_bh(), synchronize_sched(), synchronize_srcu(),
-	synchronize_rcu_expedited(), synchronize_rcu_bh_expedited(),
-	synchronize_sched_expedite(), and synchronize_srcu_expedited().
+6.	Since synchronize_rcu() can block, it cannot be called
+	from any sort of irq context.  The same rule applies
+	for synchronize_srcu(), synchronize_rcu_expedited(), and
+	synchronize_srcu_expedited().
 
 	The expedited forms of these primitives have the same semantics
 	as the non-expedited forms, but expediting is both expensive and
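
A hedged sketch of item 5's constraint, with hypothetical names: the
callback may kfree() but must never block:

	struct foo {
		struct rcu_head rcu;
		/* ... payload ... */
	};

	static void foo_reclaim_cb(struct rcu_head *head)
	{
		struct foo *fp = container_of(head, struct foo, rcu);

		kfree(fp);	/* fine: kfree() never sleeps */

		/*
		 * Not fine in softirq context: mutex_lock(),
		 * kmalloc(GFP_KERNEL), msleep(), or anything else
		 * that might block.
		 */
	}
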
@@ -212,20 +209,20 @@ over a rather long period of time, but improvements are always welcome!
 	of the system, especially to real-time workloads running on
 	the rest of the system.
 
-7.	If the updater uses call_rcu() or synchronize_rcu(), then the
-	corresponding readers must use rcu_read_lock() and
-	rcu_read_unlock().  If the updater uses call_rcu_bh() or
-	synchronize_rcu_bh(), then the corresponding readers must
-	use rcu_read_lock_bh() and rcu_read_unlock_bh().  If the
-	updater uses call_rcu_sched() or synchronize_sched(), then
-	the corresponding readers must disable preemption, possibly
-	by calling rcu_read_lock_sched() and rcu_read_unlock_sched().
-	If the updater uses synchronize_srcu() or call_srcu(), then
-	the corresponding readers must use srcu_read_lock() and
+7.	As of v4.20, a given kernel implements only one RCU flavor,
+	which is RCU-sched for PREEMPT=n and RCU-preempt for PREEMPT=y.
+	If the updater uses call_rcu() or synchronize_rcu(),
+	then the corresponding readers may use rcu_read_lock() and
+	rcu_read_unlock(), rcu_read_lock_bh() and rcu_read_unlock_bh(),
+	or any pair of primitives that disables and re-enables preemption,
+	for example, rcu_read_lock_sched() and rcu_read_unlock_sched().
+	If the updater uses synchronize_srcu() or call_srcu(),
+	then the corresponding readers must use srcu_read_lock() and
 	srcu_read_unlock(), and with the same srcu_struct.  The rules for
 	the expedited primitives are the same as for their non-expedited
 	counterparts.  Mixing things up will result in confusion and
-	broken kernels.
+	broken kernels, and has even resulted in an exploitable security
+	issue.
 
 	One exception to this rule: rcu_read_lock() and rcu_read_unlock()
 	may be substituted for rcu_read_lock_bh() and rcu_read_unlock_bh()
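
Item 7 in sketch form, with hypothetical globals: any of the listed
read-side markers now pairs with call_rcu()/synchronize_rcu(), but SRCU
readers must use the updater's srcu_struct:

	#include <linux/rcupdate.h>
	#include <linux/srcu.h>

	struct foo;			/* hypothetical protected type */
	DEFINE_SRCU(my_srcu);		/* hypothetical srcu_struct */
	struct foo __rcu *gp;		/* updated via synchronize_rcu() */
	struct foo __rcu *gq;		/* updated via synchronize_srcu(&my_srcu) */

	static void readers(void)
	{
		struct foo *p;
		int idx;

		rcu_read_lock();	/* or the _bh()/_sched() variants */
		p = rcu_dereference(gp);
		/* ... use p, no blocking ... */
		rcu_read_unlock();

		idx = srcu_read_lock(&my_srcu);
		p = srcu_dereference(gq, &my_srcu);
		/* ... use p, blocking is permitted here ... */
		srcu_read_unlock(&my_srcu, idx);
	}
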
@@ -288,8 +285,7 @@ over a rather long period of time, but improvements are always welcome!
 	d.	Periodically invoke synchronize_rcu(), permitting a limited
 		number of updates per grace period.
 
-	The same cautions apply to call_rcu_bh(), call_rcu_sched(),
-	call_srcu(), and kfree_rcu().
+	The same cautions apply to call_srcu() and kfree_rcu().
 
 	Note that although these primitives do take action to avoid memory
 	exhaustion when any given CPU has too many callbacks, a determined
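
Option (d) above might look like the following hedged sketch, reusing the
hypothetical struct foo and free_foo_rcu() from item 5's sketch and
assuming the caller holds the update-side lock:

	static unsigned long foo_queued;	/* hypothetical throttle state */

	static void queue_foo_free(struct foo *fp)
	{
		call_rcu(&fp->rcu, free_foo_rcu);
		if (!(++foo_queued % 1000))
			synchronize_rcu();	/* let callbacks drain */
	}
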
@@ -336,12 +332,12 @@ over a rather long period of time, but improvements are always welcome!
 	to safely access and/or modify that data structure.
 
 	RCU callbacks are -usually- executed on the same CPU that executed
-	the corresponding call_rcu(), call_rcu_bh(), or call_rcu_sched(),
-	but are by -no- means guaranteed to be.  For example, if a given
-	CPU goes offline while having an RCU callback pending, then that
-	RCU callback will execute on some surviving CPU.  (If this was
-	not the case, a self-spawning RCU callback would prevent the
-	victim CPU from ever going offline.)
+	the corresponding call_rcu() or call_srcu(), but are by -no-
+	means guaranteed to be.  For example, if a given CPU goes offline
+	while having an RCU callback pending, then that RCU callback
+	will execute on some surviving CPU.  (If this was not the case,
+	a self-spawning RCU callback would prevent the victim CPU from
+	ever going offline.)
 
 13.	Unlike other forms of RCU, it -is- permissible to block in an
 	SRCU read-side critical section (demarked by srcu_read_lock()
@@ -381,8 +377,7 @@ over a rather long period of time, but improvements are always welcome!
 
 	SRCU's expedited primitive (synchronize_srcu_expedited())
 	never sends IPIs to other CPUs, so it is easier on
-	real-time workloads than is synchronize_rcu_expedited(),
-	synchronize_rcu_bh_expedited() or synchronize_sched_expedited().
+	real-time workloads than is synchronize_rcu_expedited().
 
 	Note that rcu_dereference() and rcu_assign_pointer() relate to
 	SRCU just as they do to other forms of RCU.
@@ -428,22 +423,19 @@ over a rather long period of time, but improvements are always welcome!
 	These debugging aids can help you find problems that are
 	otherwise extremely difficult to spot.
 
-17.	If you register a callback using call_rcu(), call_rcu_bh(),
-	call_rcu_sched(), or call_srcu(), and pass in a function defined
-	within a loadable module, then it in necessary to wait for
-	all pending callbacks to be invoked after the last invocation
-	and before unloading that module.  Note that it is absolutely
-	-not- sufficient to wait for a grace period!  The current (say)
-	synchronize_rcu() implementation waits only for all previous
-	callbacks registered on the CPU that synchronize_rcu() is running
-	on, but it is -not- guaranteed to wait for callbacks registered
-	on other CPUs.
+17.	If you register a callback using call_rcu() or call_srcu(),
+	and pass in a function defined within a loadable module,
+	then it is necessary to wait for all pending callbacks to
+	be invoked after the last invocation and before unloading
+	that module.  Note that it is absolutely -not- sufficient to
+	wait for a grace period!  The current (say) synchronize_rcu()
+	implementation waits only for all previous callbacks registered
+	on the CPU that synchronize_rcu() is running on, but it is -not-
+	guaranteed to wait for callbacks registered on other CPUs.
 
 	You instead need to use one of the barrier functions:
 
 	o	call_rcu() -> rcu_barrier()
-	o	call_rcu_bh() -> rcu_barrier()
-	o	call_rcu_sched() -> rcu_barrier()
 	o	call_srcu() -> srcu_barrier()
 
 	However, these barrier functions are absolutely -not- guaranteed
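
Item 17 as a hedged module-exit sketch; the unregister helper and
my_srcu are hypothetical:

	#include <linux/module.h>
	#include <linux/rcupdate.h>
	#include <linux/srcu.h>

	static void __exit foo_exit(void)
	{
		unregister_foo_hooks();		/* hypothetical: stop posting new callbacks */
		rcu_barrier();			/* flush pending call_rcu() callbacks */
		srcu_barrier(&my_srcu);		/* flush pending call_srcu() callbacks */
	}
	module_exit(foo_exit);
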
diff --git a/Documentation/RCU/rcu.txt b/Documentation/RCU/rcu.txt
index 721b3e426515..c818cf65c5a9 100644
--- a/Documentation/RCU/rcu.txt
+++ b/Documentation/RCU/rcu.txt
@@ -52,10 +52,10 @@ o If I am running on a uniprocessor kernel, which can only do one
 o	How can I see where RCU is currently used in the Linux kernel?
 
 	Search for "rcu_read_lock", "rcu_read_unlock", "call_rcu",
-	"rcu_read_lock_bh", "rcu_read_unlock_bh", "call_rcu_bh",
-	"srcu_read_lock", "srcu_read_unlock", "synchronize_rcu",
-	"synchronize_net", "synchronize_srcu", and the other RCU
-	primitives.  Or grab one of the cscope databases from:
+	"rcu_read_lock_bh", "rcu_read_unlock_bh", "srcu_read_lock",
+	"srcu_read_unlock", "synchronize_rcu", "synchronize_net",
+	"synchronize_srcu", and the other RCU primitives.  Or grab one
+	of the cscope databases from:
 
 	http://www.rdrop.com/users/paulmck/RCU/linuxusage/rculocktab.html
 
diff --git a/Documentation/RCU/rcubarrier.txt b/Documentation/RCU/rcubarrier.txt
index 5d7759071a3e..a2782df69732 100644
--- a/Documentation/RCU/rcubarrier.txt
+++ b/Documentation/RCU/rcubarrier.txt
@@ -83,16 +83,15 @@ Pseudo-code using rcu_barrier() is as follows:
 2. Execute rcu_barrier().
 3. Allow the module to be unloaded.
 
-There are also rcu_barrier_bh(), rcu_barrier_sched(), and srcu_barrier()
-functions for the other flavors of RCU, and you of course must match
-the flavor of rcu_barrier() with that of call_rcu().  If your module
-uses multiple flavors of call_rcu(), then it must also use multiple
+There is also an srcu_barrier() function for SRCU, and you of course
+must match the flavor of rcu_barrier() with that of call_rcu().  If your
+module uses multiple flavors of call_rcu(), then it must also use multiple
 flavors of rcu_barrier() when unloading that module.  For example, if
-it uses call_rcu_bh(), call_srcu() on srcu_struct_1, and call_srcu() on
+it uses call_rcu(), call_srcu() on srcu_struct_1, and call_srcu() on
 srcu_struct_2(), then the following three lines of code will be required
 when unloading:
 
- 1 rcu_barrier_bh();
+ 1 rcu_barrier();
  2 srcu_barrier(&srcu_struct_1);
  3 srcu_barrier(&srcu_struct_2);
 
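
Wrapped into a module-exit function, the three lines above might read as
follows; mymod_exit() is a hypothetical name:

	#include <linux/module.h>

	static void __exit mymod_exit(void)
	{
		rcu_barrier();			/* for call_rcu() callbacks */
		srcu_barrier(&srcu_struct_1);	/* first SRCU domain */
		srcu_barrier(&srcu_struct_2);	/* second SRCU domain */
	}
	module_exit(mymod_exit);
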
@@ -185,12 +184,12 @@ module invokes call_rcu() from timers, you will need to first cancel all
 the timers, and only then invoke rcu_barrier() to wait for any remaining
 RCU callbacks to complete.
 
-Of course, if you module uses call_rcu_bh(), you will need to invoke
-rcu_barrier_bh() before unloading.  Similarly, if your module uses
-call_rcu_sched(), you will need to invoke rcu_barrier_sched() before
-unloading.  If your module uses call_rcu(), call_rcu_bh(), -and-
-call_rcu_sched(), then you will need to invoke each of rcu_barrier(),
-rcu_barrier_bh(), and rcu_barrier_sched().
+Of course, if your module uses call_rcu(), you will need to invoke
+rcu_barrier() before unloading.  Similarly, if your module uses
+call_srcu(), you will need to invoke srcu_barrier() before unloading,
+and on the same srcu_struct structure.  If your module uses call_rcu()
+-and- call_srcu(), then you will need to invoke rcu_barrier() -and-
+srcu_barrier().
 
 
 Implementing rcu_barrier()
@@ -223,8 +222,8 @@ shown below. Note that the final "1" in on_each_cpu()'s argument list
 ensures that all the calls to rcu_barrier_func() will have completed
 before on_each_cpu() returns.  Line 9 then waits for the completion.
 
-This code was rewritten in 2008 to support rcu_barrier_bh() and
-rcu_barrier_sched() in addition to the original rcu_barrier().
+This code was rewritten in 2008 and several times thereafter, but this
+still gives the general idea.
 
 The rcu_barrier_func() runs on each CPU, where it invokes call_rcu()
 to post an RCU callback, as follows:
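
The listing that "follows" lies outside this hunk; as a hedged
reconstruction of the 2008-era scheme the text describes (names and
details may differ from the document's actual listing):

	#include <linux/atomic.h>
	#include <linux/completion.h>
	#include <linux/percpu.h>
	#include <linux/rcupdate.h>

	static DEFINE_PER_CPU(struct rcu_head, rcu_barrier_head);
	static atomic_t rcu_barrier_cpu_count;
	static struct completion rcu_barrier_completion;

	static void rcu_barrier_callback(struct rcu_head *notused)
	{
		/* Runs after all earlier callbacks queued on this CPU. */
		if (atomic_dec_and_test(&rcu_barrier_cpu_count))
			complete(&rcu_barrier_completion);
	}

	static void rcu_barrier_func(void *notused)
	{
		/* Invoked on each CPU by on_each_cpu(). */
		atomic_inc(&rcu_barrier_cpu_count);
		call_rcu(this_cpu_ptr(&rcu_barrier_head), rcu_barrier_callback);
	}
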