Diffstat:
 -rw-r--r--  Documentation/RCU/Design/Data-Structures/Data-Structures.html                 |    3
 -rw-r--r--  Documentation/RCU/Design/Expedited-Grace-Periods/Expedited-Grace-Periods.html |    4
 -rw-r--r--  Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.html        |    5
 -rw-r--r--  Documentation/RCU/NMI-RCU.txt                                                 |   13
 -rw-r--r--  Documentation/RCU/UP.txt                                                      |    6
 -rw-r--r--  Documentation/RCU/checklist.txt                                               |   91
 -rw-r--r--  Documentation/RCU/rcu.txt                                                     |    8
 -rw-r--r--  Documentation/RCU/rcu_dereference.txt                                         |  103
 -rw-r--r--  Documentation/RCU/rcubarrier.txt                                              |   27
 -rw-r--r--  Documentation/RCU/whatisRCU.txt                                               |   10
 -rw-r--r--  Documentation/admin-guide/kernel-parameters.txt                               |    4
 -rw-r--r--  MAINTAINERS                                                                   |   16
 -rw-r--r--  drivers/nvme/host/core.c                                                      |    2
 -rw-r--r--  include/linux/rcupdate.h                                                      |    6
 -rw-r--r--  include/linux/srcu.h                                                          |   36
 -rw-r--r--  kernel/locking/locktorture.c                                                  |    2
 -rw-r--r--  kernel/rcu/rcu.h                                                              |    1
 -rw-r--r--  kernel/rcu/rcuperf.c                                                          |    5
 -rw-r--r--  kernel/rcu/rcutorture.c                                                       |   21
 -rw-r--r--  kernel/rcu/srcutiny.c                                                         |    9
 -rw-r--r--  kernel/rcu/srcutree.c                                                         |   32
 -rw-r--r--  kernel/rcu/tiny.c                                                             |    2
 -rw-r--r--  kernel/rcu/tree.c                                                             |  508
 -rw-r--r--  kernel/rcu/tree.h                                                             |   14
 -rw-r--r--  kernel/rcu/tree_exp.h                                                         |   36
 -rw-r--r--  kernel/rcu/tree_plugin.h                                                      |  257
 -rw-r--r--  kernel/rcu/tree_stall.h                                                       |  709
 -rw-r--r--  kernel/rcu/update.c                                                           |   59
 -rw-r--r--  kernel/torture.c                                                              |    2
 -rwxr-xr-x  tools/testing/selftests/rcutorture/bin/configNR_CPUS.sh                       |   17
 -rwxr-xr-x  tools/testing/selftests/rcutorture/bin/config_override.sh                     |   17
 -rwxr-xr-x  tools/testing/selftests/rcutorture/bin/configcheck.sh                         |   19
 -rwxr-xr-x  tools/testing/selftests/rcutorture/bin/configinit.sh                          |   17
 -rwxr-xr-x  tools/testing/selftests/rcutorture/bin/cpus2use.sh                            |   17
 -rw-r--r--  tools/testing/selftests/rcutorture/bin/functions.sh                           |   17
 -rwxr-xr-x  tools/testing/selftests/rcutorture/bin/jitter.sh                              |   17
 -rwxr-xr-x  tools/testing/selftests/rcutorture/bin/kvm-build.sh                           |   17
 -rwxr-xr-x  tools/testing/selftests/rcutorture/bin/kvm-find-errors.sh                     |    5
 -rwxr-xr-x  tools/testing/selftests/rcutorture/bin/kvm-recheck-lock.sh                    |   17
 -rwxr-xr-x  tools/testing/selftests/rcutorture/bin/kvm-recheck-rcu.sh                     |   17
 -rwxr-xr-x  tools/testing/selftests/rcutorture/bin/kvm-recheck-rcuperf-ftrace.sh          |   17
 -rwxr-xr-x  tools/testing/selftests/rcutorture/bin/kvm-recheck-rcuperf.sh                 |   17
 -rwxr-xr-x  tools/testing/selftests/rcutorture/bin/kvm-recheck.sh                         |   17
 -rwxr-xr-x  tools/testing/selftests/rcutorture/bin/kvm-test-1-run.sh                      |   17
 -rwxr-xr-x  tools/testing/selftests/rcutorture/bin/kvm.sh                                 |   17
 -rwxr-xr-x  tools/testing/selftests/rcutorture/bin/mkinitrd.sh                            |   15
 -rwxr-xr-x  tools/testing/selftests/rcutorture/bin/parse-build.sh                         |   17
 -rwxr-xr-x  tools/testing/selftests/rcutorture/bin/parse-console.sh                       |   17
 -rw-r--r--  tools/testing/selftests/rcutorture/configs/lock/ver_functions.sh              |   17
 -rw-r--r--  tools/testing/selftests/rcutorture/configs/rcu/ver_functions.sh               |   17
 -rw-r--r--  tools/testing/selftests/rcutorture/configs/rcuperf/ver_functions.sh           |   17
 51 files changed, 1081 insertions, 1272 deletions
diff --git a/Documentation/RCU/Design/Data-Structures/Data-Structures.html b/Documentation/RCU/Design/Data-Structures/Data-Structures.html
index 18f179807563..c30c1957c7e6 100644
--- a/Documentation/RCU/Design/Data-Structures/Data-Structures.html
+++ b/Documentation/RCU/Design/Data-Structures/Data-Structures.html
@@ -155,8 +155,7 @@ keeping lock contention under control at all tree levels regardless
 of the level of loading on the system.
 
 </p><p>RCU updaters wait for normal grace periods by registering
-RCU callbacks, either directly via <tt>call_rcu()</tt> and
-friends (namely <tt>call_rcu_bh()</tt> and <tt>call_rcu_sched()</tt>),
+RCU callbacks, either directly via <tt>call_rcu()</tt>
 or indirectly via <tt>synchronize_rcu()</tt> and friends.
 RCU callbacks are represented by <tt>rcu_head</tt> structures,
 which are queued on <tt>rcu_data</tt> structures while they are
diff --git a/Documentation/RCU/Design/Expedited-Grace-Periods/Expedited-Grace-Periods.html b/Documentation/RCU/Design/Expedited-Grace-Periods/Expedited-Grace-Periods.html
index 19e7a5fb6b73..57300db4b5ff 100644
--- a/Documentation/RCU/Design/Expedited-Grace-Periods/Expedited-Grace-Periods.html
+++ b/Documentation/RCU/Design/Expedited-Grace-Periods/Expedited-Grace-Periods.html
@@ -56,6 +56,7 @@ sections.
 RCU-preempt Expedited Grace Periods</a></h2>
 
 <p>
+<tt>CONFIG_PREEMPT=y</tt> kernels implement RCU-preempt.
 The overall flow of the handling of a given CPU by an RCU-preempt
 expedited grace period is shown in the following diagram:
 
@@ -139,6 +140,7 @@ or offline, among other things.
 RCU-sched Expedited Grace Periods</a></h2>
 
 <p>
+<tt>CONFIG_PREEMPT=n</tt> kernels implement RCU-sched.
 The overall flow of the handling of a given CPU by an RCU-sched
 expedited grace period is shown in the following diagram:
 
@@ -146,7 +148,7 @@ expedited grace period is shown in the following diagram:
 
 <p>
 As with RCU-preempt, RCU-sched's
-<tt>synchronize_sched_expedited()</tt> ignores offline and
+<tt>synchronize_rcu_expedited()</tt> ignores offline and
 idle CPUs, again because they are in remotely detectable
 quiescent states.
 However, because the
diff --git a/Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.html b/Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.html
index 8d21af02b1f0..c64f8d26609f 100644
--- a/Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.html
+++ b/Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.html
@@ -34,12 +34,11 @@ Similarly, any code that happens before the beginning of a given RCU grace
 period is guaranteed to see the effects of all accesses following the end
 of that grace period that are within RCU read-side critical sections.
 
-<p>This guarantee is particularly pervasive for <tt>synchronize_sched()</tt>,
-for which RCU-sched read-side critical sections include any region
+<p>Note well that RCU-sched read-side critical sections include any region
 of code for which preemption is disabled.
 Given that each individual machine instruction can be thought of as
 an extremely small region of preemption-disabled code, one can think of
-<tt>synchronize_sched()</tt> as <tt>smp_mb()</tt> on steroids.
+<tt>synchronize_rcu()</tt> as <tt>smp_mb()</tt> on steroids.
 
 <p>RCU updaters use this guarantee by splitting their updates into
 two phases, one of which is executed before the grace period and
diff --git a/Documentation/RCU/NMI-RCU.txt b/Documentation/RCU/NMI-RCU.txt
index 687777f83b23..881353fd5bff 100644
--- a/Documentation/RCU/NMI-RCU.txt
+++ b/Documentation/RCU/NMI-RCU.txt
@@ -81,18 +81,19 @@ currently executing on some other CPU. We therefore cannot free
 up any data structures used by the old NMI handler until execution
 of it completes on all other CPUs.
 
-One way to accomplish this is via synchronize_sched(), perhaps as
+One way to accomplish this is via synchronize_rcu(), perhaps as
 follows:
 
 	unset_nmi_callback();
-	synchronize_sched();
+	synchronize_rcu();
 	kfree(my_nmi_data);
 
-This works because synchronize_sched() blocks until all CPUs complete
-any preemption-disabled segments of code that they were executing.
-Since NMI handlers disable preemption, synchronize_sched() is guaranteed
+This works because (as of v4.20) synchronize_rcu() blocks until all
+CPUs complete any preemption-disabled segments of code that they were
+executing.
+Since NMI handlers disable preemption, synchronize_rcu() is guaranteed
 not to return until all ongoing NMI handlers exit. It is therefore safe
-to free up the handler's data as soon as synchronize_sched() returns.
+to free up the handler's data as soon as synchronize_rcu() returns.
 
 Important note: for this to work, the architecture in question must
 invoke nmi_enter() and nmi_exit() on NMI entry and exit, respectively.
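
[Editor's note: to make the teardown ordering above concrete, here is a minimal sketch assuming x86's NMI handler API (unregister_nmi_handler(), NMI_HANDLED); the my_* names are illustrative, not part of the patch.]

	struct my_nmi_data {
		int hits;		/* statistics only; sketch */
	};
	static struct my_nmi_data *my_nmi_data;	/* hypothetical */

	static int my_nmi_handler(unsigned int cmd, struct pt_regs *regs)
	{
		struct my_nmi_data *data = READ_ONCE(my_nmi_data);

		if (data)
			data->hits++;	/* NMI context: preemption is disabled. */
		return NMI_HANDLED;
	}

	static void my_nmi_teardown(void)
	{
		unregister_nmi_handler(NMI_LOCAL, "my_nmi");
		synchronize_rcu();	/* Wait for in-flight NMI handlers. */
		kfree(my_nmi_data);	/* Safe: no handler can still touch it. */
	}
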
diff --git a/Documentation/RCU/UP.txt b/Documentation/RCU/UP.txt
index 90ec5341ee98..53bde717017b 100644
--- a/Documentation/RCU/UP.txt
+++ b/Documentation/RCU/UP.txt
@@ -86,10 +86,8 @@ even on a UP system. So do not do it! Even on a UP system, the RCU
 infrastructure -must- respect grace periods, and -must- invoke callbacks
 from a known environment in which no locks are held.
 
-It -is- safe for synchronize_sched() and synchronize_rcu_bh() to return
-immediately on an UP system. It is also safe for synchronize_rcu()
-to return immediately on UP systems, except when running preemptable
-RCU.
+Note that it -is- safe for synchronize_rcu() to return immediately on
+UP systems, including !PREEMPT SMP builds running on UP systems.
 
 Quick Quiz #3:	Why can't synchronize_rcu() return immediately on
 	UP systems running preemptable RCU?
diff --git a/Documentation/RCU/checklist.txt b/Documentation/RCU/checklist.txt
index 6f469864d9f5..e98ff261a438 100644
--- a/Documentation/RCU/checklist.txt
+++ b/Documentation/RCU/checklist.txt
@@ -182,16 +182,13 @@ over a rather long period of time, but improvements are always welcome!
 	when publicizing a pointer to a structure that can
 	be traversed by an RCU read-side critical section.
 
-5.	If call_rcu(), or a related primitive such as call_rcu_bh(),
-	call_rcu_sched(), or call_srcu() is used, the callback function
-	will be called from softirq context. In particular, it cannot
-	block.
+5.	If call_rcu() or call_srcu() is used, the callback function will
+	be called from softirq context. In particular, it cannot block.
 
-6.	Since synchronize_rcu() can block, it cannot be called from
-	any sort of irq context. The same rule applies for
-	synchronize_rcu_bh(), synchronize_sched(), synchronize_srcu(),
-	synchronize_rcu_expedited(), synchronize_rcu_bh_expedited(),
-	synchronize_sched_expedite(), and synchronize_srcu_expedited().
+6.	Since synchronize_rcu() can block, it cannot be called
+	from any sort of irq context. The same rule applies
+	for synchronize_srcu(), synchronize_rcu_expedited(), and
+	synchronize_srcu_expedited().
 
 	The expedited forms of these primitives have the same semantics
 	as the non-expedited forms, but expediting is both expensive and
@@ -212,20 +209,20 @@ over a rather long period of time, but improvements are always welcome!
 	of the system, especially to real-time workloads running on
 	the rest of the system.
 
-7.	If the updater uses call_rcu() or synchronize_rcu(), then the
-	corresponding readers must use rcu_read_lock() and
-	rcu_read_unlock(). If the updater uses call_rcu_bh() or
-	synchronize_rcu_bh(), then the corresponding readers must
-	use rcu_read_lock_bh() and rcu_read_unlock_bh(). If the
-	updater uses call_rcu_sched() or synchronize_sched(), then
-	the corresponding readers must disable preemption, possibly
-	by calling rcu_read_lock_sched() and rcu_read_unlock_sched().
-	If the updater uses synchronize_srcu() or call_srcu(), then
-	the corresponding readers must use srcu_read_lock() and
+7.	As of v4.20, a given kernel implements only one RCU flavor,
+	which is RCU-sched for PREEMPT=n and RCU-preempt for PREEMPT=y.
+	If the updater uses call_rcu() or synchronize_rcu(),
+	then the corresponding readers may use rcu_read_lock() and
+	rcu_read_unlock(), rcu_read_lock_bh() and rcu_read_unlock_bh(),
+	or any pair of primitives that disables and re-enables preemption,
+	for example, rcu_read_lock_sched() and rcu_read_unlock_sched().
+	If the updater uses synchronize_srcu() or call_srcu(),
+	then the corresponding readers must use srcu_read_lock() and
 	srcu_read_unlock(), and with the same srcu_struct. The rules for
 	the expedited primitives are the same as for their non-expedited
 	counterparts. Mixing things up will result in confusion and
-	broken kernels.
+	broken kernels, and has even resulted in an exploitable security
+	issue.
 
 	One exception to this rule: rcu_read_lock() and rcu_read_unlock()
 	may be substituted for rcu_read_lock_bh() and rcu_read_unlock_bh()
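
[Editor's note: a minimal sketch of the reader/updater pairing that item 7 requires, using hypothetical names (struct foo, gp, my_lock); with the consolidated flavors, any of the reader forms listed above pairs correctly with this updater.]

	struct foo {
		int a;
	};
	static struct foo __rcu *gp;		/* hypothetical shared pointer */
	static DEFINE_SPINLOCK(my_lock);

	static int reader(void)
	{
		struct foo *p;
		int ret = -1;

		rcu_read_lock();		/* Or _bh(), _sched(), etc. */
		p = rcu_dereference(gp);
		if (p)
			ret = p->a;
		rcu_read_unlock();
		return ret;
	}

	static void updater(struct foo *newp)
	{
		struct foo *oldp;

		spin_lock(&my_lock);
		oldp = rcu_dereference_protected(gp, lockdep_is_held(&my_lock));
		rcu_assign_pointer(gp, newp);
		spin_unlock(&my_lock);
		if (oldp) {
			synchronize_rcu();	/* Wait for pre-existing readers. */
			kfree(oldp);
		}
	}
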
@@ -288,8 +285,7 @@ over a rather long period of time, but improvements are always welcome!
 	d.	Periodically invoke synchronize_rcu(), permitting a limited
 		number of updates per grace period.
 
-	The same cautions apply to call_rcu_bh(), call_rcu_sched(),
-	call_srcu(), and kfree_rcu().
+	The same cautions apply to call_srcu() and kfree_rcu().
 
 	Note that although these primitives do take action to avoid memory
 	exhaustion when any given CPU has too many callbacks, a determined
@@ -322,7 +318,7 @@ over a rather long period of time, but improvements are always welcome!
 
 11.	Any lock acquired by an RCU callback must be acquired elsewhere
 	with softirq disabled, e.g., via spin_lock_irqsave(),
-	spin_lock_bh(), etc. Failing to disable irq on a given
+	spin_lock_bh(), etc. Failing to disable softirq on a given
 	acquisition of that lock will result in deadlock as soon as
 	the RCU softirq handler happens to run your RCU callback while
 	interrupting that acquisition's critical section.
@@ -335,13 +331,16 @@ over a rather long period of time, but improvements are always welcome!
 	must use whatever locking or other synchronization is required
 	to safely access and/or modify that data structure.
 
-	RCU callbacks are -usually- executed on the same CPU that executed
-	the corresponding call_rcu(), call_rcu_bh(), or call_rcu_sched(),
-	but are by -no- means guaranteed to be. For example, if a given
-	CPU goes offline while having an RCU callback pending, then that
-	RCU callback will execute on some surviving CPU. (If this was
-	not the case, a self-spawning RCU callback would prevent the
-	victim CPU from ever going offline.)
+	Do not assume that RCU callbacks will be executed on the same
+	CPU that executed the corresponding call_rcu() or call_srcu().
+	For example, if a given CPU goes offline while having an RCU
+	callback pending, then that RCU callback will execute on some
+	surviving CPU. (If this was not the case, a self-spawning RCU
+	callback would prevent the victim CPU from ever going offline.)
+	Furthermore, CPUs designated by rcu_nocbs= might well -always-
+	have their RCU callbacks executed on some other CPUs; in fact,
+	for some real-time workloads, this is the whole point of using
+	the rcu_nocbs= kernel boot parameter.
 
 13.	Unlike other forms of RCU, it -is- permissible to block in an
 	SRCU read-side critical section (demarked by srcu_read_lock()
@@ -381,11 +380,11 @@ over a rather long period of time, but improvements are always welcome!
 
 	SRCU's expedited primitive (synchronize_srcu_expedited())
 	never sends IPIs to other CPUs, so it is easier on
-	real-time workloads than is synchronize_rcu_expedited(),
-	synchronize_rcu_bh_expedited() or synchronize_sched_expedited().
+	real-time workloads than is synchronize_rcu_expedited().
 
-	Note that rcu_dereference() and rcu_assign_pointer() relate to
-	SRCU just as they do to other forms of RCU.
+	Note that rcu_assign_pointer() relates to SRCU just as it does to
+	other forms of RCU, but instead of rcu_dereference() you should
+	use srcu_dereference() in order to avoid lockdep splats.
 
 14.	The whole point of call_rcu(), synchronize_rcu(), and friends
 	is to wait until all pre-existing readers have finished before
@@ -405,6 +404,9 @@ over a rather long period of time, but improvements are always welcome!
 	read-side critical sections. It is the responsibility of the
 	RCU update-side primitives to deal with this.
 
+	For SRCU readers, you can use smp_mb__after_srcu_read_unlock()
+	immediately after an srcu_read_unlock() to get a full barrier.
+
 16.	Use CONFIG_PROVE_LOCKING, CONFIG_DEBUG_OBJECTS_RCU_HEAD, and the
 	__rcu sparse checks to validate your RCU code. These can help
 	find problems as follows:
@@ -428,22 +430,19 @@ over a rather long period of time, but improvements are always welcome!
 	These debugging aids can help you find problems that are
 	otherwise extremely difficult to spot.
 
-17.	If you register a callback using call_rcu(), call_rcu_bh(),
-	call_rcu_sched(), or call_srcu(), and pass in a function defined
-	within a loadable module, then it in necessary to wait for
-	all pending callbacks to be invoked after the last invocation
-	and before unloading that module. Note that it is absolutely
-	-not- sufficient to wait for a grace period! The current (say)
-	synchronize_rcu() implementation waits only for all previous
-	callbacks registered on the CPU that synchronize_rcu() is running
-	on, but it is -not- guaranteed to wait for callbacks registered
-	on other CPUs.
+17.	If you register a callback using call_rcu() or call_srcu(), and
+	pass in a function defined within a loadable module, then it is
+	necessary to wait for all pending callbacks to be invoked after
+	the last invocation and before unloading that module. Note that
+	it is absolutely -not- sufficient to wait for a grace period!
+	The current (say) synchronize_rcu() implementation is -not-
+	guaranteed to wait for callbacks registered on other CPUs.
+	Or even on the current CPU if that CPU recently went offline
+	and came back online.
 
 	You instead need to use one of the barrier functions:
 
 	o	call_rcu() -> rcu_barrier()
-	o	call_rcu_bh() -> rcu_barrier()
-	o	call_rcu_sched() -> rcu_barrier()
 	o	call_srcu() -> srcu_barrier()
 
 	However, these barrier functions are absolutely -not- guaranteed
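
[Editor's note: a minimal sketch of the unload rule in item 17, assuming a hypothetical module whose callback my_rcu_cb() lives in module text; struct my_obj and stop_posting_callbacks() are illustrative stand-ins for the module's real objects and quiescing step.]

	struct my_obj {
		struct rcu_head rh;
	};

	static void my_rcu_cb(struct rcu_head *rhp)	/* defined in module text */
	{
		kfree(container_of(rhp, struct my_obj, rh));
	}

	static void __exit my_module_exit(void)
	{
		stop_posting_callbacks();	/* hypothetical: no new call_rcu() */
		rcu_barrier();			/* Wait for my_rcu_cb() invocations. */
		/* Only now may the text containing my_rcu_cb() go away. */
	}
	module_exit(my_module_exit);
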
diff --git a/Documentation/RCU/rcu.txt b/Documentation/RCU/rcu.txt
index 721b3e426515..c818cf65c5a9 100644
--- a/Documentation/RCU/rcu.txt
+++ b/Documentation/RCU/rcu.txt
@@ -52,10 +52,10 @@ o If I am running on a uniprocessor kernel, which can only do one
 o	How can I see where RCU is currently used in the Linux kernel?
 
 	Search for "rcu_read_lock", "rcu_read_unlock", "call_rcu",
-	"rcu_read_lock_bh", "rcu_read_unlock_bh", "call_rcu_bh",
-	"srcu_read_lock", "srcu_read_unlock", "synchronize_rcu",
-	"synchronize_net", "synchronize_srcu", and the other RCU
-	primitives. Or grab one of the cscope databases from:
+	"rcu_read_lock_bh", "rcu_read_unlock_bh", "srcu_read_lock",
+	"srcu_read_unlock", "synchronize_rcu", "synchronize_net",
+	"synchronize_srcu", and the other RCU primitives. Or grab one
+	of the cscope databases from:
 
 	http://www.rdrop.com/users/paulmck/RCU/linuxusage/rculocktab.html
 
diff --git a/Documentation/RCU/rcu_dereference.txt b/Documentation/RCU/rcu_dereference.txt
index ab96227bad42..bf699e8cfc75 100644
--- a/Documentation/RCU/rcu_dereference.txt
+++ b/Documentation/RCU/rcu_dereference.txt
@@ -351,3 +351,106 @@ garbage values.
 
 In short, rcu_dereference() is -not- optional when you are going to
 dereference the resulting pointer.
+
+
+WHICH MEMBER OF THE rcu_dereference() FAMILY SHOULD YOU USE?
+
+First, please avoid using rcu_dereference_raw() and also please avoid
+using rcu_dereference_check() and rcu_dereference_protected() with a
+second argument with a constant value of 1 (or true, for that matter).
+With that caution out of the way, here is some guidance for which
+member of the rcu_dereference() family to use in various situations:
+
+1.	If the access needs to be within an RCU read-side critical
+	section, use rcu_dereference(). With the new consolidated
+	RCU flavors, an RCU read-side critical section is entered
+	using rcu_read_lock(), anything that disables bottom halves,
+	anything that disables interrupts, or anything that disables
+	preemption.
+
+2.	If the access might be within an RCU read-side critical section
+	on the one hand, or protected by (say) my_lock on the other,
+	use rcu_dereference_check(), for example:
+
+		p1 = rcu_dereference_check(p->rcu_protected_pointer,
+					   lockdep_is_held(&my_lock));
+
+
+3.	If the access might be within an RCU read-side critical section
+	on the one hand, or protected by either my_lock or your_lock on
+	the other, again use rcu_dereference_check(), for example:
+
+		p1 = rcu_dereference_check(p->rcu_protected_pointer,
+					   lockdep_is_held(&my_lock) ||
+					   lockdep_is_held(&your_lock));
+
+4.	If the access is on the update side, so that it is always protected
+	by my_lock, use rcu_dereference_protected():
+
+		p1 = rcu_dereference_protected(p->rcu_protected_pointer,
+					       lockdep_is_held(&my_lock));
+
+	This can be extended to handle multiple locks as in #3 above,
+	and both can be extended to check other conditions as well.
+
+5.	If the protection is supplied by the caller, and is thus unknown
+	to this code, that is the rare case when rcu_dereference_raw()
+	is appropriate. In addition, rcu_dereference_raw() might be
+	appropriate when the lockdep expression would be excessively
+	complex, except that a better approach in that case might be to
+	take a long hard look at your synchronization design. Still,
+	there are data-locking cases where any one of a very large number
+	of locks or reference counters suffices to protect the pointer,
+	so rcu_dereference_raw() does have its place.
+
+	However, its place is probably quite a bit smaller than one
+	might expect given the number of uses in the current kernel.
+	Ditto for its synonym, rcu_dereference_check( ... , 1), and
+	its close relative, rcu_dereference_protected(... , 1).
+
+
+SPARSE CHECKING OF RCU-PROTECTED POINTERS
+
+The sparse static-analysis tool checks for direct access to RCU-protected
+pointers, which can result in "interesting" bugs due to compiler
+optimizations involving invented loads and perhaps also load tearing.
+For example, suppose someone mistakenly does something like this:
+
+	p = q->rcu_protected_pointer;
+	do_something_with(p->a);
+	do_something_else_with(p->b);
+
+If register pressure is high, the compiler might optimize "p" out
+of existence, transforming the code to something like this:
+
+	do_something_with(q->rcu_protected_pointer->a);
+	do_something_else_with(q->rcu_protected_pointer->b);
+
+This could fatally disappoint your code if q->rcu_protected_pointer
+changed in the meantime. Nor is this a theoretical problem: Exactly
+this sort of bug cost Paul E. McKenney (and several of his innocent
+colleagues) a three-day weekend back in the early 1990s.
+
+Load tearing could of course result in dereferencing a mashup of a pair
+of pointers, which also might fatally disappoint your code.
+
+These problems could have been avoided simply by making the code instead
+read as follows:
+
+	p = rcu_dereference(q->rcu_protected_pointer);
+	do_something_with(p->a);
+	do_something_else_with(p->b);
+
+Unfortunately, these sorts of bugs can be extremely hard to spot during
+review. This is where the sparse tool comes into play, along with the
+"__rcu" marker. If you mark a pointer declaration, whether in a structure
+or as a formal parameter, with "__rcu", sparse will complain if this
+pointer is accessed directly. Sparse will also complain if a pointer
+not marked with "__rcu" is accessed using rcu_dereference()
+and friends. For example, ->rcu_protected_pointer might be declared as
+follows:
+
+	struct foo __rcu *rcu_protected_pointer;
+
+Use of "__rcu" is opt-in. If you choose not to use it, then you should
+ignore the sparse warnings.
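
[Editor's note: a compact sketch tying the above together; struct bar and the two accessors are hypothetical, and struct foo is assumed to have an int member a. With the __rcu annotation, sparse (run via the kernel's standard "make C=2 path/to/file.o" hook) flags the direct access in broken_read() but accepts fixed_read().]

	struct bar {
		struct foo __rcu *rcu_protected_pointer;
	};

	int broken_read(struct bar *q)
	{
		return q->rcu_protected_pointer->a;	/* sparse complains */
	}

	int fixed_read(struct bar *q)
	{
		int a;

		rcu_read_lock();
		a = rcu_dereference(q->rcu_protected_pointer)->a;
		rcu_read_unlock();
		return a;
	}
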
diff --git a/Documentation/RCU/rcubarrier.txt b/Documentation/RCU/rcubarrier.txt
index 5d7759071a3e..a2782df69732 100644
--- a/Documentation/RCU/rcubarrier.txt
+++ b/Documentation/RCU/rcubarrier.txt
@@ -83,16 +83,15 @@ Pseudo-code using rcu_barrier() is as follows:
 2. Execute rcu_barrier().
 3. Allow the module to be unloaded.
 
-There are also rcu_barrier_bh(), rcu_barrier_sched(), and srcu_barrier()
-functions for the other flavors of RCU, and you of course must match
-the flavor of rcu_barrier() with that of call_rcu(). If your module
-uses multiple flavors of call_rcu(), then it must also use multiple
+There is also an srcu_barrier() function for SRCU, and you of course
+must match the flavor of rcu_barrier() with that of call_rcu(). If your
+module uses multiple flavors of call_rcu(), then it must also use multiple
 flavors of rcu_barrier() when unloading that module. For example, if
-it uses call_rcu_bh(), call_srcu() on srcu_struct_1, and call_srcu() on
+it uses call_rcu(), call_srcu() on srcu_struct_1, and call_srcu() on
 srcu_struct_2(), then the following three lines of code will be required
 when unloading:
 
- 1 rcu_barrier_bh();
+ 1 rcu_barrier();
  2 srcu_barrier(&srcu_struct_1);
  3 srcu_barrier(&srcu_struct_2);
 
@@ -185,12 +184,12 @@ module invokes call_rcu() from timers, you will need to first cancel all
 the timers, and only then invoke rcu_barrier() to wait for any remaining
 RCU callbacks to complete.
 
-Of course, if you module uses call_rcu_bh(), you will need to invoke
-rcu_barrier_bh() before unloading. Similarly, if your module uses
-call_rcu_sched(), you will need to invoke rcu_barrier_sched() before
-unloading. If your module uses call_rcu(), call_rcu_bh(), -and-
-call_rcu_sched(), then you will need to invoke each of rcu_barrier(),
-rcu_barrier_bh(), and rcu_barrier_sched().
+Of course, if your module uses call_rcu(), you will need to invoke
+rcu_barrier() before unloading. Similarly, if your module uses
+call_srcu(), you will need to invoke srcu_barrier() before unloading,
+and on the same srcu_struct structure. If your module uses call_rcu()
+-and- call_srcu(), then you will need to invoke rcu_barrier() -and-
+srcu_barrier().
 
 
 Implementing rcu_barrier()
@@ -223,8 +222,8 @@ shown below. Note that the final "1" in on_each_cpu()'s argument list
 ensures that all the calls to rcu_barrier_func() will have completed
 before on_each_cpu() returns. Line 9 then waits for the completion.
 
-This code was rewritten in 2008 to support rcu_barrier_bh() and
-rcu_barrier_sched() in addition to the original rcu_barrier().
+This code was rewritten in 2008 and several times thereafter, but this
+still gives the general idea.
 
 The rcu_barrier_func() runs on each CPU, where it invokes call_rcu()
 to post an RCU callback, as follows:
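
[Editor's note: the three-line unload requirement above, in context; a minimal sketch assuming a hypothetical module that posts callbacks from a timer (my_timer) and uses both srcu_struct_1 and srcu_struct_2 from the text.]

	static void __exit my_module_exit(void)
	{
		del_timer_sync(&my_timer);	/* Stop posting new callbacks. */
		rcu_barrier();			/* Flush call_rcu() callbacks. */
		srcu_barrier(&srcu_struct_1);	/* Flush call_srcu() on _1. */
		srcu_barrier(&srcu_struct_2);	/* Flush call_srcu() on _2. */
	}
	module_exit(my_module_exit);
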
diff --git a/Documentation/RCU/whatisRCU.txt b/Documentation/RCU/whatisRCU.txt
index 1ace20815bb1..981651a8b65d 100644
--- a/Documentation/RCU/whatisRCU.txt
+++ b/Documentation/RCU/whatisRCU.txt
@@ -310,7 +310,7 @@ reader, updater, and reclaimer.
 
 
     rcu_assign_pointer()
-                                +--------+
+                            +--------+
     +---------------------->| reader |---------+
     |                       +--------+         |
     |                           |              |
@@ -318,12 +318,12 @@ reader, updater, and reclaimer.
     |                           |              | rcu_read_lock()
     |                           |              | rcu_read_unlock()
     |    rcu_dereference()      |              |
     +---------+                 |              |
-    | updater |<---------------------+         |
+    | updater |<----------------+              |
     +---------+                                V
         |                                    +-----------+
         +----------------------------------->| reclaimer |
                                              +-----------+
           Defer:
           synchronize_rcu() & call_rcu()
 
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 2b8ee90bb644..d377a2166b79 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -3623,7 +3623,9 @@
 			see CONFIG_RAS_CEC help text.
 
 	rcu_nocbs=	[KNL]
-			The argument is a cpu list, as described above.
+			The argument is a cpu list, as described above,
+			except that the string "all" can be used to
+			specify every CPU on the system.
 
 			In kernels built with CONFIG_RCU_NOCB_CPU=y, set
 			the specified list of CPUs to be no-callback CPUs.
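
[Editor's note: two hypothetical boot lines for concreteness; the first uses the standard cpu-list form, the second the "all" shorthand added by this patch.]

	rcu_nocbs=1-7
	rcu_nocbs=all
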
diff --git a/MAINTAINERS b/MAINTAINERS
index e17ebf70b548..a9b5270d006e 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8983,7 +8983,7 @@ R: Daniel Lustig <dlustig@nvidia.com>
 L:	linux-kernel@vger.kernel.org
 L:	linux-arch@vger.kernel.org
 S:	Supported
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git dev
 F:	tools/memory-model/
 F:	Documentation/atomic_bitops.txt
 F:	Documentation/atomic_t.txt
@@ -13031,9 +13031,9 @@ M: Josh Triplett <josh@joshtriplett.org>
 R:	Steven Rostedt <rostedt@goodmis.org>
 R:	Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
 R:	Lai Jiangshan <jiangshanlai@gmail.com>
-L:	linux-kernel@vger.kernel.org
+L:	rcu@vger.kernel.org
 S:	Supported
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git dev
 F:	tools/testing/selftests/rcutorture
 
 RDC R-321X SoC
@@ -13079,10 +13079,10 @@ R: Steven Rostedt <rostedt@goodmis.org>
 R:	Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
 R:	Lai Jiangshan <jiangshanlai@gmail.com>
 R:	Joel Fernandes <joel@joelfernandes.org>
-L:	linux-kernel@vger.kernel.org
+L:	rcu@vger.kernel.org
 W:	http://www.rdrop.com/users/paulmck/RCU/
 S:	Supported
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git dev
 F:	Documentation/RCU/
 X:	Documentation/RCU/torture.txt
 F:	include/linux/rcu*
@@ -14234,10 +14234,10 @@ M: "Paul E. McKenney" <paulmck@linux.ibm.com>
 M:	Josh Triplett <josh@joshtriplett.org>
 R:	Steven Rostedt <rostedt@goodmis.org>
 R:	Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
-L:	linux-kernel@vger.kernel.org
+L:	rcu@vger.kernel.org
 W:	http://www.rdrop.com/users/paulmck/RCU/
 S:	Supported
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git dev
 F:	include/linux/srcu*.h
 F:	kernel/rcu/srcu*.c
 
@@ -15684,7 +15684,7 @@ M: "Paul E. McKenney" <paulmck@linux.ibm.com>
 M:	Josh Triplett <josh@joshtriplett.org>
 L:	linux-kernel@vger.kernel.org
 S:	Supported
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git dev
 F:	Documentation/RCU/torture.txt
 F:	kernel/torture.c
 F:	kernel/rcu/rcutorture.c
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 470601980794..739c5b4830d7 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -388,7 +388,7 @@ static void nvme_free_ns_head(struct kref *ref)
 	nvme_mpath_remove_disk(head);
 	ida_simple_remove(&head->subsys->ns_ida, head->instance);
 	list_del_init(&head->entry);
-	cleanup_srcu_struct_quiesced(&head->srcu);
+	cleanup_srcu_struct(&head->srcu);
 	nvme_put_subsystem(head->subsys);
 	kfree(head);
 }
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 6cdb1db776cf..922bb6848813 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -878,9 +878,11 @@ static inline void rcu_head_init(struct rcu_head *rhp)
 static inline bool
 rcu_head_after_call_rcu(struct rcu_head *rhp, rcu_callback_t f)
 {
-	if (READ_ONCE(rhp->func) == f)
+	rcu_callback_t func = READ_ONCE(rhp->func);
+
+	if (func == f)
 		return true;
-	WARN_ON_ONCE(READ_ONCE(rhp->func) != (rcu_callback_t)~0L);
+	WARN_ON_ONCE(func != (rcu_callback_t)~0L);
 	return false;
 }
 
diff --git a/include/linux/srcu.h b/include/linux/srcu.h
index c495b2d51569..e432cc92c73d 100644
--- a/include/linux/srcu.h
+++ b/include/linux/srcu.h
@@ -56,45 +56,11 @@ struct srcu_struct { };
 
 void call_srcu(struct srcu_struct *ssp, struct rcu_head *head,
 		void (*func)(struct rcu_head *head));
-void _cleanup_srcu_struct(struct srcu_struct *ssp, bool quiesced);
+void cleanup_srcu_struct(struct srcu_struct *ssp);
 int __srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp);
 void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases(ssp);
 void synchronize_srcu(struct srcu_struct *ssp);
 
-/**
- * cleanup_srcu_struct - deconstruct a sleep-RCU structure
- * @ssp: structure to clean up.
- *
- * Must invoke this after you are finished using a given srcu_struct that
- * was initialized via init_srcu_struct(), else you leak memory.
- */
-static inline void cleanup_srcu_struct(struct srcu_struct *ssp)
-{
-	_cleanup_srcu_struct(ssp, false);
-}
-
-/**
- * cleanup_srcu_struct_quiesced - deconstruct a quiesced sleep-RCU structure
- * @ssp: structure to clean up.
- *
- * Must invoke this after you are finished using a given srcu_struct that
- * was initialized via init_srcu_struct(), else you leak memory. Also,
- * all grace-period processing must have completed.
- *
- * "Completed" means that the last synchronize_srcu() and
- * synchronize_srcu_expedited() calls must have returned before the call
- * to cleanup_srcu_struct_quiesced(). It also means that the callback
- * from the last call_srcu() must have been invoked before the call to
- * cleanup_srcu_struct_quiesced(), but you can use srcu_barrier() to help
- * with this last. Violating these rules will get you a WARN_ON() splat
- * (with high probability, anyway), and will also cause the srcu_struct
- * to be leaked.
- */
-static inline void cleanup_srcu_struct_quiesced(struct srcu_struct *ssp)
-{
-	_cleanup_srcu_struct(ssp, true);
-}
-
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 
 /**
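
[Editor's note: with the quiesced variant gone, callers that relied on cleanup_srcu_struct_quiesced() can use the pattern below; a minimal sketch with a hypothetical my_srcu. srcu_barrier() waits for pending call_srcu() callbacks, and cleanup_srcu_struct() now flushes any remaining grace-period work itself.]

	static struct srcu_struct my_srcu;	/* hypothetical */

	static int my_init(void)
	{
		return init_srcu_struct(&my_srcu);
	}

	static void my_exit(void)
	{
		/* Ensure no new call_srcu() invocations can occur, then: */
		srcu_barrier(&my_srcu);		/* Wait for pending callbacks. */
		cleanup_srcu_struct(&my_srcu);	/* Flushes remaining GP work. */
	}
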
diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
index ad40a2617063..80a463d31a8d 100644
--- a/kernel/locking/locktorture.c
+++ b/kernel/locking/locktorture.c
@@ -829,7 +829,9 @@ static void lock_torture_cleanup(void)
829 "End of test: SUCCESS"); 829 "End of test: SUCCESS");
830 830
831 kfree(cxt.lwsa); 831 kfree(cxt.lwsa);
832 cxt.lwsa = NULL;
832 kfree(cxt.lrsa); 833 kfree(cxt.lrsa);
834 cxt.lrsa = NULL;
833 835
834end: 836end:
835 torture_cleanup_end(); 837 torture_cleanup_end();
diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h
index acee72c0b24b..4b58c907b4b7 100644
--- a/kernel/rcu/rcu.h
+++ b/kernel/rcu/rcu.h
@@ -233,6 +233,7 @@ static inline bool __rcu_reclaim(const char *rn, struct rcu_head *head)
 #ifdef CONFIG_RCU_STALL_COMMON
 
 extern int rcu_cpu_stall_suppress;
+extern int rcu_cpu_stall_timeout;
 int rcu_jiffies_till_stall_check(void);
 
 #define rcu_ftrace_dump_stall_suppress() \
diff --git a/kernel/rcu/rcuperf.c b/kernel/rcu/rcuperf.c
index c29761152874..7a6890b23c5f 100644
--- a/kernel/rcu/rcuperf.c
+++ b/kernel/rcu/rcuperf.c
@@ -494,6 +494,10 @@ rcu_perf_cleanup(void)
 
 	if (torture_cleanup_begin())
 		return;
+	if (!cur_ops) {
+		torture_cleanup_end();
+		return;
+	}
 
 	if (reader_tasks) {
 		for (i = 0; i < nrealreaders; i++)
@@ -614,6 +618,7 @@ rcu_perf_init(void)
 		pr_cont("\n");
 		WARN_ON(!IS_MODULE(CONFIG_RCU_PERF_TEST));
 		firsterr = -EINVAL;
+		cur_ops = NULL;
 		goto unwind;
 	}
 	if (cur_ops->init)
diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
index f14d1b18a74f..efaa5b3f4d3f 100644
--- a/kernel/rcu/rcutorture.c
+++ b/kernel/rcu/rcutorture.c
@@ -299,7 +299,6 @@ struct rcu_torture_ops {
 	int irq_capable;
 	int can_boost;
 	int extendables;
-	int ext_irq_conflict;
 	const char *name;
 };
 
@@ -592,12 +591,7 @@ static void srcu_torture_init(void)
 
 static void srcu_torture_cleanup(void)
 {
-	static DEFINE_TORTURE_RANDOM(rand);
-
-	if (torture_random(&rand) & 0x800)
-		cleanup_srcu_struct(&srcu_ctld);
-	else
-		cleanup_srcu_struct_quiesced(&srcu_ctld);
+	cleanup_srcu_struct(&srcu_ctld);
 	srcu_ctlp = &srcu_ctl; /* In case of a later rcutorture run. */
 }
 
@@ -1160,7 +1154,7 @@ rcutorture_extend_mask(int oldmask, struct torture_random_state *trsp)
 	unsigned long randmask2 = randmask1 >> 3;
 
 	WARN_ON_ONCE(mask >> RCUTORTURE_RDR_SHIFT);
-	/* Most of the time lots of bits, half the time only one bit. */
+	/* Mostly only one bit (need preemption!), sometimes lots of bits. */
 	if (!(randmask1 & 0x7))
 		mask = mask & randmask2;
 	else
@@ -1170,10 +1164,6 @@ rcutorture_extend_mask(int oldmask, struct torture_random_state *trsp)
 	    ((!(mask & RCUTORTURE_RDR_BH) && (oldmask & RCUTORTURE_RDR_BH)) ||
 	     (!(mask & RCUTORTURE_RDR_RBH) && (oldmask & RCUTORTURE_RDR_RBH))))
 		mask |= RCUTORTURE_RDR_BH | RCUTORTURE_RDR_RBH;
-	if ((mask & RCUTORTURE_RDR_IRQ) &&
-	    !(mask & cur_ops->ext_irq_conflict) &&
-	    (oldmask & cur_ops->ext_irq_conflict))
-		mask |= cur_ops->ext_irq_conflict; /* Or if readers object. */
 	return mask ?: RCUTORTURE_RDR_RCU;
 }
 
@@ -1848,7 +1838,7 @@ static int rcutorture_oom_notify(struct notifier_block *self,
 	WARN(1, "%s invoked upon OOM during forward-progress testing.\n",
 	     __func__);
 	rcu_torture_fwd_cb_hist();
-	rcu_fwd_progress_check(1 + (jiffies - READ_ONCE(rcu_fwd_startat) / 2));
+	rcu_fwd_progress_check(1 + (jiffies - READ_ONCE(rcu_fwd_startat)) / 2);
 	WRITE_ONCE(rcu_fwd_emergency_stop, true);
 	smp_mb(); /* Emergency stop before free and wait to avoid hangs. */
 	pr_info("%s: Freed %lu RCU callbacks.\n",
@@ -2094,6 +2084,10 @@ rcu_torture_cleanup(void)
 		cur_ops->cb_barrier();
 		return;
 	}
+	if (!cur_ops) {
+		torture_cleanup_end();
+		return;
+	}
 
 	rcu_torture_barrier_cleanup();
 	torture_stop_kthread(rcu_torture_fwd_prog, fwd_prog_task);
@@ -2267,6 +2261,7 @@ rcu_torture_init(void)
 		pr_cont("\n");
 		WARN_ON(!IS_MODULE(CONFIG_RCU_TORTURE_TEST));
 		firsterr = -EINVAL;
+		cur_ops = NULL;
 		goto unwind;
 	}
 	if (cur_ops->fqs == NULL && fqs_duration != 0) {
diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
index 5d4a39a6505a..44d6606b8325 100644
--- a/kernel/rcu/srcutiny.c
+++ b/kernel/rcu/srcutiny.c
@@ -76,19 +76,16 @@ EXPORT_SYMBOL_GPL(init_srcu_struct);
  * Must invoke this after you are finished using a given srcu_struct that
  * was initialized via init_srcu_struct(), else you leak memory.
  */
-void _cleanup_srcu_struct(struct srcu_struct *ssp, bool quiesced)
+void cleanup_srcu_struct(struct srcu_struct *ssp)
 {
 	WARN_ON(ssp->srcu_lock_nesting[0] || ssp->srcu_lock_nesting[1]);
-	if (quiesced)
-		WARN_ON(work_pending(&ssp->srcu_work));
-	else
-		flush_work(&ssp->srcu_work);
+	flush_work(&ssp->srcu_work);
 	WARN_ON(ssp->srcu_gp_running);
 	WARN_ON(ssp->srcu_gp_waiting);
 	WARN_ON(ssp->srcu_cb_head);
 	WARN_ON(&ssp->srcu_cb_head != ssp->srcu_cb_tail);
 }
-EXPORT_SYMBOL_GPL(_cleanup_srcu_struct);
+EXPORT_SYMBOL_GPL(cleanup_srcu_struct);
 
 /*
  * Removes the count for the old reader from the appropriate element of
diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index a60b8ba9e1ac..9b761e546de8 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -360,8 +360,14 @@ static unsigned long srcu_get_delay(struct srcu_struct *ssp)
 	return SRCU_INTERVAL;
 }
 
-/* Helper for cleanup_srcu_struct() and cleanup_srcu_struct_quiesced(). */
-void _cleanup_srcu_struct(struct srcu_struct *ssp, bool quiesced)
+/**
+ * cleanup_srcu_struct - deconstruct a sleep-RCU structure
+ * @ssp: structure to clean up.
+ *
+ * Must invoke this after you are finished using a given srcu_struct that
+ * was initialized via init_srcu_struct(), else you leak memory.
+ */
+void cleanup_srcu_struct(struct srcu_struct *ssp)
 {
 	int cpu;
 
@@ -369,24 +375,14 @@ void _cleanup_srcu_struct(struct srcu_struct *ssp, bool quiesced)
 		return; /* Just leak it! */
 	if (WARN_ON(srcu_readers_active(ssp)))
 		return; /* Just leak it! */
-	if (quiesced) {
-		if (WARN_ON(delayed_work_pending(&ssp->work)))
-			return; /* Just leak it! */
-	} else {
-		flush_delayed_work(&ssp->work);
-	}
+	flush_delayed_work(&ssp->work);
 	for_each_possible_cpu(cpu) {
 		struct srcu_data *sdp = per_cpu_ptr(ssp->sda, cpu);
 
-		if (quiesced) {
-			if (WARN_ON(timer_pending(&sdp->delay_work)))
-				return; /* Just leak it! */
-			if (WARN_ON(work_pending(&sdp->work)))
-				return; /* Just leak it! */
-		} else {
-			del_timer_sync(&sdp->delay_work);
-			flush_work(&sdp->work);
-		}
+		del_timer_sync(&sdp->delay_work);
+		flush_work(&sdp->work);
+		if (WARN_ON(rcu_segcblist_n_cbs(&sdp->srcu_cblist)))
+			return; /* Forgot srcu_barrier(), so just leak it! */
 	}
 	if (WARN_ON(rcu_seq_state(READ_ONCE(ssp->srcu_gp_seq)) != SRCU_STATE_IDLE) ||
 	    WARN_ON(srcu_readers_active(ssp))) {
@@ -397,7 +393,7 @@ void _cleanup_srcu_struct(struct srcu_struct *ssp, bool quiesced)
 	free_percpu(ssp->sda);
 	ssp->sda = NULL;
 }
-EXPORT_SYMBOL_GPL(_cleanup_srcu_struct);
+EXPORT_SYMBOL_GPL(cleanup_srcu_struct);
 
 /*
  * Counts the new reader in the appropriate per-CPU element of the
diff --git a/kernel/rcu/tiny.c b/kernel/rcu/tiny.c
index 911bd9076d43..477b4eb44af5 100644
--- a/kernel/rcu/tiny.c
+++ b/kernel/rcu/tiny.c
@@ -52,7 +52,7 @@ void rcu_qs(void)
 	local_irq_save(flags);
 	if (rcu_ctrlblk.donetail != rcu_ctrlblk.curtail) {
 		rcu_ctrlblk.donetail = rcu_ctrlblk.curtail;
-		raise_softirq(RCU_SOFTIRQ);
+		raise_softirq_irqoff(RCU_SOFTIRQ);
 	}
 	local_irq_restore(flags);
 }
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index acd6ccf56faf..ec77ec336f58 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -102,11 +102,6 @@ int rcu_num_lvls __read_mostly = RCU_NUM_LVLS;
 /* Number of rcu_nodes at specified level. */
 int num_rcu_lvl[] = NUM_RCU_LVL_INIT;
 int rcu_num_nodes __read_mostly = NUM_RCU_NODES; /* Total # rcu_nodes in use. */
-/* panic() on RCU Stall sysctl. */
-int sysctl_panic_on_rcu_stall __read_mostly;
-/* Commandeer a sysrq key to dump RCU's tree. */
-static bool sysrq_rcu;
-module_param(sysrq_rcu, bool, 0444);
 
 /*
  * The rcu_scheduler_active variable is initialized to the value
@@ -149,7 +144,7 @@ static void sync_sched_exp_online_cleanup(int cpu);
 
 /* rcuc/rcub kthread realtime priority */
 static int kthread_prio = IS_ENABLED(CONFIG_RCU_BOOST) ? 1 : 0;
-module_param(kthread_prio, int, 0644);
+module_param(kthread_prio, int, 0444);
 
 /* Delay in jiffies for grace-period initialization delays, debug only. */
 
@@ -406,7 +401,7 @@ static bool rcu_kick_kthreads;
  */
 static ulong jiffies_till_sched_qs = ULONG_MAX;
 module_param(jiffies_till_sched_qs, ulong, 0444);
-static ulong jiffies_to_sched_qs; /* Adjusted version of above if not default */
+static ulong jiffies_to_sched_qs; /* See adjust_jiffies_till_sched_qs(). */
 module_param(jiffies_to_sched_qs, ulong, 0444); /* Display only! */
 
 /*
@@ -424,6 +419,7 @@ static void adjust_jiffies_till_sched_qs(void)
 		WRITE_ONCE(jiffies_to_sched_qs, jiffies_till_sched_qs);
 		return;
 	}
+	/* Otherwise, set to third fqs scan, but bound below on large system. */
 	j = READ_ONCE(jiffies_till_first_fqs) +
 	    2 * READ_ONCE(jiffies_till_next_fqs);
 	if (j < HZ / 10 + nr_cpu_ids / RCU_JIFFIES_FQS_DIV)
@@ -513,74 +509,6 @@ static const char *gp_state_getname(short gs)
 }
 
 /*
- * Show the state of the grace-period kthreads.
- */
-void show_rcu_gp_kthreads(void)
-{
-	int cpu;
-	unsigned long j;
-	unsigned long ja;
-	unsigned long jr;
-	unsigned long jw;
-	struct rcu_data *rdp;
-	struct rcu_node *rnp;
-
-	j = jiffies;
-	ja = j - READ_ONCE(rcu_state.gp_activity);
-	jr = j - READ_ONCE(rcu_state.gp_req_activity);
-	jw = j - READ_ONCE(rcu_state.gp_wake_time);
-	pr_info("%s: wait state: %s(%d) ->state: %#lx delta ->gp_activity %lu ->gp_req_activity %lu ->gp_wake_time %lu ->gp_wake_seq %ld ->gp_seq %ld ->gp_seq_needed %ld ->gp_flags %#x\n",
-		rcu_state.name, gp_state_getname(rcu_state.gp_state),
-		rcu_state.gp_state,
-		rcu_state.gp_kthread ? rcu_state.gp_kthread->state : 0x1ffffL,
-		ja, jr, jw, (long)READ_ONCE(rcu_state.gp_wake_seq),
-		(long)READ_ONCE(rcu_state.gp_seq),
-		(long)READ_ONCE(rcu_get_root()->gp_seq_needed),
-		READ_ONCE(rcu_state.gp_flags));
-	rcu_for_each_node_breadth_first(rnp) {
-		if (ULONG_CMP_GE(rcu_state.gp_seq, rnp->gp_seq_needed))
-			continue;
-		pr_info("\trcu_node %d:%d ->gp_seq %ld ->gp_seq_needed %ld\n",
-			rnp->grplo, rnp->grphi, (long)rnp->gp_seq,
-			(long)rnp->gp_seq_needed);
-		if (!rcu_is_leaf_node(rnp))
-			continue;
-		for_each_leaf_node_possible_cpu(rnp, cpu) {
-			rdp = per_cpu_ptr(&rcu_data, cpu);
-			if (rdp->gpwrap ||
-			    ULONG_CMP_GE(rcu_state.gp_seq,
-					 rdp->gp_seq_needed))
-				continue;
-			pr_info("\tcpu %d ->gp_seq_needed %ld\n",
-				cpu, (long)rdp->gp_seq_needed);
-		}
-	}
-	/* sched_show_task(rcu_state.gp_kthread); */
-}
-EXPORT_SYMBOL_GPL(show_rcu_gp_kthreads);
-
-/* Dump grace-period-request information due to commandeered sysrq. */
-static void sysrq_show_rcu(int key)
-{
-	show_rcu_gp_kthreads();
-}
-
-static struct sysrq_key_op sysrq_rcudump_op = {
-	.handler = sysrq_show_rcu,
-	.help_msg = "show-rcu(y)",
-	.action_msg = "Show RCU tree",
-	.enable_mask = SYSRQ_ENABLE_DUMP,
-};
-
-static int __init rcu_sysrq_init(void)
-{
-	if (sysrq_rcu)
-		return register_sysrq_key('y', &sysrq_rcudump_op);
-	return 0;
-}
-early_initcall(rcu_sysrq_init);
-
-/*
  * Send along grace-period-related data for rcutorture diagnostics.
  */
 void rcutorture_get_gp_data(enum rcutorture_type test_type, int *flags,
@@ -1034,27 +962,6 @@ static int dyntick_save_progress_counter(struct rcu_data *rdp)
1034} 962}
1035 963
1036/* 964/*
1037 * Handler for the irq_work request posted when a grace period has
1038 * gone on for too long, but not yet long enough for an RCU CPU
1039 * stall warning. Set state appropriately, but just complain if
1040 * there is unexpected state on entry.
1041 */
1042static void rcu_iw_handler(struct irq_work *iwp)
1043{
1044 struct rcu_data *rdp;
1045 struct rcu_node *rnp;
1046
1047 rdp = container_of(iwp, struct rcu_data, rcu_iw);
1048 rnp = rdp->mynode;
1049 raw_spin_lock_rcu_node(rnp);
1050 if (!WARN_ON_ONCE(!rdp->rcu_iw_pending)) {
1051 rdp->rcu_iw_gp_seq = rnp->gp_seq;
1052 rdp->rcu_iw_pending = false;
1053 }
1054 raw_spin_unlock_rcu_node(rnp);
1055}
1056
1057/*
1058 * Return true if the specified CPU has passed through a quiescent 965 * Return true if the specified CPU has passed through a quiescent
 1059 * state by virtue of being in or having passed through a dynticks 966 * state by virtue of being in or having passed through a dynticks
1060 * idle state since the last call to dyntick_save_progress_counter() 967 * idle state since the last call to dyntick_save_progress_counter()
@@ -1167,295 +1074,6 @@ static int rcu_implicit_dynticks_qs(struct rcu_data *rdp)
1167 return 0; 1074 return 0;
1168} 1075}
1169 1076
1170static void record_gp_stall_check_time(void)
1171{
1172 unsigned long j = jiffies;
1173 unsigned long j1;
1174
1175 rcu_state.gp_start = j;
1176 j1 = rcu_jiffies_till_stall_check();
1177 /* Record ->gp_start before ->jiffies_stall. */
1178 smp_store_release(&rcu_state.jiffies_stall, j + j1); /* ^^^ */
1179 rcu_state.jiffies_resched = j + j1 / 2;
1180 rcu_state.n_force_qs_gpstart = READ_ONCE(rcu_state.n_force_qs);
1181}
1182
1183/*
1184 * Complain about starvation of grace-period kthread.
1185 */
1186static void rcu_check_gp_kthread_starvation(void)
1187{
1188 struct task_struct *gpk = rcu_state.gp_kthread;
1189 unsigned long j;
1190
1191 j = jiffies - READ_ONCE(rcu_state.gp_activity);
1192 if (j > 2 * HZ) {
1193 pr_err("%s kthread starved for %ld jiffies! g%ld f%#x %s(%d) ->state=%#lx ->cpu=%d\n",
1194 rcu_state.name, j,
1195 (long)rcu_seq_current(&rcu_state.gp_seq),
1196 READ_ONCE(rcu_state.gp_flags),
1197 gp_state_getname(rcu_state.gp_state), rcu_state.gp_state,
1198 gpk ? gpk->state : ~0, gpk ? task_cpu(gpk) : -1);
1199 if (gpk) {
1200 pr_err("RCU grace-period kthread stack dump:\n");
1201 sched_show_task(gpk);
1202 wake_up_process(gpk);
1203 }
1204 }
1205}
1206
1207/*
1208 * Dump stacks of all tasks running on stalled CPUs. First try using
1209 * NMIs, but fall back to manual remote stack tracing on architectures
1210 * that don't support NMI-based stack dumps. The NMI-triggered stack
1211 * traces are more accurate because they are printed by the target CPU.
1212 */
1213static void rcu_dump_cpu_stacks(void)
1214{
1215 int cpu;
1216 unsigned long flags;
1217 struct rcu_node *rnp;
1218
1219 rcu_for_each_leaf_node(rnp) {
1220 raw_spin_lock_irqsave_rcu_node(rnp, flags);
1221 for_each_leaf_node_possible_cpu(rnp, cpu)
1222 if (rnp->qsmask & leaf_node_cpu_bit(rnp, cpu))
1223 if (!trigger_single_cpu_backtrace(cpu))
1224 dump_cpu_task(cpu);
1225 raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
1226 }
1227}
1228
1229/*
1230 * If too much time has passed in the current grace period, and if
1231 * so configured, go kick the relevant kthreads.
1232 */
1233static void rcu_stall_kick_kthreads(void)
1234{
1235 unsigned long j;
1236
1237 if (!rcu_kick_kthreads)
1238 return;
1239 j = READ_ONCE(rcu_state.jiffies_kick_kthreads);
1240 if (time_after(jiffies, j) && rcu_state.gp_kthread &&
1241 (rcu_gp_in_progress() || READ_ONCE(rcu_state.gp_flags))) {
1242 WARN_ONCE(1, "Kicking %s grace-period kthread\n",
1243 rcu_state.name);
1244 rcu_ftrace_dump(DUMP_ALL);
1245 wake_up_process(rcu_state.gp_kthread);
1246 WRITE_ONCE(rcu_state.jiffies_kick_kthreads, j + HZ);
1247 }
1248}
1249
1250static void panic_on_rcu_stall(void)
1251{
1252 if (sysctl_panic_on_rcu_stall)
1253 panic("RCU Stall\n");
1254}
1255
1256static void print_other_cpu_stall(unsigned long gp_seq)
1257{
1258 int cpu;
1259 unsigned long flags;
1260 unsigned long gpa;
1261 unsigned long j;
1262 int ndetected = 0;
1263 struct rcu_node *rnp = rcu_get_root();
1264 long totqlen = 0;
1265
1266 /* Kick and suppress, if so configured. */
1267 rcu_stall_kick_kthreads();
1268 if (rcu_cpu_stall_suppress)
1269 return;
1270
1271 /*
1272 * OK, time to rat on our buddy...
1273 * See Documentation/RCU/stallwarn.txt for info on how to debug
1274 * RCU CPU stall warnings.
1275 */
1276 pr_err("INFO: %s detected stalls on CPUs/tasks:", rcu_state.name);
1277 print_cpu_stall_info_begin();
1278 rcu_for_each_leaf_node(rnp) {
1279 raw_spin_lock_irqsave_rcu_node(rnp, flags);
1280 ndetected += rcu_print_task_stall(rnp);
1281 if (rnp->qsmask != 0) {
1282 for_each_leaf_node_possible_cpu(rnp, cpu)
1283 if (rnp->qsmask & leaf_node_cpu_bit(rnp, cpu)) {
1284 print_cpu_stall_info(cpu);
1285 ndetected++;
1286 }
1287 }
1288 raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
1289 }
1290
1291 print_cpu_stall_info_end();
1292 for_each_possible_cpu(cpu)
1293 totqlen += rcu_get_n_cbs_cpu(cpu);
1294 pr_cont("(detected by %d, t=%ld jiffies, g=%ld, q=%lu)\n",
1295 smp_processor_id(), (long)(jiffies - rcu_state.gp_start),
1296 (long)rcu_seq_current(&rcu_state.gp_seq), totqlen);
1297 if (ndetected) {
1298 rcu_dump_cpu_stacks();
1299
1300 /* Complain about tasks blocking the grace period. */
1301 rcu_print_detail_task_stall();
1302 } else {
1303 if (rcu_seq_current(&rcu_state.gp_seq) != gp_seq) {
1304 pr_err("INFO: Stall ended before state dump start\n");
1305 } else {
1306 j = jiffies;
1307 gpa = READ_ONCE(rcu_state.gp_activity);
1308 pr_err("All QSes seen, last %s kthread activity %ld (%ld-%ld), jiffies_till_next_fqs=%ld, root ->qsmask %#lx\n",
1309 rcu_state.name, j - gpa, j, gpa,
1310 READ_ONCE(jiffies_till_next_fqs),
1311 rcu_get_root()->qsmask);
1312 /* In this case, the current CPU might be at fault. */
1313 sched_show_task(current);
1314 }
1315 }
1316 /* Rewrite if needed in case of slow consoles. */
1317 if (ULONG_CMP_GE(jiffies, READ_ONCE(rcu_state.jiffies_stall)))
1318 WRITE_ONCE(rcu_state.jiffies_stall,
1319 jiffies + 3 * rcu_jiffies_till_stall_check() + 3);
1320
1321 rcu_check_gp_kthread_starvation();
1322
1323 panic_on_rcu_stall();
1324
1325 rcu_force_quiescent_state(); /* Kick them all. */
1326}
1327
1328static void print_cpu_stall(void)
1329{
1330 int cpu;
1331 unsigned long flags;
1332 struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
1333 struct rcu_node *rnp = rcu_get_root();
1334 long totqlen = 0;
1335
1336 /* Kick and suppress, if so configured. */
1337 rcu_stall_kick_kthreads();
1338 if (rcu_cpu_stall_suppress)
1339 return;
1340
1341 /*
1342 * OK, time to rat on ourselves...
1343 * See Documentation/RCU/stallwarn.txt for info on how to debug
1344 * RCU CPU stall warnings.
1345 */
1346 pr_err("INFO: %s self-detected stall on CPU", rcu_state.name);
1347 print_cpu_stall_info_begin();
1348 raw_spin_lock_irqsave_rcu_node(rdp->mynode, flags);
1349 print_cpu_stall_info(smp_processor_id());
1350 raw_spin_unlock_irqrestore_rcu_node(rdp->mynode, flags);
1351 print_cpu_stall_info_end();
1352 for_each_possible_cpu(cpu)
1353 totqlen += rcu_get_n_cbs_cpu(cpu);
1354 pr_cont(" (t=%lu jiffies g=%ld q=%lu)\n",
1355 jiffies - rcu_state.gp_start,
1356 (long)rcu_seq_current(&rcu_state.gp_seq), totqlen);
1357
1358 rcu_check_gp_kthread_starvation();
1359
1360 rcu_dump_cpu_stacks();
1361
1362 raw_spin_lock_irqsave_rcu_node(rnp, flags);
1363 /* Rewrite if needed in case of slow consoles. */
1364 if (ULONG_CMP_GE(jiffies, READ_ONCE(rcu_state.jiffies_stall)))
1365 WRITE_ONCE(rcu_state.jiffies_stall,
1366 jiffies + 3 * rcu_jiffies_till_stall_check() + 3);
1367 raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
1368
1369 panic_on_rcu_stall();
1370
1371 /*
1372 * Attempt to revive the RCU machinery by forcing a context switch.
1373 *
1374 * A context switch would normally allow the RCU state machine to make
1375 * progress and it could be we're stuck in kernel space without context
1376 * switches for an entirely unreasonable amount of time.
1377 */
1378 set_tsk_need_resched(current);
1379 set_preempt_need_resched();
1380}
1381
1382static void check_cpu_stall(struct rcu_data *rdp)
1383{
1384 unsigned long gs1;
1385 unsigned long gs2;
1386 unsigned long gps;
1387 unsigned long j;
1388 unsigned long jn;
1389 unsigned long js;
1390 struct rcu_node *rnp;
1391
1392 if ((rcu_cpu_stall_suppress && !rcu_kick_kthreads) ||
1393 !rcu_gp_in_progress())
1394 return;
1395 rcu_stall_kick_kthreads();
1396 j = jiffies;
1397
1398 /*
1399 * Lots of memory barriers to reject false positives.
1400 *
1401 * The idea is to pick up rcu_state.gp_seq, then
1402 * rcu_state.jiffies_stall, then rcu_state.gp_start, and finally
1403 * another copy of rcu_state.gp_seq. These values are updated in
1404 * the opposite order with memory barriers (or equivalent) during
1405 * grace-period initialization and cleanup. Now, a false positive
 1406 * can occur if we get a new value of rcu_state.gp_start and an old
1407 * value of rcu_state.jiffies_stall. But given the memory barriers,
1408 * the only way that this can happen is if one grace period ends
1409 * and another starts between these two fetches. This is detected
1410 * by comparing the second fetch of rcu_state.gp_seq with the
1411 * previous fetch from rcu_state.gp_seq.
1412 *
1413 * Given this check, comparisons of jiffies, rcu_state.jiffies_stall,
1414 * and rcu_state.gp_start suffice to forestall false positives.
1415 */
1416 gs1 = READ_ONCE(rcu_state.gp_seq);
1417 smp_rmb(); /* Pick up ->gp_seq first... */
1418 js = READ_ONCE(rcu_state.jiffies_stall);
1419 smp_rmb(); /* ...then ->jiffies_stall before the rest... */
1420 gps = READ_ONCE(rcu_state.gp_start);
1421 smp_rmb(); /* ...and finally ->gp_start before ->gp_seq again. */
1422 gs2 = READ_ONCE(rcu_state.gp_seq);
1423 if (gs1 != gs2 ||
1424 ULONG_CMP_LT(j, js) ||
1425 ULONG_CMP_GE(gps, js))
1426 return; /* No stall or GP completed since entering function. */
1427 rnp = rdp->mynode;
1428 jn = jiffies + 3 * rcu_jiffies_till_stall_check() + 3;
1429 if (rcu_gp_in_progress() &&
1430 (READ_ONCE(rnp->qsmask) & rdp->grpmask) &&
1431 cmpxchg(&rcu_state.jiffies_stall, js, jn) == js) {
1432
1433 /* We haven't checked in, so go dump stack. */
1434 print_cpu_stall();
1435
1436 } else if (rcu_gp_in_progress() &&
1437 ULONG_CMP_GE(j, js + RCU_STALL_RAT_DELAY) &&
1438 cmpxchg(&rcu_state.jiffies_stall, js, jn) == js) {
1439
1440 /* They had a few time units to dump stack, so complain. */
1441 print_other_cpu_stall(gs2);
1442 }
1443}
1444
1445/**
1446 * rcu_cpu_stall_reset - prevent further stall warnings in current grace period
1447 *
1448 * Set the stall-warning timeout way off into the future, thus preventing
1449 * any RCU CPU stall-warning messages from appearing in the current set of
1450 * RCU grace periods.
1451 *
1452 * The caller must disable hard irqs.
1453 */
1454void rcu_cpu_stall_reset(void)
1455{
1456 WRITE_ONCE(rcu_state.jiffies_stall, jiffies + ULONG_MAX / 2);
1457}
1458
1459/* Trace-event wrapper function for trace_rcu_future_grace_period. */ 1077/* Trace-event wrapper function for trace_rcu_future_grace_period. */
1460static void trace_rcu_this_gp(struct rcu_node *rnp, struct rcu_data *rdp, 1078static void trace_rcu_this_gp(struct rcu_node *rnp, struct rcu_data *rdp,
1461 unsigned long gp_seq_req, const char *s) 1079 unsigned long gp_seq_req, const char *s)
@@ -1585,7 +1203,7 @@ static bool rcu_future_gp_cleanup(struct rcu_node *rnp)
1585static void rcu_gp_kthread_wake(void) 1203static void rcu_gp_kthread_wake(void)
1586{ 1204{
1587 if ((current == rcu_state.gp_kthread && 1205 if ((current == rcu_state.gp_kthread &&
1588 !in_interrupt() && !in_serving_softirq()) || 1206 !in_irq() && !in_serving_softirq()) ||
1589 !READ_ONCE(rcu_state.gp_flags) || 1207 !READ_ONCE(rcu_state.gp_flags) ||
1590 !rcu_state.gp_kthread) 1208 !rcu_state.gp_kthread)
1591 return; 1209 return;
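
The switch from in_interrupt() to in_irq() narrows the no-wakeup shortcut taken when the grace-period kthread invokes this function on itself. A hedged summary of the context predicates (per include/linux/preempt.h), with the final point an editorial reading of the change rather than the patch's changelog:

	/*
	 * in_irq()             - true only in hardirq context.
	 * in_serving_softirq() - true while a softirq handler executes.
	 * in_interrupt()       - true in hardirq, softirq, NMI, *and*
	 *                        bottom-half-disabled task context.
	 *
	 * So the shortcut now also applies when the GP kthread calls this
	 * with bottom halves disabled, skipping a pointless self-wakeup.
	 */
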
@@ -2295,11 +1913,10 @@ rcu_report_qs_rdp(int cpu, struct rcu_data *rdp)
2295 return; 1913 return;
2296 } 1914 }
2297 mask = rdp->grpmask; 1915 mask = rdp->grpmask;
1916 rdp->core_needs_qs = false;
2298 if ((rnp->qsmask & mask) == 0) { 1917 if ((rnp->qsmask & mask) == 0) {
2299 raw_spin_unlock_irqrestore_rcu_node(rnp, flags); 1918 raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
2300 } else { 1919 } else {
2301 rdp->core_needs_qs = false;
2302
2303 /* 1920 /*
2304 * This GP can't end until cpu checks in, so all of our 1921 * This GP can't end until cpu checks in, so all of our
2305 * callbacks can be processed during the next GP. 1922 * callbacks can be processed during the next GP.
@@ -2548,11 +2165,11 @@ void rcu_sched_clock_irq(int user)
2548} 2165}
2549 2166
2550/* 2167/*
2551 * Scan the leaf rcu_node structures, processing dyntick state for any that 2168 * Scan the leaf rcu_node structures. For each structure on which all
2552 * have not yet encountered a quiescent state, using the function specified. 2169 * CPUs have reported a quiescent state and on which there are tasks
2553 * Also initiate boosting for any threads blocked on the root rcu_node. 2170 * blocking the current grace period, initiate RCU priority boosting.
2554 * 2171 * Otherwise, invoke the specified function to check dyntick state for
2555 * The caller must have suppressed start of new grace periods. 2172 * each CPU that has not yet reported a quiescent state.
2556 */ 2173 */
2557static void force_qs_rnp(int (*f)(struct rcu_data *rdp)) 2174static void force_qs_rnp(int (*f)(struct rcu_data *rdp))
2558{ 2175{
@@ -2635,101 +2252,6 @@ void rcu_force_quiescent_state(void)
2635} 2252}
2636EXPORT_SYMBOL_GPL(rcu_force_quiescent_state); 2253EXPORT_SYMBOL_GPL(rcu_force_quiescent_state);
2637 2254
2638/*
2639 * This function checks for grace-period requests that fail to motivate
2640 * RCU to come out of its idle mode.
2641 */
2642void
2643rcu_check_gp_start_stall(struct rcu_node *rnp, struct rcu_data *rdp,
2644 const unsigned long gpssdelay)
2645{
2646 unsigned long flags;
2647 unsigned long j;
2648 struct rcu_node *rnp_root = rcu_get_root();
2649 static atomic_t warned = ATOMIC_INIT(0);
2650
2651 if (!IS_ENABLED(CONFIG_PROVE_RCU) || rcu_gp_in_progress() ||
2652 ULONG_CMP_GE(rnp_root->gp_seq, rnp_root->gp_seq_needed))
2653 return;
2654 j = jiffies; /* Expensive access, and in common case don't get here. */
2655 if (time_before(j, READ_ONCE(rcu_state.gp_req_activity) + gpssdelay) ||
2656 time_before(j, READ_ONCE(rcu_state.gp_activity) + gpssdelay) ||
2657 atomic_read(&warned))
2658 return;
2659
2660 raw_spin_lock_irqsave_rcu_node(rnp, flags);
2661 j = jiffies;
2662 if (rcu_gp_in_progress() ||
2663 ULONG_CMP_GE(rnp_root->gp_seq, rnp_root->gp_seq_needed) ||
2664 time_before(j, READ_ONCE(rcu_state.gp_req_activity) + gpssdelay) ||
2665 time_before(j, READ_ONCE(rcu_state.gp_activity) + gpssdelay) ||
2666 atomic_read(&warned)) {
2667 raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
2668 return;
2669 }
2670 /* Hold onto the leaf lock to make others see warned==1. */
2671
2672 if (rnp_root != rnp)
2673 raw_spin_lock_rcu_node(rnp_root); /* irqs already disabled. */
2674 j = jiffies;
2675 if (rcu_gp_in_progress() ||
2676 ULONG_CMP_GE(rnp_root->gp_seq, rnp_root->gp_seq_needed) ||
2677 time_before(j, rcu_state.gp_req_activity + gpssdelay) ||
2678 time_before(j, rcu_state.gp_activity + gpssdelay) ||
2679 atomic_xchg(&warned, 1)) {
2680 raw_spin_unlock_rcu_node(rnp_root); /* irqs remain disabled. */
2681 raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
2682 return;
2683 }
2684 WARN_ON(1);
2685 if (rnp_root != rnp)
2686 raw_spin_unlock_rcu_node(rnp_root);
2687 raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
2688 show_rcu_gp_kthreads();
2689}
2690
2691/*
2692 * Do a forward-progress check for rcutorture. This is normally invoked
2693 * due to an OOM event. The argument "j" gives the time period during
2694 * which rcutorture would like progress to have been made.
2695 */
2696void rcu_fwd_progress_check(unsigned long j)
2697{
2698 unsigned long cbs;
2699 int cpu;
2700 unsigned long max_cbs = 0;
2701 int max_cpu = -1;
2702 struct rcu_data *rdp;
2703
2704 if (rcu_gp_in_progress()) {
2705 pr_info("%s: GP age %lu jiffies\n",
2706 __func__, jiffies - rcu_state.gp_start);
2707 show_rcu_gp_kthreads();
2708 } else {
2709 pr_info("%s: Last GP end %lu jiffies ago\n",
2710 __func__, jiffies - rcu_state.gp_end);
2711 preempt_disable();
2712 rdp = this_cpu_ptr(&rcu_data);
2713 rcu_check_gp_start_stall(rdp->mynode, rdp, j);
2714 preempt_enable();
2715 }
2716 for_each_possible_cpu(cpu) {
2717 cbs = rcu_get_n_cbs_cpu(cpu);
2718 if (!cbs)
2719 continue;
2720 if (max_cpu < 0)
2721 pr_info("%s: callbacks", __func__);
2722 pr_cont(" %d: %lu", cpu, cbs);
2723 if (cbs <= max_cbs)
2724 continue;
2725 max_cbs = cbs;
2726 max_cpu = cpu;
2727 }
2728 if (max_cpu >= 0)
2729 pr_cont("\n");
2730}
2731EXPORT_SYMBOL_GPL(rcu_fwd_progress_check);
2732
2733/* Perform RCU core processing work for the current CPU. */ 2255/* Perform RCU core processing work for the current CPU. */
2734static __latent_entropy void rcu_core(struct softirq_action *unused) 2256static __latent_entropy void rcu_core(struct softirq_action *unused)
2735{ 2257{
@@ -3559,13 +3081,11 @@ static int rcu_pm_notify(struct notifier_block *self,
3559 switch (action) { 3081 switch (action) {
3560 case PM_HIBERNATION_PREPARE: 3082 case PM_HIBERNATION_PREPARE:
3561 case PM_SUSPEND_PREPARE: 3083 case PM_SUSPEND_PREPARE:
3562 if (nr_cpu_ids <= 256) /* Expediting bad for large systems. */ 3084 rcu_expedite_gp();
3563 rcu_expedite_gp();
3564 break; 3085 break;
3565 case PM_POST_HIBERNATION: 3086 case PM_POST_HIBERNATION:
3566 case PM_POST_SUSPEND: 3087 case PM_POST_SUSPEND:
3567 if (nr_cpu_ids <= 256) /* Expediting bad for large systems. */ 3088 rcu_unexpedite_gp();
3568 rcu_unexpedite_gp();
3569 break; 3089 break;
3570 default: 3090 default:
3571 break; 3091 break;
@@ -3742,8 +3262,7 @@ static void __init rcu_init_geometry(void)
3742 jiffies_till_first_fqs = d; 3262 jiffies_till_first_fqs = d;
3743 if (jiffies_till_next_fqs == ULONG_MAX) 3263 if (jiffies_till_next_fqs == ULONG_MAX)
3744 jiffies_till_next_fqs = d; 3264 jiffies_till_next_fqs = d;
3745 if (jiffies_till_sched_qs == ULONG_MAX) 3265 adjust_jiffies_till_sched_qs();
3746 adjust_jiffies_till_sched_qs();
3747 3266
3748 /* If the compile-time values are accurate, just leave. */ 3267 /* If the compile-time values are accurate, just leave. */
3749 if (rcu_fanout_leaf == RCU_FANOUT_LEAF && 3268 if (rcu_fanout_leaf == RCU_FANOUT_LEAF &&
@@ -3858,5 +3377,6 @@ void __init rcu_init(void)
3858 srcu_init(); 3377 srcu_init();
3859} 3378}
3860 3379
3380#include "tree_stall.h"
3861#include "tree_exp.h" 3381#include "tree_exp.h"
3862#include "tree_plugin.h" 3382#include "tree_plugin.h"
diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
index bb4f995f2d3f..e253d11af3c4 100644
--- a/kernel/rcu/tree.h
+++ b/kernel/rcu/tree.h
@@ -393,15 +393,13 @@ static const char *tp_rcu_varname __used __tracepoint_string = rcu_name;
393 393
394int rcu_dynticks_snap(struct rcu_data *rdp); 394int rcu_dynticks_snap(struct rcu_data *rdp);
395 395
396/* Forward declarations for rcutree_plugin.h */ 396/* Forward declarations for tree_plugin.h */
397static void rcu_bootup_announce(void); 397static void rcu_bootup_announce(void);
398static void rcu_qs(void); 398static void rcu_qs(void);
399static int rcu_preempt_blocked_readers_cgp(struct rcu_node *rnp); 399static int rcu_preempt_blocked_readers_cgp(struct rcu_node *rnp);
400#ifdef CONFIG_HOTPLUG_CPU 400#ifdef CONFIG_HOTPLUG_CPU
401static bool rcu_preempt_has_tasks(struct rcu_node *rnp); 401static bool rcu_preempt_has_tasks(struct rcu_node *rnp);
402#endif /* #ifdef CONFIG_HOTPLUG_CPU */ 402#endif /* #ifdef CONFIG_HOTPLUG_CPU */
403static void rcu_print_detail_task_stall(void);
404static int rcu_print_task_stall(struct rcu_node *rnp);
405static int rcu_print_task_exp_stall(struct rcu_node *rnp); 403static int rcu_print_task_exp_stall(struct rcu_node *rnp);
406static void rcu_preempt_check_blocked_tasks(struct rcu_node *rnp); 404static void rcu_preempt_check_blocked_tasks(struct rcu_node *rnp);
407static void rcu_flavor_sched_clock_irq(int user); 405static void rcu_flavor_sched_clock_irq(int user);
@@ -418,9 +416,6 @@ static void rcu_prepare_for_idle(void);
418static bool rcu_preempt_has_tasks(struct rcu_node *rnp); 416static bool rcu_preempt_has_tasks(struct rcu_node *rnp);
419static bool rcu_preempt_need_deferred_qs(struct task_struct *t); 417static bool rcu_preempt_need_deferred_qs(struct task_struct *t);
420static void rcu_preempt_deferred_qs(struct task_struct *t); 418static void rcu_preempt_deferred_qs(struct task_struct *t);
421static void print_cpu_stall_info_begin(void);
422static void print_cpu_stall_info(int cpu);
423static void print_cpu_stall_info_end(void);
424static void zero_cpu_stall_ticks(struct rcu_data *rdp); 419static void zero_cpu_stall_ticks(struct rcu_data *rdp);
425static bool rcu_nocb_cpu_needs_barrier(int cpu); 420static bool rcu_nocb_cpu_needs_barrier(int cpu);
426static struct swait_queue_head *rcu_nocb_gp_get(struct rcu_node *rnp); 421static struct swait_queue_head *rcu_nocb_gp_get(struct rcu_node *rnp);
@@ -445,3 +440,10 @@ static void rcu_bind_gp_kthread(void);
445static bool rcu_nohz_full_cpu(void); 440static bool rcu_nohz_full_cpu(void);
446static void rcu_dynticks_task_enter(void); 441static void rcu_dynticks_task_enter(void);
447static void rcu_dynticks_task_exit(void); 442static void rcu_dynticks_task_exit(void);
443
444/* Forward declarations for tree_stall.h */
445static void record_gp_stall_check_time(void);
446static void rcu_iw_handler(struct irq_work *iwp);
447static void check_cpu_stall(struct rcu_data *rdp);
448static void rcu_check_gp_start_stall(struct rcu_node *rnp, struct rcu_data *rdp,
449 const unsigned long gpssdelay);
diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
index 4c2a0189e748..9c990df880d1 100644
--- a/kernel/rcu/tree_exp.h
+++ b/kernel/rcu/tree_exp.h
@@ -10,6 +10,7 @@
10#include <linux/lockdep.h> 10#include <linux/lockdep.h>
11 11
12static void rcu_exp_handler(void *unused); 12static void rcu_exp_handler(void *unused);
13static int rcu_print_task_exp_stall(struct rcu_node *rnp);
13 14
14/* 15/*
15 * Record the start of an expedited grace period. 16 * Record the start of an expedited grace period.
@@ -633,7 +634,7 @@ static void rcu_exp_handler(void *unused)
633 raw_spin_lock_irqsave_rcu_node(rnp, flags); 634 raw_spin_lock_irqsave_rcu_node(rnp, flags);
634 if (rnp->expmask & rdp->grpmask) { 635 if (rnp->expmask & rdp->grpmask) {
635 rdp->deferred_qs = true; 636 rdp->deferred_qs = true;
636 WRITE_ONCE(t->rcu_read_unlock_special.b.exp_hint, true); 637 t->rcu_read_unlock_special.b.exp_hint = true;
637 } 638 }
638 raw_spin_unlock_irqrestore_rcu_node(rnp, flags); 639 raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
639 return; 640 return;
@@ -648,7 +649,7 @@ static void rcu_exp_handler(void *unused)
648 * 649 *
649 * If the CPU is fully enabled (or if some buggy RCU-preempt 650 * If the CPU is fully enabled (or if some buggy RCU-preempt
650 * read-side critical section is being used from idle), just 651 * read-side critical section is being used from idle), just
651 * invoke rcu_preempt_defer_qs() to immediately report the 652 * invoke rcu_preempt_deferred_qs() to immediately report the
652 * quiescent state. We cannot use rcu_read_unlock_special() 653 * quiescent state. We cannot use rcu_read_unlock_special()
653 * because we are in an interrupt handler, which will cause that 654 * because we are in an interrupt handler, which will cause that
654 * function to take an early exit without doing anything. 655 * function to take an early exit without doing anything.
@@ -670,6 +671,27 @@ static void sync_sched_exp_online_cleanup(int cpu)
670{ 671{
671} 672}
672 673
674/*
675 * Scan the current list of tasks blocked within RCU read-side critical
676 * sections, printing out the tid of each that is blocking the current
677 * expedited grace period.
678 */
679static int rcu_print_task_exp_stall(struct rcu_node *rnp)
680{
681 struct task_struct *t;
682 int ndetected = 0;
683
684 if (!rnp->exp_tasks)
685 return 0;
686 t = list_entry(rnp->exp_tasks->prev,
687 struct task_struct, rcu_node_entry);
688 list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry) {
689 pr_cont(" P%d", t->pid);
690 ndetected++;
691 }
692 return ndetected;
693}
694
673#else /* #ifdef CONFIG_PREEMPT_RCU */ 695#else /* #ifdef CONFIG_PREEMPT_RCU */
674 696
675/* Invoked on each online non-idle CPU for expedited quiescent state. */ 697/* Invoked on each online non-idle CPU for expedited quiescent state. */
@@ -709,6 +731,16 @@ static void sync_sched_exp_online_cleanup(int cpu)
709 WARN_ON_ONCE(ret); 731 WARN_ON_ONCE(ret);
710} 732}
711 733
734/*
735 * Because preemptible RCU does not exist, we never have to check for
736 * tasks blocked within RCU read-side critical sections that are
737 * blocking the current expedited grace period.
738 */
739static int rcu_print_task_exp_stall(struct rcu_node *rnp)
740{
741 return 0;
742}
743
712#endif /* #else #ifdef CONFIG_PREEMPT_RCU */ 744#endif /* #else #ifdef CONFIG_PREEMPT_RCU */
713 745
714/** 746/**
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 97dba50f6fb2..1102765f91fd 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -285,7 +285,7 @@ static void rcu_qs(void)
285 TPS("cpuqs")); 285 TPS("cpuqs"));
286 __this_cpu_write(rcu_data.cpu_no_qs.b.norm, false); 286 __this_cpu_write(rcu_data.cpu_no_qs.b.norm, false);
287 barrier(); /* Coordinate with rcu_flavor_sched_clock_irq(). */ 287 barrier(); /* Coordinate with rcu_flavor_sched_clock_irq(). */
288 current->rcu_read_unlock_special.b.need_qs = false; 288 WRITE_ONCE(current->rcu_read_unlock_special.b.need_qs, false);
289 } 289 }
290} 290}
291 291
@@ -643,100 +643,6 @@ static void rcu_read_unlock_special(struct task_struct *t)
643} 643}
644 644
645/* 645/*
646 * Dump detailed information for all tasks blocking the current RCU
647 * grace period on the specified rcu_node structure.
648 */
649static void rcu_print_detail_task_stall_rnp(struct rcu_node *rnp)
650{
651 unsigned long flags;
652 struct task_struct *t;
653
654 raw_spin_lock_irqsave_rcu_node(rnp, flags);
655 if (!rcu_preempt_blocked_readers_cgp(rnp)) {
656 raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
657 return;
658 }
659 t = list_entry(rnp->gp_tasks->prev,
660 struct task_struct, rcu_node_entry);
661 list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry) {
662 /*
663 * We could be printing a lot while holding a spinlock.
664 * Avoid triggering hard lockup.
665 */
666 touch_nmi_watchdog();
667 sched_show_task(t);
668 }
669 raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
670}
671
672/*
673 * Dump detailed information for all tasks blocking the current RCU
674 * grace period.
675 */
676static void rcu_print_detail_task_stall(void)
677{
678 struct rcu_node *rnp = rcu_get_root();
679
680 rcu_print_detail_task_stall_rnp(rnp);
681 rcu_for_each_leaf_node(rnp)
682 rcu_print_detail_task_stall_rnp(rnp);
683}
684
685static void rcu_print_task_stall_begin(struct rcu_node *rnp)
686{
687 pr_err("\tTasks blocked on level-%d rcu_node (CPUs %d-%d):",
688 rnp->level, rnp->grplo, rnp->grphi);
689}
690
691static void rcu_print_task_stall_end(void)
692{
693 pr_cont("\n");
694}
695
696/*
697 * Scan the current list of tasks blocked within RCU read-side critical
698 * sections, printing out the tid of each.
699 */
700static int rcu_print_task_stall(struct rcu_node *rnp)
701{
702 struct task_struct *t;
703 int ndetected = 0;
704
705 if (!rcu_preempt_blocked_readers_cgp(rnp))
706 return 0;
707 rcu_print_task_stall_begin(rnp);
708 t = list_entry(rnp->gp_tasks->prev,
709 struct task_struct, rcu_node_entry);
710 list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry) {
711 pr_cont(" P%d", t->pid);
712 ndetected++;
713 }
714 rcu_print_task_stall_end();
715 return ndetected;
716}
717
718/*
719 * Scan the current list of tasks blocked within RCU read-side critical
720 * sections, printing out the tid of each that is blocking the current
721 * expedited grace period.
722 */
723static int rcu_print_task_exp_stall(struct rcu_node *rnp)
724{
725 struct task_struct *t;
726 int ndetected = 0;
727
728 if (!rnp->exp_tasks)
729 return 0;
730 t = list_entry(rnp->exp_tasks->prev,
731 struct task_struct, rcu_node_entry);
732 list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry) {
733 pr_cont(" P%d", t->pid);
734 ndetected++;
735 }
736 return ndetected;
737}
738
739/*
740 * Check that the list of blocked tasks for the newly completed grace 646 * Check that the list of blocked tasks for the newly completed grace
741 * period is in fact empty. It is a serious bug to complete a grace 647 * period is in fact empty. It is a serious bug to complete a grace
742 * period that still has RCU readers blocked! This function must be 648 * period that still has RCU readers blocked! This function must be
@@ -804,19 +710,25 @@ static void rcu_flavor_sched_clock_irq(int user)
804 710
805/* 711/*
806 * Check for a task exiting while in a preemptible-RCU read-side 712 * Check for a task exiting while in a preemptible-RCU read-side
807 * critical section, clean up if so. No need to issue warnings, 713 * critical section, clean up if so. No need to issue warnings, as
808 * as debug_check_no_locks_held() already does this if lockdep 714 * debug_check_no_locks_held() already does this if lockdep is enabled.
809 * is enabled. 715 * Besides, if this function does anything other than just immediately
716 * return, there was a bug of some sort. Spewing warnings from this
 717 * function is as likely as not to simply obscure important prior warnings.
810 */ 718 */
811void exit_rcu(void) 719void exit_rcu(void)
812{ 720{
813 struct task_struct *t = current; 721 struct task_struct *t = current;
814 722
815 if (likely(list_empty(&current->rcu_node_entry))) 723 if (unlikely(!list_empty(&current->rcu_node_entry))) {
724 t->rcu_read_lock_nesting = 1;
725 barrier();
726 WRITE_ONCE(t->rcu_read_unlock_special.b.blocked, true);
727 } else if (unlikely(t->rcu_read_lock_nesting)) {
728 t->rcu_read_lock_nesting = 1;
729 } else {
816 return; 730 return;
817 t->rcu_read_lock_nesting = 1; 731 }
818 barrier();
819 t->rcu_read_unlock_special.b.blocked = true;
820 __rcu_read_unlock(); 732 __rcu_read_unlock();
821 rcu_preempt_deferred_qs(current); 733 rcu_preempt_deferred_qs(current);
822} 734}
@@ -980,33 +892,6 @@ static bool rcu_preempt_need_deferred_qs(struct task_struct *t)
980static void rcu_preempt_deferred_qs(struct task_struct *t) { } 892static void rcu_preempt_deferred_qs(struct task_struct *t) { }
981 893
982/* 894/*
983 * Because preemptible RCU does not exist, we never have to check for
984 * tasks blocked within RCU read-side critical sections.
985 */
986static void rcu_print_detail_task_stall(void)
987{
988}
989
990/*
991 * Because preemptible RCU does not exist, we never have to check for
992 * tasks blocked within RCU read-side critical sections.
993 */
994static int rcu_print_task_stall(struct rcu_node *rnp)
995{
996 return 0;
997}
998
999/*
1000 * Because preemptible RCU does not exist, we never have to check for
1001 * tasks blocked within RCU read-side critical sections that are
1002 * blocking the current expedited grace period.
1003 */
1004static int rcu_print_task_exp_stall(struct rcu_node *rnp)
1005{
1006 return 0;
1007}
1008
1009/*
1010 * Because there is no preemptible RCU, there can be no readers blocked, 895 * Because there is no preemptible RCU, there can be no readers blocked,
1011 * so there is no need to check for blocked tasks. So check only for 896 * so there is no need to check for blocked tasks. So check only for
1012 * bogus qsmask values. 897 * bogus qsmask values.
@@ -1185,8 +1070,6 @@ static int rcu_boost_kthread(void *arg)
1185static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags) 1070static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags)
1186 __releases(rnp->lock) 1071 __releases(rnp->lock)
1187{ 1072{
1188 struct task_struct *t;
1189
1190 raw_lockdep_assert_held_rcu_node(rnp); 1073 raw_lockdep_assert_held_rcu_node(rnp);
1191 if (!rcu_preempt_blocked_readers_cgp(rnp) && rnp->exp_tasks == NULL) { 1074 if (!rcu_preempt_blocked_readers_cgp(rnp) && rnp->exp_tasks == NULL) {
1192 raw_spin_unlock_irqrestore_rcu_node(rnp, flags); 1075 raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
@@ -1200,9 +1083,8 @@ static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags)
1200 if (rnp->exp_tasks == NULL) 1083 if (rnp->exp_tasks == NULL)
1201 rnp->boost_tasks = rnp->gp_tasks; 1084 rnp->boost_tasks = rnp->gp_tasks;
1202 raw_spin_unlock_irqrestore_rcu_node(rnp, flags); 1085 raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
1203 t = rnp->boost_kthread_task; 1086 rcu_wake_cond(rnp->boost_kthread_task,
1204 if (t) 1087 rnp->boost_kthread_status);
1205 rcu_wake_cond(t, rnp->boost_kthread_status);
1206 } else { 1088 } else {
1207 raw_spin_unlock_irqrestore_rcu_node(rnp, flags); 1089 raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
1208 } 1090 }
@@ -1649,98 +1531,6 @@ static void rcu_cleanup_after_idle(void)
1649 1531
1650#endif /* #else #if !defined(CONFIG_RCU_FAST_NO_HZ) */ 1532#endif /* #else #if !defined(CONFIG_RCU_FAST_NO_HZ) */
1651 1533
1652#ifdef CONFIG_RCU_FAST_NO_HZ
1653
1654static void print_cpu_stall_fast_no_hz(char *cp, int cpu)
1655{
1656 struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
1657
1658 sprintf(cp, "last_accelerate: %04lx/%04lx, Nonlazy posted: %c%c%c",
1659 rdp->last_accelerate & 0xffff, jiffies & 0xffff,
1660 ".l"[rdp->all_lazy],
1661 ".L"[!rcu_segcblist_n_nonlazy_cbs(&rdp->cblist)],
1662 ".D"[!rdp->tick_nohz_enabled_snap]);
1663}
1664
1665#else /* #ifdef CONFIG_RCU_FAST_NO_HZ */
1666
1667static void print_cpu_stall_fast_no_hz(char *cp, int cpu)
1668{
1669 *cp = '\0';
1670}
1671
1672#endif /* #else #ifdef CONFIG_RCU_FAST_NO_HZ */
1673
1674/* Initiate the stall-info list. */
1675static void print_cpu_stall_info_begin(void)
1676{
1677 pr_cont("\n");
1678}
1679
1680/*
1681 * Print out diagnostic information for the specified stalled CPU.
1682 *
1683 * If the specified CPU is aware of the current RCU grace period, then
1684 * print the number of scheduling clock interrupts the CPU has taken
1685 * during the time that it has been aware. Otherwise, print the number
1686 * of RCU grace periods that this CPU is ignorant of, for example, "1"
1687 * if the CPU was aware of the previous grace period.
1688 *
1689 * Also print out idle and (if CONFIG_RCU_FAST_NO_HZ) idle-entry info.
1690 */
1691static void print_cpu_stall_info(int cpu)
1692{
1693 unsigned long delta;
1694 char fast_no_hz[72];
1695 struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
1696 char *ticks_title;
1697 unsigned long ticks_value;
1698
1699 /*
1700 * We could be printing a lot while holding a spinlock. Avoid
1701 * triggering hard lockup.
1702 */
1703 touch_nmi_watchdog();
1704
1705 ticks_value = rcu_seq_ctr(rcu_state.gp_seq - rdp->gp_seq);
1706 if (ticks_value) {
1707 ticks_title = "GPs behind";
1708 } else {
1709 ticks_title = "ticks this GP";
1710 ticks_value = rdp->ticks_this_gp;
1711 }
1712 print_cpu_stall_fast_no_hz(fast_no_hz, cpu);
1713 delta = rcu_seq_ctr(rdp->mynode->gp_seq - rdp->rcu_iw_gp_seq);
1714 pr_err("\t%d-%c%c%c%c: (%lu %s) idle=%03x/%ld/%#lx softirq=%u/%u fqs=%ld %s\n",
1715 cpu,
1716 "O."[!!cpu_online(cpu)],
1717 "o."[!!(rdp->grpmask & rdp->mynode->qsmaskinit)],
1718 "N."[!!(rdp->grpmask & rdp->mynode->qsmaskinitnext)],
1719 !IS_ENABLED(CONFIG_IRQ_WORK) ? '?' :
1720 rdp->rcu_iw_pending ? (int)min(delta, 9UL) + '0' :
1721 "!."[!delta],
1722 ticks_value, ticks_title,
1723 rcu_dynticks_snap(rdp) & 0xfff,
1724 rdp->dynticks_nesting, rdp->dynticks_nmi_nesting,
1725 rdp->softirq_snap, kstat_softirqs_cpu(RCU_SOFTIRQ, cpu),
1726 READ_ONCE(rcu_state.n_force_qs) - rcu_state.n_force_qs_gpstart,
1727 fast_no_hz);
1728}
1729
1730/* Terminate the stall-info list. */
1731static void print_cpu_stall_info_end(void)
1732{
1733 pr_err("\t");
1734}
1735
1736/* Zero ->ticks_this_gp and snapshot the number of RCU softirq handlers. */
1737static void zero_cpu_stall_ticks(struct rcu_data *rdp)
1738{
1739 rdp->ticks_this_gp = 0;
1740 rdp->softirq_snap = kstat_softirqs_cpu(RCU_SOFTIRQ, smp_processor_id());
1741 WRITE_ONCE(rdp->last_fqs_resched, jiffies);
1742}
1743
1744#ifdef CONFIG_RCU_NOCB_CPU 1534#ifdef CONFIG_RCU_NOCB_CPU
1745 1535
1746/* 1536/*
@@ -1766,11 +1556,22 @@ static void zero_cpu_stall_ticks(struct rcu_data *rdp)
1766 */ 1556 */
1767 1557
1768 1558
1769/* Parse the boot-time rcu_nocb_mask CPU list from the kernel parameters. */ 1559/*
1560 * Parse the boot-time rcu_nocb_mask CPU list from the kernel parameters.
1561 * The string after the "rcu_nocbs=" is either "all" for all CPUs, or a
1562 * comma-separated list of CPUs and/or CPU ranges. If an invalid list is
1563 * given, a warning is emitted and all CPUs are offloaded.
1564 */
1770static int __init rcu_nocb_setup(char *str) 1565static int __init rcu_nocb_setup(char *str)
1771{ 1566{
1772 alloc_bootmem_cpumask_var(&rcu_nocb_mask); 1567 alloc_bootmem_cpumask_var(&rcu_nocb_mask);
1773 cpulist_parse(str, rcu_nocb_mask); 1568 if (!strcasecmp(str, "all"))
1569 cpumask_setall(rcu_nocb_mask);
1570 else
1571 if (cpulist_parse(str, rcu_nocb_mask)) {
1572 pr_warn("rcu_nocbs= bad CPU range, all CPUs set\n");
1573 cpumask_setall(rcu_nocb_mask);
1574 }
1774 return 1; 1575 return 1;
1775} 1576}
1776__setup("rcu_nocbs=", rcu_nocb_setup); 1577__setup("rcu_nocbs=", rcu_nocb_setup);
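
The reworked parser makes the fallback explicit: a string that is neither "all" nor a valid CPU list now warns and offloads every CPU. Illustrative boot-time settings (the annotations are editorial, not kernel command-line syntax):

	rcu_nocbs=all		offload callbacks from every CPU
	rcu_nocbs=1-7		offload CPUs 1 through 7
	rcu_nocbs=0,4-7		comma-separated CPUs and ranges
	rcu_nocbs=oops		cpulist_parse() fails: warn, offload all CPUs
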
diff --git a/kernel/rcu/tree_stall.h b/kernel/rcu/tree_stall.h
new file mode 100644
index 000000000000..f65a73a97323
--- /dev/null
+++ b/kernel/rcu/tree_stall.h
@@ -0,0 +1,709 @@
1// SPDX-License-Identifier: GPL-2.0+
2/*
3 * RCU CPU stall warnings for normal RCU grace periods
4 *
5 * Copyright IBM Corporation, 2019
6 *
7 * Author: Paul E. McKenney <paulmck@linux.ibm.com>
8 */
9
10//////////////////////////////////////////////////////////////////////////////
11//
12// Controlling CPU stall warnings, including delay calculation.
13
14/* panic() on RCU Stall sysctl. */
15int sysctl_panic_on_rcu_stall __read_mostly;
16
17#ifdef CONFIG_PROVE_RCU
18#define RCU_STALL_DELAY_DELTA (5 * HZ)
19#else
20#define RCU_STALL_DELAY_DELTA 0
21#endif
22
23/* Limit-check stall timeouts specified at boottime and runtime. */
24int rcu_jiffies_till_stall_check(void)
25{
26 int till_stall_check = READ_ONCE(rcu_cpu_stall_timeout);
27
28 /*
29 * Limit check must be consistent with the Kconfig limits
30 * for CONFIG_RCU_CPU_STALL_TIMEOUT.
31 */
32 if (till_stall_check < 3) {
33 WRITE_ONCE(rcu_cpu_stall_timeout, 3);
34 till_stall_check = 3;
35 } else if (till_stall_check > 300) {
36 WRITE_ONCE(rcu_cpu_stall_timeout, 300);
37 till_stall_check = 300;
38 }
39 return till_stall_check * HZ + RCU_STALL_DELAY_DELTA;
40}
41EXPORT_SYMBOL_GPL(rcu_jiffies_till_stall_check);
42
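
A hedged worked example of the clamping, assuming HZ=1000, CONFIG_PROVE_RCU=n, and the usual Kconfig default of CONFIG_RCU_CPU_STALL_TIMEOUT=21:

	/*
	 * rcu_cpu_stall_timeout = 21  ->  21 * HZ + 0 = 21000 jiffies (21 s).
	 * A runtime write of 1000 is clamped back to 300, and a write of 1
	 * becomes 3, matching the Kconfig limits noted in the comment above.
	 */
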
43/* Don't do RCU CPU stall warnings during long sysrq printouts. */
44void rcu_sysrq_start(void)
45{
46 if (!rcu_cpu_stall_suppress)
47 rcu_cpu_stall_suppress = 2;
48}
49
50void rcu_sysrq_end(void)
51{
52 if (rcu_cpu_stall_suppress == 2)
53 rcu_cpu_stall_suppress = 0;
54}
55
56/* Don't print RCU CPU stall warnings during a kernel panic. */
57static int rcu_panic(struct notifier_block *this, unsigned long ev, void *ptr)
58{
59 rcu_cpu_stall_suppress = 1;
60 return NOTIFY_DONE;
61}
62
63static struct notifier_block rcu_panic_block = {
64 .notifier_call = rcu_panic,
65};
66
67static int __init check_cpu_stall_init(void)
68{
69 atomic_notifier_chain_register(&panic_notifier_list, &rcu_panic_block);
70 return 0;
71}
72early_initcall(check_cpu_stall_init);
73
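
The panic hook above is the stock atomic-notifier pattern. A minimal sketch of the same pattern for some other diagnostic subsystem; foo_panic and foo_panic_block are hypothetical names, while panic_notifier_list and atomic_notifier_chain_register() are the real kernel interfaces used above:

	static int foo_panic(struct notifier_block *this, unsigned long ev,
			     void *ptr)
	{
		/* Quiesce noisy diagnostics so panic output stays readable. */
		return NOTIFY_DONE;
	}

	static struct notifier_block foo_panic_block = {
		.notifier_call = foo_panic,
	};

	/* At init time: */
	atomic_notifier_chain_register(&panic_notifier_list, &foo_panic_block);
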
74/* If so specified via sysctl, panic, yielding cleaner stall-warning output. */
75static void panic_on_rcu_stall(void)
76{
77 if (sysctl_panic_on_rcu_stall)
78 panic("RCU Stall\n");
79}
80
81/**
82 * rcu_cpu_stall_reset - prevent further stall warnings in current grace period
83 *
84 * Set the stall-warning timeout way off into the future, thus preventing
85 * any RCU CPU stall-warning messages from appearing in the current set of
86 * RCU grace periods.
87 *
88 * The caller must disable hard irqs.
89 */
90void rcu_cpu_stall_reset(void)
91{
92 WRITE_ONCE(rcu_state.jiffies_stall, jiffies + ULONG_MAX / 2);
93}
94
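
The half-of-ULONG_MAX offset works because stall times are compared with the wraparound-safe macros from kernel/rcu/rcu.h, which at this point in time are defined along these lines (quoted for reference; treat as an assumption when reading a different tree):

	#define ULONG_CMP_GE(a, b)	(ULONG_MAX / 2 >= (a) - (b))
	#define ULONG_CMP_LT(a, b)	(ULONG_MAX / 2 < (a) - (b))

	/*
	 * Pushing ->jiffies_stall half the counter space into the future
	 * keeps ULONG_CMP_GE(jiffies, ->jiffies_stall) false for as long
	 * as unsigned-long arithmetic allows.
	 */
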
95//////////////////////////////////////////////////////////////////////////////
96//
97// Interaction with RCU grace periods
98
99/* Start of new grace period, so record stall time (and forcing times). */
100static void record_gp_stall_check_time(void)
101{
102 unsigned long j = jiffies;
103 unsigned long j1;
104
105 rcu_state.gp_start = j;
106 j1 = rcu_jiffies_till_stall_check();
107 /* Record ->gp_start before ->jiffies_stall. */
108 smp_store_release(&rcu_state.jiffies_stall, j + j1); /* ^^^ */
109 rcu_state.jiffies_resched = j + j1 / 2;
110 rcu_state.n_force_qs_gpstart = READ_ONCE(rcu_state.n_force_qs);
111}
112
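
The smp_store_release() above pairs with the smp_rmb() sequence in check_cpu_stall() further down in this file. A sketch of the pairing, time flowing downward on each side:

	/*
	 *	writer (this function)		reader (check_cpu_stall())
	 *	----------------------		--------------------------
	 *	rcu_state.gp_start = j;		js  = READ_ONCE(->jiffies_stall);
	 *	smp_store_release(		smp_rmb();
	 *	    &->jiffies_stall, j + j1);	gps = READ_ONCE(->gp_start);
	 *
	 * A reader observing the released ->jiffies_stall value is thus
	 * guaranteed to also observe the new ->gp_start, so a fresh start
	 * time can never pair with a stale stall deadline.
	 */
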
113/* Zero ->ticks_this_gp and snapshot the number of RCU softirq handlers. */
114static void zero_cpu_stall_ticks(struct rcu_data *rdp)
115{
116 rdp->ticks_this_gp = 0;
117 rdp->softirq_snap = kstat_softirqs_cpu(RCU_SOFTIRQ, smp_processor_id());
118 WRITE_ONCE(rdp->last_fqs_resched, jiffies);
119}
120
121/*
122 * If too much time has passed in the current grace period, and if
123 * so configured, go kick the relevant kthreads.
124 */
125static void rcu_stall_kick_kthreads(void)
126{
127 unsigned long j;
128
129 if (!rcu_kick_kthreads)
130 return;
131 j = READ_ONCE(rcu_state.jiffies_kick_kthreads);
132 if (time_after(jiffies, j) && rcu_state.gp_kthread &&
133 (rcu_gp_in_progress() || READ_ONCE(rcu_state.gp_flags))) {
134 WARN_ONCE(1, "Kicking %s grace-period kthread\n",
135 rcu_state.name);
136 rcu_ftrace_dump(DUMP_ALL);
137 wake_up_process(rcu_state.gp_kthread);
138 WRITE_ONCE(rcu_state.jiffies_kick_kthreads, j + HZ);
139 }
140}
141
142/*
143 * Handler for the irq_work request posted about halfway into the RCU CPU
144 * stall timeout, and used to detect excessive irq disabling. Set state
145 * appropriately, but just complain if there is unexpected state on entry.
146 */
147static void rcu_iw_handler(struct irq_work *iwp)
148{
149 struct rcu_data *rdp;
150 struct rcu_node *rnp;
151
152 rdp = container_of(iwp, struct rcu_data, rcu_iw);
153 rnp = rdp->mynode;
154 raw_spin_lock_rcu_node(rnp);
155 if (!WARN_ON_ONCE(!rdp->rcu_iw_pending)) {
156 rdp->rcu_iw_gp_seq = rnp->gp_seq;
157 rdp->rcu_iw_pending = false;
158 }
159 raw_spin_unlock_rcu_node(rnp);
160}
161
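
The posting side of this irq_work lives in rcu_implicit_dynticks_qs() and is not part of this hunk; a hedged fragment of how such a request is typically armed and queued on the stalled CPU (field names as in struct rcu_data, one-time setup as done at CPU-preparation time):

	init_irq_work(&rdp->rcu_iw, rcu_iw_handler);	/* once, at CPU prep */
	...
	rdp->rcu_iw_pending = true;
	rdp->rcu_iw_gp_seq = rnp->gp_seq;
	irq_work_queue_on(&rdp->rcu_iw, rdp->cpu);	/* handler runs there */
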
162//////////////////////////////////////////////////////////////////////////////
163//
164// Printing RCU CPU stall warnings
165
166#ifdef CONFIG_PREEMPT
167
168/*
169 * Dump detailed information for all tasks blocking the current RCU
170 * grace period on the specified rcu_node structure.
171 */
172static void rcu_print_detail_task_stall_rnp(struct rcu_node *rnp)
173{
174 unsigned long flags;
175 struct task_struct *t;
176
177 raw_spin_lock_irqsave_rcu_node(rnp, flags);
178 if (!rcu_preempt_blocked_readers_cgp(rnp)) {
179 raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
180 return;
181 }
182 t = list_entry(rnp->gp_tasks->prev,
183 struct task_struct, rcu_node_entry);
184 list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry) {
185 /*
186 * We could be printing a lot while holding a spinlock.
187 * Avoid triggering hard lockup.
188 */
189 touch_nmi_watchdog();
190 sched_show_task(t);
191 }
192 raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
193}
194
195/*
196 * Scan the current list of tasks blocked within RCU read-side critical
197 * sections, printing out the tid of each.
198 */
199static int rcu_print_task_stall(struct rcu_node *rnp)
200{
201 struct task_struct *t;
202 int ndetected = 0;
203
204 if (!rcu_preempt_blocked_readers_cgp(rnp))
205 return 0;
206 pr_err("\tTasks blocked on level-%d rcu_node (CPUs %d-%d):",
207 rnp->level, rnp->grplo, rnp->grphi);
208 t = list_entry(rnp->gp_tasks->prev,
209 struct task_struct, rcu_node_entry);
210 list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry) {
211 pr_cont(" P%d", t->pid);
212 ndetected++;
213 }
214 pr_cont("\n");
215 return ndetected;
216}
217
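
The starting point of the scan deserves a note: list_for_each_entry_continue() begins with the element *after* its cursor, so backing the cursor up to ->prev makes the traversal include ->gp_tasks itself. A short annotated restatement of the two lines above:

	t = list_entry(rnp->gp_tasks->prev, struct task_struct, rcu_node_entry);
	list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry)
		;	/* visits ->gp_tasks first, then each later entry */
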
218#else /* #ifdef CONFIG_PREEMPT */
219
220/*
221 * Because preemptible RCU does not exist, we never have to check for
222 * tasks blocked within RCU read-side critical sections.
223 */
224static void rcu_print_detail_task_stall_rnp(struct rcu_node *rnp)
225{
226}
227
228/*
229 * Because preemptible RCU does not exist, we never have to check for
230 * tasks blocked within RCU read-side critical sections.
231 */
232static int rcu_print_task_stall(struct rcu_node *rnp)
233{
234 return 0;
235}
236#endif /* #else #ifdef CONFIG_PREEMPT */
237
238/*
239 * Dump stacks of all tasks running on stalled CPUs. First try using
240 * NMIs, but fall back to manual remote stack tracing on architectures
241 * that don't support NMI-based stack dumps. The NMI-triggered stack
242 * traces are more accurate because they are printed by the target CPU.
243 */
244static void rcu_dump_cpu_stacks(void)
245{
246 int cpu;
247 unsigned long flags;
248 struct rcu_node *rnp;
249
250 rcu_for_each_leaf_node(rnp) {
251 raw_spin_lock_irqsave_rcu_node(rnp, flags);
252 for_each_leaf_node_possible_cpu(rnp, cpu)
253 if (rnp->qsmask & leaf_node_cpu_bit(rnp, cpu))
254 if (!trigger_single_cpu_backtrace(cpu))
255 dump_cpu_task(cpu);
256 raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
257 }
258}
259
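
A brief annotation of the fallback in the loop above; the behavioral notes are editorial readings of the generic backtrace code, not text from this patch:

	if (!trigger_single_cpu_backtrace(cpu))	/* false if the arch lacks
						   NMI-backtrace support */
		dump_cpu_task(cpu);		/* remote dump printed from
						   this CPU; may lag target */
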
260#ifdef CONFIG_RCU_FAST_NO_HZ
261
262static void print_cpu_stall_fast_no_hz(char *cp, int cpu)
263{
264 struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
265
266 sprintf(cp, "last_accelerate: %04lx/%04lx, Nonlazy posted: %c%c%c",
267 rdp->last_accelerate & 0xffff, jiffies & 0xffff,
268 ".l"[rdp->all_lazy],
269 ".L"[!rcu_segcblist_n_nonlazy_cbs(&rdp->cblist)],
270 ".D"[!!rdp->tick_nohz_enabled_snap]);
271}
272
273#else /* #ifdef CONFIG_RCU_FAST_NO_HZ */
274
275static void print_cpu_stall_fast_no_hz(char *cp, int cpu)
276{
277 *cp = '\0';
278}
279
280#endif /* #else #ifdef CONFIG_RCU_FAST_NO_HZ */
281
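
The ".x"[flag] constructs in the sprintf() above index a two-character string literal, yielding '.' for a clear flag and the letter for a set flag. For instance:

	printk("%c", ".l"[0]);	/* prints '.' -- flag clear */
	printk("%c", ".l"[1]);	/* prints 'l' -- flag set   */
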
282/*
283 * Print out diagnostic information for the specified stalled CPU.
284 *
285 * If the specified CPU is aware of the current RCU grace period, then
286 * print the number of scheduling clock interrupts the CPU has taken
287 * during the time that it has been aware. Otherwise, print the number
288 * of RCU grace periods that this CPU is ignorant of, for example, "1"
289 * if the CPU was aware of the previous grace period.
290 *
291 * Also print out idle and (if CONFIG_RCU_FAST_NO_HZ) idle-entry info.
292 */
293static void print_cpu_stall_info(int cpu)
294{
295 unsigned long delta;
296 char fast_no_hz[72];
297 struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
298 char *ticks_title;
299 unsigned long ticks_value;
300
301 /*
302 * We could be printing a lot while holding a spinlock. Avoid
303 * triggering hard lockup.
304 */
305 touch_nmi_watchdog();
306
307 ticks_value = rcu_seq_ctr(rcu_state.gp_seq - rdp->gp_seq);
308 if (ticks_value) {
309 ticks_title = "GPs behind";
310 } else {
311 ticks_title = "ticks this GP";
312 ticks_value = rdp->ticks_this_gp;
313 }
314 print_cpu_stall_fast_no_hz(fast_no_hz, cpu);
315 delta = rcu_seq_ctr(rdp->mynode->gp_seq - rdp->rcu_iw_gp_seq);
316 pr_err("\t%d-%c%c%c%c: (%lu %s) idle=%03x/%ld/%#lx softirq=%u/%u fqs=%ld %s\n",
317 cpu,
318 "O."[!!cpu_online(cpu)],
319 "o."[!!(rdp->grpmask & rdp->mynode->qsmaskinit)],
320 "N."[!!(rdp->grpmask & rdp->mynode->qsmaskinitnext)],
321 !IS_ENABLED(CONFIG_IRQ_WORK) ? '?' :
322 rdp->rcu_iw_pending ? (int)min(delta, 9UL) + '0' :
323 "!."[!delta],
324 ticks_value, ticks_title,
325 rcu_dynticks_snap(rdp) & 0xfff,
326 rdp->dynticks_nesting, rdp->dynticks_nmi_nesting,
327 rdp->softirq_snap, kstat_softirqs_cpu(RCU_SOFTIRQ, cpu),
328 READ_ONCE(rcu_state.n_force_qs) - rcu_state.n_force_qs_gpstart,
329 fast_no_hz);
330}
331
332/* Complain about starvation of grace-period kthread. */
333static void rcu_check_gp_kthread_starvation(void)
334{
335 struct task_struct *gpk = rcu_state.gp_kthread;
336 unsigned long j;
337
338 j = jiffies - READ_ONCE(rcu_state.gp_activity);
339 if (j > 2 * HZ) {
340 pr_err("%s kthread starved for %ld jiffies! g%ld f%#x %s(%d) ->state=%#lx ->cpu=%d\n",
341 rcu_state.name, j,
342 (long)rcu_seq_current(&rcu_state.gp_seq),
343 READ_ONCE(rcu_state.gp_flags),
344 gp_state_getname(rcu_state.gp_state), rcu_state.gp_state,
345 gpk ? gpk->state : ~0, gpk ? task_cpu(gpk) : -1);
346 if (gpk) {
347 pr_err("RCU grace-period kthread stack dump:\n");
348 sched_show_task(gpk);
349 wake_up_process(gpk);
350 }
351 }
352}
353
354static void print_other_cpu_stall(unsigned long gp_seq)
355{
356 int cpu;
357 unsigned long flags;
358 unsigned long gpa;
359 unsigned long j;
360 int ndetected = 0;
361 struct rcu_node *rnp;
362 long totqlen = 0;
363
364 /* Kick and suppress, if so configured. */
365 rcu_stall_kick_kthreads();
366 if (rcu_cpu_stall_suppress)
367 return;
368
369 /*
370 * OK, time to rat on our buddy...
371 * See Documentation/RCU/stallwarn.txt for info on how to debug
372 * RCU CPU stall warnings.
373 */
374 pr_err("INFO: %s detected stalls on CPUs/tasks:\n", rcu_state.name);
375 rcu_for_each_leaf_node(rnp) {
376 raw_spin_lock_irqsave_rcu_node(rnp, flags);
377 ndetected += rcu_print_task_stall(rnp);
378 if (rnp->qsmask != 0) {
379 for_each_leaf_node_possible_cpu(rnp, cpu)
380 if (rnp->qsmask & leaf_node_cpu_bit(rnp, cpu)) {
381 print_cpu_stall_info(cpu);
382 ndetected++;
383 }
384 }
385 raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
386 }
387
388 for_each_possible_cpu(cpu)
389 totqlen += rcu_get_n_cbs_cpu(cpu);
390 pr_cont("\t(detected by %d, t=%ld jiffies, g=%ld, q=%lu)\n",
391 smp_processor_id(), (long)(jiffies - rcu_state.gp_start),
392 (long)rcu_seq_current(&rcu_state.gp_seq), totqlen);
393 if (ndetected) {
394 rcu_dump_cpu_stacks();
395
396 /* Complain about tasks blocking the grace period. */
397 rcu_for_each_leaf_node(rnp)
398 rcu_print_detail_task_stall_rnp(rnp);
399 } else {
400 if (rcu_seq_current(&rcu_state.gp_seq) != gp_seq) {
401 pr_err("INFO: Stall ended before state dump start\n");
402 } else {
403 j = jiffies;
404 gpa = READ_ONCE(rcu_state.gp_activity);
405 pr_err("All QSes seen, last %s kthread activity %ld (%ld-%ld), jiffies_till_next_fqs=%ld, root ->qsmask %#lx\n",
406 rcu_state.name, j - gpa, j, gpa,
407 READ_ONCE(jiffies_till_next_fqs),
408 rcu_get_root()->qsmask);
409 /* In this case, the current CPU might be at fault. */
410 sched_show_task(current);
411 }
412 }
413 /* Rewrite if needed in case of slow consoles. */
414 if (ULONG_CMP_GE(jiffies, READ_ONCE(rcu_state.jiffies_stall)))
415 WRITE_ONCE(rcu_state.jiffies_stall,
416 jiffies + 3 * rcu_jiffies_till_stall_check() + 3);
417
418 rcu_check_gp_kthread_starvation();
419
420 panic_on_rcu_stall();
421
422 rcu_force_quiescent_state(); /* Kick them all. */
423}
424
425static void print_cpu_stall(void)
426{
427 int cpu;
428 unsigned long flags;
429 struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
430 struct rcu_node *rnp = rcu_get_root();
431 long totqlen = 0;
432
433 /* Kick and suppress, if so configured. */
434 rcu_stall_kick_kthreads();
435 if (rcu_cpu_stall_suppress)
436 return;
437
438 /*
439 * OK, time to rat on ourselves...
440 * See Documentation/RCU/stallwarn.txt for info on how to debug
441 * RCU CPU stall warnings.
442 */
443 pr_err("INFO: %s self-detected stall on CPU\n", rcu_state.name);
444 raw_spin_lock_irqsave_rcu_node(rdp->mynode, flags);
445 print_cpu_stall_info(smp_processor_id());
446 raw_spin_unlock_irqrestore_rcu_node(rdp->mynode, flags);
447 for_each_possible_cpu(cpu)
448 totqlen += rcu_get_n_cbs_cpu(cpu);
449 pr_cont("\t(t=%lu jiffies g=%ld q=%lu)\n",
450 jiffies - rcu_state.gp_start,
451 (long)rcu_seq_current(&rcu_state.gp_seq), totqlen);
452
453 rcu_check_gp_kthread_starvation();
454
455 rcu_dump_cpu_stacks();
456
457 raw_spin_lock_irqsave_rcu_node(rnp, flags);
458 /* Rewrite if needed in case of slow consoles. */
459 if (ULONG_CMP_GE(jiffies, READ_ONCE(rcu_state.jiffies_stall)))
460 WRITE_ONCE(rcu_state.jiffies_stall,
461 jiffies + 3 * rcu_jiffies_till_stall_check() + 3);
462 raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
463
464 panic_on_rcu_stall();
465
466 /*
467 * Attempt to revive the RCU machinery by forcing a context switch.
468 *
469 * A context switch would normally allow the RCU state machine to make
470 * progress and it could be we're stuck in kernel space without context
471 * switches for an entirely unreasonable amount of time.
472 */
473 set_tsk_need_resched(current);
474 set_preempt_need_resched();
475}
476
477static void check_cpu_stall(struct rcu_data *rdp)
478{
479 unsigned long gs1;
480 unsigned long gs2;
481 unsigned long gps;
482 unsigned long j;
483 unsigned long jn;
484 unsigned long js;
485 struct rcu_node *rnp;
486
487 if ((rcu_cpu_stall_suppress && !rcu_kick_kthreads) ||
488 !rcu_gp_in_progress())
489 return;
490 rcu_stall_kick_kthreads();
491 j = jiffies;
492
493 /*
494 * Lots of memory barriers to reject false positives.
495 *
496 * The idea is to pick up rcu_state.gp_seq, then
497 * rcu_state.jiffies_stall, then rcu_state.gp_start, and finally
498 * another copy of rcu_state.gp_seq. These values are updated in
499 * the opposite order with memory barriers (or equivalent) during
500 * grace-period initialization and cleanup. Now, a false positive
 501 * can occur if we get a new value of rcu_state.gp_start and an old
502 * value of rcu_state.jiffies_stall. But given the memory barriers,
503 * the only way that this can happen is if one grace period ends
504 * and another starts between these two fetches. This is detected
505 * by comparing the second fetch of rcu_state.gp_seq with the
506 * previous fetch from rcu_state.gp_seq.
507 *
508 * Given this check, comparisons of jiffies, rcu_state.jiffies_stall,
509 * and rcu_state.gp_start suffice to forestall false positives.
510 */
511 gs1 = READ_ONCE(rcu_state.gp_seq);
512 smp_rmb(); /* Pick up ->gp_seq first... */
513 js = READ_ONCE(rcu_state.jiffies_stall);
514 smp_rmb(); /* ...then ->jiffies_stall before the rest... */
515 gps = READ_ONCE(rcu_state.gp_start);
516 smp_rmb(); /* ...and finally ->gp_start before ->gp_seq again. */
517 gs2 = READ_ONCE(rcu_state.gp_seq);
518 if (gs1 != gs2 ||
519 ULONG_CMP_LT(j, js) ||
520 ULONG_CMP_GE(gps, js))
521 return; /* No stall or GP completed since entering function. */
522 rnp = rdp->mynode;
523 jn = jiffies + 3 * rcu_jiffies_till_stall_check() + 3;
524 if (rcu_gp_in_progress() &&
525 (READ_ONCE(rnp->qsmask) & rdp->grpmask) &&
526 cmpxchg(&rcu_state.jiffies_stall, js, jn) == js) {
527
528 /* We haven't checked in, so go dump stack. */
529 print_cpu_stall();
530
531 } else if (rcu_gp_in_progress() &&
532 ULONG_CMP_GE(j, js + RCU_STALL_RAT_DELAY) &&
533 cmpxchg(&rcu_state.jiffies_stall, js, jn) == js) {
534
535 /* They had a few time units to dump stack, so complain. */
536 print_other_cpu_stall(gs2);
537 }
538}
539
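
One property of the two cmpxchg() calls above is worth spelling out: both paths attempt the same ->jiffies_stall transition from the observed value js to jn, so at most one CPU can win it, and exactly one stall report is emitted per interval no matter how many CPUs notice the overdue grace period concurrently:

	/*
	 * CPU A: cmpxchg(&rcu_state.jiffies_stall, js, jn) == js  -> reports
	 * CPU B: cmpxchg(&rcu_state.jiffies_stall, js, jn) != js  -> silent
	 */
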
540//////////////////////////////////////////////////////////////////////////////
541//
 542// RCU forward-progress mechanisms, including that of callback invocation.
543
544
545/*
546 * Show the state of the grace-period kthreads.
547 */
548void show_rcu_gp_kthreads(void)
549{
550 int cpu;
551 unsigned long j;
552 unsigned long ja;
553 unsigned long jr;
554 unsigned long jw;
555 struct rcu_data *rdp;
556 struct rcu_node *rnp;
557
558 j = jiffies;
559 ja = j - READ_ONCE(rcu_state.gp_activity);
560 jr = j - READ_ONCE(rcu_state.gp_req_activity);
561 jw = j - READ_ONCE(rcu_state.gp_wake_time);
562 pr_info("%s: wait state: %s(%d) ->state: %#lx delta ->gp_activity %lu ->gp_req_activity %lu ->gp_wake_time %lu ->gp_wake_seq %ld ->gp_seq %ld ->gp_seq_needed %ld ->gp_flags %#x\n",
563 rcu_state.name, gp_state_getname(rcu_state.gp_state),
564 rcu_state.gp_state,
565 rcu_state.gp_kthread ? rcu_state.gp_kthread->state : 0x1ffffL,
566 ja, jr, jw, (long)READ_ONCE(rcu_state.gp_wake_seq),
567 (long)READ_ONCE(rcu_state.gp_seq),
568 (long)READ_ONCE(rcu_get_root()->gp_seq_needed),
569 READ_ONCE(rcu_state.gp_flags));
570 rcu_for_each_node_breadth_first(rnp) {
571 if (ULONG_CMP_GE(rcu_state.gp_seq, rnp->gp_seq_needed))
572 continue;
573 pr_info("\trcu_node %d:%d ->gp_seq %ld ->gp_seq_needed %ld\n",
574 rnp->grplo, rnp->grphi, (long)rnp->gp_seq,
575 (long)rnp->gp_seq_needed);
576 if (!rcu_is_leaf_node(rnp))
577 continue;
578 for_each_leaf_node_possible_cpu(rnp, cpu) {
579 rdp = per_cpu_ptr(&rcu_data, cpu);
580 if (rdp->gpwrap ||
581 ULONG_CMP_GE(rcu_state.gp_seq,
582 rdp->gp_seq_needed))
583 continue;
584 pr_info("\tcpu %d ->gp_seq_needed %ld\n",
585 cpu, (long)rdp->gp_seq_needed);
586 }
587 }
588 /* sched_show_task(rcu_state.gp_kthread); */
589}
590EXPORT_SYMBOL_GPL(show_rcu_gp_kthreads);
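
Note that show_rcu_gp_kthreads() (like check_cpu_stall() before it) compares ->gp_seq values with ULONG_CMP_GE()/ULONG_CMP_LT() rather than plain relational operators, because grace-period sequence numbers are free-running counters that may wrap. The helpers, as defined in kernel/rcu/rcu.h, reduce to an unsigned subtraction; a standalone illustration:

#include <limits.h>
#include <stdio.h>

/* Wrap-safe ordering for free-running unsigned counters, mirroring
 * ULONG_CMP_GE()/ULONG_CMP_LT() in kernel/rcu/rcu.h: "a" is at or
 * beyond "b" iff the unsigned distance (a - b) falls in the lower
 * half of the counter space. */
#define ULONG_CMP_GE(a, b)	(ULONG_MAX / 2 >= (a) - (b))
#define ULONG_CMP_LT(a, b)	(ULONG_MAX / 2 < (a) - (b))

int main(void)
{
	unsigned long older = ULONG_MAX - 1, newer = 2; /* counter wrapped */

	printf("%d %d\n", ULONG_CMP_GE(newer, older), ULONG_CMP_LT(older, newer));
	/* prints "1 1": newer is "after" older despite being numerically smaller */
	return 0;
}
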
591
592/*
593 * This function checks for grace-period requests that fail to motivate
594 * RCU to come out of its idle mode.
595 */
596static void rcu_check_gp_start_stall(struct rcu_node *rnp, struct rcu_data *rdp,
597 const unsigned long gpssdelay)
598{
599 unsigned long flags;
600 unsigned long j;
601 struct rcu_node *rnp_root = rcu_get_root();
602 static atomic_t warned = ATOMIC_INIT(0);
603
604 if (!IS_ENABLED(CONFIG_PROVE_RCU) || rcu_gp_in_progress() ||
605 ULONG_CMP_GE(rnp_root->gp_seq, rnp_root->gp_seq_needed))
606 return;
607	j = jiffies; /* Expensive access, and in the common case we don't get here. */
608 if (time_before(j, READ_ONCE(rcu_state.gp_req_activity) + gpssdelay) ||
609 time_before(j, READ_ONCE(rcu_state.gp_activity) + gpssdelay) ||
610 atomic_read(&warned))
611 return;
612
613 raw_spin_lock_irqsave_rcu_node(rnp, flags);
614 j = jiffies;
615 if (rcu_gp_in_progress() ||
616 ULONG_CMP_GE(rnp_root->gp_seq, rnp_root->gp_seq_needed) ||
617 time_before(j, READ_ONCE(rcu_state.gp_req_activity) + gpssdelay) ||
618 time_before(j, READ_ONCE(rcu_state.gp_activity) + gpssdelay) ||
619 atomic_read(&warned)) {
620 raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
621 return;
622 }
623 /* Hold onto the leaf lock to make others see warned==1. */
624
625 if (rnp_root != rnp)
626 raw_spin_lock_rcu_node(rnp_root); /* irqs already disabled. */
627 j = jiffies;
628 if (rcu_gp_in_progress() ||
629 ULONG_CMP_GE(rnp_root->gp_seq, rnp_root->gp_seq_needed) ||
630 time_before(j, rcu_state.gp_req_activity + gpssdelay) ||
631 time_before(j, rcu_state.gp_activity + gpssdelay) ||
632 atomic_xchg(&warned, 1)) {
633 raw_spin_unlock_rcu_node(rnp_root); /* irqs remain disabled. */
634 raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
635 return;
636 }
637 WARN_ON(1);
638 if (rnp_root != rnp)
639 raw_spin_unlock_rcu_node(rnp_root);
640 raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
641 show_rcu_gp_kthreads();
642}
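
rcu_check_gp_start_stall() layers three defenses against duplicate reports: a cheap lockless pre-check, a recheck under the leaf and root rcu_node locks, and finally atomic_xchg() on the "warned" flag, so that exactly one caller fires the WARN_ON(). A minimal user-space sketch of that last warn-once idiom (illustrative names, C11 atomics standing in for the kernel's atomic_t):

#include <stdatomic.h>
#include <stdio.h>

static atomic_int warned; /* zero-initialized, like ATOMIC_INIT(0) */

/* Only the caller that flips warned from 0 to 1 emits the report;
 * everyone else sees the old value 1 and returns silently. */
static void warn_once(const char *msg)
{
	if (atomic_exchange(&warned, 1) == 0)
		fprintf(stderr, "WARNING: %s\n", msg);
}

int main(void)
{
	warn_once("first call reports");
	warn_once("second call is silent");
	return 0;
}
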
643
644/*
645 * Do a forward-progress check for rcutorture. This is normally invoked
646 * due to an OOM event. The argument "j" gives the time period during
647 * which rcutorture would like progress to have been made.
648 */
649void rcu_fwd_progress_check(unsigned long j)
650{
651 unsigned long cbs;
652 int cpu;
653 unsigned long max_cbs = 0;
654 int max_cpu = -1;
655 struct rcu_data *rdp;
656
657 if (rcu_gp_in_progress()) {
658 pr_info("%s: GP age %lu jiffies\n",
659 __func__, jiffies - rcu_state.gp_start);
660 show_rcu_gp_kthreads();
661 } else {
662 pr_info("%s: Last GP end %lu jiffies ago\n",
663 __func__, jiffies - rcu_state.gp_end);
664 preempt_disable();
665 rdp = this_cpu_ptr(&rcu_data);
666 rcu_check_gp_start_stall(rdp->mynode, rdp, j);
667 preempt_enable();
668 }
669 for_each_possible_cpu(cpu) {
670 cbs = rcu_get_n_cbs_cpu(cpu);
671 if (!cbs)
672 continue;
673 if (max_cpu < 0)
674 pr_info("%s: callbacks", __func__);
675 pr_cont(" %d: %lu", cpu, cbs);
676 if (cbs <= max_cbs)
677 continue;
678 max_cbs = cbs;
679 max_cpu = cpu;
680 }
681 if (max_cpu >= 0)
682 pr_cont("\n");
683}
684EXPORT_SYMBOL_GPL(rcu_fwd_progress_check);
685
686/* Commandeer a sysrq key to dump RCU's tree. */
687static bool sysrq_rcu;
688module_param(sysrq_rcu, bool, 0444);
689
690/* Dump grace-period-request information due to commandeered sysrq. */
691static void sysrq_show_rcu(int key)
692{
693 show_rcu_gp_kthreads();
694}
695
696static struct sysrq_key_op sysrq_rcudump_op = {
697 .handler = sysrq_show_rcu,
698 .help_msg = "show-rcu(y)",
699 .action_msg = "Show RCU tree",
700 .enable_mask = SYSRQ_ENABLE_DUMP,
701};
702
703static int __init rcu_sysrq_init(void)
704{
705 if (sysrq_rcu)
706 return register_sysrq_key('y', &sysrq_rcudump_op);
707 return 0;
708}
709early_initcall(rcu_sysrq_init);
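
Usage note: because sysrq_rcu defaults to off and is read-only after boot (mode 0444), the dump key must be requested on the kernel command line, presumably as rcutree.sysrq_rcu=1 given the usual module-name prefixing (the kernel-parameters.txt hunk in this series documents the exact spelling). Once registered, Alt-SysRq-y, or writing 'y' to /proc/sysrq-trigger, invokes show_rcu_gp_kthreads().
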
diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
index cbaa976c5945..c3bf44ba42e5 100644
--- a/kernel/rcu/update.c
+++ b/kernel/rcu/update.c
@@ -424,68 +424,11 @@ EXPORT_SYMBOL_GPL(do_trace_rcu_torture_read);
424#endif 424#endif
425 425
426#ifdef CONFIG_RCU_STALL_COMMON 426#ifdef CONFIG_RCU_STALL_COMMON
427
428#ifdef CONFIG_PROVE_RCU
429#define RCU_STALL_DELAY_DELTA (5 * HZ)
430#else
431#define RCU_STALL_DELAY_DELTA 0
432#endif
433
434int rcu_cpu_stall_suppress __read_mostly; /* 1 = suppress stall warnings. */ 427int rcu_cpu_stall_suppress __read_mostly; /* 1 = suppress stall warnings. */
435EXPORT_SYMBOL_GPL(rcu_cpu_stall_suppress); 428EXPORT_SYMBOL_GPL(rcu_cpu_stall_suppress);
436static int rcu_cpu_stall_timeout __read_mostly = CONFIG_RCU_CPU_STALL_TIMEOUT;
437
438module_param(rcu_cpu_stall_suppress, int, 0644); 429module_param(rcu_cpu_stall_suppress, int, 0644);
430int rcu_cpu_stall_timeout __read_mostly = CONFIG_RCU_CPU_STALL_TIMEOUT;
439module_param(rcu_cpu_stall_timeout, int, 0644); 431module_param(rcu_cpu_stall_timeout, int, 0644);
440
441int rcu_jiffies_till_stall_check(void)
442{
443 int till_stall_check = READ_ONCE(rcu_cpu_stall_timeout);
444
445 /*
446 * Limit check must be consistent with the Kconfig limits
447 * for CONFIG_RCU_CPU_STALL_TIMEOUT.
448 */
449 if (till_stall_check < 3) {
450 WRITE_ONCE(rcu_cpu_stall_timeout, 3);
451 till_stall_check = 3;
452 } else if (till_stall_check > 300) {
453 WRITE_ONCE(rcu_cpu_stall_timeout, 300);
454 till_stall_check = 300;
455 }
456 return till_stall_check * HZ + RCU_STALL_DELAY_DELTA;
457}
458EXPORT_SYMBOL_GPL(rcu_jiffies_till_stall_check);
459
460void rcu_sysrq_start(void)
461{
462 if (!rcu_cpu_stall_suppress)
463 rcu_cpu_stall_suppress = 2;
464}
465
466void rcu_sysrq_end(void)
467{
468 if (rcu_cpu_stall_suppress == 2)
469 rcu_cpu_stall_suppress = 0;
470}
471
472static int rcu_panic(struct notifier_block *this, unsigned long ev, void *ptr)
473{
474 rcu_cpu_stall_suppress = 1;
475 return NOTIFY_DONE;
476}
477
478static struct notifier_block rcu_panic_block = {
479 .notifier_call = rcu_panic,
480};
481
482static int __init check_cpu_stall_init(void)
483{
484 atomic_notifier_chain_register(&panic_notifier_list, &rcu_panic_block);
485 return 0;
486}
487early_initcall(check_cpu_stall_init);
488
489#endif /* #ifdef CONFIG_RCU_STALL_COMMON */ 432#endif /* #ifdef CONFIG_RCU_STALL_COMMON */
490 433
491#ifdef CONFIG_TASKS_RCU 434#ifdef CONFIG_TASKS_RCU
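
The rcu_jiffies_till_stall_check() logic removed above is relocated, not deleted; per the diffstat it now lives in kernel/rcu/tree_stall.h. Its job is to clamp the rcu_cpu_stall_timeout module parameter to the same 3-300 second range that Kconfig enforces for CONFIG_RCU_CPU_STALL_TIMEOUT, then convert to jiffies. A freestanding sketch of that clamp (HZ and the PROVE_RCU slack passed as parameters purely for illustration):

/* Clamp a stall-timeout request to [3, 300] seconds and convert to
 * jiffies, mirroring the relocated rcu_jiffies_till_stall_check(). */
static int stall_check_jiffies(int timeout_s, int hz, int delay_delta)
{
	if (timeout_s < 3)
		timeout_s = 3;
	else if (timeout_s > 300)
		timeout_s = 300;
	return timeout_s * hz + delay_delta;
}
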
diff --git a/kernel/torture.c b/kernel/torture.c
index 8faa1a9aaeb9..17b2be9bde12 100644
--- a/kernel/torture.c
+++ b/kernel/torture.c
@@ -88,6 +88,8 @@ bool torture_offline(int cpu, long *n_offl_attempts, long *n_offl_successes,
88 88
89 if (!cpu_online(cpu) || !cpu_is_hotpluggable(cpu)) 89 if (!cpu_online(cpu) || !cpu_is_hotpluggable(cpu))
90 return false; 90 return false;
91 if (num_online_cpus() <= 1)
92 return false; /* Can't offline the last CPU. */
91 93
92 if (verbose > 1) 94 if (verbose > 1)
93 pr_alert("%s" TORTURE_FLAG 95 pr_alert("%s" TORTURE_FLAG
diff --git a/tools/testing/selftests/rcutorture/bin/configNR_CPUS.sh b/tools/testing/selftests/rcutorture/bin/configNR_CPUS.sh
index 43540f1828cc..2deea2169fc2 100755
--- a/tools/testing/selftests/rcutorture/bin/configNR_CPUS.sh
+++ b/tools/testing/selftests/rcutorture/bin/configNR_CPUS.sh
@@ -1,4 +1,5 @@
1#!/bin/bash 1#!/bin/bash
2# SPDX-License-Identifier: GPL-2.0+
2# 3#
3# Extract the number of CPUs expected from the specified Kconfig-file 4# Extract the number of CPUs expected from the specified Kconfig-file
4# fragment by checking CONFIG_SMP and CONFIG_NR_CPUS. If the specified 5# fragment by checking CONFIG_SMP and CONFIG_NR_CPUS. If the specified
@@ -7,23 +8,9 @@
7# 8#
8# Usage: configNR_CPUS.sh config-frag 9# Usage: configNR_CPUS.sh config-frag
9# 10#
10# This program is free software; you can redistribute it and/or modify
11# it under the terms of the GNU General Public License as published by
12# the Free Software Foundation; either version 2 of the License, or
13# (at your option) any later version.
14#
15# This program is distributed in the hope that it will be useful,
16# but WITHOUT ANY WARRANTY; without even the implied warranty of
17# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
18# GNU General Public License for more details.
19#
20# You should have received a copy of the GNU General Public License
21# along with this program; if not, you can access it online at
22# http://www.gnu.org/licenses/gpl-2.0.html.
23#
24# Copyright (C) IBM Corporation, 2013 11# Copyright (C) IBM Corporation, 2013
25# 12#
26# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> 13# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
27 14
28cf=$1 15cf=$1
29if test ! -r $cf 16if test ! -r $cf
diff --git a/tools/testing/selftests/rcutorture/bin/config_override.sh b/tools/testing/selftests/rcutorture/bin/config_override.sh
index ef7fcbac3d42..90016c359e83 100755
--- a/tools/testing/selftests/rcutorture/bin/config_override.sh
+++ b/tools/testing/selftests/rcutorture/bin/config_override.sh
@@ -1,4 +1,5 @@
1#!/bin/bash 1#!/bin/bash
2# SPDX-License-Identifier: GPL-2.0+
2# 3#
3# config_override.sh base override 4# config_override.sh base override
4# 5#
@@ -6,23 +7,9 @@
6# that conflict with any in override, concatenating what remains and 7# that conflict with any in override, concatenating what remains and
7# sending the result to standard output. 8# sending the result to standard output.
8# 9#
9# This program is free software; you can redistribute it and/or modify
10# it under the terms of the GNU General Public License as published by
11# the Free Software Foundation; either version 2 of the License, or
12# (at your option) any later version.
13#
14# This program is distributed in the hope that it will be useful,
15# but WITHOUT ANY WARRANTY; without even the implied warranty of
16# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
17# GNU General Public License for more details.
18#
19# You should have received a copy of the GNU General Public License
20# along with this program; if not, you can access it online at
21# http://www.gnu.org/licenses/gpl-2.0.html.
22#
23# Copyright (C) IBM Corporation, 2017 10# Copyright (C) IBM Corporation, 2017
24# 11#
25# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> 12# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
26 13
27base=$1 14base=$1
28if test -r $base 15if test -r $base
diff --git a/tools/testing/selftests/rcutorture/bin/configcheck.sh b/tools/testing/selftests/rcutorture/bin/configcheck.sh
index 197deece7c7c..31584cee84d7 100755
--- a/tools/testing/selftests/rcutorture/bin/configcheck.sh
+++ b/tools/testing/selftests/rcutorture/bin/configcheck.sh
@@ -1,23 +1,11 @@
1#!/bin/bash 1#!/bin/bash
2# Usage: configcheck.sh .config .config-template 2# SPDX-License-Identifier: GPL-2.0+
3#
4# This program is free software; you can redistribute it and/or modify
5# it under the terms of the GNU General Public License as published by
6# the Free Software Foundation; either version 2 of the License, or
7# (at your option) any later version.
8# 3#
9# This program is distributed in the hope that it will be useful, 4# Usage: configcheck.sh .config .config-template
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU General Public License for more details.
13#
14# You should have received a copy of the GNU General Public License
15# along with this program; if not, you can access it online at
16# http://www.gnu.org/licenses/gpl-2.0.html.
17# 5#
18# Copyright (C) IBM Corporation, 2011 6# Copyright (C) IBM Corporation, 2011
19# 7#
20# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> 8# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
21 9
22T=${TMPDIR-/tmp}/abat-chk-config.sh.$$ 10T=${TMPDIR-/tmp}/abat-chk-config.sh.$$
23trap 'rm -rf $T' 0 11trap 'rm -rf $T' 0
@@ -26,6 +14,7 @@ mkdir $T
26cat $1 > $T/.config 14cat $1 > $T/.config
27 15
28cat $2 | sed -e 's/\(.*\)=n/# \1 is not set/' -e 's/^#CHECK#//' | 16cat $2 | sed -e 's/\(.*\)=n/# \1 is not set/' -e 's/^#CHECK#//' |
17grep -v '^CONFIG_INITRAMFS_SOURCE' |
29awk ' 18awk '
30{ 19{
31 print "if grep -q \"" $0 "\" < '"$T/.config"'"; 20 print "if grep -q \"" $0 "\" < '"$T/.config"'";
diff --git a/tools/testing/selftests/rcutorture/bin/configinit.sh b/tools/testing/selftests/rcutorture/bin/configinit.sh
index 65541c21a544..40359486b3a8 100755
--- a/tools/testing/selftests/rcutorture/bin/configinit.sh
+++ b/tools/testing/selftests/rcutorture/bin/configinit.sh
@@ -1,4 +1,5 @@
1#!/bin/bash 1#!/bin/bash
2# SPDX-License-Identifier: GPL-2.0+
2# 3#
3# Usage: configinit.sh config-spec-file build-output-dir results-dir 4# Usage: configinit.sh config-spec-file build-output-dir results-dir
4# 5#
@@ -14,23 +15,9 @@
14# for example, "O=/tmp/foo". If this argument is omitted, the .config 15# for example, "O=/tmp/foo". If this argument is omitted, the .config
15# file will be generated directly in the current directory. 16# file will be generated directly in the current directory.
16# 17#
17# This program is free software; you can redistribute it and/or modify
18# it under the terms of the GNU General Public License as published by
19# the Free Software Foundation; either version 2 of the License, or
20# (at your option) any later version.
21#
22# This program is distributed in the hope that it will be useful,
23# but WITHOUT ANY WARRANTY; without even the implied warranty of
24# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
25# GNU General Public License for more details.
26#
27# You should have received a copy of the GNU General Public License
28# along with this program; if not, you can access it online at
29# http://www.gnu.org/licenses/gpl-2.0.html.
30#
31# Copyright (C) IBM Corporation, 2013 18# Copyright (C) IBM Corporation, 2013
32# 19#
33# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> 20# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
34 21
35T=${TMPDIR-/tmp}/configinit.sh.$$ 22T=${TMPDIR-/tmp}/configinit.sh.$$
36trap 'rm -rf $T' 0 23trap 'rm -rf $T' 0
diff --git a/tools/testing/selftests/rcutorture/bin/cpus2use.sh b/tools/testing/selftests/rcutorture/bin/cpus2use.sh
index bb99cde3f5f9..ff7102212703 100755
--- a/tools/testing/selftests/rcutorture/bin/cpus2use.sh
+++ b/tools/testing/selftests/rcutorture/bin/cpus2use.sh
@@ -1,26 +1,13 @@
1#!/bin/bash 1#!/bin/bash
2# SPDX-License-Identifier: GPL-2.0+
2# 3#
3# Get an estimate of how CPU-hoggy to be. 4# Get an estimate of how CPU-hoggy to be.
4# 5#
5# Usage: cpus2use.sh 6# Usage: cpus2use.sh
6# 7#
7# This program is free software; you can redistribute it and/or modify
8# it under the terms of the GNU General Public License as published by
9# the Free Software Foundation; either version 2 of the License, or
10# (at your option) any later version.
11#
12# This program is distributed in the hope that it will be useful,
13# but WITHOUT ANY WARRANTY; without even the implied warranty of
14# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15# GNU General Public License for more details.
16#
17# You should have received a copy of the GNU General Public License
18# along with this program; if not, you can access it online at
19# http://www.gnu.org/licenses/gpl-2.0.html.
20#
21# Copyright (C) IBM Corporation, 2013 8# Copyright (C) IBM Corporation, 2013
22# 9#
23# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> 10# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
24 11
25ncpus=`grep '^processor' /proc/cpuinfo | wc -l` 12ncpus=`grep '^processor' /proc/cpuinfo | wc -l`
26idlecpus=`mpstat | tail -1 | \ 13idlecpus=`mpstat | tail -1 | \
diff --git a/tools/testing/selftests/rcutorture/bin/functions.sh b/tools/testing/selftests/rcutorture/bin/functions.sh
index 65f6655026f0..6bcb8b5b2ff2 100644
--- a/tools/testing/selftests/rcutorture/bin/functions.sh
+++ b/tools/testing/selftests/rcutorture/bin/functions.sh
@@ -1,24 +1,11 @@
1#!/bin/bash 1#!/bin/bash
2# SPDX-License-Identifier: GPL-2.0+
2# 3#
3# Shell functions for the rest of the scripts. 4# Shell functions for the rest of the scripts.
4# 5#
5# This program is free software; you can redistribute it and/or modify
6# it under the terms of the GNU General Public License as published by
7# the Free Software Foundation; either version 2 of the License, or
8# (at your option) any later version.
9#
10# This program is distributed in the hope that it will be useful,
11# but WITHOUT ANY WARRANTY; without even the implied warranty of
12# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13# GNU General Public License for more details.
14#
15# You should have received a copy of the GNU General Public License
16# along with this program; if not, you can access it online at
17# http://www.gnu.org/licenses/gpl-2.0.html.
18#
19# Copyright (C) IBM Corporation, 2013 6# Copyright (C) IBM Corporation, 2013
20# 7#
21# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> 8# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
22 9
23# bootparam_hotplug_cpu bootparam-string 10# bootparam_hotplug_cpu bootparam-string
24# 11#
diff --git a/tools/testing/selftests/rcutorture/bin/jitter.sh b/tools/testing/selftests/rcutorture/bin/jitter.sh
index 3633828375e3..435b60933985 100755
--- a/tools/testing/selftests/rcutorture/bin/jitter.sh
+++ b/tools/testing/selftests/rcutorture/bin/jitter.sh
@@ -1,4 +1,5 @@
1#!/bin/bash 1#!/bin/bash
2# SPDX-License-Identifier: GPL-2.0+
2# 3#
3# Alternate sleeping and spinning on randomly selected CPUs. The purpose 4# Alternate sleeping and spinning on randomly selected CPUs. The purpose
4# of this script is to inflict random OS jitter on a concurrently running 5# of this script is to inflict random OS jitter on a concurrently running
@@ -11,23 +12,9 @@
11# sleepmax: Maximum microseconds to sleep, defaults to one second. 12# sleepmax: Maximum microseconds to sleep, defaults to one second.
12# spinmax: Maximum microseconds to spin, defaults to one millisecond. 13# spinmax: Maximum microseconds to spin, defaults to one millisecond.
13# 14#
14# This program is free software; you can redistribute it and/or modify
15# it under the terms of the GNU General Public License as published by
16# the Free Software Foundation; either version 2 of the License, or
17# (at your option) any later version.
18#
19# This program is distributed in the hope that it will be useful,
20# but WITHOUT ANY WARRANTY; without even the implied warranty of
21# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
22# GNU General Public License for more details.
23#
24# You should have received a copy of the GNU General Public License
25# along with this program; if not, you can access it online at
26# http://www.gnu.org/licenses/gpl-2.0.html.
27#
28# Copyright (C) IBM Corporation, 2016 15# Copyright (C) IBM Corporation, 2016
29# 16#
30# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> 17# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
31 18
32me=$(($1 * 1000)) 19me=$(($1 * 1000))
33duration=$2 20duration=$2
diff --git a/tools/testing/selftests/rcutorture/bin/kvm-build.sh b/tools/testing/selftests/rcutorture/bin/kvm-build.sh
index 9115fcdb5617..c27a0bbb9c02 100755
--- a/tools/testing/selftests/rcutorture/bin/kvm-build.sh
+++ b/tools/testing/selftests/rcutorture/bin/kvm-build.sh
@@ -1,26 +1,13 @@
1#!/bin/bash 1#!/bin/bash
2# SPDX-License-Identifier: GPL-2.0+
2# 3#
3# Build a kvm-ready Linux kernel from the tree in the current directory. 4# Build a kvm-ready Linux kernel from the tree in the current directory.
4# 5#
5# Usage: kvm-build.sh config-template build-dir resdir 6# Usage: kvm-build.sh config-template build-dir resdir
6# 7#
7# This program is free software; you can redistribute it and/or modify
8# it under the terms of the GNU General Public License as published by
9# the Free Software Foundation; either version 2 of the License, or
10# (at your option) any later version.
11#
12# This program is distributed in the hope that it will be useful,
13# but WITHOUT ANY WARRANTY; without even the implied warranty of
14# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15# GNU General Public License for more details.
16#
17# You should have received a copy of the GNU General Public License
18# along with this program; if not, you can access it online at
19# http://www.gnu.org/licenses/gpl-2.0.html.
20#
21# Copyright (C) IBM Corporation, 2011 8# Copyright (C) IBM Corporation, 2011
22# 9#
23# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> 10# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
24 11
25config_template=${1} 12config_template=${1}
26if test -z "$config_template" -o ! -f "$config_template" -o ! -r "$config_template" 13if test -z "$config_template" -o ! -f "$config_template" -o ! -r "$config_template"
diff --git a/tools/testing/selftests/rcutorture/bin/kvm-find-errors.sh b/tools/testing/selftests/rcutorture/bin/kvm-find-errors.sh
index 98f650c9bf54..8426fe1f15ee 100755
--- a/tools/testing/selftests/rcutorture/bin/kvm-find-errors.sh
+++ b/tools/testing/selftests/rcutorture/bin/kvm-find-errors.sh
@@ -1,4 +1,5 @@
1#!/bin/sh 1#!/bin/sh
2# SPDX-License-Identifier: GPL-2.0+
2# 3#
3# Invoke a text editor on all console.log files for all runs with diagnostics, 4# Invoke a text editor on all console.log files for all runs with diagnostics,
4# that is, on all such files having a console.log.diags counterpart. 5# that is, on all such files having a console.log.diags counterpart.
@@ -10,6 +11,10 @@
10# 11#
11# The "directory" above should end with the date/time directory, for example, 12# The "directory" above should end with the date/time directory, for example,
12# "tools/testing/selftests/rcutorture/res/2018.02.25-14:27:27". 13# "tools/testing/selftests/rcutorture/res/2018.02.25-14:27:27".
14#
15# Copyright (C) IBM Corporation, 2018
16#
17# Author: Paul E. McKenney <paulmck@linux.ibm.com>
13 18
14rundir="${1}" 19rundir="${1}"
15if test -z "$rundir" -o ! -d "$rundir" 20if test -z "$rundir" -o ! -d "$rundir"
diff --git a/tools/testing/selftests/rcutorture/bin/kvm-recheck-lock.sh b/tools/testing/selftests/rcutorture/bin/kvm-recheck-lock.sh
index 2de92f43ee8c..f3a7a5e2b89d 100755
--- a/tools/testing/selftests/rcutorture/bin/kvm-recheck-lock.sh
+++ b/tools/testing/selftests/rcutorture/bin/kvm-recheck-lock.sh
@@ -1,26 +1,13 @@
1#!/bin/bash 1#!/bin/bash
2# SPDX-License-Identifier: GPL-2.0+
2# 3#
3# Analyze a given results directory for locktorture progress. 4# Analyze a given results directory for locktorture progress.
4# 5#
5# Usage: kvm-recheck-lock.sh resdir 6# Usage: kvm-recheck-lock.sh resdir
6# 7#
7# This program is free software; you can redistribute it and/or modify
8# it under the terms of the GNU General Public License as published by
9# the Free Software Foundation; either version 2 of the License, or
10# (at your option) any later version.
11#
12# This program is distributed in the hope that it will be useful,
13# but WITHOUT ANY WARRANTY; without even the implied warranty of
14# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15# GNU General Public License for more details.
16#
17# You should have received a copy of the GNU General Public License
18# along with this program; if not, you can access it online at
19# http://www.gnu.org/licenses/gpl-2.0.html.
20#
21# Copyright (C) IBM Corporation, 2014 8# Copyright (C) IBM Corporation, 2014
22# 9#
23# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> 10# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
24 11
25i="$1" 12i="$1"
26if test -d "$i" -a -r "$i" 13if test -d "$i" -a -r "$i"
diff --git a/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcu.sh b/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcu.sh
index 0fa8a61ccb7b..2a7f3f4756a7 100755
--- a/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcu.sh
+++ b/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcu.sh
@@ -1,26 +1,13 @@
1#!/bin/bash 1#!/bin/bash
2# SPDX-License-Identifier: GPL-2.0+
2# 3#
3# Analyze a given results directory for rcutorture progress. 4# Analyze a given results directory for rcutorture progress.
4# 5#
5# Usage: kvm-recheck-rcu.sh resdir 6# Usage: kvm-recheck-rcu.sh resdir
6# 7#
7# This program is free software; you can redistribute it and/or modify
8# it under the terms of the GNU General Public License as published by
9# the Free Software Foundation; either version 2 of the License, or
10# (at your option) any later version.
11#
12# This program is distributed in the hope that it will be useful,
13# but WITHOUT ANY WARRANTY; without even the implied warranty of
14# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15# GNU General Public License for more details.
16#
17# You should have received a copy of the GNU General Public License
18# along with this program; if not, you can access it online at
19# http://www.gnu.org/licenses/gpl-2.0.html.
20#
21# Copyright (C) IBM Corporation, 2014 8# Copyright (C) IBM Corporation, 2014
22# 9#
23# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> 10# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
24 11
25i="$1" 12i="$1"
26if test -d "$i" -a -r "$i" 13if test -d "$i" -a -r "$i"
diff --git a/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcuperf-ftrace.sh b/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcuperf-ftrace.sh
index 8948f7926b21..7d3c2be66c64 100755
--- a/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcuperf-ftrace.sh
+++ b/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcuperf-ftrace.sh
@@ -1,4 +1,5 @@
1#!/bin/bash 1#!/bin/bash
2# SPDX-License-Identifier: GPL-2.0+
2# 3#
3# Analyze a given results directory for rcuperf performance measurements, 4# Analyze a given results directory for rcuperf performance measurements,
4# looking for ftrace data. Exits with 0 if data was found, analyzed, and 5# looking for ftrace data. Exits with 0 if data was found, analyzed, and
@@ -7,23 +8,9 @@
7# 8#
8# Usage: kvm-recheck-rcuperf-ftrace.sh resdir 9# Usage: kvm-recheck-rcuperf-ftrace.sh resdir
9# 10#
10# This program is free software; you can redistribute it and/or modify
11# it under the terms of the GNU General Public License as published by
12# the Free Software Foundation; either version 2 of the License, or
13# (at your option) any later version.
14#
15# This program is distributed in the hope that it will be useful,
16# but WITHOUT ANY WARRANTY; without even the implied warranty of
17# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
18# GNU General Public License for more details.
19#
20# You should have received a copy of the GNU General Public License
21# along with this program; if not, you can access it online at
22# http://www.gnu.org/licenses/gpl-2.0.html.
23#
24# Copyright (C) IBM Corporation, 2016 11# Copyright (C) IBM Corporation, 2016
25# 12#
26# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> 13# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
27 14
28i="$1" 15i="$1"
29. functions.sh 16. functions.sh
diff --git a/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcuperf.sh b/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcuperf.sh
index ccebf772fa1e..db0375a57f28 100755
--- a/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcuperf.sh
+++ b/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcuperf.sh
@@ -1,26 +1,13 @@
1#!/bin/bash 1#!/bin/bash
2# SPDX-License-Identifier: GPL-2.0+
2# 3#
3# Analyze a given results directory for rcuperf performance measurements. 4# Analyze a given results directory for rcuperf performance measurements.
4# 5#
5# Usage: kvm-recheck-rcuperf.sh resdir 6# Usage: kvm-recheck-rcuperf.sh resdir
6# 7#
7# This program is free software; you can redistribute it and/or modify
8# it under the terms of the GNU General Public License as published by
9# the Free Software Foundation; either version 2 of the License, or
10# (at your option) any later version.
11#
12# This program is distributed in the hope that it will be useful,
13# but WITHOUT ANY WARRANTY; without even the implied warranty of
14# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15# GNU General Public License for more details.
16#
17# You should have received a copy of the GNU General Public License
18# along with this program; if not, you can access it online at
19# http://www.gnu.org/licenses/gpl-2.0.html.
20#
21# Copyright (C) IBM Corporation, 2016 8# Copyright (C) IBM Corporation, 2016
22# 9#
23# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> 10# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
24 11
25i="$1" 12i="$1"
26if test -d "$i" -a -r "$i" 13if test -d "$i" -a -r "$i"
diff --git a/tools/testing/selftests/rcutorture/bin/kvm-recheck.sh b/tools/testing/selftests/rcutorture/bin/kvm-recheck.sh
index c9bab57a77eb..2adde6aaafdb 100755
--- a/tools/testing/selftests/rcutorture/bin/kvm-recheck.sh
+++ b/tools/testing/selftests/rcutorture/bin/kvm-recheck.sh
@@ -1,4 +1,5 @@
1#!/bin/bash 1#!/bin/bash
2# SPDX-License-Identifier: GPL-2.0+
2# 3#
3# Given the results directories for previous KVM-based torture runs, 4# Given the results directories for previous KVM-based torture runs,
4# check the build and console output for errors. Given a directory 5# check the build and console output for errors. Given a directory
@@ -6,23 +7,9 @@
6# 7#
7# Usage: kvm-recheck.sh resdir ... 8# Usage: kvm-recheck.sh resdir ...
8# 9#
9# This program is free software; you can redistribute it and/or modify
10# it under the terms of the GNU General Public License as published by
11# the Free Software Foundation; either version 2 of the License, or
12# (at your option) any later version.
13#
14# This program is distributed in the hope that it will be useful,
15# but WITHOUT ANY WARRANTY; without even the implied warranty of
16# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
17# GNU General Public License for more details.
18#
19# You should have received a copy of the GNU General Public License
20# along with this program; if not, you can access it online at
21# http://www.gnu.org/licenses/gpl-2.0.html.
22#
23# Copyright (C) IBM Corporation, 2011 10# Copyright (C) IBM Corporation, 2011
24# 11#
25# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> 12# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
26 13
27PATH=`pwd`/tools/testing/selftests/rcutorture/bin:$PATH; export PATH 14PATH=`pwd`/tools/testing/selftests/rcutorture/bin:$PATH; export PATH
28. functions.sh 15. functions.sh
diff --git a/tools/testing/selftests/rcutorture/bin/kvm-test-1-run.sh b/tools/testing/selftests/rcutorture/bin/kvm-test-1-run.sh
index 58ca758a5786..0eb1ec16d78a 100755
--- a/tools/testing/selftests/rcutorture/bin/kvm-test-1-run.sh
+++ b/tools/testing/selftests/rcutorture/bin/kvm-test-1-run.sh
@@ -1,4 +1,5 @@
1#!/bin/bash 1#!/bin/bash
2# SPDX-License-Identifier: GPL-2.0+
2# 3#
3# Run a kvm-based test of the specified tree on the specified configs. 4# Run a kvm-based test of the specified tree on the specified configs.
4# Fully automated run and error checking, no graphics console. 5# Fully automated run and error checking, no graphics console.
@@ -20,23 +21,9 @@
20# 21#
21# More sophisticated argument parsing is clearly needed. 22# More sophisticated argument parsing is clearly needed.
22# 23#
23# This program is free software; you can redistribute it and/or modify
24# it under the terms of the GNU General Public License as published by
25# the Free Software Foundation; either version 2 of the License, or
26# (at your option) any later version.
27#
28# This program is distributed in the hope that it will be useful,
29# but WITHOUT ANY WARRANTY; without even the implied warranty of
30# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
31# GNU General Public License for more details.
32#
33# You should have received a copy of the GNU General Public License
34# along with this program; if not, you can access it online at
35# http://www.gnu.org/licenses/gpl-2.0.html.
36#
37# Copyright (C) IBM Corporation, 2011 24# Copyright (C) IBM Corporation, 2011
38# 25#
39# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> 26# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
40 27
41T=${TMPDIR-/tmp}/kvm-test-1-run.sh.$$ 28T=${TMPDIR-/tmp}/kvm-test-1-run.sh.$$
42trap 'rm -rf $T' 0 29trap 'rm -rf $T' 0
diff --git a/tools/testing/selftests/rcutorture/bin/kvm.sh b/tools/testing/selftests/rcutorture/bin/kvm.sh
index 19864f1cb27a..8f1e337b9b54 100755
--- a/tools/testing/selftests/rcutorture/bin/kvm.sh
+++ b/tools/testing/selftests/rcutorture/bin/kvm.sh
@@ -1,4 +1,5 @@
1#!/bin/bash 1#!/bin/bash
2# SPDX-License-Identifier: GPL-2.0+
2# 3#
3# Run a series of tests under KVM. By default, this series is specified 4# Run a series of tests under KVM. By default, this series is specified
4# by the relevant CFLIST file, but can be overridden by the --configs 5# by the relevant CFLIST file, but can be overridden by the --configs
@@ -6,23 +7,9 @@
6# 7#
7# Usage: kvm.sh [ options ] 8# Usage: kvm.sh [ options ]
8# 9#
9# This program is free software; you can redistribute it and/or modify
10# it under the terms of the GNU General Public License as published by
11# the Free Software Foundation; either version 2 of the License, or
12# (at your option) any later version.
13#
14# This program is distributed in the hope that it will be useful,
15# but WITHOUT ANY WARRANTY; without even the implied warranty of
16# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
17# GNU General Public License for more details.
18#
19# You should have received a copy of the GNU General Public License
20# along with this program; if not, you can access it online at
21# http://www.gnu.org/licenses/gpl-2.0.html.
22#
23# Copyright (C) IBM Corporation, 2011 10# Copyright (C) IBM Corporation, 2011
24# 11#
25# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> 12# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
26 13
27scriptname=$0 14scriptname=$0
28args="$*" 15args="$*"
diff --git a/tools/testing/selftests/rcutorture/bin/mkinitrd.sh b/tools/testing/selftests/rcutorture/bin/mkinitrd.sh
index 83552bb007b4..6fa9bd1ddc09 100755
--- a/tools/testing/selftests/rcutorture/bin/mkinitrd.sh
+++ b/tools/testing/selftests/rcutorture/bin/mkinitrd.sh
@@ -1,21 +1,8 @@
1#!/bin/bash 1#!/bin/bash
2# SPDX-License-Identifier: GPL-2.0+
2# 3#
3# Create an initrd directory if one does not already exist. 4# Create an initrd directory if one does not already exist.
4# 5#
5# This program is free software; you can redistribute it and/or modify
6# it under the terms of the GNU General Public License as published by
7# the Free Software Foundation; either version 2 of the License, or
8# (at your option) any later version.
9#
10# This program is distributed in the hope that it will be useful,
11# but WITHOUT ANY WARRANTY; without even the implied warranty of
12# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13# GNU General Public License for more details.
14#
15# You should have received a copy of the GNU General Public License
16# along with this program; if not, you can access it online at
17# http://www.gnu.org/licenses/gpl-2.0.html.
18#
19# Copyright (C) IBM Corporation, 2013 6# Copyright (C) IBM Corporation, 2013
20# 7#
21# Author: Connor Shu <Connor.Shu@ibm.com> 8# Author: Connor Shu <Connor.Shu@ibm.com>
diff --git a/tools/testing/selftests/rcutorture/bin/parse-build.sh b/tools/testing/selftests/rcutorture/bin/parse-build.sh
index 24fe5f822b28..0701b3bf6ade 100755
--- a/tools/testing/selftests/rcutorture/bin/parse-build.sh
+++ b/tools/testing/selftests/rcutorture/bin/parse-build.sh
@@ -1,4 +1,5 @@
1#!/bin/bash 1#!/bin/bash
2# SPDX-License-Identifier: GPL-2.0+
2# 3#
3# Check the build output from an rcutorture run for goodness. 4# Check the build output from an rcutorture run for goodness.
4# The "file" is a pathname on the local system, and "title" is 5# The "file" is a pathname on the local system, and "title" is
@@ -8,23 +9,9 @@
8# 9#
9# Usage: parse-build.sh file title 10# Usage: parse-build.sh file title
10# 11#
11# This program is free software; you can redistribute it and/or modify
12# it under the terms of the GNU General Public License as published by
13# the Free Software Foundation; either version 2 of the License, or
14# (at your option) any later version.
15#
16# This program is distributed in the hope that it will be useful,
17# but WITHOUT ANY WARRANTY; without even the implied warranty of
18# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
19# GNU General Public License for more details.
20#
21# You should have received a copy of the GNU General Public License
22# along with this program; if not, you can access it online at
23# http://www.gnu.org/licenses/gpl-2.0.html.
24#
25# Copyright (C) IBM Corporation, 2011 12# Copyright (C) IBM Corporation, 2011
26# 13#
27# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> 14# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
28 15
29F=$1 16F=$1
30title=$2 17title=$2
diff --git a/tools/testing/selftests/rcutorture/bin/parse-console.sh b/tools/testing/selftests/rcutorture/bin/parse-console.sh
index 84933f6aed77..4508373a922f 100755
--- a/tools/testing/selftests/rcutorture/bin/parse-console.sh
+++ b/tools/testing/selftests/rcutorture/bin/parse-console.sh
@@ -1,4 +1,5 @@
1#!/bin/bash 1#!/bin/bash
2# SPDX-License-Identifier: GPL-2.0+
2# 3#
3# Check the console output from an rcutorture run for oopses. 4# Check the console output from an rcutorture run for oopses.
4# The "file" is a pathname on the local system, and "title" is 5# The "file" is a pathname on the local system, and "title" is
@@ -6,23 +7,9 @@
6# 7#
7# Usage: parse-console.sh file title 8# Usage: parse-console.sh file title
8# 9#
9# This program is free software; you can redistribute it and/or modify
10# it under the terms of the GNU General Public License as published by
11# the Free Software Foundation; either version 2 of the License, or
12# (at your option) any later version.
13#
14# This program is distributed in the hope that it will be useful,
15# but WITHOUT ANY WARRANTY; without even the implied warranty of
16# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
17# GNU General Public License for more details.
18#
19# You should have received a copy of the GNU General Public License
20# along with this program; if not, you can access it online at
21# http://www.gnu.org/licenses/gpl-2.0.html.
22#
23# Copyright (C) IBM Corporation, 2011 10# Copyright (C) IBM Corporation, 2011
24# 11#
25# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> 12# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
26 13
27T=${TMPDIR-/tmp}/parse-console.sh.$$ 14T=${TMPDIR-/tmp}/parse-console.sh.$$
28file="$1" 15file="$1"
diff --git a/tools/testing/selftests/rcutorture/configs/lock/ver_functions.sh b/tools/testing/selftests/rcutorture/configs/lock/ver_functions.sh
index 80eb646e1319..d3e4b2971f92 100644
--- a/tools/testing/selftests/rcutorture/configs/lock/ver_functions.sh
+++ b/tools/testing/selftests/rcutorture/configs/lock/ver_functions.sh
@@ -1,24 +1,11 @@
1#!/bin/bash 1#!/bin/bash
2# SPDX-License-Identifier: GPL-2.0+
2# 3#
3# Kernel-version-dependent shell functions for the rest of the scripts. 4# Kernel-version-dependent shell functions for the rest of the scripts.
4# 5#
5# This program is free software; you can redistribute it and/or modify
6# it under the terms of the GNU General Public License as published by
7# the Free Software Foundation; either version 2 of the License, or
8# (at your option) any later version.
9#
10# This program is distributed in the hope that it will be useful,
11# but WITHOUT ANY WARRANTY; without even the implied warranty of
12# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13# GNU General Public License for more details.
14#
15# You should have received a copy of the GNU General Public License
16# along with this program; if not, you can access it online at
17# http://www.gnu.org/licenses/gpl-2.0.html.
18#
19# Copyright (C) IBM Corporation, 2014 6# Copyright (C) IBM Corporation, 2014
20# 7#
21# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> 8# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
22 9
23# locktorture_param_onoff bootparam-string config-file 10# locktorture_param_onoff bootparam-string config-file
24# 11#
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/ver_functions.sh b/tools/testing/selftests/rcutorture/configs/rcu/ver_functions.sh
index 7bab8246392b..effa415f9b92 100644
--- a/tools/testing/selftests/rcutorture/configs/rcu/ver_functions.sh
+++ b/tools/testing/selftests/rcutorture/configs/rcu/ver_functions.sh
@@ -1,24 +1,11 @@
1#!/bin/bash 1#!/bin/bash
2# SPDX-License-Identifier: GPL-2.0+
2# 3#
3# Kernel-version-dependent shell functions for the rest of the scripts. 4# Kernel-version-dependent shell functions for the rest of the scripts.
4# 5#
5# This program is free software; you can redistribute it and/or modify
6# it under the terms of the GNU General Public License as published by
7# the Free Software Foundation; either version 2 of the License, or
8# (at your option) any later version.
9#
10# This program is distributed in the hope that it will be useful,
11# but WITHOUT ANY WARRANTY; without even the implied warranty of
12# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13# GNU General Public License for more details.
14#
15# You should have received a copy of the GNU General Public License
16# along with this program; if not, you can access it online at
17# http://www.gnu.org/licenses/gpl-2.0.html.
18#
19# Copyright (C) IBM Corporation, 2013 6# Copyright (C) IBM Corporation, 2013
20# 7#
21# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> 8# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
22 9
23# rcutorture_param_n_barrier_cbs bootparam-string 10# rcutorture_param_n_barrier_cbs bootparam-string
24# 11#
diff --git a/tools/testing/selftests/rcutorture/configs/rcuperf/ver_functions.sh b/tools/testing/selftests/rcutorture/configs/rcuperf/ver_functions.sh
index d36b8fd6f0fc..777d5b0c190f 100644
--- a/tools/testing/selftests/rcutorture/configs/rcuperf/ver_functions.sh
+++ b/tools/testing/selftests/rcutorture/configs/rcuperf/ver_functions.sh
@@ -1,24 +1,11 @@
1#!/bin/bash 1#!/bin/bash
2# SPDX-License-Identifier: GPL-2.0+
2# 3#
3# Torture-suite-dependent shell functions for the rest of the scripts. 4# Torture-suite-dependent shell functions for the rest of the scripts.
4# 5#
5# This program is free software; you can redistribute it and/or modify
6# it under the terms of the GNU General Public License as published by
7# the Free Software Foundation; either version 2 of the License, or
8# (at your option) any later version.
9#
10# This program is distributed in the hope that it will be useful,
11# but WITHOUT ANY WARRANTY; without even the implied warranty of
12# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13# GNU General Public License for more details.
14#
15# You should have received a copy of the GNU General Public License
16# along with this program; if not, you can access it online at
17# http://www.gnu.org/licenses/gpl-2.0.html.
18#
19# Copyright (C) IBM Corporation, 2015 6# Copyright (C) IBM Corporation, 2015
20# 7#
21# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> 8# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
22 9
23# per_version_boot_params bootparam-string config-file seconds 10# per_version_boot_params bootparam-string config-file seconds
24# 11#