49 files changed, 1025 insertions, 1231 deletions
diff --git a/Documentation/RCU/Design/Data-Structures/Data-Structures.html b/Documentation/RCU/Design/Data-Structures/Data-Structures.html
index 18f179807563..c30c1957c7e6 100644
--- a/Documentation/RCU/Design/Data-Structures/Data-Structures.html
+++ b/Documentation/RCU/Design/Data-Structures/Data-Structures.html
@@ -155,8 +155,7 @@ keeping lock contention under control at all tree levels regardless
155 | of the level of loading on the system. | 155 | of the level of loading on the system. |
156 | 156 | ||
157 | </p><p>RCU updaters wait for normal grace periods by registering | 157 | </p><p>RCU updaters wait for normal grace periods by registering |
158 | RCU callbacks, either directly via <tt>call_rcu()</tt> and | 158 | RCU callbacks, either directly via <tt>call_rcu()</tt> |
159 | friends (namely <tt>call_rcu_bh()</tt> and <tt>call_rcu_sched()</tt>), | ||
160 | or indirectly via <tt>synchronize_rcu()</tt> and friends. | 159 | or indirectly via <tt>synchronize_rcu()</tt> and friends. |
161 | RCU callbacks are represented by <tt>rcu_head</tt> structures, | 160 | RCU callbacks are represented by <tt>rcu_head</tt> structures, |
162 | which are queued on <tt>rcu_data</tt> structures while they are | 161 | which are queued on <tt>rcu_data</tt> structures while they are |
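To make this registration path concrete, here is a minimal sketch of an updater embedding an rcu_head in its own structure and handing it to call_rcu(); struct foo and free_foo_cb are hypothetical names, not taken from this patch:

	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct foo {
		int data;
		struct rcu_head rh;	/* Queued on a CPU's rcu_data structure. */
	};

	static void free_foo_cb(struct rcu_head *rhp)
	{
		/* Invoked only after a grace period has elapsed. */
		kfree(container_of(rhp, struct foo, rh));
	}

	static void retire_foo(struct foo *fp)
	{
		call_rcu(&fp->rh, free_foo_cb);	/* Register the RCU callback. */
	}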
diff --git a/Documentation/RCU/Design/Expedited-Grace-Periods/Expedited-Grace-Periods.html b/Documentation/RCU/Design/Expedited-Grace-Periods/Expedited-Grace-Periods.html
index 19e7a5fb6b73..57300db4b5ff 100644
--- a/Documentation/RCU/Design/Expedited-Grace-Periods/Expedited-Grace-Periods.html
+++ b/Documentation/RCU/Design/Expedited-Grace-Periods/Expedited-Grace-Periods.html
@@ -56,6 +56,7 @@ sections.
56 | RCU-preempt Expedited Grace Periods</a></h2> | 56 | RCU-preempt Expedited Grace Periods</a></h2> |
57 | 57 | ||
58 | <p> | 58 | <p> |
59 | <tt>CONFIG_PREEMPT=y</tt> kernels implement RCU-preempt. | ||
59 | The overall flow of the handling of a given CPU by an RCU-preempt | 60 | The overall flow of the handling of a given CPU by an RCU-preempt |
60 | expedited grace period is shown in the following diagram: | 61 | expedited grace period is shown in the following diagram: |
61 | 62 | ||
@@ -139,6 +140,7 @@ or offline, among other things.
139 | RCU-sched Expedited Grace Periods</a></h2> | 140 | RCU-sched Expedited Grace Periods</a></h2> |
140 | 141 | ||
141 | <p> | 142 | <p> |
143 | <tt>CONFIG_PREEMPT=n</tt> kernels implement RCU-sched. | ||
142 | The overall flow of the handling of a given CPU by an RCU-sched | 144 | The overall flow of the handling of a given CPU by an RCU-sched |
143 | expedited grace period is shown in the following diagram: | 145 | expedited grace period is shown in the following diagram: |
144 | 146 | ||
@@ -146,7 +148,7 @@ expedited grace period is shown in the following diagram:
146 | 148 | ||
147 | <p> | 149 | <p> |
148 | As with RCU-preempt, RCU-sched's | 150 | As with RCU-preempt, RCU-sched's |
149 | <tt>synchronize_sched_expedited()</tt> ignores offline and | 151 | <tt>synchronize_rcu_expedited()</tt> ignores offline and |
150 | idle CPUs, again because they are in remotely detectable | 152 | idle CPUs, again because they are in remotely detectable |
151 | quiescent states. | 153 | quiescent states. |
152 | However, because the | 154 | However, because the |
diff --git a/Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.html b/Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.html
index 8d21af02b1f0..c64f8d26609f 100644
--- a/Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.html
+++ b/Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.html
@@ -34,12 +34,11 @@ Similarly, any code that happens before the beginning of a given RCU grace
34 | period is guaranteed to see the effects of all accesses following the end | 34 | period is guaranteed to see the effects of all accesses following the end |
35 | of that grace period that are within RCU read-side critical sections. | 35 | of that grace period that are within RCU read-side critical sections. |
36 | 36 | ||
37 | <p>This guarantee is particularly pervasive for <tt>synchronize_sched()</tt>, | 37 | <p>Note well that RCU-sched read-side critical sections include any region |
38 | for which RCU-sched read-side critical sections include any region | ||
39 | of code for which preemption is disabled. | 38 | of code for which preemption is disabled. |
40 | Given that each individual machine instruction can be thought of as | 39 | Given that each individual machine instruction can be thought of as |
41 | an extremely small region of preemption-disabled code, one can think of | 40 | an extremely small region of preemption-disabled code, one can think of |
42 | <tt>synchronize_sched()</tt> as <tt>smp_mb()</tt> on steroids. | 41 | <tt>synchronize_rcu()</tt> as <tt>smp_mb()</tt> on steroids. |
43 | 42 | ||
44 | <p>RCU updaters use this guarantee by splitting their updates into | 43 | <p>RCU updaters use this guarantee by splitting their updates into |
45 | two phases, one of which is executed before the grace period and | 44 | two phases, one of which is executed before the grace period and |
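A minimal sketch of that two-phase split, using a hypothetical RCU-protected list element: phase one unlinks the element so new readers cannot find it, the grace period waits out pre-existing readers, and phase two reclaims:

	#include <linux/rculist.h>
	#include <linux/slab.h>

	struct foo {
		struct list_head list;
		int data;
	};

	/* Caller holds the update-side lock protecting the list. */
	static void remove_foo(struct foo *fp)
	{
		list_del_rcu(&fp->list);	/* Phase one: hide from new readers. */
		synchronize_rcu();		/* Wait for pre-existing readers. */
		kfree(fp);			/* Phase two: reclaim. */
	}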
diff --git a/Documentation/RCU/NMI-RCU.txt b/Documentation/RCU/NMI-RCU.txt
index 687777f83b23..881353fd5bff 100644
--- a/Documentation/RCU/NMI-RCU.txt
+++ b/Documentation/RCU/NMI-RCU.txt
@@ -81,18 +81,19 @@ currently executing on some other CPU. We therefore cannot free
81 | up any data structures used by the old NMI handler until execution | 81 | up any data structures used by the old NMI handler until execution |
82 | of it completes on all other CPUs. | 82 | of it completes on all other CPUs. |
83 | 83 | ||
84 | One way to accomplish this is via synchronize_sched(), perhaps as | 84 | One way to accomplish this is via synchronize_rcu(), perhaps as |
85 | follows: | 85 | follows: |
86 | 86 | ||
87 | unset_nmi_callback(); | 87 | unset_nmi_callback(); |
88 | synchronize_sched(); | 88 | synchronize_rcu(); |
89 | kfree(my_nmi_data); | 89 | kfree(my_nmi_data); |
90 | 90 | ||
91 | This works because synchronize_sched() blocks until all CPUs complete | 91 | This works because (as of v4.20) synchronize_rcu() blocks until all |
92 | any preemption-disabled segments of code that they were executing. | 92 | CPUs complete any preemption-disabled segments of code that they were |
93 | Since NMI handlers disable preemption, synchronize_sched() is guaranteed | 93 | executing. |
94 | Since NMI handlers disable preemption, synchronize_rcu() is guaranteed | ||
94 | not to return until all ongoing NMI handlers exit. It is therefore safe | 95 | not to return until all ongoing NMI handlers exit. It is therefore safe |
95 | to free up the handler's data as soon as synchronize_sched() returns. | 96 | to free up the handler's data as soon as synchronize_rcu() returns. |
96 | 97 | ||
97 | Important note: for this to work, the architecture in question must | 98 | Important note: for this to work, the architecture in question must |
98 | invoke nmi_enter() and nmi_exit() on NMI entry and exit, respectively. | 99 | invoke nmi_enter() and nmi_exit() on NMI entry and exit, respectively. |
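Filling in the surrounding pieces, the complete replacement sequence might look like this sketch, with my_nmi_data standing in for the hypothetical handler state used in the example above:

	static struct my_nmi_data *my_nmi_data;	/* Hypothetical handler state. */

	void teardown_my_nmi_handler(void)
	{
		unset_nmi_callback();	/* No new invocations of the handler... */
		synchronize_rcu();	/* ...and all in-flight handlers have exited. */
		kfree(my_nmi_data);	/* So nothing can still reference this data. */
		my_nmi_data = NULL;
	}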
diff --git a/Documentation/RCU/UP.txt b/Documentation/RCU/UP.txt
index 90ec5341ee98..53bde717017b 100644
--- a/Documentation/RCU/UP.txt
+++ b/Documentation/RCU/UP.txt
@@ -86,10 +86,8 @@ even on a UP system. So do not do it! Even on a UP system, the RCU
86 | infrastructure -must- respect grace periods, and -must- invoke callbacks | 86 | infrastructure -must- respect grace periods, and -must- invoke callbacks |
87 | from a known environment in which no locks are held. | 87 | from a known environment in which no locks are held. |
88 | 88 | ||
89 | It -is- safe for synchronize_sched() and synchronize_rcu_bh() to return | 89 | Note that it -is- safe for synchronize_rcu() to return immediately on |
90 | immediately on an UP system. It is also safe for synchronize_rcu() | 90 | UP systems, including !PREEMPT SMP builds running on UP systems. |
91 | to return immediately on UP systems, except when running preemptable | ||
92 | RCU. | ||
93 | 91 | ||
94 | Quick Quiz #3: Why can't synchronize_rcu() return immediately on | 92 | Quick Quiz #3: Why can't synchronize_rcu() return immediately on |
95 | UP systems running preemptable RCU? | 93 | UP systems running preemptable RCU? |
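To see the hazard behind the quiz in code form, consider this hypothetical reader, with gp an RCU-protected pointer; under preemptable RCU it can be preempted mid-critical-section, so even a UP synchronize_rcu() must wait:

	rcu_read_lock();
	p = rcu_dereference(gp);
	do_something_with(p);	/* Preemption here leaves the reader live, */
	rcu_read_unlock();	/* so synchronize_rcu() cannot return early. */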
diff --git a/Documentation/RCU/checklist.txt b/Documentation/RCU/checklist.txt
index 6f469864d9f5..e98ff261a438 100644
--- a/Documentation/RCU/checklist.txt
+++ b/Documentation/RCU/checklist.txt
@@ -182,16 +182,13 @@ over a rather long period of time, but improvements are always welcome!
182 | when publicizing a pointer to a structure that can | 182 | when publicizing a pointer to a structure that can |
183 | be traversed by an RCU read-side critical section. | 183 | be traversed by an RCU read-side critical section. |
184 | 184 | ||
185 | 5. If call_rcu(), or a related primitive such as call_rcu_bh(), | 185 | 5. If call_rcu() or call_srcu() is used, the callback function will |
186 | call_rcu_sched(), or call_srcu() is used, the callback function | 186 | be called from softirq context. In particular, it cannot block. |
187 | will be called from softirq context. In particular, it cannot | ||
188 | block. | ||
189 | 187 | ||
190 | 6. Since synchronize_rcu() can block, it cannot be called from | 188 | 6. Since synchronize_rcu() can block, it cannot be called |
191 | any sort of irq context. The same rule applies for | 189 | from any sort of irq context. The same rule applies |
192 | synchronize_rcu_bh(), synchronize_sched(), synchronize_srcu(), | 190 | for synchronize_srcu(), synchronize_rcu_expedited(), and |
193 | synchronize_rcu_expedited(), synchronize_rcu_bh_expedited(), | 191 | synchronize_srcu_expedited(). |
194 | synchronize_sched_expedite(), and synchronize_srcu_expedited(). | ||
195 | 192 | ||
196 | The expedited forms of these primitives have the same semantics | 193 | The expedited forms of these primitives have the same semantics |
197 | as the non-expedited forms, but expediting is both expensive and | 194 | as the non-expedited forms, but expediting is both expensive and |
@@ -212,20 +209,20 @@ over a rather long period of time, but improvements are always welcome!
212 | of the system, especially to real-time workloads running on | 209 | of the system, especially to real-time workloads running on |
213 | the rest of the system. | 210 | the rest of the system. |
214 | 211 | ||
215 | 7. If the updater uses call_rcu() or synchronize_rcu(), then the | 212 | 7. As of v4.20, a given kernel implements only one RCU flavor, |
216 | corresponding readers must use rcu_read_lock() and | 213 | which is RCU-sched for PREEMPT=n and RCU-preempt for PREEMPT=y. |
217 | rcu_read_unlock(). If the updater uses call_rcu_bh() or | 214 | If the updater uses call_rcu() or synchronize_rcu(), |
218 | synchronize_rcu_bh(), then the corresponding readers must | 215 | then the corresponding readers may use rcu_read_lock() and |
219 | use rcu_read_lock_bh() and rcu_read_unlock_bh(). If the | 216 | rcu_read_unlock(), rcu_read_lock_bh() and rcu_read_unlock_bh(), |
220 | updater uses call_rcu_sched() or synchronize_sched(), then | 217 | or any pair of primitives that disables and re-enables preemption, |
221 | the corresponding readers must disable preemption, possibly | 218 | for example, rcu_read_lock_sched() and rcu_read_unlock_sched(). |
222 | by calling rcu_read_lock_sched() and rcu_read_unlock_sched(). | 219 | If the updater uses synchronize_srcu() or call_srcu(), |
223 | If the updater uses synchronize_srcu() or call_srcu(), then | 220 | then the corresponding readers must use srcu_read_lock() and |
224 | the corresponding readers must use srcu_read_lock() and | ||
225 | srcu_read_unlock(), and with the same srcu_struct. The rules for | 221 | srcu_read_unlock(), and with the same srcu_struct. The rules for |
226 | the expedited primitives are the same as for their non-expedited | 222 | the expedited primitives are the same as for their non-expedited |
227 | counterparts. Mixing things up will result in confusion and | 223 | counterparts. Mixing things up will result in confusion and |
228 | broken kernels. | 224 | broken kernels, and has even resulted in an exploitable security |
225 | issue. | ||
229 | 226 | ||
230 | One exception to this rule: rcu_read_lock() and rcu_read_unlock() | 227 | One exception to this rule: rcu_read_lock() and rcu_read_unlock() |
231 | may be substituted for rcu_read_lock_bh() and rcu_read_unlock_bh() | 228 | may be substituted for rcu_read_lock_bh() and rcu_read_unlock_bh() |
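As a sketch of item 7's matching rule, consider two hypothetical pointers, each protected by exactly one flavor; the key points are that any preemption-disabling primitive now marks an RCU reader, while SRCU readers must name the same srcu_struct as their updaters:

	#include <linux/rcupdate.h>
	#include <linux/srcu.h>

	struct foo { int a; };
	static struct foo __rcu *gp;	/* Updated via synchronize_rcu()/call_rcu(). */
	static struct foo __rcu *sp;	/* Updated via synchronize_srcu(&my_srcu). */
	DEFINE_STATIC_SRCU(my_srcu);

	static int rcu_reader(void)
	{
		int a;

		/* rcu_read_lock_bh() or preempt_disable() would equally
		 * mark this as an RCU read-side critical section. */
		rcu_read_lock();
		a = rcu_dereference(gp)->a;	/* Assumes gp has been published. */
		rcu_read_unlock();
		return a;
	}

	static int srcu_reader(void)
	{
		int a, idx;

		idx = srcu_read_lock(&my_srcu);		/* Same srcu_struct as */
		a = srcu_dereference(sp, &my_srcu)->a;	/* the updater uses. */
		srcu_read_unlock(&my_srcu, idx);
		return a;
	}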
@@ -288,8 +285,7 @@ over a rather long period of time, but improvements are always welcome!
288 | d. Periodically invoke synchronize_rcu(), permitting a limited | 285 | d. Periodically invoke synchronize_rcu(), permitting a limited |
289 | number of updates per grace period. | 286 | number of updates per grace period. |
290 | 287 | ||
291 | The same cautions apply to call_rcu_bh(), call_rcu_sched(), | 288 | The same cautions apply to call_srcu() and kfree_rcu(). |
292 | call_srcu(), and kfree_rcu(). | ||
293 | 289 | ||
294 | Note that although these primitives do take action to avoid memory | 290 | Note that although these primitives do take action to avoid memory |
295 | exhaustion when any given CPU has too many callbacks, a determined | 291 | exhaustion when any given CPU has too many callbacks, a determined |
@@ -322,7 +318,7 @@ over a rather long period of time, but improvements are always welcome!
322 | 318 | ||
323 | 11. Any lock acquired by an RCU callback must be acquired elsewhere | 319 | 11. Any lock acquired by an RCU callback must be acquired elsewhere |
324 | with softirq disabled, e.g., via spin_lock_irqsave(), | 320 | with softirq disabled, e.g., via spin_lock_irqsave(), |
325 | spin_lock_bh(), etc. Failing to disable irq on a given | 321 | spin_lock_bh(), etc. Failing to disable softirq on a given |
326 | acquisition of that lock will result in deadlock as soon as | 322 | acquisition of that lock will result in deadlock as soon as |
327 | the RCU softirq handler happens to run your RCU callback while | 323 | the RCU softirq handler happens to run your RCU callback while |
328 | interrupting that acquisition's critical section. | 324 | interrupting that acquisition's critical section. |
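Item 11's deadlock can be sketched as follows, with a hypothetical my_lock: the process-level acquisition must disable softirq, because the RCU callback runs from softirq and can interrupt that acquisition on the same CPU:

	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(my_lock);

	static void my_rcu_cb(struct rcu_head *rhp)	/* Runs in softirq. */
	{
		spin_lock(&my_lock);	/* Safe: softirq cannot interrupt itself. */
		/* ... */
		spin_unlock(&my_lock);
	}

	static void process_level_user(void)
	{
		spin_lock_bh(&my_lock);	/* Softirq disabled, else my_rcu_cb() */
		/* ... */		/* could interrupt right here: deadlock. */
		spin_unlock_bh(&my_lock);
	}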
@@ -335,13 +331,16 @@ over a rather long period of time, but improvements are always welcome!
335 | must use whatever locking or other synchronization is required | 331 | must use whatever locking or other synchronization is required |
336 | to safely access and/or modify that data structure. | 332 | to safely access and/or modify that data structure. |
337 | 333 | ||
338 | RCU callbacks are -usually- executed on the same CPU that executed | 334 | Do not assume that RCU callbacks will be executed on the same |
339 | the corresponding call_rcu(), call_rcu_bh(), or call_rcu_sched(), | 335 | CPU that executed the corresponding call_rcu() or call_srcu(). |
340 | but are by -no- means guaranteed to be. For example, if a given | 336 | For example, if a given CPU goes offline while having an RCU |
341 | CPU goes offline while having an RCU callback pending, then that | 337 | callback pending, then that RCU callback will execute on some |
342 | RCU callback will execute on some surviving CPU. (If this was | 338 | surviving CPU. (If this was not the case, a self-spawning RCU |
343 | not the case, a self-spawning RCU callback would prevent the | 339 | callback would prevent the victim CPU from ever going offline.) |
344 | victim CPU from ever going offline.) | 340 | Furthermore, CPUs designated by rcu_nocbs= might well -always- |
341 | have their RCU callbacks executed on some other CPUs. In fact, | ||
342 | for some real-time workloads, this is the whole point of using | ||
343 | the rcu_nocbs= kernel boot parameter. | ||
345 | 344 | ||
346 | 13. Unlike other forms of RCU, it -is- permissible to block in an | 345 | 13. Unlike other forms of RCU, it -is- permissible to block in an |
347 | SRCU read-side critical section (demarked by srcu_read_lock() | 346 | SRCU read-side critical section (demarked by srcu_read_lock() |
@@ -381,11 +380,11 @@ over a rather long period of time, but improvements are always welcome!
381 | 380 | ||
382 | SRCU's expedited primitive (synchronize_srcu_expedited()) | 381 | SRCU's expedited primitive (synchronize_srcu_expedited()) |
383 | never sends IPIs to other CPUs, so it is easier on | 382 | never sends IPIs to other CPUs, so it is easier on |
384 | real-time workloads than is synchronize_rcu_expedited(), | 383 | real-time workloads than is synchronize_rcu_expedited(). |
385 | synchronize_rcu_bh_expedited() or synchronize_sched_expedited(). | ||
386 | 384 | ||
387 | Note that rcu_dereference() and rcu_assign_pointer() relate to | 385 | Note that rcu_assign_pointer() relates to SRCU just as it does to |
388 | SRCU just as they do to other forms of RCU. | 386 | other forms of RCU, but instead of rcu_dereference() you should |
387 | use srcu_dereference() in order to avoid lockdep splats. | ||
389 | 388 | ||
390 | 14. The whole point of call_rcu(), synchronize_rcu(), and friends | 389 | 14. The whole point of call_rcu(), synchronize_rcu(), and friends |
391 | is to wait until all pre-existing readers have finished before | 390 | is to wait until all pre-existing readers have finished before |
@@ -405,6 +404,9 @@ over a rather long period of time, but improvements are always welcome!
405 | read-side critical sections. It is the responsibility of the | 404 | read-side critical sections. It is the responsibility of the |
406 | RCU update-side primitives to deal with this. | 405 | RCU update-side primitives to deal with this. |
407 | 406 | ||
407 | For SRCU readers, you can use smp_mb__after_srcu_read_unlock() | ||
408 | immediately after an srcu_read_unlock() to get a full barrier. | ||
409 | |||
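A sketch of that SRCU sequence, with ss a hypothetical srcu_struct:

	idx = srcu_read_lock(&ss);
	/* ... SRCU read-side critical section ... */
	srcu_read_unlock(&ss, idx);
	smp_mb__after_srcu_read_unlock();	/* Promotes the unlock to a full barrier. */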
408 | 16. Use CONFIG_PROVE_LOCKING, CONFIG_DEBUG_OBJECTS_RCU_HEAD, and the | 410 | 16. Use CONFIG_PROVE_LOCKING, CONFIG_DEBUG_OBJECTS_RCU_HEAD, and the |
409 | __rcu sparse checks to validate your RCU code. These can help | 411 | __rcu sparse checks to validate your RCU code. These can help |
410 | find problems as follows: | 412 | find problems as follows: |
@@ -428,22 +430,19 @@ over a rather long period of time, but improvements are always welcome!
428 | These debugging aids can help you find problems that are | 430 | These debugging aids can help you find problems that are |
429 | otherwise extremely difficult to spot. | 431 | otherwise extremely difficult to spot. |
430 | 432 | ||
431 | 17. If you register a callback using call_rcu(), call_rcu_bh(), | 433 | 17. If you register a callback using call_rcu() or call_srcu(), and |
432 | call_rcu_sched(), or call_srcu(), and pass in a function defined | 434 | pass in a function defined within a loadable module, then it is |
433 | within a loadable module, then it in necessary to wait for | 435 | necessary to wait for all pending callbacks to be invoked after |
434 | all pending callbacks to be invoked after the last invocation | 436 | the last invocation and before unloading that module. Note that |
435 | and before unloading that module. Note that it is absolutely | 437 | it is absolutely -not- sufficient to wait for a grace period! |
436 | -not- sufficient to wait for a grace period! The current (say) | 438 | The current (say) synchronize_rcu() implementation is -not- |
437 | synchronize_rcu() implementation waits only for all previous | 439 | guaranteed to wait for callbacks registered on other CPUs. |
438 | callbacks registered on the CPU that synchronize_rcu() is running | 440 | Or even on the current CPU if that CPU recently went offline |
439 | on, but it is -not- guaranteed to wait for callbacks registered | 441 | and came back online. |
440 | on other CPUs. | ||
441 | 442 | ||
442 | You instead need to use one of the barrier functions: | 443 | You instead need to use one of the barrier functions: |
443 | 444 | ||
444 | o call_rcu() -> rcu_barrier() | 445 | o call_rcu() -> rcu_barrier() |
445 | o call_rcu_bh() -> rcu_barrier() | ||
446 | o call_rcu_sched() -> rcu_barrier() | ||
447 | o call_srcu() -> srcu_barrier() | 446 | o call_srcu() -> srcu_barrier() |
448 | 447 | ||
449 | However, these barrier functions are absolutely -not- guaranteed | 448 | However, these barrier functions are absolutely -not- guaranteed |
diff --git a/Documentation/RCU/rcu.txt b/Documentation/RCU/rcu.txt
index 721b3e426515..c818cf65c5a9 100644
--- a/Documentation/RCU/rcu.txt
+++ b/Documentation/RCU/rcu.txt
@@ -52,10 +52,10 @@ o If I am running on a uniprocessor kernel, which can only do one
52 | o How can I see where RCU is currently used in the Linux kernel? | 52 | o How can I see where RCU is currently used in the Linux kernel? |
53 | 53 | ||
54 | Search for "rcu_read_lock", "rcu_read_unlock", "call_rcu", | 54 | Search for "rcu_read_lock", "rcu_read_unlock", "call_rcu", |
55 | "rcu_read_lock_bh", "rcu_read_unlock_bh", "call_rcu_bh", | 55 | "rcu_read_lock_bh", "rcu_read_unlock_bh", "srcu_read_lock", |
56 | "srcu_read_lock", "srcu_read_unlock", "synchronize_rcu", | 56 | "srcu_read_unlock", "synchronize_rcu", "synchronize_net", |
57 | "synchronize_net", "synchronize_srcu", and the other RCU | 57 | "synchronize_srcu", and the other RCU primitives. Or grab one |
58 | primitives. Or grab one of the cscope databases from: | 58 | of the cscope databases from: |
59 | 59 | ||
60 | http://www.rdrop.com/users/paulmck/RCU/linuxusage/rculocktab.html | 60 | http://www.rdrop.com/users/paulmck/RCU/linuxusage/rculocktab.html |
61 | 61 | ||
diff --git a/Documentation/RCU/rcu_dereference.txt b/Documentation/RCU/rcu_dereference.txt
index ab96227bad42..bf699e8cfc75 100644
--- a/Documentation/RCU/rcu_dereference.txt
+++ b/Documentation/RCU/rcu_dereference.txt
@@ -351,3 +351,106 @@ garbage values.
351 | 351 | ||
352 | In short, rcu_dereference() is -not- optional when you are going to | 352 | In short, rcu_dereference() is -not- optional when you are going to |
353 | dereference the resulting pointer. | 353 | dereference the resulting pointer. |
354 | |||
355 | |||
356 | WHICH MEMBER OF THE rcu_dereference() FAMILY SHOULD YOU USE? | ||
357 | |||
358 | First, please avoid using rcu_dereference_raw() and also please avoid | ||
359 | using rcu_dereference_check() and rcu_dereference_protected() with a | ||
360 | second argument with a constant value of 1 (or true, for that matter). | ||
361 | With that caution out of the way, here is some guidance for which | ||
362 | member of the rcu_dereference() family to use in various situations: | ||
363 | |||
364 | 1. If the access needs to be within an RCU read-side critical | ||
365 | section, use rcu_dereference(). With the new consolidated | ||
366 | RCU flavors, an RCU read-side critical section is entered | ||
367 | using rcu_read_lock(), anything that disables bottom halves, | ||
368 | anything that disables interrupts, or anything that disables | ||
369 | preemption. | ||
370 | |||
371 | 2. If the access might be within an RCU read-side critical section | ||
372 | on the one hand, or protected by (say) my_lock on the other, | ||
373 | use rcu_dereference_check(), for example: | ||
374 | |||
375 | p1 = rcu_dereference_check(p->rcu_protected_pointer, | ||
376 | lockdep_is_held(&my_lock)); | ||
377 | |||
378 | |||
379 | 3. If the access might be within an RCU read-side critical section | ||
380 | on the one hand, or protected by either my_lock or your_lock on | ||
381 | the other, again use rcu_dereference_check(), for example: | ||
382 | |||
383 | p1 = rcu_dereference_check(p->rcu_protected_pointer, | ||
384 | lockdep_is_held(&my_lock) || | ||
385 | lockdep_is_held(&your_lock)); | ||
386 | |||
387 | 4. If the access is on the update side, so that it is always protected | ||
388 | by my_lock, use rcu_dereference_protected(): | ||
389 | |||
390 | p1 = rcu_dereference_protected(p->rcu_protected_pointer, | ||
391 | lockdep_is_held(&my_lock)); | ||
392 | |||
393 | This can be extended to handle multiple locks as in #3 above, | ||
394 | and both can be extended to check other conditions as well. | ||
395 | |||
396 | 5. If the protection is supplied by the caller, and is thus unknown | ||
397 | to this code, that is the rare case when rcu_dereference_raw() | ||
398 | is appropriate. In addition, rcu_dereference_raw() might be | ||
399 | appropriate when the lockdep expression would be excessively | ||
400 | complex, except that a better approach in that case might be to | ||
401 | take a long hard look at your synchronization design. Still, | ||
402 | there are data-locking cases where any one of a very large number | ||
403 | of locks or reference counters suffices to protect the pointer, | ||
404 | so rcu_dereference_raw() does have its place. | ||
405 | |||
406 | However, its place is probably quite a bit smaller than one | ||
407 | might expect given the number of uses in the current kernel. | ||
408 | Ditto for its synonym, rcu_dereference_check( ... , 1), and | ||
409 | its close relative, rcu_dereference_protected(... , 1). | ||
410 | |||
411 | |||
412 | SPARSE CHECKING OF RCU-PROTECTED POINTERS | ||
413 | |||
414 | The sparse static-analysis tool checks for direct access to RCU-protected | ||
415 | pointers, which can result in "interesting" bugs due to compiler | ||
416 | optimizations involving invented loads and perhaps also load tearing. | ||
417 | For example, suppose someone mistakenly does something like this: | ||
418 | |||
419 | p = q->rcu_protected_pointer; | ||
420 | do_something_with(p->a); | ||
421 | do_something_else_with(p->b); | ||
422 | |||
423 | If register pressure is high, the compiler might optimize "p" out | ||
424 | of existence, transforming the code to something like this: | ||
425 | |||
426 | do_something_with(q->rcu_protected_pointer->a); | ||
427 | do_something_else_with(q->rcu_protected_pointer->b); | ||
428 | |||
429 | This could fatally disappoint your code if q->rcu_protected_pointer | ||
430 | changed in the meantime. Nor is this a theoretical problem: Exactly | ||
431 | this sort of bug cost Paul E. McKenney (and several of his innocent | ||
432 | colleagues) a three-day weekend back in the early 1990s. | ||
433 | |||
434 | Load tearing could of course result in dereferencing a mashup of a pair | ||
435 | of pointers, which also might fatally disappoint your code. | ||
436 | |||
437 | These problems could have been avoided simply by making the code instead | ||
438 | read as follows: | ||
439 | |||
440 | p = rcu_dereference(q->rcu_protected_pointer); | ||
441 | do_something_with(p->a); | ||
442 | do_something_else_with(p->b); | ||
443 | |||
444 | Unfortunately, these sorts of bugs can be extremely hard to spot during | ||
445 | review. This is where the sparse tool comes into play, along with the | ||
446 | "__rcu" marker. If you mark a pointer declaration, whether in a structure | ||
447 | or as a formal parameter, with "__rcu", sparse will complain if | ||
448 | this pointer is accessed directly. It will also cause sparse to complain | ||
449 | if a pointer not marked with "__rcu" is accessed using rcu_dereference() | ||
450 | and friends. For example, ->rcu_protected_pointer might be declared as | ||
451 | follows: | ||
452 | |||
453 | struct foo __rcu *rcu_protected_pointer; | ||
454 | |||
455 | Use of "__rcu" is opt-in. If you choose not to use it, then you should | ||
456 | ignore the sparse warnings. | ||
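Putting these pieces together, here is a sketch of an annotated pointer with sparse-clean reader and updater accesses; the structure names and the do_something_with() consumers are the hypothetical ones used above:

	struct bar {
		struct foo __rcu *rcu_protected_pointer;	/* Checked by sparse. */
	};

	static void reader(struct bar *q)
	{
		struct foo *p;

		rcu_read_lock();
		p = rcu_dereference(q->rcu_protected_pointer);
		do_something_with(p->a);
		do_something_else_with(p->b);
		rcu_read_unlock();
	}

	static void updater(struct bar *q, struct foo *newp)
	{
		rcu_assign_pointer(q->rcu_protected_pointer, newp);	/* Sanctioned store. */
	}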
diff --git a/Documentation/RCU/rcubarrier.txt b/Documentation/RCU/rcubarrier.txt
index 5d7759071a3e..a2782df69732 100644
--- a/Documentation/RCU/rcubarrier.txt
+++ b/Documentation/RCU/rcubarrier.txt
@@ -83,16 +83,15 @@ Pseudo-code using rcu_barrier() is as follows:
83 | 2. Execute rcu_barrier(). | 83 | 2. Execute rcu_barrier(). |
84 | 3. Allow the module to be unloaded. | 84 | 3. Allow the module to be unloaded. |
85 | 85 | ||
86 | There are also rcu_barrier_bh(), rcu_barrier_sched(), and srcu_barrier() | 86 | There is also an srcu_barrier() function for SRCU, and you of course |
87 | functions for the other flavors of RCU, and you of course must match | 87 | must match the flavor of rcu_barrier() with that of call_rcu(). If your |
88 | the flavor of rcu_barrier() with that of call_rcu(). If your module | 88 | module uses multiple flavors of call_rcu(), then it must also use multiple |
89 | uses multiple flavors of call_rcu(), then it must also use multiple | ||
90 | flavors of rcu_barrier() when unloading that module. For example, if | 89 | flavors of rcu_barrier() when unloading that module. For example, if |
91 | it uses call_rcu_bh(), call_srcu() on srcu_struct_1, and call_srcu() on | 90 | it uses call_rcu(), call_srcu() on srcu_struct_1, and call_srcu() on |
92 | srcu_struct_2, then the following three lines of code will be required | 91 | srcu_struct_2, then the following three lines of code will be required |
93 | when unloading: | 92 | when unloading: |
94 | 93 | ||
95 | 1 rcu_barrier_bh(); | 94 | 1 rcu_barrier(); |
96 | 2 srcu_barrier(&srcu_struct_1); | 95 | 2 srcu_barrier(&srcu_struct_1); |
97 | 3 srcu_barrier(&srcu_struct_2); | 96 | 3 srcu_barrier(&srcu_struct_2); |
98 | 97 | ||
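In module form, that unload-time sequence might be sketched as follows, reusing the hypothetical srcu_struct names from the example above:

	static void __exit my_module_exit(void)
	{
		/* First prevent new call_rcu()/call_srcu() invocations, for
		 * example by cancelling timers and stopping kthreads. */
		rcu_barrier();			/* Wait for call_rcu() callbacks. */
		srcu_barrier(&srcu_struct_1);	/* Wait for call_srcu() callbacks, */
		srcu_barrier(&srcu_struct_2);	/* one barrier per srcu_struct. */
	}
	module_exit(my_module_exit);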
@@ -185,12 +184,12 @@ module invokes call_rcu() from timers, you will need to first cancel all
185 | the timers, and only then invoke rcu_barrier() to wait for any remaining | 184 | the timers, and only then invoke rcu_barrier() to wait for any remaining |
186 | RCU callbacks to complete. | 185 | RCU callbacks to complete. |
187 | 186 | ||
188 | Of course, if you module uses call_rcu_bh(), you will need to invoke | 187 | Of course, if your module uses call_rcu(), you will need to invoke |
189 | rcu_barrier_bh() before unloading. Similarly, if your module uses | 188 | rcu_barrier() before unloading. Similarly, if your module uses |
190 | call_rcu_sched(), you will need to invoke rcu_barrier_sched() before | 189 | call_srcu(), you will need to invoke srcu_barrier() before unloading, |
191 | unloading. If your module uses call_rcu(), call_rcu_bh(), -and- | 190 | and on the same srcu_struct structure. If your module uses call_rcu() |
192 | call_rcu_sched(), then you will need to invoke each of rcu_barrier(), | 191 | -and- call_srcu(), then you will need to invoke rcu_barrier() -and- |
193 | rcu_barrier_bh(), and rcu_barrier_sched(). | 192 | srcu_barrier(). |
194 | 193 | ||
195 | 194 | ||
196 | Implementing rcu_barrier() | 195 | Implementing rcu_barrier() |
@@ -223,8 +222,8 @@ shown below. Note that the final "1" in on_each_cpu()'s argument list
223 | ensures that all the calls to rcu_barrier_func() will have completed | 222 | ensures that all the calls to rcu_barrier_func() will have completed |
224 | before on_each_cpu() returns. Line 9 then waits for the completion. | 223 | before on_each_cpu() returns. Line 9 then waits for the completion. |
225 | 224 | ||
226 | This code was rewritten in 2008 to support rcu_barrier_bh() and | 225 | This code was rewritten in 2008 and several times thereafter, but this |
227 | rcu_barrier_sched() in addition to the original rcu_barrier(). | 226 | still gives the general idea. |
228 | 227 | ||
229 | The rcu_barrier_func() runs on each CPU, where it invokes call_rcu() | 228 | The rcu_barrier_func() runs on each CPU, where it invokes call_rcu() |
230 | to post an RCU callback, as follows: | 229 | to post an RCU callback, as follows: |
diff --git a/Documentation/RCU/whatisRCU.txt b/Documentation/RCU/whatisRCU.txt
index 1ace20815bb1..981651a8b65d 100644
--- a/Documentation/RCU/whatisRCU.txt
+++ b/Documentation/RCU/whatisRCU.txt
@@ -310,7 +310,7 @@ reader, updater, and reclaimer.
310 | 310 | ||
311 | 311 | ||
312 | rcu_assign_pointer() | 312 | rcu_assign_pointer() |
313 | +--------+ | 313 | +--------+ |
314 | +---------------------->| reader |---------+ | 314 | +---------------------->| reader |---------+ |
315 | | +--------+ | | 315 | | +--------+ | |
316 | | | | | 316 | | | | |
@@ -318,12 +318,12 @@ reader, updater, and reclaimer.
318 | | | | rcu_read_lock() | 318 | | | | rcu_read_lock() |
319 | | | | rcu_read_unlock() | 319 | | | | rcu_read_unlock() |
320 | | rcu_dereference() | | | 320 | | rcu_dereference() | | |
321 | +---------+ | | | 321 | +---------+ | | |
322 | | updater |<---------------------+ | | 322 | | updater |<----------------+ | |
323 | +---------+ V | 323 | +---------+ V |
324 | | +-----------+ | 324 | | +-----------+ |
325 | +----------------------------------->| reclaimer | | 325 | +----------------------------------->| reclaimer | |
326 | +-----------+ | 326 | +-----------+ |
327 | Defer: | 327 | Defer: |
328 | synchronize_rcu() & call_rcu() | 328 | synchronize_rcu() & call_rcu() |
329 | 329 | ||
diff --git a/Documentation/kprobes.txt b/Documentation/kprobes.txt
index 10f4499e677c..ee60e519438a 100644
--- a/Documentation/kprobes.txt
+++ b/Documentation/kprobes.txt
@@ -243,10 +243,10 @@ Optimization
243 | ^^^^^^^^^^^^ | 243 | ^^^^^^^^^^^^ |
244 | 244 | ||
245 | The Kprobe-optimizer doesn't insert the jump instruction immediately; | 245 | The Kprobe-optimizer doesn't insert the jump instruction immediately; |
246 | rather, it calls synchronize_sched() for safety first, because it's | 246 | rather, it calls synchronize_rcu() for safety first, because it's |
247 | possible for a CPU to be interrupted in the middle of executing the | 247 | possible for a CPU to be interrupted in the middle of executing the |
248 | optimized region [3]_. As you know, synchronize_sched() can ensure | 248 | optimized region [3]_. As you know, synchronize_rcu() can ensure |
249 | that all interruptions that were active when synchronize_sched() | 249 | that all interruptions that were active when synchronize_rcu() |
250 | was called are done, but only if CONFIG_PREEMPT=n. So, this version | 250 | was called are done, but only if CONFIG_PREEMPT=n. So, this version |
251 | of kprobe optimization supports only kernels with CONFIG_PREEMPT=n [4]_. | 251 | of kprobe optimization supports only kernels with CONFIG_PREEMPT=n [4]_. |
252 | 252 | ||
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 470601980794..739c5b4830d7 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -388,7 +388,7 @@ static void nvme_free_ns_head(struct kref *ref)
388 | nvme_mpath_remove_disk(head); | 388 | nvme_mpath_remove_disk(head); |
389 | ida_simple_remove(&head->subsys->ns_ida, head->instance); | 389 | ida_simple_remove(&head->subsys->ns_ida, head->instance); |
390 | list_del_init(&head->entry); | 390 | list_del_init(&head->entry); |
391 | cleanup_srcu_struct_quiesced(&head->srcu); | 391 | cleanup_srcu_struct(&head->srcu); |
392 | nvme_put_subsystem(head->subsys); | 392 | nvme_put_subsystem(head->subsys); |
393 | kfree(head); | 393 | kfree(head); |
394 | } | 394 | } |
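With the consolidated API, cleanup_srcu_struct() itself flushes any outstanding grace-period work, so teardown paths like the one above reduce to a sketch of this shape, with my_obj a hypothetical container:

	struct my_obj {
		struct srcu_struct srcu;
	};

	static void my_obj_free(struct my_obj *obj)
	{
		srcu_barrier(&obj->srcu);	 /* Needed only if call_srcu() was used. */
		cleanup_srcu_struct(&obj->srcu); /* Flushes pending callbacks' work itself. */
		kfree(obj);
	}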
diff --git a/include/linux/srcu.h b/include/linux/srcu.h
index c495b2d51569..e432cc92c73d 100644
--- a/include/linux/srcu.h
+++ b/include/linux/srcu.h
@@ -56,45 +56,11 @@ struct srcu_struct { };
56 | 56 | ||
57 | void call_srcu(struct srcu_struct *ssp, struct rcu_head *head, | 57 | void call_srcu(struct srcu_struct *ssp, struct rcu_head *head, |
58 | void (*func)(struct rcu_head *head)); | 58 | void (*func)(struct rcu_head *head)); |
59 | void _cleanup_srcu_struct(struct srcu_struct *ssp, bool quiesced); | 59 | void cleanup_srcu_struct(struct srcu_struct *ssp); |
60 | int __srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp); | 60 | int __srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp); |
61 | void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases(ssp); | 61 | void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases(ssp); |
62 | void synchronize_srcu(struct srcu_struct *ssp); | 62 | void synchronize_srcu(struct srcu_struct *ssp); |
63 | 63 | ||
64 | /** | ||
65 | * cleanup_srcu_struct - deconstruct a sleep-RCU structure | ||
66 | * @ssp: structure to clean up. | ||
67 | * | ||
68 | * Must invoke this after you are finished using a given srcu_struct that | ||
69 | * was initialized via init_srcu_struct(), else you leak memory. | ||
70 | */ | ||
71 | static inline void cleanup_srcu_struct(struct srcu_struct *ssp) | ||
72 | { | ||
73 | _cleanup_srcu_struct(ssp, false); | ||
74 | } | ||
75 | |||
76 | /** | ||
77 | * cleanup_srcu_struct_quiesced - deconstruct a quiesced sleep-RCU structure | ||
78 | * @ssp: structure to clean up. | ||
79 | * | ||
80 | * Must invoke this after you are finished using a given srcu_struct that | ||
81 | * was initialized via init_srcu_struct(), else you leak memory. Also, | ||
82 | * all grace-period processing must have completed. | ||
83 | * | ||
84 | * "Completed" means that the last synchronize_srcu() and | ||
85 | * synchronize_srcu_expedited() calls must have returned before the call | ||
86 | * to cleanup_srcu_struct_quiesced(). It also means that the callback | ||
87 | * from the last call_srcu() must have been invoked before the call to | ||
88 | * cleanup_srcu_struct_quiesced(), but you can use srcu_barrier() to help | ||
89 | * with this last. Violating these rules will get you a WARN_ON() splat | ||
90 | * (with high probability, anyway), and will also cause the srcu_struct | ||
91 | * to be leaked. | ||
92 | */ | ||
93 | static inline void cleanup_srcu_struct_quiesced(struct srcu_struct *ssp) | ||
94 | { | ||
95 | _cleanup_srcu_struct(ssp, true); | ||
96 | } | ||
97 | |||
98 | #ifdef CONFIG_DEBUG_LOCK_ALLOC | 64 | #ifdef CONFIG_DEBUG_LOCK_ALLOC |
99 | 65 | ||
100 | /** | 66 | /** |
diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
index ad40a2617063..80a463d31a8d 100644
--- a/kernel/locking/locktorture.c
+++ b/kernel/locking/locktorture.c
@@ -829,7 +829,9 @@ static void lock_torture_cleanup(void)
829 | "End of test: SUCCESS"); | 829 | "End of test: SUCCESS"); |
830 | 830 | ||
831 | kfree(cxt.lwsa); | 831 | kfree(cxt.lwsa); |
832 | cxt.lwsa = NULL; | ||
832 | kfree(cxt.lrsa); | 833 | kfree(cxt.lrsa); |
834 | cxt.lrsa = NULL; | ||
833 | 835 | ||
834 | end: | 836 | end: |
835 | torture_cleanup_end(); | 837 | torture_cleanup_end(); |
diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h
index acee72c0b24b..4b58c907b4b7 100644
--- a/kernel/rcu/rcu.h
+++ b/kernel/rcu/rcu.h
@@ -233,6 +233,7 @@ static inline bool __rcu_reclaim(const char *rn, struct rcu_head *head)
233 | #ifdef CONFIG_RCU_STALL_COMMON | 233 | #ifdef CONFIG_RCU_STALL_COMMON |
234 | 234 | ||
235 | extern int rcu_cpu_stall_suppress; | 235 | extern int rcu_cpu_stall_suppress; |
236 | extern int rcu_cpu_stall_timeout; | ||
236 | int rcu_jiffies_till_stall_check(void); | 237 | int rcu_jiffies_till_stall_check(void); |
237 | 238 | ||
238 | #define rcu_ftrace_dump_stall_suppress() \ | 239 | #define rcu_ftrace_dump_stall_suppress() \ |
diff --git a/kernel/rcu/rcuperf.c b/kernel/rcu/rcuperf.c
index c29761152874..7a6890b23c5f 100644
--- a/kernel/rcu/rcuperf.c
+++ b/kernel/rcu/rcuperf.c
@@ -494,6 +494,10 @@ rcu_perf_cleanup(void)
494 | 494 | ||
495 | if (torture_cleanup_begin()) | 495 | if (torture_cleanup_begin()) |
496 | return; | 496 | return; |
497 | if (!cur_ops) { | ||
498 | torture_cleanup_end(); | ||
499 | return; | ||
500 | } | ||
497 | 501 | ||
498 | if (reader_tasks) { | 502 | if (reader_tasks) { |
499 | for (i = 0; i < nrealreaders; i++) | 503 | for (i = 0; i < nrealreaders; i++) |
@@ -614,6 +618,7 @@ rcu_perf_init(void)
614 | pr_cont("\n"); | 618 | pr_cont("\n"); |
615 | WARN_ON(!IS_MODULE(CONFIG_RCU_PERF_TEST)); | 619 | WARN_ON(!IS_MODULE(CONFIG_RCU_PERF_TEST)); |
616 | firsterr = -EINVAL; | 620 | firsterr = -EINVAL; |
621 | cur_ops = NULL; | ||
617 | goto unwind; | 622 | goto unwind; |
618 | } | 623 | } |
619 | if (cur_ops->init) | 624 | if (cur_ops->init) |
diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
index f14d1b18a74f..efaa5b3f4d3f 100644
--- a/kernel/rcu/rcutorture.c
+++ b/kernel/rcu/rcutorture.c
@@ -299,7 +299,6 @@ struct rcu_torture_ops {
299 | int irq_capable; | 299 | int irq_capable; |
300 | int can_boost; | 300 | int can_boost; |
301 | int extendables; | 301 | int extendables; |
302 | int ext_irq_conflict; | ||
303 | const char *name; | 302 | const char *name; |
304 | }; | 303 | }; |
305 | 304 | ||
@@ -592,12 +591,7 @@ static void srcu_torture_init(void)
592 | 591 | ||
593 | static void srcu_torture_cleanup(void) | 592 | static void srcu_torture_cleanup(void) |
594 | { | 593 | { |
595 | static DEFINE_TORTURE_RANDOM(rand); | 594 | cleanup_srcu_struct(&srcu_ctld); |
596 | |||
597 | if (torture_random(&rand) & 0x800) | ||
598 | cleanup_srcu_struct(&srcu_ctld); | ||
599 | else | ||
600 | cleanup_srcu_struct_quiesced(&srcu_ctld); | ||
601 | srcu_ctlp = &srcu_ctl; /* In case of a later rcutorture run. */ | 595 | srcu_ctlp = &srcu_ctl; /* In case of a later rcutorture run. */ |
602 | } | 596 | } |
603 | 597 | ||
@@ -1160,7 +1154,7 @@ rcutorture_extend_mask(int oldmask, struct torture_random_state *trsp)
1160 | unsigned long randmask2 = randmask1 >> 3; | 1154 | unsigned long randmask2 = randmask1 >> 3; |
1161 | 1155 | ||
1162 | WARN_ON_ONCE(mask >> RCUTORTURE_RDR_SHIFT); | 1156 | WARN_ON_ONCE(mask >> RCUTORTURE_RDR_SHIFT); |
1163 | /* Most of the time lots of bits, half the time only one bit. */ | 1157 | /* Mostly only one bit (need preemption!), sometimes lots of bits. */ |
1164 | if (!(randmask1 & 0x7)) | 1158 | if (!(randmask1 & 0x7)) |
1165 | mask = mask & randmask2; | 1159 | mask = mask & randmask2; |
1166 | else | 1160 | else |
@@ -1170,10 +1164,6 @@ rcutorture_extend_mask(int oldmask, struct torture_random_state *trsp)
1170 | ((!(mask & RCUTORTURE_RDR_BH) && (oldmask & RCUTORTURE_RDR_BH)) || | 1164 | ((!(mask & RCUTORTURE_RDR_BH) && (oldmask & RCUTORTURE_RDR_BH)) || |
1171 | (!(mask & RCUTORTURE_RDR_RBH) && (oldmask & RCUTORTURE_RDR_RBH)))) | 1165 | (!(mask & RCUTORTURE_RDR_RBH) && (oldmask & RCUTORTURE_RDR_RBH)))) |
1172 | mask |= RCUTORTURE_RDR_BH | RCUTORTURE_RDR_RBH; | 1166 | mask |= RCUTORTURE_RDR_BH | RCUTORTURE_RDR_RBH; |
1173 | if ((mask & RCUTORTURE_RDR_IRQ) && | ||
1174 | !(mask & cur_ops->ext_irq_conflict) && | ||
1175 | (oldmask & cur_ops->ext_irq_conflict)) | ||
1176 | mask |= cur_ops->ext_irq_conflict; /* Or if readers object. */ | ||
1177 | return mask ?: RCUTORTURE_RDR_RCU; | 1167 | return mask ?: RCUTORTURE_RDR_RCU; |
1178 | } | 1168 | } |
1179 | 1169 | ||
@@ -1848,7 +1838,7 @@ static int rcutorture_oom_notify(struct notifier_block *self,
1848 | WARN(1, "%s invoked upon OOM during forward-progress testing.\n", | 1838 | WARN(1, "%s invoked upon OOM during forward-progress testing.\n", |
1849 | __func__); | 1839 | __func__); |
1850 | rcu_torture_fwd_cb_hist(); | 1840 | rcu_torture_fwd_cb_hist(); |
1851 | rcu_fwd_progress_check(1 + (jiffies - READ_ONCE(rcu_fwd_startat) / 2)); | 1841 | rcu_fwd_progress_check(1 + (jiffies - READ_ONCE(rcu_fwd_startat)) / 2); |
1852 | WRITE_ONCE(rcu_fwd_emergency_stop, true); | 1842 | WRITE_ONCE(rcu_fwd_emergency_stop, true); |
1853 | smp_mb(); /* Emergency stop before free and wait to avoid hangs. */ | 1843 | smp_mb(); /* Emergency stop before free and wait to avoid hangs. */ |
1854 | pr_info("%s: Freed %lu RCU callbacks.\n", | 1844 | pr_info("%s: Freed %lu RCU callbacks.\n", |
@@ -2094,6 +2084,10 @@ rcu_torture_cleanup(void)
2094 | cur_ops->cb_barrier(); | 2084 | cur_ops->cb_barrier(); |
2095 | return; | 2085 | return; |
2096 | } | 2086 | } |
2087 | if (!cur_ops) { | ||
2088 | torture_cleanup_end(); | ||
2089 | return; | ||
2090 | } | ||
2097 | 2091 | ||
2098 | rcu_torture_barrier_cleanup(); | 2092 | rcu_torture_barrier_cleanup(); |
2099 | torture_stop_kthread(rcu_torture_fwd_prog, fwd_prog_task); | 2093 | torture_stop_kthread(rcu_torture_fwd_prog, fwd_prog_task); |
@@ -2267,6 +2261,7 @@ rcu_torture_init(void)
2267 | pr_cont("\n"); | 2261 | pr_cont("\n"); |
2268 | WARN_ON(!IS_MODULE(CONFIG_RCU_TORTURE_TEST)); | 2262 | WARN_ON(!IS_MODULE(CONFIG_RCU_TORTURE_TEST)); |
2269 | firsterr = -EINVAL; | 2263 | firsterr = -EINVAL; |
2264 | cur_ops = NULL; | ||
2270 | goto unwind; | 2265 | goto unwind; |
2271 | } | 2266 | } |
2272 | if (cur_ops->fqs == NULL && fqs_duration != 0) { | 2267 | if (cur_ops->fqs == NULL && fqs_duration != 0) { |
diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
index 5d4a39a6505a..44d6606b8325 100644
--- a/kernel/rcu/srcutiny.c
+++ b/kernel/rcu/srcutiny.c
@@ -76,19 +76,16 @@ EXPORT_SYMBOL_GPL(init_srcu_struct);
76 | * Must invoke this after you are finished using a given srcu_struct that | 76 | * Must invoke this after you are finished using a given srcu_struct that |
77 | * was initialized via init_srcu_struct(), else you leak memory. | 77 | * was initialized via init_srcu_struct(), else you leak memory. |
78 | */ | 78 | */ |
79 | void _cleanup_srcu_struct(struct srcu_struct *ssp, bool quiesced) | 79 | void cleanup_srcu_struct(struct srcu_struct *ssp) |
80 | { | 80 | { |
81 | WARN_ON(ssp->srcu_lock_nesting[0] || ssp->srcu_lock_nesting[1]); | 81 | WARN_ON(ssp->srcu_lock_nesting[0] || ssp->srcu_lock_nesting[1]); |
82 | if (quiesced) | 82 | flush_work(&ssp->srcu_work); |
83 | WARN_ON(work_pending(&ssp->srcu_work)); | ||
84 | else | ||
85 | flush_work(&ssp->srcu_work); | ||
86 | WARN_ON(ssp->srcu_gp_running); | 83 | WARN_ON(ssp->srcu_gp_running); |
87 | WARN_ON(ssp->srcu_gp_waiting); | 84 | WARN_ON(ssp->srcu_gp_waiting); |
88 | WARN_ON(ssp->srcu_cb_head); | 85 | WARN_ON(ssp->srcu_cb_head); |
89 | WARN_ON(&ssp->srcu_cb_head != ssp->srcu_cb_tail); | 86 | WARN_ON(&ssp->srcu_cb_head != ssp->srcu_cb_tail); |
90 | } | 87 | } |
91 | EXPORT_SYMBOL_GPL(_cleanup_srcu_struct); | 88 | EXPORT_SYMBOL_GPL(cleanup_srcu_struct); |
92 | 89 | ||
93 | /* | 90 | /* |
94 | * Removes the count for the old reader from the appropriate element of | 91 | * Removes the count for the old reader from the appropriate element of |
diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index a60b8ba9e1ac..9b761e546de8 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -360,8 +360,14 @@ static unsigned long srcu_get_delay(struct srcu_struct *ssp)
360 | return SRCU_INTERVAL; | 360 | return SRCU_INTERVAL; |
361 | } | 361 | } |
362 | 362 | ||
363 | /* Helper for cleanup_srcu_struct() and cleanup_srcu_struct_quiesced(). */ | 363 | /** |
364 | void _cleanup_srcu_struct(struct srcu_struct *ssp, bool quiesced) | 364 | * cleanup_srcu_struct - deconstruct a sleep-RCU structure |
365 | * @ssp: structure to clean up. | ||
366 | * | ||
367 | * Must invoke this after you are finished using a given srcu_struct that | ||
368 | * was initialized via init_srcu_struct(), else you leak memory. | ||
369 | */ | ||
370 | void cleanup_srcu_struct(struct srcu_struct *ssp) | ||
365 | { | 371 | { |
366 | int cpu; | 372 | int cpu; |
367 | 373 | ||
@@ -369,24 +375,14 @@ void _cleanup_srcu_struct(struct srcu_struct *ssp, bool quiesced)
369 | return; /* Just leak it! */ | 375 | return; /* Just leak it! */ |
370 | if (WARN_ON(srcu_readers_active(ssp))) | 376 | if (WARN_ON(srcu_readers_active(ssp))) |
371 | return; /* Just leak it! */ | 377 | return; /* Just leak it! */ |
372 | if (quiesced) { | 378 | flush_delayed_work(&ssp->work); |
373 | if (WARN_ON(delayed_work_pending(&ssp->work))) | ||
374 | return; /* Just leak it! */ | ||
375 | } else { | ||
376 | flush_delayed_work(&ssp->work); | ||
377 | } | ||
378 | for_each_possible_cpu(cpu) { | 379 | for_each_possible_cpu(cpu) { |
379 | struct srcu_data *sdp = per_cpu_ptr(ssp->sda, cpu); | 380 | struct srcu_data *sdp = per_cpu_ptr(ssp->sda, cpu); |
380 | 381 | ||
381 | if (quiesced) { | 382 | del_timer_sync(&sdp->delay_work); |
382 | if (WARN_ON(timer_pending(&sdp->delay_work))) | 383 | flush_work(&sdp->work); |
383 | return; /* Just leak it! */ | 384 | if (WARN_ON(rcu_segcblist_n_cbs(&sdp->srcu_cblist))) |
384 | if (WARN_ON(work_pending(&sdp->work))) | 385 | return; /* Forgot srcu_barrier(), so just leak it! */ |
385 | return; /* Just leak it! */ | ||
386 | } else { | ||
387 | del_timer_sync(&sdp->delay_work); | ||
388 | flush_work(&sdp->work); | ||
389 | } | ||
390 | } | 386 | } |
391 | if (WARN_ON(rcu_seq_state(READ_ONCE(ssp->srcu_gp_seq)) != SRCU_STATE_IDLE) || | 387 | if (WARN_ON(rcu_seq_state(READ_ONCE(ssp->srcu_gp_seq)) != SRCU_STATE_IDLE) || |
392 | WARN_ON(srcu_readers_active(ssp))) { | 388 | WARN_ON(srcu_readers_active(ssp))) { |
@@ -397,7 +393,7 @@ void _cleanup_srcu_struct(struct srcu_struct *ssp, bool quiesced)
397 | free_percpu(ssp->sda); | 393 | free_percpu(ssp->sda); |
398 | ssp->sda = NULL; | 394 | ssp->sda = NULL; |
399 | } | 395 | } |
400 | EXPORT_SYMBOL_GPL(_cleanup_srcu_struct); | 396 | EXPORT_SYMBOL_GPL(cleanup_srcu_struct); |
401 | 397 | ||
402 | /* | 398 | /* |
403 | * Counts the new reader in the appropriate per-CPU element of the | 399 | * Counts the new reader in the appropriate per-CPU element of the |
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index abc8512ceb5f..ec77ec336f58 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -102,11 +102,6 @@ int rcu_num_lvls __read_mostly = RCU_NUM_LVLS;
102 | /* Number of rcu_nodes at specified level. */ | 102 | /* Number of rcu_nodes at specified level. */ |
103 | int num_rcu_lvl[] = NUM_RCU_LVL_INIT; | 103 | int num_rcu_lvl[] = NUM_RCU_LVL_INIT; |
104 | int rcu_num_nodes __read_mostly = NUM_RCU_NODES; /* Total # rcu_nodes in use. */ | 104 | int rcu_num_nodes __read_mostly = NUM_RCU_NODES; /* Total # rcu_nodes in use. */ |
105 | /* panic() on RCU Stall sysctl. */ | ||
106 | int sysctl_panic_on_rcu_stall __read_mostly; | ||
107 | /* Commandeer a sysrq key to dump RCU's tree. */ | ||
108 | static bool sysrq_rcu; | ||
109 | module_param(sysrq_rcu, bool, 0444); | ||
110 | 105 | ||
111 | /* | 106 | /* |
112 | * The rcu_scheduler_active variable is initialized to the value | 107 | * The rcu_scheduler_active variable is initialized to the value |
@@ -514,74 +509,6 @@ static const char *gp_state_getname(short gs)
514 | } | 509 | } |
515 | 510 | ||
516 | /* | 511 | /* |
517 | * Show the state of the grace-period kthreads. | ||
518 | */ | ||
519 | void show_rcu_gp_kthreads(void) | ||
520 | { | ||
521 | int cpu; | ||
522 | unsigned long j; | ||
523 | unsigned long ja; | ||
524 | unsigned long jr; | ||
525 | unsigned long jw; | ||
526 | struct rcu_data *rdp; | ||
527 | struct rcu_node *rnp; | ||
528 | |||
529 | j = jiffies; | ||
530 | ja = j - READ_ONCE(rcu_state.gp_activity); | ||
531 | jr = j - READ_ONCE(rcu_state.gp_req_activity); | ||
532 | jw = j - READ_ONCE(rcu_state.gp_wake_time); | ||
533 | pr_info("%s: wait state: %s(%d) ->state: %#lx delta ->gp_activity %lu ->gp_req_activity %lu ->gp_wake_time %lu ->gp_wake_seq %ld ->gp_seq %ld ->gp_seq_needed %ld ->gp_flags %#x\n", | ||
534 | rcu_state.name, gp_state_getname(rcu_state.gp_state), | ||
535 | rcu_state.gp_state, | ||
536 | rcu_state.gp_kthread ? rcu_state.gp_kthread->state : 0x1ffffL, | ||
537 | ja, jr, jw, (long)READ_ONCE(rcu_state.gp_wake_seq), | ||
538 | (long)READ_ONCE(rcu_state.gp_seq), | ||
539 | (long)READ_ONCE(rcu_get_root()->gp_seq_needed), | ||
540 | READ_ONCE(rcu_state.gp_flags)); | ||
541 | rcu_for_each_node_breadth_first(rnp) { | ||
542 | if (ULONG_CMP_GE(rcu_state.gp_seq, rnp->gp_seq_needed)) | ||
543 | continue; | ||
544 | pr_info("\trcu_node %d:%d ->gp_seq %ld ->gp_seq_needed %ld\n", | ||
545 | rnp->grplo, rnp->grphi, (long)rnp->gp_seq, | ||
546 | (long)rnp->gp_seq_needed); | ||
547 | if (!rcu_is_leaf_node(rnp)) | ||
548 | continue; | ||
549 | for_each_leaf_node_possible_cpu(rnp, cpu) { | ||
550 | rdp = per_cpu_ptr(&rcu_data, cpu); | ||
551 | if (rdp->gpwrap || | ||
552 | ULONG_CMP_GE(rcu_state.gp_seq, | ||
553 | rdp->gp_seq_needed)) | ||
554 | continue; | ||
555 | pr_info("\tcpu %d ->gp_seq_needed %ld\n", | ||
556 | cpu, (long)rdp->gp_seq_needed); | ||
557 | } | ||
558 | } | ||
559 | /* sched_show_task(rcu_state.gp_kthread); */ | ||
560 | } | ||
561 | EXPORT_SYMBOL_GPL(show_rcu_gp_kthreads); | ||
562 | |||
563 | /* Dump grace-period-request information due to commandeered sysrq. */ | ||
564 | static void sysrq_show_rcu(int key) | ||
565 | { | ||
566 | show_rcu_gp_kthreads(); | ||
567 | } | ||
568 | |||
569 | static struct sysrq_key_op sysrq_rcudump_op = { | ||
570 | .handler = sysrq_show_rcu, | ||
571 | .help_msg = "show-rcu(y)", | ||
572 | .action_msg = "Show RCU tree", | ||
573 | .enable_mask = SYSRQ_ENABLE_DUMP, | ||
574 | }; | ||
575 | |||
576 | static int __init rcu_sysrq_init(void) | ||
577 | { | ||
578 | if (sysrq_rcu) | ||
579 | return register_sysrq_key('y', &sysrq_rcudump_op); | ||
580 | return 0; | ||
581 | } | ||
582 | early_initcall(rcu_sysrq_init); | ||
583 | |||
584 | /* | ||
585 | * Send along grace-period-related data for rcutorture diagnostics. | 512 | * Send along grace-period-related data for rcutorture diagnostics. |
586 | */ | 513 | */ |
587 | void rcutorture_get_gp_data(enum rcutorture_type test_type, int *flags, | 514 | void rcutorture_get_gp_data(enum rcutorture_type test_type, int *flags, |
@@ -1035,27 +962,6 @@ static int dyntick_save_progress_counter(struct rcu_data *rdp)
1035 | } | 962 | } |
1036 | 963 | ||
1037 | /* | 964 | /* |
1038 | * Handler for the irq_work request posted when a grace period has | ||
1039 | * gone on for too long, but not yet long enough for an RCU CPU | ||
1040 | * stall warning. Set state appropriately, but just complain if | ||
1041 | * there is unexpected state on entry. | ||
1042 | */ | ||
1043 | static void rcu_iw_handler(struct irq_work *iwp) | ||
1044 | { | ||
1045 | struct rcu_data *rdp; | ||
1046 | struct rcu_node *rnp; | ||
1047 | |||
1048 | rdp = container_of(iwp, struct rcu_data, rcu_iw); | ||
1049 | rnp = rdp->mynode; | ||
1050 | raw_spin_lock_rcu_node(rnp); | ||
1051 | if (!WARN_ON_ONCE(!rdp->rcu_iw_pending)) { | ||
1052 | rdp->rcu_iw_gp_seq = rnp->gp_seq; | ||
1053 | rdp->rcu_iw_pending = false; | ||
1054 | } | ||
1055 | raw_spin_unlock_rcu_node(rnp); | ||
1056 | } | ||
1057 | |||
1058 | /* | ||
1059 | * Return true if the specified CPU has passed through a quiescent | 965 | * Return true if the specified CPU has passed through a quiescent |
1060 | * state by virtue of being in or having passed through an dynticks | 966 | * state by virtue of being in or having passed through an dynticks |
1061 | * idle state since the last call to dyntick_save_progress_counter() | 967 | * idle state since the last call to dyntick_save_progress_counter() |
@@ -1168,295 +1074,6 @@ static int rcu_implicit_dynticks_qs(struct rcu_data *rdp)
1168 | return 0; | 1074 | return 0; |
1169 | } | 1075 | } |
1170 | 1076 | ||
1171 | static void record_gp_stall_check_time(void) | ||
1172 | { | ||
1173 | unsigned long j = jiffies; | ||
1174 | unsigned long j1; | ||
1175 | |||
1176 | rcu_state.gp_start = j; | ||
1177 | j1 = rcu_jiffies_till_stall_check(); | ||
1178 | /* Record ->gp_start before ->jiffies_stall. */ | ||
1179 | smp_store_release(&rcu_state.jiffies_stall, j + j1); /* ^^^ */ | ||
1180 | rcu_state.jiffies_resched = j + j1 / 2; | ||
1181 | rcu_state.n_force_qs_gpstart = READ_ONCE(rcu_state.n_force_qs); | ||
1182 | } | ||
1183 | |||
1184 | /* | ||
1185 | * Complain about starvation of grace-period kthread. | ||
1186 | */ | ||
1187 | static void rcu_check_gp_kthread_starvation(void) | ||
1188 | { | ||
1189 | struct task_struct *gpk = rcu_state.gp_kthread; | ||
1190 | unsigned long j; | ||
1191 | |||
1192 | j = jiffies - READ_ONCE(rcu_state.gp_activity); | ||
1193 | if (j > 2 * HZ) { | ||
1194 | pr_err("%s kthread starved for %ld jiffies! g%ld f%#x %s(%d) ->state=%#lx ->cpu=%d\n", | ||
1195 | rcu_state.name, j, | ||
1196 | (long)rcu_seq_current(&rcu_state.gp_seq), | ||
1197 | READ_ONCE(rcu_state.gp_flags), | ||
1198 | gp_state_getname(rcu_state.gp_state), rcu_state.gp_state, | ||
1199 | gpk ? gpk->state : ~0, gpk ? task_cpu(gpk) : -1); | ||
1200 | if (gpk) { | ||
1201 | pr_err("RCU grace-period kthread stack dump:\n"); | ||
1202 | sched_show_task(gpk); | ||
1203 | wake_up_process(gpk); | ||
1204 | } | ||
1205 | } | ||
1206 | } | ||
1207 | |||
1208 | /* | ||
1209 | * Dump stacks of all tasks running on stalled CPUs. First try using | ||
1210 | * NMIs, but fall back to manual remote stack tracing on architectures | ||
1211 | * that don't support NMI-based stack dumps. The NMI-triggered stack | ||
1212 | * traces are more accurate because they are printed by the target CPU. | ||
1213 | */ | ||
1214 | static void rcu_dump_cpu_stacks(void) | ||
1215 | { | ||
1216 | int cpu; | ||
1217 | unsigned long flags; | ||
1218 | struct rcu_node *rnp; | ||
1219 | |||
1220 | rcu_for_each_leaf_node(rnp) { | ||
1221 | raw_spin_lock_irqsave_rcu_node(rnp, flags); | ||
1222 | for_each_leaf_node_possible_cpu(rnp, cpu) | ||
1223 | if (rnp->qsmask & leaf_node_cpu_bit(rnp, cpu)) | ||
1224 | if (!trigger_single_cpu_backtrace(cpu)) | ||
1225 | dump_cpu_task(cpu); | ||
1226 | raw_spin_unlock_irqrestore_rcu_node(rnp, flags); | ||
1227 | } | ||
1228 | } | ||
1229 | |||
1230 | /* | ||
1231 | * If too much time has passed in the current grace period, and if | ||
1232 | * so configured, go kick the relevant kthreads. | ||
1233 | */ | ||
1234 | static void rcu_stall_kick_kthreads(void) | ||
1235 | { | ||
1236 | unsigned long j; | ||
1237 | |||
1238 | if (!rcu_kick_kthreads) | ||
1239 | return; | ||
1240 | j = READ_ONCE(rcu_state.jiffies_kick_kthreads); | ||
1241 | if (time_after(jiffies, j) && rcu_state.gp_kthread && | ||
1242 | (rcu_gp_in_progress() || READ_ONCE(rcu_state.gp_flags))) { | ||
1243 | WARN_ONCE(1, "Kicking %s grace-period kthread\n", | ||
1244 | rcu_state.name); | ||
1245 | rcu_ftrace_dump(DUMP_ALL); | ||
1246 | wake_up_process(rcu_state.gp_kthread); | ||
1247 | WRITE_ONCE(rcu_state.jiffies_kick_kthreads, j + HZ); | ||
1248 | } | ||
1249 | } | ||
1250 | |||
1251 | static void panic_on_rcu_stall(void) | ||
1252 | { | ||
1253 | if (sysctl_panic_on_rcu_stall) | ||
1254 | panic("RCU Stall\n"); | ||
1255 | } | ||
1256 | |||
1257 | static void print_other_cpu_stall(unsigned long gp_seq) | ||
1258 | { | ||
1259 | int cpu; | ||
1260 | unsigned long flags; | ||
1261 | unsigned long gpa; | ||
1262 | unsigned long j; | ||
1263 | int ndetected = 0; | ||
1264 | struct rcu_node *rnp = rcu_get_root(); | ||
1265 | long totqlen = 0; | ||
1266 | |||
1267 | /* Kick and suppress, if so configured. */ | ||
1268 | rcu_stall_kick_kthreads(); | ||
1269 | if (rcu_cpu_stall_suppress) | ||
1270 | return; | ||
1271 | |||
1272 | /* | ||
1273 | * OK, time to rat on our buddy... | ||
1274 | * See Documentation/RCU/stallwarn.txt for info on how to debug | ||
1275 | * RCU CPU stall warnings. | ||
1276 | */ | ||
1277 | pr_err("INFO: %s detected stalls on CPUs/tasks:", rcu_state.name); | ||
1278 | print_cpu_stall_info_begin(); | ||
1279 | rcu_for_each_leaf_node(rnp) { | ||
1280 | raw_spin_lock_irqsave_rcu_node(rnp, flags); | ||
1281 | ndetected += rcu_print_task_stall(rnp); | ||
1282 | if (rnp->qsmask != 0) { | ||
1283 | for_each_leaf_node_possible_cpu(rnp, cpu) | ||
1284 | if (rnp->qsmask & leaf_node_cpu_bit(rnp, cpu)) { | ||
1285 | print_cpu_stall_info(cpu); | ||
1286 | ndetected++; | ||
1287 | } | ||
1288 | } | ||
1289 | raw_spin_unlock_irqrestore_rcu_node(rnp, flags); | ||
1290 | } | ||
1291 | |||
1292 | print_cpu_stall_info_end(); | ||
1293 | for_each_possible_cpu(cpu) | ||
1294 | totqlen += rcu_get_n_cbs_cpu(cpu); | ||
1295 | pr_cont("(detected by %d, t=%ld jiffies, g=%ld, q=%lu)\n", | ||
1296 | smp_processor_id(), (long)(jiffies - rcu_state.gp_start), | ||
1297 | (long)rcu_seq_current(&rcu_state.gp_seq), totqlen); | ||
1298 | if (ndetected) { | ||
1299 | rcu_dump_cpu_stacks(); | ||
1300 | |||
1301 | /* Complain about tasks blocking the grace period. */ | ||
1302 | rcu_print_detail_task_stall(); | ||
1303 | } else { | ||
1304 | if (rcu_seq_current(&rcu_state.gp_seq) != gp_seq) { | ||
1305 | pr_err("INFO: Stall ended before state dump start\n"); | ||
1306 | } else { | ||
1307 | j = jiffies; | ||
1308 | gpa = READ_ONCE(rcu_state.gp_activity); | ||
1309 | pr_err("All QSes seen, last %s kthread activity %ld (%ld-%ld), jiffies_till_next_fqs=%ld, root ->qsmask %#lx\n", | ||
1310 | rcu_state.name, j - gpa, j, gpa, | ||
1311 | READ_ONCE(jiffies_till_next_fqs), | ||
1312 | rcu_get_root()->qsmask); | ||
1313 | /* In this case, the current CPU might be at fault. */ | ||
1314 | sched_show_task(current); | ||
1315 | } | ||
1316 | } | ||
1317 | /* Rewrite if needed in case of slow consoles. */ | ||
1318 | if (ULONG_CMP_GE(jiffies, READ_ONCE(rcu_state.jiffies_stall))) | ||
1319 | WRITE_ONCE(rcu_state.jiffies_stall, | ||
1320 | jiffies + 3 * rcu_jiffies_till_stall_check() + 3); | ||
1321 | |||
1322 | rcu_check_gp_kthread_starvation(); | ||
1323 | |||
1324 | panic_on_rcu_stall(); | ||
1325 | |||
1326 | rcu_force_quiescent_state(); /* Kick them all. */ | ||
1327 | } | ||
1328 | |||
1329 | static void print_cpu_stall(void) | ||
1330 | { | ||
1331 | int cpu; | ||
1332 | unsigned long flags; | ||
1333 | struct rcu_data *rdp = this_cpu_ptr(&rcu_data); | ||
1334 | struct rcu_node *rnp = rcu_get_root(); | ||
1335 | long totqlen = 0; | ||
1336 | |||
1337 | /* Kick and suppress, if so configured. */ | ||
1338 | rcu_stall_kick_kthreads(); | ||
1339 | if (rcu_cpu_stall_suppress) | ||
1340 | return; | ||
1341 | |||
1342 | /* | ||
1343 | * OK, time to rat on ourselves... | ||
1344 | * See Documentation/RCU/stallwarn.txt for info on how to debug | ||
1345 | * RCU CPU stall warnings. | ||
1346 | */ | ||
1347 | pr_err("INFO: %s self-detected stall on CPU", rcu_state.name); | ||
1348 | print_cpu_stall_info_begin(); | ||
1349 | raw_spin_lock_irqsave_rcu_node(rdp->mynode, flags); | ||
1350 | print_cpu_stall_info(smp_processor_id()); | ||
1351 | raw_spin_unlock_irqrestore_rcu_node(rdp->mynode, flags); | ||
1352 | print_cpu_stall_info_end(); | ||
1353 | for_each_possible_cpu(cpu) | ||
1354 | totqlen += rcu_get_n_cbs_cpu(cpu); | ||
1355 | pr_cont(" (t=%lu jiffies g=%ld q=%lu)\n", | ||
1356 | jiffies - rcu_state.gp_start, | ||
1357 | (long)rcu_seq_current(&rcu_state.gp_seq), totqlen); | ||
1358 | |||
1359 | rcu_check_gp_kthread_starvation(); | ||
1360 | |||
1361 | rcu_dump_cpu_stacks(); | ||
1362 | |||
1363 | raw_spin_lock_irqsave_rcu_node(rnp, flags); | ||
1364 | /* Rewrite if needed in case of slow consoles. */ | ||
1365 | if (ULONG_CMP_GE(jiffies, READ_ONCE(rcu_state.jiffies_stall))) | ||
1366 | WRITE_ONCE(rcu_state.jiffies_stall, | ||
1367 | jiffies + 3 * rcu_jiffies_till_stall_check() + 3); | ||
1368 | raw_spin_unlock_irqrestore_rcu_node(rnp, flags); | ||
1369 | |||
1370 | panic_on_rcu_stall(); | ||
1371 | |||
1372 | /* | ||
1373 | * Attempt to revive the RCU machinery by forcing a context switch. | ||
1374 | * | ||
1375 | * A context switch would normally allow the RCU state machine to make | ||
1376 | * progress and it could be we're stuck in kernel space without context | ||
1377 | * switches for an entirely unreasonable amount of time. | ||
1378 | */ | ||
1379 | set_tsk_need_resched(current); | ||
1380 | set_preempt_need_resched(); | ||
1381 | } | ||
1382 | |||
1383 | static void check_cpu_stall(struct rcu_data *rdp) | ||
1384 | { | ||
1385 | unsigned long gs1; | ||
1386 | unsigned long gs2; | ||
1387 | unsigned long gps; | ||
1388 | unsigned long j; | ||
1389 | unsigned long jn; | ||
1390 | unsigned long js; | ||
1391 | struct rcu_node *rnp; | ||
1392 | |||
1393 | if ((rcu_cpu_stall_suppress && !rcu_kick_kthreads) || | ||
1394 | !rcu_gp_in_progress()) | ||
1395 | return; | ||
1396 | rcu_stall_kick_kthreads(); | ||
1397 | j = jiffies; | ||
1398 | |||
1399 | /* | ||
1400 | * Lots of memory barriers to reject false positives. | ||
1401 | * | ||
1402 | * The idea is to pick up rcu_state.gp_seq, then | ||
1403 | * rcu_state.jiffies_stall, then rcu_state.gp_start, and finally | ||
1404 | * another copy of rcu_state.gp_seq. These values are updated in | ||
1405 | * the opposite order with memory barriers (or equivalent) during | ||
1406 | * grace-period initialization and cleanup. Now, a false positive | ||
1407 | * can occur if we get a new value of rcu_state.gp_start and an old | ||
1408 | * value of rcu_state.jiffies_stall. But given the memory barriers, | ||
1409 | * the only way that this can happen is if one grace period ends | ||
1410 | * and another starts between these two fetches. This is detected | ||
1411 | * by comparing the second fetch of rcu_state.gp_seq with the | ||
1412 | * previous fetch from rcu_state.gp_seq. | ||
1413 | * | ||
1414 | * Given this check, comparisons of jiffies, rcu_state.jiffies_stall, | ||
1415 | * and rcu_state.gp_start suffice to forestall false positives. | ||
1416 | */ | ||
1417 | gs1 = READ_ONCE(rcu_state.gp_seq); | ||
1418 | smp_rmb(); /* Pick up ->gp_seq first... */ | ||
1419 | js = READ_ONCE(rcu_state.jiffies_stall); | ||
1420 | smp_rmb(); /* ...then ->jiffies_stall before the rest... */ | ||
1421 | gps = READ_ONCE(rcu_state.gp_start); | ||
1422 | smp_rmb(); /* ...and finally ->gp_start before ->gp_seq again. */ | ||
1423 | gs2 = READ_ONCE(rcu_state.gp_seq); | ||
1424 | if (gs1 != gs2 || | ||
1425 | ULONG_CMP_LT(j, js) || | ||
1426 | ULONG_CMP_GE(gps, js)) | ||
1427 | return; /* No stall or GP completed since entering function. */ | ||
1428 | rnp = rdp->mynode; | ||
1429 | jn = jiffies + 3 * rcu_jiffies_till_stall_check() + 3; | ||
1430 | if (rcu_gp_in_progress() && | ||
1431 | (READ_ONCE(rnp->qsmask) & rdp->grpmask) && | ||
1432 | cmpxchg(&rcu_state.jiffies_stall, js, jn) == js) { | ||
1433 | |||
1434 | /* We haven't checked in, so go dump stack. */ | ||
1435 | print_cpu_stall(); | ||
1436 | |||
1437 | } else if (rcu_gp_in_progress() && | ||
1438 | ULONG_CMP_GE(j, js + RCU_STALL_RAT_DELAY) && | ||
1439 | cmpxchg(&rcu_state.jiffies_stall, js, jn) == js) { | ||
1440 | |||
1441 | /* They had a few time units to dump stack, so complain. */ | ||
1442 | print_other_cpu_stall(gs2); | ||
1443 | } | ||
1444 | } | ||
1445 | |||
1446 | /** | ||
1447 | * rcu_cpu_stall_reset - prevent further stall warnings in current grace period | ||
1448 | * | ||
1449 | * Set the stall-warning timeout way off into the future, thus preventing | ||
1450 | * any RCU CPU stall-warning messages from appearing in the current set of | ||
1451 | * RCU grace periods. | ||
1452 | * | ||
1453 | * The caller must disable hard irqs. | ||
1454 | */ | ||
1455 | void rcu_cpu_stall_reset(void) | ||
1456 | { | ||
1457 | WRITE_ONCE(rcu_state.jiffies_stall, jiffies + ULONG_MAX / 2); | ||
1458 | } | ||
1459 | |||
1460 | /* Trace-event wrapper function for trace_rcu_future_grace_period. */ | 1077 | /* Trace-event wrapper function for trace_rcu_future_grace_period. */ |
1461 | static void trace_rcu_this_gp(struct rcu_node *rnp, struct rcu_data *rdp, | 1078 | static void trace_rcu_this_gp(struct rcu_node *rnp, struct rcu_data *rdp, |
1462 | unsigned long gp_seq_req, const char *s) | 1079 | unsigned long gp_seq_req, const char *s) |
@@ -2635,101 +2252,6 @@ void rcu_force_quiescent_state(void) | |||
2635 | } | 2252 | } |
2636 | EXPORT_SYMBOL_GPL(rcu_force_quiescent_state); | 2253 | EXPORT_SYMBOL_GPL(rcu_force_quiescent_state); |
2637 | 2254 | ||
2638 | /* | ||
2639 | * This function checks for grace-period requests that fail to motivate | ||
2640 | * RCU to come out of its idle mode. | ||
2641 | */ | ||
2642 | void | ||
2643 | rcu_check_gp_start_stall(struct rcu_node *rnp, struct rcu_data *rdp, | ||
2644 | const unsigned long gpssdelay) | ||
2645 | { | ||
2646 | unsigned long flags; | ||
2647 | unsigned long j; | ||
2648 | struct rcu_node *rnp_root = rcu_get_root(); | ||
2649 | static atomic_t warned = ATOMIC_INIT(0); | ||
2650 | |||
2651 | if (!IS_ENABLED(CONFIG_PROVE_RCU) || rcu_gp_in_progress() || | ||
2652 | ULONG_CMP_GE(rnp_root->gp_seq, rnp_root->gp_seq_needed)) | ||
2653 | return; | ||
2654 | j = jiffies; /* Expensive access, and in common case don't get here. */ | ||
2655 | if (time_before(j, READ_ONCE(rcu_state.gp_req_activity) + gpssdelay) || | ||
2656 | time_before(j, READ_ONCE(rcu_state.gp_activity) + gpssdelay) || | ||
2657 | atomic_read(&warned)) | ||
2658 | return; | ||
2659 | |||
2660 | raw_spin_lock_irqsave_rcu_node(rnp, flags); | ||
2661 | j = jiffies; | ||
2662 | if (rcu_gp_in_progress() || | ||
2663 | ULONG_CMP_GE(rnp_root->gp_seq, rnp_root->gp_seq_needed) || | ||
2664 | time_before(j, READ_ONCE(rcu_state.gp_req_activity) + gpssdelay) || | ||
2665 | time_before(j, READ_ONCE(rcu_state.gp_activity) + gpssdelay) || | ||
2666 | atomic_read(&warned)) { | ||
2667 | raw_spin_unlock_irqrestore_rcu_node(rnp, flags); | ||
2668 | return; | ||
2669 | } | ||
2670 | /* Hold onto the leaf lock to make others see warned==1. */ | ||
2671 | |||
2672 | if (rnp_root != rnp) | ||
2673 | raw_spin_lock_rcu_node(rnp_root); /* irqs already disabled. */ | ||
2674 | j = jiffies; | ||
2675 | if (rcu_gp_in_progress() || | ||
2676 | ULONG_CMP_GE(rnp_root->gp_seq, rnp_root->gp_seq_needed) || | ||
2677 | time_before(j, rcu_state.gp_req_activity + gpssdelay) || | ||
2678 | time_before(j, rcu_state.gp_activity + gpssdelay) || | ||
2679 | atomic_xchg(&warned, 1)) { | ||
2680 | raw_spin_unlock_rcu_node(rnp_root); /* irqs remain disabled. */ | ||
2681 | raw_spin_unlock_irqrestore_rcu_node(rnp, flags); | ||
2682 | return; | ||
2683 | } | ||
2684 | WARN_ON(1); | ||
2685 | if (rnp_root != rnp) | ||
2686 | raw_spin_unlock_rcu_node(rnp_root); | ||
2687 | raw_spin_unlock_irqrestore_rcu_node(rnp, flags); | ||
2688 | show_rcu_gp_kthreads(); | ||
2689 | } | ||
2690 | |||
2691 | /* | ||
2692 | * Do a forward-progress check for rcutorture. This is normally invoked | ||
2693 | * due to an OOM event. The argument "j" gives the time period during | ||
2694 | * which rcutorture would like progress to have been made. | ||
2695 | */ | ||
2696 | void rcu_fwd_progress_check(unsigned long j) | ||
2697 | { | ||
2698 | unsigned long cbs; | ||
2699 | int cpu; | ||
2700 | unsigned long max_cbs = 0; | ||
2701 | int max_cpu = -1; | ||
2702 | struct rcu_data *rdp; | ||
2703 | |||
2704 | if (rcu_gp_in_progress()) { | ||
2705 | pr_info("%s: GP age %lu jiffies\n", | ||
2706 | __func__, jiffies - rcu_state.gp_start); | ||
2707 | show_rcu_gp_kthreads(); | ||
2708 | } else { | ||
2709 | pr_info("%s: Last GP end %lu jiffies ago\n", | ||
2710 | __func__, jiffies - rcu_state.gp_end); | ||
2711 | preempt_disable(); | ||
2712 | rdp = this_cpu_ptr(&rcu_data); | ||
2713 | rcu_check_gp_start_stall(rdp->mynode, rdp, j); | ||
2714 | preempt_enable(); | ||
2715 | } | ||
2716 | for_each_possible_cpu(cpu) { | ||
2717 | cbs = rcu_get_n_cbs_cpu(cpu); | ||
2718 | if (!cbs) | ||
2719 | continue; | ||
2720 | if (max_cpu < 0) | ||
2721 | pr_info("%s: callbacks", __func__); | ||
2722 | pr_cont(" %d: %lu", cpu, cbs); | ||
2723 | if (cbs <= max_cbs) | ||
2724 | continue; | ||
2725 | max_cbs = cbs; | ||
2726 | max_cpu = cpu; | ||
2727 | } | ||
2728 | if (max_cpu >= 0) | ||
2729 | pr_cont("\n"); | ||
2730 | } | ||
2731 | EXPORT_SYMBOL_GPL(rcu_fwd_progress_check); | ||
2732 | |||
2733 | /* Perform RCU core processing work for the current CPU. */ | 2255 | /* Perform RCU core processing work for the current CPU. */ |
2734 | static __latent_entropy void rcu_core(struct softirq_action *unused) | 2256 | static __latent_entropy void rcu_core(struct softirq_action *unused) |
2735 | { | 2257 | { |
@@ -3855,5 +3377,6 @@ void __init rcu_init(void) | |||
3855 | srcu_init(); | 3377 | srcu_init(); |
3856 | } | 3378 | } |
3857 | 3379 | ||
3380 | #include "tree_stall.h" | ||
3858 | #include "tree_exp.h" | 3381 | #include "tree_exp.h" |
3859 | #include "tree_plugin.h" | 3382 | #include "tree_plugin.h" |
diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h index bb4f995f2d3f..e253d11af3c4 100644 --- a/kernel/rcu/tree.h +++ b/kernel/rcu/tree.h | |||
@@ -393,15 +393,13 @@ static const char *tp_rcu_varname __used __tracepoint_string = rcu_name; | |||
393 | 393 | ||
394 | int rcu_dynticks_snap(struct rcu_data *rdp); | 394 | int rcu_dynticks_snap(struct rcu_data *rdp); |
395 | 395 | ||
396 | /* Forward declarations for rcutree_plugin.h */ | 396 | /* Forward declarations for tree_plugin.h */ |
397 | static void rcu_bootup_announce(void); | 397 | static void rcu_bootup_announce(void); |
398 | static void rcu_qs(void); | 398 | static void rcu_qs(void); |
399 | static int rcu_preempt_blocked_readers_cgp(struct rcu_node *rnp); | 399 | static int rcu_preempt_blocked_readers_cgp(struct rcu_node *rnp); |
400 | #ifdef CONFIG_HOTPLUG_CPU | 400 | #ifdef CONFIG_HOTPLUG_CPU |
401 | static bool rcu_preempt_has_tasks(struct rcu_node *rnp); | 401 | static bool rcu_preempt_has_tasks(struct rcu_node *rnp); |
402 | #endif /* #ifdef CONFIG_HOTPLUG_CPU */ | 402 | #endif /* #ifdef CONFIG_HOTPLUG_CPU */ |
403 | static void rcu_print_detail_task_stall(void); | ||
404 | static int rcu_print_task_stall(struct rcu_node *rnp); | ||
405 | static int rcu_print_task_exp_stall(struct rcu_node *rnp); | 403 | static int rcu_print_task_exp_stall(struct rcu_node *rnp); |
406 | static void rcu_preempt_check_blocked_tasks(struct rcu_node *rnp); | 404 | static void rcu_preempt_check_blocked_tasks(struct rcu_node *rnp); |
407 | static void rcu_flavor_sched_clock_irq(int user); | 405 | static void rcu_flavor_sched_clock_irq(int user); |
@@ -418,9 +416,6 @@ static void rcu_prepare_for_idle(void); | |||
418 | static bool rcu_preempt_has_tasks(struct rcu_node *rnp); | 416 | static bool rcu_preempt_has_tasks(struct rcu_node *rnp); |
419 | static bool rcu_preempt_need_deferred_qs(struct task_struct *t); | 417 | static bool rcu_preempt_need_deferred_qs(struct task_struct *t); |
420 | static void rcu_preempt_deferred_qs(struct task_struct *t); | 418 | static void rcu_preempt_deferred_qs(struct task_struct *t); |
421 | static void print_cpu_stall_info_begin(void); | ||
422 | static void print_cpu_stall_info(int cpu); | ||
423 | static void print_cpu_stall_info_end(void); | ||
424 | static void zero_cpu_stall_ticks(struct rcu_data *rdp); | 419 | static void zero_cpu_stall_ticks(struct rcu_data *rdp); |
425 | static bool rcu_nocb_cpu_needs_barrier(int cpu); | 420 | static bool rcu_nocb_cpu_needs_barrier(int cpu); |
426 | static struct swait_queue_head *rcu_nocb_gp_get(struct rcu_node *rnp); | 421 | static struct swait_queue_head *rcu_nocb_gp_get(struct rcu_node *rnp); |
@@ -445,3 +440,10 @@ static void rcu_bind_gp_kthread(void); | |||
445 | static bool rcu_nohz_full_cpu(void); | 440 | static bool rcu_nohz_full_cpu(void); |
446 | static void rcu_dynticks_task_enter(void); | 441 | static void rcu_dynticks_task_enter(void); |
447 | static void rcu_dynticks_task_exit(void); | 442 | static void rcu_dynticks_task_exit(void); |
443 | |||
444 | /* Forward declarations for tree_stall.h */ | ||
445 | static void record_gp_stall_check_time(void); | ||
446 | static void rcu_iw_handler(struct irq_work *iwp); | ||
447 | static void check_cpu_stall(struct rcu_data *rdp); | ||
448 | static void rcu_check_gp_start_stall(struct rcu_node *rnp, struct rcu_data *rdp, | ||
449 | const unsigned long gpssdelay); | ||
diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h index 1ee0782213b8..9c990df880d1 100644 --- a/kernel/rcu/tree_exp.h +++ b/kernel/rcu/tree_exp.h | |||
@@ -10,6 +10,7 @@ | |||
10 | #include <linux/lockdep.h> | 10 | #include <linux/lockdep.h> |
11 | 11 | ||
12 | static void rcu_exp_handler(void *unused); | 12 | static void rcu_exp_handler(void *unused); |
13 | static int rcu_print_task_exp_stall(struct rcu_node *rnp); | ||
13 | 14 | ||
14 | /* | 15 | /* |
15 | * Record the start of an expedited grace period. | 16 | * Record the start of an expedited grace period. |
@@ -670,6 +671,27 @@ static void sync_sched_exp_online_cleanup(int cpu) | |||
670 | { | 671 | { |
671 | } | 672 | } |
672 | 673 | ||
674 | /* | ||
675 | * Scan the current list of tasks blocked within RCU read-side critical | ||
676 | * sections, printing out the tid of each that is blocking the current | ||
677 | * expedited grace period. | ||
678 | */ | ||
679 | static int rcu_print_task_exp_stall(struct rcu_node *rnp) | ||
680 | { | ||
681 | struct task_struct *t; | ||
682 | int ndetected = 0; | ||
683 | |||
684 | if (!rnp->exp_tasks) | ||
685 | return 0; | ||
686 | t = list_entry(rnp->exp_tasks->prev, | ||
687 | struct task_struct, rcu_node_entry); | ||
688 | list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry) { | ||
689 | pr_cont(" P%d", t->pid); | ||
690 | ndetected++; | ||
691 | } | ||
692 | return ndetected; | ||
693 | } | ||
694 | |||
673 | #else /* #ifdef CONFIG_PREEMPT_RCU */ | 695 | #else /* #ifdef CONFIG_PREEMPT_RCU */ |
674 | 696 | ||
675 | /* Invoked on each online non-idle CPU for expedited quiescent state. */ | 697 | /* Invoked on each online non-idle CPU for expedited quiescent state. */ |
@@ -709,6 +731,16 @@ static void sync_sched_exp_online_cleanup(int cpu) | |||
709 | WARN_ON_ONCE(ret); | 731 | WARN_ON_ONCE(ret); |
710 | } | 732 | } |
711 | 733 | ||
734 | /* | ||
735 | * Because preemptible RCU does not exist, we never have to check for | ||
736 | * tasks blocked within RCU read-side critical sections that are | ||
737 | * blocking the current expedited grace period. | ||
738 | */ | ||
739 | static int rcu_print_task_exp_stall(struct rcu_node *rnp) | ||
740 | { | ||
741 | return 0; | ||
742 | } | ||
743 | |||
712 | #endif /* #else #ifdef CONFIG_PREEMPT_RCU */ | 744 | #endif /* #else #ifdef CONFIG_PREEMPT_RCU */ |
713 | 745 | ||
714 | /** | 746 | /** |
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h index 6ddb3c05e88f..1102765f91fd 100644 --- a/kernel/rcu/tree_plugin.h +++ b/kernel/rcu/tree_plugin.h | |||
@@ -643,100 +643,6 @@ static void rcu_read_unlock_special(struct task_struct *t) | |||
643 | } | 643 | } |
644 | 644 | ||
645 | /* | 645 | /* |
646 | * Dump detailed information for all tasks blocking the current RCU | ||
647 | * grace period on the specified rcu_node structure. | ||
648 | */ | ||
649 | static void rcu_print_detail_task_stall_rnp(struct rcu_node *rnp) | ||
650 | { | ||
651 | unsigned long flags; | ||
652 | struct task_struct *t; | ||
653 | |||
654 | raw_spin_lock_irqsave_rcu_node(rnp, flags); | ||
655 | if (!rcu_preempt_blocked_readers_cgp(rnp)) { | ||
656 | raw_spin_unlock_irqrestore_rcu_node(rnp, flags); | ||
657 | return; | ||
658 | } | ||
659 | t = list_entry(rnp->gp_tasks->prev, | ||
660 | struct task_struct, rcu_node_entry); | ||
661 | list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry) { | ||
662 | /* | ||
663 | * We could be printing a lot while holding a spinlock. | ||
664 | * Avoid triggering hard lockup. | ||
665 | */ | ||
666 | touch_nmi_watchdog(); | ||
667 | sched_show_task(t); | ||
668 | } | ||
669 | raw_spin_unlock_irqrestore_rcu_node(rnp, flags); | ||
670 | } | ||
671 | |||
672 | /* | ||
673 | * Dump detailed information for all tasks blocking the current RCU | ||
674 | * grace period. | ||
675 | */ | ||
676 | static void rcu_print_detail_task_stall(void) | ||
677 | { | ||
678 | struct rcu_node *rnp = rcu_get_root(); | ||
679 | |||
680 | rcu_print_detail_task_stall_rnp(rnp); | ||
681 | rcu_for_each_leaf_node(rnp) | ||
682 | rcu_print_detail_task_stall_rnp(rnp); | ||
683 | } | ||
684 | |||
685 | static void rcu_print_task_stall_begin(struct rcu_node *rnp) | ||
686 | { | ||
687 | pr_err("\tTasks blocked on level-%d rcu_node (CPUs %d-%d):", | ||
688 | rnp->level, rnp->grplo, rnp->grphi); | ||
689 | } | ||
690 | |||
691 | static void rcu_print_task_stall_end(void) | ||
692 | { | ||
693 | pr_cont("\n"); | ||
694 | } | ||
695 | |||
696 | /* | ||
697 | * Scan the current list of tasks blocked within RCU read-side critical | ||
698 | * sections, printing out the tid of each. | ||
699 | */ | ||
700 | static int rcu_print_task_stall(struct rcu_node *rnp) | ||
701 | { | ||
702 | struct task_struct *t; | ||
703 | int ndetected = 0; | ||
704 | |||
705 | if (!rcu_preempt_blocked_readers_cgp(rnp)) | ||
706 | return 0; | ||
707 | rcu_print_task_stall_begin(rnp); | ||
708 | t = list_entry(rnp->gp_tasks->prev, | ||
709 | struct task_struct, rcu_node_entry); | ||
710 | list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry) { | ||
711 | pr_cont(" P%d", t->pid); | ||
712 | ndetected++; | ||
713 | } | ||
714 | rcu_print_task_stall_end(); | ||
715 | return ndetected; | ||
716 | } | ||
717 | |||
718 | /* | ||
719 | * Scan the current list of tasks blocked within RCU read-side critical | ||
720 | * sections, printing out the tid of each that is blocking the current | ||
721 | * expedited grace period. | ||
722 | */ | ||
723 | static int rcu_print_task_exp_stall(struct rcu_node *rnp) | ||
724 | { | ||
725 | struct task_struct *t; | ||
726 | int ndetected = 0; | ||
727 | |||
728 | if (!rnp->exp_tasks) | ||
729 | return 0; | ||
730 | t = list_entry(rnp->exp_tasks->prev, | ||
731 | struct task_struct, rcu_node_entry); | ||
732 | list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry) { | ||
733 | pr_cont(" P%d", t->pid); | ||
734 | ndetected++; | ||
735 | } | ||
736 | return ndetected; | ||
737 | } | ||
738 | |||
739 | /* | ||
740 | * Check that the list of blocked tasks for the newly completed grace | 646 | * Check that the list of blocked tasks for the newly completed grace |
741 | * period is in fact empty. It is a serious bug to complete a grace | 647 | * period is in fact empty. It is a serious bug to complete a grace |
742 | * period that still has RCU readers blocked! This function must be | 648 | * period that still has RCU readers blocked! This function must be |
@@ -986,33 +892,6 @@ static bool rcu_preempt_need_deferred_qs(struct task_struct *t) | |||
986 | static void rcu_preempt_deferred_qs(struct task_struct *t) { } | 892 | static void rcu_preempt_deferred_qs(struct task_struct *t) { } |
987 | 893 | ||
988 | /* | 894 | /* |
989 | * Because preemptible RCU does not exist, we never have to check for | ||
990 | * tasks blocked within RCU read-side critical sections. | ||
991 | */ | ||
992 | static void rcu_print_detail_task_stall(void) | ||
993 | { | ||
994 | } | ||
995 | |||
996 | /* | ||
997 | * Because preemptible RCU does not exist, we never have to check for | ||
998 | * tasks blocked within RCU read-side critical sections. | ||
999 | */ | ||
1000 | static int rcu_print_task_stall(struct rcu_node *rnp) | ||
1001 | { | ||
1002 | return 0; | ||
1003 | } | ||
1004 | |||
1005 | /* | ||
1006 | * Because preemptible RCU does not exist, we never have to check for | ||
1007 | * tasks blocked within RCU read-side critical sections that are | ||
1008 | * blocking the current expedited grace period. | ||
1009 | */ | ||
1010 | static int rcu_print_task_exp_stall(struct rcu_node *rnp) | ||
1011 | { | ||
1012 | return 0; | ||
1013 | } | ||
1014 | |||
1015 | /* | ||
1016 | * Because there is no preemptible RCU, there can be no readers blocked, | 895 | * Because there is no preemptible RCU, there can be no readers blocked, |
1017 | * so there is no need to check for blocked tasks. So check only for | 896 | * so there is no need to check for blocked tasks. So check only for |
1018 | * bogus qsmask values. | 897 | * bogus qsmask values. |
@@ -1652,98 +1531,6 @@ static void rcu_cleanup_after_idle(void) | |||
1652 | 1531 | ||
1653 | #endif /* #else #if !defined(CONFIG_RCU_FAST_NO_HZ) */ | 1532 | #endif /* #else #if !defined(CONFIG_RCU_FAST_NO_HZ) */ |
1654 | 1533 | ||
1655 | #ifdef CONFIG_RCU_FAST_NO_HZ | ||
1656 | |||
1657 | static void print_cpu_stall_fast_no_hz(char *cp, int cpu) | ||
1658 | { | ||
1659 | struct rcu_data *rdp = &per_cpu(rcu_data, cpu); | ||
1660 | |||
1661 | sprintf(cp, "last_accelerate: %04lx/%04lx, Nonlazy posted: %c%c%c", | ||
1662 | rdp->last_accelerate & 0xffff, jiffies & 0xffff, | ||
1663 | ".l"[rdp->all_lazy], | ||
1664 | ".L"[!rcu_segcblist_n_nonlazy_cbs(&rdp->cblist)], | ||
1665 | ".D"[!rdp->tick_nohz_enabled_snap]); | ||
1666 | } | ||
1667 | |||
1668 | #else /* #ifdef CONFIG_RCU_FAST_NO_HZ */ | ||
1669 | |||
1670 | static void print_cpu_stall_fast_no_hz(char *cp, int cpu) | ||
1671 | { | ||
1672 | *cp = '\0'; | ||
1673 | } | ||
1674 | |||
1675 | #endif /* #else #ifdef CONFIG_RCU_FAST_NO_HZ */ | ||
1676 | |||
1677 | /* Initiate the stall-info list. */ | ||
1678 | static void print_cpu_stall_info_begin(void) | ||
1679 | { | ||
1680 | pr_cont("\n"); | ||
1681 | } | ||
1682 | |||
1683 | /* | ||
1684 | * Print out diagnostic information for the specified stalled CPU. | ||
1685 | * | ||
1686 | * If the specified CPU is aware of the current RCU grace period, then | ||
1687 | * print the number of scheduling clock interrupts the CPU has taken | ||
1688 | * during the time that it has been aware. Otherwise, print the number | ||
1689 | * of RCU grace periods that this CPU is ignorant of, for example, "1" | ||
1690 | * if the CPU was aware of the previous grace period. | ||
1691 | * | ||
1692 | * Also print out idle and (if CONFIG_RCU_FAST_NO_HZ) idle-entry info. | ||
1693 | */ | ||
1694 | static void print_cpu_stall_info(int cpu) | ||
1695 | { | ||
1696 | unsigned long delta; | ||
1697 | char fast_no_hz[72]; | ||
1698 | struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu); | ||
1699 | char *ticks_title; | ||
1700 | unsigned long ticks_value; | ||
1701 | |||
1702 | /* | ||
1703 | * We could be printing a lot while holding a spinlock. Avoid | ||
1704 | * triggering hard lockup. | ||
1705 | */ | ||
1706 | touch_nmi_watchdog(); | ||
1707 | |||
1708 | ticks_value = rcu_seq_ctr(rcu_state.gp_seq - rdp->gp_seq); | ||
1709 | if (ticks_value) { | ||
1710 | ticks_title = "GPs behind"; | ||
1711 | } else { | ||
1712 | ticks_title = "ticks this GP"; | ||
1713 | ticks_value = rdp->ticks_this_gp; | ||
1714 | } | ||
1715 | print_cpu_stall_fast_no_hz(fast_no_hz, cpu); | ||
1716 | delta = rcu_seq_ctr(rdp->mynode->gp_seq - rdp->rcu_iw_gp_seq); | ||
1717 | pr_err("\t%d-%c%c%c%c: (%lu %s) idle=%03x/%ld/%#lx softirq=%u/%u fqs=%ld %s\n", | ||
1718 | cpu, | ||
1719 | "O."[!!cpu_online(cpu)], | ||
1720 | "o."[!!(rdp->grpmask & rdp->mynode->qsmaskinit)], | ||
1721 | "N."[!!(rdp->grpmask & rdp->mynode->qsmaskinitnext)], | ||
1722 | !IS_ENABLED(CONFIG_IRQ_WORK) ? '?' : | ||
1723 | rdp->rcu_iw_pending ? (int)min(delta, 9UL) + '0' : | ||
1724 | "!."[!delta], | ||
1725 | ticks_value, ticks_title, | ||
1726 | rcu_dynticks_snap(rdp) & 0xfff, | ||
1727 | rdp->dynticks_nesting, rdp->dynticks_nmi_nesting, | ||
1728 | rdp->softirq_snap, kstat_softirqs_cpu(RCU_SOFTIRQ, cpu), | ||
1729 | READ_ONCE(rcu_state.n_force_qs) - rcu_state.n_force_qs_gpstart, | ||
1730 | fast_no_hz); | ||
1731 | } | ||
1732 | |||
1733 | /* Terminate the stall-info list. */ | ||
1734 | static void print_cpu_stall_info_end(void) | ||
1735 | { | ||
1736 | pr_err("\t"); | ||
1737 | } | ||
1738 | |||
1739 | /* Zero ->ticks_this_gp and snapshot the number of RCU softirq handlers. */ | ||
1740 | static void zero_cpu_stall_ticks(struct rcu_data *rdp) | ||
1741 | { | ||
1742 | rdp->ticks_this_gp = 0; | ||
1743 | rdp->softirq_snap = kstat_softirqs_cpu(RCU_SOFTIRQ, smp_processor_id()); | ||
1744 | WRITE_ONCE(rdp->last_fqs_resched, jiffies); | ||
1745 | } | ||
1746 | |||
1747 | #ifdef CONFIG_RCU_NOCB_CPU | 1534 | #ifdef CONFIG_RCU_NOCB_CPU |
1748 | 1535 | ||
1749 | /* | 1536 | /* |
diff --git a/kernel/rcu/tree_stall.h b/kernel/rcu/tree_stall.h new file mode 100644 index 000000000000..f65a73a97323 --- /dev/null +++ b/kernel/rcu/tree_stall.h | |||
@@ -0,0 +1,709 @@ | |||
1 | // SPDX-License-Identifier: GPL-2.0+ | ||
2 | /* | ||
3 | * RCU CPU stall warnings for normal RCU grace periods | ||
4 | * | ||
5 | * Copyright IBM Corporation, 2019 | ||
6 | * | ||
7 | * Author: Paul E. McKenney <paulmck@linux.ibm.com> | ||
8 | */ | ||
9 | |||
10 | ////////////////////////////////////////////////////////////////////////////// | ||
11 | // | ||
12 | // Controlling CPU stall warnings, including delay calculation. | ||
13 | |||
14 | /* panic() on RCU Stall sysctl. */ | ||
15 | int sysctl_panic_on_rcu_stall __read_mostly; | ||
16 | |||
17 | #ifdef CONFIG_PROVE_RCU | ||
18 | #define RCU_STALL_DELAY_DELTA (5 * HZ) | ||
19 | #else | ||
20 | #define RCU_STALL_DELAY_DELTA 0 | ||
21 | #endif | ||
22 | |||
23 | /* Limit-check stall timeouts specified at boottime and runtime. */ | ||
24 | int rcu_jiffies_till_stall_check(void) | ||
25 | { | ||
26 | int till_stall_check = READ_ONCE(rcu_cpu_stall_timeout); | ||
27 | |||
28 | /* | ||
29 | * Limit check must be consistent with the Kconfig limits | ||
30 | * for CONFIG_RCU_CPU_STALL_TIMEOUT. | ||
31 | */ | ||
32 | if (till_stall_check < 3) { | ||
33 | WRITE_ONCE(rcu_cpu_stall_timeout, 3); | ||
34 | till_stall_check = 3; | ||
35 | } else if (till_stall_check > 300) { | ||
36 | WRITE_ONCE(rcu_cpu_stall_timeout, 300); | ||
37 | till_stall_check = 300; | ||
38 | } | ||
39 | return till_stall_check * HZ + RCU_STALL_DELAY_DELTA; | ||
40 | } | ||
41 | EXPORT_SYMBOL_GPL(rcu_jiffies_till_stall_check); | ||
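A minimal user-space sketch of the clamping above, assuming HZ == 1000 and omitting the CONFIG_PROVE_RCU slack; in a running kernel the value is normally set via the rcupdate.rcu_cpu_stall_timeout boot parameter rather than by calling this helper directly:

#include <stdio.h>

#define HZ 1000				/* assumed tick rate for this sketch */

/* Mirror of the [3, 300]-second clamp in rcu_jiffies_till_stall_check(). */
static int stall_check_jiffies(int timeout_s)
{
	if (timeout_s < 3)
		timeout_s = 3;
	else if (timeout_s > 300)
		timeout_s = 300;
	return timeout_s * HZ;
}

int main(void)
{
	printf("%d\n", stall_check_jiffies(21));	/* 21000 */
	printf("%d\n", stall_check_jiffies(0));		/* 3000: clamped up */
	printf("%d\n", stall_check_jiffies(1000));	/* 300000: clamped down */
	return 0;
}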
42 | |||
43 | /* Don't do RCU CPU stall warnings during long sysrq printouts. */ | ||
44 | void rcu_sysrq_start(void) | ||
45 | { | ||
46 | if (!rcu_cpu_stall_suppress) | ||
47 | rcu_cpu_stall_suppress = 2; | ||
48 | } | ||
49 | |||
50 | void rcu_sysrq_end(void) | ||
51 | { | ||
52 | if (rcu_cpu_stall_suppress == 2) | ||
53 | rcu_cpu_stall_suppress = 0; | ||
54 | } | ||
55 | |||
56 | /* Don't print RCU CPU stall warnings during a kernel panic. */ | ||
57 | static int rcu_panic(struct notifier_block *this, unsigned long ev, void *ptr) | ||
58 | { | ||
59 | rcu_cpu_stall_suppress = 1; | ||
60 | return NOTIFY_DONE; | ||
61 | } | ||
62 | |||
63 | static struct notifier_block rcu_panic_block = { | ||
64 | .notifier_call = rcu_panic, | ||
65 | }; | ||
66 | |||
67 | static int __init check_cpu_stall_init(void) | ||
68 | { | ||
69 | atomic_notifier_chain_register(&panic_notifier_list, &rcu_panic_block); | ||
70 | return 0; | ||
71 | } | ||
72 | early_initcall(check_cpu_stall_init); | ||
73 | |||
74 | /* If so specified via sysctl, panic, yielding cleaner stall-warning output. */ | ||
75 | static void panic_on_rcu_stall(void) | ||
76 | { | ||
77 | if (sysctl_panic_on_rcu_stall) | ||
78 | panic("RCU Stall\n"); | ||
79 | } | ||
80 | |||
81 | /** | ||
82 | * rcu_cpu_stall_reset - prevent further stall warnings in current grace period | ||
83 | * | ||
84 | * Set the stall-warning timeout way off into the future, thus preventing | ||
85 | * any RCU CPU stall-warning messages from appearing in the current set of | ||
86 | * RCU grace periods. | ||
87 | * | ||
88 | * The caller must disable hard irqs. | ||
89 | */ | ||
90 | void rcu_cpu_stall_reset(void) | ||
91 | { | ||
92 | WRITE_ONCE(rcu_state.jiffies_stall, jiffies + ULONG_MAX / 2); | ||
93 | } | ||
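A brief illustration of why the timeout is pushed ULONG_MAX / 2 ahead rather than to ULONG_MAX: jiffies comparisons are modular, so half the counter space is the farthest well-defined "future". This stand-alone sketch reuses the ULONG_CMP_GE() definition from the kernel's rcupdate.h:

#include <limits.h>
#include <stdio.h>

/* Wraparound-safe comparison, as defined in include/linux/rcupdate.h. */
#define ULONG_CMP_GE(a, b) (ULONG_MAX / 2 >= (a) - (b))

int main(void)
{
	unsigned long now = 1000;
	unsigned long far = now + ULONG_MAX / 2;	/* farthest "future" */

	/* 0: the stall deadline is treated as not yet reached... */
	printf("%d\n", ULONG_CMP_GE(now, far));
	/* ...and stays unreached even ULONG_MAX / 4 ticks later. */
	printf("%d\n", ULONG_CMP_GE(now + ULONG_MAX / 4, far));
	return 0;
}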
94 | |||
95 | ////////////////////////////////////////////////////////////////////////////// | ||
96 | // | ||
97 | // Interaction with RCU grace periods | ||
98 | |||
99 | /* Start of new grace period, so record stall time (and forcing times). */ | ||
100 | static void record_gp_stall_check_time(void) | ||
101 | { | ||
102 | unsigned long j = jiffies; | ||
103 | unsigned long j1; | ||
104 | |||
105 | rcu_state.gp_start = j; | ||
106 | j1 = rcu_jiffies_till_stall_check(); | ||
107 | /* Record ->gp_start before ->jiffies_stall. */ | ||
108 | smp_store_release(&rcu_state.jiffies_stall, j + j1); /* ^^^ */ | ||
109 | rcu_state.jiffies_resched = j + j1 / 2; | ||
110 | rcu_state.n_force_qs_gpstart = READ_ONCE(rcu_state.n_force_qs); | ||
111 | } | ||
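The smp_store_release() above publishes ->gp_start before ->jiffies_stall; check_cpu_stall(), later in this file, reads them back in the opposite order under smp_rmb(). A minimal C11 user-space sketch of that release/acquire pairing (illustrative names, not kernel APIs):

#include <stdatomic.h>

static unsigned long gp_start;
static _Atomic unsigned long jiffies_stall;

/* Writer: the plain store to gp_start is ordered before the release store. */
static void start_gp(unsigned long j, unsigned long j1)
{
	gp_start = j;
	atomic_store_explicit(&jiffies_stall, j + j1, memory_order_release);
}

/* Reader: an acquire load that sees the new jiffies_stall is guaranteed
 * to also see the gp_start value written before it. */
static unsigned long read_gp_start(void)
{
	unsigned long js;

	js = atomic_load_explicit(&jiffies_stall, memory_order_acquire);
	return js ? gp_start : 0;
}

int main(void)
{
	start_gp(1000, 21000);
	return read_gp_start() != 1000;	/* 0: single-threaded sanity check */
}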
112 | |||
113 | /* Zero ->ticks_this_gp and snapshot the number of RCU softirq handlers. */ | ||
114 | static void zero_cpu_stall_ticks(struct rcu_data *rdp) | ||
115 | { | ||
116 | rdp->ticks_this_gp = 0; | ||
117 | rdp->softirq_snap = kstat_softirqs_cpu(RCU_SOFTIRQ, smp_processor_id()); | ||
118 | WRITE_ONCE(rdp->last_fqs_resched, jiffies); | ||
119 | } | ||
120 | |||
121 | /* | ||
122 | * If too much time has passed in the current grace period, and if | ||
123 | * so configured, go kick the relevant kthreads. | ||
124 | */ | ||
125 | static void rcu_stall_kick_kthreads(void) | ||
126 | { | ||
127 | unsigned long j; | ||
128 | |||
129 | if (!rcu_kick_kthreads) | ||
130 | return; | ||
131 | j = READ_ONCE(rcu_state.jiffies_kick_kthreads); | ||
132 | if (time_after(jiffies, j) && rcu_state.gp_kthread && | ||
133 | (rcu_gp_in_progress() || READ_ONCE(rcu_state.gp_flags))) { | ||
134 | WARN_ONCE(1, "Kicking %s grace-period kthread\n", | ||
135 | rcu_state.name); | ||
136 | rcu_ftrace_dump(DUMP_ALL); | ||
137 | wake_up_process(rcu_state.gp_kthread); | ||
138 | WRITE_ONCE(rcu_state.jiffies_kick_kthreads, j + HZ); | ||
139 | } | ||
140 | } | ||
141 | |||
142 | /* | ||
143 | * Handler for the irq_work request posted about halfway into the RCU CPU | ||
144 | * stall timeout, and used to detect excessive irq disabling. Set state | ||
145 | * appropriately, but just complain if there is unexpected state on entry. | ||
146 | */ | ||
147 | static void rcu_iw_handler(struct irq_work *iwp) | ||
148 | { | ||
149 | struct rcu_data *rdp; | ||
150 | struct rcu_node *rnp; | ||
151 | |||
152 | rdp = container_of(iwp, struct rcu_data, rcu_iw); | ||
153 | rnp = rdp->mynode; | ||
154 | raw_spin_lock_rcu_node(rnp); | ||
155 | if (!WARN_ON_ONCE(!rdp->rcu_iw_pending)) { | ||
156 | rdp->rcu_iw_gp_seq = rnp->gp_seq; | ||
157 | rdp->rcu_iw_pending = false; | ||
158 | } | ||
159 | raw_spin_unlock_rcu_node(rnp); | ||
160 | } | ||
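rcu_iw_handler() receives only the irq_work pointer and recovers its enclosing rcu_data with container_of(). A stand-alone illustration of that recovery, with toy structures in place of the kernel's:

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct irq_work { int pending; };
struct rcu_data { long rcu_iw_gp_seq; struct irq_work rcu_iw; };

/* Same shape as rcu_iw_handler(): map the member back to its container. */
static void handler(struct irq_work *iwp)
{
	struct rcu_data *rdp = container_of(iwp, struct rcu_data, rcu_iw);

	printf("rcu_iw_gp_seq=%ld\n", rdp->rcu_iw_gp_seq);
}

int main(void)
{
	struct rcu_data rd = { .rcu_iw_gp_seq = 42 };

	handler(&rd.rcu_iw);	/* prints rcu_iw_gp_seq=42 */
	return 0;
}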
161 | |||
162 | ////////////////////////////////////////////////////////////////////////////// | ||
163 | // | ||
164 | // Printing RCU CPU stall warnings | ||
165 | |||
166 | #ifdef CONFIG_PREEMPT | ||
167 | |||
168 | /* | ||
169 | * Dump detailed information for all tasks blocking the current RCU | ||
170 | * grace period on the specified rcu_node structure. | ||
171 | */ | ||
172 | static void rcu_print_detail_task_stall_rnp(struct rcu_node *rnp) | ||
173 | { | ||
174 | unsigned long flags; | ||
175 | struct task_struct *t; | ||
176 | |||
177 | raw_spin_lock_irqsave_rcu_node(rnp, flags); | ||
178 | if (!rcu_preempt_blocked_readers_cgp(rnp)) { | ||
179 | raw_spin_unlock_irqrestore_rcu_node(rnp, flags); | ||
180 | return; | ||
181 | } | ||
182 | t = list_entry(rnp->gp_tasks->prev, | ||
183 | struct task_struct, rcu_node_entry); | ||
184 | list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry) { | ||
185 | /* | ||
186 | * We could be printing a lot while holding a spinlock. | ||
187 | * Avoid triggering hard lockup. | ||
188 | */ | ||
189 | touch_nmi_watchdog(); | ||
190 | sched_show_task(t); | ||
191 | } | ||
192 | raw_spin_unlock_irqrestore_rcu_node(rnp, flags); | ||
193 | } | ||
194 | |||
195 | /* | ||
196 | * Scan the current list of tasks blocked within RCU read-side critical | ||
197 | * sections, printing out the tid of each. | ||
198 | */ | ||
199 | static int rcu_print_task_stall(struct rcu_node *rnp) | ||
200 | { | ||
201 | struct task_struct *t; | ||
202 | int ndetected = 0; | ||
203 | |||
204 | if (!rcu_preempt_blocked_readers_cgp(rnp)) | ||
205 | return 0; | ||
206 | pr_err("\tTasks blocked on level-%d rcu_node (CPUs %d-%d):", | ||
207 | rnp->level, rnp->grplo, rnp->grphi); | ||
208 | t = list_entry(rnp->gp_tasks->prev, | ||
209 | struct task_struct, rcu_node_entry); | ||
210 | list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry) { | ||
211 | pr_cont(" P%d", t->pid); | ||
212 | ndetected++; | ||
213 | } | ||
214 | pr_cont("\n"); | ||
215 | return ndetected; | ||
216 | } | ||
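The scan above primes its cursor with rnp->gp_tasks->prev because list_for_each_entry_continue() advances before visiting, so the first task printed is gp_tasks itself. A toy doubly-linked list (not list.h) showing the same off-by-one trick:

#include <stdio.h>

struct node { int pid; struct node *next, *prev; };

static void scan_from(struct node *head, struct node *first_blocked)
{
	struct node *t = first_blocked->prev;	/* prime one element early */

	for (t = t->next; t != head; t = t->next)	/* "continue" semantics */
		printf(" P%d", t->pid);
	printf("\n");
}

int main(void)
{
	struct node head, a = { .pid = 1 }, b = { .pid = 2 }, c = { .pid = 3 };

	head.next = &a; a.prev = &head;
	a.next = &b; b.prev = &a;
	b.next = &c; c.prev = &b;
	c.next = &head; head.prev = &c;
	scan_from(&head, &b);	/* prints " P2 P3" */
	return 0;
}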
217 | |||
218 | #else /* #ifdef CONFIG_PREEMPT */ | ||
219 | |||
220 | /* | ||
221 | * Because preemptible RCU does not exist, we never have to check for | ||
222 | * tasks blocked within RCU read-side critical sections. | ||
223 | */ | ||
224 | static void rcu_print_detail_task_stall_rnp(struct rcu_node *rnp) | ||
225 | { | ||
226 | } | ||
227 | |||
228 | /* | ||
229 | * Because preemptible RCU does not exist, we never have to check for | ||
230 | * tasks blocked within RCU read-side critical sections. | ||
231 | */ | ||
232 | static int rcu_print_task_stall(struct rcu_node *rnp) | ||
233 | { | ||
234 | return 0; | ||
235 | } | ||
236 | #endif /* #else #ifdef CONFIG_PREEMPT */ | ||
237 | |||
238 | /* | ||
239 | * Dump stacks of all tasks running on stalled CPUs. First try using | ||
240 | * NMIs, but fall back to manual remote stack tracing on architectures | ||
241 | * that don't support NMI-based stack dumps. The NMI-triggered stack | ||
242 | * traces are more accurate because they are printed by the target CPU. | ||
243 | */ | ||
244 | static void rcu_dump_cpu_stacks(void) | ||
245 | { | ||
246 | int cpu; | ||
247 | unsigned long flags; | ||
248 | struct rcu_node *rnp; | ||
249 | |||
250 | rcu_for_each_leaf_node(rnp) { | ||
251 | raw_spin_lock_irqsave_rcu_node(rnp, flags); | ||
252 | for_each_leaf_node_possible_cpu(rnp, cpu) | ||
253 | if (rnp->qsmask & leaf_node_cpu_bit(rnp, cpu)) | ||
254 | if (!trigger_single_cpu_backtrace(cpu)) | ||
255 | dump_cpu_task(cpu); | ||
256 | raw_spin_unlock_irqrestore_rcu_node(rnp, flags); | ||
257 | } | ||
258 | } | ||
259 | |||
260 | #ifdef CONFIG_RCU_FAST_NO_HZ | ||
261 | |||
262 | static void print_cpu_stall_fast_no_hz(char *cp, int cpu) | ||
263 | { | ||
264 | struct rcu_data *rdp = &per_cpu(rcu_data, cpu); | ||
265 | |||
266 | sprintf(cp, "last_accelerate: %04lx/%04lx, Nonlazy posted: %c%c%c", | ||
267 | rdp->last_accelerate & 0xffff, jiffies & 0xffff, | ||
268 | ".l"[rdp->all_lazy], | ||
269 | ".L"[!rcu_segcblist_n_nonlazy_cbs(&rdp->cblist)], | ||
270 | ".D"[!!rdp->tick_nohz_enabled_snap]); | ||
271 | } | ||
272 | |||
273 | #else /* #ifdef CONFIG_RCU_FAST_NO_HZ */ | ||
274 | |||
275 | static void print_cpu_stall_fast_no_hz(char *cp, int cpu) | ||
276 | { | ||
277 | *cp = '\0'; | ||
278 | } | ||
279 | |||
280 | #endif /* #else #ifdef CONFIG_RCU_FAST_NO_HZ */ | ||
281 | |||
282 | /* | ||
283 | * Print out diagnostic information for the specified stalled CPU. | ||
284 | * | ||
285 | * If the specified CPU is aware of the current RCU grace period, then | ||
286 | * print the number of scheduling clock interrupts the CPU has taken | ||
287 | * during the time that it has been aware. Otherwise, print the number | ||
288 | * of RCU grace periods that this CPU is ignorant of, for example, "1" | ||
289 | * if the CPU was aware of the previous grace period. | ||
290 | * | ||
291 | * Also print out idle and (if CONFIG_RCU_FAST_NO_HZ) idle-entry info. | ||
292 | */ | ||
293 | static void print_cpu_stall_info(int cpu) | ||
294 | { | ||
295 | unsigned long delta; | ||
296 | char fast_no_hz[72]; | ||
297 | struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu); | ||
298 | char *ticks_title; | ||
299 | unsigned long ticks_value; | ||
300 | |||
301 | /* | ||
302 | * We could be printing a lot while holding a spinlock. Avoid | ||
303 | * triggering hard lockup. | ||
304 | */ | ||
305 | touch_nmi_watchdog(); | ||
306 | |||
307 | ticks_value = rcu_seq_ctr(rcu_state.gp_seq - rdp->gp_seq); | ||
308 | if (ticks_value) { | ||
309 | ticks_title = "GPs behind"; | ||
310 | } else { | ||
311 | ticks_title = "ticks this GP"; | ||
312 | ticks_value = rdp->ticks_this_gp; | ||
313 | } | ||
314 | print_cpu_stall_fast_no_hz(fast_no_hz, cpu); | ||
315 | delta = rcu_seq_ctr(rdp->mynode->gp_seq - rdp->rcu_iw_gp_seq); | ||
316 | pr_err("\t%d-%c%c%c%c: (%lu %s) idle=%03x/%ld/%#lx softirq=%u/%u fqs=%ld %s\n", | ||
317 | cpu, | ||
318 | "O."[!!cpu_online(cpu)], | ||
319 | "o."[!!(rdp->grpmask & rdp->mynode->qsmaskinit)], | ||
320 | "N."[!!(rdp->grpmask & rdp->mynode->qsmaskinitnext)], | ||
321 | !IS_ENABLED(CONFIG_IRQ_WORK) ? '?' : | ||
322 | rdp->rcu_iw_pending ? (int)min(delta, 9UL) + '0' : | ||
323 | "!."[!delta], | ||
324 | ticks_value, ticks_title, | ||
325 | rcu_dynticks_snap(rdp) & 0xfff, | ||
326 | rdp->dynticks_nesting, rdp->dynticks_nmi_nesting, | ||
327 | rdp->softirq_snap, kstat_softirqs_cpu(RCU_SOFTIRQ, cpu), | ||
328 | READ_ONCE(rcu_state.n_force_qs) - rcu_state.n_force_qs_gpstart, | ||
329 | fast_no_hz); | ||
330 | } | ||
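In the pr_err() above, each "O."[!!expr]-style expression indexes a two-character string: a true condition selects index 1, the quiet '.', so the letter at index 0 appears only to flag the abnormal case. For cpu_online() that means '.' for an online CPU and 'O' marking one that is offline. A stand-alone demonstration of the idiom:

#include <stdio.h>

int main(void)
{
	int online = 1;

	printf("%c\n", "O."[!!online]);	/* '.' : online, nothing to flag */
	online = 0;
	printf("%c\n", "O."[!!online]);	/* 'O' : offline CPU flagged */
	return 0;
}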
331 | |||
332 | /* Complain about starvation of grace-period kthread. */ | ||
333 | static void rcu_check_gp_kthread_starvation(void) | ||
334 | { | ||
335 | struct task_struct *gpk = rcu_state.gp_kthread; | ||
336 | unsigned long j; | ||
337 | |||
338 | j = jiffies - READ_ONCE(rcu_state.gp_activity); | ||
339 | if (j > 2 * HZ) { | ||
340 | pr_err("%s kthread starved for %ld jiffies! g%ld f%#x %s(%d) ->state=%#lx ->cpu=%d\n", | ||
341 | rcu_state.name, j, | ||
342 | (long)rcu_seq_current(&rcu_state.gp_seq), | ||
343 | READ_ONCE(rcu_state.gp_flags), | ||
344 | gp_state_getname(rcu_state.gp_state), rcu_state.gp_state, | ||
345 | gpk ? gpk->state : ~0, gpk ? task_cpu(gpk) : -1); | ||
346 | if (gpk) { | ||
347 | pr_err("RCU grace-period kthread stack dump:\n"); | ||
348 | sched_show_task(gpk); | ||
349 | wake_up_process(gpk); | ||
350 | } | ||
351 | } | ||
352 | } | ||
353 | |||
354 | static void print_other_cpu_stall(unsigned long gp_seq) | ||
355 | { | ||
356 | int cpu; | ||
357 | unsigned long flags; | ||
358 | unsigned long gpa; | ||
359 | unsigned long j; | ||
360 | int ndetected = 0; | ||
361 | struct rcu_node *rnp; | ||
362 | long totqlen = 0; | ||
363 | |||
364 | /* Kick and suppress, if so configured. */ | ||
365 | rcu_stall_kick_kthreads(); | ||
366 | if (rcu_cpu_stall_suppress) | ||
367 | return; | ||
368 | |||
369 | /* | ||
370 | * OK, time to rat on our buddy... | ||
371 | * See Documentation/RCU/stallwarn.txt for info on how to debug | ||
372 | * RCU CPU stall warnings. | ||
373 | */ | ||
374 | pr_err("INFO: %s detected stalls on CPUs/tasks:\n", rcu_state.name); | ||
375 | rcu_for_each_leaf_node(rnp) { | ||
376 | raw_spin_lock_irqsave_rcu_node(rnp, flags); | ||
377 | ndetected += rcu_print_task_stall(rnp); | ||
378 | if (rnp->qsmask != 0) { | ||
379 | for_each_leaf_node_possible_cpu(rnp, cpu) | ||
380 | if (rnp->qsmask & leaf_node_cpu_bit(rnp, cpu)) { | ||
381 | print_cpu_stall_info(cpu); | ||
382 | ndetected++; | ||
383 | } | ||
384 | } | ||
385 | raw_spin_unlock_irqrestore_rcu_node(rnp, flags); | ||
386 | } | ||
387 | |||
388 | for_each_possible_cpu(cpu) | ||
389 | totqlen += rcu_get_n_cbs_cpu(cpu); | ||
390 | pr_cont("\t(detected by %d, t=%ld jiffies, g=%ld, q=%lu)\n", | ||
391 | smp_processor_id(), (long)(jiffies - rcu_state.gp_start), | ||
392 | (long)rcu_seq_current(&rcu_state.gp_seq), totqlen); | ||
393 | if (ndetected) { | ||
394 | rcu_dump_cpu_stacks(); | ||
395 | |||
396 | /* Complain about tasks blocking the grace period. */ | ||
397 | rcu_for_each_leaf_node(rnp) | ||
398 | rcu_print_detail_task_stall_rnp(rnp); | ||
399 | } else { | ||
400 | if (rcu_seq_current(&rcu_state.gp_seq) != gp_seq) { | ||
401 | pr_err("INFO: Stall ended before state dump start\n"); | ||
402 | } else { | ||
403 | j = jiffies; | ||
404 | gpa = READ_ONCE(rcu_state.gp_activity); | ||
405 | pr_err("All QSes seen, last %s kthread activity %ld (%ld-%ld), jiffies_till_next_fqs=%ld, root ->qsmask %#lx\n", | ||
406 | rcu_state.name, j - gpa, j, gpa, | ||
407 | READ_ONCE(jiffies_till_next_fqs), | ||
408 | rcu_get_root()->qsmask); | ||
409 | /* In this case, the current CPU might be at fault. */ | ||
410 | sched_show_task(current); | ||
411 | } | ||
412 | } | ||
413 | /* Rewrite if needed in case of slow consoles. */ | ||
414 | if (ULONG_CMP_GE(jiffies, READ_ONCE(rcu_state.jiffies_stall))) | ||
415 | WRITE_ONCE(rcu_state.jiffies_stall, | ||
416 | jiffies + 3 * rcu_jiffies_till_stall_check() + 3); | ||
417 | |||
418 | rcu_check_gp_kthread_starvation(); | ||
419 | |||
420 | panic_on_rcu_stall(); | ||
421 | |||
422 | rcu_force_quiescent_state(); /* Kick them all. */ | ||
423 | } | ||
424 | |||
425 | static void print_cpu_stall(void) | ||
426 | { | ||
427 | int cpu; | ||
428 | unsigned long flags; | ||
429 | struct rcu_data *rdp = this_cpu_ptr(&rcu_data); | ||
430 | struct rcu_node *rnp = rcu_get_root(); | ||
431 | long totqlen = 0; | ||
432 | |||
433 | /* Kick and suppress, if so configured. */ | ||
434 | rcu_stall_kick_kthreads(); | ||
435 | if (rcu_cpu_stall_suppress) | ||
436 | return; | ||
437 | |||
438 | /* | ||
439 | * OK, time to rat on ourselves... | ||
440 | * See Documentation/RCU/stallwarn.txt for info on how to debug | ||
441 | * RCU CPU stall warnings. | ||
442 | */ | ||
443 | pr_err("INFO: %s self-detected stall on CPU\n", rcu_state.name); | ||
444 | raw_spin_lock_irqsave_rcu_node(rdp->mynode, flags); | ||
445 | print_cpu_stall_info(smp_processor_id()); | ||
446 | raw_spin_unlock_irqrestore_rcu_node(rdp->mynode, flags); | ||
447 | for_each_possible_cpu(cpu) | ||
448 | totqlen += rcu_get_n_cbs_cpu(cpu); | ||
449 | pr_cont("\t(t=%lu jiffies g=%ld q=%lu)\n", | ||
450 | jiffies - rcu_state.gp_start, | ||
451 | (long)rcu_seq_current(&rcu_state.gp_seq), totqlen); | ||
452 | |||
453 | rcu_check_gp_kthread_starvation(); | ||
454 | |||
455 | rcu_dump_cpu_stacks(); | ||
456 | |||
457 | raw_spin_lock_irqsave_rcu_node(rnp, flags); | ||
458 | /* Rewrite if needed in case of slow consoles. */ | ||
459 | if (ULONG_CMP_GE(jiffies, READ_ONCE(rcu_state.jiffies_stall))) | ||
460 | WRITE_ONCE(rcu_state.jiffies_stall, | ||
461 | jiffies + 3 * rcu_jiffies_till_stall_check() + 3); | ||
462 | raw_spin_unlock_irqrestore_rcu_node(rnp, flags); | ||
463 | |||
464 | panic_on_rcu_stall(); | ||
465 | |||
466 | /* | ||
467 | * Attempt to revive the RCU machinery by forcing a context switch. | ||
468 | * | ||
469 | * A context switch would normally allow the RCU state machine to make | ||
470 | * progress and it could be we're stuck in kernel space without context | ||
471 | * switches for an entirely unreasonable amount of time. | ||
472 | */ | ||
473 | set_tsk_need_resched(current); | ||
474 | set_preempt_need_resched(); | ||
475 | } | ||
476 | |||
477 | static void check_cpu_stall(struct rcu_data *rdp) | ||
478 | { | ||
479 | unsigned long gs1; | ||
480 | unsigned long gs2; | ||
481 | unsigned long gps; | ||
482 | unsigned long j; | ||
483 | unsigned long jn; | ||
484 | unsigned long js; | ||
485 | struct rcu_node *rnp; | ||
486 | |||
487 | if ((rcu_cpu_stall_suppress && !rcu_kick_kthreads) || | ||
488 | !rcu_gp_in_progress()) | ||
489 | return; | ||
490 | rcu_stall_kick_kthreads(); | ||
491 | j = jiffies; | ||
492 | |||
493 | /* | ||
494 | * Lots of memory barriers to reject false positives. | ||
495 | * | ||
496 | * The idea is to pick up rcu_state.gp_seq, then | ||
497 | * rcu_state.jiffies_stall, then rcu_state.gp_start, and finally | ||
498 | * another copy of rcu_state.gp_seq. These values are updated in | ||
499 | * the opposite order with memory barriers (or equivalent) during | ||
500 | * grace-period initialization and cleanup. Now, a false positive | ||
501 | * can occur if we get a new value of rcu_state.gp_start and an old | ||
502 | * value of rcu_state.jiffies_stall. But given the memory barriers, | ||
503 | * the only way that this can happen is if one grace period ends | ||
504 | * and another starts between these two fetches. This is detected | ||
505 | * by comparing the second fetch of rcu_state.gp_seq with the | ||
506 | * previous fetch from rcu_state.gp_seq. | ||
507 | * | ||
508 | * Given this check, comparisons of jiffies, rcu_state.jiffies_stall, | ||
509 | * and rcu_state.gp_start suffice to forestall false positives. | ||
510 | */ | ||
511 | gs1 = READ_ONCE(rcu_state.gp_seq); | ||
512 | smp_rmb(); /* Pick up ->gp_seq first... */ | ||
513 | js = READ_ONCE(rcu_state.jiffies_stall); | ||
514 | smp_rmb(); /* ...then ->jiffies_stall before the rest... */ | ||
515 | gps = READ_ONCE(rcu_state.gp_start); | ||
516 | smp_rmb(); /* ...and finally ->gp_start before ->gp_seq again. */ | ||
517 | gs2 = READ_ONCE(rcu_state.gp_seq); | ||
518 | if (gs1 != gs2 || | ||
519 | ULONG_CMP_LT(j, js) || | ||
520 | ULONG_CMP_GE(gps, js)) | ||
521 | return; /* No stall or GP completed since entering function. */ | ||
522 | rnp = rdp->mynode; | ||
523 | jn = jiffies + 3 * rcu_jiffies_till_stall_check() + 3; | ||
524 | if (rcu_gp_in_progress() && | ||
525 | (READ_ONCE(rnp->qsmask) & rdp->grpmask) && | ||
526 | cmpxchg(&rcu_state.jiffies_stall, js, jn) == js) { | ||
527 | |||
528 | /* We haven't checked in, so go dump stack. */ | ||
529 | print_cpu_stall(); | ||
530 | |||
531 | } else if (rcu_gp_in_progress() && | ||
532 | ULONG_CMP_GE(j, js + RCU_STALL_RAT_DELAY) && | ||
533 | cmpxchg(&rcu_state.jiffies_stall, js, jn) == js) { | ||
534 | |||
535 | /* They had a few time units to dump stack, so complain. */ | ||
536 | print_other_cpu_stall(gs2); | ||
537 | } | ||
538 | } | ||
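A user-space C11 sketch of the snapshot-and-revalidate pattern above (illustrative types, not kernel APIs): read the sequence number, read the guarded fields, then re-read the sequence number, discarding the sample if a grace-period boundary slipped in between:

#include <stdatomic.h>
#include <stdbool.h>

struct gp_state {
	_Atomic unsigned long gp_seq;
	_Atomic unsigned long jiffies_stall;
	_Atomic unsigned long gp_start;
};

/* Returns false when gp_seq moved, i.e. the fetched pair is stale. */
static bool sample_gp_state(struct gp_state *s,
			    unsigned long *js, unsigned long *gps)
{
	unsigned long gs1, gs2;

	gs1 = atomic_load_explicit(&s->gp_seq, memory_order_acquire);
	*js = atomic_load_explicit(&s->jiffies_stall, memory_order_acquire);
	*gps = atomic_load_explicit(&s->gp_start, memory_order_acquire);
	gs2 = atomic_load_explicit(&s->gp_seq, memory_order_relaxed);
	return gs1 == gs2;	/* false: caller bails, as check_cpu_stall() does */
}

int main(void)
{
	struct gp_state s = { 0 };
	unsigned long js, gps;

	return !sample_gp_state(&s, &js, &gps);	/* 0: stable in one thread */
}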
539 | |||
540 | ////////////////////////////////////////////////////////////////////////////// | ||
541 | // | ||
542 | // RCU forward-progress mechanisms, including of callback invocation. | ||
543 | |||
544 | |||
545 | /* | ||
546 | * Show the state of the grace-period kthreads. | ||
547 | */ | ||
548 | void show_rcu_gp_kthreads(void) | ||
549 | { | ||
550 | int cpu; | ||
551 | unsigned long j; | ||
552 | unsigned long ja; | ||
553 | unsigned long jr; | ||
554 | unsigned long jw; | ||
555 | struct rcu_data *rdp; | ||
556 | struct rcu_node *rnp; | ||
557 | |||
558 | j = jiffies; | ||
559 | ja = j - READ_ONCE(rcu_state.gp_activity); | ||
560 | jr = j - READ_ONCE(rcu_state.gp_req_activity); | ||
561 | jw = j - READ_ONCE(rcu_state.gp_wake_time); | ||
562 | pr_info("%s: wait state: %s(%d) ->state: %#lx delta ->gp_activity %lu ->gp_req_activity %lu ->gp_wake_time %lu ->gp_wake_seq %ld ->gp_seq %ld ->gp_seq_needed %ld ->gp_flags %#x\n", | ||
563 | rcu_state.name, gp_state_getname(rcu_state.gp_state), | ||
564 | rcu_state.gp_state, | ||
565 | rcu_state.gp_kthread ? rcu_state.gp_kthread->state : 0x1ffffL, | ||
566 | ja, jr, jw, (long)READ_ONCE(rcu_state.gp_wake_seq), | ||
567 | (long)READ_ONCE(rcu_state.gp_seq), | ||
568 | (long)READ_ONCE(rcu_get_root()->gp_seq_needed), | ||
569 | READ_ONCE(rcu_state.gp_flags)); | ||
570 | rcu_for_each_node_breadth_first(rnp) { | ||
571 | if (ULONG_CMP_GE(rcu_state.gp_seq, rnp->gp_seq_needed)) | ||
572 | continue; | ||
573 | pr_info("\trcu_node %d:%d ->gp_seq %ld ->gp_seq_needed %ld\n", | ||
574 | rnp->grplo, rnp->grphi, (long)rnp->gp_seq, | ||
575 | (long)rnp->gp_seq_needed); | ||
576 | if (!rcu_is_leaf_node(rnp)) | ||
577 | continue; | ||
578 | for_each_leaf_node_possible_cpu(rnp, cpu) { | ||
579 | rdp = per_cpu_ptr(&rcu_data, cpu); | ||
580 | if (rdp->gpwrap || | ||
581 | ULONG_CMP_GE(rcu_state.gp_seq, | ||
582 | rdp->gp_seq_needed)) | ||
583 | continue; | ||
584 | pr_info("\tcpu %d ->gp_seq_needed %ld\n", | ||
585 | cpu, (long)rdp->gp_seq_needed); | ||
586 | } | ||
587 | } | ||
588 | /* sched_show_task(rcu_state.gp_kthread); */ | ||
589 | } | ||
590 | EXPORT_SYMBOL_GPL(show_rcu_gp_kthreads); | ||
591 | |||
592 | /* | ||
593 | * This function checks for grace-period requests that fail to motivate | ||
594 | * RCU to come out of its idle mode. | ||
595 | */ | ||
596 | static void rcu_check_gp_start_stall(struct rcu_node *rnp, struct rcu_data *rdp, | ||
597 | const unsigned long gpssdelay) | ||
598 | { | ||
599 | unsigned long flags; | ||
600 | unsigned long j; | ||
601 | struct rcu_node *rnp_root = rcu_get_root(); | ||
602 | static atomic_t warned = ATOMIC_INIT(0); | ||
603 | |||
604 | if (!IS_ENABLED(CONFIG_PROVE_RCU) || rcu_gp_in_progress() || | ||
605 | ULONG_CMP_GE(rnp_root->gp_seq, rnp_root->gp_seq_needed)) | ||
606 | return; | ||
607 | j = jiffies; /* Expensive access, and in common case don't get here. */ | ||
608 | if (time_before(j, READ_ONCE(rcu_state.gp_req_activity) + gpssdelay) || | ||
609 | time_before(j, READ_ONCE(rcu_state.gp_activity) + gpssdelay) || | ||
610 | atomic_read(&warned)) | ||
611 | return; | ||
612 | |||
613 | raw_spin_lock_irqsave_rcu_node(rnp, flags); | ||
614 | j = jiffies; | ||
615 | if (rcu_gp_in_progress() || | ||
616 | ULONG_CMP_GE(rnp_root->gp_seq, rnp_root->gp_seq_needed) || | ||
617 | time_before(j, READ_ONCE(rcu_state.gp_req_activity) + gpssdelay) || | ||
618 | time_before(j, READ_ONCE(rcu_state.gp_activity) + gpssdelay) || | ||
619 | atomic_read(&warned)) { | ||
620 | raw_spin_unlock_irqrestore_rcu_node(rnp, flags); | ||
621 | return; | ||
622 | } | ||
623 | /* Hold onto the leaf lock to make others see warned==1. */ | ||
624 | |||
625 | if (rnp_root != rnp) | ||
626 | raw_spin_lock_rcu_node(rnp_root); /* irqs already disabled. */ | ||
627 | j = jiffies; | ||
628 | if (rcu_gp_in_progress() || | ||
629 | ULONG_CMP_GE(rnp_root->gp_seq, rnp_root->gp_seq_needed) || | ||
630 | time_before(j, rcu_state.gp_req_activity + gpssdelay) || | ||
631 | time_before(j, rcu_state.gp_activity + gpssdelay) || | ||
632 | atomic_xchg(&warned, 1)) { | ||
633 | raw_spin_unlock_rcu_node(rnp_root); /* irqs remain disabled. */ | ||
634 | raw_spin_unlock_irqrestore_rcu_node(rnp, flags); | ||
635 | return; | ||
636 | } | ||
637 | WARN_ON(1); | ||
638 | if (rnp_root != rnp) | ||
639 | raw_spin_unlock_rcu_node(rnp_root); | ||
640 | raw_spin_unlock_irqrestore_rcu_node(rnp, flags); | ||
641 | show_rcu_gp_kthreads(); | ||
642 | } | ||
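A sketch of the warn-once idiom above: cheap unlocked reads of "warned" filter out the common case, and the final atomic exchange ensures exactly one caller emits the report even if several race past the early checks. User-space illustration, not kernel code:

#include <stdatomic.h>
#include <stdio.h>

static atomic_int warned;

static void maybe_complain(void)
{
	if (atomic_load(&warned))	/* fast path: already reported */
		return;
	/* ...the kernel rechecks its conditions under locks here... */
	if (atomic_exchange(&warned, 1))
		return;			/* another caller won the race */
	fprintf(stderr, "grace-period request appears stalled\n");
}

int main(void)
{
	maybe_complain();
	maybe_complain();	/* second call is silent */
	return 0;
}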
643 | |||
644 | /* | ||
645 | * Do a forward-progress check for rcutorture. This is normally invoked | ||
646 | * due to an OOM event. The argument "j" gives the time period during | ||
647 | * which rcutorture would like progress to have been made. | ||
648 | */ | ||
649 | void rcu_fwd_progress_check(unsigned long j) | ||
650 | { | ||
651 | unsigned long cbs; | ||
652 | int cpu; | ||
653 | unsigned long max_cbs = 0; | ||
654 | int max_cpu = -1; | ||
655 | struct rcu_data *rdp; | ||
656 | |||
657 | if (rcu_gp_in_progress()) { | ||
658 | pr_info("%s: GP age %lu jiffies\n", | ||
659 | __func__, jiffies - rcu_state.gp_start); | ||
660 | show_rcu_gp_kthreads(); | ||
661 | } else { | ||
662 | pr_info("%s: Last GP end %lu jiffies ago\n", | ||
663 | __func__, jiffies - rcu_state.gp_end); | ||
664 | preempt_disable(); | ||
665 | rdp = this_cpu_ptr(&rcu_data); | ||
666 | rcu_check_gp_start_stall(rdp->mynode, rdp, j); | ||
667 | preempt_enable(); | ||
668 | } | ||
669 | for_each_possible_cpu(cpu) { | ||
670 | cbs = rcu_get_n_cbs_cpu(cpu); | ||
671 | if (!cbs) | ||
672 | continue; | ||
673 | if (max_cpu < 0) | ||
674 | pr_info("%s: callbacks", __func__); | ||
675 | pr_cont(" %d: %lu", cpu, cbs); | ||
676 | if (cbs <= max_cbs) | ||
677 | continue; | ||
678 | max_cbs = cbs; | ||
679 | max_cpu = cpu; | ||
680 | } | ||
681 | if (max_cpu >= 0) | ||
682 | pr_cont("\n"); | ||
683 | } | ||
684 | EXPORT_SYMBOL_GPL(rcu_fwd_progress_check); | ||
685 | |||
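The header comment notes that rcu_fwd_progress_check() is normally driven by an OOM event. A hedged sketch of how a torture test might wire that up; the notifier names and the 8 * HZ window are hypothetical, and only rcu_fwd_progress_check() (added above) and register_oom_notifier() are real kernel interfaces:

#include <linux/notifier.h>
#include <linux/oom.h>

/* Hypothetical wiring: when the system comes under memory pressure,
 * ask RCU to report on forward progress (or the lack thereof). */
static int fwd_prog_oom_notify(struct notifier_block *nb,
			       unsigned long unused, void *freed)
{
	rcu_fwd_progress_check(8 * HZ);	/* expect progress within 8 s */
	return NOTIFY_OK;
}

static struct notifier_block fwd_prog_oom_nb = {
	.notifier_call = fwd_prog_oom_notify,
};

/* ...and at init time: register_oom_notifier(&fwd_prog_oom_nb); */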
686 | /* Commandeer a sysrq key to dump RCU's tree. */ | ||
687 | static bool sysrq_rcu; | ||
688 | module_param(sysrq_rcu, bool, 0444); | ||
689 | |||
690 | /* Dump grace-period-request information due to commandeered sysrq. */ | ||
691 | static void sysrq_show_rcu(int key) | ||
692 | { | ||
693 | show_rcu_gp_kthreads(); | ||
694 | } | ||
695 | |||
696 | static struct sysrq_key_op sysrq_rcudump_op = { | ||
697 | .handler = sysrq_show_rcu, | ||
698 | .help_msg = "show-rcu(y)", | ||
699 | .action_msg = "Show RCU tree", | ||
700 | .enable_mask = SYSRQ_ENABLE_DUMP, | ||
701 | }; | ||
702 | |||
703 | static int __init rcu_sysrq_init(void) | ||
704 | { | ||
705 | if (sysrq_rcu) | ||
706 | return register_sysrq_key('y', &sysrq_rcudump_op); | ||
707 | return 0; | ||
708 | } | ||
709 | early_initcall(rcu_sysrq_init); | ||
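Usage note: the 0444 permission makes sysrq_rcu read-only at runtime, so it must be enabled at boot; assuming this code lands in kernel/rcu/tree.c, that means rcutree.sysrq_rcu=1 on the kernel command line. Once enabled, "echo y > /proc/sysrq-trigger" (or the equivalent SysRq key chord, where SYSRQ_ENABLE_DUMP operations are permitted) invokes sysrq_show_rcu() and dumps the grace-period kthread state via show_rcu_gp_kthreads().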
diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c index cbaa976c5945..c3bf44ba42e5 100644 --- a/kernel/rcu/update.c +++ b/kernel/rcu/update.c | |||
@@ -424,68 +424,11 @@ EXPORT_SYMBOL_GPL(do_trace_rcu_torture_read); | |||
424 | #endif | 424 | #endif |
425 | 425 | ||
426 | #ifdef CONFIG_RCU_STALL_COMMON | 426 | #ifdef CONFIG_RCU_STALL_COMMON |
427 | |||
428 | #ifdef CONFIG_PROVE_RCU | ||
429 | #define RCU_STALL_DELAY_DELTA (5 * HZ) | ||
430 | #else | ||
431 | #define RCU_STALL_DELAY_DELTA 0 | ||
432 | #endif | ||
433 | |||
434 | int rcu_cpu_stall_suppress __read_mostly; /* 1 = suppress stall warnings. */ | 427 | int rcu_cpu_stall_suppress __read_mostly; /* 1 = suppress stall warnings. */ |
435 | EXPORT_SYMBOL_GPL(rcu_cpu_stall_suppress); | 428 | EXPORT_SYMBOL_GPL(rcu_cpu_stall_suppress); |
436 | static int rcu_cpu_stall_timeout __read_mostly = CONFIG_RCU_CPU_STALL_TIMEOUT; | ||
437 | |||
438 | module_param(rcu_cpu_stall_suppress, int, 0644); | 429 | module_param(rcu_cpu_stall_suppress, int, 0644); |
430 | int rcu_cpu_stall_timeout __read_mostly = CONFIG_RCU_CPU_STALL_TIMEOUT; | ||
439 | module_param(rcu_cpu_stall_timeout, int, 0644); | 431 | module_param(rcu_cpu_stall_timeout, int, 0644); |
440 | |||
441 | int rcu_jiffies_till_stall_check(void) | ||
442 | { | ||
443 | int till_stall_check = READ_ONCE(rcu_cpu_stall_timeout); | ||
444 | |||
445 | /* | ||
446 | * Limit check must be consistent with the Kconfig limits | ||
447 | * for CONFIG_RCU_CPU_STALL_TIMEOUT. | ||
448 | */ | ||
449 | if (till_stall_check < 3) { | ||
450 | WRITE_ONCE(rcu_cpu_stall_timeout, 3); | ||
451 | till_stall_check = 3; | ||
452 | } else if (till_stall_check > 300) { | ||
453 | WRITE_ONCE(rcu_cpu_stall_timeout, 300); | ||
454 | till_stall_check = 300; | ||
455 | } | ||
456 | return till_stall_check * HZ + RCU_STALL_DELAY_DELTA; | ||
457 | } | ||
458 | EXPORT_SYMBOL_GPL(rcu_jiffies_till_stall_check); | ||
459 | |||
460 | void rcu_sysrq_start(void) | ||
461 | { | ||
462 | if (!rcu_cpu_stall_suppress) | ||
463 | rcu_cpu_stall_suppress = 2; | ||
464 | } | ||
465 | |||
466 | void rcu_sysrq_end(void) | ||
467 | { | ||
468 | if (rcu_cpu_stall_suppress == 2) | ||
469 | rcu_cpu_stall_suppress = 0; | ||
470 | } | ||
471 | |||
472 | static int rcu_panic(struct notifier_block *this, unsigned long ev, void *ptr) | ||
473 | { | ||
474 | rcu_cpu_stall_suppress = 1; | ||
475 | return NOTIFY_DONE; | ||
476 | } | ||
477 | |||
478 | static struct notifier_block rcu_panic_block = { | ||
479 | .notifier_call = rcu_panic, | ||
480 | }; | ||
481 | |||
482 | static int __init check_cpu_stall_init(void) | ||
483 | { | ||
484 | atomic_notifier_chain_register(&panic_notifier_list, &rcu_panic_block); | ||
485 | return 0; | ||
486 | } | ||
487 | early_initcall(check_cpu_stall_init); | ||
488 | |||
489 | #endif /* #ifdef CONFIG_RCU_STALL_COMMON */ | 432 | #endif /* #ifdef CONFIG_RCU_STALL_COMMON */ |
490 | 433 | ||
491 | #ifdef CONFIG_TASKS_RCU | 434 | #ifdef CONFIG_TASKS_RCU |
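For reference, a worked example of the clamping in the rcu_jiffies_till_stall_check() removed above: with HZ=1000 and rcu_cpu_stall_timeout set to 500, the stored value is rewritten to the Kconfig maximum of 300 and the function returns 300 * 1000 = 300000 jiffies, plus the 5 * HZ of RCU_STALL_DELAY_DELTA slack when CONFIG_PROVE_RCU=y.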
diff --git a/kernel/torture.c b/kernel/torture.c index 8faa1a9aaeb9..17b2be9bde12 100644 --- a/kernel/torture.c +++ b/kernel/torture.c | |||
@@ -88,6 +88,8 @@ bool torture_offline(int cpu, long *n_offl_attempts, long *n_offl_successes, | |||
88 | 88 | ||
89 | if (!cpu_online(cpu) || !cpu_is_hotpluggable(cpu)) | 89 | if (!cpu_online(cpu) || !cpu_is_hotpluggable(cpu)) |
90 | return false; | 90 | return false; |
91 | if (num_online_cpus() <= 1) | ||
92 | return false; /* Can't offline the last CPU. */ | ||
91 | 93 | ||
92 | if (verbose > 1) | 94 | if (verbose > 1) |
93 | pr_alert("%s" TORTURE_FLAG | 95 | pr_alert("%s" TORTURE_FLAG |
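The new num_online_cpus() check spares torture_offline() a guaranteed failure: the CPU-hotplug core itself refuses to take down the last online CPU (cpu_down() returns -EBUSY in that case), so bailing out early avoids issuing a request that cannot succeed.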
diff --git a/net/ipv4/netfilter/ipt_CLUSTERIP.c b/net/ipv4/netfilter/ipt_CLUSTERIP.c index 835d50b279f5..a2a88ab07f7b 100644 --- a/net/ipv4/netfilter/ipt_CLUSTERIP.c +++ b/net/ipv4/netfilter/ipt_CLUSTERIP.c | |||
@@ -56,7 +56,7 @@ struct clusterip_config { | |||
56 | #endif | 56 | #endif |
57 | enum clusterip_hashmode hash_mode; /* which hashing mode */ | 57 | enum clusterip_hashmode hash_mode; /* which hashing mode */ |
58 | u_int32_t hash_initval; /* hash initialization */ | 58 | u_int32_t hash_initval; /* hash initialization */ |
59 | struct rcu_head rcu; /* for call_rcu_bh */ | 59 | struct rcu_head rcu; /* for call_rcu */ |
60 | struct net *net; /* netns for pernet list */ | 60 | struct net *net; /* netns for pernet list */ |
61 | char ifname[IFNAMSIZ]; /* device ifname */ | 61 | char ifname[IFNAMSIZ]; /* device ifname */ |
62 | }; | 62 | }; |
diff --git a/tools/testing/selftests/rcutorture/bin/configNR_CPUS.sh b/tools/testing/selftests/rcutorture/bin/configNR_CPUS.sh index 43540f1828cc..2deea2169fc2 100755 --- a/tools/testing/selftests/rcutorture/bin/configNR_CPUS.sh +++ b/tools/testing/selftests/rcutorture/bin/configNR_CPUS.sh | |||
@@ -1,4 +1,5 @@ | |||
1 | #!/bin/bash | 1 | #!/bin/bash |
2 | # SPDX-License-Identifier: GPL-2.0+ | ||
2 | # | 3 | # |
3 | # Extract the number of CPUs expected from the specified Kconfig-file | 4 | # Extract the number of CPUs expected from the specified Kconfig-file |
4 | # fragment by checking CONFIG_SMP and CONFIG_NR_CPUS. If the specified | 5 | # fragment by checking CONFIG_SMP and CONFIG_NR_CPUS. If the specified |
@@ -7,23 +8,9 @@ | |||
7 | # | 8 | # |
8 | # Usage: configNR_CPUS.sh config-frag | 9 | # Usage: configNR_CPUS.sh config-frag |
9 | # | 10 | # |
10 | # This program is free software; you can redistribute it and/or modify | ||
11 | # it under the terms of the GNU General Public License as published by | ||
12 | # the Free Software Foundation; either version 2 of the License, or | ||
13 | # (at your option) any later version. | ||
14 | # | ||
15 | # This program is distributed in the hope that it will be useful, | ||
16 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
17 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
18 | # GNU General Public License for more details. | ||
19 | # | ||
20 | # You should have received a copy of the GNU General Public License | ||
21 | # along with this program; if not, you can access it online at | ||
22 | # http://www.gnu.org/licenses/gpl-2.0.html. | ||
23 | # | ||
24 | # Copyright (C) IBM Corporation, 2013 | 11 | # Copyright (C) IBM Corporation, 2013 |
25 | # | 12 | # |
26 | # Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> | 13 | # Authors: Paul E. McKenney <paulmck@linux.ibm.com> |
27 | 14 | ||
28 | cf=$1 | 15 | cf=$1 |
29 | if test ! -r $cf | 16 | if test ! -r $cf |
diff --git a/tools/testing/selftests/rcutorture/bin/config_override.sh b/tools/testing/selftests/rcutorture/bin/config_override.sh index ef7fcbac3d42..90016c359e83 100755 --- a/tools/testing/selftests/rcutorture/bin/config_override.sh +++ b/tools/testing/selftests/rcutorture/bin/config_override.sh | |||
@@ -1,4 +1,5 @@ | |||
1 | #!/bin/bash | 1 | #!/bin/bash |
2 | # SPDX-License-Identifier: GPL-2.0+ | ||
2 | # | 3 | # |
3 | # config_override.sh base override | 4 | # config_override.sh base override |
4 | # | 5 | # |
@@ -6,23 +7,9 @@ | |||
6 | # that conflict with any in override, concatenating what remains and | 7 | # that conflict with any in override, concatenating what remains and |
7 | # sending the result to standard output. | 8 | # sending the result to standard output. |
8 | # | 9 | # |
9 | # This program is free software; you can redistribute it and/or modify | ||
10 | # it under the terms of the GNU General Public License as published by | ||
11 | # the Free Software Foundation; either version 2 of the License, or | ||
12 | # (at your option) any later version. | ||
13 | # | ||
14 | # This program is distributed in the hope that it will be useful, | ||
15 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
16 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
17 | # GNU General Public License for more details. | ||
18 | # | ||
19 | # You should have received a copy of the GNU General Public License | ||
20 | # along with this program; if not, you can access it online at | ||
21 | # http://www.gnu.org/licenses/gpl-2.0.html. | ||
22 | # | ||
23 | # Copyright (C) IBM Corporation, 2017 | 10 | # Copyright (C) IBM Corporation, 2017 |
24 | # | 11 | # |
25 | # Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> | 12 | # Authors: Paul E. McKenney <paulmck@linux.ibm.com> |
26 | 13 | ||
27 | base=$1 | 14 | base=$1 |
28 | if test -r $base | 15 | if test -r $base |
diff --git a/tools/testing/selftests/rcutorture/bin/configcheck.sh b/tools/testing/selftests/rcutorture/bin/configcheck.sh index 197deece7c7c..31584cee84d7 100755 --- a/tools/testing/selftests/rcutorture/bin/configcheck.sh +++ b/tools/testing/selftests/rcutorture/bin/configcheck.sh | |||
@@ -1,23 +1,11 @@ | |||
1 | #!/bin/bash | 1 | #!/bin/bash |
2 | # Usage: configcheck.sh .config .config-template | 2 | # SPDX-License-Identifier: GPL-2.0+ |
3 | # | ||
4 | # This program is free software; you can redistribute it and/or modify | ||
5 | # it under the terms of the GNU General Public License as published by | ||
6 | # the Free Software Foundation; either version 2 of the License, or | ||
7 | # (at your option) any later version. | ||
8 | # | 3 | # |
9 | # This program is distributed in the hope that it will be useful, | 4 | # Usage: configcheck.sh .config .config-template |
10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
12 | # GNU General Public License for more details. | ||
13 | # | ||
14 | # You should have received a copy of the GNU General Public License | ||
15 | # along with this program; if not, you can access it online at | ||
16 | # http://www.gnu.org/licenses/gpl-2.0.html. | ||
17 | # | 5 | # |
18 | # Copyright (C) IBM Corporation, 2011 | 6 | # Copyright (C) IBM Corporation, 2011 |
19 | # | 7 | # |
20 | # Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> | 8 | # Authors: Paul E. McKenney <paulmck@linux.ibm.com> |
21 | 9 | ||
22 | T=${TMPDIR-/tmp}/abat-chk-config.sh.$$ | 10 | T=${TMPDIR-/tmp}/abat-chk-config.sh.$$ |
23 | trap 'rm -rf $T' 0 | 11 | trap 'rm -rf $T' 0 |
@@ -26,6 +14,7 @@ mkdir $T | |||
26 | cat $1 > $T/.config | 14 | cat $1 > $T/.config |
27 | 15 | ||
28 | cat $2 | sed -e 's/\(.*\)=n/# \1 is not set/' -e 's/^#CHECK#//' | | 16 | cat $2 | sed -e 's/\(.*\)=n/# \1 is not set/' -e 's/^#CHECK#//' | |
17 | grep -v '^CONFIG_INITRAMFS_SOURCE' | | ||
29 | awk ' | 18 | awk ' |
30 | { | 19 | { |
31 | print "if grep -q \"" $0 "\" < '"$T/.config"'"; | 20 | print "if grep -q \"" $0 "\" < '"$T/.config"'"; |
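The added grep -v drops CONFIG_INITRAMFS_SOURCE from the template before the comparison, presumably so that a per-run, build-time-generated initramfs path is not flagged as a configuration mismatch.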
diff --git a/tools/testing/selftests/rcutorture/bin/configinit.sh b/tools/testing/selftests/rcutorture/bin/configinit.sh index 65541c21a544..40359486b3a8 100755 --- a/tools/testing/selftests/rcutorture/bin/configinit.sh +++ b/tools/testing/selftests/rcutorture/bin/configinit.sh | |||
@@ -1,4 +1,5 @@ | |||
1 | #!/bin/bash | 1 | #!/bin/bash |
2 | # SPDX-License-Identifier: GPL-2.0+ | ||
2 | # | 3 | # |
3 | # Usage: configinit.sh config-spec-file build-output-dir results-dir | 4 | # Usage: configinit.sh config-spec-file build-output-dir results-dir |
4 | # | 5 | # |
@@ -14,23 +15,9 @@ | |||
14 | # for example, "O=/tmp/foo". If this argument is omitted, the .config | 15 | # for example, "O=/tmp/foo". If this argument is omitted, the .config |
15 | # file will be generated directly in the current directory. | 16 | # file will be generated directly in the current directory. |
16 | # | 17 | # |
17 | # This program is free software; you can redistribute it and/or modify | ||
18 | # it under the terms of the GNU General Public License as published by | ||
19 | # the Free Software Foundation; either version 2 of the License, or | ||
20 | # (at your option) any later version. | ||
21 | # | ||
22 | # This program is distributed in the hope that it will be useful, | ||
23 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
24 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
25 | # GNU General Public License for more details. | ||
26 | # | ||
27 | # You should have received a copy of the GNU General Public License | ||
28 | # along with this program; if not, you can access it online at | ||
29 | # http://www.gnu.org/licenses/gpl-2.0.html. | ||
30 | # | ||
31 | # Copyright (C) IBM Corporation, 2013 | 18 | # Copyright (C) IBM Corporation, 2013 |
32 | # | 19 | # |
33 | # Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> | 20 | # Authors: Paul E. McKenney <paulmck@linux.ibm.com> |
34 | 21 | ||
35 | T=${TMPDIR-/tmp}/configinit.sh.$$ | 22 | T=${TMPDIR-/tmp}/configinit.sh.$$ |
36 | trap 'rm -rf $T' 0 | 23 | trap 'rm -rf $T' 0 |
diff --git a/tools/testing/selftests/rcutorture/bin/cpus2use.sh b/tools/testing/selftests/rcutorture/bin/cpus2use.sh index bb99cde3f5f9..ff7102212703 100755 --- a/tools/testing/selftests/rcutorture/bin/cpus2use.sh +++ b/tools/testing/selftests/rcutorture/bin/cpus2use.sh | |||
@@ -1,26 +1,13 @@ | |||
1 | #!/bin/bash | 1 | #!/bin/bash |
2 | # SPDX-License-Identifier: GPL-2.0+ | ||
2 | # | 3 | # |
3 | # Get an estimate of how CPU-hoggy to be. | 4 | # Get an estimate of how CPU-hoggy to be. |
4 | # | 5 | # |
5 | # Usage: cpus2use.sh | 6 | # Usage: cpus2use.sh |
6 | # | 7 | # |
7 | # This program is free software; you can redistribute it and/or modify | ||
8 | # it under the terms of the GNU General Public License as published by | ||
9 | # the Free Software Foundation; either version 2 of the License, or | ||
10 | # (at your option) any later version. | ||
11 | # | ||
12 | # This program is distributed in the hope that it will be useful, | ||
13 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
14 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
15 | # GNU General Public License for more details. | ||
16 | # | ||
17 | # You should have received a copy of the GNU General Public License | ||
18 | # along with this program; if not, you can access it online at | ||
19 | # http://www.gnu.org/licenses/gpl-2.0.html. | ||
20 | # | ||
21 | # Copyright (C) IBM Corporation, 2013 | 8 | # Copyright (C) IBM Corporation, 2013 |
22 | # | 9 | # |
23 | # Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> | 10 | # Authors: Paul E. McKenney <paulmck@linux.ibm.com> |
24 | 11 | ||
25 | ncpus=`grep '^processor' /proc/cpuinfo | wc -l` | 12 | ncpus=`grep '^processor' /proc/cpuinfo | wc -l` |
26 | idlecpus=`mpstat | tail -1 | \ | 13 | idlecpus=`mpstat | tail -1 | \ |
diff --git a/tools/testing/selftests/rcutorture/bin/functions.sh b/tools/testing/selftests/rcutorture/bin/functions.sh index 65f6655026f0..6bcb8b5b2ff2 100644 --- a/tools/testing/selftests/rcutorture/bin/functions.sh +++ b/tools/testing/selftests/rcutorture/bin/functions.sh | |||
@@ -1,24 +1,11 @@ | |||
1 | #!/bin/bash | 1 | #!/bin/bash |
2 | # SPDX-License-Identifier: GPL-2.0+ | ||
2 | # | 3 | # |
3 | # Shell functions for the rest of the scripts. | 4 | # Shell functions for the rest of the scripts. |
4 | # | 5 | # |
5 | # This program is free software; you can redistribute it and/or modify | ||
6 | # it under the terms of the GNU General Public License as published by | ||
7 | # the Free Software Foundation; either version 2 of the License, or | ||
8 | # (at your option) any later version. | ||
9 | # | ||
10 | # This program is distributed in the hope that it will be useful, | ||
11 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
12 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
13 | # GNU General Public License for more details. | ||
14 | # | ||
15 | # You should have received a copy of the GNU General Public License | ||
16 | # along with this program; if not, you can access it online at | ||
17 | # http://www.gnu.org/licenses/gpl-2.0.html. | ||
18 | # | ||
19 | # Copyright (C) IBM Corporation, 2013 | 6 | # Copyright (C) IBM Corporation, 2013 |
20 | # | 7 | # |
21 | # Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> | 8 | # Authors: Paul E. McKenney <paulmck@linux.ibm.com> |
22 | 9 | ||
23 | # bootparam_hotplug_cpu bootparam-string | 10 | # bootparam_hotplug_cpu bootparam-string |
24 | # | 11 | # |
diff --git a/tools/testing/selftests/rcutorture/bin/jitter.sh b/tools/testing/selftests/rcutorture/bin/jitter.sh index 3633828375e3..435b60933985 100755 --- a/tools/testing/selftests/rcutorture/bin/jitter.sh +++ b/tools/testing/selftests/rcutorture/bin/jitter.sh | |||
@@ -1,4 +1,5 @@ | |||
1 | #!/bin/bash | 1 | #!/bin/bash |
2 | # SPDX-License-Identifier: GPL-2.0+ | ||
2 | # | 3 | # |
3 | # Alternate sleeping and spinning on randomly selected CPUs. The purpose | 4 | # Alternate sleeping and spinning on randomly selected CPUs. The purpose |
4 | # of this script is to inflict random OS jitter on a concurrently running | 5 | # of this script is to inflict random OS jitter on a concurrently running |
@@ -11,23 +12,9 @@ | |||
11 | # sleepmax: Maximum microseconds to sleep, defaults to one second. | 12 | # sleepmax: Maximum microseconds to sleep, defaults to one second. |
12 | # spinmax: Maximum microseconds to spin, defaults to one millisecond. | 13 | # spinmax: Maximum microseconds to spin, defaults to one millisecond. |
13 | # | 14 | # |
14 | # This program is free software; you can redistribute it and/or modify | ||
15 | # it under the terms of the GNU General Public License as published by | ||
16 | # the Free Software Foundation; either version 2 of the License, or | ||
17 | # (at your option) any later version. | ||
18 | # | ||
19 | # This program is distributed in the hope that it will be useful, | ||
20 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
21 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
22 | # GNU General Public License for more details. | ||
23 | # | ||
24 | # You should have received a copy of the GNU General Public License | ||
25 | # along with this program; if not, you can access it online at | ||
26 | # http://www.gnu.org/licenses/gpl-2.0.html. | ||
27 | # | ||
28 | # Copyright (C) IBM Corporation, 2016 | 15 | # Copyright (C) IBM Corporation, 2016 |
29 | # | 16 | # |
30 | # Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> | 17 | # Authors: Paul E. McKenney <paulmck@linux.ibm.com> |
31 | 18 | ||
32 | me=$(($1 * 1000)) | 19 | me=$(($1 * 1000)) |
33 | duration=$2 | 20 | duration=$2 |
diff --git a/tools/testing/selftests/rcutorture/bin/kvm-build.sh b/tools/testing/selftests/rcutorture/bin/kvm-build.sh index 9115fcdb5617..c27a0bbb9c02 100755 --- a/tools/testing/selftests/rcutorture/bin/kvm-build.sh +++ b/tools/testing/selftests/rcutorture/bin/kvm-build.sh | |||
@@ -1,26 +1,13 @@ | |||
1 | #!/bin/bash | 1 | #!/bin/bash |
2 | # SPDX-License-Identifier: GPL-2.0+ | ||
2 | # | 3 | # |
3 | # Build a kvm-ready Linux kernel from the tree in the current directory. | 4 | # Build a kvm-ready Linux kernel from the tree in the current directory. |
4 | # | 5 | # |
5 | # Usage: kvm-build.sh config-template build-dir resdir | 6 | # Usage: kvm-build.sh config-template build-dir resdir |
6 | # | 7 | # |
7 | # This program is free software; you can redistribute it and/or modify | ||
8 | # it under the terms of the GNU General Public License as published by | ||
9 | # the Free Software Foundation; either version 2 of the License, or | ||
10 | # (at your option) any later version. | ||
11 | # | ||
12 | # This program is distributed in the hope that it will be useful, | ||
13 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
14 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
15 | # GNU General Public License for more details. | ||
16 | # | ||
17 | # You should have received a copy of the GNU General Public License | ||
18 | # along with this program; if not, you can access it online at | ||
19 | # http://www.gnu.org/licenses/gpl-2.0.html. | ||
20 | # | ||
21 | # Copyright (C) IBM Corporation, 2011 | 8 | # Copyright (C) IBM Corporation, 2011 |
22 | # | 9 | # |
23 | # Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> | 10 | # Authors: Paul E. McKenney <paulmck@linux.ibm.com> |
24 | 11 | ||
25 | config_template=${1} | 12 | config_template=${1} |
26 | if test -z "$config_template" -o ! -f "$config_template" -o ! -r "$config_template" | 13 | if test -z "$config_template" -o ! -f "$config_template" -o ! -r "$config_template" |
diff --git a/tools/testing/selftests/rcutorture/bin/kvm-find-errors.sh b/tools/testing/selftests/rcutorture/bin/kvm-find-errors.sh index 98f650c9bf54..8426fe1f15ee 100755 --- a/tools/testing/selftests/rcutorture/bin/kvm-find-errors.sh +++ b/tools/testing/selftests/rcutorture/bin/kvm-find-errors.sh | |||
@@ -1,4 +1,5 @@ | |||
1 | #!/bin/sh | 1 | #!/bin/sh |
2 | # SPDX-License-Identifier: GPL-2.0+ | ||
2 | # | 3 | # |
3 | # Invoke a text editor on all console.log files for all runs with diagnostics, | 4 | # Invoke a text editor on all console.log files for all runs with diagnostics, |
4 | # that is, on all such files having a console.log.diags counterpart. | 5 | # that is, on all such files having a console.log.diags counterpart. |
@@ -10,6 +11,10 @@ | |||
10 | # | 11 | # |
11 | # The "directory" above should end with the date/time directory, for example, | 12 | # The "directory" above should end with the date/time directory, for example, |
12 | # "tools/testing/selftests/rcutorture/res/2018.02.25-14:27:27". | 13 | # "tools/testing/selftests/rcutorture/res/2018.02.25-14:27:27". |
14 | # | ||
15 | # Copyright (C) IBM Corporation, 2018 | ||
16 | # | ||
17 | # Author: Paul E. McKenney <paulmck@linux.ibm.com> | ||
13 | 18 | ||
14 | rundir="${1}" | 19 | rundir="${1}" |
15 | if test -z "$rundir" -o ! -d "$rundir" | 20 | if test -z "$rundir" -o ! -d "$rundir" |
diff --git a/tools/testing/selftests/rcutorture/bin/kvm-recheck-lock.sh b/tools/testing/selftests/rcutorture/bin/kvm-recheck-lock.sh index 2de92f43ee8c..f3a7a5e2b89d 100755 --- a/tools/testing/selftests/rcutorture/bin/kvm-recheck-lock.sh +++ b/tools/testing/selftests/rcutorture/bin/kvm-recheck-lock.sh | |||
@@ -1,26 +1,13 @@ | |||
1 | #!/bin/bash | 1 | #!/bin/bash |
2 | # SPDX-License-Identifier: GPL-2.0+ | ||
2 | # | 3 | # |
3 | # Analyze a given results directory for locktorture progress. | 4 | # Analyze a given results directory for locktorture progress. |
4 | # | 5 | # |
5 | # Usage: kvm-recheck-lock.sh resdir | 6 | # Usage: kvm-recheck-lock.sh resdir |
6 | # | 7 | # |
7 | # This program is free software; you can redistribute it and/or modify | ||
8 | # it under the terms of the GNU General Public License as published by | ||
9 | # the Free Software Foundation; either version 2 of the License, or | ||
10 | # (at your option) any later version. | ||
11 | # | ||
12 | # This program is distributed in the hope that it will be useful, | ||
13 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
14 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
15 | # GNU General Public License for more details. | ||
16 | # | ||
17 | # You should have received a copy of the GNU General Public License | ||
18 | # along with this program; if not, you can access it online at | ||
19 | # http://www.gnu.org/licenses/gpl-2.0.html. | ||
20 | # | ||
21 | # Copyright (C) IBM Corporation, 2014 | 8 | # Copyright (C) IBM Corporation, 2014 |
22 | # | 9 | # |
23 | # Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> | 10 | # Authors: Paul E. McKenney <paulmck@linux.ibm.com> |
24 | 11 | ||
25 | i="$1" | 12 | i="$1" |
26 | if test -d "$i" -a -r "$i" | 13 | if test -d "$i" -a -r "$i" |
diff --git a/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcu.sh b/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcu.sh index 0fa8a61ccb7b..2a7f3f4756a7 100755 --- a/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcu.sh +++ b/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcu.sh | |||
@@ -1,26 +1,13 @@ | |||
1 | #!/bin/bash | 1 | #!/bin/bash |
2 | # SPDX-License-Identifier: GPL-2.0+ | ||
2 | # | 3 | # |
3 | # Analyze a given results directory for rcutorture progress. | 4 | # Analyze a given results directory for rcutorture progress. |
4 | # | 5 | # |
5 | # Usage: kvm-recheck-rcu.sh resdir | 6 | # Usage: kvm-recheck-rcu.sh resdir |
6 | # | 7 | # |
7 | # This program is free software; you can redistribute it and/or modify | ||
8 | # it under the terms of the GNU General Public License as published by | ||
9 | # the Free Software Foundation; either version 2 of the License, or | ||
10 | # (at your option) any later version. | ||
11 | # | ||
12 | # This program is distributed in the hope that it will be useful, | ||
13 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
14 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
15 | # GNU General Public License for more details. | ||
16 | # | ||
17 | # You should have received a copy of the GNU General Public License | ||
18 | # along with this program; if not, you can access it online at | ||
19 | # http://www.gnu.org/licenses/gpl-2.0.html. | ||
20 | # | ||
21 | # Copyright (C) IBM Corporation, 2014 | 8 | # Copyright (C) IBM Corporation, 2014 |
22 | # | 9 | # |
23 | # Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> | 10 | # Authors: Paul E. McKenney <paulmck@linux.ibm.com> |
24 | 11 | ||
25 | i="$1" | 12 | i="$1" |
26 | if test -d "$i" -a -r "$i" | 13 | if test -d "$i" -a -r "$i" |
diff --git a/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcuperf-ftrace.sh b/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcuperf-ftrace.sh index 8948f7926b21..7d3c2be66c64 100755 --- a/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcuperf-ftrace.sh +++ b/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcuperf-ftrace.sh | |||
@@ -1,4 +1,5 @@ | |||
1 | #!/bin/bash | 1 | #!/bin/bash |
2 | # SPDX-License-Identifier: GPL-2.0+ | ||
2 | # | 3 | # |
3 | # Analyze a given results directory for rcuperf performance measurements, | 4 | # Analyze a given results directory for rcuperf performance measurements, |
4 | # looking for ftrace data. Exits with 0 if data was found, analyzed, and | 5 | # looking for ftrace data. Exits with 0 if data was found, analyzed, and |
@@ -7,23 +8,9 @@ | |||
7 | # | 8 | # |
8 | # Usage: kvm-recheck-rcuperf-ftrace.sh resdir | 9 | # Usage: kvm-recheck-rcuperf-ftrace.sh resdir |
9 | # | 10 | # |
10 | # This program is free software; you can redistribute it and/or modify | ||
11 | # it under the terms of the GNU General Public License as published by | ||
12 | # the Free Software Foundation; either version 2 of the License, or | ||
13 | # (at your option) any later version. | ||
14 | # | ||
15 | # This program is distributed in the hope that it will be useful, | ||
16 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
17 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
18 | # GNU General Public License for more details. | ||
19 | # | ||
20 | # You should have received a copy of the GNU General Public License | ||
21 | # along with this program; if not, you can access it online at | ||
22 | # http://www.gnu.org/licenses/gpl-2.0.html. | ||
23 | # | ||
24 | # Copyright (C) IBM Corporation, 2016 | 11 | # Copyright (C) IBM Corporation, 2016 |
25 | # | 12 | # |
26 | # Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> | 13 | # Authors: Paul E. McKenney <paulmck@linux.ibm.com> |
27 | 14 | ||
28 | i="$1" | 15 | i="$1" |
29 | . functions.sh | 16 | . functions.sh |
diff --git a/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcuperf.sh b/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcuperf.sh index ccebf772fa1e..db0375a57f28 100755 --- a/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcuperf.sh +++ b/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcuperf.sh | |||
@@ -1,26 +1,13 @@ | |||
1 | #!/bin/bash | 1 | #!/bin/bash |
2 | # SPDX-License-Identifier: GPL-2.0+ | ||
2 | # | 3 | # |
3 | # Analyze a given results directory for rcuperf performance measurements. | 4 | # Analyze a given results directory for rcuperf performance measurements. |
4 | # | 5 | # |
5 | # Usage: kvm-recheck-rcuperf.sh resdir | 6 | # Usage: kvm-recheck-rcuperf.sh resdir |
6 | # | 7 | # |
7 | # This program is free software; you can redistribute it and/or modify | ||
8 | # it under the terms of the GNU General Public License as published by | ||
9 | # the Free Software Foundation; either version 2 of the License, or | ||
10 | # (at your option) any later version. | ||
11 | # | ||
12 | # This program is distributed in the hope that it will be useful, | ||
13 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
14 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
15 | # GNU General Public License for more details. | ||
16 | # | ||
17 | # You should have received a copy of the GNU General Public License | ||
18 | # along with this program; if not, you can access it online at | ||
19 | # http://www.gnu.org/licenses/gpl-2.0.html. | ||
20 | # | ||
21 | # Copyright (C) IBM Corporation, 2016 | 8 | # Copyright (C) IBM Corporation, 2016 |
22 | # | 9 | # |
23 | # Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> | 10 | # Authors: Paul E. McKenney <paulmck@linux.ibm.com> |
24 | 11 | ||
25 | i="$1" | 12 | i="$1" |
26 | if test -d "$i" -a -r "$i" | 13 | if test -d "$i" -a -r "$i" |
diff --git a/tools/testing/selftests/rcutorture/bin/kvm-recheck.sh b/tools/testing/selftests/rcutorture/bin/kvm-recheck.sh index c9bab57a77eb..2adde6aaafdb 100755 --- a/tools/testing/selftests/rcutorture/bin/kvm-recheck.sh +++ b/tools/testing/selftests/rcutorture/bin/kvm-recheck.sh | |||
@@ -1,4 +1,5 @@ | |||
1 | #!/bin/bash | 1 | #!/bin/bash |
2 | # SPDX-License-Identifier: GPL-2.0+ | ||
2 | # | 3 | # |
3 | # Given the results directories for previous KVM-based torture runs, | 4 | # Given the results directories for previous KVM-based torture runs, |
4 | # check the build and console output for errors. Given a directory | 5 | # check the build and console output for errors. Given a directory |
@@ -6,23 +7,9 @@ | |||
6 | # | 7 | # |
7 | # Usage: kvm-recheck.sh resdir ... | 8 | # Usage: kvm-recheck.sh resdir ... |
8 | # | 9 | # |
9 | # This program is free software; you can redistribute it and/or modify | ||
10 | # it under the terms of the GNU General Public License as published by | ||
11 | # the Free Software Foundation; either version 2 of the License, or | ||
12 | # (at your option) any later version. | ||
13 | # | ||
14 | # This program is distributed in the hope that it will be useful, | ||
15 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
16 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
17 | # GNU General Public License for more details. | ||
18 | # | ||
19 | # You should have received a copy of the GNU General Public License | ||
20 | # along with this program; if not, you can access it online at | ||
21 | # http://www.gnu.org/licenses/gpl-2.0.html. | ||
22 | # | ||
23 | # Copyright (C) IBM Corporation, 2011 | 10 | # Copyright (C) IBM Corporation, 2011 |
24 | # | 11 | # |
25 | # Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> | 12 | # Authors: Paul E. McKenney <paulmck@linux.ibm.com> |
26 | 13 | ||
27 | PATH=`pwd`/tools/testing/selftests/rcutorture/bin:$PATH; export PATH | 14 | PATH=`pwd`/tools/testing/selftests/rcutorture/bin:$PATH; export PATH |
28 | . functions.sh | 15 | . functions.sh |
diff --git a/tools/testing/selftests/rcutorture/bin/kvm-test-1-run.sh b/tools/testing/selftests/rcutorture/bin/kvm-test-1-run.sh index 58ca758a5786..0eb1ec16d78a 100755 --- a/tools/testing/selftests/rcutorture/bin/kvm-test-1-run.sh +++ b/tools/testing/selftests/rcutorture/bin/kvm-test-1-run.sh | |||
@@ -1,4 +1,5 @@ | |||
1 | #!/bin/bash | 1 | #!/bin/bash |
2 | # SPDX-License-Identifier: GPL-2.0+ | ||
2 | # | 3 | # |
3 | # Run a kvm-based test of the specified tree on the specified configs. | 4 | # Run a kvm-based test of the specified tree on the specified configs. |
4 | # Fully automated run and error checking, no graphics console. | 5 | # Fully automated run and error checking, no graphics console. |
@@ -20,23 +21,9 @@ | |||
20 | # | 21 | # |
21 | # More sophisticated argument parsing is clearly needed. | 22 | # More sophisticated argument parsing is clearly needed. |
22 | # | 23 | # |
23 | # This program is free software; you can redistribute it and/or modify | ||
24 | # it under the terms of the GNU General Public License as published by | ||
25 | # the Free Software Foundation; either version 2 of the License, or | ||
26 | # (at your option) any later version. | ||
27 | # | ||
28 | # This program is distributed in the hope that it will be useful, | ||
29 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
30 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
31 | # GNU General Public License for more details. | ||
32 | # | ||
33 | # You should have received a copy of the GNU General Public License | ||
34 | # along with this program; if not, you can access it online at | ||
35 | # http://www.gnu.org/licenses/gpl-2.0.html. | ||
36 | # | ||
37 | # Copyright (C) IBM Corporation, 2011 | 24 | # Copyright (C) IBM Corporation, 2011 |
38 | # | 25 | # |
39 | # Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> | 26 | # Authors: Paul E. McKenney <paulmck@linux.ibm.com> |
40 | 27 | ||
41 | T=${TMPDIR-/tmp}/kvm-test-1-run.sh.$$ | 28 | T=${TMPDIR-/tmp}/kvm-test-1-run.sh.$$ |
42 | trap 'rm -rf $T' 0 | 29 | trap 'rm -rf $T' 0 |
diff --git a/tools/testing/selftests/rcutorture/bin/kvm.sh b/tools/testing/selftests/rcutorture/bin/kvm.sh index 19864f1cb27a..8f1e337b9b54 100755 --- a/tools/testing/selftests/rcutorture/bin/kvm.sh +++ b/tools/testing/selftests/rcutorture/bin/kvm.sh | |||
@@ -1,4 +1,5 @@ | |||
1 | #!/bin/bash | 1 | #!/bin/bash |
2 | # SPDX-License-Identifier: GPL-2.0+ | ||
2 | # | 3 | # |
3 | # Run a series of tests under KVM. By default, this series is specified | 4 | # Run a series of tests under KVM. By default, this series is specified |
4 | # by the relevant CFLIST file, but can be overridden by the --configs | 5 | # by the relevant CFLIST file, but can be overridden by the --configs |
@@ -6,23 +7,9 @@ | |||
6 | # | 7 | # |
7 | # Usage: kvm.sh [ options ] | 8 | # Usage: kvm.sh [ options ] |
8 | # | 9 | # |
9 | # This program is free software; you can redistribute it and/or modify | ||
10 | # it under the terms of the GNU General Public License as published by | ||
11 | # the Free Software Foundation; either version 2 of the License, or | ||
12 | # (at your option) any later version. | ||
13 | # | ||
14 | # This program is distributed in the hope that it will be useful, | ||
15 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
16 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
17 | # GNU General Public License for more details. | ||
18 | # | ||
19 | # You should have received a copy of the GNU General Public License | ||
20 | # along with this program; if not, you can access it online at | ||
21 | # http://www.gnu.org/licenses/gpl-2.0.html. | ||
22 | # | ||
23 | # Copyright (C) IBM Corporation, 2011 | 10 | # Copyright (C) IBM Corporation, 2011 |
24 | # | 11 | # |
25 | # Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> | 12 | # Authors: Paul E. McKenney <paulmck@linux.ibm.com> |
26 | 13 | ||
27 | scriptname=$0 | 14 | scriptname=$0 |
28 | args="$*" | 15 | args="$*" |
diff --git a/tools/testing/selftests/rcutorture/bin/mkinitrd.sh b/tools/testing/selftests/rcutorture/bin/mkinitrd.sh index 83552bb007b4..6fa9bd1ddc09 100755 --- a/tools/testing/selftests/rcutorture/bin/mkinitrd.sh +++ b/tools/testing/selftests/rcutorture/bin/mkinitrd.sh | |||
@@ -1,21 +1,8 @@ | |||
1 | #!/bin/bash | 1 | #!/bin/bash |
2 | # SPDX-License-Identifier: GPL-2.0+ | ||
2 | # | 3 | # |
3 | # Create an initrd directory if one does not already exist. | 4 | # Create an initrd directory if one does not already exist. |
4 | # | 5 | # |
5 | # This program is free software; you can redistribute it and/or modify | ||
6 | # it under the terms of the GNU General Public License as published by | ||
7 | # the Free Software Foundation; either version 2 of the License, or | ||
8 | # (at your option) any later version. | ||
9 | # | ||
10 | # This program is distributed in the hope that it will be useful, | ||
11 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
12 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
13 | # GNU General Public License for more details. | ||
14 | # | ||
15 | # You should have received a copy of the GNU General Public License | ||
16 | # along with this program; if not, you can access it online at | ||
17 | # http://www.gnu.org/licenses/gpl-2.0.html. | ||
18 | # | ||
19 | # Copyright (C) IBM Corporation, 2013 | 6 | # Copyright (C) IBM Corporation, 2013 |
20 | # | 7 | # |
21 | # Author: Connor Shu <Connor.Shu@ibm.com> | 8 | # Author: Connor Shu <Connor.Shu@ibm.com> |
diff --git a/tools/testing/selftests/rcutorture/bin/parse-build.sh b/tools/testing/selftests/rcutorture/bin/parse-build.sh index 24fe5f822b28..0701b3bf6ade 100755 --- a/tools/testing/selftests/rcutorture/bin/parse-build.sh +++ b/tools/testing/selftests/rcutorture/bin/parse-build.sh | |||
@@ -1,4 +1,5 @@ | |||
1 | #!/bin/bash | 1 | #!/bin/bash |
2 | # SPDX-License-Identifier: GPL-2.0+ | ||
2 | # | 3 | # |
3 | # Check the build output from an rcutorture run for goodness. | 4 | # Check the build output from an rcutorture run for goodness. |
4 | # The "file" is a pathname on the local system, and "title" is | 5 | # The "file" is a pathname on the local system, and "title" is |
@@ -8,23 +9,9 @@ | |||
8 | # | 9 | # |
9 | # Usage: parse-build.sh file title | 10 | # Usage: parse-build.sh file title |
10 | # | 11 | # |
11 | # This program is free software; you can redistribute it and/or modify | ||
12 | # it under the terms of the GNU General Public License as published by | ||
13 | # the Free Software Foundation; either version 2 of the License, or | ||
14 | # (at your option) any later version. | ||
15 | # | ||
16 | # This program is distributed in the hope that it will be useful, | ||
17 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
18 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
19 | # GNU General Public License for more details. | ||
20 | # | ||
21 | # You should have received a copy of the GNU General Public License | ||
22 | # along with this program; if not, you can access it online at | ||
23 | # http://www.gnu.org/licenses/gpl-2.0.html. | ||
24 | # | ||
25 | # Copyright (C) IBM Corporation, 2011 | 12 | # Copyright (C) IBM Corporation, 2011 |
26 | # | 13 | # |
27 | # Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> | 14 | # Authors: Paul E. McKenney <paulmck@linux.ibm.com> |
28 | 15 | ||
29 | F=$1 | 16 | F=$1 |
30 | title=$2 | 17 | title=$2 |
diff --git a/tools/testing/selftests/rcutorture/bin/parse-console.sh b/tools/testing/selftests/rcutorture/bin/parse-console.sh index 84933f6aed77..4508373a922f 100755 --- a/tools/testing/selftests/rcutorture/bin/parse-console.sh +++ b/tools/testing/selftests/rcutorture/bin/parse-console.sh | |||
@@ -1,4 +1,5 @@ | |||
1 | #!/bin/bash | 1 | #!/bin/bash |
2 | # SPDX-License-Identifier: GPL-2.0+ | ||
2 | # | 3 | # |
3 | # Check the console output from an rcutorture run for oopses. | 4 | # Check the console output from an rcutorture run for oopses. |
4 | # The "file" is a pathname on the local system, and "title" is | 5 | # The "file" is a pathname on the local system, and "title" is |
@@ -6,23 +7,9 @@ | |||
6 | # | 7 | # |
7 | # Usage: parse-console.sh file title | 8 | # Usage: parse-console.sh file title |
8 | # | 9 | # |
9 | # This program is free software; you can redistribute it and/or modify | ||
10 | # it under the terms of the GNU General Public License as published by | ||
11 | # the Free Software Foundation; either version 2 of the License, or | ||
12 | # (at your option) any later version. | ||
13 | # | ||
14 | # This program is distributed in the hope that it will be useful, | ||
15 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
16 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
17 | # GNU General Public License for more details. | ||
18 | # | ||
19 | # You should have received a copy of the GNU General Public License | ||
20 | # along with this program; if not, you can access it online at | ||
21 | # http://www.gnu.org/licenses/gpl-2.0.html. | ||
22 | # | ||
23 | # Copyright (C) IBM Corporation, 2011 | 10 | # Copyright (C) IBM Corporation, 2011 |
24 | # | 11 | # |
25 | # Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> | 12 | # Authors: Paul E. McKenney <paulmck@linux.ibm.com> |
26 | 13 | ||
27 | T=${TMPDIR-/tmp}/parse-console.sh.$$ | 14 | T=${TMPDIR-/tmp}/parse-console.sh.$$ |
28 | file="$1" | 15 | file="$1" |
diff --git a/tools/testing/selftests/rcutorture/configs/lock/ver_functions.sh b/tools/testing/selftests/rcutorture/configs/lock/ver_functions.sh index 80eb646e1319..d3e4b2971f92 100644 --- a/tools/testing/selftests/rcutorture/configs/lock/ver_functions.sh +++ b/tools/testing/selftests/rcutorture/configs/lock/ver_functions.sh | |||
@@ -1,24 +1,11 @@ | |||
1 | #!/bin/bash | 1 | #!/bin/bash |
2 | # SPDX-License-Identifier: GPL-2.0+ | ||
2 | # | 3 | # |
3 | # Kernel-version-dependent shell functions for the rest of the scripts. | 4 | # Kernel-version-dependent shell functions for the rest of the scripts. |
4 | # | 5 | # |
5 | # This program is free software; you can redistribute it and/or modify | ||
6 | # it under the terms of the GNU General Public License as published by | ||
7 | # the Free Software Foundation; either version 2 of the License, or | ||
8 | # (at your option) any later version. | ||
9 | # | ||
10 | # This program is distributed in the hope that it will be useful, | ||
11 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
12 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
13 | # GNU General Public License for more details. | ||
14 | # | ||
15 | # You should have received a copy of the GNU General Public License | ||
16 | # along with this program; if not, you can access it online at | ||
17 | # http://www.gnu.org/licenses/gpl-2.0.html. | ||
18 | # | ||
19 | # Copyright (C) IBM Corporation, 2014 | 6 | # Copyright (C) IBM Corporation, 2014 |
20 | # | 7 | # |
21 | # Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> | 8 | # Authors: Paul E. McKenney <paulmck@linux.ibm.com> |
22 | 9 | ||
23 | # locktorture_param_onoff bootparam-string config-file | 10 | # locktorture_param_onoff bootparam-string config-file |
24 | # | 11 | # |
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/ver_functions.sh b/tools/testing/selftests/rcutorture/configs/rcu/ver_functions.sh index 7bab8246392b..effa415f9b92 100644 --- a/tools/testing/selftests/rcutorture/configs/rcu/ver_functions.sh +++ b/tools/testing/selftests/rcutorture/configs/rcu/ver_functions.sh | |||
@@ -1,24 +1,11 @@ | |||
1 | #!/bin/bash | 1 | #!/bin/bash |
2 | # SPDX-License-Identifier: GPL-2.0+ | ||
2 | # | 3 | # |
3 | # Kernel-version-dependent shell functions for the rest of the scripts. | 4 | # Kernel-version-dependent shell functions for the rest of the scripts. |
4 | # | 5 | # |
5 | # This program is free software; you can redistribute it and/or modify | ||
6 | # it under the terms of the GNU General Public License as published by | ||
7 | # the Free Software Foundation; either version 2 of the License, or | ||
8 | # (at your option) any later version. | ||
9 | # | ||
10 | # This program is distributed in the hope that it will be useful, | ||
11 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
12 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
13 | # GNU General Public License for more details. | ||
14 | # | ||
15 | # You should have received a copy of the GNU General Public License | ||
16 | # along with this program; if not, you can access it online at | ||
17 | # http://www.gnu.org/licenses/gpl-2.0.html. | ||
18 | # | ||
19 | # Copyright (C) IBM Corporation, 2013 | 6 | # Copyright (C) IBM Corporation, 2013 |
20 | # | 7 | # |
21 | # Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> | 8 | # Authors: Paul E. McKenney <paulmck@linux.ibm.com> |
22 | 9 | ||
23 | # rcutorture_param_n_barrier_cbs bootparam-string | 10 | # rcutorture_param_n_barrier_cbs bootparam-string |
24 | # | 11 | # |
diff --git a/tools/testing/selftests/rcutorture/configs/rcuperf/ver_functions.sh b/tools/testing/selftests/rcutorture/configs/rcuperf/ver_functions.sh index d36b8fd6f0fc..777d5b0c190f 100644 --- a/tools/testing/selftests/rcutorture/configs/rcuperf/ver_functions.sh +++ b/tools/testing/selftests/rcutorture/configs/rcuperf/ver_functions.sh | |||
@@ -1,24 +1,11 @@ | |||
1 | #!/bin/bash | 1 | #!/bin/bash |
2 | # SPDX-License-Identifier: GPL-2.0+ | ||
2 | # | 3 | # |
3 | # Torture-suite-dependent shell functions for the rest of the scripts. | 4 | # Torture-suite-dependent shell functions for the rest of the scripts. |
4 | # | 5 | # |
5 | # This program is free software; you can redistribute it and/or modify | ||
6 | # it under the terms of the GNU General Public License as published by | ||
7 | # the Free Software Foundation; either version 2 of the License, or | ||
8 | # (at your option) any later version. | ||
9 | # | ||
10 | # This program is distributed in the hope that it will be useful, | ||
11 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
12 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
13 | # GNU General Public License for more details. | ||
14 | # | ||
15 | # You should have received a copy of the GNU General Public License | ||
16 | # along with this program; if not, you can access it online at | ||
17 | # http://www.gnu.org/licenses/gpl-2.0.html. | ||
18 | # | ||
19 | # Copyright (C) IBM Corporation, 2015 | 6 | # Copyright (C) IBM Corporation, 2015 |
20 | # | 7 | # |
21 | # Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com> | 8 | # Authors: Paul E. McKenney <paulmck@linux.ibm.com> |
22 | 9 | ||
23 | # per_version_boot_params bootparam-string config-file seconds | 10 | # per_version_boot_params bootparam-string config-file seconds |
24 | # | 11 | # |