-rw-r--r--  Documentation/RCU/arrayRCU.txt                                |  20
-rw-r--r--  Documentation/RCU/lockdep.txt                                 |  10
-rw-r--r--  Documentation/RCU/rcu_dereference.txt                         |  33
-rw-r--r--  Documentation/RCU/whatisRCU.txt                               |   2
-rw-r--r--  Documentation/kernel-parameters.txt                           |  33
-rw-r--r--  Documentation/memory-barriers.txt                             |  55
-rw-r--r--  arch/powerpc/include/asm/barrier.h                            |   1
-rw-r--r--  arch/x86/kernel/cpu/mcheck/mce.c                              |  15
-rw-r--r--  include/linux/compiler.h                                      |  16
-rw-r--r--  include/linux/rculist.h                                       |   4
-rw-r--r--  include/linux/rcupdate.h                                      |  58
-rw-r--r--  include/linux/rcutiny.h                                       |  16
-rw-r--r--  include/linux/rcutree.h                                       |   7
-rw-r--r--  init/Kconfig                                                  |  72
-rw-r--r--  kernel/cpu.c                                                  |   4
-rw-r--r--  kernel/events/ring_buffer.c                                   |   2
-rw-r--r--  kernel/locking/locktorture.c                                  |  14
-rw-r--r--  kernel/rcu/rcutorture.c                                       | 103
-rw-r--r--  kernel/rcu/tiny.c                                             |  38
-rw-r--r--  kernel/rcu/tree.c                                             | 181
-rw-r--r--  kernel/rcu/tree.h                                             |  35
-rw-r--r--  kernel/rcu/tree_plugin.h                                      | 123
-rw-r--r--  lib/Kconfig.debug                                             |  66
-rwxr-xr-x  tools/testing/selftests/rcutorture/bin/configinit.sh          |   2
-rwxr-xr-x  tools/testing/selftests/rcutorture/bin/kvm-recheck.sh         |   4
-rwxr-xr-x  tools/testing/selftests/rcutorture/bin/kvm.sh                 |  25
-rw-r--r--  tools/testing/selftests/rcutorture/configs/rcu/CFcommon       |   2
-rw-r--r--  tools/testing/selftests/rcutorture/configs/rcu/SRCU-N         |   1
-rw-r--r--  tools/testing/selftests/rcutorture/configs/rcu/SRCU-P         |   1
-rw-r--r--  tools/testing/selftests/rcutorture/configs/rcu/SRCU-P.boot    |   2
-rw-r--r--  tools/testing/selftests/rcutorture/configs/rcu/TASKS01        |   5
-rw-r--r--  tools/testing/selftests/rcutorture/configs/rcu/TASKS02        |   1
-rw-r--r--  tools/testing/selftests/rcutorture/configs/rcu/TASKS03        |   2
-rw-r--r--  tools/testing/selftests/rcutorture/configs/rcu/TINY02         |   2
-rw-r--r--  tools/testing/selftests/rcutorture/configs/rcu/TINY02.boot    |   1
-rw-r--r--  tools/testing/selftests/rcutorture/configs/rcu/TREE01         |   1
-rw-r--r--  tools/testing/selftests/rcutorture/configs/rcu/TREE02         |   2
-rw-r--r--  tools/testing/selftests/rcutorture/configs/rcu/TREE02-T       |   1
-rw-r--r--  tools/testing/selftests/rcutorture/configs/rcu/TREE03         |   8
-rw-r--r--  tools/testing/selftests/rcutorture/configs/rcu/TREE03.boot    |   1
-rw-r--r--  tools/testing/selftests/rcutorture/configs/rcu/TREE04         |   8
-rw-r--r--  tools/testing/selftests/rcutorture/configs/rcu/TREE05         |   4
-rw-r--r--  tools/testing/selftests/rcutorture/configs/rcu/TREE06         |   4
-rw-r--r--  tools/testing/selftests/rcutorture/configs/rcu/TREE06.boot    |   1
-rw-r--r--  tools/testing/selftests/rcutorture/configs/rcu/TREE07         |   4
-rw-r--r--  tools/testing/selftests/rcutorture/configs/rcu/TREE08         |   6
-rw-r--r--  tools/testing/selftests/rcutorture/configs/rcu/TREE08-T       |   1
-rw-r--r--  tools/testing/selftests/rcutorture/configs/rcu/TREE08-T.boot  |   1
-rw-r--r--  tools/testing/selftests/rcutorture/configs/rcu/TREE08.boot    |   1
-rw-r--r--  tools/testing/selftests/rcutorture/configs/rcu/TREE09         |   1
-rw-r--r--  tools/testing/selftests/rcutorture/doc/TREE_RCU-kconfig.txt   |  36
51 files changed, 607 insertions, 429 deletions
diff --git a/Documentation/RCU/arrayRCU.txt b/Documentation/RCU/arrayRCU.txt
index 453ebe6953ee..f05a9afb2c39 100644
--- a/Documentation/RCU/arrayRCU.txt
+++ b/Documentation/RCU/arrayRCU.txt
@@ -10,7 +10,19 @@ also be used to protect arrays. Three situations are as follows:
10 10
113. Resizeable Arrays 113. Resizeable Arrays
12 12
13Each of these situations are discussed below. 13Each of these three situations involves an RCU-protected pointer to an
14array that is separately indexed. It might be tempting to consider use
15of RCU to instead protect the index into an array; however, this use
16case is -not- supported. The problem with RCU-protected indexes into
17arrays is that compilers can play way too many optimization games with
18integers, which means that the rules governing handling of these indexes
19are far more trouble than they are worth. If RCU-protected indexes into
20arrays prove to be particularly valuable (which they have not thus far),
21explicit cooperation from the compiler will be required to permit them
22to be safely used.
23
24That aside, each of the three RCU-protected pointer situations is
25described in the following sections.
14 26
15 27
16Situation 1: Hash Tables 28Situation 1: Hash Tables
@@ -36,9 +48,9 @@ Quick Quiz: Why is it so important that updates be rare when
36Situation 3: Resizeable Arrays 48Situation 3: Resizeable Arrays
37 49
38Use of RCU for resizeable arrays is demonstrated by the grow_ary() 50Use of RCU for resizeable arrays is demonstrated by the grow_ary()
39function used by the System V IPC code. The array is used to map from 51function formerly used by the System V IPC code. The array is used
40semaphore, message-queue, and shared-memory IDs to the data structure 52to map from semaphore, message-queue, and shared-memory IDs to the data
41that represents the corresponding IPC construct. The grow_ary() 53structure that represents the corresponding IPC construct. The grow_ary()
42function does not acquire any locks; instead its caller must hold the 54function does not acquire any locks; instead its caller must hold the
43ids->sem semaphore. 55ids->sem semaphore.
44 56
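For concreteness, here is a minimal sketch, not part of the patch, of the
supported pattern that the new arrayRCU.txt text describes: RCU protects
the pointer to the array, while the index stays an ordinary integer. The
names struct foo, foo_arr, and foo_lookup_key() are invented for
illustration.

        #include <linux/rcupdate.h>

        struct foo {
                int key;
        };

        struct foo_array {
                int nr;                 /* Number of elements. */
                struct foo entries[];   /* Flexible array of elements. */
        };

        static struct foo_array __rcu *foo_arr; /* Hypothetical global. */

        /* Reader: snapshot the array pointer once, then index the snapshot. */
        static int foo_lookup_key(int i)
        {
                struct foo_array *a;
                int ret = -1;

                rcu_read_lock();
                a = rcu_dereference(foo_arr);
                if (a && i < a->nr)
                        ret = a->entries[i].key; /* Plain index into the snapshot. */
                rcu_read_unlock();
                return ret;
        }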
diff --git a/Documentation/RCU/lockdep.txt b/Documentation/RCU/lockdep.txt
index cd83d2348fef..da51d3068850 100644
--- a/Documentation/RCU/lockdep.txt
+++ b/Documentation/RCU/lockdep.txt
@@ -47,11 +47,6 @@ checking of rcu_dereference() primitives:
47 Use explicit check expression "c" along with 47 Use explicit check expression "c" along with
48 srcu_read_lock_held()(). This is useful in code that 48 srcu_read_lock_held()(). This is useful in code that
49 is invoked by both SRCU readers and updaters. 49 is invoked by both SRCU readers and updaters.
50 rcu_dereference_index_check(p, c):
51 Use explicit check expression "c", but the caller
52 must supply one of the rcu_read_lock_held() functions.
53 This is useful in code that uses RCU-protected arrays
54 that is invoked by both RCU readers and updaters.
55 rcu_dereference_raw(p): 50 rcu_dereference_raw(p):
56 Don't check. (Use sparingly, if at all.) 51 Don't check. (Use sparingly, if at all.)
57 rcu_dereference_protected(p, c): 52 rcu_dereference_protected(p, c):
@@ -64,11 +59,6 @@ checking of rcu_dereference() primitives:
64 but retain the compiler constraints that prevent duplicating 59 but retain the compiler constraints that prevent duplicating
65 or coalescsing. This is useful when when testing the 60 or coalescsing. This is useful when when testing the
66 value of the pointer itself, for example, against NULL. 61 value of the pointer itself, for example, against NULL.
67 rcu_access_index(idx):
68 Return the value of the index and omit all barriers, but
69 retain the compiler constraints that prevent duplicating
70 or coalescsing. This is useful when when testing the
71 value of the index itself, for example, against -1.
72 62
73The rcu_dereference_check() check expression can be any boolean 63The rcu_dereference_check() check expression can be any boolean
74expression, but would normally include a lockdep expression. However, 64expression, but would normally include a lockdep expression. However,
diff --git a/Documentation/RCU/rcu_dereference.txt b/Documentation/RCU/rcu_dereference.txt
index 2d05c9241a33..1e6c0da994f5 100644
--- a/Documentation/RCU/rcu_dereference.txt
+++ b/Documentation/RCU/rcu_dereference.txt
@@ -25,17 +25,6 @@ o You must use one of the rcu_dereference() family of primitives
25 for an example where the compiler can in fact deduce the exact 25 for an example where the compiler can in fact deduce the exact
26 value of the pointer, and thus cause misordering. 26 value of the pointer, and thus cause misordering.
27 27
28o Do not use single-element RCU-protected arrays. The compiler
29 is within its right to assume that the value of an index into
30 such an array must necessarily evaluate to zero. The compiler
31 could then substitute the constant zero for the computation, so
32 that the array index no longer depended on the value returned
33 by rcu_dereference(). If the array index no longer depends
34 on rcu_dereference(), then both the compiler and the CPU
35 are within their rights to order the array access before the
36 rcu_dereference(), which can cause the array access to return
37 garbage.
38
39o Avoid cancellation when using the "+" and "-" infix arithmetic 28o Avoid cancellation when using the "+" and "-" infix arithmetic
40 operators. For example, for a given variable "x", avoid 29 operators. For example, for a given variable "x", avoid
41 "(x-x)". There are similar arithmetic pitfalls from other 30 "(x-x)". There are similar arithmetic pitfalls from other
@@ -76,14 +65,15 @@ o Do not use the results from the boolean "&&" and "||" when
76 dereferencing. For example, the following (rather improbable) 65 dereferencing. For example, the following (rather improbable)
77 code is buggy: 66 code is buggy:
78 67
79 int a[2]; 68 int *p;
80 int index; 69 int *q;
81 int force_zero_index = 1;
82 70
83 ... 71 ...
84 72
85 r1 = rcu_dereference(i1) 73 p = rcu_dereference(gp)
86 r2 = a[r1 && force_zero_index]; /* BUGGY!!! */ 74 q = &global_q;
75 q += p != &oom_p1 && p != &oom_p2;
76 r1 = *q; /* BUGGY!!! */
87 77
88 The reason this is buggy is that "&&" and "||" are often compiled 78 The reason this is buggy is that "&&" and "||" are often compiled
89 using branches. While weak-memory machines such as ARM or PowerPC 79 using branches. While weak-memory machines such as ARM or PowerPC
@@ -94,14 +84,15 @@ o Do not use the results from relational operators ("==", "!=",
94 ">", ">=", "<", or "<=") when dereferencing. For example, 84 ">", ">=", "<", or "<=") when dereferencing. For example,
95 the following (quite strange) code is buggy: 85 the following (quite strange) code is buggy:
96 86
97 int a[2]; 87 int *p;
98 int index; 88 int *q;
99 int flip_index = 0;
100 89
101 ... 90 ...
102 91
103 r1 = rcu_dereference(i1) 92 p = rcu_dereference(gp)
104 r2 = a[r1 != flip_index]; /* BUGGY!!! */ 93 q = &global_q;
94 q += p > &oom_p;
95 r1 = *q; /* BUGGY!!! */
105 96
106 As before, the reason this is buggy is that relational operators 97 As before, the reason this is buggy is that relational operators
107 are often compiled using branches. And as before, although 98 are often compiled using branches. And as before, although
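A sketch of the safe counterpart to the two buggy examples above, not part
of the patch, may help: comparing the result of rcu_dereference() is fine
for control flow, provided the pointer that is eventually dereferenced is
the one rcu_dereference() returned, not an address computed from the
comparison. The names gp and special_p are invented.

        extern int __rcu *gp;   /* Hypothetical RCU-protected pointer. */
        extern int special_p;   /* Hypothetical sentinel object. */
        int *p;
        int r1;

        rcu_read_lock();
        p = rcu_dereference(gp);
        if (p == &special_p) {
                r1 = -1;        /* Comparison drives control flow only. */
        } else {
                r1 = *p;        /* Dereference uses p itself, so the address
                                   dependency from rcu_dereference() survives. */
        }
        rcu_read_unlock();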
diff --git a/Documentation/RCU/whatisRCU.txt b/Documentation/RCU/whatisRCU.txt
index 16622c9e86b5..5746b0c77f3e 100644
--- a/Documentation/RCU/whatisRCU.txt
+++ b/Documentation/RCU/whatisRCU.txt
@@ -881,9 +881,7 @@ SRCU: Initialization/cleanup
881 881
882All: lockdep-checked RCU-protected pointer access 882All: lockdep-checked RCU-protected pointer access
883 883
884 rcu_access_index
885 rcu_access_pointer 884 rcu_access_pointer
886 rcu_dereference_index_check
887 rcu_dereference_raw 885 rcu_dereference_raw
888 rcu_lockdep_assert 886 rcu_lockdep_assert
889 rcu_sleep_check 887 rcu_sleep_check
diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index 61ab1628a057..0b7f3e7a029c 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -2992,11 +2992,34 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
2992 Set maximum number of finished RCU callbacks to 2992 Set maximum number of finished RCU callbacks to
2993 process in one batch. 2993 process in one batch.
2994 2994
2995 rcutree.dump_tree= [KNL]
2996 Dump the structure of the rcu_node combining tree
2997 out at early boot. This is used for diagnostic
2998 purposes, to verify correct tree setup.
2999
3000 rcutree.gp_cleanup_delay= [KNL]
3001 Set the number of jiffies to delay each step of
3002 RCU grace-period cleanup. This only has effect
3003 when CONFIG_RCU_TORTURE_TEST_SLOW_CLEANUP is set.
3004
2995 rcutree.gp_init_delay= [KNL] 3005 rcutree.gp_init_delay= [KNL]
2996 Set the number of jiffies to delay each step of 3006 Set the number of jiffies to delay each step of
2997 RCU grace-period initialization. This only has 3007 RCU grace-period initialization. This only has
2998 effect when CONFIG_RCU_TORTURE_TEST_SLOW_INIT is 3008 effect when CONFIG_RCU_TORTURE_TEST_SLOW_INIT
2999 set. 3009 is set.
3010
3011 rcutree.gp_preinit_delay= [KNL]
3012 Set the number of jiffies to delay each step of
3013 RCU grace-period pre-initialization, that is,
3014 the propagation of recent CPU-hotplug changes up
3015 the rcu_node combining tree. This only has effect
3016 when CONFIG_RCU_TORTURE_TEST_SLOW_PREINIT is set.
3017
3018 rcutree.rcu_fanout_exact= [KNL]
3019 Disable autobalancing of the rcu_node combining
3020 tree. This is used by rcutorture, and might
3021 possibly be useful for architectures having high
3022 cache-to-cache transfer latencies.
3000 3023
3001 rcutree.rcu_fanout_leaf= [KNL] 3024 rcutree.rcu_fanout_leaf= [KNL]
3002 Increase the number of CPUs assigned to each 3025 Increase the number of CPUs assigned to each
@@ -3101,7 +3124,11 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
3101 test, hence the "fake". 3124 test, hence the "fake".
3102 3125
3103 rcutorture.nreaders= [KNL] 3126 rcutorture.nreaders= [KNL]
3104 Set number of RCU readers. 3127 Set number of RCU readers. The value -1 selects
3128 N-1, where N is the number of CPUs. A value
3129 "n" less than -1 selects N-n-2, where N is again
3130 the number of CPUs. For example, -2 selects N
3131 (the number of CPUs), -3 selects N+1, and so on.
3105 3132
3106 rcutorture.object_debug= [KNL] 3133 rcutorture.object_debug= [KNL]
3107 Enable debug-object double-call_rcu() testing. 3134 Enable debug-object double-call_rcu() testing.
diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
index 1f362fd2ecb4..360841da3744 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -617,16 +617,16 @@ case what's actually required is:
617However, stores are not speculated. This means that ordering -is- provided 617However, stores are not speculated. This means that ordering -is- provided
618for load-store control dependencies, as in the following example: 618for load-store control dependencies, as in the following example:
619 619
620 q = ACCESS_ONCE(a); 620 q = READ_ONCE_CTRL(a);
621 if (q) { 621 if (q) {
622 ACCESS_ONCE(b) = p; 622 ACCESS_ONCE(b) = p;
623 } 623 }
624 624
625Control dependencies pair normally with other types of barriers. 625Control dependencies pair normally with other types of barriers. That
626That said, please note that ACCESS_ONCE() is not optional! Without the 626said, please note that READ_ONCE_CTRL() is not optional! Without the
627ACCESS_ONCE(), might combine the load from 'a' with other loads from 627READ_ONCE_CTRL(), the compiler might combine the load from 'a' with
628'a', and the store to 'b' with other stores to 'b', with possible highly 628other loads from 'a', and the store to 'b' with other stores to 'b',
629counterintuitive effects on ordering. 629with possible highly counterintuitive effects on ordering.
630 630
631Worse yet, if the compiler is able to prove (say) that the value of 631Worse yet, if the compiler is able to prove (say) that the value of
632variable 'a' is always non-zero, it would be well within its rights 632variable 'a' is always non-zero, it would be well within its rights
@@ -636,12 +636,15 @@ as follows:
636 q = a; 636 q = a;
637 b = p; /* BUG: Compiler and CPU can both reorder!!! */ 637 b = p; /* BUG: Compiler and CPU can both reorder!!! */
638 638
639So don't leave out the ACCESS_ONCE(). 639Finally, the READ_ONCE_CTRL() includes an smp_read_barrier_depends()
640that DEC Alpha needs in order to respect control dependencies.
641
642So don't leave out the READ_ONCE_CTRL().
640 643
641It is tempting to try to enforce ordering on identical stores on both 644It is tempting to try to enforce ordering on identical stores on both
642branches of the "if" statement as follows: 645branches of the "if" statement as follows:
643 646
644 q = ACCESS_ONCE(a); 647 q = READ_ONCE_CTRL(a);
645 if (q) { 648 if (q) {
646 barrier(); 649 barrier();
647 ACCESS_ONCE(b) = p; 650 ACCESS_ONCE(b) = p;
@@ -655,7 +658,7 @@ branches of the "if" statement as follows:
655Unfortunately, current compilers will transform this as follows at high 658Unfortunately, current compilers will transform this as follows at high
656optimization levels: 659optimization levels:
657 660
658 q = ACCESS_ONCE(a); 661 q = READ_ONCE_CTRL(a);
659 barrier(); 662 barrier();
660 ACCESS_ONCE(b) = p; /* BUG: No ordering vs. load from a!!! */ 663 ACCESS_ONCE(b) = p; /* BUG: No ordering vs. load from a!!! */
661 if (q) { 664 if (q) {
@@ -685,7 +688,7 @@ memory barriers, for example, smp_store_release():
685In contrast, without explicit memory barriers, two-legged-if control 688In contrast, without explicit memory barriers, two-legged-if control
686ordering is guaranteed only when the stores differ, for example: 689ordering is guaranteed only when the stores differ, for example:
687 690
688 q = ACCESS_ONCE(a); 691 q = READ_ONCE_CTRL(a);
689 if (q) { 692 if (q) {
690 ACCESS_ONCE(b) = p; 693 ACCESS_ONCE(b) = p;
691 do_something(); 694 do_something();
@@ -694,14 +697,14 @@ ordering is guaranteed only when the stores differ, for example:
694 do_something_else(); 697 do_something_else();
695 } 698 }
696 699
697The initial ACCESS_ONCE() is still required to prevent the compiler from 700The initial READ_ONCE_CTRL() is still required to prevent the compiler
698proving the value of 'a'. 701from proving the value of 'a'.
699 702
700In addition, you need to be careful what you do with the local variable 'q', 703In addition, you need to be careful what you do with the local variable 'q',
701otherwise the compiler might be able to guess the value and again remove 704otherwise the compiler might be able to guess the value and again remove
702the needed conditional. For example: 705the needed conditional. For example:
703 706
704 q = ACCESS_ONCE(a); 707 q = READ_ONCE_CTRL(a);
705 if (q % MAX) { 708 if (q % MAX) {
706 ACCESS_ONCE(b) = p; 709 ACCESS_ONCE(b) = p;
707 do_something(); 710 do_something();
@@ -714,7 +717,7 @@ If MAX is defined to be 1, then the compiler knows that (q % MAX) is
714equal to zero, in which case the compiler is within its rights to 717equal to zero, in which case the compiler is within its rights to
715transform the above code into the following: 718transform the above code into the following:
716 719
717 q = ACCESS_ONCE(a); 720 q = READ_ONCE_CTRL(a);
718 ACCESS_ONCE(b) = p; 721 ACCESS_ONCE(b) = p;
719 do_something_else(); 722 do_something_else();
720 723
@@ -725,7 +728,7 @@ is gone, and the barrier won't bring it back. Therefore, if you are
725relying on this ordering, you should make sure that MAX is greater than 728relying on this ordering, you should make sure that MAX is greater than
726one, perhaps as follows: 729one, perhaps as follows:
727 730
728 q = ACCESS_ONCE(a); 731 q = READ_ONCE_CTRL(a);
729 BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */ 732 BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
730 if (q % MAX) { 733 if (q % MAX) {
731 ACCESS_ONCE(b) = p; 734 ACCESS_ONCE(b) = p;
@@ -742,14 +745,15 @@ of the 'if' statement.
742You must also be careful not to rely too much on boolean short-circuit 745You must also be careful not to rely too much on boolean short-circuit
743evaluation. Consider this example: 746evaluation. Consider this example:
744 747
745 q = ACCESS_ONCE(a); 748 q = READ_ONCE_CTRL(a);
746 if (a || 1 > 0) 749 if (a || 1 > 0)
747 ACCESS_ONCE(b) = 1; 750 ACCESS_ONCE(b) = 1;
748 751
749Because the second condition is always true, the compiler can transform 752Because the first condition cannot fault and the second condition is
750this example as following, defeating control dependency: 753always true, the compiler can transform this example as follows,
754defeating the control dependency:
751 755
752 q = ACCESS_ONCE(a); 756 q = READ_ONCE_CTRL(a);
753 ACCESS_ONCE(b) = 1; 757 ACCESS_ONCE(b) = 1;
754 758
755This example underscores the need to ensure that the compiler cannot 759This example underscores the need to ensure that the compiler cannot
@@ -762,8 +766,8 @@ demonstrated by two related examples, with the initial values of
762x and y both being zero: 766x and y both being zero:
763 767
764 CPU 0 CPU 1 768 CPU 0 CPU 1
765 ===================== ===================== 769 ======================= =======================
766 r1 = ACCESS_ONCE(x); r2 = ACCESS_ONCE(y); 770 r1 = READ_ONCE_CTRL(x); r2 = READ_ONCE_CTRL(y);
767 if (r1 > 0) if (r2 > 0) 771 if (r1 > 0) if (r2 > 0)
768 ACCESS_ONCE(y) = 1; ACCESS_ONCE(x) = 1; 772 ACCESS_ONCE(y) = 1; ACCESS_ONCE(x) = 1;
769 773
@@ -783,7 +787,8 @@ But because control dependencies do -not- provide transitivity, the above
783assertion can fail after the combined three-CPU example completes. If you 787assertion can fail after the combined three-CPU example completes. If you
784need the three-CPU example to provide ordering, you will need smp_mb() 788need the three-CPU example to provide ordering, you will need smp_mb()
785between the loads and stores in the CPU 0 and CPU 1 code fragments, 789between the loads and stores in the CPU 0 and CPU 1 code fragments,
786that is, just before or just after the "if" statements. 790that is, just before or just after the "if" statements. Furthermore,
791the original two-CPU example is very fragile and should be avoided.
787 792
788These two examples are the LB and WWC litmus tests from this paper: 793These two examples are the LB and WWC litmus tests from this paper:
789http://www.cl.cam.ac.uk/users/pes20/ppc-supplemental/test6.pdf and this 794http://www.cl.cam.ac.uk/users/pes20/ppc-supplemental/test6.pdf and this
@@ -791,6 +796,12 @@ site: https://www.cl.cam.ac.uk/~pes20/ppcmem/index.html.
791 796
792In summary: 797In summary:
793 798
799 (*) Control dependencies must be headed by READ_ONCE_CTRL().
800 Or, as a much less preferable alternative, they can
801 be headed by READ_ONCE() or an ACCESS_ONCE() read and must
802 have smp_read_barrier_depends() between this read and the
803 control-dependent write.
804
794 (*) Control dependencies can order prior loads against later stores. 805 (*) Control dependencies can order prior loads against later stores.
795 However, they do -not- guarantee any other sort of ordering: 806 However, they do -not- guarantee any other sort of ordering:
796 Not prior loads against later loads, nor prior stores against 807 Not prior loads against later loads, nor prior stores against
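To make the identical-stores rule above concrete, a short sketch in the
document's own notation, assuming the same made-up variables 'a', 'b',
'p', and the do_something*() helpers: when both legs of the "if" must
perform the same store, smp_store_release() supplies the ordering that a
bare barrier() cannot.

        q = READ_ONCE_CTRL(a);
        if (q) {
                smp_store_release(&b, p); /* Ordered against the load from 'a'. */
                do_something();
        } else {
                smp_store_release(&b, p); /* Identical store, still ordered. */
                do_something_else();
        }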
diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
index a3bf5be111ff..1124f59b8df4 100644
--- a/arch/powerpc/include/asm/barrier.h
+++ b/arch/powerpc/include/asm/barrier.h
@@ -89,5 +89,6 @@ do { \
89 89
90#define smp_mb__before_atomic() smp_mb() 90#define smp_mb__before_atomic() smp_mb()
91#define smp_mb__after_atomic() smp_mb() 91#define smp_mb__after_atomic() smp_mb()
92#define smp_mb__before_spinlock() smp_mb()
92 93
93#endif /* _ASM_POWERPC_BARRIER_H */ 94#endif /* _ASM_POWERPC_BARRIER_H */
diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c
index e535533d5ab8..d4298feaa6fb 100644
--- a/arch/x86/kernel/cpu/mcheck/mce.c
+++ b/arch/x86/kernel/cpu/mcheck/mce.c
@@ -53,9 +53,12 @@
53static DEFINE_MUTEX(mce_chrdev_read_mutex); 53static DEFINE_MUTEX(mce_chrdev_read_mutex);
54 54
55#define rcu_dereference_check_mce(p) \ 55#define rcu_dereference_check_mce(p) \
56 rcu_dereference_index_check((p), \ 56({ \
57 rcu_read_lock_sched_held() || \ 57 rcu_lockdep_assert(rcu_read_lock_sched_held() || \
58 lockdep_is_held(&mce_chrdev_read_mutex)) 58 lockdep_is_held(&mce_chrdev_read_mutex), \
59 "suspicious rcu_dereference_check_mce() usage"); \
60 smp_load_acquire(&(p)); \
61})
59 62
60#define CREATE_TRACE_POINTS 63#define CREATE_TRACE_POINTS
61#include <trace/events/mce.h> 64#include <trace/events/mce.h>
@@ -1884,7 +1887,7 @@ out:
1884static unsigned int mce_chrdev_poll(struct file *file, poll_table *wait) 1887static unsigned int mce_chrdev_poll(struct file *file, poll_table *wait)
1885{ 1888{
1886 poll_wait(file, &mce_chrdev_wait, wait); 1889 poll_wait(file, &mce_chrdev_wait, wait);
1887 if (rcu_access_index(mcelog.next)) 1890 if (READ_ONCE(mcelog.next))
1888 return POLLIN | POLLRDNORM; 1891 return POLLIN | POLLRDNORM;
1889 if (!mce_apei_read_done && apei_check_mce()) 1892 if (!mce_apei_read_done && apei_check_mce())
1890 return POLLIN | POLLRDNORM; 1893 return POLLIN | POLLRDNORM;
@@ -1929,8 +1932,8 @@ void register_mce_write_callback(ssize_t (*fn)(struct file *filp,
1929} 1932}
1930EXPORT_SYMBOL_GPL(register_mce_write_callback); 1933EXPORT_SYMBOL_GPL(register_mce_write_callback);
1931 1934
1932ssize_t mce_chrdev_write(struct file *filp, const char __user *ubuf, 1935static ssize_t mce_chrdev_write(struct file *filp, const char __user *ubuf,
1933 size_t usize, loff_t *off) 1936 size_t usize, loff_t *off)
1934{ 1937{
1935 if (mce_write) 1938 if (mce_write)
1936 return mce_write(filp, ubuf, usize, off); 1939 return mce_write(filp, ubuf, usize, off);
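The rewritten macro above doubles as a recipe for replacing other
rcu_dereference_index_check() users: integer indexes cannot carry
dependency ordering, so pair an explicit lockdep assertion with
smp_load_acquire(). A hedged, generic sketch, with log and log_mutex as
invented names:

        #define log_next_checked() \
        ({ \
                rcu_lockdep_assert(rcu_read_lock_sched_held() || \
                                   lockdep_is_held(&log_mutex), \
                                   "suspicious log_next_checked() usage"); \
                smp_load_acquire(&log.next); /* Acquire replaces the index dependency. */ \
        })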
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 867722591be2..5d66777914db 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -252,6 +252,22 @@ static __always_inline void __write_once_size(volatile void *p, void *res, int s
252#define WRITE_ONCE(x, val) \ 252#define WRITE_ONCE(x, val) \
253 ({ typeof(x) __val = (val); __write_once_size(&(x), &__val, sizeof(__val)); __val; }) 253 ({ typeof(x) __val = (val); __write_once_size(&(x), &__val, sizeof(__val)); __val; })
254 254
255/**
256 * READ_ONCE_CTRL - Read a value heading a control dependency
257 * @x: The value to be read, heading the control dependency
258 *
259 * Control dependencies are tricky. See Documentation/memory-barriers.txt
260 * for important information on how to use them. Note that in many cases,
261 * use of smp_load_acquire() will be much simpler. Control dependencies
262 * should be avoided except on the hottest of hotpaths.
263 */
264#define READ_ONCE_CTRL(x) \
265({ \
266 typeof(x) __val = READ_ONCE(x); \
267 smp_read_barrier_depends(); /* Enforce control dependency. */ \
268 __val; \
269})
270
255#endif /* __KERNEL__ */ 271#endif /* __KERNEL__ */
256 272
257#endif /* __ASSEMBLY__ */ 273#endif /* __ASSEMBLY__ */
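A minimal usage sketch for the new macro, not part of the patch, with
flag and data as invented globals; as the kernel-doc above notes,
smp_load_acquire() is usually the simpler choice:

        if (READ_ONCE_CTRL(flag)) {
                /*
                 * This store cannot be hoisted above the load of "flag",
                 * even on DEC Alpha: READ_ONCE_CTRL() heads the control
                 * dependency ordering this load-store pair.
                 */
                WRITE_ONCE(data, 1);
        }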
diff --git a/include/linux/rculist.h b/include/linux/rculist.h
index 665397247e82..17c6b1f84a77 100644
--- a/include/linux/rculist.h
+++ b/include/linux/rculist.h
@@ -549,8 +549,8 @@ static inline void hlist_add_behind_rcu(struct hlist_node *n,
549 */ 549 */
550#define hlist_for_each_entry_from_rcu(pos, member) \ 550#define hlist_for_each_entry_from_rcu(pos, member) \
551 for (; pos; \ 551 for (; pos; \
552 pos = hlist_entry_safe(rcu_dereference((pos)->member.next),\ 552 pos = hlist_entry_safe(rcu_dereference_raw(hlist_next_rcu( \
553 typeof(*(pos)), member)) 553 &(pos)->member)), typeof(*(pos)), member))
554 554
555#endif /* __KERNEL__ */ 555#endif /* __KERNEL__ */
556#endif 556#endif
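For reference, a sketch, not part of the patch, of how the fixed iterator
is typically used: resuming a bucket walk from a node found earlier, all
inside one RCU read-side critical section. struct mynode, its link
member, start, and target are invented.

        struct mynode {
                struct hlist_node link;
                int val;
        };

        struct mynode *pos = start;     /* Node located by an earlier lookup. */

        rcu_read_lock();
        hlist_for_each_entry_from_rcu(pos, link) {
                if (pos->val == target)
                        break;          /* Use pos only under rcu_read_lock(). */
        }
        rcu_read_unlock();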
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 87bb0eee665b..03a899aabd17 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -292,10 +292,6 @@ void rcu_sched_qs(void);
292void rcu_bh_qs(void); 292void rcu_bh_qs(void);
293void rcu_check_callbacks(int user); 293void rcu_check_callbacks(int user);
294struct notifier_block; 294struct notifier_block;
295void rcu_idle_enter(void);
296void rcu_idle_exit(void);
297void rcu_irq_enter(void);
298void rcu_irq_exit(void);
299int rcu_cpu_notify(struct notifier_block *self, 295int rcu_cpu_notify(struct notifier_block *self,
300 unsigned long action, void *hcpu); 296 unsigned long action, void *hcpu);
301 297
@@ -628,21 +624,6 @@ static inline void rcu_preempt_sleep_check(void)
628 ((typeof(*p) __force __kernel *)(p)); \ 624 ((typeof(*p) __force __kernel *)(p)); \
629}) 625})
630 626
631#define __rcu_access_index(p, space) \
632({ \
633 typeof(p) _________p1 = READ_ONCE(p); \
634 rcu_dereference_sparse(p, space); \
635 (_________p1); \
636})
637#define __rcu_dereference_index_check(p, c) \
638({ \
639 /* Dependency order vs. p above. */ \
640 typeof(p) _________p1 = lockless_dereference(p); \
641 rcu_lockdep_assert(c, \
642 "suspicious rcu_dereference_index_check() usage"); \
643 (_________p1); \
644})
645
646/** 627/**
647 * RCU_INITIALIZER() - statically initialize an RCU-protected global variable 628 * RCU_INITIALIZER() - statically initialize an RCU-protected global variable
648 * @v: The value to statically initialize with. 629 * @v: The value to statically initialize with.
@@ -787,41 +768,6 @@ static inline void rcu_preempt_sleep_check(void)
787#define rcu_dereference_raw_notrace(p) __rcu_dereference_check((p), 1, __rcu) 768#define rcu_dereference_raw_notrace(p) __rcu_dereference_check((p), 1, __rcu)
788 769
789/** 770/**
790 * rcu_access_index() - fetch RCU index with no dereferencing
791 * @p: The index to read
792 *
793 * Return the value of the specified RCU-protected index, but omit the
794 * smp_read_barrier_depends() and keep the READ_ONCE(). This is useful
795 * when the value of this index is accessed, but the index is not
796 * dereferenced, for example, when testing an RCU-protected index against
797 * -1. Although rcu_access_index() may also be used in cases where
798 * update-side locks prevent the value of the index from changing, you
799 * should instead use rcu_dereference_index_protected() for this use case.
800 */
801#define rcu_access_index(p) __rcu_access_index((p), __rcu)
802
803/**
804 * rcu_dereference_index_check() - rcu_dereference for indices with debug checking
805 * @p: The pointer to read, prior to dereferencing
806 * @c: The conditions under which the dereference will take place
807 *
808 * Similar to rcu_dereference_check(), but omits the sparse checking.
809 * This allows rcu_dereference_index_check() to be used on integers,
810 * which can then be used as array indices. Attempting to use
811 * rcu_dereference_check() on an integer will give compiler warnings
812 * because the sparse address-space mechanism relies on dereferencing
813 * the RCU-protected pointer. Dereferencing integers is not something
814 * that even gcc will put up with.
815 *
816 * Note that this function does not implicitly check for RCU read-side
817 * critical sections. If this function gains lots of uses, it might
818 * make sense to provide versions for each flavor of RCU, but it does
819 * not make sense as of early 2010.
820 */
821#define rcu_dereference_index_check(p, c) \
822 __rcu_dereference_index_check((p), (c))
823
824/**
825 * rcu_dereference_protected() - fetch RCU pointer when updates prevented 771 * rcu_dereference_protected() - fetch RCU pointer when updates prevented
826 * @p: The pointer to read, prior to dereferencing 772 * @p: The pointer to read, prior to dereferencing
827 * @c: The conditions under which the dereference will take place 773 * @c: The conditions under which the dereference will take place
@@ -1153,13 +1099,13 @@ static inline notrace void rcu_read_unlock_sched_notrace(void)
1153#define kfree_rcu(ptr, rcu_head) \ 1099#define kfree_rcu(ptr, rcu_head) \
1154 __kfree_rcu(&((ptr)->rcu_head), offsetof(typeof(*(ptr)), rcu_head)) 1100 __kfree_rcu(&((ptr)->rcu_head), offsetof(typeof(*(ptr)), rcu_head))
1155 1101
1156#if defined(CONFIG_TINY_RCU) || defined(CONFIG_RCU_NOCB_CPU_ALL) 1102#ifdef CONFIG_TINY_RCU
1157static inline int rcu_needs_cpu(unsigned long *delta_jiffies) 1103static inline int rcu_needs_cpu(unsigned long *delta_jiffies)
1158{ 1104{
1159 *delta_jiffies = ULONG_MAX; 1105 *delta_jiffies = ULONG_MAX;
1160 return 0; 1106 return 0;
1161} 1107}
1162#endif /* #if defined(CONFIG_TINY_RCU) || defined(CONFIG_RCU_NOCB_CPU_ALL) */ 1108#endif /* #ifdef CONFIG_TINY_RCU */
1163 1109
1164#if defined(CONFIG_RCU_NOCB_CPU_ALL) 1110#if defined(CONFIG_RCU_NOCB_CPU_ALL)
1165static inline bool rcu_is_nocb_cpu(int cpu) { return true; } 1111static inline bool rcu_is_nocb_cpu(int cpu) { return true; }
diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h
index 937edaeb150d..3df6c1ec4e25 100644
--- a/include/linux/rcutiny.h
+++ b/include/linux/rcutiny.h
@@ -159,6 +159,22 @@ static inline void rcu_cpu_stall_reset(void)
159{ 159{
160} 160}
161 161
162static inline void rcu_idle_enter(void)
163{
164}
165
166static inline void rcu_idle_exit(void)
167{
168}
169
170static inline void rcu_irq_enter(void)
171{
172}
173
174static inline void rcu_irq_exit(void)
175{
176}
177
162static inline void exit_rcu(void) 178static inline void exit_rcu(void)
163{ 179{
164} 180}
diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h
index d2e583a6aaca..3fa4a43ab415 100644
--- a/include/linux/rcutree.h
+++ b/include/linux/rcutree.h
@@ -31,9 +31,7 @@
31#define __LINUX_RCUTREE_H 31#define __LINUX_RCUTREE_H
32 32
33void rcu_note_context_switch(void); 33void rcu_note_context_switch(void);
34#ifndef CONFIG_RCU_NOCB_CPU_ALL
35int rcu_needs_cpu(unsigned long *delta_jiffies); 34int rcu_needs_cpu(unsigned long *delta_jiffies);
36#endif /* #ifndef CONFIG_RCU_NOCB_CPU_ALL */
37void rcu_cpu_stall_reset(void); 35void rcu_cpu_stall_reset(void);
38 36
39/* 37/*
@@ -93,6 +91,11 @@ void rcu_force_quiescent_state(void);
93void rcu_bh_force_quiescent_state(void); 91void rcu_bh_force_quiescent_state(void);
94void rcu_sched_force_quiescent_state(void); 92void rcu_sched_force_quiescent_state(void);
95 93
94void rcu_idle_enter(void);
95void rcu_idle_exit(void);
96void rcu_irq_enter(void);
97void rcu_irq_exit(void);
98
96void exit_rcu(void); 99void exit_rcu(void);
97 100
98void rcu_scheduler_starting(void); 101void rcu_scheduler_starting(void);
diff --git a/init/Kconfig b/init/Kconfig
index dc24dec60232..4c08197044f1 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -465,13 +465,9 @@ endmenu # "CPU/Task time and stats accounting"
465 465
466menu "RCU Subsystem" 466menu "RCU Subsystem"
467 467
468choice
469 prompt "RCU Implementation"
470 default TREE_RCU
471
472config TREE_RCU 468config TREE_RCU
473 bool "Tree-based hierarchical RCU" 469 bool
474 depends on !PREEMPT && SMP 470 default y if !PREEMPT && SMP
475 help 471 help
476 This option selects the RCU implementation that is 472 This option selects the RCU implementation that is
477 designed for very large SMP system with hundreds or 473 designed for very large SMP system with hundreds or
@@ -479,8 +475,8 @@ config TREE_RCU
479 smaller systems. 475 smaller systems.
480 476
481config PREEMPT_RCU 477config PREEMPT_RCU
482 bool "Preemptible tree-based hierarchical RCU" 478 bool
483 depends on PREEMPT 479 default y if PREEMPT
484 help 480 help
485 This option selects the RCU implementation that is 481 This option selects the RCU implementation that is
486 designed for very large SMP systems with hundreds or 482 designed for very large SMP systems with hundreds or
@@ -491,15 +487,28 @@ config PREEMPT_RCU
491 Select this option if you are unsure. 487 Select this option if you are unsure.
492 488
493config TINY_RCU 489config TINY_RCU
494 bool "UP-only small-memory-footprint RCU" 490 bool
495 depends on !PREEMPT && !SMP 491 default y if !PREEMPT && !SMP
496 help 492 help
497 This option selects the RCU implementation that is 493 This option selects the RCU implementation that is
498 designed for UP systems from which real-time response 494 designed for UP systems from which real-time response
499 is not required. This option greatly reduces the 495 is not required. This option greatly reduces the
500 memory footprint of RCU. 496 memory footprint of RCU.
501 497
502endchoice 498config RCU_EXPERT
499 bool "Make expert-level adjustments to RCU configuration"
500 default n
501 help
502 This option needs to be enabled if you wish to make
503 expert-level adjustments to RCU configuration. By default,
504 no such adjustments can be made, which has the often-beneficial
505 side-effect of preventing "make oldconfig" from asking you all
506 sorts of detailed questions about how you would like numerous
507 obscure RCU options to be set up.
508
509 Say Y if you need to make expert-level adjustments to RCU.
510
511 Say N if you are unsure.
503 512
504config SRCU 513config SRCU
505 bool 514 bool
@@ -509,7 +518,7 @@ config SRCU
509 sections. 518 sections.
510 519
511config TASKS_RCU 520config TASKS_RCU
512 bool "Task_based RCU implementation using voluntary context switch" 521 bool
513 default n 522 default n
514 select SRCU 523 select SRCU
515 help 524 help
@@ -517,8 +526,6 @@ config TASKS_RCU
517 only voluntary context switch (not preemption!), idle, and 526 only voluntary context switch (not preemption!), idle, and
518 user-mode execution as quiescent states. 527 user-mode execution as quiescent states.
519 528
520 If unsure, say N.
521
522config RCU_STALL_COMMON 529config RCU_STALL_COMMON
523 def_bool ( TREE_RCU || PREEMPT_RCU || RCU_TRACE ) 530 def_bool ( TREE_RCU || PREEMPT_RCU || RCU_TRACE )
524 help 531 help
@@ -531,9 +538,7 @@ config CONTEXT_TRACKING
531 bool 538 bool
532 539
533config RCU_USER_QS 540config RCU_USER_QS
534 bool "Consider userspace as in RCU extended quiescent state" 541 bool
535 depends on HAVE_CONTEXT_TRACKING && SMP
536 select CONTEXT_TRACKING
537 help 542 help
538 This option sets hooks on kernel / userspace boundaries and 543 This option sets hooks on kernel / userspace boundaries and
539 puts RCU in extended quiescent state when the CPU runs in 544 puts RCU in extended quiescent state when the CPU runs in
@@ -541,12 +546,6 @@ config RCU_USER_QS
541 excluded from the global RCU state machine and thus doesn't 546 excluded from the global RCU state machine and thus doesn't
542 try to keep the timer tick on for RCU. 547 try to keep the timer tick on for RCU.
543 548
544 Unless you want to hack and help the development of the full
545 dynticks mode, you shouldn't enable this option. It also
546 adds unnecessary overhead.
547
548 If unsure say N
549
550config CONTEXT_TRACKING_FORCE 549config CONTEXT_TRACKING_FORCE
551 bool "Force context tracking" 550 bool "Force context tracking"
552 depends on CONTEXT_TRACKING 551 depends on CONTEXT_TRACKING
@@ -578,7 +577,7 @@ config RCU_FANOUT
578 int "Tree-based hierarchical RCU fanout value" 577 int "Tree-based hierarchical RCU fanout value"
579 range 2 64 if 64BIT 578 range 2 64 if 64BIT
580 range 2 32 if !64BIT 579 range 2 32 if !64BIT
581 depends on TREE_RCU || PREEMPT_RCU 580 depends on (TREE_RCU || PREEMPT_RCU) && RCU_EXPERT
582 default 64 if 64BIT 581 default 64 if 64BIT
583 default 32 if !64BIT 582 default 32 if !64BIT
584 help 583 help
@@ -596,9 +595,9 @@ config RCU_FANOUT
596 595
597config RCU_FANOUT_LEAF 596config RCU_FANOUT_LEAF
598 int "Tree-based hierarchical RCU leaf-level fanout value" 597 int "Tree-based hierarchical RCU leaf-level fanout value"
599 range 2 RCU_FANOUT if 64BIT 598 range 2 64 if 64BIT
600 range 2 RCU_FANOUT if !64BIT 599 range 2 32 if !64BIT
601 depends on TREE_RCU || PREEMPT_RCU 600 depends on (TREE_RCU || PREEMPT_RCU) && RCU_EXPERT
602 default 16 601 default 16
603 help 602 help
604 This option controls the leaf-level fanout of hierarchical 603 This option controls the leaf-level fanout of hierarchical
@@ -621,23 +620,9 @@ config RCU_FANOUT_LEAF
621 620
622 Take the default if unsure. 621 Take the default if unsure.
623 622
624config RCU_FANOUT_EXACT
625 bool "Disable tree-based hierarchical RCU auto-balancing"
626 depends on TREE_RCU || PREEMPT_RCU
627 default n
628 help
629 This option forces use of the exact RCU_FANOUT value specified,
630 regardless of imbalances in the hierarchy. This is useful for
631 testing RCU itself, and might one day be useful on systems with
632 strong NUMA behavior.
633
634 Without RCU_FANOUT_EXACT, the code will balance the hierarchy.
635
636 Say N if unsure.
637
638config RCU_FAST_NO_HZ 623config RCU_FAST_NO_HZ
639 bool "Accelerate last non-dyntick-idle CPU's grace periods" 624 bool "Accelerate last non-dyntick-idle CPU's grace periods"
640 depends on NO_HZ_COMMON && SMP 625 depends on NO_HZ_COMMON && SMP && RCU_EXPERT
641 default n 626 default n
642 help 627 help
643 This option permits CPUs to enter dynticks-idle state even if 628 This option permits CPUs to enter dynticks-idle state even if
@@ -663,7 +648,7 @@ config TREE_RCU_TRACE
663 648
664config RCU_BOOST 649config RCU_BOOST
665 bool "Enable RCU priority boosting" 650 bool "Enable RCU priority boosting"
666 depends on RT_MUTEXES && PREEMPT_RCU 651 depends on RT_MUTEXES && PREEMPT_RCU && RCU_EXPERT
667 default n 652 default n
668 help 653 help
669 This option boosts the priority of preempted RCU readers that 654 This option boosts the priority of preempted RCU readers that
@@ -680,6 +665,7 @@ config RCU_KTHREAD_PRIO
680 range 0 99 if !RCU_BOOST 665 range 0 99 if !RCU_BOOST
681 default 1 if RCU_BOOST 666 default 1 if RCU_BOOST
682 default 0 if !RCU_BOOST 667 default 0 if !RCU_BOOST
668 depends on RCU_EXPERT
683 help 669 help
684 This option specifies the SCHED_FIFO priority value that will be 670 This option specifies the SCHED_FIFO priority value that will be
685 assigned to the rcuc/n and rcub/n threads and is also the value 671 assigned to the rcuc/n and rcub/n threads and is also the value
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 94bbe4695232..9c9c9fab16cc 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -398,7 +398,6 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen)
398 err = __stop_machine(take_cpu_down, &tcd_param, cpumask_of(cpu)); 398 err = __stop_machine(take_cpu_down, &tcd_param, cpumask_of(cpu));
399 if (err) { 399 if (err) {
400 /* CPU didn't die: tell everyone. Can't complain. */ 400 /* CPU didn't die: tell everyone. Can't complain. */
401 smpboot_unpark_threads(cpu);
402 cpu_notify_nofail(CPU_DOWN_FAILED | mod, hcpu); 401 cpu_notify_nofail(CPU_DOWN_FAILED | mod, hcpu);
403 goto out_release; 402 goto out_release;
404 } 403 }
@@ -463,6 +462,7 @@ static int smpboot_thread_call(struct notifier_block *nfb,
463 462
464 switch (action & ~CPU_TASKS_FROZEN) { 463 switch (action & ~CPU_TASKS_FROZEN) {
465 464
465 case CPU_DOWN_FAILED:
466 case CPU_ONLINE: 466 case CPU_ONLINE:
467 smpboot_unpark_threads(cpu); 467 smpboot_unpark_threads(cpu);
468 break; 468 break;
@@ -479,7 +479,7 @@ static struct notifier_block smpboot_thread_notifier = {
479 .priority = CPU_PRI_SMPBOOT, 479 .priority = CPU_PRI_SMPBOOT,
480}; 480};
481 481
482void __cpuinit smpboot_thread_init(void) 482void smpboot_thread_init(void)
483{ 483{
484 register_cpu_notifier(&smpboot_thread_notifier); 484 register_cpu_notifier(&smpboot_thread_notifier);
485} 485}
diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
index 232f00f273cb..17fcb73c4a50 100644
--- a/kernel/events/ring_buffer.c
+++ b/kernel/events/ring_buffer.c
@@ -141,7 +141,7 @@ int perf_output_begin(struct perf_output_handle *handle,
141 perf_output_get_handle(handle); 141 perf_output_get_handle(handle);
142 142
143 do { 143 do {
144 tail = ACCESS_ONCE(rb->user_page->data_tail); 144 tail = READ_ONCE_CTRL(rb->user_page->data_tail);
145 offset = head = local_read(&rb->head); 145 offset = head = local_read(&rb->head);
146 if (!rb->overwrite && 146 if (!rb->overwrite &&
147 unlikely(CIRC_SPACE(head, tail, perf_data_size(rb)) < size)) 147 unlikely(CIRC_SPACE(head, tail, perf_data_size(rb)) < size))
diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
index ec8cce259779..32244186f1f2 100644
--- a/kernel/locking/locktorture.c
+++ b/kernel/locking/locktorture.c
@@ -122,12 +122,12 @@ static int torture_lock_busted_write_lock(void)
122 122
123static void torture_lock_busted_write_delay(struct torture_random_state *trsp) 123static void torture_lock_busted_write_delay(struct torture_random_state *trsp)
124{ 124{
125 const unsigned long longdelay_us = 100; 125 const unsigned long longdelay_ms = 100;
126 126
127 /* We want a long delay occasionally to force massive contention. */ 127 /* We want a long delay occasionally to force massive contention. */
128 if (!(torture_random(trsp) % 128 if (!(torture_random(trsp) %
129 (cxt.nrealwriters_stress * 2000 * longdelay_us))) 129 (cxt.nrealwriters_stress * 2000 * longdelay_ms)))
130 mdelay(longdelay_us); 130 mdelay(longdelay_ms);
131#ifdef CONFIG_PREEMPT 131#ifdef CONFIG_PREEMPT
132 if (!(torture_random(trsp) % (cxt.nrealwriters_stress * 20000))) 132 if (!(torture_random(trsp) % (cxt.nrealwriters_stress * 20000)))
133 preempt_schedule(); /* Allow test to be preempted. */ 133 preempt_schedule(); /* Allow test to be preempted. */
@@ -160,14 +160,14 @@ static int torture_spin_lock_write_lock(void) __acquires(torture_spinlock)
160static void torture_spin_lock_write_delay(struct torture_random_state *trsp) 160static void torture_spin_lock_write_delay(struct torture_random_state *trsp)
161{ 161{
162 const unsigned long shortdelay_us = 2; 162 const unsigned long shortdelay_us = 2;
163 const unsigned long longdelay_us = 100; 163 const unsigned long longdelay_ms = 100;
164 164
165 /* We want a short delay mostly to emulate likely code, and 165 /* We want a short delay mostly to emulate likely code, and
166 * we want a long delay occasionally to force massive contention. 166 * we want a long delay occasionally to force massive contention.
167 */ 167 */
168 if (!(torture_random(trsp) % 168 if (!(torture_random(trsp) %
169 (cxt.nrealwriters_stress * 2000 * longdelay_us))) 169 (cxt.nrealwriters_stress * 2000 * longdelay_ms)))
170 mdelay(longdelay_us); 170 mdelay(longdelay_ms);
171 if (!(torture_random(trsp) % 171 if (!(torture_random(trsp) %
172 (cxt.nrealwriters_stress * 2 * shortdelay_us))) 172 (cxt.nrealwriters_stress * 2 * shortdelay_us)))
173 udelay(shortdelay_us); 173 udelay(shortdelay_us);
@@ -309,7 +309,7 @@ static int torture_rwlock_read_lock_irq(void) __acquires(torture_rwlock)
309static void torture_rwlock_read_unlock_irq(void) 309static void torture_rwlock_read_unlock_irq(void)
310__releases(torture_rwlock) 310__releases(torture_rwlock)
311{ 311{
312 write_unlock_irqrestore(&torture_rwlock, cxt.cur_ops->flags); 312 read_unlock_irqrestore(&torture_rwlock, cxt.cur_ops->flags);
313} 313}
314 314
315static struct lock_torture_ops rw_lock_irq_ops = { 315static struct lock_torture_ops rw_lock_irq_ops = {
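The last locktorture hunk above fixes an asymmetry: an irq-disabling read
lock must be released by its read-side counterpart. A hedged sketch of
the corrected pairing, with invented names:

        static DEFINE_RWLOCK(sketch_rwlock);
        static unsigned long sketch_flags;

        static int sketch_read_lock_irq(void)
        {
                read_lock_irqsave(&sketch_rwlock, sketch_flags);
                return 0;
        }

        static void sketch_read_unlock_irq(void)
        {
                /* Pairs with read_lock_irqsave(); write_unlock_irqrestore()
                 * here would corrupt the lock state, as the fix shows. */
                read_unlock_irqrestore(&sketch_rwlock, sketch_flags);
        }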
diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
index a67ef6ff86b0..59e32684c23b 100644
--- a/kernel/rcu/rcutorture.c
+++ b/kernel/rcu/rcutorture.c
@@ -241,6 +241,7 @@ rcu_torture_free(struct rcu_torture *p)
241struct rcu_torture_ops { 241struct rcu_torture_ops {
242 int ttype; 242 int ttype;
243 void (*init)(void); 243 void (*init)(void);
244 void (*cleanup)(void);
244 int (*readlock)(void); 245 int (*readlock)(void);
245 void (*read_delay)(struct torture_random_state *rrsp); 246 void (*read_delay)(struct torture_random_state *rrsp);
246 void (*readunlock)(int idx); 247 void (*readunlock)(int idx);
@@ -477,10 +478,12 @@ static struct rcu_torture_ops rcu_busted_ops = {
477 */ 478 */
478 479
479DEFINE_STATIC_SRCU(srcu_ctl); 480DEFINE_STATIC_SRCU(srcu_ctl);
481static struct srcu_struct srcu_ctld;
482static struct srcu_struct *srcu_ctlp = &srcu_ctl;
480 483
481static int srcu_torture_read_lock(void) __acquires(&srcu_ctl) 484static int srcu_torture_read_lock(void) __acquires(srcu_ctlp)
482{ 485{
483 return srcu_read_lock(&srcu_ctl); 486 return srcu_read_lock(srcu_ctlp);
484} 487}
485 488
486static void srcu_read_delay(struct torture_random_state *rrsp) 489static void srcu_read_delay(struct torture_random_state *rrsp)
@@ -499,49 +502,49 @@ static void srcu_read_delay(struct torture_random_state *rrsp)
499 rcu_read_delay(rrsp); 502 rcu_read_delay(rrsp);
500} 503}
501 504
502static void srcu_torture_read_unlock(int idx) __releases(&srcu_ctl) 505static void srcu_torture_read_unlock(int idx) __releases(srcu_ctlp)
503{ 506{
504 srcu_read_unlock(&srcu_ctl, idx); 507 srcu_read_unlock(srcu_ctlp, idx);
505} 508}
506 509
507static unsigned long srcu_torture_completed(void) 510static unsigned long srcu_torture_completed(void)
508{ 511{
509 return srcu_batches_completed(&srcu_ctl); 512 return srcu_batches_completed(srcu_ctlp);
510} 513}
511 514
512static void srcu_torture_deferred_free(struct rcu_torture *rp) 515static void srcu_torture_deferred_free(struct rcu_torture *rp)
513{ 516{
514 call_srcu(&srcu_ctl, &rp->rtort_rcu, rcu_torture_cb); 517 call_srcu(srcu_ctlp, &rp->rtort_rcu, rcu_torture_cb);
515} 518}
516 519
517static void srcu_torture_synchronize(void) 520static void srcu_torture_synchronize(void)
518{ 521{
519 synchronize_srcu(&srcu_ctl); 522 synchronize_srcu(srcu_ctlp);
520} 523}
521 524
522static void srcu_torture_call(struct rcu_head *head, 525static void srcu_torture_call(struct rcu_head *head,
523 void (*func)(struct rcu_head *head)) 526 void (*func)(struct rcu_head *head))
524{ 527{
525 call_srcu(&srcu_ctl, head, func); 528 call_srcu(srcu_ctlp, head, func);
526} 529}
527 530
528static void srcu_torture_barrier(void) 531static void srcu_torture_barrier(void)
529{ 532{
530 srcu_barrier(&srcu_ctl); 533 srcu_barrier(srcu_ctlp);
531} 534}
532 535
533static void srcu_torture_stats(void) 536static void srcu_torture_stats(void)
534{ 537{
535 int cpu; 538 int cpu;
536 int idx = srcu_ctl.completed & 0x1; 539 int idx = srcu_ctlp->completed & 0x1;
537 540
538 pr_alert("%s%s per-CPU(idx=%d):", 541 pr_alert("%s%s per-CPU(idx=%d):",
539 torture_type, TORTURE_FLAG, idx); 542 torture_type, TORTURE_FLAG, idx);
540 for_each_possible_cpu(cpu) { 543 for_each_possible_cpu(cpu) {
541 long c0, c1; 544 long c0, c1;
542 545
543 c0 = (long)per_cpu_ptr(srcu_ctl.per_cpu_ref, cpu)->c[!idx]; 546 c0 = (long)per_cpu_ptr(srcu_ctlp->per_cpu_ref, cpu)->c[!idx];
544 c1 = (long)per_cpu_ptr(srcu_ctl.per_cpu_ref, cpu)->c[idx]; 547 c1 = (long)per_cpu_ptr(srcu_ctlp->per_cpu_ref, cpu)->c[idx];
545 pr_cont(" %d(%ld,%ld)", cpu, c0, c1); 548 pr_cont(" %d(%ld,%ld)", cpu, c0, c1);
546 } 549 }
547 pr_cont("\n"); 550 pr_cont("\n");
@@ -549,7 +552,7 @@ static void srcu_torture_stats(void)
549 552
550static void srcu_torture_synchronize_expedited(void) 553static void srcu_torture_synchronize_expedited(void)
551{ 554{
552 synchronize_srcu_expedited(&srcu_ctl); 555 synchronize_srcu_expedited(srcu_ctlp);
553} 556}
554 557
555static struct rcu_torture_ops srcu_ops = { 558static struct rcu_torture_ops srcu_ops = {
@@ -569,6 +572,38 @@ static struct rcu_torture_ops srcu_ops = {
569 .name = "srcu" 572 .name = "srcu"
570}; 573};
571 574
575static void srcu_torture_init(void)
576{
577 rcu_sync_torture_init();
578 WARN_ON(init_srcu_struct(&srcu_ctld));
579 srcu_ctlp = &srcu_ctld;
580}
581
582static void srcu_torture_cleanup(void)
583{
584 cleanup_srcu_struct(&srcu_ctld);
585 srcu_ctlp = &srcu_ctl; /* In case of a later rcutorture run. */
586}
587
588/* As above, but dynamically allocated. */
589static struct rcu_torture_ops srcud_ops = {
590 .ttype = SRCU_FLAVOR,
591 .init = srcu_torture_init,
592 .cleanup = srcu_torture_cleanup,
593 .readlock = srcu_torture_read_lock,
594 .read_delay = srcu_read_delay,
595 .readunlock = srcu_torture_read_unlock,
596 .started = NULL,
597 .completed = srcu_torture_completed,
598 .deferred_free = srcu_torture_deferred_free,
599 .sync = srcu_torture_synchronize,
600 .exp_sync = srcu_torture_synchronize_expedited,
601 .call = srcu_torture_call,
602 .cb_barrier = srcu_torture_barrier,
603 .stats = srcu_torture_stats,
604 .name = "srcud"
605};
606
572/* 607/*
573 * Definitions for sched torture testing. 608 * Definitions for sched torture testing.
574 */ 609 */
@@ -672,8 +707,8 @@ static void rcu_torture_boost_cb(struct rcu_head *head)
672 struct rcu_boost_inflight *rbip = 707 struct rcu_boost_inflight *rbip =
673 container_of(head, struct rcu_boost_inflight, rcu); 708 container_of(head, struct rcu_boost_inflight, rcu);
674 709
675 smp_mb(); /* Ensure RCU-core accesses precede clearing ->inflight */ 710 /* Ensure RCU-core accesses precede clearing ->inflight */
676 rbip->inflight = 0; 711 smp_store_release(&rbip->inflight, 0);
677} 712}
678 713
679static int rcu_torture_boost(void *arg) 714static int rcu_torture_boost(void *arg)
@@ -710,9 +745,9 @@ static int rcu_torture_boost(void *arg)
710 call_rcu_time = jiffies; 745 call_rcu_time = jiffies;
711 while (ULONG_CMP_LT(jiffies, endtime)) { 746 while (ULONG_CMP_LT(jiffies, endtime)) {
712 /* If we don't have a callback in flight, post one. */ 747 /* If we don't have a callback in flight, post one. */
713 if (!rbi.inflight) { 748 if (!smp_load_acquire(&rbi.inflight)) {
714 smp_mb(); /* RCU core before ->inflight = 1. */ 749 /* RCU core before ->inflight = 1. */
715 rbi.inflight = 1; 750 smp_store_release(&rbi.inflight, 1);
716 call_rcu(&rbi.rcu, rcu_torture_boost_cb); 751 call_rcu(&rbi.rcu, rcu_torture_boost_cb);
717 if (jiffies - call_rcu_time > 752 if (jiffies - call_rcu_time >
718 test_boost_duration * HZ - HZ / 2) { 753 test_boost_duration * HZ - HZ / 2) {
@@ -751,11 +786,10 @@ checkwait: stutter_wait("rcu_torture_boost");
751 } while (!torture_must_stop()); 786 } while (!torture_must_stop());
752 787
753 /* Clean up and exit. */ 788 /* Clean up and exit. */
754 while (!kthread_should_stop() || rbi.inflight) { 789 while (!kthread_should_stop() || smp_load_acquire(&rbi.inflight)) {
755 torture_shutdown_absorb("rcu_torture_boost"); 790 torture_shutdown_absorb("rcu_torture_boost");
756 schedule_timeout_uninterruptible(1); 791 schedule_timeout_uninterruptible(1);
757 } 792 }
758 smp_mb(); /* order accesses to ->inflight before stack-frame death. */
759 destroy_rcu_head_on_stack(&rbi.rcu); 793 destroy_rcu_head_on_stack(&rbi.rcu);
760 torture_kthread_stopping("rcu_torture_boost"); 794 torture_kthread_stopping("rcu_torture_boost");
761 return 0; 795 return 0;
@@ -1054,7 +1088,7 @@ static void rcu_torture_timer(unsigned long unused)
1054 p = rcu_dereference_check(rcu_torture_current, 1088 p = rcu_dereference_check(rcu_torture_current,
1055 rcu_read_lock_bh_held() || 1089 rcu_read_lock_bh_held() ||
1056 rcu_read_lock_sched_held() || 1090 rcu_read_lock_sched_held() ||
1057 srcu_read_lock_held(&srcu_ctl)); 1091 srcu_read_lock_held(srcu_ctlp));
1058 if (p == NULL) { 1092 if (p == NULL) {
1059 /* Leave because rcu_torture_writer is not yet underway */ 1093 /* Leave because rcu_torture_writer is not yet underway */
1060 cur_ops->readunlock(idx); 1094 cur_ops->readunlock(idx);
@@ -1128,7 +1162,7 @@ rcu_torture_reader(void *arg)
1128 p = rcu_dereference_check(rcu_torture_current, 1162 p = rcu_dereference_check(rcu_torture_current,
1129 rcu_read_lock_bh_held() || 1163 rcu_read_lock_bh_held() ||
1130 rcu_read_lock_sched_held() || 1164 rcu_read_lock_sched_held() ||
1131 srcu_read_lock_held(&srcu_ctl)); 1165 srcu_read_lock_held(srcu_ctlp));
1132 if (p == NULL) { 1166 if (p == NULL) {
1133 /* Wait for rcu_torture_writer to get underway */ 1167 /* Wait for rcu_torture_writer to get underway */
1134 cur_ops->readunlock(idx); 1168 cur_ops->readunlock(idx);
@@ -1413,12 +1447,15 @@ static int rcu_torture_barrier_cbs(void *arg)
1413 do { 1447 do {
1414 wait_event(barrier_cbs_wq[myid], 1448 wait_event(barrier_cbs_wq[myid],
1415 (newphase = 1449 (newphase =
1416 READ_ONCE(barrier_phase)) != lastphase || 1450 smp_load_acquire(&barrier_phase)) != lastphase ||
1417 torture_must_stop()); 1451 torture_must_stop());
1418 lastphase = newphase; 1452 lastphase = newphase;
1419 smp_mb(); /* ensure barrier_phase load before ->call(). */
1420 if (torture_must_stop()) 1453 if (torture_must_stop())
1421 break; 1454 break;
1455 /*
1456 * The above smp_load_acquire() ensures that the barrier_phase load
1457 * is ordered before the following ->call().
1458 */
1422 cur_ops->call(&rcu, rcu_torture_barrier_cbf); 1459 cur_ops->call(&rcu, rcu_torture_barrier_cbf);
1423 if (atomic_dec_and_test(&barrier_cbs_count)) 1460 if (atomic_dec_and_test(&barrier_cbs_count))
1424 wake_up(&barrier_wq); 1461 wake_up(&barrier_wq);
@@ -1439,8 +1476,8 @@ static int rcu_torture_barrier(void *arg)
1439 do { 1476 do {
1440 atomic_set(&barrier_cbs_invoked, 0); 1477 atomic_set(&barrier_cbs_invoked, 0);
1441 atomic_set(&barrier_cbs_count, n_barrier_cbs); 1478 atomic_set(&barrier_cbs_count, n_barrier_cbs);
1442 smp_mb(); /* Ensure barrier_phase after prior assignments. */ 1479 /* Ensure barrier_phase ordered after prior assignments. */
1443 barrier_phase = !barrier_phase; 1480 smp_store_release(&barrier_phase, !barrier_phase);
1444 for (i = 0; i < n_barrier_cbs; i++) 1481 for (i = 0; i < n_barrier_cbs; i++)
1445 wake_up(&barrier_cbs_wq[i]); 1482 wake_up(&barrier_cbs_wq[i]);
1446 wait_event(barrier_wq, 1483 wait_event(barrier_wq,
@@ -1588,10 +1625,14 @@ rcu_torture_cleanup(void)
1588 rcutorture_booster_cleanup(i); 1625 rcutorture_booster_cleanup(i);
1589 } 1626 }
1590 1627
1591 /* Wait for all RCU callbacks to fire. */ 1628 /*
1592 1629 * Wait for all RCU callbacks to fire, then do flavor-specific
1630 * cleanup operations.
1631 */
1593 if (cur_ops->cb_barrier != NULL) 1632 if (cur_ops->cb_barrier != NULL)
1594 cur_ops->cb_barrier(); 1633 cur_ops->cb_barrier();
1634 if (cur_ops->cleanup != NULL)
1635 cur_ops->cleanup();
1595 1636
1596 rcu_torture_stats_print(); /* -After- the stats thread is stopped! */ 1637 rcu_torture_stats_print(); /* -After- the stats thread is stopped! */
1597 1638
@@ -1668,8 +1709,8 @@ rcu_torture_init(void)
1668 int cpu; 1709 int cpu;
1669 int firsterr = 0; 1710 int firsterr = 0;
1670 static struct rcu_torture_ops *torture_ops[] = { 1711 static struct rcu_torture_ops *torture_ops[] = {
1671 &rcu_ops, &rcu_bh_ops, &rcu_busted_ops, &srcu_ops, &sched_ops, 1712 &rcu_ops, &rcu_bh_ops, &rcu_busted_ops, &srcu_ops, &srcud_ops,
1672 RCUTORTURE_TASKS_OPS 1713 &sched_ops, RCUTORTURE_TASKS_OPS
1673 }; 1714 };
1674 1715
1675 if (!torture_init_begin(torture_type, verbose, &torture_runnable)) 1716 if (!torture_init_begin(torture_type, verbose, &torture_runnable))
@@ -1701,7 +1742,7 @@ rcu_torture_init(void)
1701 if (nreaders >= 0) { 1742 if (nreaders >= 0) {
1702 nrealreaders = nreaders; 1743 nrealreaders = nreaders;
1703 } else { 1744 } else {
1704 nrealreaders = num_online_cpus() - 1; 1745 nrealreaders = num_online_cpus() - 2 - nreaders;
1705 if (nrealreaders <= 0) 1746 if (nrealreaders <= 0)
1706 nrealreaders = 1; 1747 nrealreaders = 1;
1707 } 1748 }
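
Note the arithmetic on the changed line: with nreaders = -1 (assumed here
to be rcutorture's usual torture_param() default), num_online_cpus() - 2 -
nreaders still evaluates to num_online_cpus() - 1, matching the old
behavior, while more negative values, e.g. nreaders = -3 giving
num_online_cpus() + 1, deliberately oversubscribe the CPUs with extra
reader kthreads.
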
diff --git a/kernel/rcu/tiny.c b/kernel/rcu/tiny.c
index 069742d61c68..591af0cb7b9f 100644
--- a/kernel/rcu/tiny.c
+++ b/kernel/rcu/tiny.c
@@ -49,39 +49,6 @@ static void __call_rcu(struct rcu_head *head,
49 49
50#include "tiny_plugin.h" 50#include "tiny_plugin.h"
51 51
52/*
53 * Enter idle, which is an extended quiescent state if we have fully
54 * entered that mode.
55 */
56void rcu_idle_enter(void)
57{
58}
59EXPORT_SYMBOL_GPL(rcu_idle_enter);
60
61/*
62 * Exit an interrupt handler towards idle.
63 */
64void rcu_irq_exit(void)
65{
66}
67EXPORT_SYMBOL_GPL(rcu_irq_exit);
68
69/*
70 * Exit idle, so that we are no longer in an extended quiescent state.
71 */
72void rcu_idle_exit(void)
73{
74}
75EXPORT_SYMBOL_GPL(rcu_idle_exit);
76
77/*
78 * Enter an interrupt handler, moving away from idle.
79 */
80void rcu_irq_enter(void)
81{
82}
83EXPORT_SYMBOL_GPL(rcu_irq_enter);
84
85#if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) 52#if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE)
86 53
87/* 54/*
@@ -170,6 +137,11 @@ static void __rcu_process_callbacks(struct rcu_ctrlblk *rcp)
170 137
171 /* Move the ready-to-invoke callbacks to a local list. */ 138 /* Move the ready-to-invoke callbacks to a local list. */
172 local_irq_save(flags); 139 local_irq_save(flags);
140 if (rcp->donetail == &rcp->rcucblist) {
141 /* No callbacks ready, so just leave. */
142 local_irq_restore(flags);
143 return;
144 }
173 RCU_TRACE(trace_rcu_batch_start(rcp->name, 0, rcp->qlen, -1)); 145 RCU_TRACE(trace_rcu_batch_start(rcp->name, 0, rcp->qlen, -1));
174 list = rcp->rcucblist; 146 list = rcp->rcucblist;
175 rcp->rcucblist = *rcp->donetail; 147 rcp->rcucblist = *rcp->donetail;
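
The early exit added above relies on Tiny RCU's list layout: ->donetail
points at the ->next pointer of the last callback whose grace period has
elapsed, so ->donetail == &->rcucblist means the done sublist is empty.
A simplified sketch of the extraction step, with abbreviated types and
field names standing in for struct rcu_ctrlblk's:

    struct cb {
            struct cb *next;
    };

    struct ctrl {
            struct cb *list;        /* head of the callback list */
            struct cb **donetail;   /* one past the last ready callback */
    };

    /* Detach and return the ready-to-invoke callbacks, or NULL if none. */
    static struct cb *extract_done(struct ctrl *c)
    {
            struct cb *done;

            if (c->donetail == &c->list)
                    return NULL;            /* nothing ready, just leave */
            done = c->list;                 /* ready callbacks start at the head */
            c->list = *c->donetail;         /* remaining callbacks become the list */
            *c->donetail = NULL;            /* terminate the detached sublist */
            c->donetail = &c->list;         /* done sublist is empty again */
            return done;
    }
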
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 0628df155970..add042926a66 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -91,7 +91,7 @@ static const char *tp_##sname##_varname __used __tracepoint_string = sname##_var
91 91
92#define RCU_STATE_INITIALIZER(sname, sabbr, cr) \ 92#define RCU_STATE_INITIALIZER(sname, sabbr, cr) \
93DEFINE_RCU_TPS(sname) \ 93DEFINE_RCU_TPS(sname) \
94DEFINE_PER_CPU_SHARED_ALIGNED(struct rcu_data, sname##_data); \ 94static DEFINE_PER_CPU_SHARED_ALIGNED(struct rcu_data, sname##_data); \
95struct rcu_state sname##_state = { \ 95struct rcu_state sname##_state = { \
96 .level = { &sname##_state.node[0] }, \ 96 .level = { &sname##_state.node[0] }, \
97 .rda = &sname##_data, \ 97 .rda = &sname##_data, \
@@ -110,11 +110,18 @@ struct rcu_state sname##_state = { \
110RCU_STATE_INITIALIZER(rcu_sched, 's', call_rcu_sched); 110RCU_STATE_INITIALIZER(rcu_sched, 's', call_rcu_sched);
111RCU_STATE_INITIALIZER(rcu_bh, 'b', call_rcu_bh); 111RCU_STATE_INITIALIZER(rcu_bh, 'b', call_rcu_bh);
112 112
113static struct rcu_state *rcu_state_p; 113static struct rcu_state *const rcu_state_p;
114static struct rcu_data __percpu *const rcu_data_p;
114LIST_HEAD(rcu_struct_flavors); 115LIST_HEAD(rcu_struct_flavors);
115 116
116/* Increase (but not decrease) the CONFIG_RCU_FANOUT_LEAF at boot time. */ 117/* Dump rcu_node combining tree at boot to verify correct setup. */
117static int rcu_fanout_leaf = CONFIG_RCU_FANOUT_LEAF; 118static bool dump_tree;
119module_param(dump_tree, bool, 0444);
120/* Control rcu_node-tree auto-balancing at boot time. */
121static bool rcu_fanout_exact;
122module_param(rcu_fanout_exact, bool, 0444);
123/* Increase (but not decrease) the RCU_FANOUT_LEAF at boot time. */
124static int rcu_fanout_leaf = RCU_FANOUT_LEAF;
118module_param(rcu_fanout_leaf, int, 0444); 125module_param(rcu_fanout_leaf, int, 0444);
119int rcu_num_lvls __read_mostly = RCU_NUM_LVLS; 126int rcu_num_lvls __read_mostly = RCU_NUM_LVLS;
120static int num_rcu_lvl[] = { /* Number of rcu_nodes at specified level. */ 127static int num_rcu_lvl[] = { /* Number of rcu_nodes at specified level. */
@@ -159,17 +166,46 @@ static void invoke_rcu_core(void);
159static void invoke_rcu_callbacks(struct rcu_state *rsp, struct rcu_data *rdp); 166static void invoke_rcu_callbacks(struct rcu_state *rsp, struct rcu_data *rdp);
160 167
161/* rcuc/rcub kthread realtime priority */ 168/* rcuc/rcub kthread realtime priority */
169#ifdef CONFIG_RCU_KTHREAD_PRIO
162static int kthread_prio = CONFIG_RCU_KTHREAD_PRIO; 170static int kthread_prio = CONFIG_RCU_KTHREAD_PRIO;
171#else /* #ifdef CONFIG_RCU_KTHREAD_PRIO */
172static int kthread_prio = IS_ENABLED(CONFIG_RCU_BOOST) ? 1 : 0;
173#endif /* #else #ifdef CONFIG_RCU_KTHREAD_PRIO */
163module_param(kthread_prio, int, 0644); 174module_param(kthread_prio, int, 0644);
164 175
165/* Delay in jiffies for grace-period initialization delays, debug only. */ 176/* Delay in jiffies for grace-period initialization delays, debug only. */
177
178#ifdef CONFIG_RCU_TORTURE_TEST_SLOW_PREINIT
179static int gp_preinit_delay = CONFIG_RCU_TORTURE_TEST_SLOW_PREINIT_DELAY;
180module_param(gp_preinit_delay, int, 0644);
181#else /* #ifdef CONFIG_RCU_TORTURE_TEST_SLOW_PREINIT */
182static const int gp_preinit_delay;
183#endif /* #else #ifdef CONFIG_RCU_TORTURE_TEST_SLOW_PREINIT */
184
166#ifdef CONFIG_RCU_TORTURE_TEST_SLOW_INIT 185#ifdef CONFIG_RCU_TORTURE_TEST_SLOW_INIT
167static int gp_init_delay = CONFIG_RCU_TORTURE_TEST_SLOW_INIT_DELAY; 186static int gp_init_delay = CONFIG_RCU_TORTURE_TEST_SLOW_INIT_DELAY;
168module_param(gp_init_delay, int, 0644); 187module_param(gp_init_delay, int, 0644);
169#else /* #ifdef CONFIG_RCU_TORTURE_TEST_SLOW_INIT */ 188#else /* #ifdef CONFIG_RCU_TORTURE_TEST_SLOW_INIT */
170static const int gp_init_delay; 189static const int gp_init_delay;
171#endif /* #else #ifdef CONFIG_RCU_TORTURE_TEST_SLOW_INIT */ 190#endif /* #else #ifdef CONFIG_RCU_TORTURE_TEST_SLOW_INIT */
172#define PER_RCU_NODE_PERIOD 10 /* Number of grace periods between delays. */ 191
192#ifdef CONFIG_RCU_TORTURE_TEST_SLOW_CLEANUP
193static int gp_cleanup_delay = CONFIG_RCU_TORTURE_TEST_SLOW_CLEANUP_DELAY;
194module_param(gp_cleanup_delay, int, 0644);
195#else /* #ifdef CONFIG_RCU_TORTURE_TEST_SLOW_CLEANUP */
196static const int gp_cleanup_delay;
197#endif /* #else #ifdef CONFIG_RCU_TORTURE_TEST_SLOW_CLEANUP */
198
199/*
200 * Number of grace periods between delays, normalized by the duration of
 201 * the delay.  The longer the delay, the more grace periods between
202 * each delay. The reason for this normalization is that it means that,
203 * for non-zero delays, the overall slowdown of grace periods is constant
204 * regardless of the duration of the delay. This arrangement balances
205 * the need for long delays to increase some race probabilities with the
206 * need for fast grace periods to increase other race probabilities.
207 */
208#define PER_RCU_NODE_PERIOD 3 /* Number of grace periods between delays. */
173 209
174/* 210/*
175 * Track the rcutorture test sequence number and the update version 211 * Track the rcutorture test sequence number and the update version
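
The normalization described in the comment above is implemented by the
rcu_gp_slow() helper added further down: a delay fires once every
rcu_num_nodes * PER_RCU_NODE_PERIOD * delay grace periods. A worked
illustration, with node count and delays assumed purely for the example:

    /*
     * Assume rcu_num_nodes == 5 and PER_RCU_NODE_PERIOD == 3:
     *
     *   delay = 1 jiffy  : sleep every 5 * 3 * 1 = 15 GPs -> 1/15 jiffy per GP
     *   delay = 4 jiffies: sleep every 5 * 3 * 4 = 60 GPs -> 4/60 = 1/15 jiffy per GP
     *
     * Per-grace-period overhead is 1 / (rcu_num_nodes * PER_RCU_NODE_PERIOD)
     * jiffies no matter which delay is chosen.
     */
    static int should_sleep(unsigned long gpnum, int delay)
    {
            return delay > 0 && !(gpnum % (5 * 3 * delay));
    }
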
@@ -585,7 +621,8 @@ static void rcu_eqs_enter_common(long long oldval, bool user)
585 struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks); 621 struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
586 622
587 trace_rcu_dyntick(TPS("Start"), oldval, rdtp->dynticks_nesting); 623 trace_rcu_dyntick(TPS("Start"), oldval, rdtp->dynticks_nesting);
588 if (!user && !is_idle_task(current)) { 624 if (IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
625 !user && !is_idle_task(current)) {
589 struct task_struct *idle __maybe_unused = 626 struct task_struct *idle __maybe_unused =
590 idle_task(smp_processor_id()); 627 idle_task(smp_processor_id());
591 628
@@ -604,7 +641,8 @@ static void rcu_eqs_enter_common(long long oldval, bool user)
604 smp_mb__before_atomic(); /* See above. */ 641 smp_mb__before_atomic(); /* See above. */
605 atomic_inc(&rdtp->dynticks); 642 atomic_inc(&rdtp->dynticks);
606 smp_mb__after_atomic(); /* Force ordering with next sojourn. */ 643 smp_mb__after_atomic(); /* Force ordering with next sojourn. */
607 WARN_ON_ONCE(atomic_read(&rdtp->dynticks) & 0x1); 644 WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
645 atomic_read(&rdtp->dynticks) & 0x1);
608 rcu_dynticks_task_enter(); 646 rcu_dynticks_task_enter();
609 647
610 /* 648 /*
@@ -630,7 +668,8 @@ static void rcu_eqs_enter(bool user)
630 668
631 rdtp = this_cpu_ptr(&rcu_dynticks); 669 rdtp = this_cpu_ptr(&rcu_dynticks);
632 oldval = rdtp->dynticks_nesting; 670 oldval = rdtp->dynticks_nesting;
633 WARN_ON_ONCE((oldval & DYNTICK_TASK_NEST_MASK) == 0); 671 WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
672 (oldval & DYNTICK_TASK_NEST_MASK) == 0);
634 if ((oldval & DYNTICK_TASK_NEST_MASK) == DYNTICK_TASK_NEST_VALUE) { 673 if ((oldval & DYNTICK_TASK_NEST_MASK) == DYNTICK_TASK_NEST_VALUE) {
635 rdtp->dynticks_nesting = 0; 674 rdtp->dynticks_nesting = 0;
636 rcu_eqs_enter_common(oldval, user); 675 rcu_eqs_enter_common(oldval, user);
@@ -703,7 +742,8 @@ void rcu_irq_exit(void)
703 rdtp = this_cpu_ptr(&rcu_dynticks); 742 rdtp = this_cpu_ptr(&rcu_dynticks);
704 oldval = rdtp->dynticks_nesting; 743 oldval = rdtp->dynticks_nesting;
705 rdtp->dynticks_nesting--; 744 rdtp->dynticks_nesting--;
706 WARN_ON_ONCE(rdtp->dynticks_nesting < 0); 745 WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
746 rdtp->dynticks_nesting < 0);
707 if (rdtp->dynticks_nesting) 747 if (rdtp->dynticks_nesting)
708 trace_rcu_dyntick(TPS("--="), oldval, rdtp->dynticks_nesting); 748 trace_rcu_dyntick(TPS("--="), oldval, rdtp->dynticks_nesting);
709 else 749 else
@@ -728,10 +768,12 @@ static void rcu_eqs_exit_common(long long oldval, int user)
728 atomic_inc(&rdtp->dynticks); 768 atomic_inc(&rdtp->dynticks);
729 /* CPUs seeing atomic_inc() must see later RCU read-side crit sects */ 769 /* CPUs seeing atomic_inc() must see later RCU read-side crit sects */
730 smp_mb__after_atomic(); /* See above. */ 770 smp_mb__after_atomic(); /* See above. */
731 WARN_ON_ONCE(!(atomic_read(&rdtp->dynticks) & 0x1)); 771 WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
772 !(atomic_read(&rdtp->dynticks) & 0x1));
732 rcu_cleanup_after_idle(); 773 rcu_cleanup_after_idle();
733 trace_rcu_dyntick(TPS("End"), oldval, rdtp->dynticks_nesting); 774 trace_rcu_dyntick(TPS("End"), oldval, rdtp->dynticks_nesting);
734 if (!user && !is_idle_task(current)) { 775 if (IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
776 !user && !is_idle_task(current)) {
735 struct task_struct *idle __maybe_unused = 777 struct task_struct *idle __maybe_unused =
736 idle_task(smp_processor_id()); 778 idle_task(smp_processor_id());
737 779
@@ -755,7 +797,7 @@ static void rcu_eqs_exit(bool user)
755 797
756 rdtp = this_cpu_ptr(&rcu_dynticks); 798 rdtp = this_cpu_ptr(&rcu_dynticks);
757 oldval = rdtp->dynticks_nesting; 799 oldval = rdtp->dynticks_nesting;
758 WARN_ON_ONCE(oldval < 0); 800 WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && oldval < 0);
759 if (oldval & DYNTICK_TASK_NEST_MASK) { 801 if (oldval & DYNTICK_TASK_NEST_MASK) {
760 rdtp->dynticks_nesting += DYNTICK_TASK_NEST_VALUE; 802 rdtp->dynticks_nesting += DYNTICK_TASK_NEST_VALUE;
761 } else { 803 } else {
@@ -828,7 +870,8 @@ void rcu_irq_enter(void)
828 rdtp = this_cpu_ptr(&rcu_dynticks); 870 rdtp = this_cpu_ptr(&rcu_dynticks);
829 oldval = rdtp->dynticks_nesting; 871 oldval = rdtp->dynticks_nesting;
830 rdtp->dynticks_nesting++; 872 rdtp->dynticks_nesting++;
831 WARN_ON_ONCE(rdtp->dynticks_nesting == 0); 873 WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
874 rdtp->dynticks_nesting == 0);
832 if (oldval) 875 if (oldval)
833 trace_rcu_dyntick(TPS("++="), oldval, rdtp->dynticks_nesting); 876 trace_rcu_dyntick(TPS("++="), oldval, rdtp->dynticks_nesting);
834 else 877 else
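
Each warning in the hunks above is rewritten so that its condition is
still parsed and type-checked in every configuration, but folds to
constant false when CONFIG_RCU_EQS_DEBUG is not set, letting the
optimizer drop the runtime check while the build still catches bit-rot.
A stand-alone sketch of the idiom, with toy stand-ins for the kernel's
IS_ENABLED() and warning macros:

    #include <stdio.h>

    #define MY_DEBUG 0                      /* flip to 1 for a debug build */
    #define IS_ENABLED(option) (option)     /* toy stand-in for the kernel macro */
    #define WARN_ON(cond) \
            do { if (cond) fprintf(stderr, "warning: %s\n", #cond); } while (0)

    static void check_nesting(long nesting)
    {
            /*
             * With MY_DEBUG == 0 the && folds to constant false and the
             * whole check is optimized out, yet "nesting < 0" is still
             * compiled, unlike code hidden behind #ifdef.
             */
            WARN_ON(IS_ENABLED(MY_DEBUG) && nesting < 0);
    }
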
@@ -1135,8 +1178,9 @@ static void rcu_check_gp_kthread_starvation(struct rcu_state *rsp)
1135 j = jiffies; 1178 j = jiffies;
1136 gpa = READ_ONCE(rsp->gp_activity); 1179 gpa = READ_ONCE(rsp->gp_activity);
1137 if (j - gpa > 2 * HZ) 1180 if (j - gpa > 2 * HZ)
1138 pr_err("%s kthread starved for %ld jiffies!\n", 1181 pr_err("%s kthread starved for %ld jiffies! g%lu c%lu f%#x\n",
1139 rsp->name, j - gpa); 1182 rsp->name, j - gpa,
1183 rsp->gpnum, rsp->completed, rsp->gp_flags);
1140} 1184}
1141 1185
1142/* 1186/*
@@ -1732,6 +1776,13 @@ static void note_gp_changes(struct rcu_state *rsp, struct rcu_data *rdp)
1732 rcu_gp_kthread_wake(rsp); 1776 rcu_gp_kthread_wake(rsp);
1733} 1777}
1734 1778
1779static void rcu_gp_slow(struct rcu_state *rsp, int delay)
1780{
1781 if (delay > 0 &&
1782 !(rsp->gpnum % (rcu_num_nodes * PER_RCU_NODE_PERIOD * delay)))
1783 schedule_timeout_uninterruptible(delay);
1784}
1785
1735/* 1786/*
1736 * Initialize a new grace period. Return 0 if no grace period required. 1787 * Initialize a new grace period. Return 0 if no grace period required.
1737 */ 1788 */
@@ -1774,6 +1825,7 @@ static int rcu_gp_init(struct rcu_state *rsp)
1774 * will handle subsequent offline CPUs. 1825 * will handle subsequent offline CPUs.
1775 */ 1826 */
1776 rcu_for_each_leaf_node(rsp, rnp) { 1827 rcu_for_each_leaf_node(rsp, rnp) {
1828 rcu_gp_slow(rsp, gp_preinit_delay);
1777 raw_spin_lock_irq(&rnp->lock); 1829 raw_spin_lock_irq(&rnp->lock);
1778 smp_mb__after_unlock_lock(); 1830 smp_mb__after_unlock_lock();
1779 if (rnp->qsmaskinit == rnp->qsmaskinitnext && 1831 if (rnp->qsmaskinit == rnp->qsmaskinitnext &&
@@ -1830,6 +1882,7 @@ static int rcu_gp_init(struct rcu_state *rsp)
1830 * process finishes, because this kthread handles both. 1882 * process finishes, because this kthread handles both.
1831 */ 1883 */
1832 rcu_for_each_node_breadth_first(rsp, rnp) { 1884 rcu_for_each_node_breadth_first(rsp, rnp) {
1885 rcu_gp_slow(rsp, gp_init_delay);
1833 raw_spin_lock_irq(&rnp->lock); 1886 raw_spin_lock_irq(&rnp->lock);
1834 smp_mb__after_unlock_lock(); 1887 smp_mb__after_unlock_lock();
1835 rdp = this_cpu_ptr(rsp->rda); 1888 rdp = this_cpu_ptr(rsp->rda);
@@ -1847,9 +1900,6 @@ static int rcu_gp_init(struct rcu_state *rsp)
1847 raw_spin_unlock_irq(&rnp->lock); 1900 raw_spin_unlock_irq(&rnp->lock);
1848 cond_resched_rcu_qs(); 1901 cond_resched_rcu_qs();
1849 WRITE_ONCE(rsp->gp_activity, jiffies); 1902 WRITE_ONCE(rsp->gp_activity, jiffies);
1850 if (gp_init_delay > 0 &&
1851 !(rsp->gpnum % (rcu_num_nodes * PER_RCU_NODE_PERIOD)))
1852 schedule_timeout_uninterruptible(gp_init_delay);
1853 } 1903 }
1854 1904
1855 return 1; 1905 return 1;
@@ -1944,6 +1994,7 @@ static void rcu_gp_cleanup(struct rcu_state *rsp)
1944 raw_spin_unlock_irq(&rnp->lock); 1994 raw_spin_unlock_irq(&rnp->lock);
1945 cond_resched_rcu_qs(); 1995 cond_resched_rcu_qs();
1946 WRITE_ONCE(rsp->gp_activity, jiffies); 1996 WRITE_ONCE(rsp->gp_activity, jiffies);
1997 rcu_gp_slow(rsp, gp_cleanup_delay);
1947 } 1998 }
1948 rnp = rcu_get_root(rsp); 1999 rnp = rcu_get_root(rsp);
1949 raw_spin_lock_irq(&rnp->lock); 2000 raw_spin_lock_irq(&rnp->lock);
@@ -2138,6 +2189,7 @@ static void rcu_report_qs_rsp(struct rcu_state *rsp, unsigned long flags)
2138 __releases(rcu_get_root(rsp)->lock) 2189 __releases(rcu_get_root(rsp)->lock)
2139{ 2190{
2140 WARN_ON_ONCE(!rcu_gp_in_progress(rsp)); 2191 WARN_ON_ONCE(!rcu_gp_in_progress(rsp));
2192 WRITE_ONCE(rsp->gp_flags, READ_ONCE(rsp->gp_flags) | RCU_GP_FLAG_FQS);
2141 raw_spin_unlock_irqrestore(&rcu_get_root(rsp)->lock, flags); 2193 raw_spin_unlock_irqrestore(&rcu_get_root(rsp)->lock, flags);
2142 rcu_gp_kthread_wake(rsp); 2194 rcu_gp_kthread_wake(rsp);
2143} 2195}
@@ -2335,8 +2387,6 @@ rcu_check_quiescent_state(struct rcu_state *rsp, struct rcu_data *rdp)
2335 rcu_report_qs_rdp(rdp->cpu, rsp, rdp); 2387 rcu_report_qs_rdp(rdp->cpu, rsp, rdp);
2336} 2388}
2337 2389
2338#ifdef CONFIG_HOTPLUG_CPU
2339
2340/* 2390/*
2341 * Send the specified CPU's RCU callbacks to the orphanage. The 2391 * Send the specified CPU's RCU callbacks to the orphanage. The
2342 * specified CPU must be offline, and the caller must hold the 2392 * specified CPU must be offline, and the caller must hold the
@@ -2347,7 +2397,7 @@ rcu_send_cbs_to_orphanage(int cpu, struct rcu_state *rsp,
2347 struct rcu_node *rnp, struct rcu_data *rdp) 2397 struct rcu_node *rnp, struct rcu_data *rdp)
2348{ 2398{
2349 /* No-CBs CPUs do not have orphanable callbacks. */ 2399 /* No-CBs CPUs do not have orphanable callbacks. */
2350 if (rcu_is_nocb_cpu(rdp->cpu)) 2400 if (!IS_ENABLED(CONFIG_HOTPLUG_CPU) || rcu_is_nocb_cpu(rdp->cpu))
2351 return; 2401 return;
2352 2402
2353 /* 2403 /*
@@ -2406,7 +2456,8 @@ static void rcu_adopt_orphan_cbs(struct rcu_state *rsp, unsigned long flags)
2406 struct rcu_data *rdp = raw_cpu_ptr(rsp->rda); 2456 struct rcu_data *rdp = raw_cpu_ptr(rsp->rda);
2407 2457
2408 /* No-CBs CPUs are handled specially. */ 2458 /* No-CBs CPUs are handled specially. */
2409 if (rcu_nocb_adopt_orphan_cbs(rsp, rdp, flags)) 2459 if (!IS_ENABLED(CONFIG_HOTPLUG_CPU) ||
2460 rcu_nocb_adopt_orphan_cbs(rsp, rdp, flags))
2410 return; 2461 return;
2411 2462
2412 /* Do the accounting first. */ 2463 /* Do the accounting first. */
@@ -2453,6 +2504,9 @@ static void rcu_cleanup_dying_cpu(struct rcu_state *rsp)
2453 RCU_TRACE(struct rcu_data *rdp = this_cpu_ptr(rsp->rda)); 2504 RCU_TRACE(struct rcu_data *rdp = this_cpu_ptr(rsp->rda));
2454 RCU_TRACE(struct rcu_node *rnp = rdp->mynode); 2505 RCU_TRACE(struct rcu_node *rnp = rdp->mynode);
2455 2506
2507 if (!IS_ENABLED(CONFIG_HOTPLUG_CPU))
2508 return;
2509
2456 RCU_TRACE(mask = rdp->grpmask); 2510 RCU_TRACE(mask = rdp->grpmask);
2457 trace_rcu_grace_period(rsp->name, 2511 trace_rcu_grace_period(rsp->name,
2458 rnp->gpnum + 1 - !!(rnp->qsmask & mask), 2512 rnp->gpnum + 1 - !!(rnp->qsmask & mask),
@@ -2481,7 +2535,8 @@ static void rcu_cleanup_dead_rnp(struct rcu_node *rnp_leaf)
2481 long mask; 2535 long mask;
2482 struct rcu_node *rnp = rnp_leaf; 2536 struct rcu_node *rnp = rnp_leaf;
2483 2537
2484 if (rnp->qsmaskinit || rcu_preempt_has_tasks(rnp)) 2538 if (!IS_ENABLED(CONFIG_HOTPLUG_CPU) ||
2539 rnp->qsmaskinit || rcu_preempt_has_tasks(rnp))
2485 return; 2540 return;
2486 for (;;) { 2541 for (;;) {
2487 mask = rnp->grpmask; 2542 mask = rnp->grpmask;
@@ -2512,6 +2567,9 @@ static void rcu_cleanup_dying_idle_cpu(int cpu, struct rcu_state *rsp)
2512 struct rcu_data *rdp = per_cpu_ptr(rsp->rda, cpu); 2567 struct rcu_data *rdp = per_cpu_ptr(rsp->rda, cpu);
2513 struct rcu_node *rnp = rdp->mynode; /* Outgoing CPU's rdp & rnp. */ 2568 struct rcu_node *rnp = rdp->mynode; /* Outgoing CPU's rdp & rnp. */
2514 2569
2570 if (!IS_ENABLED(CONFIG_HOTPLUG_CPU))
2571 return;
2572
2515 /* Remove outgoing CPU from mask in the leaf rcu_node structure. */ 2573 /* Remove outgoing CPU from mask in the leaf rcu_node structure. */
2516 mask = rdp->grpmask; 2574 mask = rdp->grpmask;
2517 raw_spin_lock_irqsave(&rnp->lock, flags); 2575 raw_spin_lock_irqsave(&rnp->lock, flags);
@@ -2533,6 +2591,9 @@ static void rcu_cleanup_dead_cpu(int cpu, struct rcu_state *rsp)
2533 struct rcu_data *rdp = per_cpu_ptr(rsp->rda, cpu); 2591 struct rcu_data *rdp = per_cpu_ptr(rsp->rda, cpu);
2534 struct rcu_node *rnp = rdp->mynode; /* Outgoing CPU's rdp & rnp. */ 2592 struct rcu_node *rnp = rdp->mynode; /* Outgoing CPU's rdp & rnp. */
2535 2593
2594 if (!IS_ENABLED(CONFIG_HOTPLUG_CPU))
2595 return;
2596
2536 /* Adjust any no-longer-needed kthreads. */ 2597 /* Adjust any no-longer-needed kthreads. */
2537 rcu_boost_kthread_setaffinity(rnp, -1); 2598 rcu_boost_kthread_setaffinity(rnp, -1);
2538 2599
@@ -2547,26 +2608,6 @@ static void rcu_cleanup_dead_cpu(int cpu, struct rcu_state *rsp)
2547 cpu, rdp->qlen, rdp->nxtlist); 2608 cpu, rdp->qlen, rdp->nxtlist);
2548} 2609}
2549 2610
2550#else /* #ifdef CONFIG_HOTPLUG_CPU */
2551
2552static void rcu_cleanup_dying_cpu(struct rcu_state *rsp)
2553{
2554}
2555
2556static void __maybe_unused rcu_cleanup_dead_rnp(struct rcu_node *rnp_leaf)
2557{
2558}
2559
2560static void rcu_cleanup_dying_idle_cpu(int cpu, struct rcu_state *rsp)
2561{
2562}
2563
2564static void rcu_cleanup_dead_cpu(int cpu, struct rcu_state *rsp)
2565{
2566}
2567
2568#endif /* #else #ifdef CONFIG_HOTPLUG_CPU */
2569
2570/* 2611/*
2571 * Invoke any RCU callbacks that have made it to the end of their grace 2612 * Invoke any RCU callbacks that have made it to the end of their grace
2572 * period.  Throttle as specified by rdp->blimit. 2613 * period.  Throttle as specified by rdp->blimit.
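
The #else block of empty stubs deleted above became unnecessary once each
of these functions gained an early "if (!IS_ENABLED(CONFIG_HOTPLUG_CPU))
return;", which constant-folds to an unconditional return on
!HOTPLUG_CPU builds and lets the compiler discard the rest of the body.
A minimal before/after sketch, with CONFIG_FEATURE as a placeholder
Kconfig switch:

    #define CONFIG_FEATURE 0            /* placeholder Kconfig switch */
    #define IS_ENABLED(option) (option)

    /* Old style: two definitions, one of which is never even compiled. */
    #if CONFIG_FEATURE
    static void feature_cleanup(int cpu) { /* real work */ (void)cpu; }
    #else
    static void feature_cleanup(int cpu) { (void)cpu; }
    #endif

    /* New style: one definition, always compiled, folded away when disabled. */
    static void feature_cleanup_new(int cpu)
    {
            if (!IS_ENABLED(CONFIG_FEATURE))
                    return;
            /* real work goes here, still type-checked when the feature is off */
            (void)cpu;
    }
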
@@ -2731,10 +2772,6 @@ static void force_qs_rnp(struct rcu_state *rsp,
2731 mask = 0; 2772 mask = 0;
2732 raw_spin_lock_irqsave(&rnp->lock, flags); 2773 raw_spin_lock_irqsave(&rnp->lock, flags);
2733 smp_mb__after_unlock_lock(); 2774 smp_mb__after_unlock_lock();
2734 if (!rcu_gp_in_progress(rsp)) {
2735 raw_spin_unlock_irqrestore(&rnp->lock, flags);
2736 return;
2737 }
2738 if (rnp->qsmask == 0) { 2775 if (rnp->qsmask == 0) {
2739 if (rcu_state_p == &rcu_sched_state || 2776 if (rcu_state_p == &rcu_sched_state ||
2740 rsp != rcu_state_p || 2777 rsp != rcu_state_p ||
@@ -2764,8 +2801,6 @@ static void force_qs_rnp(struct rcu_state *rsp,
2764 bit = 1; 2801 bit = 1;
2765 for (; cpu <= rnp->grphi; cpu++, bit <<= 1) { 2802 for (; cpu <= rnp->grphi; cpu++, bit <<= 1) {
2766 if ((rnp->qsmask & bit) != 0) { 2803 if ((rnp->qsmask & bit) != 0) {
2767 if ((rnp->qsmaskinit & bit) == 0)
2768 *isidle = false; /* Pending hotplug. */
2769 if (f(per_cpu_ptr(rsp->rda, cpu), isidle, maxj)) 2804 if (f(per_cpu_ptr(rsp->rda, cpu), isidle, maxj))
2770 mask |= bit; 2805 mask |= bit;
2771 } 2806 }
@@ -3287,7 +3322,7 @@ void synchronize_sched_expedited(void)
3287 if (ULONG_CMP_GE((ulong)atomic_long_read(&rsp->expedited_start), 3322 if (ULONG_CMP_GE((ulong)atomic_long_read(&rsp->expedited_start),
3288 (ulong)atomic_long_read(&rsp->expedited_done) + 3323 (ulong)atomic_long_read(&rsp->expedited_done) +
3289 ULONG_MAX / 8)) { 3324 ULONG_MAX / 8)) {
3290 synchronize_sched(); 3325 wait_rcu_gp(call_rcu_sched);
3291 atomic_long_inc(&rsp->expedited_wrap); 3326 atomic_long_inc(&rsp->expedited_wrap);
3292 return; 3327 return;
3293 } 3328 }
@@ -3493,7 +3528,7 @@ static int rcu_pending(void)
3493 * non-NULL, store an indication of whether all callbacks are lazy. 3528 * non-NULL, store an indication of whether all callbacks are lazy.
3494 * (If there are no callbacks, all of them are deemed to be lazy.) 3529 * (If there are no callbacks, all of them are deemed to be lazy.)
3495 */ 3530 */
3496static int __maybe_unused rcu_cpu_has_callbacks(bool *all_lazy) 3531static bool __maybe_unused rcu_cpu_has_callbacks(bool *all_lazy)
3497{ 3532{
3498 bool al = true; 3533 bool al = true;
3499 bool hc = false; 3534 bool hc = false;
@@ -3780,7 +3815,7 @@ rcu_init_percpu_data(int cpu, struct rcu_state *rsp)
3780 rdp->gpnum = rnp->completed; /* Make CPU later note any new GP. */ 3815 rdp->gpnum = rnp->completed; /* Make CPU later note any new GP. */
3781 rdp->completed = rnp->completed; 3816 rdp->completed = rnp->completed;
3782 rdp->passed_quiesce = false; 3817 rdp->passed_quiesce = false;
3783 rdp->rcu_qs_ctr_snap = __this_cpu_read(rcu_qs_ctr); 3818 rdp->rcu_qs_ctr_snap = per_cpu(rcu_qs_ctr, cpu);
3784 rdp->qs_pending = false; 3819 rdp->qs_pending = false;
3785 trace_rcu_grace_period(rsp->name, rdp->gpnum, TPS("cpuonl")); 3820 trace_rcu_grace_period(rsp->name, rdp->gpnum, TPS("cpuonl"));
3786 raw_spin_unlock_irqrestore(&rnp->lock, flags); 3821 raw_spin_unlock_irqrestore(&rnp->lock, flags);
@@ -3924,16 +3959,16 @@ void rcu_scheduler_starting(void)
3924 3959
3925/* 3960/*
3926 * Compute the per-level fanout, either using the exact fanout specified 3961 * Compute the per-level fanout, either using the exact fanout specified
3927 * or balancing the tree, depending on CONFIG_RCU_FANOUT_EXACT. 3962 * or balancing the tree, depending on the rcu_fanout_exact boot parameter.
3928 */ 3963 */
3929static void __init rcu_init_levelspread(struct rcu_state *rsp) 3964static void __init rcu_init_levelspread(struct rcu_state *rsp)
3930{ 3965{
3931 int i; 3966 int i;
3932 3967
3933 if (IS_ENABLED(CONFIG_RCU_FANOUT_EXACT)) { 3968 if (rcu_fanout_exact) {
3934 rsp->levelspread[rcu_num_lvls - 1] = rcu_fanout_leaf; 3969 rsp->levelspread[rcu_num_lvls - 1] = rcu_fanout_leaf;
3935 for (i = rcu_num_lvls - 2; i >= 0; i--) 3970 for (i = rcu_num_lvls - 2; i >= 0; i--)
3936 rsp->levelspread[i] = CONFIG_RCU_FANOUT; 3971 rsp->levelspread[i] = RCU_FANOUT;
3937 } else { 3972 } else {
3938 int ccur; 3973 int ccur;
3939 int cprv; 3974 int cprv;
@@ -3971,9 +4006,9 @@ static void __init rcu_init_one(struct rcu_state *rsp,
3971 4006
3972 BUILD_BUG_ON(MAX_RCU_LVLS > ARRAY_SIZE(buf)); /* Fix buf[] init! */ 4007 BUILD_BUG_ON(MAX_RCU_LVLS > ARRAY_SIZE(buf)); /* Fix buf[] init! */
3973 4008
3974 /* Silence gcc 4.8 warning about array index out of range. */ 4009 /* Silence gcc 4.8 false positive about array index out of range. */
3975 if (rcu_num_lvls > RCU_NUM_LVLS) 4010 if (rcu_num_lvls <= 0 || rcu_num_lvls > RCU_NUM_LVLS)
3976 panic("rcu_init_one: rcu_num_lvls overflow"); 4011 panic("rcu_init_one: rcu_num_lvls out of range");
3977 4012
3978 /* Initialize the level-tracking arrays. */ 4013 /* Initialize the level-tracking arrays. */
3979 4014
@@ -4059,7 +4094,7 @@ static void __init rcu_init_geometry(void)
4059 jiffies_till_next_fqs = d; 4094 jiffies_till_next_fqs = d;
4060 4095
4061 /* If the compile-time values are accurate, just leave. */ 4096 /* If the compile-time values are accurate, just leave. */
4062 if (rcu_fanout_leaf == CONFIG_RCU_FANOUT_LEAF && 4097 if (rcu_fanout_leaf == RCU_FANOUT_LEAF &&
4063 nr_cpu_ids == NR_CPUS) 4098 nr_cpu_ids == NR_CPUS)
4064 return; 4099 return;
4065 pr_info("RCU: Adjusting geometry for rcu_fanout_leaf=%d, nr_cpu_ids=%d\n", 4100 pr_info("RCU: Adjusting geometry for rcu_fanout_leaf=%d, nr_cpu_ids=%d\n",
@@ -4073,7 +4108,7 @@ static void __init rcu_init_geometry(void)
4073 rcu_capacity[0] = 1; 4108 rcu_capacity[0] = 1;
4074 rcu_capacity[1] = rcu_fanout_leaf; 4109 rcu_capacity[1] = rcu_fanout_leaf;
4075 for (i = 2; i <= MAX_RCU_LVLS; i++) 4110 for (i = 2; i <= MAX_RCU_LVLS; i++)
4076 rcu_capacity[i] = rcu_capacity[i - 1] * CONFIG_RCU_FANOUT; 4111 rcu_capacity[i] = rcu_capacity[i - 1] * RCU_FANOUT;
4077 4112
4078 /* 4113 /*
4079 * The boot-time rcu_fanout_leaf parameter is only permitted 4114 * The boot-time rcu_fanout_leaf parameter is only permitted
@@ -4083,7 +4118,7 @@ static void __init rcu_init_geometry(void)
4083 * the configured number of CPUs. Complain and fall back to the 4118 * the configured number of CPUs. Complain and fall back to the
4084 * compile-time values if these limits are exceeded. 4119 * compile-time values if these limits are exceeded.
4085 */ 4120 */
4086 if (rcu_fanout_leaf < CONFIG_RCU_FANOUT_LEAF || 4121 if (rcu_fanout_leaf < RCU_FANOUT_LEAF ||
4087 rcu_fanout_leaf > sizeof(unsigned long) * 8 || 4122 rcu_fanout_leaf > sizeof(unsigned long) * 8 ||
4088 n > rcu_capacity[MAX_RCU_LVLS]) { 4123 n > rcu_capacity[MAX_RCU_LVLS]) {
4089 WARN_ON(1); 4124 WARN_ON(1);
@@ -4109,6 +4144,28 @@ static void __init rcu_init_geometry(void)
4109 rcu_num_nodes -= n; 4144 rcu_num_nodes -= n;
4110} 4145}
4111 4146
4147/*
4148 * Dump out the structure of the rcu_node combining tree associated
4149 * with the rcu_state structure referenced by rsp.
4150 */
4151static void __init rcu_dump_rcu_node_tree(struct rcu_state *rsp)
4152{
4153 int level = 0;
4154 struct rcu_node *rnp;
4155
4156 pr_info("rcu_node tree layout dump\n");
4157 pr_info(" ");
4158 rcu_for_each_node_breadth_first(rsp, rnp) {
4159 if (rnp->level != level) {
4160 pr_cont("\n");
4161 pr_info(" ");
4162 level = rnp->level;
4163 }
4164 pr_cont("%d:%d ^%d ", rnp->grplo, rnp->grphi, rnp->grpnum);
4165 }
4166 pr_cont("\n");
4167}
4168
4112void __init rcu_init(void) 4169void __init rcu_init(void)
4113{ 4170{
4114 int cpu; 4171 int cpu;
@@ -4119,6 +4176,8 @@ void __init rcu_init(void)
4119 rcu_init_geometry(); 4176 rcu_init_geometry();
4120 rcu_init_one(&rcu_bh_state, &rcu_bh_data); 4177 rcu_init_one(&rcu_bh_state, &rcu_bh_data);
4121 rcu_init_one(&rcu_sched_state, &rcu_sched_data); 4178 rcu_init_one(&rcu_sched_state, &rcu_sched_data);
4179 if (dump_tree)
4180 rcu_dump_rcu_node_tree(&rcu_sched_state);
4122 __rcu_init_preempt(); 4181 __rcu_init_preempt();
4123 open_softirq(RCU_SOFTIRQ, rcu_process_callbacks); 4182 open_softirq(RCU_SOFTIRQ, rcu_process_callbacks);
4124 4183
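
Because dump_tree is declared as a 0444 module_param in kernel/rcu/tree.c,
the layout dump can be requested from the boot command line; assuming the
usual rcutree. parameter prefix for this file, booting with
rcutree.dump_tree=1 prints one line of grplo:grphi ^grpnum triples per
level of the combining tree, matching the rcu_dump_rcu_node_tree() loop
added above.
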
diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
index a69d3dab2ec4..4adb7ca0bf47 100644
--- a/kernel/rcu/tree.h
+++ b/kernel/rcu/tree.h
@@ -35,11 +35,33 @@
35 * In practice, this did work well going from three levels to four. 35 * In practice, this did work well going from three levels to four.
36 * Of course, your mileage may vary. 36 * Of course, your mileage may vary.
37 */ 37 */
38
38#define MAX_RCU_LVLS 4 39#define MAX_RCU_LVLS 4
39#define RCU_FANOUT_1 (CONFIG_RCU_FANOUT_LEAF) 40
40#define RCU_FANOUT_2 (RCU_FANOUT_1 * CONFIG_RCU_FANOUT) 41#ifdef CONFIG_RCU_FANOUT
41#define RCU_FANOUT_3 (RCU_FANOUT_2 * CONFIG_RCU_FANOUT) 42#define RCU_FANOUT CONFIG_RCU_FANOUT
42#define RCU_FANOUT_4 (RCU_FANOUT_3 * CONFIG_RCU_FANOUT) 43#else /* #ifdef CONFIG_RCU_FANOUT */
44# ifdef CONFIG_64BIT
45# define RCU_FANOUT 64
46# else
47# define RCU_FANOUT 32
48# endif
49#endif /* #else #ifdef CONFIG_RCU_FANOUT */
50
51#ifdef CONFIG_RCU_FANOUT_LEAF
52#define RCU_FANOUT_LEAF CONFIG_RCU_FANOUT_LEAF
53#else /* #ifdef CONFIG_RCU_FANOUT_LEAF */
54# ifdef CONFIG_64BIT
55# define RCU_FANOUT_LEAF 64
56# else
57# define RCU_FANOUT_LEAF 32
58# endif
59#endif /* #else #ifdef CONFIG_RCU_FANOUT_LEAF */
60
61#define RCU_FANOUT_1 (RCU_FANOUT_LEAF)
62#define RCU_FANOUT_2 (RCU_FANOUT_1 * RCU_FANOUT)
63#define RCU_FANOUT_3 (RCU_FANOUT_2 * RCU_FANOUT)
64#define RCU_FANOUT_4 (RCU_FANOUT_3 * RCU_FANOUT)
43 65
44#if NR_CPUS <= RCU_FANOUT_1 66#if NR_CPUS <= RCU_FANOUT_1
45# define RCU_NUM_LVLS 1 67# define RCU_NUM_LVLS 1
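
The reworked macros make the combining tree's capacity easy to compute:
each additional level multiplies the previous capacity by RCU_FANOUT.
A sketch with the 64-bit defaults from above plugged in (the numbers
follow directly from the macro definitions):

    /* Capacity per level with the 64-bit defaults RCU_FANOUT(_LEAF) == 64. */
    #define RCU_FANOUT      64
    #define RCU_FANOUT_LEAF 64
    #define RCU_FANOUT_1 (RCU_FANOUT_LEAF)                  /*         64 CPUs */
    #define RCU_FANOUT_2 (RCU_FANOUT_1 * RCU_FANOUT)        /*      4,096 CPUs */
    #define RCU_FANOUT_3 (RCU_FANOUT_2 * RCU_FANOUT)        /*    262,144 CPUs */
    #define RCU_FANOUT_4 (RCU_FANOUT_3 * RCU_FANOUT)        /* 16,777,216 CPUs */

    _Static_assert(RCU_FANOUT_4 == 16777216, "four levels cover 2^24 CPUs");
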
@@ -170,7 +192,6 @@ struct rcu_node {
170 /* if there is no such task. If there */ 192 /* if there is no such task. If there */
171 /* is no current expedited grace period, */ 193 /* is no current expedited grace period, */
172 /* then there cannot be any such task. */ 194 /* then there cannot be any such task. */
173#ifdef CONFIG_RCU_BOOST
174 struct list_head *boost_tasks; 195 struct list_head *boost_tasks;
175 /* Pointer to first task that needs to be */ 196 /* Pointer to first task that needs to be */
176 /* priority boosted, or NULL if no priority */ 197 /* priority boosted, or NULL if no priority */
@@ -208,7 +229,6 @@ struct rcu_node {
208 unsigned long n_balk_nos; 229 unsigned long n_balk_nos;
209 /* Refused to boost: not sure why, though. */ 230 /* Refused to boost: not sure why, though. */
210 /* This can happen due to race conditions. */ 231 /* This can happen due to race conditions. */
211#endif /* #ifdef CONFIG_RCU_BOOST */
212#ifdef CONFIG_RCU_NOCB_CPU 232#ifdef CONFIG_RCU_NOCB_CPU
213 wait_queue_head_t nocb_gp_wq[2]; 233 wait_queue_head_t nocb_gp_wq[2];
214 /* Place for rcu_nocb_kthread() to wait GP. */ 234 /* Place for rcu_nocb_kthread() to wait GP. */
@@ -519,14 +539,11 @@ extern struct list_head rcu_struct_flavors;
519 * RCU implementation internal declarations: 539 * RCU implementation internal declarations:
520 */ 540 */
521extern struct rcu_state rcu_sched_state; 541extern struct rcu_state rcu_sched_state;
522DECLARE_PER_CPU(struct rcu_data, rcu_sched_data);
523 542
524extern struct rcu_state rcu_bh_state; 543extern struct rcu_state rcu_bh_state;
525DECLARE_PER_CPU(struct rcu_data, rcu_bh_data);
526 544
527#ifdef CONFIG_PREEMPT_RCU 545#ifdef CONFIG_PREEMPT_RCU
528extern struct rcu_state rcu_preempt_state; 546extern struct rcu_state rcu_preempt_state;
529DECLARE_PER_CPU(struct rcu_data, rcu_preempt_data);
530#endif /* #ifdef CONFIG_PREEMPT_RCU */ 547#endif /* #ifdef CONFIG_PREEMPT_RCU */
531 548
532#ifdef CONFIG_RCU_BOOST 549#ifdef CONFIG_RCU_BOOST
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 58b1ebdc4387..32664347091a 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -43,7 +43,17 @@ DEFINE_PER_CPU(unsigned int, rcu_cpu_kthread_status);
43DEFINE_PER_CPU(unsigned int, rcu_cpu_kthread_loops); 43DEFINE_PER_CPU(unsigned int, rcu_cpu_kthread_loops);
44DEFINE_PER_CPU(char, rcu_cpu_has_work); 44DEFINE_PER_CPU(char, rcu_cpu_has_work);
45 45
46#endif /* #ifdef CONFIG_RCU_BOOST */ 46#else /* #ifdef CONFIG_RCU_BOOST */
47
48/*
49 * Some architectures do not define rt_mutexes, but if !CONFIG_RCU_BOOST,
50 * all uses are in dead code. Provide a definition to keep the compiler
51 * happy, but add WARN_ON_ONCE() to complain if used in the wrong place.
52 * This probably needs to be excluded from -rt builds.
53 */
54#define rt_mutex_owner(a) ({ WARN_ON_ONCE(1); NULL; })
55
56#endif /* #else #ifdef CONFIG_RCU_BOOST */
47 57
48#ifdef CONFIG_RCU_NOCB_CPU 58#ifdef CONFIG_RCU_NOCB_CPU
49static cpumask_var_t rcu_nocb_mask; /* CPUs to have callbacks offloaded. */ 59static cpumask_var_t rcu_nocb_mask; /* CPUs to have callbacks offloaded. */
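
The rt_mutex_owner() replacement above is a poisoned stub: now that the
boost-related code is compiled unconditionally, architectures lacking
rt_mutexes still need some definition for the dead call sites, and the
WARN_ON_ONCE() complains loudly if one of them is ever actually reached.
The same trick in isolation (names invented, toy WARN_ON_ONCE() that
warns on every call rather than once):

    #include <stdio.h>

    #define WARN_ON_ONCE(cond) \
            ((cond) ? (fprintf(stderr, "unexpected call\n"), 1) : 0)

    /*
     * Poisoned stub for a facility this configuration does not provide:
     * type-correct as an expression, so dead callers still compile, but
     * it complains if control ever reaches it.
     */
    #define lock_owner(lock) ({ WARN_ON_ONCE(1); (void)(lock); (void *)0; })
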
@@ -60,11 +70,11 @@ static void __init rcu_bootup_announce_oddness(void)
60{ 70{
61 if (IS_ENABLED(CONFIG_RCU_TRACE)) 71 if (IS_ENABLED(CONFIG_RCU_TRACE))
62 pr_info("\tRCU debugfs-based tracing is enabled.\n"); 72 pr_info("\tRCU debugfs-based tracing is enabled.\n");
63 if ((IS_ENABLED(CONFIG_64BIT) && CONFIG_RCU_FANOUT != 64) || 73 if ((IS_ENABLED(CONFIG_64BIT) && RCU_FANOUT != 64) ||
64 (!IS_ENABLED(CONFIG_64BIT) && CONFIG_RCU_FANOUT != 32)) 74 (!IS_ENABLED(CONFIG_64BIT) && RCU_FANOUT != 32))
65 pr_info("\tCONFIG_RCU_FANOUT set to non-default value of %d\n", 75 pr_info("\tCONFIG_RCU_FANOUT set to non-default value of %d\n",
66 CONFIG_RCU_FANOUT); 76 RCU_FANOUT);
67 if (IS_ENABLED(CONFIG_RCU_FANOUT_EXACT)) 77 if (rcu_fanout_exact)
68 pr_info("\tHierarchical RCU autobalancing is disabled.\n"); 78 pr_info("\tHierarchical RCU autobalancing is disabled.\n");
69 if (IS_ENABLED(CONFIG_RCU_FAST_NO_HZ)) 79 if (IS_ENABLED(CONFIG_RCU_FAST_NO_HZ))
70 pr_info("\tRCU dyntick-idle grace-period acceleration is enabled.\n"); 80 pr_info("\tRCU dyntick-idle grace-period acceleration is enabled.\n");
@@ -76,10 +86,10 @@ static void __init rcu_bootup_announce_oddness(void)
76 pr_info("\tAdditional per-CPU info printed with stalls.\n"); 86 pr_info("\tAdditional per-CPU info printed with stalls.\n");
77 if (NUM_RCU_LVL_4 != 0) 87 if (NUM_RCU_LVL_4 != 0)
78 pr_info("\tFour-level hierarchy is enabled.\n"); 88 pr_info("\tFour-level hierarchy is enabled.\n");
79 if (CONFIG_RCU_FANOUT_LEAF != 16) 89 if (RCU_FANOUT_LEAF != 16)
80 pr_info("\tBuild-time adjustment of leaf fanout to %d.\n", 90 pr_info("\tBuild-time adjustment of leaf fanout to %d.\n",
81 CONFIG_RCU_FANOUT_LEAF); 91 RCU_FANOUT_LEAF);
82 if (rcu_fanout_leaf != CONFIG_RCU_FANOUT_LEAF) 92 if (rcu_fanout_leaf != RCU_FANOUT_LEAF)
83 pr_info("\tBoot-time adjustment of leaf fanout to %d.\n", rcu_fanout_leaf); 93 pr_info("\tBoot-time adjustment of leaf fanout to %d.\n", rcu_fanout_leaf);
84 if (nr_cpu_ids != NR_CPUS) 94 if (nr_cpu_ids != NR_CPUS)
85 pr_info("\tRCU restricting CPUs from NR_CPUS=%d to nr_cpu_ids=%d.\n", NR_CPUS, nr_cpu_ids); 95 pr_info("\tRCU restricting CPUs from NR_CPUS=%d to nr_cpu_ids=%d.\n", NR_CPUS, nr_cpu_ids);
@@ -90,7 +100,8 @@ static void __init rcu_bootup_announce_oddness(void)
90#ifdef CONFIG_PREEMPT_RCU 100#ifdef CONFIG_PREEMPT_RCU
91 101
92RCU_STATE_INITIALIZER(rcu_preempt, 'p', call_rcu); 102RCU_STATE_INITIALIZER(rcu_preempt, 'p', call_rcu);
93static struct rcu_state *rcu_state_p = &rcu_preempt_state; 103static struct rcu_state *const rcu_state_p = &rcu_preempt_state;
104static struct rcu_data __percpu *const rcu_data_p = &rcu_preempt_data;
94 105
95static int rcu_preempted_readers_exp(struct rcu_node *rnp); 106static int rcu_preempted_readers_exp(struct rcu_node *rnp);
96static void rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp, 107static void rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp,
@@ -116,11 +127,11 @@ static void __init rcu_bootup_announce(void)
116 */ 127 */
117static void rcu_preempt_qs(void) 128static void rcu_preempt_qs(void)
118{ 129{
119 if (!__this_cpu_read(rcu_preempt_data.passed_quiesce)) { 130 if (!__this_cpu_read(rcu_data_p->passed_quiesce)) {
120 trace_rcu_grace_period(TPS("rcu_preempt"), 131 trace_rcu_grace_period(TPS("rcu_preempt"),
121 __this_cpu_read(rcu_preempt_data.gpnum), 132 __this_cpu_read(rcu_data_p->gpnum),
122 TPS("cpuqs")); 133 TPS("cpuqs"));
123 __this_cpu_write(rcu_preempt_data.passed_quiesce, 1); 134 __this_cpu_write(rcu_data_p->passed_quiesce, 1);
124 barrier(); /* Coordinate with rcu_preempt_check_callbacks(). */ 135 barrier(); /* Coordinate with rcu_preempt_check_callbacks(). */
125 current->rcu_read_unlock_special.b.need_qs = false; 136 current->rcu_read_unlock_special.b.need_qs = false;
126 } 137 }
@@ -150,7 +161,7 @@ static void rcu_preempt_note_context_switch(void)
150 !t->rcu_read_unlock_special.b.blocked) { 161 !t->rcu_read_unlock_special.b.blocked) {
151 162
152 /* Possibly blocking in an RCU read-side critical section. */ 163 /* Possibly blocking in an RCU read-side critical section. */
153 rdp = this_cpu_ptr(rcu_preempt_state.rda); 164 rdp = this_cpu_ptr(rcu_state_p->rda);
154 rnp = rdp->mynode; 165 rnp = rdp->mynode;
155 raw_spin_lock_irqsave(&rnp->lock, flags); 166 raw_spin_lock_irqsave(&rnp->lock, flags);
156 smp_mb__after_unlock_lock(); 167 smp_mb__after_unlock_lock();
@@ -180,10 +191,9 @@ static void rcu_preempt_note_context_switch(void)
180 if ((rnp->qsmask & rdp->grpmask) && rnp->gp_tasks != NULL) { 191 if ((rnp->qsmask & rdp->grpmask) && rnp->gp_tasks != NULL) {
181 list_add(&t->rcu_node_entry, rnp->gp_tasks->prev); 192 list_add(&t->rcu_node_entry, rnp->gp_tasks->prev);
182 rnp->gp_tasks = &t->rcu_node_entry; 193 rnp->gp_tasks = &t->rcu_node_entry;
183#ifdef CONFIG_RCU_BOOST 194 if (IS_ENABLED(CONFIG_RCU_BOOST) &&
184 if (rnp->boost_tasks != NULL) 195 rnp->boost_tasks != NULL)
185 rnp->boost_tasks = rnp->gp_tasks; 196 rnp->boost_tasks = rnp->gp_tasks;
186#endif /* #ifdef CONFIG_RCU_BOOST */
187 } else { 197 } else {
188 list_add(&t->rcu_node_entry, &rnp->blkd_tasks); 198 list_add(&t->rcu_node_entry, &rnp->blkd_tasks);
189 if (rnp->qsmask & rdp->grpmask) 199 if (rnp->qsmask & rdp->grpmask)
@@ -263,9 +273,7 @@ void rcu_read_unlock_special(struct task_struct *t)
263 bool empty_exp_now; 273 bool empty_exp_now;
264 unsigned long flags; 274 unsigned long flags;
265 struct list_head *np; 275 struct list_head *np;
266#ifdef CONFIG_RCU_BOOST
267 bool drop_boost_mutex = false; 276 bool drop_boost_mutex = false;
268#endif /* #ifdef CONFIG_RCU_BOOST */
269 struct rcu_node *rnp; 277 struct rcu_node *rnp;
270 union rcu_special special; 278 union rcu_special special;
271 279
@@ -307,9 +315,11 @@ void rcu_read_unlock_special(struct task_struct *t)
307 t->rcu_read_unlock_special.b.blocked = false; 315 t->rcu_read_unlock_special.b.blocked = false;
308 316
309 /* 317 /*
310 * Remove this task from the list it blocked on. The 318 * Remove this task from the list it blocked on. The task
311 * task can migrate while we acquire the lock, but at 319 * now remains queued on the rcu_node corresponding to
312 * most one time. So at most two passes through loop. 320 * the CPU it first blocked on, so the first attempt to
321 * acquire the task's rcu_node's ->lock will succeed.
322 * Keep the loop and add a WARN_ON() out of sheer paranoia.
313 */ 323 */
314 for (;;) { 324 for (;;) {
315 rnp = t->rcu_blocked_node; 325 rnp = t->rcu_blocked_node;
@@ -317,6 +327,7 @@ void rcu_read_unlock_special(struct task_struct *t)
317 smp_mb__after_unlock_lock(); 327 smp_mb__after_unlock_lock();
318 if (rnp == t->rcu_blocked_node) 328 if (rnp == t->rcu_blocked_node)
319 break; 329 break;
330 WARN_ON_ONCE(1);
320 raw_spin_unlock(&rnp->lock); /* irqs remain disabled. */ 331 raw_spin_unlock(&rnp->lock); /* irqs remain disabled. */
321 } 332 }
322 empty_norm = !rcu_preempt_blocked_readers_cgp(rnp); 333 empty_norm = !rcu_preempt_blocked_readers_cgp(rnp);
@@ -331,12 +342,12 @@ void rcu_read_unlock_special(struct task_struct *t)
331 rnp->gp_tasks = np; 342 rnp->gp_tasks = np;
332 if (&t->rcu_node_entry == rnp->exp_tasks) 343 if (&t->rcu_node_entry == rnp->exp_tasks)
333 rnp->exp_tasks = np; 344 rnp->exp_tasks = np;
334#ifdef CONFIG_RCU_BOOST 345 if (IS_ENABLED(CONFIG_RCU_BOOST)) {
335 if (&t->rcu_node_entry == rnp->boost_tasks) 346 if (&t->rcu_node_entry == rnp->boost_tasks)
336 rnp->boost_tasks = np; 347 rnp->boost_tasks = np;
337 /* Snapshot ->boost_mtx ownership with rcu_node lock held. */ 348 /* Snapshot ->boost_mtx ownership w/rnp->lock held. */
338 drop_boost_mutex = rt_mutex_owner(&rnp->boost_mtx) == t; 349 drop_boost_mutex = rt_mutex_owner(&rnp->boost_mtx) == t;
339#endif /* #ifdef CONFIG_RCU_BOOST */ 350 }
340 351
341 /* 352 /*
342 * If this was the last task on the current list, and if 353 * If this was the last task on the current list, and if
@@ -353,24 +364,21 @@ void rcu_read_unlock_special(struct task_struct *t)
353 rnp->grplo, 364 rnp->grplo,
354 rnp->grphi, 365 rnp->grphi,
355 !!rnp->gp_tasks); 366 !!rnp->gp_tasks);
356 rcu_report_unblock_qs_rnp(&rcu_preempt_state, 367 rcu_report_unblock_qs_rnp(rcu_state_p, rnp, flags);
357 rnp, flags);
358 } else { 368 } else {
359 raw_spin_unlock_irqrestore(&rnp->lock, flags); 369 raw_spin_unlock_irqrestore(&rnp->lock, flags);
360 } 370 }
361 371
362#ifdef CONFIG_RCU_BOOST
363 /* Unboost if we were boosted. */ 372 /* Unboost if we were boosted. */
364 if (drop_boost_mutex) 373 if (IS_ENABLED(CONFIG_RCU_BOOST) && drop_boost_mutex)
365 rt_mutex_unlock(&rnp->boost_mtx); 374 rt_mutex_unlock(&rnp->boost_mtx);
366#endif /* #ifdef CONFIG_RCU_BOOST */
367 375
368 /* 376 /*
369 * If this was the last task on the expedited lists, 377 * If this was the last task on the expedited lists,
370 * then we need to report up the rcu_node hierarchy. 378 * then we need to report up the rcu_node hierarchy.
371 */ 379 */
372 if (!empty_exp && empty_exp_now) 380 if (!empty_exp && empty_exp_now)
373 rcu_report_exp_rnp(&rcu_preempt_state, rnp, true); 381 rcu_report_exp_rnp(rcu_state_p, rnp, true);
374 } else { 382 } else {
375 local_irq_restore(flags); 383 local_irq_restore(flags);
376 } 384 }
@@ -390,7 +398,7 @@ static void rcu_print_detail_task_stall_rnp(struct rcu_node *rnp)
390 raw_spin_unlock_irqrestore(&rnp->lock, flags); 398 raw_spin_unlock_irqrestore(&rnp->lock, flags);
391 return; 399 return;
392 } 400 }
393 t = list_entry(rnp->gp_tasks, 401 t = list_entry(rnp->gp_tasks->prev,
394 struct task_struct, rcu_node_entry); 402 struct task_struct, rcu_node_entry);
395 list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry) 403 list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry)
396 sched_show_task(t); 404 sched_show_task(t);
@@ -447,7 +455,7 @@ static int rcu_print_task_stall(struct rcu_node *rnp)
447 if (!rcu_preempt_blocked_readers_cgp(rnp)) 455 if (!rcu_preempt_blocked_readers_cgp(rnp))
448 return 0; 456 return 0;
449 rcu_print_task_stall_begin(rnp); 457 rcu_print_task_stall_begin(rnp);
450 t = list_entry(rnp->gp_tasks, 458 t = list_entry(rnp->gp_tasks->prev,
451 struct task_struct, rcu_node_entry); 459 struct task_struct, rcu_node_entry);
452 list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry) { 460 list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry) {
453 pr_cont(" P%d", t->pid); 461 pr_cont(" P%d", t->pid);
@@ -491,8 +499,8 @@ static void rcu_preempt_check_callbacks(void)
491 return; 499 return;
492 } 500 }
493 if (t->rcu_read_lock_nesting > 0 && 501 if (t->rcu_read_lock_nesting > 0 &&
494 __this_cpu_read(rcu_preempt_data.qs_pending) && 502 __this_cpu_read(rcu_data_p->qs_pending) &&
495 !__this_cpu_read(rcu_preempt_data.passed_quiesce)) 503 !__this_cpu_read(rcu_data_p->passed_quiesce))
496 t->rcu_read_unlock_special.b.need_qs = true; 504 t->rcu_read_unlock_special.b.need_qs = true;
497} 505}
498 506
@@ -500,7 +508,7 @@ static void rcu_preempt_check_callbacks(void)
500 508
501static void rcu_preempt_do_callbacks(void) 509static void rcu_preempt_do_callbacks(void)
502{ 510{
503 rcu_do_batch(&rcu_preempt_state, this_cpu_ptr(&rcu_preempt_data)); 511 rcu_do_batch(rcu_state_p, this_cpu_ptr(rcu_data_p));
504} 512}
505 513
506#endif /* #ifdef CONFIG_RCU_BOOST */ 514#endif /* #ifdef CONFIG_RCU_BOOST */
@@ -510,7 +518,7 @@ static void rcu_preempt_do_callbacks(void)
510 */ 518 */
511void call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *rcu)) 519void call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *rcu))
512{ 520{
513 __call_rcu(head, func, &rcu_preempt_state, -1, 0); 521 __call_rcu(head, func, rcu_state_p, -1, 0);
514} 522}
515EXPORT_SYMBOL_GPL(call_rcu); 523EXPORT_SYMBOL_GPL(call_rcu);
516 524
@@ -711,7 +719,7 @@ sync_rcu_preempt_exp_init2(struct rcu_state *rsp, struct rcu_node *rnp)
711void synchronize_rcu_expedited(void) 719void synchronize_rcu_expedited(void)
712{ 720{
713 struct rcu_node *rnp; 721 struct rcu_node *rnp;
714 struct rcu_state *rsp = &rcu_preempt_state; 722 struct rcu_state *rsp = rcu_state_p;
715 unsigned long snap; 723 unsigned long snap;
716 int trycount = 0; 724 int trycount = 0;
717 725
@@ -798,7 +806,7 @@ EXPORT_SYMBOL_GPL(synchronize_rcu_expedited);
798 */ 806 */
799void rcu_barrier(void) 807void rcu_barrier(void)
800{ 808{
801 _rcu_barrier(&rcu_preempt_state); 809 _rcu_barrier(rcu_state_p);
802} 810}
803EXPORT_SYMBOL_GPL(rcu_barrier); 811EXPORT_SYMBOL_GPL(rcu_barrier);
804 812
@@ -807,7 +815,7 @@ EXPORT_SYMBOL_GPL(rcu_barrier);
807 */ 815 */
808static void __init __rcu_init_preempt(void) 816static void __init __rcu_init_preempt(void)
809{ 817{
810 rcu_init_one(&rcu_preempt_state, &rcu_preempt_data); 818 rcu_init_one(rcu_state_p, rcu_data_p);
811} 819}
812 820
813/* 821/*
@@ -830,7 +838,8 @@ void exit_rcu(void)
830 838
831#else /* #ifdef CONFIG_PREEMPT_RCU */ 839#else /* #ifdef CONFIG_PREEMPT_RCU */
832 840
833static struct rcu_state *rcu_state_p = &rcu_sched_state; 841static struct rcu_state *const rcu_state_p = &rcu_sched_state;
842static struct rcu_data __percpu *const rcu_data_p = &rcu_sched_data;
834 843
835/* 844/*
836 * Tell them what RCU they are running. 845 * Tell them what RCU they are running.
@@ -1172,7 +1181,7 @@ static int rcu_spawn_one_boost_kthread(struct rcu_state *rsp,
1172 struct sched_param sp; 1181 struct sched_param sp;
1173 struct task_struct *t; 1182 struct task_struct *t;
1174 1183
1175 if (&rcu_preempt_state != rsp) 1184 if (rcu_state_p != rsp)
1176 return 0; 1185 return 0;
1177 1186
1178 if (!rcu_scheduler_fully_active || rcu_rnp_online_cpus(rnp) == 0) 1187 if (!rcu_scheduler_fully_active || rcu_rnp_online_cpus(rnp) == 0)
@@ -1366,13 +1375,12 @@ static void rcu_prepare_kthreads(int cpu)
1366 * Because we do not have RCU_FAST_NO_HZ, just check whether this CPU needs 1375 * Because we do not have RCU_FAST_NO_HZ, just check whether this CPU needs
1367 * any flavor of RCU. 1376 * any flavor of RCU.
1368 */ 1377 */
1369#ifndef CONFIG_RCU_NOCB_CPU_ALL
1370int rcu_needs_cpu(unsigned long *delta_jiffies) 1378int rcu_needs_cpu(unsigned long *delta_jiffies)
1371{ 1379{
1372 *delta_jiffies = ULONG_MAX; 1380 *delta_jiffies = ULONG_MAX;
1373 return rcu_cpu_has_callbacks(NULL); 1381 return IS_ENABLED(CONFIG_RCU_NOCB_CPU_ALL)
1382 ? 0 : rcu_cpu_has_callbacks(NULL);
1374} 1383}
1375#endif /* #ifndef CONFIG_RCU_NOCB_CPU_ALL */
1376 1384
1377/* 1385/*
1378 * Because we do not have RCU_FAST_NO_HZ, don't bother cleaning up 1386 * Because we do not have RCU_FAST_NO_HZ, don't bother cleaning up
@@ -1479,11 +1487,15 @@ static bool __maybe_unused rcu_try_advance_all_cbs(void)
1479 * 1487 *
1480 * The caller must have disabled interrupts. 1488 * The caller must have disabled interrupts.
1481 */ 1489 */
1482#ifndef CONFIG_RCU_NOCB_CPU_ALL
1483int rcu_needs_cpu(unsigned long *dj) 1490int rcu_needs_cpu(unsigned long *dj)
1484{ 1491{
1485 struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks); 1492 struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
1486 1493
1494 if (IS_ENABLED(CONFIG_RCU_NOCB_CPU_ALL)) {
1495 *dj = ULONG_MAX;
1496 return 0;
1497 }
1498
1487 /* Snapshot to detect later posting of non-lazy callback. */ 1499 /* Snapshot to detect later posting of non-lazy callback. */
1488 rdtp->nonlazy_posted_snap = rdtp->nonlazy_posted; 1500 rdtp->nonlazy_posted_snap = rdtp->nonlazy_posted;
1489 1501
@@ -1510,7 +1522,6 @@ int rcu_needs_cpu(unsigned long *dj)
1510 } 1522 }
1511 return 0; 1523 return 0;
1512} 1524}
1513#endif /* #ifndef CONFIG_RCU_NOCB_CPU_ALL */
1514 1525
1515/* 1526/*
1516 * Prepare a CPU for idle from an RCU perspective. The first major task 1527 * Prepare a CPU for idle from an RCU perspective. The first major task
@@ -1524,7 +1535,6 @@ int rcu_needs_cpu(unsigned long *dj)
1524 */ 1535 */
1525static void rcu_prepare_for_idle(void) 1536static void rcu_prepare_for_idle(void)
1526{ 1537{
1527#ifndef CONFIG_RCU_NOCB_CPU_ALL
1528 bool needwake; 1538 bool needwake;
1529 struct rcu_data *rdp; 1539 struct rcu_data *rdp;
1530 struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks); 1540 struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
@@ -1532,6 +1542,9 @@ static void rcu_prepare_for_idle(void)
1532 struct rcu_state *rsp; 1542 struct rcu_state *rsp;
1533 int tne; 1543 int tne;
1534 1544
1545 if (IS_ENABLED(CONFIG_RCU_NOCB_CPU_ALL))
1546 return;
1547
1535 /* Handle nohz enablement switches conservatively. */ 1548 /* Handle nohz enablement switches conservatively. */
1536 tne = READ_ONCE(tick_nohz_active); 1549 tne = READ_ONCE(tick_nohz_active);
1537 if (tne != rdtp->tick_nohz_enabled_snap) { 1550 if (tne != rdtp->tick_nohz_enabled_snap) {
@@ -1579,7 +1592,6 @@ static void rcu_prepare_for_idle(void)
1579 if (needwake) 1592 if (needwake)
1580 rcu_gp_kthread_wake(rsp); 1593 rcu_gp_kthread_wake(rsp);
1581 } 1594 }
1582#endif /* #ifndef CONFIG_RCU_NOCB_CPU_ALL */
1583} 1595}
1584 1596
1585/* 1597/*
@@ -1589,12 +1601,11 @@ static void rcu_prepare_for_idle(void)
1589 */ 1601 */
1590static void rcu_cleanup_after_idle(void) 1602static void rcu_cleanup_after_idle(void)
1591{ 1603{
1592#ifndef CONFIG_RCU_NOCB_CPU_ALL 1604 if (IS_ENABLED(CONFIG_RCU_NOCB_CPU_ALL) ||
1593 if (rcu_is_nocb_cpu(smp_processor_id())) 1605 rcu_is_nocb_cpu(smp_processor_id()))
1594 return; 1606 return;
1595 if (rcu_try_advance_all_cbs()) 1607 if (rcu_try_advance_all_cbs())
1596 invoke_rcu_core(); 1608 invoke_rcu_core();
1597#endif /* #ifndef CONFIG_RCU_NOCB_CPU_ALL */
1598} 1609}
1599 1610
1600/* 1611/*
@@ -3048,9 +3059,9 @@ static bool rcu_nohz_full_cpu(struct rcu_state *rsp)
3048 if (tick_nohz_full_cpu(smp_processor_id()) && 3059 if (tick_nohz_full_cpu(smp_processor_id()) &&
3049 (!rcu_gp_in_progress(rsp) || 3060 (!rcu_gp_in_progress(rsp) ||
3050 ULONG_CMP_LT(jiffies, READ_ONCE(rsp->gp_start) + HZ))) 3061 ULONG_CMP_LT(jiffies, READ_ONCE(rsp->gp_start) + HZ)))
3051 return 1; 3062 return true;
3052#endif /* #ifdef CONFIG_NO_HZ_FULL */ 3063#endif /* #ifdef CONFIG_NO_HZ_FULL */
3053 return 0; 3064 return false;
3054} 3065}
3055 3066
3056/* 3067/*
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index ba2b0c87e65b..b908048f8d6a 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1233,6 +1233,7 @@ config RCU_TORTURE_TEST
1233 depends on DEBUG_KERNEL 1233 depends on DEBUG_KERNEL
1234 select TORTURE_TEST 1234 select TORTURE_TEST
1235 select SRCU 1235 select SRCU
1236 select TASKS_RCU
1236 default n 1237 default n
1237 help 1238 help
1238 This option provides a kernel module that runs torture tests 1239 This option provides a kernel module that runs torture tests
@@ -1261,12 +1262,38 @@ config RCU_TORTURE_TEST_RUNNABLE
1261 Say N here if you want the RCU torture tests to start only 1262 Say N here if you want the RCU torture tests to start only
1262 after being manually enabled via /proc. 1263 after being manually enabled via /proc.
1263 1264
1265config RCU_TORTURE_TEST_SLOW_PREINIT
1266 bool "Slow down RCU grace-period pre-initialization to expose races"
1267 depends on RCU_TORTURE_TEST
1268 help
1269 This option delays grace-period pre-initialization (the
1270 propagation of CPU-hotplug changes up the rcu_node combining
1271 tree) for a few jiffies between initializing each pair of
1272 consecutive rcu_node structures. This helps to expose races
1273 involving grace-period pre-initialization, in other words, it
1274 makes your kernel less stable. It can also greatly increase
1275 grace-period latency, especially on systems with large numbers
1276 of CPUs. This is useful when torture-testing RCU, but in
1277 almost no other circumstance.
1278
1279 Say Y here if you want your system to crash and hang more often.
1280 Say N if you want a sane system.
1281
1282config RCU_TORTURE_TEST_SLOW_PREINIT_DELAY
1283 int "How much to slow down RCU grace-period pre-initialization"
1284 range 0 5
1285 default 3
1286 depends on RCU_TORTURE_TEST_SLOW_PREINIT
1287 help
1288 This option specifies the number of jiffies to wait between
1289 each rcu_node structure pre-initialization step.
1290
1264config RCU_TORTURE_TEST_SLOW_INIT 1291config RCU_TORTURE_TEST_SLOW_INIT
1265 bool "Slow down RCU grace-period initialization to expose races" 1292 bool "Slow down RCU grace-period initialization to expose races"
1266 depends on RCU_TORTURE_TEST 1293 depends on RCU_TORTURE_TEST
1267 help 1294 help
1268 This option makes grace-period initialization block for a 1295 This option delays grace-period initialization for a few
1269 few jiffies between initializing each pair of consecutive 1296 jiffies between initializing each pair of consecutive
1270 rcu_node structures. This helps to expose races involving 1297 rcu_node structures. This helps to expose races involving
1271 grace-period initialization, in other words, it makes your 1298 grace-period initialization, in other words, it makes your
1272 kernel less stable. It can also greatly increase grace-period 1299 kernel less stable. It can also greatly increase grace-period
@@ -1286,6 +1313,30 @@ config RCU_TORTURE_TEST_SLOW_INIT_DELAY
1286 This option specifies the number of jiffies to wait between 1313 This option specifies the number of jiffies to wait between
1287 each rcu_node structure initialization. 1314 each rcu_node structure initialization.
1288 1315
1316config RCU_TORTURE_TEST_SLOW_CLEANUP
1317 bool "Slow down RCU grace-period cleanup to expose races"
1318 depends on RCU_TORTURE_TEST
1319 help
1320 This option delays grace-period cleanup for a few jiffies
1321 between cleaning up each pair of consecutive rcu_node
1322 structures. This helps to expose races involving grace-period
1323 cleanup, in other words, it makes your kernel less stable.
1324 It can also greatly increase grace-period latency, especially
1325 on systems with large numbers of CPUs. This is useful when
1326 torture-testing RCU, but in almost no other circumstance.
1327
1328 Say Y here if you want your system to crash and hang more often.
1329 Say N if you want a sane system.
1330
1331config RCU_TORTURE_TEST_SLOW_CLEANUP_DELAY
1332 int "How much to slow down RCU grace-period cleanup"
1333 range 0 5
1334 default 3
1335 depends on RCU_TORTURE_TEST_SLOW_CLEANUP
1336 help
1337 This option specifies the number of jiffies to wait between
1338 each rcu_node structure cleanup operation.
1339
1289config RCU_CPU_STALL_TIMEOUT 1340config RCU_CPU_STALL_TIMEOUT
1290 int "RCU CPU stall timeout in seconds" 1341 int "RCU CPU stall timeout in seconds"
1291 depends on RCU_STALL_COMMON 1342 depends on RCU_STALL_COMMON
@@ -1322,6 +1373,17 @@ config RCU_TRACE
1322 Say Y here if you want to enable RCU tracing 1373 Say Y here if you want to enable RCU tracing
1323 Say N if you are unsure. 1374 Say N if you are unsure.
1324 1375
1376config RCU_EQS_DEBUG
1377 bool "Use this when adding any sort of NO_HZ support to your arch"
1378 depends on DEBUG_KERNEL
1379 help
1380 This option provides consistency checks in RCU's handling of
1381 NO_HZ. These checks have proven quite helpful in detecting
1382 bugs in arch-specific NO_HZ code.
1383
1384 Say N here if you need ultimate kernel/user switch latencies
1385 Say Y if you are unsure
1386
1325endmenu # "RCU Debugging" 1387endmenu # "RCU Debugging"
1326 1388
1327config DEBUG_BLOCK_EXT_DEVT 1389config DEBUG_BLOCK_EXT_DEVT
diff --git a/tools/testing/selftests/rcutorture/bin/configinit.sh b/tools/testing/selftests/rcutorture/bin/configinit.sh
index 15f1a17ca96e..3f81a1095206 100755
--- a/tools/testing/selftests/rcutorture/bin/configinit.sh
+++ b/tools/testing/selftests/rcutorture/bin/configinit.sh
@@ -66,7 +66,7 @@ make $buildloc $TORTURE_DEFCONFIG > $builddir/Make.defconfig.out 2>&1
 mv $builddir/.config $builddir/.config.sav
 sh $T/upd.sh < $builddir/.config.sav > $builddir/.config
 cp $builddir/.config $builddir/.config.new
-yes '' | make $buildloc oldconfig > $builddir/Make.modconfig.out 2>&1
+yes '' | make $buildloc oldconfig > $builddir/Make.oldconfig.out 2> $builddir/Make.oldconfig.err
 
 # verify new config matches specification.
 configcheck.sh $builddir/.config $c
diff --git a/tools/testing/selftests/rcutorture/bin/kvm-recheck.sh b/tools/testing/selftests/rcutorture/bin/kvm-recheck.sh
index 4f5b20f367a9..d86bdd6b6cc2 100755
--- a/tools/testing/selftests/rcutorture/bin/kvm-recheck.sh
+++ b/tools/testing/selftests/rcutorture/bin/kvm-recheck.sh
@@ -43,6 +43,10 @@ do
 	if test -f "$i/console.log"
 	then
 		configcheck.sh $i/.config $i/ConfigFragment
+		if test -r $i/Make.oldconfig.err
+		then
+			cat $i/Make.oldconfig.err
+		fi
 		parse-build.sh $i/Make.out $configfile
 		parse-torture.sh $i/console.log $configfile
 		parse-console.sh $i/console.log $configfile
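Taken together with the configinit.sh hunk above, this gives "make oldconfig" diagnostics
a path into the recheck output: stderr is captured per build and replayed when results are
inspected. A minimal sketch of the round trip, using only the filenames from the two hunks:

	# At build time (configinit.sh): capture oldconfig warnings separately.
	yes '' | make $buildloc oldconfig > $builddir/Make.oldconfig.out 2> $builddir/Make.oldconfig.err
	# At recheck time (kvm-recheck.sh): replay any captured warnings.
	test -r $i/Make.oldconfig.err && cat $i/Make.oldconfig.err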
diff --git a/tools/testing/selftests/rcutorture/bin/kvm.sh b/tools/testing/selftests/rcutorture/bin/kvm.sh
index dd2812ceb0ba..fbe2dbff1e21 100755
--- a/tools/testing/selftests/rcutorture/bin/kvm.sh
+++ b/tools/testing/selftests/rcutorture/bin/kvm.sh
@@ -55,7 +55,7 @@ usage () {
55 echo " --bootargs kernel-boot-arguments" 55 echo " --bootargs kernel-boot-arguments"
56 echo " --bootimage relative-path-to-kernel-boot-image" 56 echo " --bootimage relative-path-to-kernel-boot-image"
57 echo " --buildonly" 57 echo " --buildonly"
58 echo " --configs \"config-file list\"" 58 echo " --configs \"config-file list w/ repeat factor (3*TINY01)\""
59 echo " --cpus N" 59 echo " --cpus N"
60 echo " --datestamp string" 60 echo " --datestamp string"
61 echo " --defconfig string" 61 echo " --defconfig string"
@@ -178,13 +178,26 @@ fi
 touch $T/cfgcpu
 for CF in $configs
 do
-	if test -f "$CONFIGFRAG/$CF"
+	case $CF in
+	[0-9]\**|[0-9][0-9]\**|[0-9][0-9][0-9]\**)
+		config_reps=`echo $CF | sed -e 's/\*.*$//'`
+		CF1=`echo $CF | sed -e 's/^[^*]*\*//'`
+		;;
+	*)
+		config_reps=1
+		CF1=$CF
+		;;
+	esac
+	if test -f "$CONFIGFRAG/$CF1"
 	then
-		cpu_count=`configNR_CPUS.sh $CONFIGFRAG/$CF`
-		cpu_count=`configfrag_boot_cpus "$TORTURE_BOOTARGS" "$CONFIGFRAG/$CF" "$cpu_count"`
-		echo $CF $cpu_count >> $T/cfgcpu
+		cpu_count=`configNR_CPUS.sh $CONFIGFRAG/$CF1`
+		cpu_count=`configfrag_boot_cpus "$TORTURE_BOOTARGS" "$CONFIGFRAG/$CF1" "$cpu_count"`
+		for ((cur_rep=0;cur_rep<$config_reps;cur_rep++))
+		do
+			echo $CF1 $cpu_count >> $T/cfgcpu
+		done
 	else
-		echo "The --configs file $CF does not exist, terminating."
+		echo "The --configs file $CF1 does not exist, terminating."
 		exit 1
 	fi
 done
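With the repeat factor in place, one kvm.sh invocation can queue the same scenario several
times; the new case statement splits the count off with sed, and the inner loop emits one
$T/cfgcpu entry per repetition. For example (an illustrative invocation, flags as in the
usage text above), "3*TINY01" expands to three TINY01 runs alongside one TREE02:

	tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 8 --configs "3*TINY01 TREE02"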
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/CFcommon b/tools/testing/selftests/rcutorture/configs/rcu/CFcommon
index 49701218dc62..f824b4c9d9d9 100644
--- a/tools/testing/selftests/rcutorture/configs/rcu/CFcommon
+++ b/tools/testing/selftests/rcutorture/configs/rcu/CFcommon
@@ -1,3 +1,5 @@
 CONFIG_RCU_TORTURE_TEST=y
 CONFIG_PRINTK_TIME=y
+CONFIG_RCU_TORTURE_TEST_SLOW_CLEANUP=y
 CONFIG_RCU_TORTURE_TEST_SLOW_INIT=y
+CONFIG_RCU_TORTURE_TEST_SLOW_PREINIT=y
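After this hunk, every rcutorture scenario inherits all three slowdown knobs; the
resulting common fragment, reconstructed from the lines above, reads:

	$ cat tools/testing/selftests/rcutorture/configs/rcu/CFcommon
	CONFIG_RCU_TORTURE_TEST=y
	CONFIG_PRINTK_TIME=y
	CONFIG_RCU_TORTURE_TEST_SLOW_CLEANUP=y
	CONFIG_RCU_TORTURE_TEST_SLOW_INIT=y
	CONFIG_RCU_TORTURE_TEST_SLOW_PREINIT=y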
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/SRCU-N b/tools/testing/selftests/rcutorture/configs/rcu/SRCU-N
index 9fbb41b9b314..1a087c3c8bb8 100644
--- a/tools/testing/selftests/rcutorture/configs/rcu/SRCU-N
+++ b/tools/testing/selftests/rcutorture/configs/rcu/SRCU-N
@@ -5,3 +5,4 @@ CONFIG_HOTPLUG_CPU=y
 CONFIG_PREEMPT_NONE=y
 CONFIG_PREEMPT_VOLUNTARY=n
 CONFIG_PREEMPT=n
+CONFIG_RCU_EXPERT=y
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/SRCU-P b/tools/testing/selftests/rcutorture/configs/rcu/SRCU-P
index 4b6f272dba27..4837430a71c0 100644
--- a/tools/testing/selftests/rcutorture/configs/rcu/SRCU-P
+++ b/tools/testing/selftests/rcutorture/configs/rcu/SRCU-P
@@ -5,3 +5,4 @@ CONFIG_HOTPLUG_CPU=y
 CONFIG_PREEMPT_NONE=n
 CONFIG_PREEMPT_VOLUNTARY=n
 CONFIG_PREEMPT=y
+#CHECK#CONFIG_RCU_EXPERT=n
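The #CHECK# prefix, here and in several fragments below, marks a setting that is verified
against the final .config rather than written into the build fragment, the assumption
being that kconfig itself should produce that value. A sketch of the kind of check
involved (the exact configcheck.sh behavior is not reproduced here); for an =n line,
.config spells the result as "is not set":

	grep '^# CONFIG_RCU_EXPERT is not set' $builddir/.config && echo "RCU_EXPERT correctly disabled"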
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/SRCU-P.boot b/tools/testing/selftests/rcutorture/configs/rcu/SRCU-P.boot
index 238bfe3bd0cc..84a7d51b7481 100644
--- a/tools/testing/selftests/rcutorture/configs/rcu/SRCU-P.boot
+++ b/tools/testing/selftests/rcutorture/configs/rcu/SRCU-P.boot
@@ -1 +1 @@
-rcutorture.torture_type=srcu
+rcutorture.torture_type=srcud
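The srcud torture type is, as I understand it, the variant of srcu that exercises a
dynamically allocated and initialized srcu_struct, so this change also covers the
init_srcu_struct()/cleanup_srcu_struct() paths. It can likewise be selected by hand,
for instance (illustrative command line):

	tools/testing/selftests/rcutorture/bin/kvm.sh --configs SRCU-N --bootargs "rcutorture.torture_type=srcud"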
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TASKS01 b/tools/testing/selftests/rcutorture/configs/rcu/TASKS01
index 97f0a0b27ef7..2cc0e60eba6e 100644
--- a/tools/testing/selftests/rcutorture/configs/rcu/TASKS01
+++ b/tools/testing/selftests/rcutorture/configs/rcu/TASKS01
@@ -5,5 +5,6 @@ CONFIG_PREEMPT_NONE=n
 CONFIG_PREEMPT_VOLUNTARY=n
 CONFIG_PREEMPT=y
 CONFIG_DEBUG_LOCK_ALLOC=y
-CONFIG_PROVE_RCU=y
-CONFIG_TASKS_RCU=y
+CONFIG_PROVE_LOCKING=n
+#CHECK#CONFIG_PROVE_RCU=n
+CONFIG_RCU_EXPERT=y
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TASKS02 b/tools/testing/selftests/rcutorture/configs/rcu/TASKS02
index 696d2ea74d13..ad2be91e5ee7 100644
--- a/tools/testing/selftests/rcutorture/configs/rcu/TASKS02
+++ b/tools/testing/selftests/rcutorture/configs/rcu/TASKS02
@@ -2,4 +2,3 @@ CONFIG_SMP=n
 CONFIG_PREEMPT_NONE=y
 CONFIG_PREEMPT_VOLUNTARY=n
 CONFIG_PREEMPT=n
-CONFIG_TASKS_RCU=y
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TASKS03 b/tools/testing/selftests/rcutorture/configs/rcu/TASKS03
index 9c60da5b5d1d..c70c51d5ded1 100644
--- a/tools/testing/selftests/rcutorture/configs/rcu/TASKS03
+++ b/tools/testing/selftests/rcutorture/configs/rcu/TASKS03
@@ -6,8 +6,8 @@ CONFIG_HIBERNATION=n
 CONFIG_PREEMPT_NONE=n
 CONFIG_PREEMPT_VOLUNTARY=n
 CONFIG_PREEMPT=y
-CONFIG_TASKS_RCU=y
 CONFIG_HZ_PERIODIC=n
 CONFIG_NO_HZ_IDLE=n
 CONFIG_NO_HZ_FULL=y
 CONFIG_NO_HZ_FULL_ALL=y
+#CHECK#CONFIG_RCU_EXPERT=n
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TINY02 b/tools/testing/selftests/rcutorture/configs/rcu/TINY02
index 36e41df3d27a..f1892e0371c9 100644
--- a/tools/testing/selftests/rcutorture/configs/rcu/TINY02
+++ b/tools/testing/selftests/rcutorture/configs/rcu/TINY02
@@ -8,7 +8,7 @@ CONFIG_NO_HZ_IDLE=n
 CONFIG_NO_HZ_FULL=n
 CONFIG_RCU_TRACE=y
 CONFIG_PROVE_LOCKING=y
-CONFIG_PROVE_RCU=y
+#CHECK#CONFIG_PROVE_RCU=y
 CONFIG_DEBUG_LOCK_ALLOC=y
 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
 CONFIG_PREEMPT_COUNT=y
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TINY02.boot b/tools/testing/selftests/rcutorture/configs/rcu/TINY02.boot
index 0f0802730014..6c1a292a65fb 100644
--- a/tools/testing/selftests/rcutorture/configs/rcu/TINY02.boot
+++ b/tools/testing/selftests/rcutorture/configs/rcu/TINY02.boot
@@ -1,2 +1,3 @@
 rcupdate.rcu_self_test=1
 rcupdate.rcu_self_test_bh=1
+rcutorture.torture_type=rcu_bh
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TREE01 b/tools/testing/selftests/rcutorture/configs/rcu/TREE01
index f8a10a7500c6..8e9137f66831 100644
--- a/tools/testing/selftests/rcutorture/configs/rcu/TREE01
+++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE01
@@ -16,3 +16,4 @@ CONFIG_DEBUG_LOCK_ALLOC=n
 CONFIG_RCU_CPU_STALL_INFO=n
 CONFIG_RCU_BOOST=n
 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
+CONFIG_RCU_EXPERT=y
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TREE02 b/tools/testing/selftests/rcutorture/configs/rcu/TREE02
index 629122fb8b4a..aeea6a204d14 100644
--- a/tools/testing/selftests/rcutorture/configs/rcu/TREE02
+++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE02
@@ -14,10 +14,10 @@ CONFIG_SUSPEND=n
 CONFIG_HIBERNATION=n
 CONFIG_RCU_FANOUT=3
 CONFIG_RCU_FANOUT_LEAF=3
-CONFIG_RCU_FANOUT_EXACT=n
 CONFIG_RCU_NOCB_CPU=n
 CONFIG_DEBUG_LOCK_ALLOC=y
 CONFIG_PROVE_LOCKING=n
 CONFIG_RCU_CPU_STALL_INFO=n
 CONFIG_RCU_BOOST=n
 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
+CONFIG_RCU_EXPERT=y
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TREE02-T b/tools/testing/selftests/rcutorture/configs/rcu/TREE02-T
index a25de47888a4..2ac9e68ea3d1 100644
--- a/tools/testing/selftests/rcutorture/configs/rcu/TREE02-T
+++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE02-T
@@ -14,7 +14,6 @@ CONFIG_SUSPEND=n
 CONFIG_HIBERNATION=n
 CONFIG_RCU_FANOUT=3
 CONFIG_RCU_FANOUT_LEAF=3
-CONFIG_RCU_FANOUT_EXACT=n
 CONFIG_RCU_NOCB_CPU=n
 CONFIG_DEBUG_LOCK_ALLOC=y
 CONFIG_PROVE_LOCKING=n
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TREE03 b/tools/testing/selftests/rcutorture/configs/rcu/TREE03
index 53f24e0a0ab6..72aa7d87ea99 100644
--- a/tools/testing/selftests/rcutorture/configs/rcu/TREE03
+++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE03
@@ -1,5 +1,5 @@
 CONFIG_SMP=y
-CONFIG_NR_CPUS=8
+CONFIG_NR_CPUS=16
 CONFIG_PREEMPT_NONE=n
 CONFIG_PREEMPT_VOLUNTARY=n
 CONFIG_PREEMPT=y
@@ -9,12 +9,12 @@ CONFIG_NO_HZ_IDLE=n
 CONFIG_NO_HZ_FULL=n
 CONFIG_RCU_TRACE=y
 CONFIG_HOTPLUG_CPU=y
-CONFIG_RCU_FANOUT=4
-CONFIG_RCU_FANOUT_LEAF=4
-CONFIG_RCU_FANOUT_EXACT=n
+CONFIG_RCU_FANOUT=2
+CONFIG_RCU_FANOUT_LEAF=2
 CONFIG_RCU_NOCB_CPU=n
 CONFIG_DEBUG_LOCK_ALLOC=n
 CONFIG_RCU_CPU_STALL_INFO=n
 CONFIG_RCU_BOOST=y
 CONFIG_RCU_KTHREAD_PRIO=2
 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
+CONFIG_RCU_EXPERT=y
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TREE03.boot b/tools/testing/selftests/rcutorture/configs/rcu/TREE03.boot
new file mode 100644
index 000000000000..120c0c88d100
--- /dev/null
+++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE03.boot
@@ -0,0 +1 @@
+rcutorture.onoff_interval=1 rcutorture.onoff_holdoff=30
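This new fragment turns on CPU-hotplug torture for TREE03: an online/offline attempt
about once per second (onoff_interval=1) after a 30-second post-boot holdoff
(onoff_holdoff=30), both values being interpreted as seconds by the torture framework,
if memory serves. One way to confirm the fragment reached the kernel command line
($resdir is an illustrative results path):

	grep -m1 'rcutorture.onoff_holdoff=30' $resdir/TREE03/console.log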
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TREE04 b/tools/testing/selftests/rcutorture/configs/rcu/TREE04
index 0f84db35b36d..3f5112751cda 100644
--- a/tools/testing/selftests/rcutorture/configs/rcu/TREE04
+++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE04
@@ -13,10 +13,10 @@ CONFIG_RCU_TRACE=y
 CONFIG_HOTPLUG_CPU=n
 CONFIG_SUSPEND=n
 CONFIG_HIBERNATION=n
-CONFIG_RCU_FANOUT=2
-CONFIG_RCU_FANOUT_LEAF=2
-CONFIG_RCU_FANOUT_EXACT=n
+CONFIG_RCU_FANOUT=4
+CONFIG_RCU_FANOUT_LEAF=4
 CONFIG_RCU_NOCB_CPU=n
 CONFIG_DEBUG_LOCK_ALLOC=n
-CONFIG_RCU_CPU_STALL_INFO=y
+CONFIG_RCU_CPU_STALL_INFO=n
 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
+CONFIG_RCU_EXPERT=y
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TREE05 b/tools/testing/selftests/rcutorture/configs/rcu/TREE05
index 212e3bfd2b2a..c04dfea6fd21 100644
--- a/tools/testing/selftests/rcutorture/configs/rcu/TREE05
+++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE05
@@ -12,11 +12,11 @@ CONFIG_RCU_TRACE=n
 CONFIG_HOTPLUG_CPU=y
 CONFIG_RCU_FANOUT=6
 CONFIG_RCU_FANOUT_LEAF=6
-CONFIG_RCU_FANOUT_EXACT=n
 CONFIG_RCU_NOCB_CPU=y
 CONFIG_RCU_NOCB_CPU_NONE=y
 CONFIG_DEBUG_LOCK_ALLOC=y
 CONFIG_PROVE_LOCKING=y
-CONFIG_PROVE_RCU=y
+#CHECK#CONFIG_PROVE_RCU=y
 CONFIG_RCU_CPU_STALL_INFO=n
 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
+CONFIG_RCU_EXPERT=y
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TREE06 b/tools/testing/selftests/rcutorture/configs/rcu/TREE06
index 7eee63b44218..f51d2c73a68e 100644
--- a/tools/testing/selftests/rcutorture/configs/rcu/TREE06
+++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE06
@@ -14,10 +14,10 @@ CONFIG_SUSPEND=n
 CONFIG_HIBERNATION=n
 CONFIG_RCU_FANOUT=6
 CONFIG_RCU_FANOUT_LEAF=6
-CONFIG_RCU_FANOUT_EXACT=y
 CONFIG_RCU_NOCB_CPU=n
 CONFIG_DEBUG_LOCK_ALLOC=y
 CONFIG_PROVE_LOCKING=y
-CONFIG_PROVE_RCU=y
+#CHECK#CONFIG_PROVE_RCU=y
 CONFIG_RCU_CPU_STALL_INFO=n
 CONFIG_DEBUG_OBJECTS_RCU_HEAD=y
+CONFIG_RCU_EXPERT=y
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TREE06.boot b/tools/testing/selftests/rcutorture/configs/rcu/TREE06.boot
index da9a03a398db..dd90f28ed700 100644
--- a/tools/testing/selftests/rcutorture/configs/rcu/TREE06.boot
+++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE06.boot
@@ -1,3 +1,4 @@
 rcupdate.rcu_self_test=1
 rcupdate.rcu_self_test_bh=1
 rcupdate.rcu_self_test_sched=1
+rcutree.rcu_fanout_exact=1
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TREE07 b/tools/testing/selftests/rcutorture/configs/rcu/TREE07
index 92a97fa97dec..f422af4ff5a3 100644
--- a/tools/testing/selftests/rcutorture/configs/rcu/TREE07
+++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE07
@@ -15,8 +15,8 @@ CONFIG_RCU_TRACE=y
 CONFIG_HOTPLUG_CPU=y
 CONFIG_RCU_FANOUT=2
 CONFIG_RCU_FANOUT_LEAF=2
-CONFIG_RCU_FANOUT_EXACT=n
 CONFIG_RCU_NOCB_CPU=n
 CONFIG_DEBUG_LOCK_ALLOC=n
-CONFIG_RCU_CPU_STALL_INFO=y
+CONFIG_RCU_CPU_STALL_INFO=n
 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
+CONFIG_RCU_EXPERT=y
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TREE08 b/tools/testing/selftests/rcutorture/configs/rcu/TREE08
index 5812027d6f9f..a24d2ca30646 100644
--- a/tools/testing/selftests/rcutorture/configs/rcu/TREE08
+++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE08
@@ -1,5 +1,5 @@
 CONFIG_SMP=y
-CONFIG_NR_CPUS=16
+CONFIG_NR_CPUS=8
 CONFIG_PREEMPT_NONE=n
 CONFIG_PREEMPT_VOLUNTARY=n
 CONFIG_PREEMPT=y
@@ -13,13 +13,13 @@ CONFIG_HOTPLUG_CPU=n
 CONFIG_SUSPEND=n
 CONFIG_HIBERNATION=n
 CONFIG_RCU_FANOUT=3
-CONFIG_RCU_FANOUT_EXACT=y
 CONFIG_RCU_FANOUT_LEAF=2
 CONFIG_RCU_NOCB_CPU=y
 CONFIG_RCU_NOCB_CPU_ALL=y
 CONFIG_DEBUG_LOCK_ALLOC=n
 CONFIG_PROVE_LOCKING=y
-CONFIG_PROVE_RCU=y
+#CHECK#CONFIG_PROVE_RCU=y
 CONFIG_RCU_CPU_STALL_INFO=n
 CONFIG_RCU_BOOST=n
 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
+CONFIG_RCU_EXPERT=y
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TREE08-T b/tools/testing/selftests/rcutorture/configs/rcu/TREE08-T
index 3eaeccacb083..b2b8cea69dc9 100644
--- a/tools/testing/selftests/rcutorture/configs/rcu/TREE08-T
+++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE08-T
@@ -13,7 +13,6 @@ CONFIG_HOTPLUG_CPU=n
 CONFIG_SUSPEND=n
 CONFIG_HIBERNATION=n
 CONFIG_RCU_FANOUT=3
-CONFIG_RCU_FANOUT_EXACT=y
 CONFIG_RCU_FANOUT_LEAF=2
 CONFIG_RCU_NOCB_CPU=y
 CONFIG_RCU_NOCB_CPU_ALL=y
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TREE08-T.boot b/tools/testing/selftests/rcutorture/configs/rcu/TREE08-T.boot
new file mode 100644
index 000000000000..883149b5f2d1
--- /dev/null
+++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE08-T.boot
@@ -0,0 +1 @@
+rcutree.rcu_fanout_exact=1
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TREE08.boot b/tools/testing/selftests/rcutorture/configs/rcu/TREE08.boot
index 2561daf605ad..fb066dc82769 100644
--- a/tools/testing/selftests/rcutorture/configs/rcu/TREE08.boot
+++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE08.boot
@@ -1,3 +1,4 @@
 rcutorture.torture_type=sched
 rcupdate.rcu_self_test=1
 rcupdate.rcu_self_test_sched=1
+rcutree.rcu_fanout_exact=1
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TREE09 b/tools/testing/selftests/rcutorture/configs/rcu/TREE09
index 6076b36f6c0b..aa4ed08d999d 100644
--- a/tools/testing/selftests/rcutorture/configs/rcu/TREE09
+++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE09
@@ -16,3 +16,4 @@ CONFIG_DEBUG_LOCK_ALLOC=n
 CONFIG_RCU_CPU_STALL_INFO=n
 CONFIG_RCU_BOOST=n
 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
+#CHECK#CONFIG_RCU_EXPERT=n
diff --git a/tools/testing/selftests/rcutorture/doc/TREE_RCU-kconfig.txt b/tools/testing/selftests/rcutorture/doc/TREE_RCU-kconfig.txt
index ec03c883db00..b24c0004fc49 100644
--- a/tools/testing/selftests/rcutorture/doc/TREE_RCU-kconfig.txt
+++ b/tools/testing/selftests/rcutorture/doc/TREE_RCU-kconfig.txt
@@ -12,13 +12,12 @@ CONFIG_NO_HZ_IDLE -- Do those not otherwise specified. (Groups of two.)
 CONFIG_NO_HZ_FULL -- Do two, one with CONFIG_NO_HZ_FULL_SYSIDLE.
 CONFIG_NO_HZ_FULL_SYSIDLE -- Do one.
 CONFIG_PREEMPT -- Do half. (First three and #8.)
-CONFIG_PROVE_LOCKING -- Do all but two, covering CONFIG_PROVE_RCU and not.
-CONFIG_PROVE_RCU -- Do all but one under CONFIG_PROVE_LOCKING.
+CONFIG_PROVE_LOCKING -- Do several, covering CONFIG_DEBUG_LOCK_ALLOC=y and not.
+CONFIG_PROVE_RCU -- Hardwired to CONFIG_PROVE_LOCKING.
 CONFIG_RCU_BOOST -- one of PREEMPT_RCU.
 CONFIG_RCU_KTHREAD_PRIO -- set to 2 for _BOOST testing.
-CONFIG_RCU_CPU_STALL_INFO -- Do one.
-CONFIG_RCU_FANOUT -- Cover hierarchy as currently, but overlap with others.
-CONFIG_RCU_FANOUT_EXACT -- Do one.
+CONFIG_RCU_CPU_STALL_INFO -- Now default, avoid at least twice.
+CONFIG_RCU_FANOUT -- Cover hierarchy, but overlap with others.
 CONFIG_RCU_FANOUT_LEAF -- Do one non-default.
 CONFIG_RCU_FAST_NO_HZ -- Do one, but not with CONFIG_RCU_NOCB_CPU_ALL.
 CONFIG_RCU_NOCB_CPU -- Do three, see below.
@@ -27,28 +26,19 @@ CONFIG_RCU_NOCB_CPU_NONE -- Do one.
 CONFIG_RCU_NOCB_CPU_ZERO -- Do one.
 CONFIG_RCU_TRACE -- Do half.
 CONFIG_SMP -- Need one !SMP for PREEMPT_RCU.
+!RCU_EXPERT -- Do a few, but these have to be vanilla configurations.
 RCU-bh: Do one with PREEMPT and one with !PREEMPT.
 RCU-sched: Do one with PREEMPT but not BOOST.
 
 
-Hierarchy:
+Boot parameters:
 
-TREE01.	CONFIG_NR_CPUS=8, CONFIG_RCU_FANOUT=8, CONFIG_RCU_FANOUT_EXACT=n.
-TREE02.	CONFIG_NR_CPUS=8, CONFIG_RCU_FANOUT=3, CONFIG_RCU_FANOUT_EXACT=n,
-	CONFIG_RCU_FANOUT_LEAF=3.
-TREE03.	CONFIG_NR_CPUS=8, CONFIG_RCU_FANOUT=4, CONFIG_RCU_FANOUT_EXACT=n,
-	CONFIG_RCU_FANOUT_LEAF=4.
-TREE04.	CONFIG_NR_CPUS=8, CONFIG_RCU_FANOUT=2, CONFIG_RCU_FANOUT_EXACT=n,
-	CONFIG_RCU_FANOUT_LEAF=2.
-TREE05.	CONFIG_NR_CPUS=8, CONFIG_RCU_FANOUT=6, CONFIG_RCU_FANOUT_EXACT=n
-	CONFIG_RCU_FANOUT_LEAF=6.
-TREE06.	CONFIG_NR_CPUS=8, CONFIG_RCU_FANOUT=6, CONFIG_RCU_FANOUT_EXACT=y
-	CONFIG_RCU_FANOUT_LEAF=6.
-TREE07.	CONFIG_NR_CPUS=16, CONFIG_RCU_FANOUT=2, CONFIG_RCU_FANOUT_EXACT=n,
-	CONFIG_RCU_FANOUT_LEAF=2.
-TREE08.	CONFIG_NR_CPUS=16, CONFIG_RCU_FANOUT=3, CONFIG_RCU_FANOUT_EXACT=y,
-	CONFIG_RCU_FANOUT_LEAF=2.
-TREE09.	CONFIG_NR_CPUS=1.
+nohz_full - do at least one.
+maxcpu -- do at least one.
+rcupdate.rcu_self_test_bh -- Do at least one each, offloaded and not.
+rcupdate.rcu_self_test_sched -- Do at least one each, offloaded and not.
+rcupdate.rcu_self_test -- Do at least one each, offloaded and not.
+rcutree.rcu_fanout_exact -- Do at least one.
 
 
 Kconfig Parameters Ignored: