path: root/kernel/rcutree_plugin.h
author	Paul E. McKenney <paulmck@linux.vnet.ibm.com>	2010-09-07 13:38:22 -0400
committer	Paul E. McKenney <paulmck@linux.vnet.ibm.com>	2011-05-06 02:16:54 -0400
commit	e59fb3120becfb36b22ddb8bd27d065d3cdca499 (patch)
tree	37eaadfe112b64caae943fc7469274bc96553d92 /kernel/rcutree_plugin.h
parent	a00e0d714fbded07a7a2254391ce9ed5a5cb9d82 (diff)
rcu: Decrease memory-barrier usage based on semi-formal proof
Commit d09b62d fixed grace-period synchronization, but left some smp_mb() invocations in rcu_process_callbacks() that are no longer needed; sheer paranoia had prevented their removal. This commit removes them and provides a proof of correctness in their absence. It also adds a memory barrier to rcu_report_qs_rsp() immediately before the update to rsp->completed in order to handle the theoretical possibility that the compiler or CPU might move massive quantities of code into a lock-based critical section. This also proves that the sheer paranoia was not entirely unjustified, at least from a theoretical point of view.

In addition, the old dyntick-idle synchronization depended on the fact that grace periods were many milliseconds in duration, so that it could be assumed that no dyntick-idle CPU could reorder a memory reference across an entire grace period. Unfortunately for this design, the addition of expedited grace periods breaks this assumption, which has the unfortunate side effect of requiring atomic operations in the functions that track dyntick-idle state for RCU. (There is some hope that the algorithms used in user-level RCU might be applied here, but some work is required to handle the NMIs that user-space applications can happily ignore. For the short term, better safe than sorry.)

This proof assumes that neither the compiler nor the CPU will allow a lock acquisition and release to be reordered, as doing so can result in deadlock. The proof is as follows:

1.	A given CPU declares a quiescent state under the protection of its leaf rcu_node's ->lock.

2.	If there is more than one level of rcu_node hierarchy, the last CPU to declare a quiescent state will also acquire the ->lock of the next rcu_node up in the hierarchy, but only after releasing the lower level's lock. The acquisition of this lock clearly cannot occur prior to the acquisition of the leaf node's lock.

3.	Step 2 repeats until we reach the root rcu_node structure. Please note again that only one lock is held at a time through this process. The acquisition of the root rcu_node's ->lock must occur after the release of that of the leaf rcu_node.

4.	At this point, we set the ->completed field in the rcu_state structure in rcu_report_qs_rsp(). However, if the rcu_node hierarchy contains only one rcu_node, then in theory the code preceding the quiescent state could leak into the critical section. We therefore precede the update of ->completed with a memory barrier. All CPUs will therefore agree that any updates preceding any report of a quiescent state will have happened before the update of ->completed.

5.	Regardless of whether a new grace period is needed, rcu_start_gp() will propagate the new value of ->completed to all of the leaf rcu_node structures, under the protection of each rcu_node's ->lock. If a new grace period is needed immediately, this propagation will occur in the same critical section that ->completed was set in, but courtesy of the memory barrier in #4 above, is still seen to follow any pre-quiescent-state activity.

6.	When a given CPU invokes __rcu_process_gp_end(), it becomes aware of the end of the old grace period and therefore makes any RCU callbacks that were waiting on that grace period eligible for invocation. If this CPU is the same one that detected the end of the grace period, and if there is but a single rcu_node in the hierarchy, we will still be in the single critical section. In this case, the memory barrier in step #4 guarantees that all callbacks will be seen to execute after each CPU's quiescent state. On the other hand, if this is a different CPU, it will acquire the leaf rcu_node's ->lock, and will again be serialized after each CPU's quiescent state for the old grace period.

On the strength of this proof, this commit therefore removes the memory barriers from rcu_process_callbacks() and adds one to rcu_report_qs_rsp(). The effect is to reduce the number of memory barriers by one and to reduce the frequency of their execution from about once per scheduling tick per CPU to once per grace period.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
Diffstat (limited to 'kernel/rcutree_plugin.h')
-rw-r--r--	kernel/rcutree_plugin.h	7
1 files changed, 3 insertions, 4 deletions
diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
index 38426ef1bcd6..764b5fcc7c56 100644
--- a/kernel/rcutree_plugin.h
+++ b/kernel/rcutree_plugin.h
@@ -1182,7 +1182,6 @@ int rcu_needs_cpu(int cpu)
 {
 	int c = 0;
 	int snap;
-	int snap_nmi;
 	int thatcpu;
 
 	/* Check for being in the holdoff period. */
@@ -1193,10 +1192,10 @@ int rcu_needs_cpu(int cpu)
 	for_each_online_cpu(thatcpu) {
 		if (thatcpu == cpu)
 			continue;
-		snap = per_cpu(rcu_dynticks, thatcpu).dynticks;
-		snap_nmi = per_cpu(rcu_dynticks, thatcpu).dynticks_nmi;
+		snap = atomic_add_return(0, &per_cpu(rcu_dynticks,
+						 thatcpu).dynticks);
 		smp_mb();  /* Order sampling of snap with end of grace period. */
-		if (((snap & 0x1) != 0) || ((snap_nmi & 0x1) != 0)) {
+		if ((snap & 0x1) != 0) {
 			per_cpu(rcu_dyntick_drain, cpu) = 0;
 			per_cpu(rcu_dyntick_holdoff, cpu) = jiffies - 1;
 			return rcu_needs_cpu_quick_check(cpu);