| author | Paul E. McKenney <paulmck@linux.vnet.ibm.com> | 2012-10-23 16:47:01 -0400 |
|---|---|---|
| committer | Paul E. McKenney <paulmck@linux.vnet.ibm.com> | 2012-11-13 17:08:23 -0500 |
| commit | f0a0e6f282c72247e7c8ec17c68d528c1bb4d49e (patch) | |
| tree | 22b66fc8ac9b95586866ddb447dcc8712d441c14 /kernel/rcutree.c | |
| parent | 67afeed2cab0e59712b4ebf1aef9a2e555a188ce (diff) | |
rcu: Clarify memory-ordering properties of grace-period primitives
This commit explicitly states the memory-ordering properties of the
RCU grace-period primitives. Although these properties were in some
sense implied by the fundamental property of RCU ("a grace period must
wait for all pre-existing RCU read-side critical sections to complete"),
stating them explicitly will be a great labor-saving device.
Reported-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Diffstat (limited to 'kernel/rcutree.c')
-rw-r--r-- | kernel/rcutree.c | 29
1 file changed, 25 insertions(+), 4 deletions(-)
diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index e4c2192b47c8..15a2beec320f 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -2228,10 +2228,28 @@ static inline int rcu_blocking_is_gp(void)
  * rcu_read_lock_sched().
  *
  * This means that all preempt_disable code sequences, including NMI and
- * hardware-interrupt handlers, in progress on entry will have completed
- * before this primitive returns. However, this does not guarantee that
- * softirq handlers will have completed, since in some kernels, these
- * handlers can run in process context, and can block.
+ * non-threaded hardware-interrupt handlers, in progress on entry will
+ * have completed before this primitive returns. However, this does not
+ * guarantee that softirq handlers will have completed, since in some
+ * kernels, these handlers can run in process context, and can block.
+ *
+ * Note that this guarantee implies further memory-ordering guarantees.
+ * On systems with more than one CPU, when synchronize_sched() returns,
+ * each CPU is guaranteed to have executed a full memory barrier since the
+ * end of its last RCU-sched read-side critical section whose beginning
+ * preceded the call to synchronize_sched(). In addition, each CPU having
+ * an RCU read-side critical section that extends beyond the return from
+ * synchronize_sched() is guaranteed to have executed a full memory barrier
+ * after the beginning of synchronize_sched() and before the beginning of
+ * that RCU read-side critical section. Note that these guarantees include
+ * CPUs that are offline, idle, or executing in user mode, as well as CPUs
+ * that are executing in the kernel.
+ *
+ * Furthermore, if CPU A invoked synchronize_sched(), which returned
+ * to its caller on CPU B, then both CPU A and CPU B are guaranteed
+ * to have executed a full memory barrier during the execution of
+ * synchronize_sched() -- even if CPU A and CPU B are the same CPU (but
+ * again only if the system has more than one CPU).
  *
  * This primitive provides the guarantees made by the (now removed)
  * synchronize_kernel() API. In contrast, synchronize_rcu() only
@@ -2259,6 +2277,9 @@ EXPORT_SYMBOL_GPL(synchronize_sched);
  * read-side critical sections have completed. RCU read-side critical
  * sections are delimited by rcu_read_lock_bh() and rcu_read_unlock_bh(),
  * and may be nested.
+ *
+ * See the description of synchronize_sched() for more detailed information
+ * on memory ordering guarantees.
  */
 void synchronize_rcu_bh(void)
 {
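The guarantee documented above is what lets an updater rely on an RCU-sched grace period instead of explicit memory barriers. The following is a minimal sketch, not part of this commit: `struct foo`, `gp`, `gp_lock`, `reader()`, and `updater()` are hypothetical names chosen for illustration, and it uses the RCU-sched API as it existed at the time of this patch.

```c
/*
 * Hypothetical sketch, not part of this commit.  Illustrates an updater
 * relying on the memory-ordering guarantee of synchronize_sched() that
 * the patch above documents.
 */
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct foo {
	int a;
};

static struct foo __rcu *gp;		/* RCU-protected global pointer */
static DEFINE_SPINLOCK(gp_lock);	/* serializes updaters */

/* Reader: an RCU-sched read-side critical section. */
static int reader(void)
{
	struct foo *p;
	int ret = 0;

	rcu_read_lock_sched();
	p = rcu_dereference_sched(gp);
	if (p)
		ret = p->a;		/* sees either the old or the new version */
	rcu_read_unlock_sched();
	return ret;
}

/* Updater: publish a new version, wait for a grace period, free the old one. */
static void updater(struct foo *newp)
{
	struct foo *oldp;

	spin_lock(&gp_lock);
	oldp = rcu_dereference_protected(gp, lockdep_is_held(&gp_lock));
	rcu_assign_pointer(gp, newp);
	spin_unlock(&gp_lock);

	/*
	 * Per the guarantee documented above, by the time synchronize_sched()
	 * returns, every CPU has executed a full memory barrier since the end
	 * of any RCU-sched read-side critical section that began before this
	 * call, so no reader can still be accessing oldp and no explicit
	 * smp_mb() is needed before the kfree().
	 */
	synchronize_sched();
	kfree(oldp);
}
```

This mirrors the classic publish/wait/free pattern; the point of the new comment text is that the full memory barriers implied by the grace period make any additional smp_mb() in the updater unnecessary.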