author	Paul E. McKenney <paulmck@linux.vnet.ibm.com>	2015-02-02 11:08:25 -0500
committer	Paul E. McKenney <paulmck@linux.vnet.ibm.com>	2015-02-26 14:57:31 -0500
commit	daf1aab9acfaaded09f53fa91dfe6e4e6926ec39 (patch)
tree	4c2206bd517312dfbc4e654a62205cfaf49bb82b
parent	f1360570f420b8b122e7f1cccf456ff7133a3007 (diff)
documentation: Clarify memory-barrier semantics of atomic operations
All value-returning atomic read-modify-write operations must provide
full memory-barrier semantics on both sides of the operation. This
commit clarifies the documentation to make it clear that these
memory-barrier semantics are provided by the operations themselves,
not by their callers.

Reported-by: Peter Hurley <peter@hurleysoftware.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-rw-r--r--	Documentation/atomic_ops.txt	45
1 file changed, 23 insertions(+), 22 deletions(-)
diff --git a/Documentation/atomic_ops.txt b/Documentation/atomic_ops.txt
index 183e41bdcb69..dab6da3382d9 100644
--- a/Documentation/atomic_ops.txt
+++ b/Documentation/atomic_ops.txt
@@ -201,11 +201,11 @@ These routines add 1 and subtract 1, respectively, from the given
 atomic_t and return the new counter value after the operation is
 performed.
 
-Unlike the above routines, it is required that explicit memory
-barriers are performed before and after the operation.  It must be
-done such that all memory operations before and after the atomic
-operation calls are strongly ordered with respect to the atomic
-operation itself.
+Unlike the above routines, it is required that these primitives
+include explicit memory barriers that are performed before and after
+the operation.  It must be done such that all memory operations before
+and after the atomic operation calls are strongly ordered with respect
+to the atomic operation itself.
 
 For example, it should behave as if a smp_mb() call existed both
 before and after the atomic operation.
@@ -233,21 +233,21 @@ These two routines increment and decrement by 1, respectively, the
 given atomic counter.  They return a boolean indicating whether the
 resulting counter value was zero or not.
 
-It requires explicit memory barrier semantics around the operation as
-above.
+Again, these primitives provide explicit memory barrier semantics around
+the atomic operation.
 
 	int atomic_sub_and_test(int i, atomic_t *v);
 
 This is identical to atomic_dec_and_test() except that an explicit
-decrement is given instead of the implicit "1".  It requires explicit
-memory barrier semantics around the operation.
+decrement is given instead of the implicit "1".  This primitive must
+provide explicit memory barrier semantics around the operation.
 
 	int atomic_add_negative(int i, atomic_t *v);
 
-The given increment is added to the given atomic counter value.  A
-boolean is return which indicates whether the resulting counter value
-is negative.  It requires explicit memory barrier semantics around the
-operation.
+The given increment is added to the given atomic counter value.  A boolean
+is returned which indicates whether the resulting counter value is negative.
+This primitive must provide explicit memory barrier semantics around
+the operation.
 
 Then:
 
@@ -257,7 +257,7 @@ This performs an atomic exchange operation on the atomic variable v, setting
 the given new value.  It returns the old value that the atomic variable v had
 just before the operation.
 
-atomic_xchg requires explicit memory barriers around the operation.
+atomic_xchg must provide explicit memory barriers around the operation.
 
 	int atomic_cmpxchg(atomic_t *v, int old, int new);
263 263
@@ -266,7 +266,7 @@ with the given old and new values. Like all atomic_xxx operations,
 atomic_cmpxchg will only satisfy its atomicity semantics as long as all
 other accesses of *v are performed through atomic_xxx operations.
 
-atomic_cmpxchg requires explicit memory barriers around the operation.
+atomic_cmpxchg must provide explicit memory barriers around the operation.
 
 The semantics for atomic_cmpxchg are the same as those defined for 'cas'
 below.
@@ -279,8 +279,8 @@ If the atomic value v is not equal to u, this function adds a to v, and
 returns non zero. If v is equal to u then it returns zero. This is done as
 an atomic operation.
 
-atomic_add_unless requires explicit memory barriers around the operation
-unless it fails (returns 0).
+atomic_add_unless must provide explicit memory barriers around the
+operation unless it fails (returns 0).
 
 atomic_inc_not_zero, equivalent to atomic_add_unless(v, 1, 0)
 
@@ -460,9 +460,9 @@ the return value into an int. There are other places where things
 like this occur as well.
 
 These routines, like the atomic_t counter operations returning values,
-require explicit memory barrier semantics around their execution.  All
-memory operations before the atomic bit operation call must be made
-visible globally before the atomic bit operation is made visible.
+must provide explicit memory barrier semantics around their execution.
+All memory operations before the atomic bit operation call must be
+made visible globally before the atomic bit operation is made visible.
 Likewise, the atomic bit operation must be visible globally before any
 subsequent memory operation is made visible.  For example:
 
@@ -536,8 +536,9 @@ except that two underscores are prefixed to the interface name.
 These non-atomic variants also do not require any special memory
 barrier semantics.
 
-The routines xchg() and cmpxchg() need the same exact memory barriers
-as the atomic and bit operations returning values.
+The routines xchg() and cmpxchg() must provide the same exact
+memory-barrier semantics as the atomic and bit operations returning
+values.
 
 Spinlocks and rwlocks have memory barrier expectations as well.
 The rule to follow is simple: