 Documentation/atomic_ops.txt | 55 ++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 50 insertions(+), 5 deletions(-)
diff --git a/Documentation/atomic_ops.txt b/Documentation/atomic_ops.txt
index 938f99957052..d46306fea230 100644
--- a/Documentation/atomic_ops.txt
+++ b/Documentation/atomic_ops.txt
@@ -14,12 +14,15 @@ suffice:
 
 	typedef struct { volatile int counter; } atomic_t;
 
+Historically, counter has been declared volatile.  This is now discouraged.
+See Documentation/volatile-considered-harmful.txt for the complete rationale.
+
 local_t is very similar to atomic_t. If the counter is per CPU and only
 updated by one CPU, local_t is probably more appropriate. Please see
 Documentation/local_ops.txt for the semantics of local_t.
 
-The first operations to implement for atomic_t's are the
-initializers and plain reads.
+The first operations to implement for atomic_t's are the initializers and
+plain reads.
 
 	#define ATOMIC_INIT(i)		{ (i) }
 	#define atomic_set(v, i)	((v)->counter = (i))
@@ -28,6 +31,12 @@ The first macro is used in definitions, such as:
 
 static atomic_t my_counter = ATOMIC_INIT(1);
 
+The initializer is atomic in that the return values of the atomic operations
+are guaranteed to be correct reflecting the initialized value if the
+initializer is used before runtime.  If the initializer is used at runtime, a
+proper implicit or explicit read memory barrier is needed before reading the
+value with atomic_read from another thread.
+
 The second interface can be used at runtime, as in:
 
 	struct foo { atomic_t counter; };
@@ -40,13 +49,43 @@ The second interface can be used at runtime, as in:
 		return -ENOMEM;
 	atomic_set(&k->counter, 0);
 
+The setting is atomic in that the return values of the atomic operations by
+all threads are guaranteed to be correct reflecting either the value that has
+been set with this operation or set with another operation.  A proper implicit
+or explicit memory barrier is needed before the value set with the operation
+is guaranteed to be readable with atomic_read from another thread.
+
 Next, we have:
 
 	#define atomic_read(v)	((v)->counter)
 
-which simply reads the current value of the counter.
-
-Now, we move onto the actual atomic operation interfaces.
+which simply reads the counter value currently visible to the calling thread.
+The read is atomic in that the return value is guaranteed to be one of the
+values initialized or modified with the interface operations if a proper
+implicit or explicit memory barrier is used after possible runtime
+initialization by any other thread and the value is modified only with the
+interface operations.  atomic_read does not guarantee that the runtime
+initialization by any other thread is visible yet, so the user of the
+interface must take care of that with a proper implicit or explicit memory
+barrier.
+
+*** WARNING: atomic_read() and atomic_set() DO NOT IMPLY BARRIERS! ***
+
+Some architectures may choose to use the volatile keyword, barriers, or inline
+assembly to guarantee some degree of immediacy for atomic_read() and
+atomic_set().  This is not uniformly guaranteed, and may change in the future,
+so all users of atomic_t should treat atomic_read() and atomic_set() as simple
+C statements that may be reordered or optimized away entirely by the compiler
+or processor, and explicitly invoke the appropriate compiler and/or memory
+barrier for each use case.  Failure to do so will result in code that may
+suddenly break when used with different architectures or compiler
+optimizations, or even changes in unrelated code which changes how the
+compiler optimizes the section accessing atomic_t variables.
+
+*** YOU HAVE BEEN WARNED! ***
+
+Now, we move onto the atomic operation interfaces typically implemented with
+the help of assembly code.
 
 	void atomic_add(int i, atomic_t *v);
 	void atomic_sub(int i, atomic_t *v);
@@ -121,6 +160,12 @@ operation.
 
 Then:
 
+	int atomic_xchg(atomic_t *v, int new);
+
+This performs an atomic exchange operation on the atomic variable v, setting
+the given new value.  It returns the old value that the atomic variable v had
+just before the operation.
+
 	int atomic_cmpxchg(atomic_t *v, int old, int new);
 
 This performs an atomic compare exchange operation on the atomic value v,