author		James Bottomley <jejb@mulgrave.il.steeleye.com>	2006-06-28 14:06:39 -0400
committer	James Bottomley <jejb@mulgrave.il.steeleye.com>	2006-06-28 14:06:39 -0400
commit		f28e71617ddaf2483e3e5c5237103484a303743f (patch)
tree		67627d2d8ddbf6a4449371e9261d796c013b1fa1 /Documentation/atomic_ops.txt
parent		dc6a78f1af10d28fb8c395034ae1e099b85c05b0 (diff)
parent		a39727f212426b9d5f9267b3318a2afaf9922d3b (diff)
Merge ../linux-2.6/
Conflicts:

	drivers/scsi/aacraid/comminit.c

Fixed up by removing the now renamed CONFIG_IOMMU option from aacraid

Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
Diffstat (limited to 'Documentation/atomic_ops.txt')
-rw-r--r--	Documentation/atomic_ops.txt	28
1 files changed, 14 insertions, 14 deletions
diff --git a/Documentation/atomic_ops.txt b/Documentation/atomic_ops.txt
index 23a1c2402bcc..2a63d5662a93 100644
--- a/Documentation/atomic_ops.txt
+++ b/Documentation/atomic_ops.txt
@@ -157,13 +157,13 @@ For example, smp_mb__before_atomic_dec() can be used like so:
 	smp_mb__before_atomic_dec();
 	atomic_dec(&obj->ref_count);
 
-It makes sure that all memory operations preceeding the atomic_dec()
+It makes sure that all memory operations preceding the atomic_dec()
 call are strongly ordered with respect to the atomic counter
-operation. In the above example, it guarentees that the assignment of
+operation. In the above example, it guarantees that the assignment of
 "1" to obj->dead will be globally visible to other cpus before the
 atomic counter decrement.
 
-Without the explicitl smp_mb__before_atomic_dec() call, the
+Without the explicit smp_mb__before_atomic_dec() call, the
 implementation could legally allow the atomic counter update visible
 to other cpus before the "obj->dead = 1;" assignment.
 
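Spelled out, the passage corrected above describes this pattern (a minimal sketch; the struct layout is inferred from the obj->dead and obj->ref_count references in the document, and obj_kill() is a hypothetical caller):

	struct obj {
		atomic_t ref_count;
		int dead;
	};

	static void obj_kill(struct obj *obj)
	{
		obj->dead = 1;			/* store that must be seen first */
		smp_mb__before_atomic_dec();	/* orders the store above the dec */
		atomic_dec(&obj->ref_count);	/* counter update seen second */
	}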
@@ -173,11 +173,11 @@ ordering with respect to memory operations after an atomic_dec() call
 (smp_mb__{before,after}_atomic_inc()).
 
 A missing memory barrier in the cases where they are required by the
-atomic_t implementation above can have disasterous results. Here is
-an example, which follows a pattern occuring frequently in the Linux
+atomic_t implementation above can have disastrous results. Here is
+an example, which follows a pattern occurring frequently in the Linux
 kernel. It is the use of atomic counters to implement reference
 counting, and it works such that once the counter falls to zero it can
-be guarenteed that no other entity can be accessing the object:
+be guaranteed that no other entity can be accessing the object:
 
 static void obj_list_add(struct obj *obj)
 {
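The reference-counting idiom the corrected text describes is typically completed on the release side with atomic_dec_and_test(); a minimal sketch (obj_destroy() is a hypothetical destructor, not part of the file's own example):

	static void obj_put(struct obj *obj)
	{
		/*
		 * atomic_dec_and_test() returns true only for the one
		 * caller that drops the counter to zero, so exactly one
		 * cpu is left referencing the object and may free it.
		 */
		if (atomic_dec_and_test(&obj->ref_count))
			obj_destroy(obj);
	}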
@@ -291,9 +291,9 @@ to the size of an "unsigned long" C data type, and are least of that
 size. The endianness of the bits within each "unsigned long" are the
 native endianness of the cpu.
 
-	void set_bit(unsigned long nr, volatils unsigned long *addr);
-	void clear_bit(unsigned long nr, volatils unsigned long *addr);
-	void change_bit(unsigned long nr, volatils unsigned long *addr);
+	void set_bit(unsigned long nr, volatile unsigned long *addr);
+	void clear_bit(unsigned long nr, volatile unsigned long *addr);
+	void change_bit(unsigned long nr, volatile unsigned long *addr);
 
 These routines set, clear, and change, respectively, the bit number
 indicated by "nr" on the bit mask pointed to by "ADDR".
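Usage is straightforward; for example (a sketch assuming a caller-owned flags word, not anything from the file above):

	unsigned long flags[1] = { 0UL };

	set_bit(0, flags);	/* bit 0 atomically becomes 1 */
	change_bit(0, flags);	/* bit 0 atomically flips back to 0 */
	clear_bit(0, flags);	/* already 0; the write is still atomic */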
@@ -301,9 +301,9 @@ indicated by "nr" on the bit mask pointed to by "ADDR".
 They must execute atomically, yet there are no implicit memory barrier
 semantics required of these interfaces.
 
-	int test_and_set_bit(unsigned long nr, volatils unsigned long *addr);
-	int test_and_clear_bit(unsigned long nr, volatils unsigned long *addr);
-	int test_and_change_bit(unsigned long nr, volatils unsigned long *addr);
+	int test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
+	int test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
+	int test_and_change_bit(unsigned long nr, volatile unsigned long *addr);
 
 Like the above, except that these routines return a boolean which
 indicates whether the changed bit was set _BEFORE_ the atomic bit
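That returned old value is what makes these routines usable for claiming exclusive ownership; a sketch, assuming a hypothetical one-shot initialization flag and helper:

	static unsigned long init_done[1];

	void maybe_init(void)
	{
		/*
		 * Only the caller that saw the bit clear (return value
		 * of zero) runs the initialization; every later caller
		 * observes the bit already set and skips it.
		 */
		if (!test_and_set_bit(0, init_done))
			do_initialization();	/* hypothetical helper */
	}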
@@ -335,7 +335,7 @@ subsequent memory operation is made visible. For example:
 		/* ... */;
 	obj->killed = 1;
 
-The implementation of test_and_set_bit() must guarentee that
+The implementation of test_and_set_bit() must guarantee that
 "obj->dead = 1;" is visible to cpus before the atomic memory operation
 done by test_and_set_bit() becomes visible. Likewise, the atomic
 memory operation done by test_and_set_bit() must become visible before
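Taken together, the two requirements pin the quoted stores on either side of the atomic operation, roughly like so (a sketch; obj->flags is assumed to be the object's bit mask):

	obj->dead = 1;				/* must be visible before the RMW */
	if (test_and_set_bit(0, &obj->flags))	/* barrier on both sides */
		/* ... */;
	obj->killed = 1;			/* must be visible after the RMW */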
@@ -474,7 +474,7 @@ Now, as far as memory barriers go, as long as spin_lock()
 strictly orders all subsequent memory operations (including
 the cas()) with respect to itself, things will be fine.
 
-Said another way, _atomic_dec_and_lock() must guarentee that
+Said another way, _atomic_dec_and_lock() must guarantee that
 a counter dropping to zero is never made visible before the
 spinlock being acquired.
 
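That guarantee is what permits the classic teardown sequence built on the public atomic_dec_and_lock() wrapper; a minimal sketch (list_lock and the obj->list linkage are assumptions, not from the text above):

	if (atomic_dec_and_lock(&obj->ref_count, &list_lock)) {
		/*
		 * The counter hit zero and the lock is held, so no
		 * other cpu can find the object through the list.
		 */
		list_del(&obj->list);
		spin_unlock(&list_lock);
		kfree(obj);
	}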