path: root/include
author	Nick Piggin <npiggin@suse.de>	2007-09-29 09:28:48 -0400
committer	Linus Torvalds <torvalds@woody.linux-foundation.org>	2007-09-29 12:13:59 -0400
commit	4827bbb06e4b59922c2b9bfb13ad1bf936bdebe5 (patch)
tree	206facb68acff39b9cf15559e6e80227f0e12f31 /include
parent	1bef7dc00caa7bcbff4fdb55e599e2591461fafa (diff)
i386: remove bogus comment about memory barrier
The comment being removed by this patch is incorrect and misleading.

In the following situation:

	1. load  ...
	2. store 1 -> X
	3. wmb
	4. rmb
	5. load  a <- Y
	6. store ...

4 will only ensure ordering of 1 with 5.
3 will only ensure ordering of 2 with 6.

Further, a CPU with strictly in-order stores will still only provide that
2 and 6 are ordered (effectively, it is the same as a weakly ordered CPU
with wmb after every store).

In all cases, 5 may still be executed before 2 is visible to other CPUs!
The additional piece of the puzzle that mb() provides is the store/load
ordering, which fundamentally cannot be achieved with any combination of
rmb()s and wmb()s.

This can be an unexpected result if one expects any sort of global ordering
guarantee from barriers (eg. that the barriers themselves are sequentially
consistent with other types of barriers).  However sfence or lfence
barriers need only provide a partial ordering of memory operations --
consider that wmb may be implemented as nothing more than inserting a
special barrier entry in the store queue, or, in the case of x86, it can
be a noop as the store queue is in order.  And an rmb may be implemented
as a directive to prevent subsequent loads only so long as there are no
previous outstanding loads (while there could be stores still in store
queues).

I can actually see the occasional load/store being reordered around lfence
on my core2.  That doesn't prove my above assertions, but it does show the
comment is wrong (unless my program is buggy -- I can send it out by
request).

So: mb() and smp_mb() always have and always will require a full mfence or
lock prefixed instruction on x86.  And we should remove this comment.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Paul McKenney <paulmck@us.ibm.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
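The store/load case described above is the classic store-buffering pattern.
Below is a minimal user-space sketch of it, not part of this patch: the
barrier macros, thread functions and variable names are illustrative only,
assuming x86, GCC inline asm and pthreads.  With mb() in place the outcome
r1 == 0 && r2 == 0 is forbidden; replace mb() with the wmb(); rmb(); pair
and it may still occur, which is exactly why the removed comment was wrong.
A real test would run the two threads repeatedly and count that outcome.

/* Illustrative user-space sketch, not kernel code: the store-buffering
 * pattern the commit message describes.  Build with: gcc -O2 -pthread. */
#include <pthread.h>
#include <stdio.h>

#define wmb()	asm volatile("sfence" ::: "memory")	/* orders stores with stores */
#define rmb()	asm volatile("lfence" ::: "memory")	/* orders loads with loads   */
#define mb()	asm volatile("mfence" ::: "memory")	/* also orders store -> load */

volatile int X, Y;
int r1, r2;

static void *cpu0(void *arg)
{
	X = 1;		/* "2. store 1 -> X" from the example above          */
	mb();		/* with only wmb(); rmb(); the load below may still
			 * complete before the store leaves the store queue  */
	r1 = Y;		/* "5. load a <- Y"                                   */
	return NULL;
}

static void *cpu1(void *arg)
{
	Y = 1;
	mb();
	r2 = X;
	return NULL;
}

int main(void)
{
	pthread_t t0, t1;

	pthread_create(&t0, NULL, cpu0, NULL);
	pthread_create(&t1, NULL, cpu1, NULL);
	pthread_join(t0, NULL);
	pthread_join(t1, NULL);

	/* With mb(): r1 == 0 && r2 == 0 is impossible.
	 * With wmb(); rmb(); instead: it can still happen. */
	printf("r1=%d r2=%d\n", r1, r2);
	return 0;
}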
Diffstat (limited to 'include')
-rw-r--r--	include/asm-i386/system.h	5
1 file changed, 0 insertions, 5 deletions
diff --git a/include/asm-i386/system.h b/include/asm-i386/system.h
index 609756c61676..d69ba937e092 100644
--- a/include/asm-i386/system.h
+++ b/include/asm-i386/system.h
@@ -214,11 +214,6 @@ static inline unsigned long get_limit(unsigned long segment)
  */
 
 
-/*
- * Actually only lfence would be needed for mb() because all stores done
- * by the kernel should be already ordered. But keep a full barrier for now.
- */
-
 #define mb() alternative("lock; addl $0,0(%%esp)", "mfence", X86_FEATURE_XMM2)
 #define rmb() alternative("lock; addl $0,0(%%esp)", "lfence", X86_FEATURE_XMM2)
 
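For context on the definitions that remain: alternative(oldinstr, newinstr,
feature) patches in the second instruction sequence at boot when the CPU
advertises X86_FEATURE_XMM2 (SSE2); otherwise the locked add of zero to the
top of the stack is used, which is itself a full barrier.  A conceptual
user-space sketch of what mb() boils down to after that boot-time patching,
assuming GCC extended inline asm on x86 (the function and parameter names
are illustrative, not kernel API):

/* Conceptual sketch, not the kernel implementation: the runtime `if`
 * stands in for the one-time boot patching done by alternative(). */
static void mb_like(int cpu_has_xmm2)
{
	if (cpu_has_xmm2)
		/* newinstr: full store/load barrier on SSE2-capable CPUs */
		asm volatile("mfence" ::: "memory");
	else
		/* oldinstr: a locked read-modify-write of the stack top
		 * also orders earlier stores against later loads */
		asm volatile("lock; addl $0,0(%%esp)" ::: "memory", "cc");
}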