path: root/Documentation/memory-barriers.txt
author		Paul E. McKenney <paulmck@linux.vnet.ibm.com>	2015-07-14 21:35:23 -0400
committer	Paul E. McKenney <paulmck@linux.vnet.ibm.com>	2015-08-04 11:49:21 -0400
commit		12d560f4ea87030667438a169912380be00cea4b (patch)
tree		3b60a7b97e849bd68573db48dd8608cb43f05694	/Documentation/memory-barriers.txt
parent		3dbe43f6fba9f2a0e46e371733575a45704c22ab (diff)
rcu,locking: Privatize smp_mb__after_unlock_lock()
RCU is the only thing that uses smp_mb__after_unlock_lock(), and is
likely the only thing that ever will use it, so this commit makes this
macro private to RCU.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: "linux-arch@vger.kernel.org" <linux-arch@vger.kernel.org>
Diffstat (limited to 'Documentation/memory-barriers.txt')
-rw-r--r--	Documentation/memory-barriers.txt	71
1 file changed, 4 insertions(+), 67 deletions(-)
diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
index 318523872db5..eafa6a53f72c 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -1854,16 +1854,10 @@ RELEASE are to the same lock variable, but only from the perspective of
 another CPU not holding that lock.  In short, a ACQUIRE followed by an
 RELEASE may -not- be assumed to be a full memory barrier.
 
-Similarly, the reverse case of a RELEASE followed by an ACQUIRE does not
-imply a full memory barrier.  If it is necessary for a RELEASE-ACQUIRE
-pair to produce a full barrier, the ACQUIRE can be followed by an
-smp_mb__after_unlock_lock() invocation.  This will produce a full barrier
-(including transitivity) if either (a) the RELEASE and the ACQUIRE are
-executed by the same CPU or task, or (b) the RELEASE and ACQUIRE act on
-the same variable.  The smp_mb__after_unlock_lock() primitive is free
-on many architectures.  Without smp_mb__after_unlock_lock(), the CPU's
-execution of the critical sections corresponding to the RELEASE and the
-ACQUIRE can cross, so that:
+Similarly, the reverse case of a RELEASE followed by an ACQUIRE does
+not imply a full memory barrier.  Therefore, the CPU's execution of the
+critical sections corresponding to the RELEASE and the ACQUIRE can cross,
+so that:
 
 	*A = a;
 	RELEASE M
@@ -1901,29 +1895,6 @@ the RELEASE would simply complete, thereby avoiding the deadlock.
 	a sleep-unlock race, but the locking primitive needs to resolve
 	such races properly in any case.
 
-With smp_mb__after_unlock_lock(), the two critical sections cannot overlap.
-For example, with the following code, the store to *A will always be
-seen by other CPUs before the store to *B:
-
-	*A = a;
-	RELEASE M
-	ACQUIRE N
-	smp_mb__after_unlock_lock();
-	*B = b;
-
-The operations will always occur in one of the following orders:
-
-	STORE *A, RELEASE, ACQUIRE, smp_mb__after_unlock_lock(), STORE *B
-	STORE *A, ACQUIRE, RELEASE, smp_mb__after_unlock_lock(), STORE *B
-	ACQUIRE, STORE *A, RELEASE, smp_mb__after_unlock_lock(), STORE *B
-
-If the RELEASE and ACQUIRE were instead both operating on the same lock
-variable, only the first of these alternatives can occur.  In addition,
-the more strongly ordered systems may rule out some of the above orders.
-But in any case, as noted earlier, the smp_mb__after_unlock_lock()
-ensures that the store to *A will always be seen as happening before
-the store to *B.
-
 Locks and semaphores may not provide any guarantee of ordering on UP compiled
 systems, and so cannot be counted on in such a situation to actually achieve
 anything at all - especially with respect to I/O accesses - unless combined
@@ -2154,40 +2125,6 @@ But it won't see any of:
 	*E, *F or *G following RELEASE Q
 
 
-However, if the following occurs:
-
-	CPU 1				CPU 2
-	===============================	===============================
-	WRITE_ONCE(*A, a);
-	ACQUIRE M		     [1]
-	WRITE_ONCE(*B, b);
-	WRITE_ONCE(*C, c);
-	RELEASE M	     [1]
-	WRITE_ONCE(*D, d);		WRITE_ONCE(*E, e);
-					ACQUIRE M		     [2]
-					smp_mb__after_unlock_lock();
-					WRITE_ONCE(*F, f);
-					WRITE_ONCE(*G, g);
-					RELEASE M	     [2]
-					WRITE_ONCE(*H, h);
-
-CPU 3 might see:
-
-	*E, ACQUIRE M [1], *C, *B, *A, RELEASE M [1],
-		ACQUIRE M [2], *H, *F, *G, RELEASE M [2], *D
-
-But assuming CPU 1 gets the lock first, CPU 3 won't see any of:
-
-	*B, *C, *D, *F, *G or *H preceding ACQUIRE M [1]
-	*A, *B or *C following RELEASE M [1]
-	*F, *G or *H preceding ACQUIRE M [2]
-	*A, *B, *C, *E, *F or *G following RELEASE M [2]
-
-Note that the smp_mb__after_unlock_lock() is critically important
-here:  Without it CPU 3 might see some of the above orderings.
-Without smp_mb__after_unlock_lock(), the accesses are not guaranteed
-to be seen in order unless CPU 3 holds lock M.
-
 
 ACQUIRES VS I/O ACCESSES
 ------------------------