author	Alan Stern <stern@rowland.harvard.edu>	2018-09-26 14:29:17 -0400
committer	Ingo Molnar <mingo@kernel.org>	2018-10-02 04:28:01 -0400
commit	6e89e831a90172bc3d34ecbba52af5b9c4a447d1
tree	78cf833bc9fad144af9625e58ec74c83dd664e66 /tools/memory-model
parent	c4f790f244070dbab486805276ba4d1f87a057af
tools/memory-model: Add extra ordering for locks and remove it for ordinary release/acquire
More than one kernel developer has expressed the opinion that the LKMM
should enforce ordering of writes by locking. In other words, given
the following code:

	WRITE_ONCE(x, 1);
	spin_unlock(&s);
	spin_lock(&s);
	WRITE_ONCE(y, 1);

the stores to x and y should be propagated in order to all other CPUs,
even though those other CPUs might not access the lock s. In terms of
the memory model, this means expanding the cumul-fence relation.

Locks should also provide read-read (and read-write) ordering in a
similar way. Given:

	READ_ONCE(x);
	spin_unlock(&s);
	spin_lock(&s);
	READ_ONCE(y);		// or WRITE_ONCE(y, 1);

the load of x should be executed before the load of (or store to) y.
The LKMM already provides this ordering, but it provides it even in the
case where the two accesses are separated by a release/acquire pair of
fences rather than unlock/lock. This would prevent architectures from
using weakly ordered implementations of release and acquire, which
seems like an unnecessary restriction. The patch therefore removes the
ordering requirement from the LKMM for that case.

There are several arguments both for and against this change. Let us
refer to these enhanced ordering properties by saying that the LKMM
would require locks to be RCtso (a bit of a misnomer, but analogous to
RCpc and RCsc) and it would require ordinary acquire/release only to
be RCpc.

(Note: In the following, the phrase "all supported architectures" is
meant not to include RISC-V. Although RISC-V is indeed supported by
the kernel, the implementation is still somewhat in a state of flux
and therefore statements about it would be premature.)

Pros:

	The kernel already provides RCtso ordering for locks on all
	supported architectures, even though this is not stated
	explicitly anywhere. Therefore the LKMM should formalize it.

	In theory, guaranteeing RCtso ordering would reduce the need
	for additional barrier-like constructs meant to increase the
	ordering strength of locks.

	Will Deacon and Peter Zijlstra are strongly in favor of
	formalizing the RCtso requirement. Linus Torvalds and Will
	would like to go even further, requiring locks to have RCsc
	behavior (ordering preceding writes against later reads), but
	they recognize that this would incur a noticeable performance
	degradation on the POWER architecture. Linus also points out
	that people have made the mistake, in the past, of assuming
	that locking has stronger ordering properties than is
	currently guaranteed, and this change would reduce the
	likelihood of such mistakes.

	Not requiring ordinary acquire/release to be any stronger than
	RCpc may prove advantageous for future architectures, allowing
	them to implement smp_load_acquire() and smp_store_release()
	with more efficient machine instructions than would be
	possible if the operations had to be RCtso. Will and Linus
	approve this rationale, hypothetical though it is at the
	moment (it may end up affecting the RISC-V implementation).
	The same argument may or may not apply to RMW-acquire/release;
	see also the second Con entry below.

	Linus feels that locks should be easy for people to use
	without worrying about memory consistency issues, since they
	are so pervasive in the kernel, whereas acquire/release is
	much more of an "experts only" tool. Requiring locks to be
	RCtso is a step in this direction.

Cons:

	Andrea Parri and Luc Maranget think that locks should have the
	same ordering properties as ordinary acquire/release (indeed,
	Luc points out that the names "acquire" and "release" derive
	from the usage of locks). Andrea points out that having
	different ordering properties for different forms of acquires
	and releases is not only unnecessary, it would also be
	confusing and unmaintainable.

	Locks are constructed from lower-level primitives, typically
	RMW-acquire (for locking) and ordinary release (for unlock).
	It is illogical to require stronger ordering properties from
	the high-level operations than from the low-level operations
	they comprise. Thus, this change would make

		while (cmpxchg_acquire(&s, 0, 1) != 0)
			cpu_relax();

	an incorrect implementation of spin_lock(&s) as far as the
	LKMM is concerned. In theory this weakness can be ameliorated
	by changing the LKMM even further, requiring
	RMW-acquire/release also to be RCtso (which it already is on
	all supported architectures).

	As far as I know, nobody has singled out any examples of code
	in the kernel that actually relies on locks being RCtso.
	(People mumble about RCU and the scheduler, but nobody has
	pointed to any actual code. If there are any real cases, their
	number is likely quite small.) If RCtso ordering is not
	needed, why require it?

	A handful of locking constructs (qspinlocks, qrwlocks, and
	mcs_spinlocks) are built on top of smp_cond_load_acquire()
	instead of an RMW-acquire instruction. It currently provides
	only the ordinary acquire semantics, not the stronger ordering
	this patch would require of locks. In theory this could be
	ameliorated by requiring smp_cond_load_acquire() in
	combination with ordinary release also to be RCtso (which is
	currently true on all supported architectures).

	On future weakly ordered architectures, people may be able to
	implement locks in a non-RCtso fashion with significant
	performance improvement. Meeting the RCtso requirement would
	necessarily add run-time overhead.

Overall, the technical aspects of these arguments seem relatively
minor, and it appears mostly to boil down to a matter of opinion.
Since the opinions of senior kernel maintainers such as Linus, Peter,
and Will carry more weight than those of Luc and Andrea, this patch
changes the model in accordance with the maintainers' wishes.

Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: akiyks@gmail.com
Cc: boqun.feng@gmail.com
Cc: dhowells@redhat.com
Cc: j.alglave@ucl.ac.uk
Cc: linux-arch@vger.kernel.org
Cc: luc.maranget@inria.fr
Cc: npiggin@gmail.com
Cc: parri.andrea@gmail.com
Link: http://lkml.kernel.org/r/20180926182920.27644-2-paulmck@linux.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
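To make the intended write-write guarantee concrete, here is a litmus-test
sketch of the first code fragment above. The test name and exact shape are
illustrative additions rather than part of the patch; with this change
applied, running it through herd7 against linux-kernel.cat should report the
exists clause as never satisfied, because the unlock/lock pair in P0() forces
the store to x to propagate before the store to y.

C MP-writes-unlock-lock

(*
 * Illustrative sketch, not part of the patch: under the updated model
 * the unlock/lock sequence in P0() orders the propagation of the two
 * stores, so r1 = 1 together with r2 = 0 should be forbidden (Never).
 *)

{}

P0(int *x, int *y, spinlock_t *s)
{
	spin_lock(s);
	WRITE_ONCE(*x, 1);
	spin_unlock(s);
	spin_lock(s);
	WRITE_ONCE(*y, 1);
	spin_unlock(s);
}

P1(int *x, int *y)
{
	int r1;
	int r2;

	r1 = READ_ONCE(*y);
	smp_rmb();
	r2 = READ_ONCE(*x);
}

exists (1:r1=1 /\ 1:r2=0)

Dropping either the intervening spin_unlock()/spin_lock() pair in P0() or
the smp_rmb() in P1() should make the forbidden outcome reachable again.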
Diffstat (limited to 'tools/memory-model')
-rw-r--r--	tools/memory-model/Documentation/explanation.txt	186
-rw-r--r--	tools/memory-model/linux-kernel.cat	8
-rw-r--r--	tools/memory-model/litmus-tests/ISA2+pooncelock+pooncelock+pombonce.litmus	7
3 files changed, 150 insertions(+), 51 deletions(-)
diff --git a/tools/memory-model/Documentation/explanation.txt b/tools/memory-model/Documentation/explanation.txt
index 0cbd1ef8f86d..35bff92cc773 100644
--- a/tools/memory-model/Documentation/explanation.txt
+++ b/tools/memory-model/Documentation/explanation.txt
@@ -28,7 +28,8 @@ Explanation of the Linux-Kernel Memory Consistency Model
   20. THE HAPPENS-BEFORE RELATION: hb
   21. THE PROPAGATES-BEFORE RELATION: pb
   22. RCU RELATIONS: rcu-link, gp, rscs, rcu-fence, and rb
-  23. ODDS AND ENDS
+  23. LOCKING
+  24. ODDS AND ENDS
 
 
 
@@ -1067,28 +1068,6 @@ allowing out-of-order writes like this to occur. The model avoided
 violating the write-write coherence rule by requiring the CPU not to
 send the W write to the memory subsystem at all!)
 
-There is one last example of preserved program order in the LKMM: when
-a load-acquire reads from an earlier store-release. For example:
-
-	smp_store_release(&x, 123);
-	r1 = smp_load_acquire(&x);
-
-If the smp_load_acquire() ends up obtaining the 123 value that was
-stored by the smp_store_release(), the LKMM says that the load must be
-executed after the store; the store cannot be forwarded to the load.
-This requirement does not arise from the operational model, but it
-yields correct predictions on all architectures supported by the Linux
-kernel, although for differing reasons.
-
-On some architectures, including x86 and ARMv8, it is true that the
-store cannot be forwarded to the load. On others, including PowerPC
-and ARMv7, smp_store_release() generates object code that starts with
-a fence and smp_load_acquire() generates object code that ends with a
-fence. The upshot is that even though the store may be forwarded to
-the load, it is still true that any instruction preceding the store
-will be executed before the load or any following instructions, and
-the store will be executed before any instruction following the load.
-
 
 AND THEN THERE WAS ALPHA
 ------------------------
@@ -1766,6 +1745,147 @@ before it does, and the critical section in P2 both starts after P1's
 grace period does and ends after it does.
 
 
+LOCKING
+-------
+
+The LKMM includes locking. In fact, there is special code for locking
+in the formal model, added in order to make tools run faster.
+However, this special code is intended to be more or less equivalent
+to concepts we have already covered. A spinlock_t variable is treated
+the same as an int, and spin_lock(&s) is treated almost the same as:
+
+	while (cmpxchg_acquire(&s, 0, 1) != 0)
+		cpu_relax();
+
+This waits until s is equal to 0 and then atomically sets it to 1,
+and the read part of the cmpxchg operation acts as an acquire fence.
+An alternate way to express the same thing would be:
+
+	r = xchg_acquire(&s, 1);
+
+along with a requirement that at the end, r = 0. Similarly,
+spin_trylock(&s) is treated almost the same as:
+
+	return !cmpxchg_acquire(&s, 0, 1);
+
+which atomically sets s to 1 if it is currently equal to 0 and returns
+true if it succeeds (the read part of the cmpxchg operation acts as an
+acquire fence only if the operation is successful). spin_unlock(&s)
+is treated almost the same as:
+
+	smp_store_release(&s, 0);
+
+The "almost" qualifiers above need some explanation. In the LKMM, the
+store-release in a spin_unlock() and the load-acquire which forms the
+first half of the atomic rmw update in a spin_lock() or a successful
+spin_trylock() -- we can call these things lock-releases and
+lock-acquires -- have two properties beyond those of ordinary releases
+and acquires.
+
+First, when a lock-acquire reads from a lock-release, the LKMM
+requires that every instruction po-before the lock-release must
+execute before any instruction po-after the lock-acquire. This would
+naturally hold if the release and acquire operations were on different
+CPUs, but the LKMM says it holds even when they are on the same CPU.
+For example:
+
+	int x, y;
+	spinlock_t s;
+
+	P0()
+	{
+		int r1, r2;
+
+		spin_lock(&s);
+		r1 = READ_ONCE(x);
+		spin_unlock(&s);
+		spin_lock(&s);
+		r2 = READ_ONCE(y);
+		spin_unlock(&s);
+	}
+
+	P1()
+	{
+		WRITE_ONCE(y, 1);
+		smp_wmb();
+		WRITE_ONCE(x, 1);
+	}
+
+Here the second spin_lock() reads from the first spin_unlock(), and
+therefore the load of x must execute before the load of y. Thus we
+cannot have r1 = 1 and r2 = 0 at the end (this is an instance of the
+MP pattern).
+
+This requirement does not apply to ordinary release and acquire
+fences, only to lock-related operations. For instance, suppose P0()
+in the example had been written as:
+
+	P0()
+	{
+		int r1, r2, r3;
+
+		r1 = READ_ONCE(x);
+		smp_store_release(&s, 1);
+		r3 = smp_load_acquire(&s);
+		r2 = READ_ONCE(y);
+	}
+
+Then the CPU would be allowed to forward the s = 1 value from the
+smp_store_release() to the smp_load_acquire(), executing the
+instructions in the following order:
+
+	r3 = smp_load_acquire(&s);	// Obtains r3 = 1
+	r2 = READ_ONCE(y);
+	r1 = READ_ONCE(x);
+	smp_store_release(&s, 1);	// Value is forwarded
+
+and thus it could load y before x, obtaining r2 = 0 and r1 = 1.
+
+Second, when a lock-acquire reads from a lock-release, and some other
+stores W and W' occur po-before the lock-release and po-after the
+lock-acquire respectively, the LKMM requires that W must propagate to
+each CPU before W' does. For example, consider:
+
+	int x, y;
+	spinlock_t s;
+
+	P0()
+	{
+		spin_lock(&s);
+		WRITE_ONCE(x, 1);
+		spin_unlock(&s);
+	}
+
+	P1()
+	{
+		int r1;
+
+		spin_lock(&s);
+		r1 = READ_ONCE(x);
+		WRITE_ONCE(y, 1);
+		spin_unlock(&s);
+	}
+
+	P2()
+	{
+		int r2, r3;
+
+		r2 = READ_ONCE(y);
+		smp_rmb();
+		r3 = READ_ONCE(x);
+	}
+
+If r1 = 1 at the end then the spin_lock() in P1 must have read from
+the spin_unlock() in P0. Hence the store to x must propagate to P2
+before the store to y does, so we cannot have r2 = 1 and r3 = 0.
+
+These two special requirements for lock-release and lock-acquire do
+not arise from the operational model. Nevertheless, kernel developers
+have come to expect and rely on them because they do hold on all
+architectures supported by the Linux kernel, albeit for various
+differing reasons.
+
+
 ODDS AND ENDS
 -------------
 
@@ -1831,26 +1951,6 @@ they behave as follows:
 	events and the events preceding them against all po-later
 	events.
 
-The LKMM includes locking. In fact, there is special code for locking
-in the formal model, added in order to make tools run faster.
-However, this special code is intended to be exactly equivalent to
-concepts we have already covered. A spinlock_t variable is treated
-the same as an int, and spin_lock(&s) is treated the same as:
-
-	while (cmpxchg_acquire(&s, 0, 1) != 0)
-		cpu_relax();
-
-which waits until s is equal to 0 and then atomically sets it to 1,
-and where the read part of the atomic update is also an acquire fence.
-An alternate way to express the same thing would be:
-
-	r = xchg_acquire(&s, 1);
-
-along with a requirement that at the end, r = 0. spin_unlock(&s) is
-treated the same as:
-
-	smp_store_release(&s, 0);
-
 Interestingly, RCU and locking each introduce the possibility of
 deadlock. When faced with code sequences such as:
 
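The contrast that the new LOCKING section draws between lock operations and
ordinary release/acquire can also be expressed as a litmus test. The
following sketch (its name and shape are illustrative, not part of the
patch) encodes the second P0() variant from the text above; with
rfi-rel-acq removed from the model, the outcome in the exists clause is
expected to be allowed, because the store-release may be forwarded to the
load-acquire.

C MP-rel-acq-same-cpu

(*
 * Illustrative sketch, not part of the patch: the release/acquire
 * pair on s sits on a single CPU, so after this change it no longer
 * orders the read of x before the read of y, and r1 = 1 together
 * with r2 = 0 should be observable (Sometimes).
 *)

{}

P0(int *x, int *y, int *s)
{
	int r1;
	int r2;
	int r3;

	r1 = READ_ONCE(*x);
	smp_store_release(s, 1);
	r3 = smp_load_acquire(s);
	r2 = READ_ONCE(*y);
}

P1(int *x, int *y)
{
	WRITE_ONCE(*y, 1);
	smp_wmb();
	WRITE_ONCE(*x, 1);
}

exists (0:r1=1 /\ 0:r2=0)

Replacing the release/acquire pair with a spin_unlock()/spin_lock() on a
spinlock_t turns this into the first P0() variant from the text, for which
the same outcome should be forbidden.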
diff --git a/tools/memory-model/linux-kernel.cat b/tools/memory-model/linux-kernel.cat
index 59b5cbe6b624..882fc33274ac 100644
--- a/tools/memory-model/linux-kernel.cat
+++ b/tools/memory-model/linux-kernel.cat
@@ -38,7 +38,7 @@ let strong-fence = mb | gp
 (* Release Acquire *)
 let acq-po = [Acquire] ; po ; [M]
 let po-rel = [M] ; po ; [Release]
-let rfi-rel-acq = [Release] ; rfi ; [Acquire]
+let po-unlock-rf-lock-po = po ; [UL] ; rf ; [LKR] ; po
 
 (**********************************)
 (* Fundamental coherence ordering *)
@@ -60,13 +60,13 @@ let dep = addr | data
 let rwdep = (dep | ctrl) ; [W]
 let overwrite = co | fr
 let to-w = rwdep | (overwrite & int)
-let to-r = addr | (dep ; rfi) | rfi-rel-acq
+let to-r = addr | (dep ; rfi)
 let fence = strong-fence | wmb | po-rel | rmb | acq-po
-let ppo = to-r | to-w | fence
+let ppo = to-r | to-w | fence | (po-unlock-rf-lock-po & int)
 
 (* Propagation: Ordering from release operations and strong fences. *)
 let A-cumul(r) = rfe? ; r
-let cumul-fence = A-cumul(strong-fence | po-rel) | wmb
+let cumul-fence = A-cumul(strong-fence | po-rel) | wmb | po-unlock-rf-lock-po
 let prop = (overwrite & ext)? ; cumul-fence* ; rfe?
 
 (*
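To see the new (po-unlock-rf-lock-po & int) term of ppo in isolation, the
read-read example from the updated explanation.txt can be written as a
litmus test. Again, the name and shape below are an illustrative sketch
rather than part of the patch; the second spin_lock() in P0() reads from
the first spin_unlock(), so the exists clause should never be satisfied.

C MP-reads-unlock-lock

(*
 * Illustrative sketch, not part of the patch: the same-CPU
 * unlock-to-lock sequence in P0() now contributes to ppo, ordering
 * the read of x before the read of y, so r1 = 1 together with
 * r2 = 0 should be forbidden (Never).
 *)

{}

P0(int *x, int *y, spinlock_t *s)
{
	int r1;
	int r2;

	spin_lock(s);
	r1 = READ_ONCE(*x);
	spin_unlock(s);
	spin_lock(s);
	r2 = READ_ONCE(*y);
	spin_unlock(s);
}

P1(int *x, int *y)
{
	WRITE_ONCE(*y, 1);
	smp_wmb();
	WRITE_ONCE(*x, 1);
}

exists (0:r1=1 /\ 0:r2=0)

Swapping the lock operations for smp_store_release()/smp_load_acquire() on
an ordinary int would depend only on the removed rfi-rel-acq term, so the
same outcome would then become reachable.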
diff --git a/tools/memory-model/litmus-tests/ISA2+pooncelock+pooncelock+pombonce.litmus b/tools/memory-model/litmus-tests/ISA2+pooncelock+pooncelock+pombonce.litmus
index 0f749e419b34..094d58df7789 100644
--- a/tools/memory-model/litmus-tests/ISA2+pooncelock+pooncelock+pombonce.litmus
+++ b/tools/memory-model/litmus-tests/ISA2+pooncelock+pooncelock+pombonce.litmus
@@ -1,11 +1,10 @@
 C ISA2+pooncelock+pooncelock+pombonce
 
 (*
- * Result: Sometimes
+ * Result: Never
  *
- * This test shows that the ordering provided by a lock-protected S
- * litmus test (P0() and P1()) are not visible to external process P2().
- * This is likely to change soon.
+ * This test shows that write-write ordering provided by locks
+ * (in P0() and P1()) is visible to external process P2().
  *)
 
 {}