author		Will Deacon <will.deacon@arm.com>	2014-02-07 13:12:32 -0500
committer	Russell King <rmk+kernel@arm.linux.org.uk>	2014-02-10 06:44:50 -0500
commit		7c8746a9eb287642deaad0e7c2cdf482dce5e4be (patch)
tree		d8740bb7222926df4bae9f93a6564839b08dc6c3 /arch
parent		bae0ca2bc550d1ec6a118fb8f2696f18c4da3d8e (diff)
ARM: 7955/1: spinlock: ensure we have a compiler barrier before sev
When unlocking a spinlock, we require the following, strictly ordered
sequence of events:

	<barrier>	/* dmb */
	<unlock>
	<barrier>	/* dsb */
	<sev>

Whilst the code does indeed reflect this in terms of the architecture,
the final <barrier> + <sev> have been contracted into a single inline
asm without a "memory" clobber, therefore the compiler is at liberty to
reorder the unlock to the end of the above sequence. In such a case, a
waiting CPU may be woken up before the lock has been unlocked, leading
to extremely poor performance.

This patch reworks the dsb_sev() function to make use of the dsb()
macro and ensure ordering against the unlock.

Cc: <stable@vger.kernel.org>
Reported-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
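The ordering guarantee comes from the "memory" clobber carried by the
dsb() macro, which the open-coded barrier-plus-SEV asm lacked. A rough
sketch of the ARMv7 definition in arch/arm/include/asm/barrier.h around
this time (shown for illustration only, not part of this patch):

	/* the clobber forbids the compiler from moving memory accesses
	 * across the barrier, including the store that releases the lock */
	#define dsb(option)	__asm__ __volatile__ ("dsb " #option : : : "memory")

Because the clobber tells the compiler that the asm may read or write
arbitrary memory, the store that releases the lock cannot be sunk past
the barrier, restoring the sequence described above.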
Diffstat (limited to 'arch')
-rw-r--r--	arch/arm/include/asm/spinlock.h	15
1 file changed, 3 insertions(+), 12 deletions(-)
diff --git a/arch/arm/include/asm/spinlock.h b/arch/arm/include/asm/spinlock.h
index ef3c6072aa45..ac4bfae26702 100644
--- a/arch/arm/include/asm/spinlock.h
+++ b/arch/arm/include/asm/spinlock.h
@@ -37,18 +37,9 @@
 
 static inline void dsb_sev(void)
 {
-#if __LINUX_ARM_ARCH__ >= 7
-	__asm__ __volatile__ (
-		"dsb ishst\n"
-		SEV
-	);
-#else
-	__asm__ __volatile__ (
-		"mcr p15, 0, %0, c7, c10, 4\n"
-		SEV
-		: : "r" (0)
-	);
-#endif
+
+	dsb(ishst);
+	__asm__(SEV);
 }
 
 /*
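For context, the unlock path that calls dsb_sev() in this header looked
roughly as follows at the time (a sketch of the 3.14-era ticket spinlock
code, reproduced here for illustration rather than taken from this patch):

	static inline void arch_spin_unlock(arch_spinlock_t *lock)
	{
		smp_mb();		/* <barrier>, dmb */
		lock->tickets.owner++;	/* <unlock> */
		dsb_sev();		/* <barrier> + <sev>, now ordered after the unlock */
	}

With dsb_sev() built on the dsb() macro, the owner increment is
guaranteed to be visible before the sev wakes any waiting CPUs.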