authorThomas Gleixner <tglx@linutronix.de>2019-02-18 17:04:01 -0500
committerThomas Gleixner <tglx@linutronix.de>2019-03-06 15:52:13 -0500
commit07f07f55a29cb705e221eda7894dd67ab81ef343 (patch)
treec6df3a8a2ac60c2d319647c2df9e5e1abb03c852
parent650b68a0622f933444a6d66936abb3103029413b (diff)
x86/speculation/mds: Conditionally clear CPU buffers on idle entry
Add a static key which controls the invocation of the CPU buffer clear mechanism on idle entry. This is independent of other MDS mitigations because the idle entry invocation to mitigate the potential leakage due to store buffer repartitioning is only necessary on SMT systems.

Add the actual invocations to the different halt/mwait variants, which covers all usage sites. mwaitx is not patched as it's not available on Intel CPUs.

The buffer clear is only invoked before entering the C-State, to prevent stale data from the idling CPU from being spilled to the Hyper-Thread sibling after the store buffer got repartitioned and all entries became available to the non-idle sibling.

When coming out of idle the store buffer is partitioned again so each sibling has half of it available. The CPU which returned from idle could then be speculatively exposed to contents of the sibling, but the buffers are flushed either on exit to user space or on VMENTER.

When conditional buffer clearing is implemented on top of this later, no action is required either, because before returning to user space the context switch will set the condition flag which causes a flush on the return-to-user path.

Note that the buffer clearing on idle is only sensible on CPUs which are solely affected by MSBDS and not by any other variant of MDS, because the other MDS variants cannot be mitigated when SMT is enabled; there the buffer clearing on idle would be a window dressing exercise.

This intentionally does not handle the case in the acpi/processor_idle driver, which uses the legacy IO port interface for C-State transitions, for two reasons:

 - The acpi/processor_idle driver was replaced by the intel_idle driver almost a decade ago. Anything Nehalem upwards supports the new driver and defaults to it.

 - The legacy IO port interface is likely to be used on older and therefore unaffected CPUs, or on systems which do not receive microcode updates anymore, so there is no point in adding the clear there.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Reviewed-by: Jon Masters <jcm@redhat.com>
Tested-by: Jon Masters <jcm@redhat.com>
-rw-r--r--Documentation/x86/mds.rst42
-rw-r--r--arch/x86/include/asm/irqflags.h4
-rw-r--r--arch/x86/include/asm/mwait.h7
-rw-r--r--arch/x86/include/asm/nospec-branch.h12
-rw-r--r--arch/x86/kernel/cpu/bugs.c3
5 files changed, 68 insertions, 0 deletions
diff --git a/Documentation/x86/mds.rst b/Documentation/x86/mds.rst
index 54d935bf283b..87ce8ac9f36e 100644
--- a/Documentation/x86/mds.rst
+++ b/Documentation/x86/mds.rst
@@ -149,3 +149,45 @@ Mitigation points
 This takes the paranoid exit path only when the INT1 breakpoint is in
 kernel space. #DB on a user space address takes the regular exit path,
 so no extra mitigation required.
+
+
+2. C-State transition
+^^^^^^^^^^^^^^^^^^^^^
+
+   When a CPU goes idle and enters a C-State the CPU buffers need to be
+   cleared on affected CPUs when SMT is active. This addresses the
+   repartitioning of the store buffer when one of the Hyper-Threads enters
+   a C-State.
+
+   When SMT is inactive, i.e. either the CPU does not support it or all
+   sibling threads are offline, CPU buffer clearing is not required.
+
+   The idle clearing is enabled on CPUs which are only affected by MSBDS
+   and not by any other MDS variant. The other MDS variants cannot be
+   protected against cross Hyper-Thread attacks because the Fill Buffer
+   and the Load Ports are shared. So on CPUs affected by other variants,
+   the idle clearing would be a window dressing exercise and is therefore
+   not activated.
+
+   The invocation is controlled by the static key mds_idle_clear which is
+   switched depending on the chosen mitigation mode and the SMT state of
+   the system.
+
+   The buffer clear is only invoked before entering the C-State, to
+   prevent stale data from the idling CPU from spilling to the
+   Hyper-Thread sibling after the store buffer got repartitioned and all
+   entries became available to the non-idle sibling.
+
+   When coming out of idle the store buffer is partitioned again so each
+   sibling has half of it available. The CPU coming back from idle could
+   then be speculatively exposed to contents of the sibling. The buffers
+   are flushed either on exit to user space or on VMENTER so malicious
+   code in user space or the guest cannot speculatively access them.
+
+   The mitigation is hooked into all variants of halt()/mwait(), but does
+   not cover the legacy ACPI IO-Port mechanism because the ACPI idle
+   driver has been superseded by the intel_idle driver around 2010 and is
+   preferred on all affected CPUs which are expected to gain the MD_CLEAR
+   functionality in microcode. Aside from that, the IO-Port mechanism is
+   a legacy interface which is only used on older systems which are
+   either not affected or do not receive microcode updates anymore.
diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
index 058e40fed167..8a0e56e1dcc9 100644
--- a/arch/x86/include/asm/irqflags.h
+++ b/arch/x86/include/asm/irqflags.h
@@ -6,6 +6,8 @@
 
 #ifndef __ASSEMBLY__
 
+#include <asm/nospec-branch.h>
+
 /* Provide __cpuidle; we can't safely include <linux/cpu.h> */
 #define __cpuidle __attribute__((__section__(".cpuidle.text")))
 
@@ -54,11 +56,13 @@ static inline void native_irq_enable(void)
 
 static inline __cpuidle void native_safe_halt(void)
 {
+        mds_idle_clear_cpu_buffers();
         asm volatile("sti; hlt": : :"memory");
 }
 
 static inline __cpuidle void native_halt(void)
 {
+        mds_idle_clear_cpu_buffers();
         asm volatile("hlt": : :"memory");
 }
 
diff --git a/arch/x86/include/asm/mwait.h b/arch/x86/include/asm/mwait.h
index 39a2fb29378a..eb0f80ce8524 100644
--- a/arch/x86/include/asm/mwait.h
+++ b/arch/x86/include/asm/mwait.h
@@ -6,6 +6,7 @@
 #include <linux/sched/idle.h>
 
 #include <asm/cpufeature.h>
+#include <asm/nospec-branch.h>
 
 #define MWAIT_SUBSTATE_MASK		0xf
 #define MWAIT_CSTATE_MASK		0xf
@@ -40,6 +41,8 @@ static inline void __monitorx(const void *eax, unsigned long ecx,
 
 static inline void __mwait(unsigned long eax, unsigned long ecx)
 {
+        mds_idle_clear_cpu_buffers();
+
         /* "mwait %eax, %ecx;" */
         asm volatile(".byte 0x0f, 0x01, 0xc9;"
                      :: "a" (eax), "c" (ecx));
@@ -74,6 +77,8 @@ static inline void __mwait(unsigned long eax, unsigned long ecx)
 static inline void __mwaitx(unsigned long eax, unsigned long ebx,
                             unsigned long ecx)
 {
+        /* No MDS buffer clear as this is AMD/HYGON only */
+
         /* "mwaitx %eax, %ebx, %ecx;" */
         asm volatile(".byte 0x0f, 0x01, 0xfb;"
                      :: "a" (eax), "b" (ebx), "c" (ecx));
@@ -81,6 +86,8 @@ static inline void __mwaitx(unsigned long eax, unsigned long ebx,
 
 static inline void __sti_mwait(unsigned long eax, unsigned long ecx)
 {
+        mds_idle_clear_cpu_buffers();
+
         trace_hardirqs_on();
         /* "mwait %eax, %ecx;" */
         asm volatile("sti; .byte 0x0f, 0x01, 0xc9;"
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 65b747286d96..4e970390110f 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -319,6 +319,7 @@ DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
 DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
 
 DECLARE_STATIC_KEY_FALSE(mds_user_clear);
+DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
 
 #include <asm/segment.h>
 
@@ -356,6 +357,17 @@ static inline void mds_user_clear_cpu_buffers(void)
         mds_clear_cpu_buffers();
 }
 
+/**
+ * mds_idle_clear_cpu_buffers - Mitigation for MDS vulnerability
+ *
+ * Clear CPU buffers if the corresponding static key is enabled
+ */
+static inline void mds_idle_clear_cpu_buffers(void)
+{
+        if (static_branch_likely(&mds_idle_clear))
+                mds_clear_cpu_buffers();
+}
+
 #endif /* __ASSEMBLY__ */
 
 /*
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 29ed8e8dfee2..916995167301 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -66,6 +66,9 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
 /* Control MDS CPU buffer clear before returning to user space */
 DEFINE_STATIC_KEY_FALSE(mds_user_clear);
 EXPORT_SYMBOL_GPL(mds_user_clear);
+/* Control MDS CPU buffer clear before idling (halt, mwait) */
+DEFINE_STATIC_KEY_FALSE(mds_idle_clear);
+EXPORT_SYMBOL_GPL(mds_idle_clear);
 
 void __init check_bugs(void)
 {