author	Andy Lutomirski <luto@kernel.org>	2016-04-26 15:23:30 -0400
committer	Ingo Molnar <mingo@kernel.org>	2016-04-29 05:56:42 -0400
commit	c9867f863e19dd07c6085b457f6775047924c3b5 (patch)
tree	253fd80b27efbbf88bfb7275baca19c7cdafc73d
parent	296f781a4b7801ad9c1c0219f9e87b6c25e196fe (diff)
x86/tls: Synchronize segment registers in set_thread_area()
The current behavior of set_thread_area() when it modifies a segment that is
currently loaded is a bit confused.

If CS [1] or SS is modified, the change will take effect on return to
userspace because CS and SS are fundamentally always reloaded on return to
userspace.

Similarly, on 32-bit kernels, if DS, ES, FS, or (depending on configuration)
GS refers to a modified segment, the change will take effect immediately on
return to user mode because the entry code reloads these registers.

If set_thread_area() modifies DS, ES [2], FS, or GS on 64-bit kernels or GS
on 32-bit lazy-GS [3] kernels, however, the segment registers will be left
alone until something (most likely a context switch) causes them to be
reloaded.  This means that the behavior visible to user space is
inconsistent.

If set_thread_area() is implicitly called via CLONE_SETTLS, then all segment
registers will be reloaded before the thread starts because CLONE_SETTLS
happens before the initial context switch into the newly created thread.

Empirically, glibc requires the immediate reload on CLONE_SETTLS -- 32-bit
glibc on my system does *not* manually reload GS when creating a new thread.

Before enabling FSGSBASE, we need to figure out what the behavior will be,
as FSGSBASE requires that we reconsider our behavior when, e.g., GS and
GSBASE are out of sync in user mode.  Given that we must preserve the
existing behavior of CLONE_SETTLS, it makes sense to me that we simply
extend similar behavior to all invocations of set_thread_area().

This patch explicitly updates any segment register referring to a segment
that is targeted by set_thread_area().  If set_thread_area() deletes the
segment, then the segment register will be nulled out.

[1] This can't actually happen since 0e58af4e1d21 ("x86/tls: Disallow
    unusual TLS segments") but, if it did, this is how it would behave.

[2] I strongly doubt that any existing non-malicious program loads a TLS
    segment into DS or ES on a 64-bit kernel because the context switch
    code was badly broken until recently, but that's not an excuse to
    leave the current code alone.

[3] One way or another, that config option should go away.  Yuck!

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/27d119b0d396e9b82009e40dff8333a249038225.1461698311.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-rw-r--r--	arch/x86/kernel/tls.c	42
1 file changed, 42 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kernel/tls.c b/arch/x86/kernel/tls.c
index 7fc5e843f247..9692a5e9fdab 100644
--- a/arch/x86/kernel/tls.c
+++ b/arch/x86/kernel/tls.c
@@ -114,6 +114,7 @@ int do_set_thread_area(struct task_struct *p, int idx,
 		       int can_allocate)
 {
 	struct user_desc info;
+	unsigned short __maybe_unused sel, modified_sel;
 
 	if (copy_from_user(&info, u_info, sizeof(info)))
 		return -EFAULT;
@@ -141,6 +142,47 @@ int do_set_thread_area(struct task_struct *p, int idx,
 
 	set_tls_desc(p, idx, &info, 1);
 
+	/*
+	 * If DS, ES, FS, or GS points to the modified segment, forcibly
+	 * refresh it.  Only needed on x86_64 because x86_32 reloads them
+	 * on return to user mode.
+	 */
+	modified_sel = (idx << 3) | 3;
+
+	if (p == current) {
+#ifdef CONFIG_X86_64
+		savesegment(ds, sel);
+		if (sel == modified_sel)
+			loadsegment(ds, sel);
+
+		savesegment(es, sel);
+		if (sel == modified_sel)
+			loadsegment(es, sel);
+
+		savesegment(fs, sel);
+		if (sel == modified_sel)
+			loadsegment(fs, sel);
+
+		savesegment(gs, sel);
+		if (sel == modified_sel)
+			load_gs_index(sel);
+#endif
+
+#ifdef CONFIG_X86_32_LAZY_GS
+		savesegment(gs, sel);
+		if (sel == modified_sel)
+			loadsegment(gs, sel);
+#endif
+	} else {
+#ifdef CONFIG_X86_64
+		if (p->thread.fsindex == modified_sel)
+			p->thread.fsbase = info.base_addr;
+
+		if (p->thread.gsindex == modified_sel)
+			p->thread.gsbase = info.base_addr;
+#endif
+	}
+
 	return 0;
 }
 