author     Ingo Molnar <mingo@elte.hu>  2008-07-11 13:41:19 -0400
committer  Ingo Molnar <mingo@elte.hu>  2008-07-11 13:51:47 -0400
commit     d9fc3fd3fab186447b5d2e7db3c2ee149064cc7c (patch)
tree       cc7a450b7ebdd7c745ac8ad7a3c5b01494b40a53  /include/asm-x86/system.h
parent     b6ad92d4faade38619e89acc509ca1416b81a0bd (diff)
x86: fix savesegment() bug causing crashes on 64-bit
I spent a fair amount of time chasing a 64-bit bootup crash that
manifested itself as bootup segfaults:

  S10network[1825]: segfault at 7f3e2b5d16b8 ip 00000031108748c9 sp 00007fffb9c14c70 error 4 in libc-2.7.so[3110800000+14d000]

eventually causing init to die and panic the system:

  Kernel panic - not syncing: Attempted to kill init!
  Pid: 1, comm: init Not tainted 2.6.26-rc9-tip #13878

After a marathon bisection session, the bad commit turned out to be:

| b7675791859075418199c7af86a116ea34eaf5bd is first bad commit
| commit b7675791859075418199c7af86a116ea34eaf5bd
| Author: Jeremy Fitzhardinge <jeremy@goop.org>
| Date:   Wed Jun 25 00:19:00 2008 -0400
|
|     x86: remove open-coded save/load segment operations
|
|     This removes a pile of buggy open-coded implementations of savesegment
|     and loadsegment.

After some more bisection of this patch itself, it turns out that what
makes the difference are the savesegment() changes to __switch_to().

Taking a look at this portion of arch/x86/kernel/process_64.o revealed
this crucial difference:

| good:  99c:  8c e0     mov %fs,%eax
|        99e:  89 45 cc  mov %eax,-0x34(%rbp)
|
| bad:   99c:  8c 65 cc  mov %fs,-0x34(%rbp)

which is due to:

|        unsigned fsindex;
| -      asm volatile("movl %%fs,%0" : "=r" (fsindex));
| +      savesegment(fs, fsindex);

savesegment() is implemented as:

  #define savesegment(seg, value) \
          asm("mov %%" #seg ",%0":"=rm" (value) : : "memory")

Note the "m" modifier: it allows GCC to generate the segment move into a
memory operand as well. But there is a subtle detail in the x86
instruction set regarding segment operands: the 16-bit move above is
zero-extended, but only if the destination is a register. If the
destination is a memory operand, -0x34(%rbp) in the case above, there is
no zero-extension to 32 bits and the instruction saves only 16 bits
instead of the intended 32. The other 16 bits are random data, which can
cause problems when that value is used later on.

The solution is to only allow segment operands to go to registers. This
fix allows my test system to boot up without crashing.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
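For illustration only (not part of the commit): a minimal user-space
sketch, assuming a GCC-compatible toolchain on x86-64, of the behaviour
the fixed "=r"-only constraint relies on. The program name and variable
names below are hypothetical; only the savesegment() macro body mirrors
the patched header.

  /*
   * User-space sketch, not kernel code. With "=r" the compiler must
   * place the destination in a register, so the 16-bit selector read
   * is zero-extended and the full 32-bit variable is overwritten; a
   * poison value cannot survive in the upper half.
   */
  #include <stdio.h>

  #define savesegment(seg, value) \
          asm("mov %%" #seg ",%0" : "=r" (value) : : "memory")

  int main(void)
  {
          unsigned cs_sel = 0xdeadbeef;   /* poison to make a partial write visible */

          savesegment(cs, cs_sel);        /* e.g. mov %cs,%eax: upper 16 bits become zero */

          printf("cs selector: 0x%x\n", cs_sel);
          return 0;
  }

With the old "=rm" constraint, GCC would have been free to pick the
stack slot of cs_sel as the operand instead, and only the low 16 bits
would have been written, which is exactly the bug seen in __switch_to().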
Diffstat (limited to 'include/asm-x86/system.h')
-rw-r--r--  include/asm-x86/system.h  |  2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/asm-x86/system.h b/include/asm-x86/system.h
index c4946c5964bf..983ce37c491f 100644
--- a/include/asm-x86/system.h
+++ b/include/asm-x86/system.h
@@ -160,7 +160,7 @@ extern void native_load_gs_index(unsigned);
  * Save a segment register away
  */
 #define savesegment(seg, value) \
-	asm("mov %%" #seg ",%0":"=rm" (value) : : "memory")
+	asm("mov %%" #seg ",%0":"=r" (value) : : "memory")

 static inline unsigned long get_limit(unsigned long segment)
 {