path: root/arch/x86
author	Ingo Molnar <mingo@elte.hu>	2007-10-19 06:19:26 -0400
committer	Thomas Gleixner <tglx@linutronix.de>	2007-10-19 06:19:26 -0400
commit	9a24d04a3c26c223f22493492c5c9085b8773d4a (patch)
tree	2c541bdeac1f11973ab4fbde5f7c19024c4fecfe /arch/x86
parent	4fa4d23fa20de67df919030c1216295664866ad7 (diff)
x86: fix global_flush_tlb() bug
While we were reviewing pageattr_32/64.c for unification, Thomas Gleixner
noticed the following serious SMP bug in global_flush_tlb():

	down_read(&init_mm.mmap_sem);
	list_replace_init(&deferred_pages, &l);
	up_read(&init_mm.mmap_sem);

This is SMP-unsafe because list_replace_init() done on two CPUs in
parallel can corrupt the list. This bug was introduced about a year ago
in the 64-bit tree:

	commit ea7322decb974a4a3e804f96a0201e893ff88ce3
	Author: Andi Kleen <ak@suse.de>
	Date:   Thu Dec 7 02:14:05 2006 +0100

	    [PATCH] x86-64: Speed and clean up cache flushing in change_page_attr

		down_read(&init_mm.mmap_sem);
	-	dpage = xchg(&deferred_pages, NULL);
	+	list_replace_init(&deferred_pages, &l);
		up_read(&init_mm.mmap_sem);

The xchg() based version was SMP-safe, but list_replace_init() is not,
so this "cleanup" introduced a nasty bug. Why this bug never became
prominent is a mystery - it can probably be explained by the (still)
relative obscurity of the x86_64 architecture.

The safe fix for now is to write-lock init_mm.mmap_sem.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Diffstat (limited to 'arch/x86')
-rw-r--r--	arch/x86/mm/pageattr_64.c	9
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/arch/x86/mm/pageattr_64.c b/arch/x86/mm/pageattr_64.c
index 8a4f65bf956e..c7b7dfe1d405 100644
--- a/arch/x86/mm/pageattr_64.c
+++ b/arch/x86/mm/pageattr_64.c
@@ -230,9 +230,14 @@ void global_flush_tlb(void)
 	struct page *pg, *next;
 	struct list_head l;
 
-	down_read(&init_mm.mmap_sem);
+	/*
+	 * Write-protect the semaphore, to exclude two contexts
+	 * doing a list_replace_init() call in parallel and to
+	 * exclude new additions to the deferred_pages list:
+	 */
+	down_write(&init_mm.mmap_sem);
 	list_replace_init(&deferred_pages, &l);
-	up_read(&init_mm.mmap_sem);
+	up_write(&init_mm.mmap_sem);
 
 	flush_map(&l);
 