author		Andy Lutomirski <luto@kernel.org>	2017-06-29 11:53:14 -0400
committer	Ingo Molnar <mingo@kernel.org>		2017-06-30 04:12:35 -0400
commit		8781fb7e9749da424e01daacd14834b674658c63
tree		881f4617906f67af6cd8e69057e1c157cdad7576
parent		bc0d5a89fbe3c83ac45438d7ba88309f4713615d
x86/mm: Delete a big outdated comment about TLB flushing
The comment describes the old explicit IPI-based flush logic, which
is long gone.
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Nadav Amit <nadav.amit@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/55e44997e56086528140c5180f8337dc53fb7ffc.1498751203.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
 arch/x86/mm/tlb.c | 36 ------------------------------------
 1 file changed, 36 deletions(-)
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 1cc47838d1e8..014d07a80053 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -153,42 +153,6 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 	switch_ldt(real_prev, next);
 }
 
-/*
- * The flush IPI assumes that a thread switch happens in this order:
- * [cpu0: the cpu that switches]
- * 1) switch_mm() either 1a) or 1b)
- * 1a) thread switch to a different mm
- * 1a1) set cpu_tlbstate to TLBSTATE_OK
- *	Now the tlb flush NMI handler flush_tlb_func won't call leave_mm
- *	if cpu0 was in lazy tlb mode.
- * 1a2) update cpu active_mm
- *	Now cpu0 accepts tlb flushes for the new mm.
- * 1a3) cpu_set(cpu, new_mm->cpu_vm_mask);
- *	Now the other cpus will send tlb flush ipis.
- * 1a4) change cr3.
- * 1a5) cpu_clear(cpu, old_mm->cpu_vm_mask);
- *	Stop ipi delivery for the old mm. This is not synchronized with
- *	the other cpus, but flush_tlb_func ignore flush ipis for the wrong
- *	mm, and in the worst case we perform a superfluous tlb flush.
- * 1b) thread switch without mm change
- *	cpu active_mm is correct, cpu0 already handles flush ipis.
- * 1b1) set cpu_tlbstate to TLBSTATE_OK
- * 1b2) test_and_set the cpu bit in cpu_vm_mask.
- *	Atomically set the bit [other cpus will start sending flush ipis],
- *	and test the bit.
- * 1b3) if the bit was 0: leave_mm was called, flush the tlb.
- * 2) switch %%esp, ie current
- *
- * The interrupt must handle 2 special cases:
- * - cr3 is changed before %%esp, ie. it cannot use current->{active_,}mm.
- * - the cpu performs speculative tlb reads, i.e. even if the cpu only
- *   runs in kernel space, the cpu could load tlb entries for user space
- *   pages.
- *
- * The good news is that cpu_tlbstate is local to each cpu, no
- * write/read ordering problems.
- */
-
 static void flush_tlb_func_common(const struct flush_tlb_info *f,
 				  bool local, enum tlb_flush_reason reason)
 {
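For readers who want the deleted text's protocol at a glance, the following is a pseudocode paraphrase of the ordering it described. This is only a sketch of the long-gone IPI-based path, using the names the comment itself used (cpu_tlbstate, TLBSTATE_OK, cpu_vm_mask, leave_mm); it is not buildable against any modern tree and does not correspond to current code, where flush_tlb_func_common() and mm_cpumask()-based flushing have replaced it.

/*
 * Pseudocode paraphrase of the deleted comment's "case 1a" and
 * "case 1b" ordering for the old IPI-based flush. Helper names are
 * the ones the comment used; none of this exists in today's tree.
 */
static void old_ipi_switch_mm(struct mm_struct *old_mm,
			      struct mm_struct *new_mm, int cpu)
{
	if (old_mm != new_mm) {
		/* 1a: thread switch to a different mm */
		cpu_tlbstate = TLBSTATE_OK;	     /* 1a1: flush handler won't
						      * call leave_mm() */
		active_mm = new_mm;		     /* 1a2: accept flushes for
						      * the new mm */
		cpu_set(cpu, new_mm->cpu_vm_mask);   /* 1a3: other cpus start
						      * sending flush IPIs */
		load_cr3(new_mm->pgd);		     /* 1a4 */
		cpu_clear(cpu, old_mm->cpu_vm_mask); /* 1a5: stop IPIs for the
						      * old mm; unsynchronized,
						      * worst case is one
						      * superfluous flush */
	} else {
		/* 1b: thread switch without an mm change */
		cpu_tlbstate = TLBSTATE_OK;	     /* 1b1 */
		/* 1b2: atomically set and test our bit in cpu_vm_mask */
		if (!cpu_test_and_set(cpu, new_mm->cpu_vm_mask))
			local_flush_tlb();	     /* 1b3: bit was 0, so
						      * leave_mm() had run;
						      * the TLB is stale */
	}
}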