author     Rik van Riel <riel@redhat.com>    2012-11-06 04:55:18 -0500
committer  Mel Gorman <mgorman@suse.de>      2012-12-11 09:28:33 -0500
commit     e4a1cc56e4d728eb87072c71c07581524e5160b1
tree       291232b64431eeb2c815adc38b20d66cb3355364
parent     0f9a921cf9bf3b524feddc484e2b4d070b7ca0d0
x86: mm: drop TLB flush from ptep_set_access_flags
Intel has an architectural guarantee that the TLB entry causing
a page fault gets invalidated automatically. This means
we should be able to drop the local TLB invalidation.
Because of the way other areas of the page fault code work,
chances are good that all x86 CPUs do this. However, if
someone somewhere has an x86 CPU that does not invalidate
the TLB entry causing a page fault, this one-liner should
be easy to revert.
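Should such a CPU ever turn up, the revert amounts to re-adding the removed call inside the branch touched by the hunk below; a minimal sketch of the reverted branch, built only from lines visible in this patch:

	if (changed && dirty) {
		*ptep = entry;
		pte_update_defer(vma->vm_mm, address, ptep);
		__flush_tlb_one(address);	/* local flush of the faulting address, restored */
	}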
Signed-off-by: Rik van Riel <riel@redhat.com>
Cc: Linus Torvalds <torvalds@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michel Lespinasse <walken@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
-rw-r--r--  arch/x86/mm/pgtable.c | 1 -
1 file changed, 0 insertions(+), 1 deletion(-)
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index be3bb4690887..7353de3d98a7 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -317,7 +317,6 @@ int ptep_set_access_flags(struct vm_area_struct *vma,
 	if (changed && dirty) {
 		*ptep = entry;
 		pte_update_defer(vma->vm_mm, address, ptep);
-		__flush_tlb_one(address);
 	}
 
 	return changed;
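For context, a sketch of how ptep_set_access_flags() in arch/x86/mm/pgtable.c reads once the hunk is applied. The if-branch and the return statement come straight from the hunk, and the first line of the signature from the hunk header; the remaining parameters and the pte_same() comparison that computes `changed` are filled in from the surrounding code of that era and should be treated as assumptions rather than part of this diff:

int ptep_set_access_flags(struct vm_area_struct *vma,
			  unsigned long address, pte_t *ptep,
			  pte_t entry, int dirty)
{
	/* assumed: 'changed' reflects whether the new flags differ from *ptep */
	int changed = !pte_same(*ptep, entry);

	if (changed && dirty) {
		*ptep = entry;					/* install the updated flags */
		pte_update_defer(vma->vm_mm, address, ptep);	/* page-table update hook (paravirt) */
		/* no __flush_tlb_one(): the TLB entry that caused the fault is
		 * expected to be invalidated by the CPU itself */
	}

	return changed;
}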