commit    4645b9fe84bf4878f04c7959a75df7c3c2d1bbb9 (patch)
author    Jérôme Glisse <jglisse@redhat.com>  2017-11-15 20:34:11 -0500
committer Linus Torvalds <torvalds@linux-foundation.org>  2017-11-15 21:21:03 -0500
tree      83fc19e23f907ea229fd8a99aedbbf967057eae4 /mm/migrate.c
parent    0f10851ea475e08896ee5d9a2036d1bb46a8f3a4 (diff)
mm/mmu_notifier: avoid call to invalidate_range() in range_end()
This is an optimization patch that only affects mmu_notifier users which
rely on the invalidate_range() callback. It avoids calling that callback
twice in a row from inside __mmu_notifier_invalidate_range_end().
Existing pattern (before this patch):
  mmu_notifier_invalidate_range_start()
    pte/pmd/pud_clear_flush_notify()
      mmu_notifier_invalidate_range()
  mmu_notifier_invalidate_range_end()
    mmu_notifier_invalidate_range()

New pattern (after this patch):
  mmu_notifier_invalidate_range_start()
    pte/pmd/pud_clear_flush_notify()
      mmu_notifier_invalidate_range()
  mmu_notifier_invalidate_range_only_end()
We call the invalidate_range() callback after clearing the page table,
under the page table lock, and we skip the redundant call to
invalidate_range() inside the __mmu_notifier_invalidate_range_end()
function.

Idea from Andrea Arcangeli.
Link: http://lkml.kernel.org/r/20171017031003.7481-3-jglisse@redhat.com
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Joerg Roedel <jroedel@suse.de>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Alistair Popple <alistair@popple.id.au>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Cc: Nadav Amit <nadav.amit@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/migrate.c')
-rw-r--r--  mm/migrate.c  15
1 file changed, 12 insertions(+), 3 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index 1236449b4777..4d0be47a322a 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2089,7 +2089,11 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
 	set_page_owner_migrate_reason(new_page, MR_NUMA_MISPLACED);
 
 	spin_unlock(ptl);
-	mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
+	/*
+	 * No need to double call mmu_notifier->invalidate_range() callback as
+	 * the above pmdp_huge_clear_flush_notify() did already call it.
+	 */
+	mmu_notifier_invalidate_range_only_end(mm, mmun_start, mmun_end);
 
 	/* Take an "isolate" reference and put new page on the LRU. */
 	get_page(new_page);
@@ -2805,9 +2809,14 @@ static void migrate_vma_pages(struct migrate_vma *migrate)
 			migrate->src[i] &= ~MIGRATE_PFN_MIGRATE;
 	}
 
+	/*
+	 * No need to double call mmu_notifier->invalidate_range() callback as
+	 * the above ptep_clear_flush_notify() inside migrate_vma_insert_page()
+	 * did already call it.
+	 */
 	if (notified)
-		mmu_notifier_invalidate_range_end(mm, mmu_start,
+		mmu_notifier_invalidate_range_only_end(mm, mmu_start,
 				migrate->end);
 }
 
 /*