author	Will Deacon <will.deacon@arm.com>	2014-10-28 16:16:28 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2014-10-28 16:16:28 -0400
commit	ce9ec37bddb633404a0c23e1acb181a264e7f7f2 (patch)
tree	8e00c228efe5282b251e2b21ce02a5f4e535e664
parent	f7e87a44ef60ad379e39b45437604141453bf0ec (diff)
zap_pte_range: update addr when forcing flush after TLB batching failure
When unmapping a range of pages in zap_pte_range, the page being unmapped is added to an mmu_gather_batch structure for asynchronous freeing. If we run out of space in the batch structure before the range has been completely unmapped, then we break out of the loop, force a TLB flush and free the pages that we have batched so far. If there are further pages to unmap, then we resume the loop where we left off.

Unfortunately, we forget to update addr when we break out of the loop, which causes us to truncate the range being invalidated as the end address is exclusive. When we re-enter the loop at the same address, the page has already been freed and the pte_present test will fail, meaning that we do not reconsider the address for invalidation.

This patch fixes the problem by incrementing addr by PAGE_SIZE before breaking out of the loop on batch failure.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
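To see why the missing increment truncates the flush, the following is a minimal userspace sketch of the batching pattern, not kernel code: batch_add() and flush_range() are hypothetical stand-ins for __tlb_remove_page() and the forced TLB flush, and the batch capacity and range size are illustrative.

/* Sketch of the zap_pte_range batching pattern and its off-by-one. */
#include <stdio.h>
#include <stdbool.h>

#define PAGE_SIZE 4096UL

/* Batch a page for later freeing; return false once the batch is full,
 * mirroring __tlb_remove_page()'s "no more room" result.  The page at
 * the current address *is* still batched in that case. */
static bool batch_add(int *batched, int capacity)
{
	(*batched)++;
	return *batched < capacity;
}

/* The flush invalidates [start, end) -- the end address is exclusive. */
static void flush_range(unsigned long start, unsigned long end)
{
	printf("flushing [%#lx, %#lx)\n", start, end);
}

int main(void)
{
	unsigned long start = 0, end = 4 * PAGE_SIZE;
	unsigned long addr;
	int batched = 0;

	for (addr = start; addr < end; addr += PAGE_SIZE) {
		if (!batch_add(&batched, 2)) {
			/* The page at addr was batched, but the loop update
			 * never runs after break.  Without this increment the
			 * exclusive end passed to the flush stops short of
			 * that page, so its mapping is never invalidated. */
			addr += PAGE_SIZE;
			break;
		}
	}
	flush_range(start, addr);
	return 0;
}

With the increment, the flush covers both batched pages; if it is removed, the program reports a range one page too short, which is exactly the truncation the commit message describes.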
-rw-r--r--	mm/memory.c	1
1 file changed, 1 insertion, 0 deletions
diff --git a/mm/memory.c b/mm/memory.c
index 1cc6bfbd872e..3e503831e042 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1147,6 +1147,7 @@ again:
 				print_bad_pte(vma, addr, ptent, page);
 			if (unlikely(!__tlb_remove_page(tlb, page))) {
 				force_flush = 1;
+				addr += PAGE_SIZE;
 				break;
 			}
 			continue;