path: root/arch/powerpc/mm
author	Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>	2013-06-19 02:34:26 -0400
committer	Benjamin Herrenschmidt <benh@kernel.crashing.org>	2013-06-20 01:25:21 -0400
commit	8bbd9f04b7d982d1c6aeb5c08f5983b3d0b9e2fe (patch)
tree	11ffc9d94892c617433e70bd64c977071edec0c2 /arch/powerpc/mm
parent	c0691143dfe1d42ec9bd89de5921ccb6a27ea1b3 (diff)
powerpc: Fix bad pmd error with book3E config
Book3E uses a hugepd at the PMD level and does not encode a pte directly in the pmd entry. The low bits of the pmd are therefore set, so the pmd_bad() check sees them and reports a bad pmd. In fact, the current code never reaches the free_hugepd_range() call at all, because it clears the pmd whenever it finds a hugepd pointer. Fix this by clearing a bad pmd only when it is not a hugepd pointer.

This is a regression introduced by e2b3d202d1dba8f3546ed28224ce485bc50010be "powerpc: Switch 16GB and 16MB explicit hugepages to a different page table format".

Reported-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
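For context, this is roughly how the loop in hugetlb_free_pmd_range() reads once the hunk below is applied. This is a minimal sketch assembled from the diff; the CONFIG_PPC_FSL_BOOK3E handling and the actual hugepd teardown are elided, and the loop closing condition follows the usual pmd-walk pattern rather than being copied from the full source:

	do {
		pmd = pmd_offset(pud, addr);
		next = pmd_addr_end(addr, end);
		if (!is_hugepd(pmd)) {
			/*
			 * Not a hugepd pointer: the entry should already be
			 * clear, so only then does pmd_bad() indicate a
			 * genuine problem.
			 */
			WARN_ON(!pmd_none_or_clear_bad(pmd));
			continue;
		}
		/* ... tear down the hugepd this pmd entry points to ... */
	} while (addr = next, addr != end);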
Diffstat (limited to 'arch/powerpc/mm')
-rw-r--r--	arch/powerpc/mm/hugetlbpage.c	8
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 237c8e5f2640..77fdd2cef33b 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -592,8 +592,14 @@ static void hugetlb_free_pmd_range(struct mmu_gather *tlb, pud_t *pud,
 	do {
 		pmd = pmd_offset(pud, addr);
 		next = pmd_addr_end(addr, end);
-		if (pmd_none_or_clear_bad(pmd))
+		if (!is_hugepd(pmd)) {
+			/*
+			 * if it is not hugepd pointer, we should already find
+			 * it cleared.
+			 */
+			WARN_ON(!pmd_none_or_clear_bad(pmd));
 			continue;
+		}
 #ifdef CONFIG_PPC_FSL_BOOK3E
 		/*
 		 * Increment next by the size of the huge mapping since