author    Benjamin Herrenschmidt <benh@kernel.crashing.org>    2017-02-03 01:10:28 -0500
committer Michael Ellerman <mpe@ellerman.id.au>                2017-02-08 07:36:29 -0500
commit    d7df2443cd5f67fc6ee7c05a88e4996e8177f91b (patch)
tree      098a7c0ca4fceb8a65cb1f693c9d71990388933d
parent    a0615a16f7d0ceb5804d295203c302d496d8ee91 (diff)
powerpc/mm: Fix spurious segfaults on radix with autonuma
When autonuma (Automatic NUMA balancing) marks a PTE inaccessible, it clears all of the protection bits but leaves the PTE valid.

With the Radix MMU, an attempt at executing from such a PTE takes a fault with bit 35 of SRR1 set ("SRR1_ISI_N_OR_G"). It is thus incorrect to treat all such faults as errors: we should pass them to handle_mm_fault() so that autonuma can deal with them. The case of pages that are genuinely not executable is handled by the existing test for VM_EXEC further down.

That leaves us with catching kernel attempts at executing user pages. We can catch those earlier, even before we do find_vma(): it is never valid on powerpc for the kernel to take an exec fault in the first place. So fold that test into the existing test for the kernel faulting on kernel addresses, to bail out early.

Fixes: 1d18ad026844 ("powerpc/mm: Detect instruction fetch denied and report")
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
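The following is a minimal, self-contained sketch of the fault disposition the commit message describes; it is not the kernel source, and the helper name classify_fault(), the enum, and the TASK_SIZE value are all invented here for illustration. Kernel-mode exec faults and kernel-mode faults on kernel addresses are rejected before any VMA lookup, while everything else, including the SRR1_ISI_N_OR_G faults raised by autonuma's inaccessible-but-valid PTEs, is left to the VMA checks and handle_mm_fault():

#include <stdbool.h>

/* Illustrative boundary only; the real TASK_SIZE is a per-arch constant. */
#define TASK_SIZE 0x0000400000000000UL

enum fault_action { FAULT_SIGSEGV, FAULT_HANDLE_MM };

/*
 * Sketch of the check the patch folds into the top of do_page_fault():
 * the kernel should never take an execute fault, nor a page fault on a
 * kernel address, so both are fatal before find_vma() is ever called.
 * User-mode exec faults, such as those caused by autonuma's
 * PROT_NONE-style PTEs on Radix, fall through to handle_mm_fault(),
 * where the VM_EXEC test still rejects truly non-executable mappings.
 */
static enum fault_action classify_fault(bool user_mode, bool is_exec,
                                        unsigned long address)
{
        if (!user_mode && (is_exec || address >= TASK_SIZE))
                return FAULT_SIGSEGV;
        return FAULT_HANDLE_MM;
}

Folding is_exec into the pre-existing kernel-address test is what allows the SRR1_ISI_N_OR_G special case in the second hunk below to be deleted: once kernel exec faults are filtered out up front, any exec fault that remains belongs to a user mapping, and the VM_EXEC check is sufficient.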
-rw-r--r--  arch/powerpc/mm/fault.c  21
1 file changed, 5 insertions(+), 16 deletions(-)
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 6fd30ac7d14a..62a50d6d1053 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -253,8 +253,11 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
 	if (unlikely(debugger_fault_handler(regs)))
 		goto bail;
 
-	/* On a kernel SLB miss we can only check for a valid exception entry */
-	if (!user_mode(regs) && (address >= TASK_SIZE)) {
+	/*
+	 * The kernel should never take an execute fault nor should it
+	 * take a page fault to a kernel address.
+	 */
+	if (!user_mode(regs) && (is_exec || (address >= TASK_SIZE))) {
 		rc = SIGSEGV;
 		goto bail;
 	}
@@ -391,20 +394,6 @@ good_area:
 
 	if (is_exec) {
 		/*
-		 * An execution fault + no execute ?
-		 *
-		 * On CPUs that don't have CPU_FTR_COHERENT_ICACHE we
-		 * deliberately create NX mappings, and use the fault to do the
-		 * cache flush. This is usually handled in hash_page_do_lazy_icache()
-		 * but we could end up here if that races with a concurrent PTE
-		 * update. In that case we need to fall through here to the VMA
-		 * check below.
-		 */
-		if (cpu_has_feature(CPU_FTR_COHERENT_ICACHE) &&
-		    (regs->msr & SRR1_ISI_N_OR_G))
-			goto bad_area;
-
-		/*
 		 * Allow execution from readable areas if the MMU does not
 		 * provide separate controls over reading and executing.
 		 *