| author | Peter Zijlstra <peterz@infradead.org> | 2014-01-10 15:06:03 -0500 |
|---|---|---|
| committer | Ingo Molnar <mingo@kernel.org> | 2014-01-16 03:19:48 -0500 |
| commit | c026b3591e4f2a4993df773183704bb31634e0bd (patch) | |
| tree | 2115229bf8310f0f66d7d924e01865bee4af6911 /arch | |
| parent | 85ce70fdf48aa290b4845311c2dd815d7f8d1fa5 (diff) | |
x86, mm, perf: Allow recursive faults from interrupts
Waiman managed to trigger a PMI while in an emulate_vsyscall() fault;
the PMI in turn managed to trigger a fault while obtaining a stack
trace. This tripped the sig_on_uaccess_error recursive fault logic
and killed the process.
Fix this by explicitly excluding interrupts from the recursive fault
logic.
Reported-and-Tested-by: Waiman Long <waiman.long@hp.com>
Fixes: e00b12e64be9 ("perf/x86: Further optimize copy_from_user_nmi()")
Cc: Aswin Chandramouleeswaran <aswin@hp.com>
Cc: Scott J Norton <scott.norton@hp.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20140110200603.GJ7572@laptop.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'arch')
| -rw-r--r-- | arch/x86/mm/fault.c | 18 |
1 file changed, 18 insertions, 0 deletions
```diff
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 9ff85bb8dd69..9d591c895803 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -641,6 +641,20 @@ no_context(struct pt_regs *regs, unsigned long error_code,
 
 	/* Are we prepared to handle this kernel fault? */
 	if (fixup_exception(regs)) {
+		/*
+		 * Any interrupt that takes a fault gets the fixup. This makes
+		 * the below recursive fault logic only apply to a faults from
+		 * task context.
+		 */
+		if (in_interrupt())
+			return;
+
+		/*
+		 * Per the above we're !in_interrupt(), aka. task context.
+		 *
+		 * In this case we need to make sure we're not recursively
+		 * faulting through the emulate_vsyscall() logic.
+		 */
 		if (current_thread_info()->sig_on_uaccess_error && signal) {
 			tsk->thread.trap_nr = X86_TRAP_PF;
 			tsk->thread.error_code = error_code | PF_USER;
@@ -649,6 +663,10 @@ no_context(struct pt_regs *regs, unsigned long error_code,
 			/* XXX: hwpoison faults will set the wrong code. */
 			force_sig_info_fault(signal, si_code, address, tsk, 0);
 		}
+
+		/*
+		 * Barring that, we can do the fixup and be happy.
+		 */
 		return;
 	}
 
```
