| field | value | |
|---|---|---|
| author | Peter Zijlstra <peterz@infradead.org> | 2013-11-14 10:23:04 -0500 |
| committer | Ingo Molnar <mingo@kernel.org> | 2013-11-19 10:57:40 -0500 |
| commit | d5b5f391d434c5cc8bcb1ab2d759738797b85f52 (patch) | |
| tree | 9be9680fd08dd943cac38b278dde12d83b4a9856 /arch/x86/include | |
| parent | 801a76050bcf8d4e500eb8d048ff6265f37a61c8 (diff) | |
ftrace, perf: Avoid infinite event generation loop
Vince's perf-trinity fuzzer found yet another 'interesting' problem.
When we sample the irq_work_exit tracepoint with period==1 (or
PERF_SAMPLE_PERIOD) and we add an fasync SIGNAL handler we create an
infinite event generation loop:
```
,-> <IPI>
|     irq_work_exit() ->
|       trace_irq_work_exit() ->
|         ...
|           __perf_event_overflow() -> (due to fasync)
|             irq_work_queue() -> (irq_work_list must be empty)
'---------       arch_irq_work_raise()
```
Similar things can happen due to regular poll() wakeups if we exceed
the ring-buffer wakeup watermark, or have an event_limit.
To avoid this, disallow sampling this particular tracepoint.
In order to achieve this, create a special perf_perm function pointer
for each event and call this (when set) on trying to create a
tracepoint perf event.
[ rostedt: use expr... to allow for ',' in your expression ]
Reported-by: Vince Weaver <vincent.weaver@maine.edu>
Tested-by: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Dave Jones <davej@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Link: http://lkml.kernel.org/r/20131114152304.GC5364@laptop.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'arch/x86/include')

```
 arch/x86/include/asm/trace/irq_vectors.h | 11 +++++++++++
 1 file changed, 11 insertions(+), 0 deletions(-)
```
```diff
diff --git a/arch/x86/include/asm/trace/irq_vectors.h b/arch/x86/include/asm/trace/irq_vectors.h
index 2874df24e7a4..4cab890007a7 100644
--- a/arch/x86/include/asm/trace/irq_vectors.h
+++ b/arch/x86/include/asm/trace/irq_vectors.h
@@ -72,6 +72,17 @@ DEFINE_IRQ_VECTOR_EVENT(x86_platform_ipi);
 DEFINE_IRQ_VECTOR_EVENT(irq_work);
 
 /*
+ * We must dis-allow sampling irq_work_exit() because perf event sampling
+ * itself can cause irq_work, which would lead to an infinite loop;
+ *
+ * 1) irq_work_exit happens
+ * 2) generates perf sample
+ * 3) generates irq_work
+ * 4) goto 1
+ */
+TRACE_EVENT_PERF_PERM(irq_work_exit, is_sampling_event(p_event) ? -EPERM : 0);
+
+/*
  * call_function - called when entering/exiting a call function interrupt
  * vector handler
  */
```
