author     Steven Rostedt <rostedt@goodmis.org>     2008-11-03 23:15:57 -0500
committer  Ingo Molnar <mingo@elte.hu>              2008-11-04 04:09:50 -0500
commit     b2a866f9344cb30d7ddf5d67b5b8393daf8bef07 (patch)
tree       5265b5bcc3e6daca52f2951a85394bbc58f789fd /kernel/trace/trace.h
parent     182e9f5f704ed6b9175142fe8da33c9ce0c52b52 (diff)
ftrace: function tracer with irqs disabled
Impact: disable interrupts during trace entry creation (as opposed to preempt)
To help with performance, I set the function tracer to disable only
preemption, not interrupts. If an interrupt occurred during tracing, it
would not be traced, because the function tracer protects itself from
recursion. This may be faster, but the trace output might miss some entries.
This patch makes the function tracer disable interrupts, but it also
adds a runtime option to disable preemption instead. It does this by
providing two different tracer functions. When the function tracer is
enabled, it checks which version is requested (irqs disabled or
preemption disabled) and installs the corresponding function as the
tracer.
Irq disabling is the default behaviour, but if the user wants better
performance, with the chance of missing traces, then they can choose
the preempt disabled version.
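The mechanism is small enough to sketch. The following is a minimal,
illustrative version, assuming simplified names and signatures; the real
code lives in kernel/trace/trace_functions.c and differs in detail:

	/*
	 * Sketch only: trace_function(), trace_ops and trace_flags stand in
	 * for the real symbols.  The irq-disabled variant masks interrupts
	 * around the ring-buffer write, so functions called from interrupt
	 * handlers are traced too.  The preempt-only variant leaves irqs
	 * enabled; an interrupt arriving mid-trace hits the recursion guard
	 * and its functions are dropped from the trace.
	 */
	static void function_trace_call_irqsoff(unsigned long ip,
						unsigned long parent_ip)
	{
		unsigned long flags;

		local_irq_save(flags);
		trace_function(ip, parent_ip, flags);	/* record entry */
		local_irq_restore(flags);
	}

	static void function_trace_call_preempt_only(unsigned long ip,
						     unsigned long parent_ip)
	{
		preempt_disable_notrace();
		trace_function(ip, parent_ip, 0);	/* faster, but lossy */
		preempt_enable_notrace();
	}

	/* On enable, install whichever variant the runtime flag requests. */
	static void tracing_start_function_trace(void)
	{
		if (trace_flags & TRACE_ITER_PREEMPTONLY)
			trace_ops.func = function_trace_call_preempt_only;
		else
			trace_ops.func = function_trace_call_irqsoff;

		register_ftrace_function(&trace_ops);
	}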
Running hackbench three times with the irqs-disabled tracer and three
times with the preempt-disabled tracer yielded:

  tracing type        time (s)   entries recorded
  ------------        --------   ----------------
  irq disabled          43.393         166433066
                        43.282         166172618
                        43.298         166256704

  preempt disabled      38.969         159871710
                        38.943         159972935
                        39.325         161056510
  Average:
  irq disabled          43.324         166287462
  preempt disabled      39.079         160300385
preempt disabled is about 10.8 percent faster than irqs disabled:
(43.324 - 39.079) / 39.079 ≈ 10.8%.
I wrote a patch to count function-tracer recursion and reran hackbench.

With irqs disabled: the function tracer declined to trace 1,150 times
due to recursion.
With preempt disabled: 5,117,718 times.

The roughly one thousand recursions with irqs disabled could be due to
NMIs, or simply to calls into functions not protected by notrace. But we
also see that a large amount of the trace is lost with the preempt
version.
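For reference, the guard that produces these drop counts looks roughly
like this. It is a sketch: data->disabled is the real per-CPU nesting
counter, while missed_events stands in for the hypothetical counter
added by the counting patch mentioned above:

	/*
	 * Per-CPU recursion guard (sketch).  Only the outermost entry on a
	 * CPU records an event; a tracer re-entered from an interrupt or NMI
	 * sees disabled > 1 and is dropped.
	 */
	long disabled = atomic_inc_return(&data->disabled);

	if (likely(disabled == 1))
		trace_function(ip, parent_ip, flags);	/* record */
	else
		atomic_inc(&missed_events);		/* recursion: dropped */

	atomic_dec(&data->disabled);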
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'kernel/trace/trace.h')
-rw-r--r--	kernel/trace/trace.h	1
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 10c6dae76894..bb547e933af7 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -415,6 +415,7 @@ enum trace_iterator_flags {
 	TRACE_ITER_STACKTRACE	= 0x100,
 	TRACE_ITER_SCHED_TREE	= 0x200,
 	TRACE_ITER_PRINTK	= 0x400,
+	TRACE_ITER_PREEMPTONLY	= 0x800,
 };
 
 extern struct tracer nop_trace;
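A trace_flags bit only becomes user-visible once it has a matching entry
in the trace_options[] string table in kernel/trace/trace.c, which this
diffstat view (limited to trace.h) does not show. A sketch of that
wiring follows; the option name is an assumption, not taken from the
diff above:

	/*
	 * Sketch (kernel/trace/trace.c): the string's index in this table
	 * must match its flag's bit position.  The name below is assumed,
	 * not confirmed by the diff shown above.
	 */
	static const char *trace_options[] = {
		/* ... earlier entries for bits 0x1 through 0x400 ... */
		"ftrace_preempt",	/* TRACE_ITER_PREEMPTONLY = 0x800 */
		NULL
	};

The user would then select the faster, lossier mode by echoing that
option name into the trace_options file in the tracing debugfs
directory, wherever debugfs is mounted.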