| author | Steven Rostedt <srostedt@redhat.com> | 2010-06-03 09:36:50 -0400 |
|---|---|---|
| committer | Steven Rostedt <rostedt@goodmis.org> | 2010-06-03 19:32:38 -0400 |
| commit | 5168ae50a66e3ff7184c2b16d661bd6d70367e50 (patch) | |
| tree | 2fb21fc3bd346e4f589605d940dfb1bacac30bf5 /kernel/trace/trace.h | |
| parent | d1f74e20b5b064a130cd0743a256c2d3cfe84010 (diff) | |
tracing: Remove ftrace_preempt_disable/enable
The ftrace_preempt_disable/enable functions were added to address a
recursive race caused by the function tracer. The function tracer
traces all functions, which makes it easily susceptible to recursion.
One such area was preempt_enable(). This would call the scheduler,
and the scheduler would call the function tracer, and loop.
(Or so it was thought.)
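
Sketched as a call chain, the feared recursion looks like this (an illustration inferred from the message above, not part of the original commit):

```c
/*
 * The feared recursion (later pinned on preempt_schedule(), not schedule()):
 *
 *   some_traced_function()
 *     -> function tracer
 *       -> preempt_enable()         // preempt count drops to zero
 *         -> preempt_schedule()     // NEED_RESCHED was set
 *           -> scheduler internals, themselves traced
 *             -> function tracer    // ... and around again
 */
```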
The ftrace_preempt_disable/enable pair was made to protect against
recursion inside the scheduler by storing the NEED_RESCHED flag. If
the flag was set before ftrace_preempt_disable(), schedule would not
be called on ftrace_preempt_enable(), on the assumption that if the
flag was set beforehand, the task would have already scheduled unless
it was already inside the scheduler.
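
For context, the kernel-doc on the removed helpers (visible in the diff below) describes the intended usage; a minimal sketch of a call site, with a hypothetical hook name, would be:

```c
/* Hypothetical tracer hook using the (now removed) pair. */
static void example_trace_hook(void)
{
	int resched;

	/* Save the NEED_RESCHED state, then disable preemption
	 * without recursing into the tracer. */
	resched = ftrace_preempt_disable();

	/* ... record the trace event with preemption off ... */

	/* If NEED_RESCHED was already set on entry, skip the
	 * preemption check on the way out. */
	ftrace_preempt_enable(resched);
}
```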
This worked fine except in the case of SMP, where another task would set
the NEED_RESCHED flag for a task on another CPU, and then kick off an
IPI to trigger it. This could cause the NEED_RESCHED flag to be saved at
ftrace_preempt_disable() but the IPI to arrive in the preempt
disabled section. The ftrace_preempt_enable() would then not call the
scheduler, because the flag was already set before entering the section.
This bug would cause a missed preemption check, resulting in higher
latencies.
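
As an illustration of the race described above (the interleaving is reconstructed from the message; smp_send_reschedule() is assumed as the IPI mechanism):

```c
/*
 *   CPU 1                               CPU 0 (in tracer code)
 *   -----                               ----------------------
 *   set NEED_RESCHED for CPU 0's task
 *                                       resched = need_resched(); // true
 *                                       preempt_disable_notrace();
 *   smp_send_reschedule(0);  // IPI
 *                                       // IPI arrives, but preemption is
 *                                       // disabled: no reschedule here
 *                                       ftrace_preempt_enable(resched);
 *                                       // resched == true, so the
 *                                       // no_resched variant runs and the
 *                                       // preemption check is missed
 */
```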
Investigating further, I found that the recursion caused by the function
tracer was not due to schedule(), but due to preempt_schedule(). Now
that preempt_schedule() is completely annotated with notrace, the
recursion is no longer an issue.
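
With the recursion gone, call sites can drop the pair and use the plain notrace helpers, which keep the full preemption check (a minimal sketch of the replacement pattern, not a verbatim call site):

```c
preempt_disable_notrace();
/* ... tracing work ... */
preempt_enable_notrace();	/* full preemption check, no recursion */
```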
Reported-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Diffstat (limited to 'kernel/trace/trace.h')
-rw-r--r--	kernel/trace/trace.h | 48
1 file changed, 0 insertions(+), 48 deletions(-)
```diff
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 2cd96399463f..6c45e55097ce 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -628,54 +628,6 @@ enum trace_iterator_flags {
 
 extern struct tracer nop_trace;
 
-/**
- * ftrace_preempt_disable - disable preemption scheduler safe
- *
- * When tracing can happen inside the scheduler, there exists
- * cases that the tracing might happen before the need_resched
- * flag is checked. If this happens and the tracer calls
- * preempt_enable (after a disable), a schedule might take place
- * causing an infinite recursion.
- *
- * To prevent this, we read the need_resched flag before
- * disabling preemption. When we want to enable preemption we
- * check the flag, if it is set, then we call preempt_enable_no_resched.
- * Otherwise, we call preempt_enable.
- *
- * The rational for doing the above is that if need_resched is set
- * and we have yet to reschedule, we are either in an atomic location
- * (where we do not need to check for scheduling) or we are inside
- * the scheduler and do not want to resched.
- */
-static inline int ftrace_preempt_disable(void)
-{
-	int resched;
-
-	resched = need_resched();
-	preempt_disable_notrace();
-
-	return resched;
-}
-
-/**
- * ftrace_preempt_enable - enable preemption scheduler safe
- * @resched: the return value from ftrace_preempt_disable
- *
- * This is a scheduler safe way to enable preemption and not miss
- * any preemption checks. The disabled saved the state of preemption.
- * If resched is set, then we are either inside an atomic or
- * are inside the scheduler (we would have already scheduled
- * otherwise). In this case, we do not want to call normal
- * preempt_enable, but preempt_enable_no_resched instead.
- */
-static inline void ftrace_preempt_enable(int resched)
-{
-	if (resched)
-		preempt_enable_no_resched_notrace();
-	else
-		preempt_enable_notrace();
-}
-
 #ifdef CONFIG_BRANCH_TRACER
 extern int enable_branch_tracing(struct trace_array *tr);
 extern void disable_branch_tracing(void);
```