author     Steven Rostedt <rostedt@goodmis.org>    2008-11-03 23:15:55 -0500
committer  Ingo Molnar <mingo@elte.hu>             2008-11-04 04:09:48 -0500
commit     8f0a056fcb2f83a069fb5d60c2383304b7456687 (patch)
tree       a427700400ab674ed7de882214b501176ec7507f /kernel/trace/trace.h
parent     7a895f53cda9d9362c30144e42c124a1ae996b9e (diff)
ftrace: introduce ftrace_preempt_disable()/enable()
Impact: add new ftrace-plugin internal APIs
Parts of the tracer need to be careful about schedule recursion.
If the NEED_RESCHED flag is set, a preempt_enable will call schedule.
Inside the schedule function, the NEED_RESCHED flag is cleared.
The problem arises when a trace happens in the schedule function but before
NEED_RESCHED is cleared. The race is as follows:
 schedule()
   >> tracer called

    trace_function()
       preempt_disable()
       [ record trace ]
       preempt_enable()  <<- here's the issue.

         [check NEED_RESCHED]
          schedule()
           [ Repeat the above, over and over again ]
The naive approach is simply to use preempt_enable_no_resched() instead.
The problem with that approach is that, although we solve the schedule
recursion issue, we now might lose a preemption check when not in the
schedule function.
  trace_function()
     preempt_disable()
     [ record trace ]

     [Interrupt comes in and sets NEED_RESCHED]

     preempt_enable_no_resched()
     [continue without scheduling]
The way ftrace handles this problem is with the following approach:
  int resched;

  resched = need_resched();
  preempt_disable_notrace();

  [ record trace ]

  if (resched)
        preempt_enable_no_resched_notrace();
  else
        preempt_enable_notrace();
This may seem like the opposite of what we want. If resched is set,
why do we call the "no_resched" version? The reason is that if
NEED_RESCHED is set before we disable preemption, there are two
possible reasons for it:
1) we are in an atomic code path
2) we are already on our way to the schedule function, and maybe even
   in the schedule function, but have yet to clear the flag.
In both of these cases we do not want to schedule.
This solution has already been implemented within the ftrace infrastructure.
The problem is that it has been open-coded several times. This patch
encapsulates the code into two helper functions:
  resched = ftrace_preempt_disable();
  [ record trace ]
  ftrace_preempt_enable(resched);
This way the tracers do not need to worry about getting it right.
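For illustration, here is a minimal sketch of how a tracer plugin could use the pair. The callback name my_trace_call and the record step are hypothetical; only the ftrace_preempt_disable()/ftrace_preempt_enable() calls come from this patch:

  /* Hypothetical tracer callback; only the ftrace_preempt_*() calls
   * are introduced by this patch, the rest is illustrative. */
  static void my_trace_call(unsigned long ip, unsigned long parent_ip)
  {
          int resched;

          /* save the NEED_RESCHED state and disable preemption */
          resched = ftrace_preempt_disable();

          /* ... record the trace entry for ip/parent_ip here ... */

          /* re-enable preemption without recursing into schedule() */
          ftrace_preempt_enable(resched);
  }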
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'kernel/trace/trace.h')
-rw-r--r--  kernel/trace/trace.h  48
1 files changed, 48 insertions, 0 deletions
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 8465ad052707..10c6dae76894 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -419,4 +419,52 @@ enum trace_iterator_flags {
 
 extern struct tracer nop_trace;
 
+/**
+ * ftrace_preempt_disable - disable preemption scheduler safe
+ *
+ * When tracing can happen inside the scheduler, there exist
+ * cases where the tracing might happen before the need_resched
+ * flag is checked. If this happens and the tracer calls
+ * preempt_enable (after a disable), a schedule might take place
+ * causing an infinite recursion.
+ *
+ * To prevent this, we read the need_resched flag before
+ * disabling preemption. When we want to enable preemption we
+ * check the flag: if it is set, then we call preempt_enable_no_resched.
+ * Otherwise, we call preempt_enable.
+ *
+ * The rationale for doing the above is that if need_resched is set
+ * and we have yet to reschedule, we are either in an atomic location
+ * (where we do not need to check for scheduling) or we are inside
+ * the scheduler and do not want to resched.
+ */
+static inline int ftrace_preempt_disable(void)
+{
+	int resched;
+
+	resched = need_resched();
+	preempt_disable_notrace();
+
+	return resched;
+}
+
+/**
+ * ftrace_preempt_enable - enable preemption scheduler safe
+ * @resched: the return value from ftrace_preempt_disable
+ *
+ * This is a scheduler safe way to enable preemption and not miss
+ * any preemption checks. The disable call saved the state of preemption.
+ * If resched is set, then we were either inside an atomic section or
+ * are inside the scheduler (we would have already scheduled
+ * otherwise). In this case, we do not want to call the normal
+ * preempt_enable, but preempt_enable_no_resched instead.
+ */
+static inline void ftrace_preempt_enable(int resched)
+{
+	if (resched)
+		preempt_enable_no_resched_notrace();
+	else
+		preempt_enable_notrace();
+}
+
 #endif /* _LINUX_KERNEL_TRACE_H */