author	Paul Mackerras <paulus@samba.org>	2009-02-13 06:10:34 -0500
committer	Ingo Molnar <mingo@elte.hu>	2009-02-13 06:20:38 -0500
commit	c07c99b67233ccaad38a961c17405dc1e1542aa4 (patch)
tree	29682173de8f81b030a0d68006b56b115eea0ce9 /include/linux/perf_counter.h
parent	b1864e9a1afef41709886072c6e6248def0386f4 (diff)
perfcounters: make context switch and migration software counters work again
Jaswinder Singh Rajput reported that commit 23a185ca8abbeef caused the context switch and migration software counters to always report zero. With that commit, the software counters only count events that occur between sched-in and sched-out for a task, which is necessary for the counter enable/disable prctls and ioctls to work. However, the context switch and migration counts are incremented after sched-out for one task and before sched-in for the next. Since the increment doesn't occur while a task is scheduled in (as far as the software counters are concerned), it doesn't count towards any counter.

Thus the context switch and migration counters need to count events that occur at any time, provided the counter is enabled, not just those that occur while the task is scheduled in (from the perf_counter subsystem's point of view). The problem is that the software counter code can't tell the difference between being enabled and being scheduled in, or between being disabled and being scheduled out, since we use the one pair of enable/disable entry points for both. That is, the high-level disable operation simply arranges for the counter not to be scheduled in any more, and the high-level enable operation arranges for it to be scheduled in again.

One way to solve this would be to have sched_in/out operations in the hw_perf_counter_ops struct as well as enable/disable. However, this commit takes a simpler approach: it adds a 'prev_state' field to the perf_counter struct so that a counter's enable method can tell whether the counter was previously disabled or merely inactive (scheduled out), and therefore whether the enable method is being called for a high-level enable or a schedule-in operation. This lets the context switch, migration and page fault counters reset their hw.prev_count value in their enable functions only when called for a high-level enable operation.

Although page faults would normally occur only while the counter is scheduled in, this changes the page fault counter code too, in case there are ever circumstances where page faults get counted against a task while its counters are not scheduled in.

Reported-by: Jaswinder Singh Rajput <jaswinder@kernel.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'include/linux/perf_counter.h')
 include/linux/perf_counter.h | 1 +
 1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/include/linux/perf_counter.h b/include/linux/perf_counter.h
index c83f51d6e359..32cd1acb7386 100644
--- a/include/linux/perf_counter.h
+++ b/include/linux/perf_counter.h
@@ -173,6 +173,7 @@ struct perf_counter {
 	const struct hw_perf_counter_ops *hw_ops;
 
 	enum perf_counter_active_state	state;
+	enum perf_counter_active_state	prev_state;
 	atomic64_t			count;
 
 	struct perf_counter_hw_event	hw_event;
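
For illustration, here is a minimal sketch of how a software counter's enable method can use the new prev_state field. This is not the committed kernel/perf_counter.c change: the helper get_context_switches() is hypothetical, and the sketch assumes the perf_counter_active_state enum orders PERF_COUNTER_STATE_OFF below PERF_COUNTER_STATE_INACTIVE, so a <= comparison distinguishes a high-level enable from a schedule-in.

static int context_switches_perf_counter_enable(struct perf_counter *counter)
{
	/*
	 * Reset the baseline only on a high-level enable (the counter
	 * was previously OFF), not on an ordinary schedule-in (the
	 * counter was merely INACTIVE), so that events occurring
	 * between sched-out and sched-in are not lost.
	 */
	if (counter->prev_state <= PERF_COUNTER_STATE_OFF)
		atomic64_set(&counter->hw.prev_count,
			     get_context_switches(counter));
	return 0;
}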