author     Paul Mackerras <paulus@samba.org>    2009-08-14 01:39:10 -0400
committer  Ingo Molnar <mingo@elte.hu>          2009-08-17 05:38:13 -0400
commit     e1ac3614ff606ae03677f47459113f98a19af63c
tree       e2317686271461fad79cf88280f8a2eadf5bb88d
parent     2932cffc89e9a1476b28a59896fa4f81e0d4f131
perf_counter: Check task on counter read IPI
In general, code in perf_counter.c that is called through an
IPI checks, for per-task counters, that the counter's task is
still the current task. This is to handle the race condition
where the CPU switches from the task we want to another task in
the interval between the IPI being sent and it arriving and
being handled on the target CPU.
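For reference, this is the same per-task check that the existing cross-call
handlers in this file already perform; the snippet below is an illustrative
sketch modelled on __perf_counter_disable(), not a verbatim copy:

    /*
     * Sketch of the check pattern used by the existing IPI handlers
     * in kernel/perf_counter.c (modelled on __perf_counter_disable);
     * body abbreviated.
     */
    static void __perf_counter_disable(void *info)
    {
            struct perf_counter *counter = info;
            struct perf_cpu_context *cpuctx = &__get_cpu_var(perf_cpu_context);
            struct perf_counter_context *ctx = counter->ctx;

            /*
             * Bail out if this per-task counter's context is no longer
             * the current task context on this CPU.
             */
            if (ctx->task && cpuctx->task_ctx != ctx)
                    return;

            /* ... disable the counter ... */
    }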
For some reason, __perf_counter_read is missing this check, yet
there is no reason why the race condition can't occur. This
adds a check that the current task is the one we want. If it
isn't, we just return. In that case the counter->count value
should be up to date, since it will have been updated when the
counter was scheduled out, which must have happened since the
IPI was sent.
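Roughly, the caller side that sends the IPI looks like this (a paraphrased
sketch of the perf_counter_read() path, not a verbatim quote):

    /*
     * Sketch of the read path: if the counter is currently active on
     * some CPU, ask that CPU to update counter->count via a cross-call;
     * the race window is between this call being issued and
     * __perf_counter_read() actually running on counter->oncpu.
     */
    if (counter->state == PERF_COUNTER_STATE_ACTIVE)
            smp_call_function_single(counter->oncpu,
                                     __perf_counter_read, counter, 1);

    return atomic64_read(&counter->count);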
I don't have an example of an actual failure due to this race,
but it seems obvious that it could occur and we need to guard
against it.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <19076.63614.277861.368125@drongo.ozlabs.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'kernel/perf_counter.c')
 kernel/perf_counter.c | 11 +++++++++++
 1 file changed, 11 insertions(+), 0 deletions(-)
diff --git a/kernel/perf_counter.c b/kernel/perf_counter.c
index 534e20d14d63..b8fe7397b902 100644
--- a/kernel/perf_counter.c
+++ b/kernel/perf_counter.c
@@ -1503,10 +1503,21 @@ static void perf_counter_enable_on_exec(struct task_struct *task)
  */
 static void __perf_counter_read(void *info)
 {
+	struct perf_cpu_context *cpuctx = &__get_cpu_var(perf_cpu_context);
 	struct perf_counter *counter = info;
 	struct perf_counter_context *ctx = counter->ctx;
 	unsigned long flags;
 
+	/*
+	 * If this is a task context, we need to check whether it is
+	 * the current task context of this cpu. If not it has been
+	 * scheduled out before the smp call arrived. In that case
+	 * counter->count would have been updated to a recent sample
+	 * when the counter was scheduled out.
+	 */
+	if (ctx->task && cpuctx->task_ctx != ctx)
+		return;
+
 	local_irq_save(flags);
 	if (ctx->is_active)
 		update_context_time(ctx);