author		Ingo Molnar <mingo@elte.hu>	2009-05-17 04:04:45 -0400
committer	Ingo Molnar <mingo@elte.hu>	2009-05-17 06:27:37 -0400
commit		d2517a49d55536b38c7a87e5289550cfedaa4dcc
tree		ee36f662094f0f09575cd2fbc4e6a67a716081d0
parent		0203026b58b4299ba7281c0b4b417207c1f05d0e
perf_counter, x86: fix zero irq_period counters
The irq_period quirk exposed a fragility in the hw_counter
initialization sequence: we left irq_period at 0, which the quirk
then bumped up to 2 ... which generated a _lot_ of interrupts
during 'perf stat' runs, slowed them down and skewed the counter
results in general.
Initialize irq_period to the maximum instead.
[ Impact: fix perf stat results ]
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Srivatsa Vaddagiri <vatsa@in.ibm.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
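To put a number on "a _lot_ of interrupts": with irq_period quirked up to 2,
a cycle counter overflows every other event. A back-of-the-envelope sketch in
C, assuming an illustrative ~3 GHz clock and a 31-bit maximum period (both
placeholder figures, not taken from the patch):

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		/* Placeholder figures for illustration only. */
		const double cycles_per_sec = 3e9;            /* assumed CPU clock  */
		const uint64_t quirked_period = 2;            /* 0, bumped by quirk */
		const uint64_t max_period = (1ULL << 31) - 1; /* stand-in for
		                                                 x86_pmu.max_period */

		printf("irqs/sec at period 2:           %.3g\n",
		       cycles_per_sec / (double)quirked_period);
		printf("irqs/sec at the maximum period: %.3g\n",
		       cycles_per_sec / (double)max_period);
		return 0;
	}

Roughly 1.5e9 attempted interrupts per second versus about 1.4 per second;
hence the slowdown and the skewed counts.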
 arch/x86/kernel/cpu/perf_counter.c | 3 +++
 1 file changed, 3 insertions(+)
diff --git a/arch/x86/kernel/cpu/perf_counter.c b/arch/x86/kernel/cpu/perf_counter.c
index 886dcf334bc..5bfd30ab392 100644
--- a/arch/x86/kernel/cpu/perf_counter.c
+++ b/arch/x86/kernel/cpu/perf_counter.c
@@ -286,6 +286,9 @@ static int __hw_perf_counter_init(struct perf_counter *counter)
 		hwc->nmi = 1;
 	}
 
+	if (!hwc->irq_period)
+		hwc->irq_period = x86_pmu.max_period;
+
 	atomic64_set(&hwc->period_left,
 		     min(x86_pmu.max_period, hwc->irq_period));
 
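For context, the quirk the changelog refers to sits in the period
reprogramming path. A simplified sketch of that logic, assuming the
workaround took the common "bump tiny periods up to 2" form; the function
name and exact conditions here are illustrative, not quoted from the tree:

	#include <stdio.h>
	#include <stdint.h>

	/* Simplified sketch; 'left' models hwc->period_left. */
	static int64_t reprogram_period_sketch(int64_t left, uint64_t period)
	{
		if (left <= 0)
			left += (int64_t)period;	/* roll the period forward */

		/*
		 * The quirk: some PMUs misbehave when only one event remains
		 * before overflow, so anything below 2 is bumped up to 2.
		 * With irq_period left at 0, every counter degenerated into
		 * an interrupt every other event -- the bug fixed above.
		 */
		if (left < 2)
			left = 2;

		return left;
	}

	int main(void)
	{
		printf("period 0 reprograms to: %lld events\n",
		       (long long)reprogram_period_sketch(0, 0));
		return 0;
	}

Defaulting irq_period to x86_pmu.max_period keeps counters opened without a
sampling period out of this quirk path entirely, so pure counting interrupts
at the slowest rate the hardware allows.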