author    Ingo Molnar <mingo@elte.hu>  2009-05-15 02:25:22 -0400
committer Ingo Molnar <mingo@elte.hu>  2009-05-15 03:47:05 -0400
commit    1c80f4b598d9b075a2a0be694e28be93a6702bcc (patch)
tree      b48d59fecd365342d6c2c8aab7912de9cd7aa030 /arch
parent    a4016a79fcbd139e7378944c0d86a39fdbc70ecc (diff)
perf_counter: x86: Disallow interval of 1
On certain CPUs I have observed a stuck PMU if interval was set to 1 and NMIs were used. The PMU had PMC0 set in MSR_CORE_PERF_GLOBAL_STATUS, but it was not possible to ack it via MSR_CORE_PERF_GLOBAL_OVF_CTRL, and the NMI loop got stuck infinitely.

[ Impact: fix rare hangs during high perfcounter load ]

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
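To make the failure mode concrete, here is a small user-space model (a sketch, not the kernel NMI handler): read_global_status() and ack_global_status() are illustrative stand-ins for the MSR_CORE_PERF_GLOBAL_STATUS read and the MSR_CORE_PERF_GLOBAL_OVF_CTRL write. If the hardware ignores the ack for PMC0, the handler's re-check loop never sees the status drop to zero, which is the hang described above.

#include <stdint.h>
#include <stdio.h>

static uint64_t global_status;  /* stands in for MSR_CORE_PERF_GLOBAL_STATUS */
static int pmu_is_stuck;        /* models the misbehaving CPUs */

static uint64_t read_global_status(void)
{
	return global_status;
}

static void ack_global_status(uint64_t mask)
{
	/* stands in for a write to MSR_CORE_PERF_GLOBAL_OVF_CTRL */
	if (!pmu_is_stuck)
		global_status &= ~mask;
	/* on the affected CPUs the ack is silently ignored */
}

static void handle_overflow_nmi(void)
{
	int iterations = 0;
	uint64_t status;

	while ((status = read_global_status()) != 0) {
		ack_global_status(status);
		if (++iterations > 1000) {   /* bail out in the model */
			printf("stuck: overflow status never clears\n");
			return;
		}
	}
	printf("all overflows acked after %d iteration(s)\n", iterations);
}

int main(void)
{
	global_status = 1ULL << 0;   /* PMC0 overflowed */
	pmu_is_stuck = 1;            /* the quirky hardware */
	handle_overflow_nmi();
	return 0;
}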
Diffstat (limited to 'arch')
-rw-r--r--  arch/x86/kernel/cpu/perf_counter.c  5
1 file changed, 5 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kernel/cpu/perf_counter.c b/arch/x86/kernel/cpu/perf_counter.c
index 1dcf67057f16..46a82d1e4cbe 100644
--- a/arch/x86/kernel/cpu/perf_counter.c
+++ b/arch/x86/kernel/cpu/perf_counter.c
@@ -473,6 +473,11 @@ x86_perf_counter_set_period(struct perf_counter *counter,
 		left += period;
 		atomic64_set(&hwc->period_left, left);
 	}
+	/*
+	 * Quirk: certain CPUs dont like it if just 1 event is left:
+	 */
+	if (unlikely(left < 2))
+		left = 2;
 
 	per_cpu(prev_left[idx], smp_processor_id()) = left;
 
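A note on why an interval of 1 is the problematic edge case: the driver programs the PMC with the negated remaining period, so the counter overflows after 'left' events; with left = 1 an overflow NMI fires on the very next event, which is what triggers the quirk on the affected CPUs. Below is a stand-alone illustration of that programming; the 48-bit counter width is an assumed value for the example, not something taken from this patch.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* 48-bit counter width is assumed here purely for illustration */
	const uint64_t counter_mask = (1ULL << 48) - 1;
	int64_t left;

	for (left = 1; left <= 3; left++) {
		/* the PMC is loaded with the negated remaining period */
		uint64_t programmed = (uint64_t)(-left) & counter_mask;

		printf("left=%lld -> PMC loaded with 0x%012llx, overflows after %lld event(s)\n",
		       (long long)left, (unsigned long long)programmed,
		       (long long)left);
	}
	return 0;
}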