author	Palik, Imre <imrep@amazon.de>	2015-06-08 08:46:49 -0400
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2015-06-29 15:35:28 -0400
commit	b675101824a103b3cf8f05c407a53ad50f987730 (patch)
tree	fbaca86cd7970109404122607f8d72bdd6dda7fa /arch
parent	55c9e52cf3b056109ab84de8d2a5b4b6e0a4cefa (diff)
perf/x86: Honor the architectural performance monitoring version
commit 2c33645d366d13b969d936b68b9f4875b1fdddea upstream.
Architectural performance monitoring, version 1, doesn't support fixed counters.
Currently, even if a hypervisor advertises support for architectural
performance monitoring version 1, perf may still try to use the fixed
counters, as the constraints are set up based on the CPU model.
This patch ensures that perf honors the architectural performance monitoring
version returned by CPUID, and it only uses the fixed counters for version 2
and above.
(Some of the ideas in this patch came from Peter Zijlstra.)
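
The version check that the patch relies on comes from CPUID leaf 0xa: EAX[7:0] reports the architectural performance monitoring version, EAX[15:8] the number of general-purpose counters, and EDX[4:0] the number of fixed-function counters, which is only meaningful for version 2 and above. A minimal userspace sketch of that query (not part of the patch; it assumes a GCC/Clang toolchain providing <cpuid.h>):

	#include <cpuid.h>
	#include <stdio.h>

	int main(void)
	{
		unsigned int eax, ebx, ecx, edx;

		/* CPUID leaf 0xa: architectural performance monitoring */
		if (!__get_cpuid(0xa, &eax, &ebx, &ecx, &edx))
			return 1;

		unsigned int version   = eax & 0xff;         /* EAX[7:0]  */
		unsigned int num_gp    = (eax >> 8) & 0xff;  /* EAX[15:8] */
		/* EDX[4:0] counts fixed counters only for version 2 and above. */
		unsigned int num_fixed = (version > 1) ? (edx & 0x1f) : 0;

		printf("arch perfmon version: %u\n", version);
		printf("general-purpose counters: %u\n", num_gp);
		printf("fixed counters: %u\n", num_fixed);
		return 0;
	}
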
Signed-off-by: Imre Palik <imrep@amazon.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Anthony Liguori <aliguori@amazon.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1433767609-1039-1-git-send-email-imrep.amz@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'arch')
 arch/x86/kernel/cpu/perf_event_intel.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
index afdd7c08d0fc..2813ea0f142e 100644
--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -3324,13 +3324,13 @@ __init int intel_pmu_init(void)
 		 * counter, so do not extend mask to generic counters
 		 */
 		for_each_event_constraint(c, x86_pmu.event_constraints) {
-			if (c->cmask != FIXED_EVENT_FLAGS
-			    || c->idxmsk64 == INTEL_PMC_MSK_FIXED_REF_CYCLES) {
-				continue;
-			}
-
-			c->idxmsk64 |= (1ULL << x86_pmu.num_counters) - 1;
-			c->weight += x86_pmu.num_counters;
+			if (c->cmask == FIXED_EVENT_FLAGS
+			    && c->idxmsk64 != INTEL_PMC_MSK_FIXED_REF_CYCLES) {
+				c->idxmsk64 |= (1ULL << x86_pmu.num_counters) - 1;
+			}
+			c->idxmsk64 &=
+				~(~0UL << (INTEL_PMC_IDX_FIXED + x86_pmu.num_counters_fixed));
+			c->weight = hweight64(c->idxmsk64);
 		}
 	}
 
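
The net effect of the reworked loop: a fixed-counter constraint (other than the REF_CYCLES one) is still widened to cover the generic counters, but every constraint mask is then clamped so that no bit at or above INTEL_PMC_IDX_FIXED + num_counters_fixed survives, and the weight is recomputed from the resulting mask rather than incremented. With num_counters_fixed == 0 the whole fixed-counter range is cleared. A small standalone sketch of that clamp (not part of the patch; INTEL_PMC_IDX_FIXED is 32 in the kernel headers, and the example mask and counter counts are illustrative):

	#include <stdint.h>
	#include <stdio.h>

	#define INTEL_PMC_IDX_FIXED 32	/* first fixed-counter bit, as in the kernel headers */

	/* Clamp a constraint index mask the way the patched loop does:
	 * clear every bit at or above INTEL_PMC_IDX_FIXED + num_counters_fixed. */
	static uint64_t clamp_idxmsk(uint64_t idxmsk64, unsigned int num_counters_fixed)
	{
		return idxmsk64 & ~(~0ULL << (INTEL_PMC_IDX_FIXED + num_counters_fixed));
	}

	int main(void)
	{
		/* Illustrative constraint mask covering fixed-counter bits 32..35. */
		uint64_t fixed_bits = 0xfULL << INTEL_PMC_IDX_FIXED;

		/* PMU reporting 3 fixed counters: bits 32..34 survive. */
		printf("%#llx\n", (unsigned long long)clamp_idxmsk(fixed_bits, 3));
		/* PMU reporting 0 fixed counters: the whole fixed range is cleared. */
		printf("%#llx\n", (unsigned long long)clamp_idxmsk(fixed_bits, 0));
		return 0;
	}
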