author		Deng-Cheng Zhu <dczhu@mips.com>	2011-11-21 14:28:46 -0500
committer	Ralf Baechle <ralf@linux-mips.org>	2011-12-07 17:04:41 -0500
commit		74653ccf231a3100dd03e16e7a4178868a37332e (patch)
tree		d4709f0b9a7fdfcd9a70d9a6b34dfd3f8f906270 /arch/mips
parent		2c1b54d331bde7afbf8da24789cce2402e155495 (diff)
MIPS/Perf-events: Remove erroneous check on active_events
Port the following patch for ARM by Mark Rutland:
- 57ce9bb39b476accf8fba6e16aea67ed76ea523d
ARM: 6902/1: perf: Remove erroneous check on active_events
When initialising a PMU, there is a check to protect against races with
other CPUs filling all of the available event slots. Since armpmu_add
checks that an event can be scheduled, we do not need to do this at
initialisation time. Furthermore, the current code is broken because it
assumes that atomic_inc_not_zero will unconditionally increment
active_events and then tries to decrement it again on failure.
This patch removes the broken, redundant code.
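[Editor's note: the sketch below is illustrative and is not part of the original commit. It models atomic_inc_not_zero with C11 <stdatomic.h> rather than the kernel's arch-specific implementation, to show why the removed decrement was wrong: when atomic_inc_not_zero fails, the counter was zero and was never incremented, so a compensating atomic_dec would drive it negative.]

#include <assert.h>
#include <stdatomic.h>

/* Simplified model of the kernel's atomic_inc_not_zero(): increment *v
 * only if it is non-zero; return 1 on success, 0 if *v was zero.
 * Illustration only -- not the kernel implementation. */
static int atomic_inc_not_zero(atomic_int *v)
{
	int old = atomic_load(v);
	while (old != 0) {
		if (atomic_compare_exchange_weak(v, &old, old + 1))
			return 1; /* incremented */
	}
	return 0; /* *v was zero: no increment happened */
}

int main(void)
{
	atomic_int active_events = 0;

	/* Failure case: counter was zero, so nothing was incremented... */
	assert(atomic_inc_not_zero(&active_events) == 0);
	/* ...and a compensating atomic_dec() here would wrongly drive the
	 * counter to -1 -- the mistaken assumption in the removed code. */
	assert(atomic_load(&active_events) == 0);

	/* Success case: a non-zero counter is incremented. */
	atomic_store(&active_events, 1);
	assert(atomic_inc_not_zero(&active_events) == 1);
	assert(atomic_load(&active_events) == 2);
	return 0;
}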
Signed-off-by: Deng-Cheng Zhu <dczhu@mips.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: David Daney <david.daney@cavium.com>
Cc: Eyal Barzilay <eyal@mips.com>
Cc: Zenon Fortuna <zenon@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/3106/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Diffstat (limited to 'arch/mips')
-rw-r--r--	arch/mips/kernel/perf_event_mipsxx.c	5
1 files changed, 0 insertions, 5 deletions
diff --git a/arch/mips/kernel/perf_event_mipsxx.c b/arch/mips/kernel/perf_event_mipsxx.c
index ab4c761cfedc..b5d6b3fa5a41 100644
--- a/arch/mips/kernel/perf_event_mipsxx.c
+++ b/arch/mips/kernel/perf_event_mipsxx.c
@@ -621,11 +621,6 @@ static int mipspmu_event_init(struct perf_event *event)
 		return -ENODEV;
 
 	if (!atomic_inc_not_zero(&active_events)) {
-		if (atomic_read(&active_events) > MIPS_MAX_HWEVENTS) {
-			atomic_dec(&active_events);
-			return -ENOSPC;
-		}
-
 		mutex_lock(&pmu_reserve_mutex);
 		if (atomic_read(&active_events) == 0)
 			err = mipspmu_get_irq();