Commit 57ce9bb3 authored by Mark Rutland, committed by Russell King

ARM: 6902/1: perf: Remove erroneous check on active_events

When initialising a PMU, there is a check to protect against races with
other CPUs filling all of the available event slots. Since armpmu_add
checks that an event can be scheduled, we do not need to do this at
initialisation time. Furthermore, the current code is broken because it
assumes that atomic_inc_not_zero will unconditionally increment
active_events and then tries to decrement it again on failure.

This patch removes the broken, redundant code.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Jamie Iles <jamie@jamieiles.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
parent 31bee4cf
@@ -560,11 +560,6 @@ static int armpmu_event_init(struct perf_event *event)
 	event->destroy = hw_perf_event_destroy;
 
 	if (!atomic_inc_not_zero(&active_events)) {
-		if (atomic_read(&active_events) > armpmu->num_events) {
-			atomic_dec(&active_events);
-			return -ENOSPC;
-		}
-
 		mutex_lock(&pmu_reserve_mutex);
 		if (atomic_read(&active_events) == 0) {
 			err = armpmu_reserve_hardware();