Commit bad7192b authored by Peter Zijlstra, committed by Ingo Molnar

perf: Fix PERF_EVENT_IOC_PERIOD to force-reset the period

Vince Weaver reports that, on all architectures apart from ARM,
PERF_EVENT_IOC_PERIOD doesn't actually update the period until the next
event fires. This is counter-intuitive behaviour and is better dealt
with in the core code.

This patch ensures that the period is forcefully reset when dealing with
such a request in the core code. A subsequent patch removes the
equivalent hack from the ARM back-end.
Reported-by: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Link: http://lkml.kernel.org/r/1385560479-11014-1-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 7fd565e2
@@ -3527,7 +3527,7 @@ static void perf_event_for_each(struct perf_event *event,
 static int perf_event_period(struct perf_event *event, u64 __user *arg)
 {
 	struct perf_event_context *ctx = event->ctx;
-	int ret = 0;
+	int ret = 0, active;
 	u64 value;
 
 	if (!is_sampling_event(event))
@@ -3551,6 +3551,20 @@ static int perf_event_period(struct perf_event *event, u64 __user *arg)
 		event->attr.sample_period = value;
 		event->hw.sample_period = value;
 	}
+
+	active = (event->state == PERF_EVENT_STATE_ACTIVE);
+	if (active) {
+		perf_pmu_disable(ctx->pmu);
+		event->pmu->stop(event, PERF_EF_UPDATE);
+	}
+
+	local64_set(&event->hw.period_left, 0);
+
+	if (active) {
+		event->pmu->start(event, PERF_EF_RELOAD);
+		perf_pmu_enable(ctx->pmu);
+	}
+
 unlock:
 	raw_spin_unlock_irq(&ctx->lock);
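
For context, the user-visible effect of this change is that an ioctl(fd, PERF_EVENT_IOC_PERIOD, &period) on a running event now takes effect immediately instead of waiting for the next overflow. The following is a minimal userspace sketch of such a caller; the choice of a CPU-cycles event, the specific period values, and the perf_event_open() syscall wrapper are illustrative assumptions for the example, not part of this patch.

/* Sketch: update the sampling period of a live perf event from userspace. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

/* perf_event_open() has no glibc wrapper; call it via syscall(). */
static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
			    int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
	struct perf_event_attr attr;
	__u64 new_period = 200000;	/* assumed new period for the example */
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.type = PERF_TYPE_HARDWARE;
	attr.size = sizeof(attr);
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.sample_period = 100000;	/* assumed initial period */
	attr.disabled = 1;

	/* Count cycles on the current task, any CPU. */
	fd = perf_event_open(&attr, 0, -1, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

	/* ... event is sampling with the initial period ... */

	/*
	 * With this patch the new period is applied right away (the PMU is
	 * stopped, period_left is cleared, and the PMU is restarted), rather
	 * than only after the next overflow.
	 */
	if (ioctl(fd, PERF_EVENT_IOC_PERIOD, &new_period))
		perror("PERF_EVENT_IOC_PERIOD");

	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
	close(fd);
	return 0;
}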