Commit b83a46e7 authored by Peter Zijlstra, committed by Ingo Molnar

perf, x86: Don't reset the LBR as frequently

If we reset the LBR on each first counter, simple counter rotation, which
first deschedules all counters and then reschedules the new ones, will
lead to an LBR reset even though we're still in the same task context.

Reduce this by not flushing on the first counter but only flushing when
the task context changes.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
Cc: paulus@samba.org
Cc: eranian@google.com
Cc: robert.richter@amd.com
Cc: fweisbec@gmail.com
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
parent ad0e6cfe
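
For illustration only, here is a minimal user-space sketch of the behaviour described above; it is not kernel code. The structs and the lbr_enable()/lbr_disable() helpers are simplified stand-ins for cpu_hw_events and the intel_pmu_lbr_* routines, used just to model how the old "!cpuc->lbr_users" condition triggers an LBR reset on every rotation within one task, while the task-context-only check does not.

/*
 * Illustrative model (not kernel code): compare the old and new reset
 * conditions across repeated counter rotations in the same task context.
 */
#include <stdio.h>
#include <stdbool.h>

struct ctx { int task; };                  /* stand-in for perf_event_context */
struct cpu_hw_events { int lbr_users; struct ctx *lbr_context; };

static int resets;                         /* how often we "reset" the LBR */

static void lbr_enable(struct cpu_hw_events *cpuc, struct ctx *event_ctx,
		       bool old_behaviour)
{
	bool first_user = (cpuc->lbr_users == 0);
	bool new_task   = (event_ctx->task && cpuc->lbr_context != event_ctx);

	/* old condition: first user OR new task; new condition: new task only */
	if ((old_behaviour && (first_user || new_task)) ||
	    (!old_behaviour && new_task)) {
		resets++;                      /* would call intel_pmu_lbr_reset() */
		cpuc->lbr_context = event_ctx;
	}
	cpuc->lbr_users++;
}

static void lbr_disable(struct cpu_hw_events *cpuc)
{
	cpuc->lbr_users--;
}

int main(void)
{
	struct ctx task = { .task = 1 };

	for (int old = 1; old >= 0; old--) {
		struct cpu_hw_events cpuc = { 0 };
		resets = 0;
		/* five rotations: schedule out all counters, schedule them back in */
		for (int i = 0; i < 5; i++) {
			lbr_enable(&cpuc, &task, old);
			lbr_disable(&cpuc);
		}
		printf("%s condition: %d LBR resets over 5 rotations\n",
		       old ? "old" : "new", resets);
	}
	return 0;
}

Built with any C99 compiler, this should report 5 resets over 5 rotations under the old condition and only 1 under the new one, which is the effect the patch below is after.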
@@ -72,12 +72,11 @@ static void intel_pmu_lbr_enable(struct perf_event *event)
 	WARN_ON_ONCE(cpuc->enabled);
 
 	/*
-	 * Reset the LBR stack if this is the first LBR user or
-	 * we changed task context so as to avoid data leaks.
+	 * Reset the LBR stack if we changed task context to
+	 * avoid data leaks.
 	 */
-	if (!cpuc->lbr_users ||
-	    (event->ctx->task && cpuc->lbr_context != event->ctx)) {
+	if (event->ctx->task && cpuc->lbr_context != event->ctx) {
 		intel_pmu_lbr_reset();
 		cpuc->lbr_context = event->ctx;
 	}
@@ -93,7 +92,7 @@ static void intel_pmu_lbr_disable(struct perf_event *event)
 		return;
 
 	cpuc->lbr_users--;
-	BUG_ON(cpuc->lbr_users < 0);
+	WARN_ON_ONCE(cpuc->lbr_users < 0);
 
 	if (cpuc->enabled && !cpuc->lbr_users)
 		__intel_pmu_lbr_disable();