Commit f66de7ac authored by Athira Rajeev, committed by Michael Ellerman

powerpc/perf: Invoke per-CPU variable access with disabled interrupts

The power_pmu_event_init() callback accesses the per-CPU variable
(cpu_hw_events) to check for event constraints and Branch Stack
(BHRB) support. The current code disables preemption while accessing
the per-CPU variable, but that does not prevent the timer callback
from interrupting event_init. Fix this by using local_irq_save/restore
to make sure the code path runs with interrupts disabled.
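
As context for the diff below, here is a minimal sketch of the pattern
change, using the real per-CPU and irqflags APIs but a hypothetical
stand-in struct instead of the actual cpu_hw_events definition.
get_cpu_var() only disables preemption; local_irq_save() also keeps
interrupt handlers, including the timer, off the CPU for the duration
of the access.

/*
 * Illustrative sketch only, not the actual powerpc perf code.
 * 'struct cpu_hw_events' is a stand-in for the real definition.
 */
#include <linux/percpu.h>
#include <linux/irqflags.h>

struct cpu_hw_events { int n_events; };	/* hypothetical stand-in */

static DEFINE_PER_CPU(struct cpu_hw_events, cpu_hw_events);

static void per_cpu_access_sketch(void)
{
	struct cpu_hw_events *cpuhw;
	unsigned long irq_flags;

	/*
	 * Before: get_cpu_var()/put_cpu_var() disable preemption only,
	 * so a timer interrupt can still run on this CPU and touch
	 * cpu_hw_events in the middle of the update:
	 *
	 *	cpuhw = &get_cpu_var(cpu_hw_events);
	 *	...
	 *	put_cpu_var(cpu_hw_events);
	 */

	/* After: interrupts are disabled across the whole access. */
	local_irq_save(irq_flags);
	cpuhw = this_cpu_ptr(&cpu_hw_events);
	cpuhw->n_events++;	/* placeholder for the constraint/BHRB checks */
	local_irq_restore(irq_flags);
}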

This change was tested in the mambo simulator to ensure that, if a timer
interrupt comes in during the per-CPU access in event_init, it is
soft-masked and replayed later. For testing purposes, a udelay() was
introduced in power_pmu_event_init() to make sure a timer interrupt
arrives while in the per-CPU variable access code between
local_irq_save/restore. As expected, the timer interrupt was replayed
later during the local_irq_restore() called from power_pmu_event_init().
This was confirmed by adding a breakpoint in mambo and checking the
backtrace when timer_interrupt was hit.
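
A hedged sketch of that test instrumentation (the udelay() length is
illustrative, <linux/delay.h> provides udelay(), and the delay was only
added temporarily for the experiment, not in the final patch):

	local_irq_save(irq_flags);
	cpuhw = this_cpu_ptr(&cpu_hw_events);
	/*
	 * test-only: spin long enough for a timer interrupt to become
	 * pending; it is soft-masked while interrupts are disabled
	 */
	udelay(10000);
	/* ... constraint and BHRB checks ... */
	local_irq_restore(irq_flags);	/* pending timer interrupt replayed here */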
Reported-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1606814880-1720-1-git-send-email-atrajeev@linux.vnet.ibm.com
parent c9344769
@@ -1912,7 +1912,7 @@ static bool is_event_blacklisted(u64 ev)
 static int power_pmu_event_init(struct perf_event *event)
 {
 	u64 ev;
-	unsigned long flags;
+	unsigned long flags, irq_flags;
 	struct perf_event *ctrs[MAX_HWEVENTS];
 	u64 events[MAX_HWEVENTS];
 	unsigned int cflags[MAX_HWEVENTS];
@@ -2020,7 +2020,9 @@ static int power_pmu_event_init(struct perf_event *event)
 	if (check_excludes(ctrs, cflags, n, 1))
 		return -EINVAL;
 
-	cpuhw = &get_cpu_var(cpu_hw_events);
+	local_irq_save(irq_flags);
+	cpuhw = this_cpu_ptr(&cpu_hw_events);
+
 	err = power_check_constraints(cpuhw, events, cflags, n + 1);
 
 	if (has_branch_stack(event)) {
@@ -2031,13 +2033,13 @@ static int power_pmu_event_init(struct perf_event *event)
 				event->attr.branch_sample_type);
 
 		if (bhrb_filter == -1) {
-			put_cpu_var(cpu_hw_events);
+			local_irq_restore(irq_flags);
 			return -EOPNOTSUPP;
 		}
 		cpuhw->bhrb_filter = bhrb_filter;
 	}
 
-	put_cpu_var(cpu_hw_events);
+	local_irq_restore(irq_flags);
 
 	if (err)
 		return -EINVAL;