Commit 021c5b34 authored by Corey Minyard, committed by Steven Rostedt

ring-buffer: Always run per-cpu ring buffer resize with schedule_work_on()

The code for resizing the trace ring buffers has to run the per-cpu
resize on the CPU itself.  The code disabled preemption with
preempt_disable(), ran the update directly when already on the target
CPU (or when that CPU was offline), and otherwise called
schedule_work_on().
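
In outline, the removed logic was the following (a condensed
paraphrase of the pre-patch loop; the real code is in the diff below):

	preempt_disable();
	if (cpu == smp_processor_id() || !cpu_online(cpu)) {
		/*
		 * Inline path: rb_update_pages() frees buffer pages
		 * while preemption is disabled.
		 */
		rb_update_pages(cpu_buffer);
		cpu_buffer->nr_pages_to_update = 0;
	} else {
		/*
		 * Can not disable preemption for schedule_work_on()
		 * on PREEMPT_RT, so toggle it around the call.
		 */
		preempt_enable();
		schedule_work_on(cpu, &cpu_buffer->update_pages_work);
		preempt_disable();
	}
	preempt_enable();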

At least on RT this could result in the following:

|BUG: sleeping function called from invalid context at kernel/rtmutex.c:673
|in_atomic(): 1, irqs_disabled(): 0, pid: 607, name: bash
|3 locks held by bash/607:
|CPU: 0 PID: 607 Comm: bash Not tainted 3.12.15-rt25+ #124
|(rt_spin_lock+0x28/0x68)
|(free_hot_cold_page+0x84/0x3b8)
|(free_buffer_page+0x14/0x20)
|(rb_update_pages+0x280/0x338)
|(ring_buffer_resize+0x32c/0x3dc)
|(free_snapshot+0x18/0x38)
|(tracing_set_tracer+0x27c/0x2ac)

The splat happens because rb_update_pages() ends up freeing buffer
pages, and on PREEMPT_RT the free_hot_cold_page() path takes an
rt_spin_lock, a sleeping lock that must not be acquired in the atomic
context created by preempt_disable().  It was probably triggered via
|cd /sys/kernel/debug/tracing/
|echo 1 > events/enable ; sleep 2
|echo 1024 > buffer_size_kb

If we just always use schedule_work_on(), there's no need for the
preempt_disable()/preempt_enable() dance.  So do that.
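
The resulting pattern is simple: queue the work on the target CPU
unconditionally and wait for it; the worker runs in process context,
where sleeping locks are legal even on PREEMPT_RT.  A minimal sketch
of that pattern as a stand-alone demo module (hypothetical, not part
of the patch):

	#include <linux/module.h>
	#include <linux/workqueue.h>
	#include <linux/completion.h>
	#include <linux/smp.h>

	static struct work_struct demo_work;
	static DECLARE_COMPLETION(demo_done);

	static void demo_work_fn(struct work_struct *work)
	{
		/*
		 * Workqueue context: process context on the target CPU,
		 * so sleeping locks (e.g. in the page allocator) are
		 * fine here, even on PREEMPT_RT.
		 */
		pr_info("demo work ran on CPU %d\n", smp_processor_id());
		complete(&demo_done);
	}

	static int __init demo_init(void)
	{
		INIT_WORK(&demo_work, demo_work_fn);
		/*
		 * Queue on CPU 0 unconditionally -- correct whether or
		 * not we are currently running on CPU 0.
		 */
		schedule_work_on(0, &demo_work);
		wait_for_completion(&demo_done);
		return 0;
	}

	static void __exit demo_exit(void)
	{
	}

	module_init(demo_init);
	module_exit(demo_exit);
	MODULE_LICENSE("GPL");

Queueing the work even when the caller is already on the right CPU
costs little, and dropping the special case removes the preemption
toggling entirely.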

Link: http://lkml.kernel.org/p/1405537633-31518-1-git-send-email-cminyard@mvista.com

Reported-by: Stanislav Meduna <stano@meduna.org>
Signed-off-by: Corey Minyard <cminyard@mvista.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
parent 3a636388
@@ -1693,22 +1693,14 @@ int ring_buffer_resize(struct ring_buffer *buffer, unsigned long size,
 			if (!cpu_buffer->nr_pages_to_update)
 				continue;
 
-			/* The update must run on the CPU that is being updated. */
-			preempt_disable();
-			if (cpu == smp_processor_id() || !cpu_online(cpu)) {
+			/* Can't run something on an offline CPU. */
+			if (!cpu_online(cpu)) {
 				rb_update_pages(cpu_buffer);
 				cpu_buffer->nr_pages_to_update = 0;
 			} else {
-				/*
-				 * Can not disable preemption for schedule_work_on()
-				 * on PREEMPT_RT.
-				 */
-				preempt_enable();
 				schedule_work_on(cpu,
 						&cpu_buffer->update_pages_work);
-				preempt_disable();
 			}
-			preempt_enable();
 		}
 
 		/* wait for all the updates to complete */
@@ -1746,22 +1738,14 @@ int ring_buffer_resize(struct ring_buffer *buffer, unsigned long size,
 
 		get_online_cpus();
 
-		preempt_disable();
-		/* The update must run on the CPU that is being updated. */
-		if (cpu_id == smp_processor_id() || !cpu_online(cpu_id))
+		/* Can't run something on an offline CPU. */
+		if (!cpu_online(cpu_id))
 			rb_update_pages(cpu_buffer);
 		else {
-			/*
-			 * Can not disable preemption for schedule_work_on()
-			 * on PREEMPT_RT.
-			 */
-			preempt_enable();
 			schedule_work_on(cpu_id,
 					&cpu_buffer->update_pages_work);
 			wait_for_completion(&cpu_buffer->update_done);
-			preempt_disable();
 		}
-		preempt_enable();
 
 		cpu_buffer->nr_pages_to_update = 0;
 		put_online_cpus();