Commit 29558665 authored by Steven Rostedt, committed by Greg Kroah-Hartman

ftrace: Fix synchronization location disabling and freeing ftrace_ops

commit a4c35ed2 upstream.

The synchronization needed after ftrace_ops are unregistered must happen
after the callback is disabled from being called by functions.

The current synchronization happens after the ftrace_ops is removed from
the internal lists, but before its callback is disabled at the function
call sites, leaving a window in which functions can still call into
callbacks whose data has already been freed.

This affects perf and any external users of function tracing (LTTng and
SystemTap).

Fixes: cdbe61bf ("ftrace: Allow dynamically allocated function tracers")
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent 95bcd16e
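
To make the ordering concrete before reading the diff, here is a minimal,
compilable C sketch of the bug and the fix. All names in it (my_ops,
remove_from_list, disable_callback, synchronize_all_cpus) are hypothetical
stand-ins for the ftrace internals, not the real kernel API; only the
ordering is the point.

/* sketch.c - hypothetical stand-ins, not the real ftrace API */
#include <stdio.h>
#include <stdlib.h>

struct my_ops {
	void (*func)(void);	/* tracing callback hooked into functions */
	int *private_data;	/* per-ops data freed on unregister       */
};

static void remove_from_list(struct my_ops *ops) { (void)ops; }		/* unlink from internal lists */
static void disable_callback(struct my_ops *ops) { ops->func = NULL; }	/* unhook from call sites     */
static void synchronize_all_cpus(void) { }				/* wait for in-flight callers */

/* Old ordering (buggy): wait and free while the callback is still hooked. */
static void unregister_broken(struct my_ops *ops)
{
	remove_from_list(ops);		/* no new registrations see it       */
	synchronize_all_cpus();		/* too early: callback still hooked  */
	free(ops->private_data);	/* traced code may still use this    */
	disable_callback(ops);		/* unhooked only after the free      */
}

/* New ordering (fixed): unhook first, then wait, then free. */
static void unregister_fixed(struct my_ops *ops)
{
	remove_from_list(ops);
	disable_callback(ops);		/* no new calls can start   */
	synchronize_all_cpus();		/* in-flight callers drain  */
	free(ops->private_data);	/* only now is freeing safe */
}

int main(void)
{
	struct my_ops ops = { NULL, malloc(sizeof(int)) };
	(void)unregister_broken;	/* shown for contrast only */
	unregister_fixed(&ops);
	puts("unregistered safely");
	return 0;
}

In ftrace itself the "wait" step additionally has to be the hard
schedule_on_each_cpu() variant used in the diff below, because the
callbacks can run where RCU is not watching.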
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -376,16 +376,6 @@ static int __unregister_ftrace_function(struct ftrace_ops *ops)
 	} else if (ops->flags & FTRACE_OPS_FL_CONTROL) {
 		ret = remove_ftrace_list_ops(&ftrace_control_list,
 					     &control_ops, ops);
-		if (!ret) {
-			/*
-			 * The ftrace_ops is now removed from the list,
-			 * so there'll be no new users. We must ensure
-			 * all current users are done before we free
-			 * the control data.
-			 */
-			synchronize_sched();
-			control_ops_free(ops);
-		}
 	} else
 		ret = remove_ftrace_ops(&ftrace_ops_list, ops);
 
@@ -395,13 +385,6 @@ static int __unregister_ftrace_function(struct ftrace_ops *ops)
 	if (ftrace_enabled)
 		update_ftrace_function();
 
-	/*
-	 * Dynamic ops may be freed, we must make sure that all
-	 * callers are done before leaving this function.
-	 */
-	if (ops->flags & FTRACE_OPS_FL_DYNAMIC)
-		synchronize_sched();
-
 	return 0;
 }
 
@@ -2025,10 +2008,41 @@ static int ftrace_shutdown(struct ftrace_ops *ops, int command)
 		command |= FTRACE_UPDATE_TRACE_FUNC;
 	}
 
-	if (!command || !ftrace_enabled)
+	if (!command || !ftrace_enabled) {
+		/*
+		 * If these are control ops, they still need their
+		 * per_cpu field freed. Since, function tracing is
+		 * not currently active, we can just free them
+		 * without synchronizing all CPUs.
+		 */
+		if (ops->flags & FTRACE_OPS_FL_CONTROL)
+			control_ops_free(ops);
 		return 0;
+	}
 
 	ftrace_run_update_code(command);
+
+	/*
+	 * Dynamic ops may be freed, we must make sure that all
+	 * callers are done before leaving this function.
+	 * The same goes for freeing the per_cpu data of the control
+	 * ops.
+	 *
+	 * Again, normal synchronize_sched() is not good enough.
+	 * We need to do a hard force of sched synchronization.
+	 * This is because we use preempt_disable() to do RCU, but
+	 * the function tracers can be called where RCU is not watching
+	 * (like before user_exit()). We can not rely on the RCU
+	 * infrastructure to do the synchronization, thus we must do it
+	 * ourselves.
+	 */
+	if (ops->flags & (FTRACE_OPS_FL_DYNAMIC | FTRACE_OPS_FL_CONTROL)) {
+		schedule_on_each_cpu(ftrace_sync);
+		if (ops->flags & FTRACE_OPS_FL_CONTROL)
+			control_ops_free(ops);
+	}
+
 	return 0;
 }
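
A note on the synchronization primitive used above: schedule_on_each_cpu()
queues a work item on every CPU and waits for all of them to complete,
which forces each CPU through the scheduler even in paths where RCU is not
watching. The ftrace_sync() worker it runs is therefore just an empty stub;
the act of scheduling on every CPU is itself the synchronization. A sketch
of that stub as it appears in the upstream commit (comment wording
approximate):

#include <linux/workqueue.h>

static void ftrace_sync(struct work_struct *work)
{
	/*
	 * This function is just a stub to implement a hard force
	 * of synchronize_sched(). This requires synchronizing
	 * tasks even in userspace and idle.
	 */
}

/* used as: schedule_on_each_cpu(ftrace_sync); */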