Commit 452938cb authored by Linus Torvalds

Merge tag 'trace-v4.19-1' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace

Pull tracing fixes from Steven Rostedt:
 "Masami found an off by one bug in the code that keeps "notrace"
  functions from being traced by kprobes. During my testing, I found
  that there's places that we may want to add kprobes to notrace, thus
  we may end up changing this code before 4.19 is released.

  The history behind this change is that we found that adding kprobes
  to various notrace functions caused the kernel to crash. We took the
  safe route and decided not to allow kprobes to trace any notrace
  function.

  But because notrace is also added to functions that merely cause
  weird side effects in the function tracer, yet are otherwise safe to
  probe, preventing kprobes on all notrace functions may be too big a
  hammer.

  One such place is __schedule(), which is marked notrace to keep the
  function tracer from falling into strange recursive loops when it is
  traced with NEED_RESCHED set. With this change, one cannot add
  kprobes to the scheduler.

  Masami also added a config option to allow gcov profiling of the
  ftrace subsystem"

* tag 'trace-v4.19-1' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
  tracing/kprobes: Fix to check notrace function with correct range
  tracing: Allow gcov profiling on only ftrace subsystem
parents 815f0ddb 9161a864
kernel/trace/Kconfig
@@ -774,6 +774,18 @@ config TRACING_EVENTS_GPIO
 	help
 	  Enable tracing events for gpio subsystem
 
+config GCOV_PROFILE_FTRACE
+	bool "Enable GCOV profiling on ftrace subsystem"
+	depends on GCOV_KERNEL
+	help
+	  Enable GCOV profiling on ftrace subsystem for checking
+	  which functions/lines are tested.
+
+	  If unsure, say N.
+
+	  Note that on a kernel compiled with this config, ftrace will
+	  run significantly slower.
+
 endif # FTRACE
 
 endif # TRACING_SUPPORT
kernel/trace/Makefile
@@ -23,6 +23,11 @@ ifdef CONFIG_TRACING_BRANCHES
 KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
 endif
 
+# for GCOV coverage profiling
+ifdef CONFIG_GCOV_PROFILE_FTRACE
+GCOV_PROFILE := y
+endif
+
 CFLAGS_trace_benchmark.o := -I$(src)
 CFLAGS_trace_events_filter.o := -I$(src)
kernel/trace/trace_kprobe.c
@@ -513,7 +513,14 @@ static bool within_notrace_func(struct trace_kprobe *tk)
 	if (!addr || !kallsyms_lookup_size_offset(addr, &size, &offset))
 		return false;
 
-	return !ftrace_location_range(addr - offset, addr - offset + size);
+	/* Get the entry address of the target function */
+	addr -= offset;
+
+	/*
+	 * Since ftrace_location_range() does inclusive range check, we need
+	 * to subtract 1 byte from the end address.
+	 */
+	return !ftrace_location_range(addr, addr + size - 1);
 }
 
 #else
 #define within_notrace_func(tk)	(false)
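The off-by-one is easiest to see with concrete numbers. Below is a minimal
user-space sketch of the check: the helper name has_ftrace_site() and the
addresses are made up for illustration, and only the inclusive-range
semantics mirror the kernel's ftrace_location_range().

	#include <stdio.h>
	#include <stdbool.h>

	/* Illustrative stand-in for ftrace_location_range(): reports whether
	 * an ftrace (mcount/fentry) record falls in [start, end] -- an
	 * INCLUSIVE range, like the kernel's check. */
	static bool has_ftrace_site(unsigned long site,
				    unsigned long start, unsigned long end)
	{
		return start <= site && site <= end;
	}

	int main(void)
	{
		unsigned long addr = 0x1000; /* entry of a notrace function */
		unsigned long size = 0x100;  /* its size; no fentry site of its own */
		unsigned long next_site = addr + size; /* fentry of the NEXT function */

		/* Old check: [addr, addr + size] overlaps the next function's
		 * first byte, so a traceable neighbor's fentry record makes the
		 * notrace function look traceable, and within_notrace_func()
		 * wrongly returns false. */
		printf("old range hits next function:   %d\n",
		       has_ftrace_site(next_site, addr, addr + size));     /* 1 */

		/* Fixed check: [addr, addr + size - 1] covers only this
		 * function's own bytes. */
		printf("fixed range hits next function: %d\n",
		       has_ftrace_site(next_site, addr, addr + size - 1)); /* 0 */
		return 0;
	}

With the inclusive end at addr + size, any traceable function starting
immediately after a notrace function would defeat the notrace check; ending
the range at addr + size - 1 restores the intended per-function bounds.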