1. 11 May, 2019 1 commit
  2. 10 May, 2019 3 commits
    • livepatch: Remove klp_check_compiler_support() · 56e33afd
      Jiri Kosina authored
      The only purpose of klp_check_compiler_support() is to make sure that we
      are not using ftrace on x86 via mcount (because mcount is called only
      after the prologue has already happened, and that is too late for
      livepatching purposes).
      
      Now that mcount is not supported by ftrace any more, there is no need for
      klp_check_compiler_support() either.
      
      Link: http://lkml.kernel.org/r/nycvar.YFH.7.76.1905102346100.17054@cbobk.fhfr.pm
      Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    • ftrace/x86: Remove mcount support · 562e14f7
      Steven Rostedt (VMware) authored
      There are two methods of enabling function tracing in Linux on x86. One is
      with just "gcc -pg" and the other is "gcc -pg -mfentry". The former inserts
      a call to a special function "mcount" after the frame is set up in every C
      function. The latter adds a call to a special function called "fentry" as
      the very first instruction of every C function.
      
      At compile time, there is a check to see if gcc supports -mfentry, and if
      it does, that is used, because it is more versatile and less error prone
      for function tracing.
      
      Starting with v4.19, the minimum gcc version supported to build the Linux
      kernel was raised to 4.6. That also happens to be the first gcc version to
      support -mfentry. Since gcc 4.6 and beyond unconditionally enable -mfentry
      on x86, mcount is no longer used as the method for inserting calls into
      the C functions of the kernel. This means there is no point in continuing
      to maintain mcount support on x86.
      
      Remove support for using mcount. This makes the code less complex, and will
      also allow it to be simplified in the future.
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Jiri Kosina <jkosina@suse.cz>
      Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    • ftrace/x86_32: Remove support for non DYNAMIC_FTRACE · 518049d9
      Steven Rostedt (VMware) authored
      When DYNAMIC_FTRACE is enabled in the kernel, every function that can be
      traced by the function tracer has a "nop" placeholder at the start of the
      function. When function tracing is enabled, the nop is converted into a
      call into the tracing infrastructure, where the functions get traced. This
      also allows specific functions to be selected for tracing, and a lot of
      infrastructure is built on top of this.
      
      When DYNAMIC_FTRACE is not enabled, every function unconditionally calls
      the ftrace trampoline. There, a check is made to see whether the registered
      function pointer is ftrace_stub, and if it is not, the pointer is called to
      trace the code. This adds over 10% overhead to the kernel even when tracing
      is disabled.
      
      When an architecture supports DYNAMIC_FTRACE, there really is no reason to
      use static tracing. I have kept non DYNAMIC_FTRACE available on x86 so that
      the generic code for non DYNAMIC_FTRACE can be tested, but there is no
      reason to support it on both x86_64 and x86_32. As non DYNAMIC_FTRACE on
      x86_32 does not even support fentry, and we want to remove mcount
      completely, there is no reason to keep non DYNAMIC_FTRACE around for
      x86_32.
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
  3. 09 May, 2019 1 commit
  4. 08 May, 2019 13 commits
  5. 03 May, 2019 3 commits
  6. 29 Apr, 2019 1 commit
  7. 21 Apr, 2019 1 commit
    • function_graph: Have selftest also emulate tr->reset() as it did with tr->init() · 52fde6e7
      Steven Rostedt (VMware) authored
      The function_graph boot-up self test emulates the tr->init() function in
      order to add a wrapper around the function graph tracer entry code to test
      for lockups and the like. But it does not emulate tr->reset(); it just
      calls the function_graph tracer's tr->reset() function, which unregisters
      function tracing with the tracer's own fgraph_ops. As the fgraph_ops is
      becoming more meaningful with the register_ftrace_graph() and
      unregister_ftrace_graph() functions, the two need to be the same. The
      emulated tr->init() uses its own fgraph_ops descriptor, which means
      unregister_ftrace_graph() must be passed that same fgraph_ops, which the
      selftest currently does not do. By emulating tr->reset() as it does
      tr->init(), the selftest can pass the same fgraph_ops descriptor to
      unregister_ftrace_graph() as it did to register_ftrace_graph().
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
  8. 11 Apr, 2019 1 commit
    • ftrace: Do not process STUB functions in ftrace_ops_list_func() · 2fa717a0
      Steven Rostedt (VMware) authored
      The function_graph tracer has a stub function, and its ops has the
      FTRACE_OPS_FL_STUB flag set, as function_graph does not use the
      ftrace_ops->func pointer but is instead called from a separate part of the
      ftrace trampoline. The function_graph tracer still requires passing in a
      ftrace_ops that may also hold the hash of the functions to call, but there
      is no reason to test that hash in the function tracing portion. Instead of
      testing whether the stub function should be called, just test whether the
      ops has FTRACE_OPS_FL_STUB set, and skip it.
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
  9. 10 Apr, 2019 1 commit
  10. 08 Apr, 2019 12 commits
  11. 02 Apr, 2019 3 commits