21 Dec, 2011 (16 commits)
    • x86: Add counter when debug stack is used with interrupts enabled · 42181186
      Steven Rostedt authored
      Mathieu Desnoyers pointed out a case that can cause issues with
      NMIs running on the debug stack:
      
        int3 -> interrupt -> NMI -> int3
      
      Because the interrupt changes the stack, the NMI will not see that
      it preempted the debug stack. Looking deeper at this case,
      interrupts only happen when the int3 is from userspace or in
      a location in the exception table (fixup).
      
        userspace -> int3 -> interrupt -> NMI -> int3
      
      All other int3s that happen in the kernel should be processed
      without ever enabling interrupts, as the do_trap() call will
      panic the kernel if it is called to process any other location
      within the kernel.
      
      Adding a counter around the sections that enable interrupts while
      using the debug stack allows the NMI to also check that case.
      If the NMI sees that it either interrupted a task using the debug
      stack or that the debug counter is non-zero, then it has to
      change the IDT table so that the int3 does not change stacks (which
      would corrupt the stack if it did).
      
      Note, I had to move the debug_usage functions out of processor.h
      and into debugreg.h because of the static inline functions that
      inc and dec the debug_usage counter. __get_cpu_var() requires
      smp.h, which includes processor.h, and the build would fail otherwise.
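      A minimal sketch of those helpers, based on the description above
      (the exact names and placement are illustrative, not the verbatim
      patch):
      
      	DECLARE_PER_CPU(int, debug_stack_usage);
      
      	static inline void debug_stack_usage_inc(void)
      	{
      		/* debug stack may be in use with interrupts enabled */
      		__get_cpu_var(debug_stack_usage)++;
      	}
      
      	static inline void debug_stack_usage_dec(void)
      	{
      		__get_cpu_var(debug_stack_usage)--;
      	}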
      
      Link: http://lkml.kernel.org/r/1323976535.23971.112.camel@gandalf.stny.rr.com
      Reported-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Paul Turner <pjt@google.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • x86: Allow NMIs to hit breakpoints in i386 · ccd49c23
      Steven Rostedt authored
      With i386, NMIs and breakpoints use the current stack and they
      do not reset the stack pointer to a fixed point that might corrupt
      a previous NMI or breakpoint (as x86_64 does). But NMIs are
      still not made to be re-entrant, and we need to handle the case
      where an NMI hits a breakpoint (whose iret re-enables NMIs) and
      another NMI comes in before the first one finishes.
      
      The fix is to let the NMI be in 3 different states:
      
      1) not running
      2) executing
      3) latched
      
      When no NMI is executing on a given CPU, the state is "not running".
      When the first NMI comes in, the state is switched to "executing".
      On exit of that NMI, a cmpxchg is performed to switch the state
      back to "not running" and if that fails, the NMI is restarted.
      
      If a breakpoint is hit and does an iret, which re-enables NMIs,
      and another NMI comes in before the first NMI finished, it will
      detect that the state is not in the "not running" state and the
      current NMI is nested. In this case, the state is switched to "latched"
      to let the interrupted NMI know to restart the NMI handler, and
      the nested NMI exits without doing anything.
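      In C-like pseudocode, the state machine is roughly the following
      (a sketch based on the description above; the per-CPU variable name
      is illustrative):
      
      	enum nmi_states { NMI_NOT_RUNNING, NMI_EXECUTING, NMI_LATCHED };
      	static DEFINE_PER_CPU(enum nmi_states, nmi_state);
      
      	if (this_cpu_read(nmi_state) != NMI_NOT_RUNNING) {
      		/* nested: tell the interrupted NMI to repeat itself */
      		this_cpu_write(nmi_state, NMI_LATCHED);
      		return;
      	}
      nmi_restart:
      	this_cpu_write(nmi_state, NMI_EXECUTING);
      
      	/* ... handle the NMI ... */
      
      	/* if a nested NMI latched in the meantime, run again */
      	if (cmpxchg(this_cpu_ptr(&nmi_state),
      		    NMI_EXECUTING, NMI_NOT_RUNNING) != NMI_EXECUTING)
      		goto nmi_restart;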
      
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Paul Turner <pjt@google.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • x86: Keep current stack in NMI breakpoints · 228bdaa9
      Steven Rostedt authored
      We want to allow NMI handlers to have breakpoints so we can
      remove stop_machine from ftrace, kprobes and jump_labels. But if
      an NMI interrupts breakpoint processing and then triggers a
      breakpoint itself, it will switch to the breakpoint stack and
      corrupt the data on it for the breakpoint processing that it
      interrupted.
      
      Instead, have the NMI check whether it interrupted breakpoint
      processing by checking whether the stack currently in use is a
      breakpoint stack. If it is, then load a special IDT that changes
      the IST for the debug exception to keep the same stack in kernel
      context. When the NMI is done, it puts the original IDT back.
      
      This way, if the NMI does trigger a breakpoint, it will keep
      using the same stack and not stomp on the breakpoint data for
      the breakpoint it interrupted.
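      In rough C terms the check could look like this (a sketch of the
      mechanism described above; the helper names are illustrative):
      
      	/* on NMI entry, before any breakpoint can trigger */
      	if (unlikely(is_debug_stack(regs->sp)))
      		debug_stack_set_zero();	/* IDT with no IST switch for #DB/int3 */
      
      	/* ... NMI work that may hit breakpoints ... */
      
      	debug_stack_reset();		/* restore the original IDT */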
      Suggested-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • x86: Add workaround to NMI iret woes · 3f3c8b8c
      Steven Rostedt authored
      In x86, when an NMI goes off, the CPU goes into an NMI context that
      prevents other NMIs from triggering on that CPU. If another NMI is
      supposed to trigger, it has to wait until the previous NMI leaves NMI
      context. At that time, the next NMI can trigger (note, only one more
      NMI will trigger, as only one can be latched at a time).
      
      The way x86 gets out of NMI context is by calling iret. The problem
      is that this causes trouble if the NMI handler triggers either an
      exception or a breakpoint, since both the exception and the
      breakpoint handlers will finish with an iret. If this happens while
      in NMI context, the CPU will leave NMI context and a new NMI may come
      in. As NMI handlers are not made to be re-entrant, this can cause
      havoc with the system, not to mention that the nested NMI will write
      all over the previous NMI's stack.
      
      Linus Torvalds proposed the following workaround to this problem:
      
      https://lkml.org/lkml/2010/7/14/264
      
      "In fact, I wonder if we couldn't just do a software NMI disable
      instead? Have a per-cpu variable (in the _core_ percpu areas that get
      allocated statically) that points to the NMI stack frame, and just
      make the NMI code itself do something like
      
       NMI entry:
       - load percpu NMI stack frame pointer
       - if non-zero we know we're nested, and should ignore this NMI:
          - we're returning to kernel mode, so return immediately by using
      "popf/ret", which also keeps NMI's disabled in the hardware until the
      "real" NMI iret happens.
          - before the popf/iret, use the NMI stack pointer to make the NMI
      return stack be invalid and cause a fault
        - set the NMI stack pointer to the current stack pointer
      
       NMI exit (not the above "immediate exit because we nested"):
         clear the percpu NMI stack pointer
         Just do the iret.
      
      Now, the thing is, now the "iret" is atomic. If we had a nested NMI,
      we'll take a fault, and that re-does our "delayed" NMI - and NMI's
      will stay masked.
      
      And if we didn't have a nested NMI, that iret will now unmask NMI's,
      and everything is happy."
      
      I first tried to follow this advice but as I started implementing this
      code, a few gotchas showed up.
      
      One, is accessing per-cpu variables in the NMI handler.
      
      The problem is that per-cpu variables use the %gs register to get the
      variable for the given CPU. But as the NMI may happen in userspace,
      we must first perform a SWAPGS to get to it. The NMI handler already
      does this later in the code, but it's too late by then, as we have
      saved off all the registers and we don't want to do that for a
      disabled NMI.
      
      Peter Zijlstra suggested to keep all variables on the stack. This
      simplifies things greatly and it has the added benefit of cache locality.
      
      Two, faulting on the iret.
      
      I really wanted to make this work, but it was becoming very hacky, and
      I never got it to be stable. The iret already had a fault handler for
      userspace faulting with bad segment registers, and getting NMI to trigger
      a fault and detect it was very tricky. But for strange reasons, the system
      would usually take a double fault and crash. I never figured out why
      and decided to go with a simple "jmp" approach. The new approach I took
      also simplified things.
      
      Finally, the last problem with Linus's approach was to have the nested
      NMI handler do a ret instead of an iret to give the first NMI NMI-context
      again.
      
      The problem is that ret is much more limited than an iret. I couldn't figure
      out how to get the stack back where it belonged. I could have copied the
      current stack, pushed the return onto it, but my fear here is that there
      may be some place that writes data below the stack pointer. I know that
      is not something code should depend on, but I don't want to chance it.
      I may add this feature later, but for now, an NMI handler that loses NMI
      context will not get it back.
      
      Here's what is done:
      
      When an NMI comes in, the HW pushes the interrupt stack frame onto the
      per cpu NMI stack that is selected by the IST.
      
      A special location on the NMI stack holds a variable that is set when
      the first NMI handler runs. If this variable is set then we know that
      this is a nested NMI and we process the nested NMI code.
      
      There is still a race when this variable is cleared and an NMI comes
      in just before the first NMI does the return. For this case, if the
      variable is cleared, we also check if the interrupted stack is the
      NMI stack. If it is, then we process the nested NMI code.
      
      Why the two tests and not just test the interrupted stack?
      
      Suppose the first NMI hits a breakpoint and loses NMI context, and
      then hits another breakpoint; while processing that breakpoint we get
      a nested NMI. When processing a breakpoint, the stack changes to the
      breakpoint stack, so if another NMI comes in here we can't rely on
      the interrupted stack being the NMI stack.
      
      If the variable is not set and the interrupted task's stack is not the
      NMI stack, then we know this is the first NMI and we can process things
      normally. But in order to do so, we need to do a few things first.
      
      1) Set the stack variable that tells us that we are in an NMI handler
      
      2) Make two copies of the interrupt stack frame.
         One copy is used to return on iret
         The other is used to restore the first one if we have a nested NMI.
      
      This is what the stack will look like:
      
      	  +-------------------------+
      	  | original SS             |
      	  | original Return RSP     |
      	  | original RFLAGS         |
      	  | original CS             |
      	  | original RIP            |
      	  +-------------------------+
      	  | temp storage for rdx    |
      	  +-------------------------+
      	  | NMI executing variable  |
      	  +-------------------------+
      	  | Saved SS                |
      	  | Saved Return RSP        |
      	  | Saved RFLAGS            |
      	  | Saved CS                |
      	  | Saved RIP               |
      	  +-------------------------+
      	  | copied SS               |
      	  | copied Return RSP       |
      	  | copied RFLAGS           |
      	  | copied CS               |
      	  | copied RIP              |
      	  +-------------------------+
      	  | pt_regs                 |
      	  +-------------------------+
      
      The original stack frame contains what the HW put in when we entered
      the NMI.
      
      We store %rdx as a temp variable to use. Both the original HW stack
      frame and this %rdx storage will be clobbered by nested NMIs so we
      can not rely on them later in the first NMI handler.
      
      The next item is the special stack variable that is set when we execute
      the rest of the NMI handler.
      
      Then we have two copies of the interrupt stack. The second copy is
      modified by any nested NMIs to let the first NMI know that we triggered
      a second NMI (latched) and that we should repeat the NMI handler.
      
      If the first NMI hits an exception or breakpoint that takes it out of
      NMI context, and a second NMI comes in before the first one finishes,
      the second NMI will update the copied interrupt stack to point to a
      fixup location that triggers another NMI.
      
      When the first NMI calls iret, it will instead jump to the fixup
      location. This fixup location will copy the saved interrupt stack
      back to the copy and execute the NMI handler again.
      
      Note, the nested NMI knows enough to check whether it preempted a
      previous NMI handler while that handler is in the fixup location. If
      it has, it will not modify the copied interrupt stack and will just
      leave as if nothing happened. As the NMI handler is about to execute
      again, there's no reason to latch now.
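      The real implementation is x86-64 assembly in the NMI entry code,
      but the entry-time checks sketch out like this in C-flavored
      pseudocode (every name here is illustrative, not the actual code):
      
      	/* %rdx has already been parked in its temp slot */
      	if (nmi_executing_var_is_set())		/* first test */
      		goto nested;
      	if (interrupted_rsp_within_nmi_stack())	/* second test: the race */
      		goto nested;
      
      	set_nmi_executing_var();
      	make_saved_and_copied_frames();		/* the two copies above */
      	/* ... run the handler, iret from the "copied" frame ... */
      
      nested:
      	if (!first_nmi_in_fixup_path())
      		point_copied_frame_at_fixup();	/* latch: repeat the handler */
      	/* return without touching anything else */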
      
      To test all this, I forced the NMI handler to call iret and take
      itself out of NMI context. I also added assembly code to write to the
      serial port to make sure that it hits the nested path as well as the
      fixup path. Everything seems to be working fine.
      
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Paul Turner <pjt@google.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • x86: Document the NMI handler about not using paranoid_exit · 1fd466ef
      Steven Rostedt authored
      Linus cleaned up the NMI handler but it still needs some comments to
      explain why it uses save_paranoid but not paranoid_exit. Just to keep
      others from adding that in the future, document why it's not used.
      
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andi Kleen <andi@firstfloor.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • x86: Do not schedule while still in NMI context · 549c89b9
      Linus Torvalds authored
      The NMI handler uses the paranoid_exit routine that checks the
      NEED_RESCHED flag, and if it is set and the return is for userspace,
      then interrupts are enabled, the stack is swapped to the thread's stack,
      and schedule is called. The problem with this is that we are still in an
      NMI context until an iret is executed. This means that any new NMIs are
      now starved until an interrupt or exception occurs and does the iret.
      
      As NMIs can not be masked and can interrupt any location, they are
      treated as a special case. NEED_RESCHED should not be set in an NMI
      handler. The interruption by the NMI should not disturb the work flow
      for scheduling. Any IPI sent to a processor after setting
      NEED_RESCHED would have to wait for the NMI anyway, and after the IPI
      finishes, schedule would be called as required.
      
      There is no reason to do anything special leaving an NMI. Remove the
      call to paranoid_exit and do a simple return. This not only fixes the
      bug of starved NMIs, but it also cleans up the code.
      
      Link: http://lkml.kernel.org/r/CA+55aFzgM55hXTs4griX5e9=v_O+=ue+7Rj0PTD=M7hFYpyULQ@mail.gmail.com
      Acked-by: Andi Kleen <ak@linux.intel.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: "H. Peter Anvin" <hpa@linux.intel.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Paul Turner <pjt@google.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Factorize filter creation · 38b78eb8
      Tejun Heo authored
      There are four places where new filter for a given filter string is
      created, which involves several different steps.  This patch factors
      those steps into create_[system_]filter() functions which in turn make
      use of create_filter_{start|finish}() for common parts.
      
      The only functional change is that if replace_filter_string() is
      requested and fails, creation fails without any side effect instead of
      being ignored.
      
      Note that the system filter is now installed after the processing is
      complete, which makes freeing it beforehand and then restoring the
      filter string on error unnecessary.
      
      -v2: Rebased to resolve conflict with 49aa2951 and updated both
           create_filter() functions to always set *filterp instead of
           requiring the caller to clear it to %NULL on entry.
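      
      The factored-out helpers end up with roughly this shape (a sketch;
      treat the exact signatures as illustrative):
      
      	static int create_filter(struct ftrace_event_call *call,
      				 char *filter_str, bool set_str,
      				 struct event_filter **filterp);
      
      	static int create_system_filter(struct event_subsystem *system,
      					char *filter_str,
      					struct event_filter **filterp);
      
      Both are built on the shared create_filter_start()/create_filter_finish()
      pair for the common setup and teardown steps.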
      
      Link: http://lkml.kernel.org/r/1323988305-1469-2-git-send-email-tj@kernel.org
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Have stack tracing set filtered functions at boot · 762e1207
      Steven Rostedt authored
      Add stacktrace_filter= to the kernel command line that lets
      the user pick specific functions to check the stack on.
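      For example, something like the following could be appended to the
      kernel command line (the function names are only examples; the stack
      tracer itself is enabled separately, e.g. via the stacktrace boot
      option):
      
      	stacktrace_filter=kmem_cache_alloc,kmem_cache_free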
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Allow access to the boot time function enabling · 2a85a37f
      Steven Rostedt authored
      Change set_ftrace_early_filter() to ftrace_set_early_filter()
      and make it a global function. This will allow other subsystems
      in the kernel to be able to enable function tracing at start
      up and reuse the ftrace function parsing code.
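      The exposed function has roughly this shape (the signature is shown
      as a sketch, not verbatim):
      
      	/* parse a boot-time filter string into the given ops' hash */
      	void __init ftrace_set_early_filter(struct ftrace_ops *ops,
      					    char *buf, int enable);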
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Have stack_tracer use a separate list of functions · d2d45c7a
      Steven Rostedt authored
      The stack_tracer is used to look at every function and check
      if the current stack is bigger than the last recorded max stack size.
      When a new max is found, it saves that stack off.
      
      Currently the stack tracer is limited by the global_ops of
      the function tracer. As the stack tracer has nothing to do with
      the ftrace function tracer, except that it uses it as its internal
      engine, the stack tracer should have its own list.
      
      A new file is added to the tracing debugfs directory called:
      
        stack_trace_filter
      
      that can be used to select which functions you want to check the stack
      on.
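      Usage mirrors set_ftrace_filter, assuming debugfs is mounted at
      /sys/kernel/debug (the function name here is only an example):
      
      	# echo kmem_cache_alloc > /sys/kernel/debug/tracing/stack_trace_filter
      	# cat /sys/kernel/debug/tracing/stack_trace_filter
      	kmem_cache_alloc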
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Decouple hash items from showing filtered functions · 69a3083c
      Steven Rostedt authored
      The set_ftrace_filter shows "hashed" functions, which are functions
      that are added with operations to them (like traceon and traceoff).
      
      As other subsystems may be able to show what functions they are
      using for function tracing, the hash items should no longer be
      shown just because the FILTER flag is set, as they have nothing
      to do with those other subsystems' filters.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Allow other users of function tracing to use the output listing · fc13cb0c
      Steven Rostedt authored
      The function tracer is set up to allow any other subsystem (like perf)
      to use it. Ftrace already has a way to list what functions are enabled
      by the global_ops. It would be very helpful to let other users of
      the function tracer use the same code.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Create ftrace_hash_empty() helper routine · 06a51d93
      Steven Rostedt authored
      There are two types of hashes in the ftrace_ops; one type
      is the filter_hash and the other is the notrace_hash. Either
      one may be null, meaning it has no elements. But when elements
      are added, the hash is allocated.
      
      Throughout the code, a check needs to be made to see if a hash
      exists or has elements, but the check for whether the hash exists
      is usually missing, opening the door to possible NULL pointer
      dereference bugs.
      
      Add a helper routine called "ftrace_hash_empty()" that returns
      true if the hash doesn't exist or its count is zero, as the two
      mean the same thing.
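      The helper itself is essentially a one-liner:
      
      	/* an unallocated hash and an empty hash mean the same thing */
      	static bool ftrace_hash_empty(struct ftrace_hash *hash)
      	{
      		return !hash || !hash->count;
      	}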
      Last-bug-reported-by: Jiri Olsa <jolsa@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Fix ftrace hash record update with notrace · c842e975
      Steven Rostedt authored
      When disabling the "notrace" records, that means we want to trace them.
      If the notrace_hash is zero, it means that we want to trace all
      records. But to disable a zero notrace_hash means nothing.
      
      The check for the notrace_hash count was incorrect with:
      
      	if (hash && !hash->count)
      		return;
      
      The comment above it correctly states that we do nothing if the
      notrace_hash has a zero count. But !hash also means that the
      notrace hash has a zero count. I think the check was written this
      way to protect against dereferencing NULL. But if !hash is true,
      then we go through the following loop without doing a single thing.
      
      Fix it to:
      
      	if (!hash || !hash->count)
      		return;
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Use bsearch to find record ip · 5855fead
      Steven Rostedt authored
      Now that each set of pages in the function list is sorted by
      ip, we can use bsearch to find a record within each set of pages.
      This speeds up the ftrace_location() function by orders of magnitude.
      
      For archs (like x86) that need to add a breakpoint at every function
      that will be converted from a nop to a callback and vice versa,
      the breakpoint callback needs to know if the breakpoint was for
      ftrace or not. It requires finding the breakpoint ip within the
      records. Doing a linear search is extremely inefficient. It is
      a must to be able to do a fast binary search to find these locations.
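      The lookup then reduces to a standard bsearch over each page's
      sorted records; a sketch (the comparison helper name is
      illustrative):
      
      	static int ftrace_cmp_recs(const void *a, const void *b)
      	{
      		const struct dyn_ftrace *reca = a, *recb = b;
      
      		if (reca->ip > recb->ip)
      			return 1;
      		if (reca->ip < recb->ip)
      			return -1;
      		return 0;
      	}
      
      	/* inside ftrace_location(), for each pg in the function list */
      	key.ip = ip;
      	rec = bsearch(&key, pg->records, pg->index,
      		      sizeof(struct dyn_ftrace), ftrace_cmp_recs);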
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Sort the mcount records on each page · 68950619
      Steven Rostedt authored
      Sort the records by the ip locations of the ftrace mcount calls on
      each of the set of pages in the function list. This helps localize
      cache usage when updating the function locations, and gives us the
      ability to quickly find an ip location in the list.
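      A sketch of the sort step using the kernel's generic sort() (the
      comparison and swap helper names are illustrative):
      
      	/* when the mcount records are set up, for each pg in the list */
      	sort(pg->records, pg->index, sizeof(struct dyn_ftrace),
      	     ftrace_cmp_ips, ftrace_swap_recs);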
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>