1. 21 Oct, 2010 2 commits
    • perf probe: Fix type searching · 4046b8bb
      Masami Hiramatsu authored
      Fix to get the actual type DIE of variables by using dwarf_attr_integrate(),
      which gets the attribute from a DIE even if the type DIE is connected via
      DW_AT_abstract_origin.
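
      A minimal sketch of the difference, assuming elfutils libdw; the helper
      below is illustrative and not the perf-probe code itself:

      #include <dwarf.h>
      #include <elfutils/libdw.h>

      /* Resolve the type DIE of a variable DIE.  dwarf_attr() would miss
       * DW_AT_type on a DIE that only carries DW_AT_abstract_origin, while
       * dwarf_attr_integrate() follows that link for us. */
      static Dwarf_Die *get_type_die(Dwarf_Die *var_die, Dwarf_Die *die_mem)
      {
              Dwarf_Attribute attr;

              if (dwarf_attr_integrate(var_die, DW_AT_type, &attr) == NULL)
                      return NULL;

              return dwarf_formref_die(&attr, die_mem);
      }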
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      LKML-Reference: <20101021101302.3542.38549.stgit@ltc236.sdl.hitachi.co.jp>
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • tracing: Cleanup the convoluted softirq tracepoints · f4bc6bb2
      Thomas Gleixner authored
      With the addition of trace_softirq_raise() the softirq tracepoint got
      even more convoluted. Why the tracepoints take two pointers to assign
      an integer is beyond my comprehension.
      
      But adding an extra case that treats the first pointer as an unsigned
      long when the second pointer is NULL, including the back-and-forth
      type casting, is just horrible.
      
      Convert the softirq tracepoints to take a single unsigned int argument
      for the softirq vector number and fix the call sites.
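
      Roughly, and assuming the post-patch prototype really is a single
      unsigned int, a call site changes along these lines (a sketch, not the
      exact hunk):

      static void sketch_softirq_dispatch(struct softirq_action *h,
                                          struct softirq_action *softirq_vec)
      {
              unsigned int vec_nr = h - softirq_vec;

              /* before: trace_softirq_entry(h, softirq_vec); */
              trace_softirq_entry(vec_nr);
              h->action(h);
              /* before: trace_softirq_exit(h, softirq_vec); */
              trace_softirq_exit(vec_nr);
      }
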
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      LKML-Reference: <alpine.LFD.2.00.1010191428560.6815@localhost6.localdomain6>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Acked-by: mathieu.desnoyers@efficios.com
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
  2. 19 Oct, 2010 6 commits
  3. 18 Oct, 2010 21 commits
    • ftrace: Remove recursion between recordmcount and scripts/mod/empty · d7b4d6de
      Steven Rostedt authored
      When DYNAMIC_FTRACE is enabled and we use the C version of recordmcount,
      all objects are run through the recordmcount program to create a
      separate section that stores all the callers of mcount.
      
      The build process has a special file: scripts/mod/empty.o. This is
      built from empty.c which is literally an empty file (except for a
      single comment). This file is used to find information about the target
      elf format, like endianness and word size.
      
      The problem comes up when we need to build recordmcount. The
      build process requires that empty.o is built first. The build rules
      for empty.o will try to execute recordmcount on the empty.o file.
      We get an error that recordmcount does not exist.
      
      To avoid this recursion, the build file will skip running recordmcount
      if the file that it is building is scripts/mod/empty.o.
      
      [ extra comment Suggested-by: Sam Ravnborg <sam@ravnborg.org> ]
      Reported-by: Ingo Molnar <mingo@elte.hu>
      Tested-by: Ingo Molnar <mingo@elte.hu>
      Cc: Michal Marek <mmarek@suse.cz>
      Cc: linux-kbuild@vger.kernel.org
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • jump_label: Add COND_STMT(), reducer wrappery · ebf31f50
      Peter Zijlstra authored
      The use of the JUMP_LABEL() construct ends up creating endless silly
      wrappers; create a higher-level construct to reduce this clutter.
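
      A rough sketch of the shape of such a construct (the macro in the actual
      patch may differ in detail): the statement only runs when the jump label
      is enabled, and every user writes COND_STMT() instead of hand-rolling a
      wrapper per call site.

      #define COND_STMT(key, stmt)                                    \
      do {                                                            \
              __label__ jl_enabled;                                   \
              JUMP_LABEL(key, jl_enabled);                            \
              if (0) {                                                \
      jl_enabled:                                                     \
                      stmt;                                           \
              }                                                       \
      } while (0)
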
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Optimize sw events · 7e54a5a0
      Peter Zijlstra authored
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Use jump_labels to optimize the scheduler hooks · 82cd6def
      Peter Zijlstra authored
      Trades a call + conditional + ret for an unconditional jmp.
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <20101014203625.501657727@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • jump_label: Add atomic_t interface · 8b92538d
      Peter Zijlstra authored
      Add an interface to allow usage of jump_labels with atomic counters.
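
      A hedged sketch of what such an interface can look like (the helper and
      enable/disable names are assumptions, not verified against the patch):
      the label flips on the 0->1 transition and off on the 1->0 transition
      of the counter.

      static inline void jump_label_inc(atomic_t *key)
      {
              if (atomic_add_return(1, key) == 1)
                      jump_label_enable(key);
      }

      static inline void jump_label_dec(atomic_t *key)
      {
              if (atomic_dec_and_test(key))
                      jump_label_disable(key);
      }
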
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <20101014203625.501657727@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • jump_label: Use more consistent naming · 3b6e901f
      Peter Zijlstra authored
      While there are still only a few users around, rename things to make
      them more consistent.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <20101014203625.448565169@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf, hw_breakpoint: Fix crash in hw_breakpoint creation · d580ff86
      Peter Zijlstra authored
      hw_breakpoint creation needs to account things per task to ensure there
      are always sufficient hardware resources to back these things, due to
      ptrace.
      
      With the perf per pmu context changes the event initialization no
      longer has access to the event context, for the simple reason that we
      need to first find the pmu (result of initialization) before we can
      find the context.
      
      This makes hw_breakpoints unhappy, because the code can no longer do
      per-task accounting; cure this by frobbing a task pointer into the
      event::hw bits for now...
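
      A hedged sketch of the frobbing; the field name bp_target is an
      assumption, not verified against the final patch:

      /*
       * Idea: add a task pointer to the breakpoint member of
       * struct hw_perf_event and fill it in at allocation time, before the
       * pmu's event_init() runs, so the hw_breakpoint constraint code can
       * still account per task even without a context.
       */
      static void stash_bp_target(struct perf_event *event,
                                  struct task_struct *task)
      {
              event->hw.bp_target = task;
      }
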
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <20101014203625.391543667@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Find task before event alloc · c6be5a5c
      Peter Zijlstra authored
      Find the task before the event allocation, so that we can pass the task
      pointer to the event allocation and use task-associated data during
      event initialization.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <20101014203625.340789919@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Fix task refcount bugs · e7d0bc04
      Peter Zijlstra authored
      Currently it looks like find_lively_task_by_vpid() takes a task ref
      and relies on find_get_context() to drop it.
      
      The problem is that perf_event_create_kernel_counter() shouldn't be
      dropping task refs.
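
      A hedged sketch of the ownership rule the fix enforces (the wrapper
      below is hypothetical and the signatures are simplified): whoever takes
      the task reference also drops it, and find_get_context() no longer
      consumes it.

      static struct perf_event_context *get_ctx_for(pid_t pid,
                                                    struct pmu *pmu, int cpu)
      {
              struct task_struct *task;
              struct perf_event_context *ctx;

              task = find_lively_task_by_vpid(pid);   /* takes a task ref */
              if (IS_ERR(task))
                      return ERR_CAST(task);

              ctx = find_get_context(pmu, task, cpu); /* must not drop it */

              put_task_struct(task);                  /* caller drops its ref */
              return ctx;
      }
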
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Matt Helsley <matthltc@us.ibm.com>
      LKML-Reference: <20101014203625.278436085@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Fix group moving · 74c3337c
      Peter Zijlstra authored
      Matt found we trigger the WARN_ON_ONCE() in perf_group_attach() when we take
      the move_group path in perf_event_open().
      
      Since we cannot de-construct the group (we rely on it to move the events), we
      have to simply ignore the double attach. The group state is context invariant
      and doesn't need changing.
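
      A hedged sketch of the resulting behaviour (simplified): a second attach
      of an already-grouped event is silently ignored instead of tripping the
      WARN_ON_ONCE().

      static void perf_group_attach(struct perf_event *event)
      {
              struct perf_event *group_leader = event->group_leader;

              /* The move_group path can attach an event twice; ignore it. */
              if (event->attach_state & PERF_ATTACH_GROUP)
                      return;

              event->attach_state |= PERF_ATTACH_GROUP;

              if (group_leader == event)
                      return;

              /* ...link the event into the leader's sibling list... */
      }
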
      Reported-by: Matt Fleming <matt@console-pimps.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1287135757.29097.1368.camel@twins>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • irq_work: Add generic hardirq context callbacks · e360adbe
      Peter Zijlstra authored
      Provide a mechanism that allows running code in IRQ context. It is
      most useful for NMI code that needs to interact with the rest of the
      system -- like waking up a task to drain buffers.
      
      Perf currently has such a mechanism, so extract that and provide it as
      a generic feature, independent of perf so that others may also
      benefit.
      
      The IRQ context callback is generated through self-IPIs where
      possible, or on architectures like powerpc the decrementer (the
      built-in timer facility) is set to generate an interrupt immediately.
      
      Architectures that don't have anything like this make do with a
      callback from the timer tick. These architectures can call
      irq_work_run() at the tail of any IRQ handlers that might enqueue such
      work (like the perf IRQ handler) to avoid undue latencies in
      processing the work.
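
      A hedged usage sketch of the new interface as described here, assuming
      the init/queue entry points keep these names:

      #include <linux/irq_work.h>

      /* Runs later in hardirq context, where waking tasks is allowed. */
      static void drain_wakeup(struct irq_work *work)
      {
              /* wake up whoever drains the buffers */
      }

      static struct irq_work drain_work;

      static void from_nmi_handler(void)
      {
              irq_work_queue(&drain_work);    /* safe even from NMI context */
      }

      static int __init drain_setup(void)
      {
              init_irq_work(&drain_work, drain_wakeup);
              return 0;
      }
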
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Kyle McMartin <kyle@mcmartin.ca>
      Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      [ various fixes ]
      Signed-off-by: Huang Ying <ying.huang@intel.com>
      LKML-Reference: <1287036094.7768.291.camel@yhuang-dev>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_events: Fix transaction recovery in group_sched_in() · 8e5fc1a7
      Stephane Eranian authored
      The group_sched_in() function uses a transactional approach to schedule
      a group of events. In a group, either all events can be scheduled or
      none are. To schedule each event in, the function calls event_sched_in().
      In case of error, event_sched_out() is called on each event in the group.
      
      The problem is that event_sched_out() does not completely cancel the
      effects of event_sched_in(). Furthermore, event_sched_out() changes the
      state of the event as if it had run, which is not true in this
      particular case.
      
      Those inconsistencies impact time tracking fields and may lead to events
      in a group not all reporting the same time_enabled and time_running values.
      This is demonstrated with the example below:
      
      $ task -eunhalted_core_cycles,baclears,baclears -e unhalted_core_cycles,baclears,baclears sleep 5
      1946101 unhalted_core_cycles (32.85% scaling, ena=829181, run=556827)
        11423 baclears (32.85% scaling, ena=829181, run=556827)
         7671 baclears (0.00% scaling, ena=556827, run=556827)
      
      2250443 unhalted_core_cycles (57.83% scaling, ena=962822, run=405995)
        11705 baclears (57.83% scaling, ena=962822, run=405995)
        11705 baclears (57.83% scaling, ena=962822, run=405995)
      
      Notice that in the first group, the last baclears event does not
      report the same timings as its siblings.
      
      This issue comes from the fact that tstamp_stopped is updated
      by event_sched_out() as if the event had actually run.
      
      To solve the issue, we must ensure that, in case of error, there is
      no change in the event state whatsoever. That means timings must
      remain as they were when entering group_sched_in().
      
      To do this we defer updating tstamp_running until we know the
      transaction succeeded. Therefore, we have split event_sched_in() into
      two parts, separating out the update to tstamp_running.
      
      Similarly, in case of error, we do not want to update tstamp_stopped.
      Therefore, we have split event_sched_out() into two parts, separating
      out the update to tstamp_stopped.
      
      With this patch, we now get the following output:
      
      $ task -eunhalted_core_cycles,baclears,baclears -e unhalted_core_cycles,baclears,baclears sleep 5
      2492050 unhalted_core_cycles (71.75% scaling, ena=1093330, run=308841)
        11243 baclears (71.75% scaling, ena=1093330, run=308841)
        11243 baclears (71.75% scaling, ena=1093330, run=308841)
      
      1852746 unhalted_core_cycles (0.00% scaling, ena=784489, run=784489)
         9253 baclears (0.00% scaling, ena=784489, run=784489)
         9253 baclears (0.00% scaling, ena=784489, run=784489)
      
      Note that the uneven timing between groups is a side effect of
      the process spending most of its time sleeping, i.e., not enough
      event rotations (but that's a separate issue).
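
      A hedged illustration of the two-phase idea described above; the helper
      names are hypothetical, and the real patch splits event_sched_in() and
      event_sched_out() rather than iterating over an array:

      static int sched_in_group(struct perf_event **ev, int n)
      {
              int i, j;

              /* phase 1: schedule every member, without touching
               * tstamp_running yet */
              for (i = 0; i < n; i++) {
                      if (pmc_sched_in_nocommit(ev[i])) {
                              /* roll back only what was scheduled, and do
                               * not update tstamp_stopped while doing so */
                              for (j = 0; j < i; j++)
                                      pmc_cancel_sched_in(ev[j]);
                              return -EAGAIN;
                      }
              }

              /* phase 2: the transaction succeeded, commit tstamp_running */
              for (i = 0; i < n; i++)
                      pmc_commit_tstamp(ev[i]);

              return 0;
      }
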
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <4cb86b4c.41e9d80a.44e9.3e19@mx.google.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_events: Fix bogus AMD64 generic TLB events · ba0cef3d
      Stephane Eranian authored
      PERF_COUNT_HW_CACHE_DTLB:READ:MISS had a bogus umask value of 0 which
      counts nothing. Needed to be 0x7 (to count all possibilities).
      
      PERF_COUNT_HW_CACHE_ITLB:READ:MISS had a bogus umask value of 0 which
      counts nothing. Needed to be 0x3 (to count all possibilities).
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: <stable@kernel.org> # as far back as it applies
      LKML-Reference: <4cb85478.41e9d80a.44e2.3f00@mx.google.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_events: Fix bogus context time tracking · c530ccd9
      Stephane Eranian authored
      You can only call update_context_time() when the context
      is active, i.e., the thread it is attached to is still running.
      
      However, perf_event_read() can be called even when the context
      is inactive, e.g., when user space read()s the counters. The call to
      update_context_time() must be conditional on the status of
      the context, otherwise bogus time_enabled and time_running values may
      be returned. Here is an example on AMD64. The task program
      is an example from libpfm4. The -p option prints deltas every 1s.
      
      $ task -p -e cpu_clk_unhalted sleep 5
          2,266,610 cpu_clk_unhalted (0.00% scaling, ena=2,158,982, run=2,158,982)
      	    0 cpu_clk_unhalted (0.00% scaling, ena=2,158,982, run=2,158,982)
      	    0 cpu_clk_unhalted (0.00% scaling, ena=2,158,982, run=2,158,982)
      	    0 cpu_clk_unhalted (0.00% scaling, ena=2,158,982, run=2,158,982)
      	    0 cpu_clk_unhalted (0.00% scaling, ena=2,158,982, run=2,158,982)
      5,242,358,071 cpu_clk_unhalted (99.95% scaling, ena=5,000,359,984, run=2,319,270)
      
      Whereas if you don't read deltas, e.g., no call to perf_event_read() until
      the process terminates:
      
      $ task -e cpu_clk_unhalted sleep 5
          2,497,783 cpu_clk_unhalted (0.00% scaling, ena=2,376,899, run=2,376,899)
      
      Notice that time_enabled and time_running are bogus in the first
      example, causing bogus scaling.
      
      This patch fixes the problem, by conditionally calling update_context_time()
      in perf_event_read().
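
      A hedged sketch of the conditional call, simplified from the description
      above (the surrounding locking and read path are elided in the real
      function):

      static void update_times_for_read(struct perf_event *event)
      {
              struct perf_event_context *ctx = event->ctx;
              unsigned long flags;

              raw_spin_lock_irqsave(&ctx->lock, flags);
              /* Context time only advances while the context runs, so only
               * fold the delta in when the context is actually active. */
              if (ctx->is_active)
                      update_context_time(ctx);
              update_event_times(event);
              raw_spin_unlock_irqrestore(&ctx->lock, flags);
      }
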
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: stable@kernel.org
      LKML-Reference: <4cb856dc.51edd80a.5ae0.38fb@mx.google.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • tracing: Remove parent recording in latency tracer graph options · 78c89ba1
      Steven Rostedt authored
      Even though the parent is recorded with the normal function tracing
      of the latency tracers (irqsoff and wakeup), the function graph
      recording is bogus.
      
      This is due to the function graph tracer messing with the return stack.
      The latency tracers pass in CALLER_ADDR0 as the parent, which
      works fine for plain function tracing. But this causes bogus output
      with the graph tracer:
      
       3)    <idle>-0    |  d.s3.  0.000 us    |  return_to_handler();
       3)    <idle>-0    |  d.s3.  0.000 us    |  _raw_spin_unlock_irqrestore();
       3)    <idle>-0    |  d.s3.  0.000 us    |  return_to_handler();
       3)    <idle>-0    |  d.s3.  0.000 us    |  trace_hardirqs_on();
      
      The "return_to_handle()" call is the trampoline of the
      function graph tracer, and is meaningless in this context.
      
      Cc: Jiri Olsa <jolsa@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Use one prologue for the preempt irqs off tracer function tracers · 5e6d2b9c
      Steven Rostedt authored
      The preempt and irqsoff tracers have three types of function tracers:
      the normal function tracer, function graph entry, and function graph
      return. Each of these uses a complex dance to prevent recursion and to
      decide whether or not to trace the data (depending on whether interrupts
      are enabled).
      
      This patch moves the duplicate code into a single routine, to
      prevent future mistakes with modifying duplicate complex code.
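
      A hedged sketch of the kind of shared prologue this introduces; the name
      and the exact checks are assumptions written from the description above,
      not copied from the patch:

      static int func_prolog(struct trace_array *tr,
                             struct trace_array_cpu **data,
                             unsigned long *flags)
      {
              long disabled;
              int cpu;

              /* Only trace when irqs/preemption are actually off and this
               * CPU is the one being timed. */
              if (!irqs_disabled() && !preempt_count())
                      return 0;

              cpu = raw_smp_processor_id();
              if (likely(!per_cpu(tracing_cpu, cpu)))
                      return 0;

              local_save_flags(*flags);
              *data = tr->data[cpu];
              disabled = atomic_inc_return(&(*data)->disabled);
              if (likely(disabled == 1))
                      return 1;       /* caller may record the event */

              atomic_dec(&(*data)->disabled); /* recursion: back off */
              return 0;
      }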
      
      Cc: Jiri Olsa <jolsa@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Use one prologue for the wakeup tracer function tracers · 542181d3
      Steven Rostedt authored
      The wakeup tracer has three types of function tracers: the normal
      function tracer, function graph entry, and function graph return.
      Each of these uses a complex dance to prevent recursion and to decide
      whether or not to trace the data (depending on the wake_task variable).
      
      This patch moves the duplicate code into a single routine, to
      prevent future mistakes with modifying duplicate complex code.
      
      Cc: Jiri Olsa <jolsa@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Graph support for wakeup tracer · 7495a5be
      Jiri Olsa authored
      Add function graph support for wakeup latency tracer.
      The graph output is enabled by setting the 'display-graph'
      trace option.
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      LKML-Reference: <1285243253-7372-4-git-send-email-jolsa@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Make graph related irqs/preemptsoff functions global · 0a772620
      Jiri Olsa authored
      Move the trace_graph_function() and print_graph_headers_flags()
      functions to trace_functions_graph.c to make them globally available.
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      LKML-Reference: <1285243253-7372-3-git-send-email-jolsa@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Add proper check for irq_depth routines · a9d61173
      Jiri Olsa authored
      The check_irq_entry and check_irq_return functions can be called
      from graph event context. In such a case there is no graph
      private data allocated. Add checks to handle this case.
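
      A hedged sketch of the added guard (return value and the surrounding
      irq-depth bookkeeping are simplified):

      static int check_irq_entry(struct trace_iterator *iter, u32 flags,
                                 unsigned long addr, int depth)
      {
              struct fgraph_data *data = iter->private;

              /* In graph event context no private data is allocated. */
              if (!data)
                      return 0;

              /* ...the original irq-depth tracking continues here... */
              return 0;
      }
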
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      LKML-Reference: <20100924154102.GB1818@jolsa.brq.redhat.com>
      
      [ Fixed some grammar in the comments ]
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing/trivial: Remove cast from void* · 907f2784
      matt mooney authored
      Remove an unnecessary cast from void* in an assignment.
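
      For illustration only (not the actual hunk), this is the pattern being
      cleaned up; in C a void * converts to any object pointer type without a
      cast:

      #include <linux/slab.h>

      struct foo {
              int x;
      };

      static struct foo *alloc_foo(void)
      {
              /* before: return (struct foo *)kmalloc(sizeof(struct foo), GFP_KERNEL); */
              return kmalloc(sizeof(struct foo), GFP_KERNEL);
      }
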
      Signed-off-by: matt mooney <mfm@muteddisk.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  4. 16 Oct, 2010 2 commits
  5. 15 Oct, 2010 9 commits
    • ftrace: Use objtree for C version of recordmcount · 85caa993
      Steven Rostedt authored
      The C version of recordmcount is compiled to a binary, which ends
      up located in the objtree. If the kernel is built with O=path,
      the srctree will not contain the recordmcount binary, so the build
      must invoke it from the objtree.
      
      Cc: Michal Marek <mmarek@suse.cz>
      Cc: linux-kbuild@vger.kernel.org
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Do not process kernel/trace/ftrace.o with C recordmcount program · 44475863
      Steven Rostedt authored
      The file kernel/trace/ftrace.c references the mcount() call to
      convert the mcount() callers to nops. But because it references
      mcount(), the mcount() address is placed in the relocation table.
      
      The C version of recordmcount reads the relocation table of all
      object files, and it will add all references to mcount to the
      __mcount_loc table that is used to find the places that call mcount()
      and change the call to a nop. When recordmcount finds the mcount reference
      in kernel/trace/ftrace.o, it saves that location even though the code
      is not a call, but references mcount as data.
      
      On boot up, when all calls are converted to nops, the code has a safety
      check to determine what op code it is actually replacing before it
      replaces it. If that op code at the address does not match, then
      a warning is printed and the function tracer is disabled.
      
      The reference to mcount in ftrace.c causes this warning to trigger,
      since the reference is not a call to mcount(). The ftrace.c file is
      not compiled with the -pg flag, so no calls to mcount() should be
      expected.
      
      This patch simply makes recordmcount.c skip the kernel/trace/ftrace.c
      file. This was the same solution used by the perl version of
      recordmcount.
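
      A hedged sketch of the idea (the real check in recordmcount.c is likely
      written differently): skip the one object file that references mcount as
      data rather than calling it.

      #include <string.h>

      static int should_skip_object(const char *fname)
      {
              /* ftrace.c references mcount as data, not as a call site */
              return strcmp(fname, "kernel/trace/ftrace.o") == 0;
      }
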
      Reported-by: Ingo Molnar <mingo@elte.hu>
      Cc: John Reiser <jreiser@bitwagon.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • oprofile: make !CONFIG_PM function stubs static inline · cd254f29
      Robert Richter authored
      Make the !CONFIG_PM function stubs static inline and remove the
      section attribute.
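
      A hedged sketch of the resulting pattern (simplified; the stub names
      follow the functions mentioned in the next commit below):

      #ifndef CONFIG_PM
      static inline int init_driverfs(void) { return 0; }
      static inline void exit_driverfs(void) { }
      #endif
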
      Signed-off-by: Robert Richter <robert.richter@amd.com>
    • oprofile: fix linker errors · b3b3a9b6
      Anand Gadiyar authored
      Commit e9677b3c (oprofile, ARM: Use oprofile_arch_exit() to
      cleanup on failure) caused oprofile_perf_exit to be called
      in the cleanup path of oprofile_perf_init. The __exit tag
      for oprofile_perf_exit should therefore be dropped.
      
      The same has to be done for exit_driverfs as well, as this
      function is called from oprofile_perf_exit. Otherwise, we get
      the following two linker errors.
      
        LD      .tmp_vmlinux1
      `oprofile_perf_exit' referenced in section `.init.text' of arch/arm/oprofile/built-in.o: defined in discarded section `.exit.text' of arch/arm/oprofile/built-in.o
      make: *** [.tmp_vmlinux1] Error 1
      
        LD      .tmp_vmlinux1
      `exit_driverfs' referenced in section `.text' of arch/arm/oprofile/built-in.o: defined in discarded section `.exit.text' of arch/arm/oprofile/built-in.o
      make: *** [.tmp_vmlinux1] Error 1
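
      A hedged sketch of the fix, with the function bodies elided: the __exit
      annotations are dropped so both symbols stay out of the discarded
      .exit.text section.

      /* was: static void __exit exit_driverfs(void) */
      static void exit_driverfs(void)
      {
              /* ...unregister the platform driver... */
      }

      /* was: void __exit oprofile_perf_exit(void) */
      void oprofile_perf_exit(void)
      {
              /* ...release the perf events, then call exit_driverfs()... */
      }
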
      Signed-off-by: Anand Gadiyar <gadiyar@ti.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Robert Richter <robert.richter@amd.com>
    • oprofile: include platform_device.h to fix build break · 277dd984
      Anand Gadiyar authored
      oprofile_perf.c needs to include platform_device.h.
      Otherwise we get the following build break:
      
        CC      arch/arm/oprofile/../../../drivers/oprofile/oprofile_perf.o
      arch/arm/oprofile/../../../drivers/oprofile/oprofile_perf.c:192: warning: 'struct platform_device' declared inside parameter list
      arch/arm/oprofile/../../../drivers/oprofile/oprofile_perf.c:192: warning: its scope is only this definition or declaration, which is probably not what you want
      arch/arm/oprofile/../../../drivers/oprofile/oprofile_perf.c:201: warning: 'struct platform_device' declared inside parameter list
      arch/arm/oprofile/../../../drivers/oprofile/oprofile_perf.c:210: error: variable 'oprofile_driver' has initializer but incomplete type
      arch/arm/oprofile/../../../drivers/oprofile/oprofile_perf.c:211: error: unknown field 'driver' specified in initializer
      arch/arm/oprofile/../../../drivers/oprofile/oprofile_perf.c:211: error: extra brace group at end of initializer
      arch/arm/oprofile/../../../drivers/oprofile/oprofile_perf.c:211: error: (near initialization for 'oprofile_driver')
      arch/arm/oprofile/../../../drivers/oprofile/oprofile_perf.c:213: warning: excess elements in struct initializer
      arch/arm/oprofile/../../../drivers/oprofile/oprofile_perf.c:213: warning: (near initialization for 'oprofile_driver')
      arch/arm/oprofile/../../../drivers/oprofile/oprofile_perf.c:214: error: unknown field 'resume' specified in initializer
      arch/arm/oprofile/../../../drivers/oprofile/oprofile_perf.c:214: warning: excess elements in struct initializer
      arch/arm/oprofile/../../../drivers/oprofile/oprofile_perf.c:214: warning: (near initialization for 'oprofile_driver')
      arch/arm/oprofile/../../../drivers/oprofile/oprofile_perf.c:215: error: unknown field 'suspend' specified in initializer
      arch/arm/oprofile/../../../drivers/oprofile/oprofile_perf.c:215: warning: excess elements in struct initializer
      arch/arm/oprofile/../../../drivers/oprofile/oprofile_perf.c:215: warning: (near initialization for 'oprofile_driver')
      arch/arm/oprofile/../../../drivers/oprofile/oprofile_perf.c: In function 'init_driverfs':
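
      The fix itself is the one-line include the message describes, near the
      top of oprofile_perf.c:

      #include <linux/platform_device.h>
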
      Signed-off-by: Anand Gadiyar <gadiyar@ti.com>
      Cc: Matt Fleming <matt@console-pimps.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Robert Richter <robert.richter@amd.com>
    • Merge remote branch 'tip/perf/core' into oprofile/core · 6268464b
      Robert Richter authored
      Conflicts:
      	arch/arm/oprofile/common.c
      	kernel/perf_event.c
    • Merge branch 'tip/perf/recordmcount-2' of... · 0fdf1360
      Ingo Molnar authored
      Merge branch 'tip/perf/recordmcount-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-2.6-trace into perf/core
    • ftrace: Rename config option HAVE_C_MCOUNT_RECORD to HAVE_C_RECORDMCOUNT · cf4db259
      Steven Rostedt authored
      The config option used by archs to let the build system know that
      the C version of recordmcount works for said arch is currently
      called HAVE_C_MCOUNT_RECORD, which enables BUILD_C_RECORDMCOUNT. To
      be more consistent with the name that all archs may use, it has been
      renamed to HAVE_C_RECORDMCOUNT. This will be less confusing since
      we are building a C recordmcount and not an mcount_record.
      Suggested-by: Ingo Molnar <mingo@elte.hu>
      Cc: <linux-arch@vger.kernel.org>
      Cc: Michal Marek <mmarek@suse.cz>
      Cc: linux-kbuild@vger.kernel.org
      Cc: John Reiser <jreiser@bitwagon.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • Merge branch 'perf/core' of... · d9d572a9
      Ingo Molnar authored
      Merge branch 'perf/core' of git://git.kernel.org/pub/scm/linux/kernel/git/frederic/random-tracing into perf/core