1. 30 May, 2014 3 commits
    • tracing: Have saved_cmdlines use the seq_read infrastructure · 42584c81
      Yoshihiro YUNOMAE authored
      The current tracing_saved_cmdlines_read() implementation is naive: it
      allocates a large buffer, constructs the output data in that buffer on
      every read operation, and then copies a portion of it to the user-space
      buffer. This has several issues, such as slow memory allocation, high
      CPU usage, and even corruption of the output data.
      
      The seq_read infrastructure is made to handle this type of work.
      By converting it to use seq_read(), the code becomes smaller, simpler,
      and correct.
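      
      As a rough sketch of the seq_file pattern this moves to (the scl_*
      helpers, SAVED_CMDLINES, saved_pid() and saved_comm() below are
      illustrative assumptions, not the exact code added by this commit):
      
      	#include <linux/fs.h>
      	#include <linux/seq_file.h>
      
      	static void *scl_start(struct seq_file *m, loff_t *pos)
      	{
      		return *pos < SAVED_CMDLINES ? pos : NULL;
      	}
      
      	static void *scl_next(struct seq_file *m, void *v, loff_t *pos)
      	{
      		(*pos)++;
      		return *pos < SAVED_CMDLINES ? pos : NULL;
      	}
      
      	static void scl_stop(struct seq_file *m, void *v)
      	{
      	}
      
      	static int scl_show(struct seq_file *m, void *v)
      	{
      		loff_t idx = *(loff_t *)v;
      
      		/* emit one saved pid/comm pair per record */
      		seq_printf(m, "%d %s\n", saved_pid(idx), saved_comm(idx));
      		return 0;
      	}
      
      	static const struct seq_operations scl_seq_ops = {
      		.start	= scl_start,
      		.next	= scl_next,
      		.stop	= scl_stop,
      		.show	= scl_show,
      	};
      
      	static int scl_open(struct inode *inode, struct file *file)
      	{
      		return seq_open(file, &scl_seq_ops);
      	}
      
      	static const struct file_operations saved_cmdlines_fops = {
      		.open		= scl_open,
      		.read		= seq_read,	/* seq_read() does the buffering and offset handling */
      		.llseek		= seq_lseek,
      		.release	= seq_release,
      	};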
      
      Link: http://lkml.kernel.org/p/20140220084431.3839.51793.stgit@yunodevel
      Signed-off-by: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Yoshihiro YUNOMAE <yoshihiro.yunomae.ez@hitachi.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      42584c81
    • tracing: Add tracepoint benchmark tracepoint · 81dc9f0e
      Steven Rostedt (Red Hat) authored
      In order to help benchmark the time tracepoints take, a new config
      option is added called CONFIG_TRACEPOINT_BENCHMARK. When this option
      is set a tracepoint is created called "benchmark:benchmark_event".
      When the tracepoint is enabled, it kicks off a kernel thread that
      goes into an infinite loop (calling cond_resched() to let other tasks
      run), and calls the tracepoint. Each iteration will record the time
      it took to write to the tracepoint and the next iteration that
      data will be passed to the tracepoint itself. That is, the tracepoint
      will report the time it took to do the previous tracepoint.
      The string written to the tracepoint is a static string of 128 bytes
      to keep the time the same. The initial string is simply a write of
      "START". The second string records the cold cache time of the first
      write, which is not added to the rest of the calculations.
      
      As it is a tight loop, it benchmarks as hot cache. That's fine because
      we care most about hot paths that are probably in cache already.
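      
      A rough sketch of the measuring loop the kthread runs (the exact
      bookkeeping is an assumption based on the description above;
      trace_benchmark_event() is the tracepoint named earlier):
      
      	static int benchmark_event_thread(void *arg)
      	{
      		/* fixed 128-byte buffer keeps the written size constant */
      		char str[128] = "START";
      		u64 start, delta;
      
      		while (!kthread_should_stop()) {
      			start = trace_clock_local();
      			trace_benchmark_event(str);	/* reports the previous iteration's timing */
      			delta = trace_clock_local() - start;
      
      			/* stash this iteration's timing; it is written out next time around */
      			snprintf(str, sizeof(str), "last=%llu ...",
      				 (unsigned long long)delta);
      
      			cond_resched();		/* let other tasks run */
      		}
      		return 0;
      	}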
      
      An example of the output:
      
           START
           first=3672 [COLD CACHED]
           last=632 first=3672 max=632 min=632 avg=316 std=446 std^2=199712
           last=278 first=3672 max=632 min=278 avg=303 std=316 std^2=100337
           last=277 first=3672 max=632 min=277 avg=296 std=258 std^2=67064
           last=273 first=3672 max=632 min=273 avg=292 std=224 std^2=50411
           last=273 first=3672 max=632 min=273 avg=288 std=200 std^2=40389
           last=281 first=3672 max=632 min=273 avg=287 std=183 std^2=33666
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      81dc9f0e
    • tracing: Print nasty banner when trace_printk() is in use · 2184db46
      Steven Rostedt authored
      trace_printk() is used to debug fast paths within the kernel: places
      that get called in any context (interrupt or NMI) or thousands of
      times a second. That is not something you want to do with a printk().
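      
      For reference, a typical (hypothetical) debugging call looks like this;
      the output lands in the ftrace ring buffer rather than the console:
      
      	static void my_hot_path(int val)
      	{
      		/* hypothetical example of a developer's debug statement */
      		trace_printk("my_hot_path: val=%d\n", val);
      	}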
      
      To make it completely lockless, as it needs a temporary buffer to
      handle some of the string formatting, a page is allocated per CPU for
      every context it can run in (four per CPU: normal, softirq, irq, NMI).
      
      Since trace_printk() should only be used for debugging purposes,
      there's no reason to waste memory on these buffers on a production
      system. That means trace_printk() should never be used unless a
      developer is debugging their kernel. Macro magic allocates the
      buffers only if trace_printk() is used anywhere in the kernel.
      
      To help enforce that trace_printk() isn't used outside of development,
      when it is used, a nasty banner is displayed on bootup (or when a module
      is loaded that uses trace_printk() and the kernel core does not).
      
      Here's the banner:
      
       **********************************************************
       **   NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE   **
       **                                                      **
       ** trace_printk() being used. Allocating extra memory.  **
       **                                                      **
       ** This means that this is a DEBUG kernel and it is     **
       ** unsafe for production use.                           **
       **                                                      **
       ** If you see this message and you are not debugging    **
       ** the kernel, report this immediately to your vendor!  **
       **                                                      **
       **   NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE   **
       **********************************************************
      
      That should hopefully keep developers from trying to sneak in a
      trace_printk() or two.
      
      Link: http://lkml.kernel.org/p/20140528131440.2283213c@gandalf.local.home
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      2184db46
  2. 21 May, 2014 2 commits
  3. 15 May, 2014 1 commit
    • tracing: Add __bitmask() macro to trace events to cpumasks and other bitmasks · 4449bf92
      Steven Rostedt (Red Hat) authored
      Being able to show a cpumask in events can be useful, as some events
      may affect only some CPUs. There is no standard way to record a
      cpumask, and converting it to a string is rather expensive to do
      while tracing, since traces happen in hot paths. It is better to
      record the raw mask and parse it at print time.
      
      The following macros were added for use with the TRACE_EVENT() macro:
      
        __bitmask()
        __assign_bitmask()
        __get_bitmask()
      
      To test this, I added it to the sched_migrate_task event, which then
      looked like this:
      
      TRACE_EVENT(sched_migrate_task,
      
      	TP_PROTO(struct task_struct *p, int dest_cpu, const struct cpumask *cpus),
      
      	TP_ARGS(p, dest_cpu, cpus),
      
      	TP_STRUCT__entry(
      		__array(	char,	comm,	TASK_COMM_LEN	)
      		__field(	pid_t,	pid			)
      		__field(	int,	prio			)
      		__field(	int,	orig_cpu		)
      		__field(	int,	dest_cpu		)
      		__bitmask(	cpumask, num_possible_cpus()	)
      	),
      
      	TP_fast_assign(
      		memcpy(__entry->comm, p->comm, TASK_COMM_LEN);
      		__entry->pid		= p->pid;
      		__entry->prio		= p->prio;
      		__entry->orig_cpu	= task_cpu(p);
      		__entry->dest_cpu	= dest_cpu;
      		__assign_bitmask(cpumask, cpumask_bits(cpus), num_possible_cpus());
      	),
      
      	TP_printk("comm=%s pid=%d prio=%d orig_cpu=%d dest_cpu=%d cpumask=%s",
      		  __entry->comm, __entry->pid, __entry->prio,
      		  __entry->orig_cpu, __entry->dest_cpu,
      		  __get_bitmask(cpumask))
      );
      
      With the output of:
      
              ksmtuned-3613  [003] d..2   485.220508: sched_migrate_task: comm=ksmtuned pid=3615 prio=120 orig_cpu=3 dest_cpu=2 cpumask=00000000,0000000f
           migration/1-13    [001] d..5   485.221202: sched_migrate_task: comm=ksmtuned pid=3614 prio=120 orig_cpu=1 dest_cpu=0 cpumask=00000000,0000000f
                   awk-3615  [002] d.H5   485.221747: sched_migrate_task: comm=rcu_preempt pid=7 prio=120 orig_cpu=0 dest_cpu=1 cpumask=00000000,000000ff
           migration/2-18    [002] d..5   485.222062: sched_migrate_task: comm=ksmtuned pid=3615 prio=120 orig_cpu=2 dest_cpu=3 cpumask=00000000,0000000f
      
      Link: http://lkml.kernel.org/r/1399377998-14870-6-git-send-email-javi.merino@arm.com
      Link: http://lkml.kernel.org/r/20140506132238.22e136d1@gandalf.local.home
      Suggested-by: Javi Merino <javi.merino@arm.com>
      Tested-by: Javi Merino <javi.merino@arm.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      4449bf92
  4. 14 May, 2014 8 commits
  5. 07 May, 2014 1 commit
    • tracing: Add trace_<tracepoint>_enabled() function · 7c65bbc7
      Steven Rostedt (Red Hat) authored
      There are some code paths in the kernel that need to do some preparation
      before they call a tracepoint. As that code is worthless overhead when
      the tracepoint is not enabled, it would be prudent to have that code
      only run when the tracepoint is active. To accomplish this, all tracepoints
      now get a static inline function called "trace_<tracepoint-name>_enabled()"
      which returns true when the tracepoint is enabled and false otherwise.
      
      As an added bonus, that function uses the static_key of the tracepoint
      such that no branch is needed.
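      
      Roughly, the generated helper looks like this (a simplified sketch of
      the macro expansion, not the verbatim code):
      
      	static inline bool trace_mytracepoint_enabled(void)
      	{
      		/* static_key: no conditional branch in the hot path */
      		return static_key_false(&__tracepoint_mytracepoint.key);
      	}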
      
        if (trace_mytracepoint_enabled()) {
      	arg = process_tp_arg();
      	trace_mytracepoint(arg);
        }
      
      This keeps "process_tp_arg()" (which may be expensive to run) from
      being executed when the tracepoint isn't enabled.
      
      It's best to encapsulate the tracepoint call itself inside the if
      statement, to avoid races. For example, if you had:
      
        if (trace_mytracepoint_enabled())
      	arg = process_tp_arg();
        trace_mytracepoint(arg);
      
      There's a chance that the tracepoint could be enabled just after the
      if statement, and arg will be undefined when calling the tracepoint.
      
      Link: http://lkml.kernel.org/r/20140506094407.507b6435@gandalf.local.home
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      7c65bbc7
  6. 06 May, 2014 1 commit
  7. 05 May, 2014 1 commit
    • tracing: Remove myself as a tracing maintainer · 3415c28c
      Frederic Weisbecker authored
      It has been a while since I last sent a tracing patch. I always keep an
      eye on tracing evolutions and contributions in general, but given how
      busy I am with nohz, isolation and, more generally, core cleanup work,
      I seldom have time left to provide deep reviews of tracing patches, or
      simply to review them at all.
      
      I've been very lucky to have started kernel development on a very young
      subsystem with tons of low-hanging fruit back in 2008. Given that it
      deals with a lot of tricky stuff all around (sched, timers, irq,
      preemption, NMIs, SMP, RCU, ...), I basically learned everything there.
      
      Steve has been doing most of the incredible work these last years.
      Thanks a lot!
      
      Of course consider me always available to help on tracing if any hard
      days happen.
      
      Link: http://lkml.kernel.org/p/1399131991-13216-1-git-send-email-fweisbec@gmail.com
      
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      3415c28c
  8. 02 May, 2014 1 commit
  9. 30 Apr, 2014 1 commit
    • tracing: Remove mock up poll wait function · b1169cc6
      Steven Rostedt (Red Hat) authored
      The ring buffer now has a built-in way to wake up readers when there
      is data, using irq_work so that it is safe to do so in any context.
      But the tracers were still using the old "poor man's" wait polling
      that checks every 1/10 of a second to see if it should wake up a
      waiter. This makes the latency for a wake up excruciatingly long.
      There is no need to do that anymore.
      
      Completely remove the different wait_poll types from the tracers
      and have them all use the default one now.
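      
      For context, the polling wait being removed looked roughly like this
      (a sketch, not the verbatim removed code):
      
      	static void poll_wait_pipe(struct trace_iterator *iter)
      	{
      		set_current_state(TASK_INTERRUPTIBLE);
      		/* sleep for 100 msecs, and try again */
      		schedule_timeout(HZ / 10);
      	}
      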
      Reported-by: Johannes Berg <johannes@sipsolutions.net>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      b1169cc6
  10. 29 Apr, 2014 1 commit
  11. 24 Apr, 2014 2 commits
  12. 21 Apr, 2014 8 commits
  13. 20 Apr, 2014 5 commits
  14. 19 Apr, 2014 5 commits
    • perf tools: Improve error reporting · ffa91880
      Adrien BAK authored
      In the current version, when using perf record, if something goes
      wrong in tools/perf/builtin-record.c:375
        session = perf_session__new(file, false, NULL);
      
      The error message:
      "Not enough memory for reading per file header"
      
      is issued. This error message seems to be outdated and is not very
      helpful. This patch replaces it with:
      "Perf session creation failed"
      
      I believe this issue has been brought to lkml:
      https://lkml.org/lkml/2014/2/24/458
      although this patch only tackles a (small) part of the issue.
      
      Additionally, this patch improves error reporting in open_file_write()
      in tools/perf/util/data.c.
      
      Currently, if the call to open fails, the user is unaware of it.
      This patch logs the error before returning the error code to
      the caller.
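      
      A minimal sketch of the open_file_write() change (the flags and the
      pr_err() wording here are illustrative, not the exact patch):
      
      	static int open_file_write(struct perf_data_file *file)
      	{
      		int fd;
      
      		fd = open(file->path, O_RDWR | O_CREAT | O_TRUNC,
      			  S_IRUSR | S_IWUSR);
      		if (fd < 0)
      			/* log the failure instead of silently returning it */
      			pr_err("failed to open %s: %s\n",
      			       file->path, strerror(errno));
      
      		return fd;
      	}
      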
      Reported-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Adrien BAK <adrien.bak@metascale.org>
      Link: http://lkml.kernel.org/r/1397786443.3093.4.camel@beast
      [ Reorganize the changelog into paragraphs ]
      [ Added empty line after fd declaration in open_file_write ]
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      ffa91880
    • perf tools: Adjust symbols in VDSO · 922d0e4d
      Vladimir Nikulichev authored
      perf report doesn't resolve function names in the VDSO:
      
      $ perf report --stdio -g flat,0.0,15,callee --sort pid
      ...
                  8.76%
                     0x7fff6b1fe861
                     __gettimeofday
                     ACE_OS::gettimeofday()
      ...
      
      In this case symbol values should be adjusted the same way as for executables,
      relocatable objects and prelinked libraries.
      
      After fix:
      
      $ perf report --stdio -g flat,0.0,15,callee --sort pid
      ...
                  8.76%
                     __vdso_gettimeofday
                     __gettimeofday
                     ACE_OS::gettimeofday()
      Signed-off-by: Vladimir Nikulichev <nvs@tbricks.com>
      Tested-by: Namhyung Kim <namhyung@kernel.org>
      Reviewed-by: Adrian Hunter <adrian.hunter@intel.com>
      Link: http://lkml.kernel.org/r/969812.163009436-sendEmail@nvs
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      922d0e4d
    • perf kvm: Fix 'Min time' counting in report command · acb61fc8
      Alexander Yarygin authored
      Every event in perf-kvm has a 'stats' structure, which contains the
      max/min/average/etc. times of handling this event.
      The problem is that the 'perf kvm stat report' command always shows
      that 'min time' is 0us for every event. Example:
      
       # perf kvm stat report
      
       Analyze events for all VCPUs:
      
          VM-EXIT    Samples  Samples%     Time%   Min Time   Max Time Avg time
        [..]
        0xB2 MSCH         12     0.07%     0.00%        0us        8us 7.31us ( +-   2.11% )
        0xB2 CHSC         12     0.07%     0.00%        0us       18us 9.39us ( +-   9.49% )
        0xB2 STPX          8     0.05%     0.00%        0us        2us 1.88us ( +-   7.18% )
        0xB2 STSI          7     0.04%     0.00%        0us       44us 16.49us ( +-  38.20% )
        [..]
      
      This happens because the 'stats' structure is not initialized and
      stats->min equals 0. Let's initialize the structure for every
      event after its allocation using the init_stats() function. This
      initializes stats->min to -1 and makes 'Min time' counting work:
      
       # perf kvm stat report
      
       Analyze events for all VCPUs:
      
          VM-EXIT    Samples  Samples%     Time%   Min Time   Max Time Avg time
        [..]
        0xB2 MSCH         12     0.07%     0.00%        6us        8us 7.31us ( +-   2.11% )
        0xB2 CHSC         12     0.07%     0.00%        7us       18us 9.39us ( +-   9.49% )
        0xB2 STPX          8     0.05%     0.00%        1us        2us 1.88us ( +-   7.18% )
        0xB2 STSI          7     0.04%     0.00%        1us       44us 16.49us ( +-  38.20% )
        [..]
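      
      A sketch of the fix's idea: call init_stats() right after the event is
      allocated (the helper and field names below are written from memory and
      may differ from the actual perf-kvm code):
      
      	static struct kvm_event *kvm_alloc_init_event(struct event_key *key)
      	{
      		struct kvm_event *event = zalloc(sizeof(*event));
      
      		if (!event)
      			return NULL;
      
      		event->key = *key;
      		/* init_stats() sets stats->min to (u64)-1 instead of 0 */
      		init_stats(&event->total.stats);
      		return event;
      	}
      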
      Signed-off-by: Alexander Yarygin <yarygin@linux.vnet.ibm.com>
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Reviewed-by: David Ahern <dsahern@gmail.com>
      Link: http://lkml.kernel.org/r/1397053319-2130-3-git-send-email-borntraeger@de.ibm.com
      [ Fixing the perf examples changelog output ]
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      acb61fc8
    • coredump: fix va_list corruption · 404ca80e
      Eric Dumazet authored
      A va_list needs to be copied in case it needs to be used twice.
      
      Thanks to Hugh for debugging this issue, leading to various panics.
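      
      The general rule the fix follows, as a self-contained illustration (not
      the kernel code itself): if a va_list may be walked twice, each walk
      must operate on its own va_copy().
      
      	#include <stdarg.h>
      	#include <stdio.h>
      
      	static int vprintf_twice(char *buf, size_t size, const char *fmt,
      				 va_list args)
      	{
      		va_list cp;
      		int need;
      
      		va_copy(cp, args);
      		need = vsnprintf(NULL, 0, fmt, cp);	/* consume the copy ...         */
      		va_end(cp);
      
      		if (need < 0 || (size_t)need >= size)
      			return -1;
      
      		return vsnprintf(buf, size, fmt, args);	/* ... the original stays valid */
      	}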
      
      Tested:
      
        lpq84:~# echo "|/foobar12345 %h %h %h %h %h %h %h %h %h %h %h %h %h %h %h %h %h %h %h %h" >/proc/sys/kernel/core_pattern
      
      'produce_core' is simply: main() { *(int *)0 = 1; }
      
        lpq84:~# ./produce_core
        Segmentation fault (core dumped)
        lpq84:~# dmesg | tail -1
        [  614.352947] Core dump to |/foobar12345 lpq84 lpq84 lpq84 lpq84 lpq84 lpq84 lpq84 lpq84 lpq84 lpq84 lpq84 lpq84 lpq84 lpq84 lpq84 lpq84 lpq84 lpq84 lpq84 (null) pipe failed
      
      Notice that the last argument was replaced by a NULL (we were lucky
      enough not to crash, but do not try this on your production machine!)
      
      After fix :
      
        lpq83:~# echo "|/foobar12345 %h %h %h %h %h %h %h %h %h %h %h %h %h %h %h %h %h %h %h %h" >/proc/sys/kernel/core_pattern
        lpq83:~# ./produce_core
        Segmentation fault
        lpq83:~# dmesg | tail -1
        [  740.800441] Core dump to |/foobar12345 lpq83 lpq83 lpq83 lpq83 lpq83 lpq83 lpq83 lpq83 lpq83 lpq83 lpq83 lpq83 lpq83 lpq83 lpq83 lpq83 lpq83 lpq83 lpq83 lpq83 pipe failed
      
      Fixes: 5fe9d8ca ("coredump: cn_vprintf() has no reason to call vsnprintf() twice")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Diagnosed-by: Hugh Dickins <hughd@google.com>
      Acked-by: Oleg Nesterov <oleg@redhat.com>
      Cc: Neil Horman <nhorman@tuxdriver.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: stable@vger.kernel.org # 3.11+
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      404ca80e
    • Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip · 6d459690
      Linus Torvalds authored
      Pull x86 fix from Ingo Molnar:
       "This fixes the preemption-count imbalance crash reported by Owen
        Kibel"
      
      * 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
        x86/mce: Fix CMCI preemption bugs
      6d459690