- 04 Feb, 2010 13 commits
-
-
Masami Hiramatsu authored
Check whether the address of a new probe is already reserved by ftrace or alternatives (on x86) when registering the probe. If it is reserved, return an error and do not register the probe. Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com> Cc: systemtap <systemtap@sources.redhat.com> Cc: DLE <dle-develop@lists.sourceforge.net> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: przemyslaw@pawelczyk.it Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com> Cc: Jim Keniston <jkenisto@us.ibm.com> Cc: Mathieu Desnoyers <compudj@krystal.dyndns.org> Cc: Jason Baron <jbaron@redhat.com> LKML-Reference: <20100202214918.4694.94179.stgit@dhcp-100-2-132.bos.redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
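A hedged sketch of what such a check in register_kprobe() can look like; the two *_text_reserved() helpers are the ones described in the companion patch below, while the error code and exact placement are assumptions:

    /* refuse to place a probe on text still owned by ftrace or by the
     * x86 SMP alternatives machinery */
    if (ftrace_text_reserved(p->addr, p->addr) ||
        alternatives_text_reserved(p->addr, p->addr))
            return -EINVAL;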
-
Masami Hiramatsu authored
Introduce *_text_reserved functions for checking whether a text address range is partially reserved. This patch provides checking routines for x86 SMP alternatives and dynamic ftrace. Since both subsystems modify fixed pieces of kernel text, they should reserve and protect those regions from other dynamic text modifiers, such as kprobes. This can also be extended when other subsystems that modify fixed pieces of kernel text are introduced; dynamic text modifiers should avoid those regions. Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com> Cc: systemtap <systemtap@sources.redhat.com> Cc: DLE <dle-develop@lists.sourceforge.net> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: przemyslaw@pawelczyk.it Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com> Cc: Jim Keniston <jkenisto@us.ibm.com> Cc: Mathieu Desnoyers <compudj@krystal.dyndns.org> Cc: Jason Baron <jbaron@redhat.com> LKML-Reference: <20100202214911.4694.16587.stgit@dhcp-100-2-132.bos.redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
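A hedged sketch of the checking interface this describes (signatures assumed); configurations without the subsystem would typically get inline stubs so callers need no #ifdefs:

    /* return non-zero if [start, end] overlaps text owned by the subsystem */
    int ftrace_text_reserved(void *start, void *end);
    int alternatives_text_reserved(void *start, void *end);

    #ifndef CONFIG_DYNAMIC_FTRACE
    static inline int ftrace_text_reserved(void *start, void *end)
    {
            return 0;       /* nothing reserved without dynamic ftrace */
    }
    #endif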
-
Masami Hiramatsu authored
Disable the kprobe booster when CONFIG_PREEMPT=y for now, because we cannot ensure that all kernel threads preempted on a kprobe's boosted slot have run out of the slot, even when using freeze_processes(). The booster on preemptive kernels can be re-enabled once synchronize_tasks() or something similar is introduced. Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com> Cc: systemtap <systemtap@sources.redhat.com> Cc: DLE <dle-develop@lists.sourceforge.net> Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Jim Keniston <jkenisto@us.ibm.com> Cc: Mathieu Desnoyers <compudj@krystal.dyndns.org> Cc: Steven Rostedt <rostedt@goodmis.org> LKML-Reference: <20100202214904.4694.24330.stgit@dhcp-100-2-132.bos.redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
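A hedged sketch of the kind of guard this implies; the wrapper name kprobe_can_boost() is hypothetical, while can_boost() is the existing x86 instruction-decoding check:

    static int kprobe_can_boost(struct kprobe *p)
    {
    #ifdef CONFIG_PREEMPT
            /* a preempted task may still be executing inside the boosted
             * slot and we have no way to wait for it to leave, so never
             * boost on preemptive kernels */
            return 0;
    #else
            return can_boost(p->ainsn.insn);
    #endif
    }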
-
Mike Galbraith authored
Signed-off-by: Mike Galbraith <efault@gmx.de> Cc: Kirill Smelkov <kirr@landau.phys.spbu.ru> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Arnaldo Carvalho de Melo <acme@infradead.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Frederic Weisbecker <fweisbec@gmail.com> LKML-Reference: <1265265106.6364.5.camel@marge.simson.net> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Kirill Smelkov authored
By relying on the logic in dso__load_kernel_sym(), we can automatically load vmlinux. The only thing that needs to be adjusted is how the --sym-annotate option is handled: we can no longer rely on vmlinux being loaded until a full successful pass of dso__load_vmlinux(), and that is not the case if we do the sym_filter_entry setup in symbol_filter(). So move this step to right after event__process_sample(), where we know the whole dso__load_kernel_sym() pass is done. As an aside, although conceptually similar, `perf top` still can't annotate userspace - see the next patches for fixes. Signed-off-by: Kirill Smelkov <kirr@landau.phys.spbu.ru> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Mike Galbraith <efault@gmx.de> LKML-Reference: <1265223128-11786-9-git-send-email-acme@infradead.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Kirill Smelkov authored
The problem was that we were incorrectly calculating objdump addresses for sym->start and sym->end. For a simple ET_DYN type DSO (*.so) with one function, the objdump -dS output is something like this:

    000004ac <my_strlen>:
    int my_strlen(const char *s)
     4ac:   55                      push   %ebp
     4ad:   89 e5                   mov    %esp,%ebp
     4af:   83 ec 10                sub    $0x10,%esp
    {

i.e. we have relative-to-dso-mapping IPs (=RIP) there. For ET_EXEC type, and probably for prelinked libs as well (sorry, can't test - I don't use prelink), objdump outputs absolute IPs, e.g.

    08048604 <zz_strlen>:
    extern "C"
    int zz_strlen(const char *s)
     8048604:       55                      push   %ebp
     8048605:       89 e5                   mov    %esp,%ebp
     8048607:       83 ec 10                sub    $0x10,%esp
    {

So, if sym->start is always relative to the dso mapping(*), we have to unmap it for ET_EXEC-like cases, and leave it as is for ET_DYN cases.

(*) and it is - we've explicitly made it relative; look for the adjust_symbols handling in dso__load_sym().

Previously we were always unmapping sym->start, so for ET_DYN dsos the resulting addresses were wrong and the objdump output was empty. The end result was that the perf annotate output for symbols from non-prelinked *.so always showed only 0.00% percentages, which is wrong.

To fix it, let's introduce a helper for converting a rip to an objdump address, and also document what map_ip() and unmap_ip() do - I had to study the sources for several hours to understand it.

Signed-off-by: Kirill Smelkov <kirr@landau.phys.spbu.ru> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Mike Galbraith <efault@gmx.de> LKML-Reference: <1265223128-11786-8-git-send-email-acme@infradead.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
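A minimal sketch of such a rip-to-objdump-address helper, assuming a dso flag like adjust_symbols marks the ET_EXEC/prelinked case described above (the actual perf helper may differ in detail):

    /*
     * sym->start is kept relative to the dso mapping; objdump, however,
     * prints absolute IPs for ET_EXEC-like objects and mapping-relative
     * IPs for ET_DYN ones, so only unmap in the former case.
     */
    static u64 rip_to_objdump_addr(struct map *map, u64 rip)
    {
            if (map->dso->adjust_symbols)           /* ET_EXEC / prelinked */
                    return map->unmap_ip(map, rip);
            return rip;                             /* plain ET_DYN *.so */
    }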
-
Arnaldo Carvalho de Melo authored
So as not to pollute 'perf annotate' debugging sessions too much. Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Frédéric Weisbecker <fweisbec@gmail.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Paul Mackerras <paulus@samba.org> LKML-Reference: <1265223128-11786-7-git-send-email-acme@infradead.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Arnaldo Carvalho de Melo authored
We want to stream events as fast as possible to perf.data, and in the future we also want to have splice working, at which point no interception will be possible. By using build_id__mark_dso_hit_ops to create the list of DSOs that back MMAPs, we also optimize disk usage in the build-id cache by caching only DSOs that had hits. Suggested-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Cc: Frédéric Weisbecker <fweisbec@gmail.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Paul Mackerras <paulus@samba.org> LKML-Reference: <1265223128-11786-6-git-send-email-acme@infradead.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Arnaldo Carvalho de Melo authored
Because 'perf record' will have to find the build-ids after we stop recording, so as to reduce even further the impact on the workload while we do the measurement. Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Frédéric Weisbecker <fweisbec@gmail.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Paul Mackerras <paulus@samba.org> LKML-Reference: <1265223128-11786-5-git-send-email-acme@infradead.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Arnaldo Carvalho de Melo authored
With the recent modifications done to untie the session and symbol layers, 'perf probe' can now use just the symbol layer. Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Acked-by: Masami Hiramatsu <mhiramat@redhat.com> Cc: Frédéric Weisbecker <fweisbec@gmail.com> Cc: Masami Hiramatsu <mhiramat@redhat.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Paul Mackerras <paulus@samba.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Arnaldo Carvalho de Melo authored
We can check using strcmp; most DSOs don't start with '[', so the test is cheap enough, and we had to test it there anyway, since when reading perf.data files we weren't calling the routine that created this global variable and thus weren't marking it as "loaded", which was causing a bogus

    Failed to open [vdso], continuing without symbols

message as the first line of 'perf report'.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Frédéric Weisbecker <fweisbec@gmail.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Paul Mackerras <paulus@samba.org> LKML-Reference: <1265223128-11786-3-git-send-email-acme@infradead.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
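A hedged sketch of the cheap check described; the surrounding function and the dso__set_loaded() helper name are assumptions here:

    /* DSOs such as "[vdso]" have no backing file to open; just mark the
     * dso as loaded instead of emitting a bogus warning about it */
    if (strcmp(self->name, "[vdso]") == 0) {
            dso__set_loaded(self, map->type);
            return 0;
    }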
-
Arnaldo Carvalho de Melo authored
While debugging a problem reported by Pekka Enberg, by printing the IP and all the maps for a thread when we don't find a map for an IP, I noticed that dso__load_sym needs to fix up the extra maps it creates to hold symbols in ELF sections other than the main kernel one. Now we're back to showing things like:

    [root@doppio linux-2.6-tip]# perf report | grep vsyscall
         0.02%             mutt  [kernel.kallsyms].vsyscall_fn  [.] vread_hpet
         0.01%            named  [kernel.kallsyms].vsyscall_fn  [.] vread_hpet
         0.01%   NetworkManager  [kernel.kallsyms].vsyscall_fn  [.] vread_hpet
         0.01%         gconfd-2  [kernel.kallsyms].vsyscall_0   [.] vgettimeofday
         0.01%  hald-addon-rfki  [kernel.kallsyms].vsyscall_fn  [.] vread_hpet
         0.00%      dbus-daemon  [kernel.kallsyms].vsyscall_fn  [.] vread_hpet
    [root@doppio linux-2.6-tip]#

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Frédéric Weisbecker <fweisbec@gmail.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Pekka Enberg <penberg@cs.helsinki.fi> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Paul Mackerras <paulus@samba.org> LKML-Reference: <1265223128-11786-2-git-send-email-acme@infradead.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Arnaldo Carvalho de Melo authored
I noticed while writing the first test in 'perf regtest' that just to test the symbol handling routines one needs to create a perf session, which is a layer centered on a perf.data file, events, etc., so I untied these layers. This reduces the complexity for users, as the number of parameters to most of the symbol and session APIs is now reduced, while not adding more state to all the map instances: only the data needed to split the kernel maps (kallsyms and ELF symtab sections) and to do vmlinux relocation on the main kernel map is kept. Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Frédéric Weisbecker <fweisbec@gmail.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Paul Mackerras <paulus@samba.org> LKML-Reference: <1265223128-11786-1-git-send-email-acme@infradead.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 03 Feb, 2010 1 commit
-
-
Xiao Guangrong authored
Open the perf data file with the O_LARGEFILE flag, since its size easily grows larger than 2G. For example:

    # rm -rf perf.data
    # ./perf kmem record sleep 300
    [ perf record: Woken up 0 times to write data ]
    [ perf record: Captured and wrote 3142.147 MB perf.data (~137282513 samples) ]
    # ll -h perf.data
    -rw------- 1 root root 3.1G .....

Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> LKML-Reference: <4B68F32A.9040203@cn.fujitsu.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
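A minimal sketch of the change, assuming direct use of the O_LARGEFILE flag (the helper name is illustrative, not the actual perf code):

    #define _GNU_SOURCE     /* exposes O_LARGEFILE in glibc's <fcntl.h> */
    #include <sys/types.h>
    #include <sys/stat.h>
    #include <fcntl.h>

    /* open (or create) the data file so sizes beyond 2 GB work on 32-bit userland */
    static int open_data_file(const char *path)
    {
            return open(path, O_CREAT | O_RDWR | O_LARGEFILE, 0644);
    }

The same effect can be had by building with -D_FILE_OFFSET_BITS=64, which makes open() set the flag implicitly.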
-
- 31 Jan, 2010 6 commits
-
-
Ingo Molnar authored
Fix up a few small stylistic details: - use consistent vertical spacing/alignment - remove line80 artifacts - group some global variables better - remove dead code Plus rename 'prof' to 'report' to make it more in line with other tools, and remove the line/file keying as we really want to use IPs like the other tools do. Signed-off-by: Ingo Molnar <mingo@elte.hu> Cc: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Paul Mackerras <paulus@samba.org> Cc: Frederic Weisbecker <fweisbec@gmail.com> LKML-Reference: <1264851813-8413-12-git-send-email-mitake@dcl.info.waseda.ac.jp> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Hitoshi Mitake authored
Add the new subcommand "perf lock" to perf. There are a lot of remaining ToDos, but for now perf lock already provides minimal functionality for analyzing lock statistics. Signed-off-by: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Paul Mackerras <paulus@samba.org> Cc: Frederic Weisbecker <fweisbec@gmail.com> LKML-Reference: <1264851813-8413-12-git-send-email-mitake@dcl.info.waseda.ac.jp> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Hitoshi Mitake authored
Add wait time and lock identification details. Signed-off-by: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Paul Mackerras <paulus@samba.org> Cc: Frederic Weisbecker <fweisbec@gmail.com> LKML-Reference: <1264851813-8413-11-git-send-email-mitake@dcl.info.waseda.ac.jp> [ removed the file/line bits as we can do that better via IPs ] Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Hitoshi Mitake authored
linux/hash.h, the kernel's hash header, is also useful for perf. util/include/linuxhash.h includes linux/hash.h, so we can now use hash facilities (e.g. hash_long()) in perf. Signed-off-by: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Paul Mackerras <paulus@samba.org> Cc: Frederic Weisbecker <fweisbec@gmail.com> LKML-Reference: <1264851813-8413-3-git-send-email-mitake@dcl.info.waseda.ac.jp> Signed-off-by: Ingo Molnar <mingo@elte.hu>
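A brief illustration of what this enables in perf code; the table size and names below are illustrative, not taken from the patch:

    #include <linux/hash.h>     /* now reachable via perf's util/include path */

    #define LOCKHASH_BITS 12    /* illustrative: a 4096-bucket table */

    /* map a lock instance address to a hash bucket index */
    static inline unsigned int lock_hash(void *addr)
    {
            return hash_long((unsigned long)addr, LOCKHASH_BITS);
    }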
-
Hitoshi Mitake authored
This patch is required to test the next patch for perf lock. In commit 064739bc, support for the "__data_loc" format modifier was added. But when I wanted to parse the format of lock_acquired (or some other event), raw_field_ptr() did not return the correct pointer. So I modified raw_field_ptr() as in this patch, and now raw_field_ptr() works well. Signed-off-by: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp> Acked-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Paul Mackerras <paulus@samba.org> Cc: Tom Zanussi <tzanussi@gmail.com> Cc: Steven Rostedt <srostedt@redhat.com> LKML-Reference: <1264851813-8413-2-git-send-email-mitake@dcl.info.waseda.ac.jp> [ v3: fixed minor stylistic detail ] Signed-off-by: Ingo Molnar <mingo@elte.hu>
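A rough sketch of the kind of handling a __data_loc field needs (flag and field names follow the common trace-event parsing convention and are assumptions here, not the exact patch):

    /*
     * For a __data_loc (dynamic) field, the value stored at field->offset is
     * not the payload itself but a descriptor: the low 16 bits hold the
     * offset of the real data within the record, the high 16 bits its length.
     */
    if (field->flags & FIELD_IS_DYNAMIC) {
            int desc = *(int *)((char *)data + field->offset);
            return (char *)data + (desc & 0xffff);  /* point at the payload */
    }
    return (char *)data + field->offset;            /* ordinary static field */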
-
Hitoshi Mitake authored
This reverts commit f5a2c3dc. This revert is required to make "perf lock rec" work. Commit f5a2c3dc changed write_event() in builtin-record.c, and the changed write_event() sometimes doesn't stop with perf lock rec. Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Mike Galbraith <efault@gmx.de> Cc: Paul Mackerras <paulus@samba.org> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> LKML-Reference: <new-submission> [ that commit also causes perf record to not be Ctrl-C-able, and it's conceptually wrong to parse the data at record time (unconditionally - even when not needed), as we eventually want to be able to do zero-copy recording, at least for non-archive recordings. ] Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 29 Jan, 2010 20 commits
-
-
John Kacur authored
Tell git to ignore perf-archive. Signed-off-by: John Kacur <jkacur@redhat.com> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> LKML-Reference: <1264633557-17597-6-git-send-email-acme@infradead.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Thiago Farina authored
Checked with:

    ./../scripts/checkpatch.pl --terse --file perf.c

    perf.c: 51: ERROR: open brace '{' following function declarations go on the next line
    perf.c: 73: ERROR: "foo*** bar" should be "foo ***bar"
    perf.c:112: ERROR: space prohibited before that close parenthesis ')'
    perf.c:127: ERROR: space prohibited before that close parenthesis ')'
    perf.c:171: ERROR: "foo** bar" should be "foo **bar"
    perf.c:213: ERROR: "(foo*)" should be "(foo *)"
    perf.c:216: ERROR: "(foo*)" should be "(foo *)"
    perf.c:217: ERROR: space required before that '*' (ctx:OxV)
    perf.c:452: ERROR: do not initialise statics to 0 or NULL
    perf.c:453: ERROR: do not initialise statics to 0 or NULL

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Paul Mackerras <paulus@samba.org> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Masami Hiramatsu <mhiramat@redhat.com> LKML-Reference: <1264633557-17597-7-git-send-email-acme@infradead.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
Merge reason: We want to queue up a dependent patch. Also update to later -rc's. Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Arnaldo Carvalho de Melo authored
This removes one extra step needed in the tools that require it, and fixes a bug in 'perf probe' where this was not being done. Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Masami Hiramatsu <mhiramat@redhat.com> Cc: Frédéric Weisbecker <fweisbec@gmail.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Paul Mackerras <paulus@samba.org> LKML-Reference: <1264633557-17597-4-git-send-email-acme@infradead.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Arnaldo Carvalho de Melo authored
To make it clear and allow for direct usage by, for instance, regression test suites. Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Frédéric Weisbecker <fweisbec@gmail.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Paul Mackerras <paulus@samba.org> LKML-Reference: <1264633557-17597-3-git-send-email-acme@infradead.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Arnaldo Carvalho de Melo authored
So that we can call it directly from regression tests, and also to reduce the size of dso__load_kernel_sym(), making it clearer. Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Frédéric Weisbecker <fweisbec@gmail.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Paul Mackerras <paulus@samba.org> LKML-Reference: <1264633557-17597-2-git-send-email-acme@infradead.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Arnaldo Carvalho de Melo authored
As we do lazy loading of symtabs, we will only know whether the specified vmlinux file is invalid when we actually have a hit in kernel space and then try to load it. So if we get kernel hits and there are _no_ symbols in the DSO backing the kernel map, bail out. Reported-by: Mike Galbraith <efault@gmx.de> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Frédéric Weisbecker <fweisbec@gmail.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Paul Mackerras <paulus@samba.org> LKML-Reference: <1264633557-17597-1-git-send-email-acme@infradead.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Peter Zijlstra authored
One problem with frequency-driven counters is that we cannot predict the rate at which they trigger, therefore we have to start them at period=1; this causes a ramp-up effect. However, if we fail to propagate the stable state on fork, each new child will have to ramp up again. This can lead to significant artifacts in sample data. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: eranian@google.com Cc: Mike Galbraith <efault@gmx.de> Cc: Paul Mackerras <paulus@samba.org> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> LKML-Reference: <1264752266.4283.2121.camel@laptop> Signed-off-by: Ingo Molnar <mingo@elte.hu>
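A hedged sketch of what propagating the stable state at inherit time can look like; names such as hw.sample_period and hw.period_left mirror the perf core of that era, but the exact fields and call site are assumptions:

    /* in the event-inherit path: seed the child with the parent's already
     * ramped-up period instead of restarting frequency estimation at 1 */
    if (parent_event->attr.freq) {
            u64 sample_period = parent_event->hw.sample_period;
            struct hw_perf_event *hwc = &child_event->hw;

            hwc->sample_period = sample_period;
            hwc->last_period   = sample_period;
            atomic64_set(&hwc->period_left, sample_period);
    }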
-
Peter Zijlstra authored
At enable time the counter might still have a ->idx pointing to a previously occupied location that might now be taken by another event. Resetting the counter at that location with data from this event will destroy the other counter's count. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Stephane Eranian <eranian@google.com> LKML-Reference: <20100127221122.261477183@chello.nl> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Peter Zijlstra authored
The new Intel documentation includes Westmere arch-specific event maps that are significantly different from the Nehalem ones. Add support for this generation. The CPUID model numbers were found on Wikipedia. Also amend some Nehalem constraints; those were spotted while looking for the differences between Nehalem and Westmere. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Arjan van de Ven <arjan@linux.intel.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Stephane Eranian <eranian@google.com> LKML-Reference: <20100127221122.151865645@chello.nl> Signed-off-by: Ingo Molnar <mingo@elte.hu>
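As an illustration of how such a generation is typically hooked up in the Intel PMU init path (a hedged sketch: the model numbers are the commonly cited 32 nm Westmere ones, and the table/constraint names are assumptions, not copied from the patch):

    case 37: /* 32 nm Westmere (Clarkdale/Arrandale) */
    case 44: /* 32 nm Westmere-EP (Gulftown) */
            /* install Westmere-specific cache event map and constraints */
            memcpy(hw_cache_event_ids, westmere_hw_cache_event_ids,
                   sizeof(hw_cache_event_ids));
            x86_pmu.event_constraints = intel_westmere_event_constraints;
            break;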
-
Peter Zijlstra authored
Put the recursion avoidance code in the generic hook instead of replicating it in each implementation. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Stephane Eranian <eranian@google.com> LKML-Reference: <20100127221122.057507285@chello.nl> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Peter Zijlstra authored
Since constraints are specified on the event number, not on the event number plus unit mask, shorten the constraint masks so that we'll actually match something. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Stephane Eranian <eranian@google.com> LKML-Reference: <20100127221121.967610372@chello.nl> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Peter Zijlstra authored
Share the meat of the x86_pmu_disable() code with hw_perf_enable(). Also remove the barrier() from that code, since I could not convince myself we actually need it. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Stephane Eranian <eranian@google.com> LKML-Reference: <new-submission> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
- Remove stray debug code
- Improve ugly macros a bit
- Remove some whitespace damage
- (Also fix up some accumulated damage in perf_event.h)

Signed-off-by: Ingo Molnar <mingo@elte.hu> Cc: Stephane Eranian <eranian@google.com> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <new-submission>
-
Peter Zijlstra authored
x86_pmu_disable() removes the event from cpuc->event_list[]; however, since an event can only be on that list once, stop looking after we have found it. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Stephane Eranian <eranian@google.com> LKML-Reference: <new-submission> Signed-off-by: Ingo Molnar <mingo@elte.hu>
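A minimal sketch of the pattern (structure and field names follow the x86 perf code of the era; treat the surrounding details as approximate):

    for (i = 0; i < cpuc->n_events; i++) {
            if (event == cpuc->event_list[i]) {
                    /* compact the list over the removed slot */
                    while (++i < cpuc->n_events)
                            cpuc->event_list[i - 1] = cpuc->event_list[i];
                    cpuc->n_events--;
                    break;  /* the event can appear only once: stop looking */
            }
    }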
-
Peter Zijlstra authored
Remove num from the fast path and save a few ops. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Stephane Eranian <eranian@google.com> LKML-Reference: <20100122155536.056430539@chello.nl> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Peter Zijlstra authored
Add a weight member to the constraint structure and avoid recomputing the weight at runtime. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Stephane Eranian <eranian@google.com> LKML-Reference: <20100122155535.963944926@chello.nl> Signed-off-by: Ingo Molnar <mingo@elte.hu>
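A hedged sketch of the idea; the field names mirror the x86 event-constraint code of the time but should be treated as approximations:

    struct event_constraint {
            u64     idxmsk64;       /* allowed-counter bitmask */
            u64     code;
            u64     cmask;
            int     weight;         /* popcount of idxmsk64, precomputed */
    };

    /* fill in .weight when the constraint is defined, so the scheduling
     * code no longer has to call hweight64() on every pass */
    #define __EVENT_CONSTRAINT(c, n, m, w) \
            { .idxmsk64 = (n), .code = (c), .cmask = (m), .weight = (w) }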
-
Peter Zijlstra authored
Instead of copying bitmasks around, pass pointers to the constraint structure. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Stephane Eranian <eranian@google.com> LKML-Reference: <20100122155535.887853503@chello.nl> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Peter Zijlstra authored
Provide compile time versions of hweight. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Stephane Eranian <eranian@google.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Thomas Gleixner <tglx@linutronix.de> LKML-Reference: <20100122155535.797688466@chello.nl> [ Remove some whitespace damage while we are at it ] Signed-off-by: Ingo Molnar <mingo@elte.hu>
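A hedged sketch of how such compile-time hweight macros can be built so they are usable in static initializers (the exact names and forms in the patch may differ):

    #define HWEIGHT8(w)                 \
            ( (!!((w) & (1ULL << 0))) + \
              (!!((w) & (1ULL << 1))) + \
              (!!((w) & (1ULL << 2))) + \
              (!!((w) & (1ULL << 3))) + \
              (!!((w) & (1ULL << 4))) + \
              (!!((w) & (1ULL << 5))) + \
              (!!((w) & (1ULL << 6))) + \
              (!!((w) & (1ULL << 7))) )

    #define HWEIGHT16(w) (HWEIGHT8(w)  + HWEIGHT8((w) >> 8))
    #define HWEIGHT32(w) (HWEIGHT16(w) + HWEIGHT16((w) >> 16))
    #define HWEIGHT64(w) (HWEIGHT32(w) + HWEIGHT32((w) >> 32))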
-
Peter Zijlstra authored
Introduce INTEL_EVENT_CONSTRAINT and FIXED_EVENT_CONSTRAINT to reduce some line length and typing work. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Stephane Eranian <eranian@google.com> LKML-Reference: <20100122155535.688730371@chello.nl> Signed-off-by: Ingo Molnar <mingo@elte.hu>
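A hedged sketch of what such helper macros might look like; they assume an underlying EVENT_CONSTRAINT() initializer and mask names along these lines, so treat the exact definitions as illustrative:

    /* constrain only on the 8-bit architectural event code */
    #define INTEL_EVENT_CONSTRAINT(code, counters) \
            EVENT_CONSTRAINT(code, counters, INTEL_ARCH_EVENT_MASK)

    /* constrain a fixed-purpose counter on event code plus unit mask */
    #define FIXED_EVENT_CONSTRAINT(code, counters) \
            EVENT_CONSTRAINT(code, counters, INTEL_ARCH_FIXED_MASK)

    /* e.g. event 0x51 allowed only on general-purpose counters 0-1 */
    INTEL_EVENT_CONSTRAINT(0x51, 0x3)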
-