- 03 Nov, 2009 1 commit
Brian Gerst authored
This jump should be unconditional.

Signed-off-by: Brian Gerst <brgerst@gmail.com>
LKML-Reference: <1257274925-15713-1-git-send-email-brgerst@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
- 12 Oct, 2009 1 commit
Brian Gerst authored
Move the handling of truncated %rip from an iret fault to the fault entry path. This allows x86-64 to use the standard search_extable() function.

Signed-off-by: Brian Gerst <brgerst@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jan Beulich <jbeulich@novell.com>
LKML-Reference: <1255357103-5418-1-git-send-email-brgerst@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
- 23 Sep, 2009 1 commit
Roland McGrath authored
If TIF_SYSCALL_TRACE or TIF_SINGLESTEP is set while inside a syscall, the path back to user mode should get to syscall_trace_leave. This does happen in most circumstances. The exception is the 64-bit syscall fastpath, when no such flag was set on syscall entry and nothing else has punted it off the fastpath for exit: that one exit fastpath fails to check for _TIF_WORK_SYSCALL_EXIT flags.

This makes the behavior inconsistent with what 32-bit tasks see, with what the native 32-bit kernel always does, and with what 64-bit tasks see in all cases where the iret path is taken anyhow.

Perhaps the only affected example is a ptrace stop inside do_fork (for PTRACE_O_TRACE{CLONE,FORK,VFORK,VFORKDONE}). Other syscalls with internal ptrace stop points (execve) already take the iret exit path for unrelated reasons. Test cases for both the PTRACE_SYSCALL and PTRACE_SINGLESTEP variants are at:

http://sources.redhat.com/cgi-bin/cvsweb.cgi/~checkout~/tests/ptrace-tests/tests/syscall-from-clone.c?cvsroot=systemtap
http://sources.redhat.com/cgi-bin/cvsweb.cgi/~checkout~/tests/ptrace-tests/tests/step-from-clone.c?cvsroot=systemtap

There was no special benefit to the sysret path's special call to do_notify_resume, because it always takes the iret exit path at the end anyway. So this change simply makes the sysret exit path join the iret exit path for all the signal and ptrace cases. The fastpath still applies to the plain syscall-audit and resched cases.

Signed-off-by: Roland McGrath <roland@redhat.com>
CC: Oleg Nesterov <oleg@redhat.com>
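In C terms, the test the fastpath was missing amounts to the sketch below. The real check is assembly in entry_64.S, and the mask composition shown is approximate for kernels of that era; sysret_check_exit_work is an illustrative name:

    /* Sketch only: the 64-bit sysret fastpath lacked this exit-work test. */
    #define _TIF_WORK_SYSCALL_EXIT \
            (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | _TIF_SINGLESTEP)

    static void sysret_check_exit_work(struct pt_regs *regs)
    {
            if (current_thread_info()->flags & _TIF_WORK_SYSCALL_EXIT)
                    syscall_trace_leave(regs);  /* ptrace/audit observe the exit */
    }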
- 21 Sep, 2009 1 commit
Ingo Molnar authored
Bye-bye Performance Counters, welcome Performance Events!

In the past few months the perfcounters subsystem has grown out of its initial role of counting hardware events, and has become (and is becoming) a much broader generic event enumeration, reporting, logging, monitoring and analysis facility. Naming its core object 'perf_counter' and naming the subsystem 'perfcounters' has become more and more of a misnomer. With pending code like hw-breakpoints support, the 'counter' name is less and less appropriate.

All in one, we've decided to rename the subsystem to 'performance events' and to propagate this rename through all fields, variables and API names (in an ABI-compatible fashion). The word 'event' is also a bit shorter than 'counter', which makes it slightly more convenient to write/handle as well.

Thanks go to Stephane Eranian, who first observed this misnomer and suggested a rename.

User-space tooling and ABI compatibility is not affected - this patch should be function-invariant. (Also, defconfigs were not touched to keep the size down.)

This patch has been generated via the following script:

  FILES=$(find * -type f | grep -vE 'oprofile|[^K]config')

  sed -i \
    -e 's/PERF_EVENT_/PERF_RECORD_/g' \
    -e 's/PERF_COUNTER/PERF_EVENT/g' \
    -e 's/perf_counter/perf_event/g' \
    -e 's/nb_counters/nb_events/g' \
    -e 's/swcounter/swevent/g' \
    -e 's/tpcounter_event/tp_event/g' \
    $FILES

  for N in $(find . -name perf_counter.[ch]); do
    M=$(echo $N | sed 's/perf_counter/perf_event/g')
    mv $N $M
  done

  FILES=$(find . -name perf_event.*)

  sed -i \
    -e 's/COUNTER_MASK/REG_MASK/g' \
    -e 's/COUNTER/EVENT/g' \
    -e 's/\<event\>/event_id/g' \
    -e 's/counter/event/g' \
    -e 's/Counter/Event/g' \
    $FILES

... to keep it as correct as possible. This script can also be used by anyone who has pending perfcounters patches - it converts a Linux kernel tree over to the new naming. We tried to time this change to the point in time where the amount of pending patches is the smallest: the end of the merge window.

Namespace clashes were fixed up in a preparatory patch - and some stylistic fallout will be fixed up in a subsequent patch.

( NOTE: 'counters' are still the proper terminology when we deal with hardware registers - and these sed scripts are a bit over-eager in renaming them. I've undone some of that, but in case there's something left where 'counter' would be better than 'event' we can undo that on an individual basis instead of touching an otherwise nicely automated patch. )

Suggested-by: Stephane Eranian <eranian@google.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Reviewed-by: Arjan van de Ven <arjan@linux.intel.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: <linux-arch@vger.kernel.org>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
- 13 Sep, 2009 1 commit
Jiri Olsa authored
Only 24 bytes need to be reserved on the stack for the function graph tracer on x86_64.

Signed-off-by: Jiri Olsa <jolsa@redhat.com>
LKML-Reference: <20090729085837.GB4998@jolsa.lab.eng.brq.redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
- 18 Jun, 2009 1 commit
Steven Rostedt authored
In case gcc does something funny with the stack frames, or with the return-from-function code, we would like to detect it. An arch may implement passing of a variable that is unique to the function, can be saved on entering the function, and can be tested when exiting it. Usually the frame pointer can be used for this purpose.

This patch also implements this for x86, where it passes in the stack frame of the parent function and tests that frame on exit.

There was a case on x86_32 with optimize for size (-Os) where, for a few functions, gcc would align the stack frame and place a copy of the return address into it. The function graph tracer modified the copy and not the actual return address. On return from the function, it did not go to the tracer hook but returned to the parent. This broke the function graph tracer, because the return of the parent (where gcc did not do this funky manipulation) returned to the location that the child function was supposed to. This caused strange kernel crashes. This test detected the problem and pointed out where the issue was.

This modifies the parameters of one of the functions that the arch-specific code calls, so it includes changes to arch code to accommodate the new prototype.

Note, I notice that the parisc arch implements its own push_return_trace. This is now a generic function and ftrace_push_return_trace should be used instead. This patch does not touch that code.

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
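A rough sketch of the cross-check this adds, condensed from the function graph tracer of that era (field and helper names simplified for illustration):

    struct ftrace_ret_stack {
            unsigned long ret;   /* the real return address */
            unsigned long func;
            unsigned long fp;    /* frame pointer recorded at entry */
    };

    /* entry hook (simplified): remember the caller's stack frame */
    static void graph_entry(unsigned long frame_pointer, int index)
    {
            current->ret_stack[index].fp = frame_pointer;
    }

    /* exit hook (simplified): a relocated return address makes the frames disagree */
    static void graph_exit(unsigned long frame_pointer, int index)
    {
            if (current->ret_stack[index].fp != frame_pointer) {
                    WARN(1, "Bad frame pointer: expected %lx, received %lx\n",
                         current->ret_stack[index].fp, frame_pointer);
                    ftrace_graph_stop();  /* shut the tracer down instead of crashing */
            }
    }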
- 03 Jun, 2009 3 commits
Andi Kleen authored
For some time, each panic() called with interrupts disabled triggered the !irqs_disabled() WARN_ON in smp_call_function(), producing ugly backtraces and confusing users. This is a common situation with machine checks, for example, which tend to call panic with interrupts disabled, but it will also hit in other situations, e.g. panic during early boot. In fact it means that panic cannot be called in many circumstances, which would be bad.

This all started with the new fancy queued smp_call_function, which is then used by the shutdown path to shut down the other CPUs. On closer examination it turned out that the fancy RCU smp_call_function() does lots of things not suitable in a panic situation anyway, like allocating memory and relying on complex system state.

I originally tried to patch this over by checking for panic there, but it was quite complicated and the original patch was also not very popular. This also didn't fix some of the underlying complexity problems. The new code in post-2.6.29 tries to patch around this by checking for oops_in_progress, but that is not enough to make this fully safe, and I don't think that's a real solution because panic has to be reliable.

So instead use a vector of our own to reboot. This makes the reboot code extremely straightforward, which is definitely a big plus in a panic situation where it is important to avoid relying on too much kernel state. The new simple code is also safe to call from interrupts-off regions because it is very, very simple.

There can be situations where it is important that panic is reliable. For example, on a fatal machine check the panic is needed to get the system up and running again as quickly as possible. So it's important that panic is reliable and all the functions it calls are simple. This is why I came up with this simple vector scheme. It's very hard to beat in simplicity. Vectors are not particularly precious anymore, since all big systems are using per-CPU vectors.

Another possibility would have been to use an NMI, similar to kdump, but there is still the problem that NMIs don't work reliably on some systems due to BIOS issues. NMIs would have been able to stop CPUs running with interrupts off too. For the sake of universal reliability I opted for a non-NMI vector for now.

I put the reboot vector into the highest-priority bucket of the APIC vectors and moved the 64-bit UV_BAU message down into the next lower priority instead.

[ Impact: bug fix, fixes an old regression ]

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
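The resulting code is small enough to quote in sketch form (close to what this patch adds; error handling and locking elided):

    /* runs on the dedicated, high-priority REBOOT_VECTOR */
    asmlinkage void smp_reboot_interrupt(void)
    {
            ack_APIC_irq();
            irq_enter();
            stop_this_cpu(NULL);  /* no allocation, no RCU, no complex state */
            irq_exit();
    }

    static void reboot_ipi_stop_cpus(void)
    {
            /* panic path: one reliable IPI halts every other CPU */
            apic->send_IPI_allbutself(REBOOT_VECTOR);
    }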
Andi Kleen authored
Machine checks support waking up the mcelog daemon quickly. The original wake-up code for this was pretty ugly, relying on an idle notifier and a special process flag. The reason it did it this way is that the machine check handler is not subject to normal interrupt locking rules, so it's not safe to call wake_up(). Instead it set a process flag and then did the wakeup either in the syscall return or in the idle notifier.

This patch adds a new "bootstrapping" method as a replacement. The idea is that the handler checks whether it's in a state where it is unsafe to call wake_up(). If it's safe, it calls it directly. When it's not safe -- that is, it interrupted a critical section with interrupts disabled -- it uses a new "self IPI" to trigger an IPI to its own CPU. This can be done safely because IPI triggers are atomic with some care. The IPI is raised once interrupts are re-enabled and can then safely call wake_up().

When APICs are disabled, the event is just queued and will be picked up eventually by the next polling timer. I think that's a reasonable compromise, since it should only happen quite rarely.

Contains fixes from Ying Huang.

[ solve conflict on irqinit, make it work on 32bit (entry_arch.h) - HS ]

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
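A hedged sketch of the decision the handler makes; the wait-queue and vector names here follow the conventions of that period but should be treated as illustrative:

    static void mce_wake_mcelog(void)
    {
            if (!irqs_disabled() && !in_atomic()) {
                    /* safe context: wake the daemon directly */
                    wake_up_interruptible(&mce_wait);
            } else {
                    /*
                     * Interrupted a critical section: raise a self IPI.
                     * It fires once interrupts are re-enabled, in a
                     * context where calling wake_up() is legal.
                     */
                    apic->send_IPI_self(MCE_SELF_VECTOR);
            }
    }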
Yong Wang authored
Remove the IRQ (non-NMI) handling bits, as NMI will always be used.

Signed-off-by: Yong Wang <yong.y.wang@intel.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: John Kacur <jkacur@redhat.com>
LKML-Reference: <20090603051255.GA2791@ywang-moblin2.bj.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
- 28 May, 2009 2 commits
Andi Kleen authored
Enable the 64-bit MCE_INTEL code (CMCI, thermal interrupts) for 32-bit NEW_MCE.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Andi Kleen authored
Allow calling different machine check handlers from the low-level machine check entry vector. This is needed later, when the vector will be used for 32-bit too.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
- 08 May, 2009 1 commit
Jeremy Fitzhardinge authored
Native x86-64 uses the IST mechanism to run int3 and debug traps on an alternative stack. Xen does not do this, and so the frames were being misinterpreted by the ptrace code. This change special-cases these two exceptions by using Xen variants which run on the normal kernel stack properly.

Impact: avoid crash or bad data when IST trap is invoked under Xen

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
- 18 Apr, 2009 1 commit
Steven Rostedt authored
I hit the check_flags error of lockdep:

  WARNING: at kernel/lockdep.c:2893 check_flags+0x1a7/0x1d0()
  [...]
  hardirqs last  enabled at (12567): [<ffffffff8026206a>] local_bh_enable+0xaa/0x110
  hardirqs last disabled at (12569): [<ffffffff80610c76>] int3+0x16/0x40
  softirqs last  enabled at (12566): [<ffffffff80514d2b>] lock_sock_nested+0xfb/0x110
  softirqs last disabled at (12568): [<ffffffff8058454e>] tcp_prequeue_process+0x2e/0xa0

The check_flags warning of lockdep tells me that lockdep thought interrupts were disabled, but they were really enabled. The numbers in the parentheses above show the order of events:

  12566: softirqs last enabled:  lock_sock_nested
  12567: hardirqs last enabled:  local_bh_enable
  12568: softirqs last disabled: tcp_prequeue_process
  12569: hardirqs last disabled: int3

int3 is a breakpoint! Examining this further, I have CONFIG_NET_TCPPROBE enabled, which adds breakpoints into the kernel. The paranoid_exit path on return from int3 does not account for enabling interrupts on return to kernel. This code is a bit tricky, since it is also used by the nmi handler (when lockdep is off), and we must be careful about the swapgs: we can not call kernel code after the swapgs has been performed.

[ Impact: fix lockdep check_flags warning + self-turn-off ]

Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
- 10 Apr, 2009 1 commit
Steven Rostedt authored
Impact: speed up

The return-to-handler portion of the function graph tracer should only need to save the return values. The caller already saved off the registers that the callee can modify. The returning function already saved the registers it modified. When we call our own trace function, it too will save the registers that the callee must restore. There's no reason to save off anything more than the registers used to return the values.

Note, I did a complete kernel build with this modification and the function graph tracer running on x86_64.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
- 07 Apr, 2009 1 commit
Peter Zijlstra authored
Implement set_perf_counter_pending() with a self-IPI so that it will run ASAP in a usable context.

For now use a second IRQ vector, because the primary vector pokes the apic in funny ways that seem to confuse things.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <20090406094517.724626696@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
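In sketch form the mechanism is a one-liner (vector name as used in kernels of that period):

    static inline void set_perf_counter_pending(void)
    {
            /* poke ourselves; the handler runs as soon as IRQs allow */
            apic->send_IPI_self(LOCAL_PENDING_VECTOR);
    }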
- 12 Mar, 2009 2 commits
Jan Beulich authored
Impact: mark save_paranoid as non-kprobe-able code

This appears to be necessary, as the function gets called from kprobes-unsafe exception handling stubs (i.e. ones which themselves live in .kprobes.text).

Signed-off-by: Jan Beulich <jbeulich@novell.com>
LKML-Reference: <49B8F44F.76E4.0078.0@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Jan Beulich authored
Impact: cleanup

These got left in needlessly when ret_from_fork got simplified.

Signed-off-by: Jan Beulich <jbeulich@novell.com>
LKML-Reference: <49B8F355.76E4.0078.0@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
- 04 Mar, 2009 1 commit
Dimitri Sivanich authored
This patch allocates a system interrupt vector for various platform-specific uses.

Signed-off-by: Dimitri Sivanich <sivanich@sgi.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: john stultz <johnstul@us.ibm.com>
LKML-Reference: <20090304185605.GA24419@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
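A minimal sketch of how such a vector gets wired up at boot; the init-function name is illustrative, the stub name follows the generic_interrupt naming this patch introduced:

    static void __init platform_vector_init(void)
    {
            /* route the reserved vector to its low-level entry stub */
            alloc_intr_gate(GENERIC_INTERRUPT_VECTOR, generic_interrupt);
    }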
- 24 Feb, 2009 2 commits
Cyrill Gorcunov authored
Impact: cleanup

Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: heukelum@fastmail.fm
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cyrill Gorcunov authored
native_usergs_sysret64 is described as

  extern void native_usergs_sysret64(void)

so let's add ENDPROC here.

Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: heukelum@fastmail.fm
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
- 13 Feb, 2009 1 commit
Jeremy Fitzhardinge authored
In general, the only definitions that assembly files can use are in _types.S headers (where available), so convert them.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
- 03 Feb, 2009 1 commit
Martin Hicks authored
Impact: Fixes dumpstack and KDB on 64 bits

This re-adds the old stack pointer to the top of the irqstack to help with unwinding. It was removed in commit d99015b1 as part of the save_args out-of-line work. Both dumpstack and KDB require this information.

Signed-off-by: Martin Hicks <mort@sgi.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
- 30 Jan, 2009 1 commit
Jeremy Fitzhardinge authored
Impact: Fix latent bug

The clobber is trying to say that anything except RDI is available for clobbering, but it actually clobbers everything. This hasn't mattered because the clobbers were basically ignored, but subsequent patches will rely on them.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
- 21 Jan, 2009 1 commit
Nick Piggin authored
Make X86 SGI Ultraviolet support configurable. Saves about 13K of text size on my modest config:

     text    data     bss      dec     hex  filename
  6770537 1158680  694356  8623573  8395d5  vmlinux
  6757492 1157664  694228  8609384  835e68  vmlinux.nouv

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
- 18 Jan, 2009 4 commits
Brian Gerst authored
tj: s/irqcount/irq_count/

Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Brian Gerst authored
tj: * in asm-offsets_64.c, pda.h inclusion shouldn't be removed as pda is still referenced in the file
    * s/oldrsp/old_rsp/

Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Brian Gerst authored
Also clean up PER_CPU_VAR usage in xen-asm_64.S.

tj: * remove now-unused stack_thread_info()
    * s/kernelstack/kernel_stack/
    * added FIXME comment in xen-asm_64.S

Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Brian Gerst authored
Move the irqstackptr variable from the PDA to per-cpu. Make the stacks themselves per-cpu, removing some specific allocation code. Add a separate flag (is_boot_cpu) to simplify the per-cpu boot adjustments.

tj: * sprinkle some underbars around.
    * irq_stack_ptr is not used till traps_init(), no reason to initialize it early. On SMP, just leaving it NULL till proper initialization in setup_per_cpu_areas() works. Dropped is_boot_cpu and early irq_stack_ptr initialization.
    * do DECLARE/DEFINE_PER_CPU(char[IRQ_STACK_SIZE], irq_stack) instead of (char, irq_stack[IRQ_STACK_SIZE]).

Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
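The per-cpu form noted above looks roughly like this sketch (initialization placement per the tj note; not the verbatim source):

    /* one IRQ stack per CPU, replacing the PDA field and bespoke allocation */
    DEFINE_PER_CPU(char[IRQ_STACK_SIZE], irq_stack);

    /* stays NULL until setup_per_cpu_areas(); first needed in traps_init() */
    DEFINE_PER_CPU(char *, irq_stack_ptr);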
- 16 Jan, 2009 1 commit
Tejun Heo authored
Now that the pda is allocated as part of percpu, percpu doesn't need to be accessed through the pda. Unify x86_64 SMP percpu access with the x86_32 SMP one. Other than the segment register, operand size and the base of the percpu symbols, they now behave identically.

This patch replaces the now unnecessary pda->data_offset with a dummy field, which is necessary to keep stack_canary at its place. It also moves per_cpu_offset initialization out of init_gdt() into setup_per_cpu_areas(). Note that this change also necessitates explicit per_cpu_offset initializations in voyager_smp.c.

With this change, x86_OP_percpu()'s are as efficient on x86_64 as on x86_32, and x86_64 can also use the assembly PER_CPU macros.

Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
- 11 Jan, 2009 1 commit
Benjamin LaHaise authored
Impact: micro-optimization

This patch removes an unnecessary locked instruction from switch_to(). TIF_FORK is only ever set in copy_thread() on initial process creation, and gets cleared during the first scheduling of the process. As such, it is safe to use an unlocked test for the flag within switch_to().

Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
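Conceptually, the change trades an atomic read-modify-write for a plain load. The real test lives in the switch_to() assembly; this C rendering is purely illustrative:

    static bool take_fork_path(struct task_struct *next)
    {
            /*
             * Before: test_and_clear_bit(TIF_FORK, ...) -- a locked RMW
             * on every context switch.
             *
             * After: a plain read suffices. TIF_FORK is set exactly once,
             * in copy_thread(), and cleared during the task's first
             * scheduling, so no other CPU can race with this test.
             */
            return task_thread_info(next)->flags & _TIF_FORK;
    }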
- 16 Dec, 2008 1 commit
Cyrill Gorcunov authored
Impact: clean up

Introduce MCOUNT_SAVE/RESTORE_FRAME, which allow us to save a number of lines at the source level. Also fix a comment in ftrace.h.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
- 08 Dec, 2008 1 commit
Ingo Molnar authored
Implement performance counters for x86 Intel CPUs. It's simplified right now: the PERFMON CPU feature is assumed, which is available in Core2 and later Intel CPUs. The design is flexible enough to be extended to more CPU types as well.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
- 03 Dec, 2008 2 commits
Steven Rostedt authored
Impact: feature, let entry function decide to trace or not

This patch lets the graph tracer entry function decide whether the tracing should be done at the return as well. This requires all function graph entry functions to return 1 if the return should be traced, or 0 if it should not.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
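Under the new contract, an entry callback looks like this sketch (the filter helper shown is an assumption about the surrounding code, not part of this patch):

    int trace_graph_entry(struct ftrace_graph_ent *trace)
    {
            if (!ftrace_trace_task(current))
                    return 0;  /* skip: the return won't be traced either */

            return 1;          /* record the matching function exit too */
    }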
Steven Rostedt authored
Impact: consistency change for function graph

This patch makes the function graph tracer record the mcount caller address the same way the function tracer does.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
- 02 Dec, 2008 1 commit
Frederic Weisbecker authored
Impact: extend and enable the function graph tracer to 64-bit x86

This patch implements support for the function graph tracer under x86-64. Both static and dynamic tracing are supported. It adds some small CPP conditional asm to arch/x86/kernel/ftrace.c.

I wanted to use probe_kernel_read/write to make the return address saving/patching code more generic, but it causes tracing recursion. It would perhaps be useful to implement a notrace version of these functions for other arch ports.

Note that arch/x86/process_64.c is not traced, as on x86-32. I first thought __switch_to() was responsible for crashes during tracing because I believed the current task was changed inside it, but that's actually not the case (actually yes, but not the "current" pointer). So I will have to investigate to find the functions that harm here, to enable tracing of the other functions inside (but there is no issue at this time, while process_64.c stays out of -pg flags).

A small possible race condition is fixed inside this patch too. When the tracer allocates a return stack dynamically, the current depth is not initialized before, but after. An interrupt could occur at this time and, after seeing that the return stack is allocated, the tracer could try to trace it with a random uninitialized depth. It's a precaution, even if I haven't hit problems with it.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Tim Bird <tim.bird@am.sony.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
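The race fix reduces to ordering two stores. A sketch of the idea, not the verbatim patch:

    static void publish_ret_stack(struct task_struct *t,
                                  struct ftrace_ret_stack *stack)
    {
            t->curr_ret_stack = -1;  /* make the depth sane first */
            barrier();               /* keep the publish below ordered after it */
            t->ret_stack = stack;    /* interrupts may trace from here on */
    }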
- 28 Nov, 2008 3 commits
Ingo Molnar authored
Impact: remove dead code

If we take a closer look at the rff_trace/rff_action ret_from_fork code, we have to realize that it does all the wrong things: for example it checks the TIF flag - while later on jumping back to the ret-from-syscall path - duplicating the check needlessly.

But checking for _TIF_SYSCALL_TRACE is completely unnecessary here because we clear that flag for every freshly forked task. So the whole "tracing" code here, for which there is an out-of-line jump optimization that makes it even harder to read, is in reality completely dead code ...

Reported-by: Cyrill Gorcunov <gorcunov@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Tested-by: Cyrill Gorcunov <gorcunov@gmail.com>
Cyrill Gorcunov authored
Impact: cleanup

Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cyrill Gorcunov authored
Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
Acked-by: Cliff Wickman <cpw@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
- 27 Nov, 2008 2 commits
Cyrill Gorcunov authored
child_rip is called not by its name but rather indirectly, so make it global and aligned.

Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
Acked-by: Alexander van Heukelum <heukelum@fastmail.fm>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cyrill Gorcunov authored
Impact: cleanup

Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
Acked-by: Alexander van Heukelum <heukelum@fastmail.fm>
Signed-off-by: Ingo Molnar <mingo@elte.hu>