- 28 Apr, 2017 15 commits
-
-
Nicholas Piggin authored
Move a couple of existing scripts under there. Remove the scripts directory: a script is a tool, but a tool is not necessarily a script. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Nicholas Piggin authored
Currently powerpc has to introduce a dependency on its default build target zImage in order to run a relocation check pass over the linked vmlinux. This is deficient because the check is not run if the plain vmlinux target is built, or if one of the other boot targets is built. Switch to using the kbuild post-link pass, added in commit fbe6e37d ("kbuild: add arch specific post-link Makefile") in order to run this check. In future powerpc will use this to do more complicated operations, but initially using it for something simple is a good first step. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Nicholas Piggin authored
An externally triggered system reset (e.g., via QEMU nmi command, or pseries reset button) can cause system reset interrupts on all CPUs. In case this causes xmon to be entered, it is undesirable for the primary (first) CPU into xmon to trigger an NMI IPI to others, because this may cause a nested system reset interrupt. So spin for a time waiting for secondaries to join xmon before performing the NMI IPI, similarly to what the crash dump code does. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> [mpe: Only do it when we come in from system reset, not via sysrq etc.] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Nicholas Piggin authored
Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Nicholas Piggin authored
Have the NMI IPI code use this op when the platform defines it. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Nicholas Piggin authored
Add a simple NMI IPI system that handles concurrency and reentrancy. The platform does not have to implement a true non-maskable interrupt; the default is to simply use the debugger break IPI message, which has now been co-opted for general NMI IPI use. Its existing users (debugger and crash) have been reimplemented on top of the NMI system. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> [mpe: Incorporate incremental fixes from Nick] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
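A rough sketch of how a caller uses the new facility (the function name and the NMI_IPI_ALL_OTHERS constant follow this series; the handler body and timeout value here are illustrative, not the actual users):

    /* Runs on each target CPU in NMI context: keep it minimal,
     * no sleeping and no ordinary locks. */
    static void nmi_dump_handler(struct pt_regs *regs)
    {
            show_regs(regs);
    }

    static void nmi_all_others(void)
    {
            /* Target every CPU but this one; the last argument bounds
             * how long (in microseconds) we wait for delivery. */
            smp_send_nmi_ipi(NMI_IPI_ALL_OTHERS, nmi_dump_handler, 1000000);
    }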
-
Nicholas Piggin authored
System reset is a non-maskable interrupt from Linux's point of view (occurs under local_irq_disable()), so it should use nmi_enter/exit. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
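In outline, the change wraps the handler like the following sketch (the real function also deals with recoverability and platform hooks):

    void system_reset_exception(struct pt_regs *regs)
    {
            /* System reset is not masked by local_irq_disable(), so
             * account for it as NMI context (preempt count, RCU,
             * lockdep state). */
            nmi_enter();

            /* ... platform system reset / debugger / crash handling ... */

            nmi_exit();
    }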
-
Nicholas Piggin authored
The system reset interrupt is used for crash/debug situations, so it is desirable to have as little impact on the normal state of the system as possible. Currently it uses the current kernel stack to process the exception. That writes to the stack, which may itself be involved in the crash, and the stack pointer may be corrupted or may have overflowed. Avoid or minimise these problems by creating a dedicated NMI stack for the system reset interrupt to use. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Nicholas Piggin authored
In preparation for using a dedicated stack for system reset interrupts, prevent a nested system reset from recovering, in order to simplify code that is called in the crash/debug path. This allows a system reset interrupt to just use the base stack pointer. Keep an in_nmi nesting counter, similar to the in_mce counter. Consider the interrupt non-recoverable if it is taken inside another system reset. Interrupt nesting could be allowed similarly to MCE, but system reset is a special case that's not for normal operation, so simplicity wins until there is a requirement for nested system reset interrupts. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
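The nesting check amounts to something like this sketch (the in_nmi field name comes from the commit text; keeping it in the paca and the panic path shown are assumptions for illustration):

    if (local_paca->in_nmi > 1) {
            /* A system reset arrived while one was already being
             * handled on this CPU: the dedicated stack is live, so
             * treat the interrupt as non-recoverable. */
            nmi_panic(regs, "Unrecoverable nested System Reset");
    }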
-
Nicholas Piggin authored
The system reset interrupt can occur when MSR_EE=0, and it currently uses the PACA_EXGEN save area. Some PACA_EXGEN interrupts have a window where MSR_RI=1 and MSR_EE=0 when the save area is still in use. A system reset interrupt in this window can lead to undetected corruption when the save area gets overwritten. This patch introduces PACA_EXNMI save area for system reset exceptions, which closes this corruption window. It's also helpful to retain the EXGEN state for debugging situations, even if not considering the recoverability aspect. This patch also moves the PACA_EXMC area down to a less frequently used part of the paca with the new save area. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Nicholas Piggin authored
This code is common to a few exceptions, and another user will be added. This causes a trivial change to generated code:

  -  604:  std     r9,416(r1)
  -  608:  mfspr   r11,314
  -  60c:  std     r11,368(r1)
  -  610:  mfspr   r12,315
  +  604:  mfspr   r11,314
  +  608:  mfspr   r12,315
  +  60c:  std     r9,416(r1)
  +  610:  std     r11,368(r1)

machine_check_powernv_early could also use this, but that requires non-trivial changes to generated code, so that's for another patch. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Nicholas Piggin authored
Subsequent patches will add more non-RI variant exceptions, so create a macro for it rather than open-code it. This does not change generated instructions. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Nicholas Piggin authored
Cell will wake from low power state at the system reset interrupt, with the event encoded in SRR1, rather than waking at the interrupt vector that corresponds to that event. The system reset handler for this platform decodes the SRR1 event reason and calls the interrupt handler to process it directly from the system reset handler. A subsequent change will treat the system reset interrupt as a Linux NMI with its own per-CPU stack, and this will no longer work. Remove the external and decrementer handlers from the system reset handler:
- The external exception remains raised and will fire again at the EE interrupt vector when the system reset returns.
- The decrementer is set to 1 so it will be raised again and fire when the system reset returns.
It is possible to branch to an idle handler from the system reset interrupt (like POWER does), then restore a normal stack and restore this optimisation. But simplicity wins for now. Tested-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Nicholas Piggin authored
PA Semi will wake from low power state at the system reset interrupt, with the event encoded in SRR1, rather than waking at the interrupt vector that corresponds to that event. The system reset handler for this platform decodes the SRR1 event reason and calls the interrupt handler to process it directly from the system reset handler. A subsequent change will treat the system reset interrupt as a Linux NMI with its own per-CPU stack, and this will no longer work. Remove the external and decrementer handlers from the system reset handler:
- The external exception remains raised and will fire again at the EE interrupt vector when the system reset returns.
- The decrementer is set to 1 so it will be raised again and fire when the system reset returns.
It is possible to branch to an idle handler from the system reset interrupt (like POWER does), then restore a normal stack and restore this optimisation. But simplicity wins for now. Tested-by: Christian Zigotzky <chzigotzky@xenosoft.de> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Michael Ellerman authored
Merge the topic branch we were sharing with kvm-ppc, Paul has also merged it.
-
- 27 Apr, 2017 8 commits
-
-
Naveen N. Rao authored
Split ftrace_64.S further retaining the core ftrace 64-bit aspects in ftrace_64.S and moving ftrace_caller() and ftrace_graph_caller() into separate files based on -mprofile-kernel. The livepatch routines are all now contained within the mprofile file. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Naveen N. Rao authored
entry_*.S now includes a lot more than just kernel entry/exit code. As a first step at cleaning this up, let's split out the ftrace bits into separate files. Also move all related tracing code into a new trace/ subdirectory. No functional changes. Suggested-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Christophe Leroy authored
The page table dump debugfs file is named 'kernel_page_tables' on all other architectures implementing it, while it is named 'kernel_pagetables' on powerpc. This patch renames it. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Christophe Leroy authored
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Christophe Leroy authored
On some targets, _PAGE_RW is 0 and it is _PAGE_RO which is used instead. _PAGE_SHARED is also missing. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Christophe Leroy authored
On PPC32 (eg. mpc885_ads_defconfig), page table dump compilation fails as follows, because the memory layout is slightly different on PPC32. This patch adapts it.

  arch/powerpc/mm/dump_linuxpagetables.c: In function 'walk_pagetables':
  arch/powerpc/mm/dump_linuxpagetables.c:369:10: error: 'KERN_VIRT_START' undeclared (first use in this function)
  ...

Fixes: 8eb07b18 ("powerpc/mm: Dump linux pagetables") Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Aneesh Kumar K.V authored
_tlbiel_pid() is called with a ric (Radix Invalidation Control) argument of either RIC_FLUSH_TLB or RIC_FLUSH_ALL. RIC_FLUSH_ALL says to invalidate the entire TLB and the Page Walk Cache (PWC). To flush the whole TLB, we have to iterate over each set (congruence class) of the TLB. Currently we do that and pass RIC_FLUSH_ALL each time. That is not incorrect but it means we flush the PWC 128 times, when once would suffice. Fix it by doing the first flush with the ric value we're passed, and then if it was RIC_FLUSH_ALL, we downgrade it to RIC_FLUSH_TLB, because we know we have just flushed the PWC and don't need to do it again. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> [mpe: Split out of combined patch, tweak logic, rewrite change log] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
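The fixed loop looks roughly like this sketch (condensed; __tlbiel_pid() is assumed to issue the actual tlbiel instruction for one set):

    static inline void _tlbiel_pid(unsigned long pid, unsigned long ric)
    {
            int set;

            asm volatile("ptesync" : : : "memory");

            /* First flush uses the caller's RIC, possibly RIC_FLUSH_ALL. */
            __tlbiel_pid(pid, 0, ric);

            /* The PWC is now clean; the remaining sets only need the TLB. */
            if (ric == RIC_FLUSH_ALL)
                    ric = RIC_FLUSH_TLB;

            for (set = 1; set < POWER9_TLB_SETS_RADIX; set++)
                    __tlbiel_pid(pid, set, ric);

            asm volatile("ptesync" : : : "memory");
    }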
-
Aneesh Kumar K.V authored
Currently we implement flushing of the page walk cache (PWC) by calling _tlbiel_pid() with a RIC (Radix Invalidation Control) value of 1 which says to only flush the PWC. But _tlbiel_pid() loops over each set (congruence class) of the TLB, which is not necessary when we're just flushing the PWC. In fact the set argument is ignored for a PWC flush, so essentially we're just flushing the PWC 127 extra times for no benefit. Fix it by adding tlbiel_pwc() which just does a single flush of the PWC. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> [mpe: Split out of combined patch, drop _ in name, rewrite change log] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
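The new helper is essentially a single-operation flush; a sketch:

    static inline void tlbiel_pwc(unsigned long pid)
    {
            asm volatile("ptesync" : : : "memory");

            /* The set argument is ignored for a PWC-only flush, so one
             * operation replaces the old loop over every TLB set. */
            __tlbiel_pid(pid, 0, RIC_FLUSH_PWC);

            asm volatile("ptesync" : : : "memory");
    }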
-
- 26 Apr, 2017 3 commits
-
-
Michael Ellerman authored
Recently we merged the native xive support for Power9, and then separately some reworks for doorbell IPI support. In isolation both series were OK, but the merged result had a bug in one case. On P9 DD1 we use pnv_p9_dd1_cause_ipi() which tries to use doorbells, and then falls back to the interrupt controller. However, the fallback is implemented by calling icp_ops->cause_ipi. But now that xive support is merged we might be using xive, in which case icp_ops is not initialised, as it's a xics-specific structure. This leads to an oops such as:

  Unable to handle kernel paging request for data at address 0x00000028
  Oops: Kernel access of bad area, sig: 11 [#1]
  NIP pnv_p9_dd1_cause_ipi+0x74/0xe0
  LR smp_muxed_ipi_message_pass+0x54/0x70

To fix it, rather than using icp_ops, which might be NULL, have both xics and xive set smp_ops->cause_ipi, and then in the powernv code we save that as ic_cause_ipi before overriding smp_ops->cause_ipi. For paranoia add a WARN_ON() to check if somehow smp_ops->cause_ipi is NULL. Fixes: b866cc21 ("powerpc: Change the doorbell IPI calling convention") Tested-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
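The resulting fallback looks roughly like this sketch (the same-core test is simplified; on P9 DD1 doorbells only reach threads of the same core):

    /* Interrupt-controller IPI method (xics or xive), saved at init
     * before smp_ops->cause_ipi is overridden. */
    static void (*ic_cause_ipi)(int cpu);

    static void pnv_p9_dd1_cause_ipi(int cpu)
    {
            int this_cpu = get_cpu();

            if (cpumask_test_cpu(cpu, cpu_sibling_mask(this_cpu)))
                    doorbell_global_ipi(cpu);   /* same core: doorbell */
            else
                    ic_cause_ipi(cpu);          /* fall back to xics/xive */

            put_cpu();
    }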
-
Michael Ellerman authored
In opal_export_attrs() we dynamically allocate some bin_attributes. They're allocated with kmalloc() and although we initialise most of the fields, we don't initialise write() or mmap(), and in particular we don't initialise the lockdep related fields in the embedded struct attribute. This leads to a lockdep warning at boot:

  BUG: key c0000000f11906d8 not in .data!
  WARNING: CPU: 0 PID: 1 at ../kernel/locking/lockdep.c:3136 lockdep_init_map+0x28c/0x2a0
  ...
  Call Trace:
    lockdep_init_map+0x288/0x2a0 (unreliable)
    __kernfs_create_file+0x8c/0x170
    sysfs_add_file_mode_ns+0xc8/0x240
    __machine_initcall_powernv_opal_init+0x60c/0x684
    do_one_initcall+0x60/0x1c0
    kernel_init_freeable+0x2f4/0x3d4
    kernel_init+0x24/0x160
    ret_from_kernel_thread+0x5c/0xb0

Fix it by kzalloc'ing the attr, which fixes the uninitialised write() and mmap(), and calling sysfs_bin_attr_init() on it to initialise the lockdep fields. Fixes: 11fe909d ("powerpc/powernv: Add OPAL exports attributes to sysfs") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
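A sketch of the fixed allocation path (the attribute name, mode, and the opal_export_read helper are illustrative, not the exact code):

    attr = kzalloc(sizeof(*attr), GFP_KERNEL);
    if (!attr)
            continue;

    /* kzalloc() leaves .write and .mmap NULL, and sysfs_bin_attr_init()
     * initialises the lockdep class key of the embedded struct
     * attribute, silencing the "key not in .data" warning. */
    sysfs_bin_attr_init(attr);
    attr->attr.name = name;
    attr->attr.mode = 0400;
    attr->read = opal_export_read;  /* assumed helper name */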
-
Michael Ellerman authored
The recent patch to add runtime configuration of the ASLR limits added a bug in arch_mmap_rnd() where we may shift an integer (32-bits) by up to 33 bits, leading to undefined behaviour. In practice it exhibits as every process seg faulting instantly, presumably because the rnd value hasn't been restricted by the modulus at all. We didn't notice because it only happens under certain kernel configurations and if the number of bits is actually set to a large value. Fix it by switching to unsigned long. Fixes: 9fea59bd ("powerpc/mm: Add support for runtime configuration of ASLR limits") Reported-by: Balbir Singh <bsingharora@gmail.com> Reviewed-by: Kees Cook <keescook@chromium.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
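Condensed, the bug and fix in arch_mmap_rnd() look like:

    /* Before: '1' is a 32-bit int, so a shift of 32 or more bits is
     * undefined behaviour and the modulus may not restrict rnd at all. */
    rnd = get_random_long() % (1 << mmap_rnd_bits);

    /* After: do the shift in unsigned long (64-bit) arithmetic. */
    rnd = get_random_long() % (1UL << mmap_rnd_bits);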
-
- 24 Apr, 2017 9 commits
-
-
David Gibson authored
powerpc expects IRQs to already be (soft) disabled when switch_mm() is called, as made clear in the commit message of 9c1e1052 ("powerpc: Allow perf_counters to access user memory at interrupt time"). Aside from any race conditions that might exist between switch_mm() and an IRQ, there is also an unconditional hard_irq_disable() in switch_slb(). If that isn't followed at some point by an IRQ enable then interrupts will remain disabled until we return to userspace. It is true that when switch_mm() is called from the scheduler IRQs are off, but not when it's called by use_mm(). Looking closer we see that last year in commit f98db601 ("sched/core: Add switch_mm_irqs_off() and use it in the scheduler") this was made more explicit by the addition of switch_mm_irqs_off() which is now called by the scheduler, vs switch_mm() which is used by use_mm(). Arguably it is a bug in use_mm() to call switch_mm() in a different context than it expects, but fixing that will take time. This was discovered recently when vhost started throwing warnings such as:

  BUG: sleeping function called from invalid context at kernel/mutex.c:578
  in_atomic(): 0, irqs_disabled(): 1, pid: 10768, name: vhost-10760
  no locks held by vhost-10760/10768.
  irq event stamp: 10
  hardirqs last enabled at (9): _raw_spin_unlock_irq+0x40/0x80
  hardirqs last disabled at (10): switch_slb+0x2e4/0x490
  softirqs last enabled at (0): copy_process+0x5e8/0x1260
  softirqs last disabled at (0): (null)
  Call Trace:
    show_stack+0x88/0x390 (unreliable)
    dump_stack+0x30/0x44
    __might_sleep+0x1c4/0x2d0
    mutex_lock_nested+0x74/0x5c0
    cgroup_attach_task_all+0x5c/0x180
    vhost_attach_cgroups_work+0x58/0x80 [vhost]
    vhost_worker+0x24c/0x3d0 [vhost]
    kthread+0xec/0x100
    ret_from_kernel_thread+0x5c/0xd4

Prior to commit 04b96e55 ("vhost: lockless enqueuing") (Aug 2016) the vhost_worker() would do a spin_unlock_irq() not long after calling use_mm(), which had the effect of reenabling IRQs. Since that commit removed the locking in vhost_worker() the body of the vhost_worker() loop now runs with interrupts off causing the warnings. This patch addresses the problem by making the powerpc code mirror the x86 code, ie. we disable interrupts in switch_mm(), and optimise the scheduler case by defining switch_mm_irqs_off(). Cc: stable@vger.kernel.org # v4.7+ Signed-off-by: David Gibson <david@gibson.dropbear.id.au> [mpe: Flesh out/rewrite change log, add stable] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
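In sketch form, the arrangement mirrors x86: the old switch_mm() body becomes switch_mm_irqs_off(), and switch_mm() becomes an IRQ-disabling wrapper:

    static inline void switch_mm_irqs_off(struct mm_struct *prev,
                                          struct mm_struct *next,
                                          struct task_struct *tsk)
    {
            /* previous body of switch_mm(), which assumed IRQs off;
             * the scheduler calls this directly */
    }

    static inline void switch_mm(struct mm_struct *prev,
                                 struct mm_struct *next,
                                 struct task_struct *tsk)
    {
            unsigned long flags;

            /* use_mm() calls this with IRQs enabled: disable them here */
            local_irq_save(flags);
            switch_mm_irqs_off(prev, next, tsk);
            local_irq_restore(flags);
    }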
-
Tyrel Datwyler authored
For CPUs present at boot each logical CPU acquires a reference to the associated device node of the core. This happens in register_cpu() which is called by topology_init(). The result of this is that we end up with a reference held by each thread of the core. However, these references are never freed if the CPU core is DLPAR removed. This patch fixes the reference leaks by acquiring and releasing the references in the CPU hotplug callbacks un/register_cpu_online(). With this patch symmetric reference counting is observed with both CPUs present at boot, and those DLPAR added after boot. Fixes: f86e4718 ("driver/core: cpu: initialize of_node in cpu's device struture") Cc: stable@vger.kernel.org # v3.12+ Signed-off-by: Tyrel Datwyler <tyreld@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
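A condensed sketch of the paired callbacks (function and variable names per arch/powerpc/kernel/sysfs.c; bodies elided):

    static int register_cpu_online(unsigned int cpu)
    {
            struct cpu *c = &per_cpu(cpu_devices, cpu);
            struct device *s = &c->dev;

            /* of_get_cpu_node() returns the node with a reference held. */
            s->of_node = of_get_cpu_node(cpu, NULL);
            /* ... create the per-CPU sysfs files ... */
            return 0;
    }

    static int unregister_cpu_online(unsigned int cpu)
    {
            struct cpu *c = &per_cpu(cpu_devices, cpu);
            struct device *s = &c->dev;

            /* ... remove the per-CPU sysfs files ... */

            /* Drop the reference so DLPAR remove can free the node. */
            of_node_put(s->of_node);
            s->of_node = NULL;
            return 0;
    }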
-
Tyrel Datwyler authored
Historically struct device_node references were tracked using a kref embedded as a struct field. Commit 75b57ecf ("of: Make device nodes kobjects so they show up in sysfs") (Mar 2014) refactored device_nodes to be kobjects such that the device tree could be more simply exposed to userspace using sysfs. Commit 0829f6d1 ("of: device_node kobject lifecycle fixes") (Mar 2014) followed up these changes to better control the kobject lifecycle and in particular the reference counting via of_node_get(), of_node_put(), and of_node_init(). A result of this second commit was that it introduced an of_node_put() call when a dynamic node is detached, in of_node_remove(), that removes the initial kobj reference created by of_node_init(). Traditionally, as the original dynamic device node user, the pseries code had assumed responsibility for releasing this final reference in its platform-specific DLPAR detach code. This patch fixes a refcount underflow introduced by commit 0829f6d1, and recently exposed by the upstreaming of the refcount API. Messages like the following are no longer seen in the kernel log with this patch following DLPAR remove operations of cpus and pci devices:

  rpadlpar_io: slot PHB 72 removed
  refcount_t: underflow; use-after-free.
  ------------[ cut here ]------------
  WARNING: CPU: 5 PID: 3335 at lib/refcount.c:128 refcount_sub_and_test+0xf4/0x110

Fixes: 0829f6d1 ("of: device_node kobject lifecycle fixes") Cc: stable@vger.kernel.org # v3.15+ Signed-off-by: Tyrel Datwyler <tyreld@linux.vnet.ibm.com> [mpe: Make change log commit references more verbose] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Michael Ellerman authored
Currently the code that dumps SLB entries uses a double-nested if. This means the actual dumping logic is a bit squashed. Deindent it by using continue. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Rashmica Gupta <rashmica.g@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
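The pattern is the usual early-continue deindent; schematically:

    /* Before: the dump logic sits inside a double-nested if. */
    for (i = 0; i < mmu_slb_size; i++) {
            /* ... read esid/vsid for entry i ... */
            if (esid || vsid) {
                    /* decode and print the entry */
            }
    }

    /* After: skip empty entries up front, dump logic at one level. */
    for (i = 0; i < mmu_slb_size; i++) {
            /* ... read esid/vsid for entry i ... */
            if (!esid && !vsid)
                    continue;
            /* decode and print the entry */
    }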
-
Michael Ellerman authored
Although most of these kprobes patches are powerpc specific, there's a couple that touch generic code (with Acks). At the moment there's one conflict with acme's tree, but it's not too bad. Still just in case some other conflicts show up, we've put these in a topic branch so another tree could merge some or all of it if necessary.
-
Naveen N. Rao authored
KPROBES_ON_FTRACE avoids much of the overhead of regular kprobes as it eliminates the need for a trap, as well as the need to emulate or single-step instructions. Though OPTPROBES provides us with similar performance, we have limited optprobes trampoline slots. As such, when asked to probe at a function entry, default to using the ftrace infrastructure. With:

  # cd /sys/kernel/debug/tracing
  # echo 'p _do_fork' > kprobe_events

before patch:

  # cat ../kprobes/list
  c0000000000daf08  k  _do_fork+0x8    [DISABLED]
  c000000000044fc0  k  kretprobe_trampoline+0x0    [OPTIMIZED]

and after patch:

  # cat ../kprobes/list
  c0000000000d074c  k  _do_fork+0xc    [DISABLED][FTRACE]
  c0000000000412b0  k  kretprobe_trampoline+0x0    [OPTIMIZED]

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Naveen N. Rao authored
kprobe_lookup_name() is specific to the kprobe subsystem and may not always return the function entry point (in a subsequent patch for KPROBES_ON_FTRACE). For looking up function entry points, introduce a separate helper and use it in optprobes.c Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Naveen N. Rao authored
Allow kprobes to be placed on ftrace _mcount() call sites. This optimization avoids the use of a trap, by riding on ftrace infrastructure. This depends on HAVE_DYNAMIC_FTRACE_WITH_REGS which depends on MPROFILE_KERNEL, which is only currently enabled on powerpc64le with newer toolchains. Based on the x86 code by Masami. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Naveen N. Rao authored
Pass the real LR to the ftrace handler. This is needed for KPROBES_ON_FTRACE for the pre-handlers. Also, with KPROBES_ON_FTRACE, the link register may be updated by the pre-handlers or by a registered kretprobe. Honor the updated LR by restoring it from pt_regs, rather than from the stack save area. Live patch and function graph continue to work fine with this change. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 23 Apr, 2017 5 commits
-
-
Naveen N. Rao authored
Blacklist all the exception common/OOL handlers as the kernel stack is not yet setup, which means we can't take a trap at this point. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Naveen N. Rao authored
Introduce __head_end to mark end of the early fixed sections and use it to blacklist all exception handlers from kprobes. mpe: We do not need to do anything special for relocatable kernels, where the exception vectors are split from the main kernel, as the split vectors are already excluded by the check for kernel_text_address(). Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> [mpe: Move __head_end outside #ifdef 64-bit to unbreak the 32-bit build] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
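The blacklist check ends up shaped like this sketch (__head_end per the commit; treating the other symbols and the exact function as assumptions from the standard kprobes hook):

    bool arch_within_kprobe_blacklist(unsigned long addr)
    {
            /* Exclude .kprobes.text, plus everything between the start
             * of the kernel and the end of the early fixed sections. */
            return (addr >= (unsigned long)__kprobes_text_start &&
                    addr < (unsigned long)__kprobes_text_end) ||
                   (addr >= (unsigned long)_stext &&
                    addr < (unsigned long)__head_end);
    }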
-
Naveen N. Rao authored
Along similar lines as commit 9326638c ("kprobes, x86: Use NOKPROBE_SYMBOL() instead of __kprobes annotation"), convert __kprobes annotation to either NOKPROBE_SYMBOL() or nokprobe_inline. The latter forces inlining, in which case the caller needs to be added to NOKPROBE_SYMBOL(). Also:
- blacklist arch_deref_entry_point(), and
- convert a few regular inlines to nokprobe_inline in lib/sstep.c
A key benefit is the ability to detect such symbols as being blacklisted. Before this patch:

  $ cat /sys/kernel/debug/kprobes/blacklist | grep read_mem
  $ perf probe read_mem
  Failed to write event: Invalid argument
    Error: Failed to add events.
  $ dmesg | tail -1
  [ 3736.112815] Could not insert probe at _text+10014968: -22

After patch:

  $ cat /sys/kernel/debug/kprobes/blacklist | grep read_mem
  0xc000000000072b50-0xc000000000072d20    read_mem
  $ perf probe read_mem
  read_mem is blacklisted function, skip it.
  Added new events:
    (null):(null)        (on read_mem)
    probe:read_mem       (on read_mem)
  You can now use it in all perf tools, such as:
    perf record -e probe:read_mem -aR sleep 1
  $ grep " read_mem" /proc/kallsyms
  c000000000072b50 t read_mem
  c0000000005f3b40 t read_mem
  $ cat /sys/kernel/debug/kprobes/list
  c0000000005f3b48  k  read_mem+0x8    [DISABLED]

Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> [mpe: Minor change log formatting, fix up some conflicts] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
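The conversion itself follows this shape (the function shown is hypothetical):

    /* Before: __kprobes moves the whole function into .kprobes.text. */
    static int __kprobes handle_foo(struct pt_regs *regs)
    {
            /* ... */
            return 0;
    }

    /* After: the function stays in normal .text and its address range
     * is recorded in the blacklist table, where tooling can see it. */
    static int handle_foo(struct pt_regs *regs)
    {
            /* ... */
            return 0;
    }
    NOKPROBE_SYMBOL(handle_foo);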
-
Naveen N. Rao authored
Move the stack setup and teardown code into ftrace_graph_caller(). This way, we don't incur the cost of setting it up unless function graph is enabled for this function. Also, remove the extraneous LR restore code after the function graph stub. LR has previously been restored and neither livepatch_handler() nor ftrace_graph_caller() return back here. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> [mpe: Drop bad change to non-mprofile-kernel version of ftrace_graph_caller] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Naveen N. Rao authored
set_current_kprobe() already saves regs->msr into kprobe_saved_msr. Remove the redundant save. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-