Commit 5806b81a authored by Ingo Molnar

Merge branch 'auto-ftrace-next' into tracing/for-linus

Conflicts:

	arch/x86/kernel/entry_32.S
	arch/x86/kernel/process_32.c
	arch/x86/kernel/process_64.c
	arch/x86/lib/Makefile
	include/asm-x86/irqflags.h
	kernel/Makefile
	kernel/sched.c
Signed-off-by: Ingo Molnar <mingo@elte.hu>
parents d14c8a68 6712e299
In-kernel memory-mapped I/O tracing
Home page and links to optional user space tools:
http://nouveau.freedesktop.org/wiki/MmioTrace
MMIO tracing was originally developed by Intel around 2003 for their Fault
Injection Test Harness. In Dec 2006 - Jan 2007, using the code from Intel,
Jeff Muizelaar created a tool for tracing MMIO accesses with the Nouveau
project in mind. Since then many people have contributed.
Mmiotrace was built for reverse engineering any memory-mapped IO device with
the Nouveau project as the first real user. Only x86 and x86_64 architectures
are supported.
The out-of-tree mmiotrace was modified for mainline inclusion and adapted to the
ftrace framework by Pekka Paalanen <pq@iki.fi>.
Preparation
-----------
The mmiotrace feature is compiled in via the CONFIG_MMIOTRACE option. Tracing is
disabled by default, so it is safe to have this set to yes. SMP systems are
supported, but tracing is unreliable and may miss events if more than one CPU
is on-line; therefore mmiotrace takes all but one CPU off-line during run-time
activation. You can re-enable CPUs by hand, but be warned: there is no way to
automatically detect whether you are losing events due to CPUs racing.
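If you do re-enable a CPU by hand, the standard CPU hotplug interface should
work, assuming sysfs is mounted at /sys (adjust the CPU number as needed):
$ echo 1 > /sys/devices/system/cpu/cpu1/online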
Usage Quick Reference
---------------------
$ mount -t debugfs debugfs /debug
$ echo mmiotrace > /debug/tracing/current_tracer
$ cat /debug/tracing/trace_pipe > mydump.txt &
Start X or whatever.
$ echo "X is up" > /debug/tracing/marker
$ echo none > /debug/tracing/current_tracer
Check for lost events.
Usage
-----
Make sure debugfs is mounted to /debug. If not, mount it (requires root privileges):
$ mount -t debugfs debugfs /debug
Check that the driver you are about to trace is not loaded.
Activate mmiotrace (requires root privileges):
$ echo mmiotrace > /debug/tracing/current_tracer
Start storing the trace:
$ cat /debug/tracing/trace_pipe > mydump.txt &
The 'cat' process should stay running (sleeping) in the background.
Load the driver you want to trace and use it. Mmiotrace will only catch MMIO
accesses to areas that are ioremapped while mmiotrace is active.
[Unimplemented feature:]
During tracing you can place comments (markers) into the trace by
$ echo "X is up" > /debug/tracing/marker
This makes it easier to see which part of the (huge) trace corresponds to
which action. It is recommended to place descriptive markers about what you
do.
Shut down mmiotrace (requires root privileges):
$ echo none > /debug/tracing/current_tracer
The 'cat' process exits. If it does not, kill it by bringing it to the
foreground with the 'fg' command and pressing ctrl+c.
Check that mmiotrace did not lose events due to a buffer filling up. Either
$ grep -i lost mydump.txt
which tells you exactly how many events were lost, or use
$ dmesg
to view your kernel log and look for "mmiotrace has lost events" warning. If
events were lost, the trace is incomplete. You should enlarge the buffers and
try again. Buffers are enlarged by first seeing how large the current buffers
are:
$ cat /debug/tracing/trace_entries
gives you a number. Approximately double this number and write it back, for
instance:
$ echo 128000 > /debug/tracing/trace_entries
Then start again from the top.
If you are doing a trace for a driver project, e.g. Nouveau, you should also
do the following before sending your results:
$ lspci -vvv > lspci.txt
$ dmesg > dmesg.txt
$ tar zcf pciid-nick-mmiotrace.tar.gz mydump.txt lspci.txt dmesg.txt
and then send the .tar.gz file. The trace compresses considerably. Replace
"pciid" and "nick" with the PCI ID or model name of your piece of hardware
under investigation and your nick name.
How Mmiotrace Works
-------------------
Access to hardware IO-memory is gained by mapping addresses from the PCI bus by
calling one of the ioremap_*() functions. Mmiotrace is hooked into the
__ioremap() function and gets called whenever a mapping is created. The mapping
is an event that is recorded into the trace log. Note that ISA range mappings
are not caught, since the mapping always exists and is returned directly.
MMIO accesses are recorded via page faults. Just before __ioremap() returns,
the mapped pages are marked as not present. Any access to the pages causes a
fault. The page fault handler calls mmiotrace to handle the fault. Mmiotrace
marks the page present, sets the TF flag to achieve single stepping and exits
the fault handler. The instruction that faulted is executed and the debug trap
is entered. Here mmiotrace again marks the page as not present. The instruction
is decoded to get the type of operation (read/write), the data width and the
value read or written. These are stored to the trace log.
Setting the page present in the page fault handler has a race condition on SMP
machines. During the single stepping other CPUs may run freely on that page
and events can be missed without notice. Re-enabling other CPUs during
tracing is discouraged.
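To make the fault/trap sequence above concrete, the following stand-alone C
sketch mimics the control flow with stub variables in place of the real PTE
present bit and the TF flag (an illustration only, not the kernel code):

  #include <stdio.h>

  /* Stubs standing in for the real PTE present bit and EFLAGS.TF. */
  static int page_present;
  static int single_step_armed;

  /* Page fault path: let the access complete, then trap right after it. */
  static void mmiotrace_fault(unsigned long addr)
  {
          page_present = 1;       /* the faulting instruction can now run */
          single_step_armed = 1;  /* TF: debug trap after that instruction */
          printf("fault @0x%lx: page made present, single-step set\n", addr);
  }

  /* Debug trap path: the access has happened; log it and disarm again. */
  static void mmiotrace_trap(unsigned long addr)
  {
          printf("trap  @0x%lx: decode insn, log R/W, width, value\n", addr);
          page_present = 0;       /* the next access faults again */
          single_step_armed = 0;
  }

  int main(void)
  {
          mmiotrace_fault(0xfb73ce40UL);  /* one simulated MMIO access */
          mmiotrace_trap(0xfb73ce40UL);
          return 0;
  }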
Trace Log Format
----------------
The raw log is text and easily filtered with e.g. grep and awk. One record is
one line in the log. A record starts with a keyword, followed by
keyword-dependent arguments. Arguments are separated by a space, or continue
until the end of the line. The format for version 20070824 is as follows:
Explanation       Keyword  Space-separated arguments
---------------------------------------------------------------------------
read event        R        width, timestamp, map id, physical, value, PC, PID
write event       W        width, timestamp, map id, physical, value, PC, PID
ioremap event     MAP      timestamp, map id, physical, virtual, length, PC, PID
iounmap event     UNMAP    timestamp, map id, PC, PID
marker            MARK     timestamp, text
version           VERSION  the string "20070824"
info for reader   LSPCI    one line from lspci -v
PCI address map   PCIDEV   space separated /proc/bus/pci/devices data
unk. opcode       UNKNOWN  timestamp, map id, physical, data, PC, PID
Timestamp is in seconds with decimals. Physical is a PCI bus address, virtual
is a kernel virtual address. Width is the data width in bytes and value is the
data value. Map id is an arbitrary id number identifying the mapping that was
used in an operation. PC is the program counter and PID is the process ID. PC
is zero if it is not recorded. PID is always zero, as tracing MMIO accesses
originating in user space memory is not yet supported.
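As a purely hypothetical illustration of the record layout, a single 32-bit
read could be logged as:
R 4 2.081975 1 0xfb73ce40 0xffffffff 0x0 0
i.e. width 4, timestamp 2.081975 s, map id 1, physical address 0xfb73ce40,
value 0xffffffff, PC not recorded, PID 0.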
For instance, the following awk filter will pass all 32-bit writes that target
physical addresses in the range [0xfb73ce40, 0xfb800000):
$ awk '/W 4 / { adr=strtonum($5); if (adr >= 0xfb73ce40 &&
adr < 0xfb800000) print; }'
Tools for Developers
--------------------
The user space tools include utilities for:
- replacing numeric addresses and values with hardware register names
- replaying MMIO logs, i.e., re-executing the recorded writes
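For quick ad-hoc analysis without those tools, a minimal stand-alone C reader
for the R/W records described above could look like the sketch below. It is an
illustration only, not one of the tools from the home page, and it assumes
0x-prefixed hexadecimal formatting for the address and value fields; adjust the
sscanf conversions to match your dump.

  /* Minimal, illustrative parser for mmiotrace R/W records (format 20070824). */
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
          char keyword[16];
          char line[512];

          while (fgets(line, sizeof(line), stdin)) {
                  unsigned int width;
                  double timestamp;
                  unsigned long map_id, physical, value, pc, pid;

                  /* R/W records: keyword, width, timestamp, map id,
                   * physical, value, PC, PID */
                  if (sscanf(line, "%15s %u %lf %lu %lx %lx %lx %lu",
                             keyword, &width, &timestamp, &map_id,
                             &physical, &value, &pc, &pid) != 8)
                          continue;
                  if (strcmp(keyword, "R") && strcmp(keyword, "W"))
                          continue;

                  printf("%s %u bytes at 0x%lx = 0x%lx (t=%.6f)\n",
                         keyword, width, physical, value, timestamp);
          }
          return 0;
  }

Compile it with gcc and feed mydump.txt to it on standard input.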
...@@ -528,6 +528,10 @@ KBUILD_CFLAGS += -g ...@@ -528,6 +528,10 @@ KBUILD_CFLAGS += -g
KBUILD_AFLAGS += -gdwarf-2 KBUILD_AFLAGS += -gdwarf-2
endif endif
ifdef CONFIG_FTRACE
KBUILD_CFLAGS += -pg
endif
# We trigger additional mismatches with less inlining # We trigger additional mismatches with less inlining
ifdef CONFIG_DEBUG_SECTION_MISMATCH ifdef CONFIG_DEBUG_SECTION_MISMATCH
KBUILD_CFLAGS += $(call cc-option, -fno-inline-functions-called-once) KBUILD_CFLAGS += $(call cc-option, -fno-inline-functions-called-once)
......
...@@ -14,6 +14,8 @@ config ARM ...@@ -14,6 +14,8 @@ config ARM
select HAVE_OPROFILE select HAVE_OPROFILE
select HAVE_KPROBES if (!XIP_KERNEL) select HAVE_KPROBES if (!XIP_KERNEL)
select HAVE_KRETPROBES if (HAVE_KPROBES) select HAVE_KRETPROBES if (HAVE_KPROBES)
select HAVE_FTRACE if (!XIP_KERNEL)
select HAVE_DYNAMIC_FTRACE if (HAVE_FTRACE)
help help
The ARM series is a line of low-power-consumption RISC chip designs The ARM series is a line of low-power-consumption RISC chip designs
licensed by ARM Ltd and targeted at embedded applications and licensed by ARM Ltd and targeted at embedded applications and
......
...@@ -69,6 +69,12 @@ SEDFLAGS = s/TEXT_START/$(ZTEXTADDR)/;s/BSS_START/$(ZBSSADDR)/ ...@@ -69,6 +69,12 @@ SEDFLAGS = s/TEXT_START/$(ZTEXTADDR)/;s/BSS_START/$(ZBSSADDR)/
targets := vmlinux vmlinux.lds piggy.gz piggy.o font.o font.c \ targets := vmlinux vmlinux.lds piggy.gz piggy.o font.o font.c \
head.o misc.o $(OBJS) head.o misc.o $(OBJS)
ifeq ($(CONFIG_FTRACE),y)
ORIG_CFLAGS := $(KBUILD_CFLAGS)
KBUILD_CFLAGS = $(subst -pg, , $(ORIG_CFLAGS))
endif
EXTRA_CFLAGS := -fpic -fno-builtin EXTRA_CFLAGS := -fpic -fno-builtin
EXTRA_AFLAGS := EXTRA_AFLAGS :=
......
...@@ -4,6 +4,10 @@ ...@@ -4,6 +4,10 @@
AFLAGS_head.o := -DTEXT_OFFSET=$(TEXT_OFFSET) AFLAGS_head.o := -DTEXT_OFFSET=$(TEXT_OFFSET)
ifdef CONFIG_DYNAMIC_FTRACE
CFLAGS_REMOVE_ftrace.o = -pg
endif
# Object file lists. # Object file lists.
obj-y := compat.o entry-armv.o entry-common.o irq.o \ obj-y := compat.o entry-armv.o entry-common.o irq.o \
...@@ -18,6 +22,7 @@ obj-$(CONFIG_ARTHUR) += arthur.o ...@@ -18,6 +22,7 @@ obj-$(CONFIG_ARTHUR) += arthur.o
obj-$(CONFIG_ISA_DMA) += dma-isa.o obj-$(CONFIG_ISA_DMA) += dma-isa.o
obj-$(CONFIG_PCI) += bios32.o isa.o obj-$(CONFIG_PCI) += bios32.o isa.o
obj-$(CONFIG_SMP) += smp.o obj-$(CONFIG_SMP) += smp.o
obj-$(CONFIG_DYNAMIC_FTRACE) += ftrace.o
obj-$(CONFIG_KEXEC) += machine_kexec.o relocate_kernel.o obj-$(CONFIG_KEXEC) += machine_kexec.o relocate_kernel.o
obj-$(CONFIG_KPROBES) += kprobes.o kprobes-decode.o obj-$(CONFIG_KPROBES) += kprobes.o kprobes-decode.o
obj-$(CONFIG_ATAGS_PROC) += atags.o obj-$(CONFIG_ATAGS_PROC) += atags.o
......
...@@ -18,6 +18,7 @@ ...@@ -18,6 +18,7 @@
#include <asm/io.h> #include <asm/io.h>
#include <asm/system.h> #include <asm/system.h>
#include <asm/uaccess.h> #include <asm/uaccess.h>
#include <asm/ftrace.h>
/* /*
* libgcc functions - functions that are used internally by the * libgcc functions - functions that are used internally by the
...@@ -181,3 +182,7 @@ EXPORT_SYMBOL(_find_next_bit_be); ...@@ -181,3 +182,7 @@ EXPORT_SYMBOL(_find_next_bit_be);
#endif #endif
EXPORT_SYMBOL(copy_page); EXPORT_SYMBOL(copy_page);
#ifdef CONFIG_FTRACE
EXPORT_SYMBOL(mcount);
#endif
...@@ -9,6 +9,7 @@ ...@@ -9,6 +9,7 @@
*/ */
#include <asm/unistd.h> #include <asm/unistd.h>
#include <asm/ftrace.h>
#include <asm/arch/entry-macro.S> #include <asm/arch/entry-macro.S>
#include "entry-header.S" #include "entry-header.S"
...@@ -99,6 +100,56 @@ ENTRY(ret_from_fork) ...@@ -99,6 +100,56 @@ ENTRY(ret_from_fork)
#undef CALL #undef CALL
#define CALL(x) .long x #define CALL(x) .long x
#ifdef CONFIG_FTRACE
#ifdef CONFIG_DYNAMIC_FTRACE
ENTRY(mcount)
stmdb sp!, {r0-r3, lr}
mov r0, lr
sub r0, r0, #MCOUNT_INSN_SIZE
.globl mcount_call
mcount_call:
bl ftrace_stub
ldmia sp!, {r0-r3, pc}
ENTRY(ftrace_caller)
stmdb sp!, {r0-r3, lr}
ldr r1, [fp, #-4]
mov r0, lr
sub r0, r0, #MCOUNT_INSN_SIZE
.globl ftrace_call
ftrace_call:
bl ftrace_stub
ldmia sp!, {r0-r3, pc}
#else
ENTRY(mcount)
stmdb sp!, {r0-r3, lr}
ldr r0, =ftrace_trace_function
ldr r2, [r0]
adr r0, ftrace_stub
cmp r0, r2
bne trace
ldmia sp!, {r0-r3, pc}
trace:
ldr r1, [fp, #-4]
mov r0, lr
sub r0, r0, #MCOUNT_INSN_SIZE
mov lr, pc
mov pc, r2
ldmia sp!, {r0-r3, pc}
#endif /* CONFIG_DYNAMIC_FTRACE */
.globl ftrace_stub
ftrace_stub:
mov pc, lr
#endif /* CONFIG_FTRACE */
/*============================================================================= /*=============================================================================
* SWI handler * SWI handler
*----------------------------------------------------------------------------- *-----------------------------------------------------------------------------
......
/*
* Dynamic function tracing support.
*
* Copyright (C) 2008 Abhishek Sagar <sagar.abhishek@gmail.com>
*
* For licencing details, see COPYING.
*
* Defines low-level handling of mcount calls when the kernel
* is compiled with the -pg flag. When using dynamic ftrace, the
* mcount call-sites get patched lazily with NOP till they are
* enabled. All code mutation routines here take effect atomically.
*/
#include <linux/ftrace.h>
#include <asm/cacheflush.h>
#include <asm/ftrace.h>
#define PC_OFFSET 8
#define BL_OPCODE 0xeb000000
#define BL_OFFSET_MASK 0x00ffffff
static unsigned long bl_insn;
static const unsigned long NOP = 0xe1a00000; /* mov r0, r0 */
unsigned char *ftrace_nop_replace(void)
{
return (char *)&NOP;
}
/* construct a branch (BL) instruction to addr */
unsigned char *ftrace_call_replace(unsigned long pc, unsigned long addr)
{
long offset;
offset = (long)addr - (long)(pc + PC_OFFSET);
if (unlikely(offset < -33554432 || offset > 33554428)) {
/* Can't generate branches that far (from ARM ARM). Ftrace
* doesn't generate branches outside of kernel text.
*/
WARN_ON_ONCE(1);
return NULL;
}
offset = (offset >> 2) & BL_OFFSET_MASK;
bl_insn = BL_OPCODE | offset;
return (unsigned char *)&bl_insn;
}
int ftrace_modify_code(unsigned long pc, unsigned char *old_code,
unsigned char *new_code)
{
unsigned long err = 0, replaced = 0, old, new;
old = *(unsigned long *)old_code;
new = *(unsigned long *)new_code;
__asm__ __volatile__ (
"1: ldr %1, [%2] \n"
" cmp %1, %4 \n"
"2: streq %3, [%2] \n"
" cmpne %1, %3 \n"
" movne %0, #2 \n"
"3:\n"
".section .fixup, \"ax\"\n"
"4: mov %0, #1 \n"
" b 3b \n"
".previous\n"
".section __ex_table, \"a\"\n"
" .long 1b, 4b \n"
" .long 2b, 4b \n"
".previous\n"
: "=r"(err), "=r"(replaced)
: "r"(pc), "r"(new), "r"(old), "0"(err), "1"(replaced)
: "memory");
if (!err && (replaced == old))
flush_icache_range(pc, pc + MCOUNT_INSN_SIZE);
return err;
}
int ftrace_update_ftrace_func(ftrace_func_t func)
{
int ret;
unsigned long pc, old;
unsigned char *new;
pc = (unsigned long)&ftrace_call;
memcpy(&old, &ftrace_call, MCOUNT_INSN_SIZE);
new = ftrace_call_replace(pc, (unsigned long)func);
ret = ftrace_modify_code(pc, (unsigned char *)&old, new);
return ret;
}
int ftrace_mcount_set(unsigned long *data)
{
unsigned long pc, old;
unsigned long *addr = data;
unsigned char *new;
pc = (unsigned long)&mcount_call;
memcpy(&old, &mcount_call, MCOUNT_INSN_SIZE);
new = ftrace_call_replace(pc, *addr);
*addr = ftrace_modify_code(pc, (unsigned char *)&old, new);
return 0;
}
/* run from kstop_machine */
int __init ftrace_dyn_arch_init(void *data)
{
ftrace_mcount_set(data);
return 0;
}
...@@ -274,7 +274,7 @@ int __kprobes kprobe_exceptions_notify(struct notifier_block *self, ...@@ -274,7 +274,7 @@ int __kprobes kprobe_exceptions_notify(struct notifier_block *self,
* for kretprobe handlers which should normally be interested in r0 only * for kretprobe handlers which should normally be interested in r0 only
* anyway. * anyway.
*/ */
static void __attribute__((naked)) __kprobes kretprobe_trampoline(void) void __naked __kprobes kretprobe_trampoline(void)
{ {
__asm__ __volatile__ ( __asm__ __volatile__ (
"stmdb sp!, {r0 - r11} \n\t" "stmdb sp!, {r0 - r11} \n\t"
......
...@@ -105,11 +105,13 @@ config ARCH_NO_VIRT_TO_BUS ...@@ -105,11 +105,13 @@ config ARCH_NO_VIRT_TO_BUS
config PPC config PPC
bool bool
default y default y
select HAVE_DYNAMIC_FTRACE
select HAVE_FTRACE
select HAVE_IDE select HAVE_IDE
select HAVE_OPROFILE
select HAVE_KPROBES select HAVE_KPROBES
select HAVE_KRETPROBES select HAVE_KRETPROBES
select HAVE_LMB select HAVE_LMB
select HAVE_OPROFILE
config EARLY_PRINTK config EARLY_PRINTK
bool bool
......
...@@ -12,6 +12,18 @@ CFLAGS_prom_init.o += -fPIC ...@@ -12,6 +12,18 @@ CFLAGS_prom_init.o += -fPIC
CFLAGS_btext.o += -fPIC CFLAGS_btext.o += -fPIC
endif endif
ifdef CONFIG_FTRACE
# Do not trace early boot code
CFLAGS_REMOVE_cputable.o = -pg
CFLAGS_REMOVE_prom_init.o = -pg
ifdef CONFIG_DYNAMIC_FTRACE
# dynamic ftrace setup.
CFLAGS_REMOVE_ftrace.o = -pg
endif
endif
obj-y := cputable.o ptrace.o syscalls.o \ obj-y := cputable.o ptrace.o syscalls.o \
irq.o align.o signal_32.o pmc.o vdso.o \ irq.o align.o signal_32.o pmc.o vdso.o \
init_task.o process.o systbl.o idle.o \ init_task.o process.o systbl.o idle.o \
...@@ -78,6 +90,8 @@ obj-$(CONFIG_KEXEC) += machine_kexec.o crash.o \ ...@@ -78,6 +90,8 @@ obj-$(CONFIG_KEXEC) += machine_kexec.o crash.o \
obj-$(CONFIG_AUDIT) += audit.o obj-$(CONFIG_AUDIT) += audit.o
obj64-$(CONFIG_AUDIT) += compat_audit.o obj64-$(CONFIG_AUDIT) += compat_audit.o
obj-$(CONFIG_DYNAMIC_FTRACE) += ftrace.o
obj-$(CONFIG_8XX_MINIMAL_FPEMU) += softemu8xx.o obj-$(CONFIG_8XX_MINIMAL_FPEMU) += softemu8xx.o
ifneq ($(CONFIG_PPC_INDIRECT_IO),y) ifneq ($(CONFIG_PPC_INDIRECT_IO),y)
......
...@@ -30,6 +30,7 @@ ...@@ -30,6 +30,7 @@
#include <asm/ppc_asm.h> #include <asm/ppc_asm.h>
#include <asm/asm-offsets.h> #include <asm/asm-offsets.h>
#include <asm/unistd.h> #include <asm/unistd.h>
#include <asm/ftrace.h>
#undef SHOW_SYSCALLS #undef SHOW_SYSCALLS
#undef SHOW_SYSCALLS_TASK #undef SHOW_SYSCALLS_TASK
...@@ -1035,3 +1036,129 @@ machine_check_in_rtas: ...@@ -1035,3 +1036,129 @@ machine_check_in_rtas:
/* XXX load up BATs and panic */ /* XXX load up BATs and panic */
#endif /* CONFIG_PPC_RTAS */ #endif /* CONFIG_PPC_RTAS */
#ifdef CONFIG_FTRACE
#ifdef CONFIG_DYNAMIC_FTRACE
_GLOBAL(mcount)
_GLOBAL(_mcount)
stwu r1,-48(r1)
stw r3, 12(r1)
stw r4, 16(r1)
stw r5, 20(r1)
stw r6, 24(r1)
mflr r3
stw r7, 28(r1)
mfcr r5
stw r8, 32(r1)
stw r9, 36(r1)
stw r10,40(r1)
stw r3, 44(r1)
stw r5, 8(r1)
subi r3, r3, MCOUNT_INSN_SIZE
.globl mcount_call
mcount_call:
bl ftrace_stub
nop
lwz r6, 8(r1)
lwz r0, 44(r1)
lwz r3, 12(r1)
mtctr r0
lwz r4, 16(r1)
mtcr r6
lwz r5, 20(r1)
lwz r6, 24(r1)
lwz r0, 52(r1)
lwz r7, 28(r1)
lwz r8, 32(r1)
mtlr r0
lwz r9, 36(r1)
lwz r10,40(r1)
addi r1, r1, 48
bctr
_GLOBAL(ftrace_caller)
/* Based off of objdump output from glibc */
stwu r1,-48(r1)
stw r3, 12(r1)
stw r4, 16(r1)
stw r5, 20(r1)
stw r6, 24(r1)
mflr r3
lwz r4, 52(r1)
mfcr r5
stw r7, 28(r1)
stw r8, 32(r1)
stw r9, 36(r1)
stw r10,40(r1)
stw r3, 44(r1)
stw r5, 8(r1)
subi r3, r3, MCOUNT_INSN_SIZE
.globl ftrace_call
ftrace_call:
bl ftrace_stub
nop
lwz r6, 8(r1)
lwz r0, 44(r1)
lwz r3, 12(r1)
mtctr r0
lwz r4, 16(r1)
mtcr r6
lwz r5, 20(r1)
lwz r6, 24(r1)
lwz r0, 52(r1)
lwz r7, 28(r1)
lwz r8, 32(r1)
mtlr r0
lwz r9, 36(r1)
lwz r10,40(r1)
addi r1, r1, 48
bctr
#else
_GLOBAL(mcount)
_GLOBAL(_mcount)
stwu r1,-48(r1)
stw r3, 12(r1)
stw r4, 16(r1)
stw r5, 20(r1)
stw r6, 24(r1)
mflr r3
lwz r4, 52(r1)
mfcr r5
stw r7, 28(r1)
stw r8, 32(r1)
stw r9, 36(r1)
stw r10,40(r1)
stw r3, 44(r1)
stw r5, 8(r1)
subi r3, r3, MCOUNT_INSN_SIZE
LOAD_REG_ADDR(r5, ftrace_trace_function)
lwz r5,0(r5)
mtctr r5
bctrl
nop
lwz r6, 8(r1)
lwz r0, 44(r1)
lwz r3, 12(r1)
mtctr r0
lwz r4, 16(r1)
mtcr r6
lwz r5, 20(r1)
lwz r6, 24(r1)
lwz r0, 52(r1)
lwz r7, 28(r1)
lwz r8, 32(r1)
mtlr r0
lwz r9, 36(r1)
lwz r10,40(r1)
addi r1, r1, 48
bctr
#endif
_GLOBAL(ftrace_stub)
blr
#endif /* CONFIG_FTRACE */
...@@ -31,6 +31,7 @@ ...@@ -31,6 +31,7 @@
#include <asm/bug.h> #include <asm/bug.h>
#include <asm/ptrace.h> #include <asm/ptrace.h>
#include <asm/irqflags.h> #include <asm/irqflags.h>
#include <asm/ftrace.h>
/* /*
* System calls. * System calls.
...@@ -870,3 +871,67 @@ _GLOBAL(enter_prom) ...@@ -870,3 +871,67 @@ _GLOBAL(enter_prom)
ld r0,16(r1) ld r0,16(r1)
mtlr r0 mtlr r0
blr blr
#ifdef CONFIG_FTRACE
#ifdef CONFIG_DYNAMIC_FTRACE
_GLOBAL(mcount)
_GLOBAL(_mcount)
/* Taken from output of objdump from lib64/glibc */
mflr r3
stdu r1, -112(r1)
std r3, 128(r1)
subi r3, r3, MCOUNT_INSN_SIZE
.globl mcount_call
mcount_call:
bl ftrace_stub
nop
ld r0, 128(r1)
mtlr r0
addi r1, r1, 112
blr
_GLOBAL(ftrace_caller)
/* Taken from output of objdump from lib64/glibc */
mflr r3
ld r11, 0(r1)
stdu r1, -112(r1)
std r3, 128(r1)
ld r4, 16(r11)
subi r3, r3, MCOUNT_INSN_SIZE
.globl ftrace_call
ftrace_call:
bl ftrace_stub
nop
ld r0, 128(r1)
mtlr r0
addi r1, r1, 112
_GLOBAL(ftrace_stub)
blr
#else
_GLOBAL(mcount)
blr
_GLOBAL(_mcount)
/* Taken from output of objdump from lib64/glibc */
mflr r3
ld r11, 0(r1)
stdu r1, -112(r1)
std r3, 128(r1)
ld r4, 16(r11)
subi r3, r3, MCOUNT_INSN_SIZE
LOAD_REG_ADDR(r5,ftrace_trace_function)
ld r5,0(r5)
ld r5,0(r5)
mtctr r5
bctrl
nop
ld r0, 128(r1)
mtlr r0
addi r1, r1, 112
_GLOBAL(ftrace_stub)
blr
#endif
#endif
/*
* Code for replacing ftrace calls with jumps.
*
* Copyright (C) 2007-2008 Steven Rostedt <srostedt@redhat.com>
*
* Thanks goes out to P.A. Semi, Inc for supplying me with a PPC64 box.
*
*/
#include <linux/spinlock.h>
#include <linux/hardirq.h>
#include <linux/ftrace.h>
#include <linux/percpu.h>
#include <linux/init.h>
#include <linux/list.h>
#include <asm/cacheflush.h>
#include <asm/ftrace.h>
static unsigned int ftrace_nop = 0x60000000;
#ifdef CONFIG_PPC32
# define GET_ADDR(addr) addr
#else
/* PowerPC64's functions are data that points to the functions */
# define GET_ADDR(addr) *(unsigned long *)addr
#endif
static unsigned int notrace ftrace_calc_offset(long ip, long addr)
{
return (int)(addr - ip);
}
notrace unsigned char *ftrace_nop_replace(void)
{
return (char *)&ftrace_nop;
}
notrace unsigned char *ftrace_call_replace(unsigned long ip, unsigned long addr)
{
static unsigned int op;
/*
* It would be nice to just use create_function_call, but that will
* update the code itself. Here we need to just return the
* instruction that is going to be modified, without modifying the
* code.
*/
addr = GET_ADDR(addr);
/* Set to "bl addr" */
op = 0x48000001 | (ftrace_calc_offset(ip, addr) & 0x03fffffc);
/*
* No locking needed, this must be called via kstop_machine
* which in essence is like running on a uniprocessor machine.
*/
return (unsigned char *)&op;
}
#ifdef CONFIG_PPC64
# define _ASM_ALIGN " .align 3 "
# define _ASM_PTR " .llong "
#else
# define _ASM_ALIGN " .align 2 "
# define _ASM_PTR " .long "
#endif
notrace int
ftrace_modify_code(unsigned long ip, unsigned char *old_code,
unsigned char *new_code)
{
unsigned replaced;
unsigned old = *(unsigned *)old_code;
unsigned new = *(unsigned *)new_code;
int faulted = 0;
/*
* Note: Due to modules and __init, code can
* disappear and change, we need to protect against faulting
* as well as code changing.
*
* No real locking needed, this code is run through
* kstop_machine.
*/
asm volatile (
"1: lwz %1, 0(%2)\n"
" cmpw %1, %5\n"
" bne 2f\n"
" stwu %3, 0(%2)\n"
"2:\n"
".section .fixup, \"ax\"\n"
"3: li %0, 1\n"
" b 2b\n"
".previous\n"
".section __ex_table,\"a\"\n"
_ASM_ALIGN "\n"
_ASM_PTR "1b, 3b\n"
".previous"
: "=r"(faulted), "=r"(replaced)
: "r"(ip), "r"(new),
"0"(faulted), "r"(old)
: "memory");
if (replaced != old && replaced != new)
faulted = 2;
if (!faulted)
flush_icache_range(ip, ip + 8);
return faulted;
}
notrace int ftrace_update_ftrace_func(ftrace_func_t func)
{
unsigned long ip = (unsigned long)(&ftrace_call);
unsigned char old[MCOUNT_INSN_SIZE], *new;
int ret;
memcpy(old, &ftrace_call, MCOUNT_INSN_SIZE);
new = ftrace_call_replace(ip, (unsigned long)func);
ret = ftrace_modify_code(ip, old, new);
return ret;
}
notrace int ftrace_mcount_set(unsigned long *data)
{
unsigned long ip = (long)(&mcount_call);
unsigned long *addr = data;
unsigned char old[MCOUNT_INSN_SIZE], *new;
/*
* Replace the mcount stub with a pointer to the
* ip recorder function.
*/
memcpy(old, &mcount_call, MCOUNT_INSN_SIZE);
new = ftrace_call_replace(ip, *addr);
*addr = ftrace_modify_code(ip, old, new);
return 0;
}
int __init ftrace_dyn_arch_init(void *data)
{
/* This is running in kstop_machine */
ftrace_mcount_set(data);
return 0;
}
...@@ -120,7 +120,8 @@ EXPORT_SYMBOL(_outsl_ns); ...@@ -120,7 +120,8 @@ EXPORT_SYMBOL(_outsl_ns);
#define IO_CHECK_ALIGN(v,a) ((((unsigned long)(v)) & ((a) - 1)) == 0) #define IO_CHECK_ALIGN(v,a) ((((unsigned long)(v)) & ((a) - 1)) == 0)
void _memset_io(volatile void __iomem *addr, int c, unsigned long n) notrace void
_memset_io(volatile void __iomem *addr, int c, unsigned long n)
{ {
void *p = (void __force *)addr; void *p = (void __force *)addr;
u32 lc = c; u32 lc = c;
......
...@@ -98,7 +98,7 @@ EXPORT_SYMBOL(irq_desc); ...@@ -98,7 +98,7 @@ EXPORT_SYMBOL(irq_desc);
int distribute_irqs = 1; int distribute_irqs = 1;
static inline unsigned long get_hard_enabled(void) static inline notrace unsigned long get_hard_enabled(void)
{ {
unsigned long enabled; unsigned long enabled;
...@@ -108,13 +108,13 @@ static inline unsigned long get_hard_enabled(void) ...@@ -108,13 +108,13 @@ static inline unsigned long get_hard_enabled(void)
return enabled; return enabled;
} }
static inline void set_soft_enabled(unsigned long enable) static inline notrace void set_soft_enabled(unsigned long enable)
{ {
__asm__ __volatile__("stb %0,%1(13)" __asm__ __volatile__("stb %0,%1(13)"
: : "r" (enable), "i" (offsetof(struct paca_struct, soft_enabled))); : : "r" (enable), "i" (offsetof(struct paca_struct, soft_enabled)));
} }
void raw_local_irq_restore(unsigned long en) notrace void raw_local_irq_restore(unsigned long en)
{ {
/* /*
* get_paca()->soft_enabled = en; * get_paca()->soft_enabled = en;
......
...@@ -42,6 +42,7 @@ ...@@ -42,6 +42,7 @@
#include <asm/div64.h> #include <asm/div64.h>
#include <asm/signal.h> #include <asm/signal.h>
#include <asm/dcr.h> #include <asm/dcr.h>
#include <asm/ftrace.h>
#ifdef CONFIG_PPC32 #ifdef CONFIG_PPC32
extern void transfer_to_handler(void); extern void transfer_to_handler(void);
...@@ -67,6 +68,10 @@ EXPORT_SYMBOL(single_step_exception); ...@@ -67,6 +68,10 @@ EXPORT_SYMBOL(single_step_exception);
EXPORT_SYMBOL(sys_sigreturn); EXPORT_SYMBOL(sys_sigreturn);
#endif #endif
#ifdef CONFIG_FTRACE
EXPORT_SYMBOL(_mcount);
#endif
EXPORT_SYMBOL(strcpy); EXPORT_SYMBOL(strcpy);
EXPORT_SYMBOL(strncpy); EXPORT_SYMBOL(strncpy);
EXPORT_SYMBOL(strcat); EXPORT_SYMBOL(strcat);
......
...@@ -81,7 +81,7 @@ int ucache_bsize; ...@@ -81,7 +81,7 @@ int ucache_bsize;
* from the address that it was linked at, so we must use RELOC/PTRRELOC * from the address that it was linked at, so we must use RELOC/PTRRELOC
* to access static data (including strings). -- paulus * to access static data (including strings). -- paulus
*/ */
unsigned long __init early_init(unsigned long dt_ptr) notrace unsigned long __init early_init(unsigned long dt_ptr)
{ {
unsigned long offset = reloc_offset(); unsigned long offset = reloc_offset();
struct cpu_spec *spec; struct cpu_spec *spec;
...@@ -111,7 +111,7 @@ unsigned long __init early_init(unsigned long dt_ptr) ...@@ -111,7 +111,7 @@ unsigned long __init early_init(unsigned long dt_ptr)
* This is called very early on the boot process, after a minimal * This is called very early on the boot process, after a minimal
* MMU environment has been set up but before MMU_init is called. * MMU environment has been set up but before MMU_init is called.
*/ */
void __init machine_init(unsigned long dt_ptr, unsigned long phys) notrace void __init machine_init(unsigned long dt_ptr, unsigned long phys)
{ {
/* Enable early debugging if any specified (see udbg.h) */ /* Enable early debugging if any specified (see udbg.h) */
udbg_early_init(); udbg_early_init();
...@@ -133,7 +133,7 @@ void __init machine_init(unsigned long dt_ptr, unsigned long phys) ...@@ -133,7 +133,7 @@ void __init machine_init(unsigned long dt_ptr, unsigned long phys)
#ifdef CONFIG_BOOKE_WDT #ifdef CONFIG_BOOKE_WDT
/* Checks wdt=x and wdt_period=xx command-line option */ /* Checks wdt=x and wdt_period=xx command-line option */
int __init early_parse_wdt(char *p) notrace int __init early_parse_wdt(char *p)
{ {
if (p && strncmp(p, "0", 1) != 0) if (p && strncmp(p, "0", 1) != 0)
booke_wdt_enabled = 1; booke_wdt_enabled = 1;
......
CFLAGS_bootx_init.o += -fPIC CFLAGS_bootx_init.o += -fPIC
ifdef CONFIG_FTRACE
# Do not trace early boot code
CFLAGS_REMOVE_bootx_init.o = -pg
endif
obj-y += pic.o setup.o time.o feature.o pci.o \ obj-y += pic.o setup.o time.o feature.o pci.o \
sleep.o low_i2c.o cache.o pfunc_core.o \ sleep.o low_i2c.o cache.o pfunc_core.o \
pfunc_base.o pfunc_base.o
......
...@@ -11,6 +11,8 @@ config SPARC ...@@ -11,6 +11,8 @@ config SPARC
config SPARC64 config SPARC64
bool bool
default y default y
select HAVE_DYNAMIC_FTRACE
select HAVE_FTRACE
select HAVE_IDE select HAVE_IDE
select HAVE_LMB select HAVE_LMB
select HAVE_ARCH_KGDB select HAVE_ARCH_KGDB
......
...@@ -33,7 +33,7 @@ config DEBUG_PAGEALLOC ...@@ -33,7 +33,7 @@ config DEBUG_PAGEALLOC
config MCOUNT config MCOUNT
bool bool
depends on STACK_DEBUG depends on STACK_DEBUG || FTRACE
default y default y
config FRAME_POINTER config FRAME_POINTER
......
...@@ -14,6 +14,7 @@ obj-y := process.o setup.o cpu.o idprom.o \ ...@@ -14,6 +14,7 @@ obj-y := process.o setup.o cpu.o idprom.o \
power.o sbus.o sparc64_ksyms.o chmc.o \ power.o sbus.o sparc64_ksyms.o chmc.o \
visemul.o prom.o of_device.o hvapi.o sstate.o mdesc.o visemul.o prom.o of_device.o hvapi.o sstate.o mdesc.o
obj-$(CONFIG_DYNAMIC_FTRACE) += ftrace.o
obj-$(CONFIG_STACKTRACE) += stacktrace.o obj-$(CONFIG_STACKTRACE) += stacktrace.o
obj-$(CONFIG_PCI) += ebus.o pci_common.o \ obj-$(CONFIG_PCI) += ebus.o pci_common.o \
pci_psycho.o pci_sabre.o pci_schizo.o \ pci_psycho.o pci_sabre.o pci_schizo.o \
......
#include <linux/spinlock.h>
#include <linux/hardirq.h>
#include <linux/ftrace.h>
#include <linux/percpu.h>
#include <linux/init.h>
#include <linux/list.h>
#include <asm/ftrace.h>
static const u32 ftrace_nop = 0x01000000;
notrace unsigned char *ftrace_nop_replace(void)
{
return (char *)&ftrace_nop;
}
notrace unsigned char *ftrace_call_replace(unsigned long ip, unsigned long addr)
{
static u32 call;
s32 off;
off = ((s32)addr - (s32)ip);
call = 0x40000000 | ((u32)off >> 2);
return (unsigned char *) &call;
}
notrace int
ftrace_modify_code(unsigned long ip, unsigned char *old_code,
unsigned char *new_code)
{
u32 old = *(u32 *)old_code;
u32 new = *(u32 *)new_code;
u32 replaced;
int faulted;
__asm__ __volatile__(
"1: cas [%[ip]], %[old], %[new]\n"
" flush %[ip]\n"
" mov 0, %[faulted]\n"
"2:\n"
" .section .fixup,#alloc,#execinstr\n"
" .align 4\n"
"3: sethi %%hi(2b), %[faulted]\n"
" jmpl %[faulted] + %%lo(2b), %%g0\n"
" mov 1, %[faulted]\n"
" .previous\n"
" .section __ex_table,\"a\"\n"
" .align 4\n"
" .word 1b, 3b\n"
" .previous\n"
: "=r" (replaced), [faulted] "=r" (faulted)
: [new] "0" (new), [old] "r" (old), [ip] "r" (ip)
: "memory");
if (replaced != old && replaced != new)
faulted = 2;
return faulted;
}
notrace int ftrace_update_ftrace_func(ftrace_func_t func)
{
unsigned long ip = (unsigned long)(&ftrace_call);
unsigned char old[MCOUNT_INSN_SIZE], *new;
memcpy(old, &ftrace_call, MCOUNT_INSN_SIZE);
new = ftrace_call_replace(ip, (unsigned long)func);
return ftrace_modify_code(ip, old, new);
}
notrace int ftrace_mcount_set(unsigned long *data)
{
unsigned long ip = (long)(&mcount_call);
unsigned long *addr = data;
unsigned char old[MCOUNT_INSN_SIZE], *new;
/*
* Replace the mcount stub with a pointer to the
* ip recorder function.
*/
memcpy(old, &mcount_call, MCOUNT_INSN_SIZE);
new = ftrace_call_replace(ip, *addr);
*addr = ftrace_modify_code(ip, old, new);
return 0;
}
int __init ftrace_dyn_arch_init(void *data)
{
ftrace_mcount_set(data);
return 0;
}
...@@ -53,6 +53,7 @@ ...@@ -53,6 +53,7 @@
#include <asm/ns87303.h> #include <asm/ns87303.h>
#include <asm/timer.h> #include <asm/timer.h>
#include <asm/cpudata.h> #include <asm/cpudata.h>
#include <asm/ftrace.h>
struct poll { struct poll {
int fd; int fd;
...@@ -111,8 +112,7 @@ EXPORT_SYMBOL(__write_trylock); ...@@ -111,8 +112,7 @@ EXPORT_SYMBOL(__write_trylock);
EXPORT_SYMBOL(smp_call_function); EXPORT_SYMBOL(smp_call_function);
#endif /* CONFIG_SMP */ #endif /* CONFIG_SMP */
#if defined(CONFIG_MCOUNT) #ifdef CONFIG_MCOUNT
extern void _mcount(void);
EXPORT_SYMBOL(_mcount); EXPORT_SYMBOL(_mcount);
#endif #endif
......
...@@ -28,10 +28,13 @@ ovstack: ...@@ -28,10 +28,13 @@ ovstack:
.skip OVSTACKSIZE .skip OVSTACKSIZE
#endif #endif
.text .text
.align 32 .align 32
.globl mcount, _mcount .globl _mcount
mcount: .type _mcount,#function
.globl mcount
.type mcount,#function
_mcount: _mcount:
mcount:
#ifdef CONFIG_STACK_DEBUG #ifdef CONFIG_STACK_DEBUG
/* /*
* Check whether %sp is dangerously low. * Check whether %sp is dangerously low.
...@@ -55,6 +58,53 @@ _mcount: ...@@ -55,6 +58,53 @@ _mcount:
or %g3, %lo(panicstring), %o0 or %g3, %lo(panicstring), %o0
call prom_halt call prom_halt
nop nop
1:
#endif
#ifdef CONFIG_FTRACE
#ifdef CONFIG_DYNAMIC_FTRACE
mov %o7, %o0
.globl mcount_call
mcount_call:
call ftrace_stub
mov %o0, %o7
#else
sethi %hi(ftrace_trace_function), %g1
sethi %hi(ftrace_stub), %g2
ldx [%g1 + %lo(ftrace_trace_function)], %g1
or %g2, %lo(ftrace_stub), %g2
cmp %g1, %g2
be,pn %icc, 1f
mov %i7, %o1
jmpl %g1, %g0
mov %o7, %o0
/* not reached */
1:
#endif #endif
1: retl #endif
retl
nop nop
.size _mcount,.-_mcount
.size mcount,.-mcount
#ifdef CONFIG_FTRACE
.globl ftrace_stub
.type ftrace_stub,#function
ftrace_stub:
retl
nop
.size ftrace_stub,.-ftrace_stub
#ifdef CONFIG_DYNAMIC_FTRACE
.globl ftrace_caller
.type ftrace_caller,#function
ftrace_caller:
mov %i7, %o1
mov %o7, %o0
.globl ftrace_call
ftrace_call:
call ftrace_stub
mov %o0, %o7
retl
nop
.size ftrace_caller,.-ftrace_caller
#endif
#endif
...@@ -23,6 +23,8 @@ config X86 ...@@ -23,6 +23,8 @@ config X86
select HAVE_OPROFILE select HAVE_OPROFILE
select HAVE_KPROBES select HAVE_KPROBES
select HAVE_KRETPROBES select HAVE_KRETPROBES
select HAVE_DYNAMIC_FTRACE
select HAVE_FTRACE
select HAVE_KVM if ((X86_32 && !X86_VOYAGER && !X86_VISWS && !X86_NUMAQ) || X86_64) select HAVE_KVM if ((X86_32 && !X86_VOYAGER && !X86_VISWS && !X86_NUMAQ) || X86_64)
select HAVE_ARCH_KGDB if !X86_VOYAGER select HAVE_ARCH_KGDB if !X86_VOYAGER
......
...@@ -171,6 +171,34 @@ config IOMMU_LEAK ...@@ -171,6 +171,34 @@ config IOMMU_LEAK
Add a simple leak tracer to the IOMMU code. This is useful when you Add a simple leak tracer to the IOMMU code. This is useful when you
are debugging a buggy device driver that leaks IOMMU mappings. are debugging a buggy device driver that leaks IOMMU mappings.
config MMIOTRACE_HOOKS
bool
config MMIOTRACE
bool "Memory mapped IO tracing"
depends on DEBUG_KERNEL && PCI
select TRACING
select MMIOTRACE_HOOKS
default y
help
Mmiotrace traces Memory Mapped I/O access and is meant for
debugging and reverse engineering. It is called from the ioremap
implementation and works via page faults. Tracing is disabled by
default and can be enabled at run-time.
See Documentation/tracers/mmiotrace.txt.
If you are not helping to develop drivers, say N.
config MMIOTRACE_TEST
tristate "Test module for mmiotrace"
depends on MMIOTRACE && m
help
This is a dumb module for testing mmiotrace. It is very dangerous
as it will write garbage to IO memory starting at a given address.
However, it should be safe to use on e.g. unused portion of VRAM.
Say N, unless you absolutely know what you are doing.
# #
# IO delay types: # IO delay types:
# #
......
...@@ -6,6 +6,13 @@ extra-y := head_$(BITS).o head$(BITS).o head.o init_task.o vmlinu ...@@ -6,6 +6,13 @@ extra-y := head_$(BITS).o head$(BITS).o head.o init_task.o vmlinu
CPPFLAGS_vmlinux.lds += -U$(UTS_MACHINE) CPPFLAGS_vmlinux.lds += -U$(UTS_MACHINE)
ifdef CONFIG_FTRACE
# Do not profile debug utilities
CFLAGS_REMOVE_tsc_64.o = -pg
CFLAGS_REMOVE_tsc_32.o = -pg
CFLAGS_REMOVE_rtc.o = -pg
endif
# #
# vsyscalls (which work on the user stack) should have # vsyscalls (which work on the user stack) should have
# no stack-protector checks: # no stack-protector checks:
...@@ -57,6 +64,7 @@ obj-$(CONFIG_X86_MPPARSE) += mpparse.o ...@@ -57,6 +64,7 @@ obj-$(CONFIG_X86_MPPARSE) += mpparse.o
obj-$(CONFIG_X86_LOCAL_APIC) += apic_$(BITS).o nmi.o obj-$(CONFIG_X86_LOCAL_APIC) += apic_$(BITS).o nmi.o
obj-$(CONFIG_X86_IO_APIC) += io_apic_$(BITS).o obj-$(CONFIG_X86_IO_APIC) += io_apic_$(BITS).o
obj-$(CONFIG_X86_REBOOTFIXUPS) += reboot_fixups_32.o obj-$(CONFIG_X86_REBOOTFIXUPS) += reboot_fixups_32.o
obj-$(CONFIG_DYNAMIC_FTRACE) += ftrace.o
obj-$(CONFIG_KEXEC) += machine_kexec_$(BITS).o obj-$(CONFIG_KEXEC) += machine_kexec_$(BITS).o
obj-$(CONFIG_KEXEC) += relocate_kernel_$(BITS).o crash.o obj-$(CONFIG_KEXEC) += relocate_kernel_$(BITS).o crash.o
obj-$(CONFIG_CRASH_DUMP) += crash_dump_$(BITS).o obj-$(CONFIG_CRASH_DUMP) += crash_dump_$(BITS).o
......
#include <linux/module.h> #include <linux/module.h>
#include <linux/sched.h> #include <linux/sched.h>
#include <linux/spinlock.h> #include <linux/mutex.h>
#include <linux/list.h> #include <linux/list.h>
#include <linux/kprobes.h> #include <linux/kprobes.h>
#include <linux/mm.h> #include <linux/mm.h>
...@@ -143,7 +143,7 @@ static const unsigned char *const p6_nops[ASM_NOP_MAX+1] = { ...@@ -143,7 +143,7 @@ static const unsigned char *const p6_nops[ASM_NOP_MAX+1] = {
#ifdef CONFIG_X86_64 #ifdef CONFIG_X86_64
extern char __vsyscall_0; extern char __vsyscall_0;
static inline const unsigned char*const * find_nop_table(void) const unsigned char *const *find_nop_table(void)
{ {
return boot_cpu_data.x86_vendor != X86_VENDOR_INTEL || return boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
boot_cpu_data.x86 < 6 ? k8_nops : p6_nops; boot_cpu_data.x86 < 6 ? k8_nops : p6_nops;
...@@ -162,7 +162,7 @@ static const struct nop { ...@@ -162,7 +162,7 @@ static const struct nop {
{ -1, NULL } { -1, NULL }
}; };
static const unsigned char*const * find_nop_table(void) const unsigned char *const *find_nop_table(void)
{ {
const unsigned char *const *noptable = intel_nops; const unsigned char *const *noptable = intel_nops;
int i; int i;
...@@ -279,7 +279,7 @@ struct smp_alt_module { ...@@ -279,7 +279,7 @@ struct smp_alt_module {
struct list_head next; struct list_head next;
}; };
static LIST_HEAD(smp_alt_modules); static LIST_HEAD(smp_alt_modules);
static DEFINE_SPINLOCK(smp_alt); static DEFINE_MUTEX(smp_alt);
static int smp_mode = 1; /* protected by smp_alt */ static int smp_mode = 1; /* protected by smp_alt */
void alternatives_smp_module_add(struct module *mod, char *name, void alternatives_smp_module_add(struct module *mod, char *name,
...@@ -312,12 +312,12 @@ void alternatives_smp_module_add(struct module *mod, char *name, ...@@ -312,12 +312,12 @@ void alternatives_smp_module_add(struct module *mod, char *name,
__func__, smp->locks, smp->locks_end, __func__, smp->locks, smp->locks_end,
smp->text, smp->text_end, smp->name); smp->text, smp->text_end, smp->name);
spin_lock(&smp_alt); mutex_lock(&smp_alt);
list_add_tail(&smp->next, &smp_alt_modules); list_add_tail(&smp->next, &smp_alt_modules);
if (boot_cpu_has(X86_FEATURE_UP)) if (boot_cpu_has(X86_FEATURE_UP))
alternatives_smp_unlock(smp->locks, smp->locks_end, alternatives_smp_unlock(smp->locks, smp->locks_end,
smp->text, smp->text_end); smp->text, smp->text_end);
spin_unlock(&smp_alt); mutex_unlock(&smp_alt);
} }
void alternatives_smp_module_del(struct module *mod) void alternatives_smp_module_del(struct module *mod)
...@@ -327,17 +327,17 @@ void alternatives_smp_module_del(struct module *mod) ...@@ -327,17 +327,17 @@ void alternatives_smp_module_del(struct module *mod)
if (smp_alt_once || noreplace_smp) if (smp_alt_once || noreplace_smp)
return; return;
spin_lock(&smp_alt); mutex_lock(&smp_alt);
list_for_each_entry(item, &smp_alt_modules, next) { list_for_each_entry(item, &smp_alt_modules, next) {
if (mod != item->mod) if (mod != item->mod)
continue; continue;
list_del(&item->next); list_del(&item->next);
spin_unlock(&smp_alt); mutex_unlock(&smp_alt);
DPRINTK("%s: %s\n", __func__, item->name); DPRINTK("%s: %s\n", __func__, item->name);
kfree(item); kfree(item);
return; return;
} }
spin_unlock(&smp_alt); mutex_unlock(&smp_alt);
} }
void alternatives_smp_switch(int smp) void alternatives_smp_switch(int smp)
...@@ -359,7 +359,7 @@ void alternatives_smp_switch(int smp) ...@@ -359,7 +359,7 @@ void alternatives_smp_switch(int smp)
return; return;
BUG_ON(!smp && (num_online_cpus() > 1)); BUG_ON(!smp && (num_online_cpus() > 1));
spin_lock(&smp_alt); mutex_lock(&smp_alt);
/* /*
* Avoid unnecessary switches because it forces JIT based VMs to * Avoid unnecessary switches because it forces JIT based VMs to
...@@ -383,7 +383,7 @@ void alternatives_smp_switch(int smp) ...@@ -383,7 +383,7 @@ void alternatives_smp_switch(int smp)
mod->text, mod->text_end); mod->text, mod->text_end);
} }
smp_mode = smp; smp_mode = smp;
spin_unlock(&smp_alt); mutex_unlock(&smp_alt);
} }
#endif #endif
......
...@@ -51,6 +51,7 @@ ...@@ -51,6 +51,7 @@
#include <asm/percpu.h> #include <asm/percpu.h>
#include <asm/dwarf2.h> #include <asm/dwarf2.h>
#include <asm/processor-flags.h> #include <asm/processor-flags.h>
#include <asm/ftrace.h>
#include <asm/irq_vectors.h> #include <asm/irq_vectors.h>
/* /*
...@@ -1111,6 +1112,77 @@ ENDPROC(xen_failsafe_callback) ...@@ -1111,6 +1112,77 @@ ENDPROC(xen_failsafe_callback)
#endif /* CONFIG_XEN */ #endif /* CONFIG_XEN */
#ifdef CONFIG_FTRACE
#ifdef CONFIG_DYNAMIC_FTRACE
ENTRY(mcount)
pushl %eax
pushl %ecx
pushl %edx
movl 0xc(%esp), %eax
subl $MCOUNT_INSN_SIZE, %eax
.globl mcount_call
mcount_call:
call ftrace_stub
popl %edx
popl %ecx
popl %eax
ret
END(mcount)
ENTRY(ftrace_caller)
pushl %eax
pushl %ecx
pushl %edx
movl 0xc(%esp), %eax
movl 0x4(%ebp), %edx
subl $MCOUNT_INSN_SIZE, %eax
.globl ftrace_call
ftrace_call:
call ftrace_stub
popl %edx
popl %ecx
popl %eax
.globl ftrace_stub
ftrace_stub:
ret
END(ftrace_caller)
#else /* ! CONFIG_DYNAMIC_FTRACE */
ENTRY(mcount)
cmpl $ftrace_stub, ftrace_trace_function
jnz trace
.globl ftrace_stub
ftrace_stub:
ret
/* taken from glibc */
trace:
pushl %eax
pushl %ecx
pushl %edx
movl 0xc(%esp), %eax
movl 0x4(%ebp), %edx
subl $MCOUNT_INSN_SIZE, %eax
call *ftrace_trace_function
popl %edx
popl %ecx
popl %eax
jmp ftrace_stub
END(mcount)
#endif /* CONFIG_DYNAMIC_FTRACE */
#endif /* CONFIG_FTRACE */
.section .rodata,"a" .section .rodata,"a"
#include "syscall_table_32.S" #include "syscall_table_32.S"
......
...@@ -51,9 +51,115 @@ ...@@ -51,9 +51,115 @@
#include <asm/page.h> #include <asm/page.h>
#include <asm/irqflags.h> #include <asm/irqflags.h>
#include <asm/paravirt.h> #include <asm/paravirt.h>
#include <asm/ftrace.h>
.code64 .code64
#ifdef CONFIG_FTRACE
#ifdef CONFIG_DYNAMIC_FTRACE
ENTRY(mcount)
subq $0x38, %rsp
movq %rax, (%rsp)
movq %rcx, 8(%rsp)
movq %rdx, 16(%rsp)
movq %rsi, 24(%rsp)
movq %rdi, 32(%rsp)
movq %r8, 40(%rsp)
movq %r9, 48(%rsp)
movq 0x38(%rsp), %rdi
subq $MCOUNT_INSN_SIZE, %rdi
.globl mcount_call
mcount_call:
call ftrace_stub
movq 48(%rsp), %r9
movq 40(%rsp), %r8
movq 32(%rsp), %rdi
movq 24(%rsp), %rsi
movq 16(%rsp), %rdx
movq 8(%rsp), %rcx
movq (%rsp), %rax
addq $0x38, %rsp
retq
END(mcount)
ENTRY(ftrace_caller)
/* taken from glibc */
subq $0x38, %rsp
movq %rax, (%rsp)
movq %rcx, 8(%rsp)
movq %rdx, 16(%rsp)
movq %rsi, 24(%rsp)
movq %rdi, 32(%rsp)
movq %r8, 40(%rsp)
movq %r9, 48(%rsp)
movq 0x38(%rsp), %rdi
movq 8(%rbp), %rsi
subq $MCOUNT_INSN_SIZE, %rdi
.globl ftrace_call
ftrace_call:
call ftrace_stub
movq 48(%rsp), %r9
movq 40(%rsp), %r8
movq 32(%rsp), %rdi
movq 24(%rsp), %rsi
movq 16(%rsp), %rdx
movq 8(%rsp), %rcx
movq (%rsp), %rax
addq $0x38, %rsp
.globl ftrace_stub
ftrace_stub:
retq
END(ftrace_caller)
#else /* ! CONFIG_DYNAMIC_FTRACE */
ENTRY(mcount)
cmpq $ftrace_stub, ftrace_trace_function
jnz trace
.globl ftrace_stub
ftrace_stub:
retq
trace:
/* taken from glibc */
subq $0x38, %rsp
movq %rax, (%rsp)
movq %rcx, 8(%rsp)
movq %rdx, 16(%rsp)
movq %rsi, 24(%rsp)
movq %rdi, 32(%rsp)
movq %r8, 40(%rsp)
movq %r9, 48(%rsp)
movq 0x38(%rsp), %rdi
movq 8(%rbp), %rsi
subq $MCOUNT_INSN_SIZE, %rdi
call *ftrace_trace_function
movq 48(%rsp), %r9
movq 40(%rsp), %r8
movq 32(%rsp), %rdi
movq 24(%rsp), %rsi
movq 16(%rsp), %rdx
movq 8(%rsp), %rcx
movq (%rsp), %rax
addq $0x38, %rsp
jmp ftrace_stub
END(mcount)
#endif /* CONFIG_DYNAMIC_FTRACE */
#endif /* CONFIG_FTRACE */
#ifndef CONFIG_PREEMPT #ifndef CONFIG_PREEMPT
#define retint_kernel retint_restore_args #define retint_kernel retint_restore_args
#endif #endif
......
/*
* Code for replacing ftrace calls with jumps.
*
* Copyright (C) 2007-2008 Steven Rostedt <srostedt@redhat.com>
*
* Thanks goes to Ingo Molnar, for suggesting the idea.
* Mathieu Desnoyers, for suggesting postponing the modifications.
* Arjan van de Ven, for keeping me straight, and explaining to me
* the dangers of modifying code on the run.
*/
#include <linux/spinlock.h>
#include <linux/hardirq.h>
#include <linux/ftrace.h>
#include <linux/percpu.h>
#include <linux/init.h>
#include <linux/list.h>
#include <asm/alternative.h>
#include <asm/ftrace.h>
/* Long is fine, even if it is only 4 bytes ;-) */
static long *ftrace_nop;
union ftrace_code_union {
char code[MCOUNT_INSN_SIZE];
struct {
char e8;
int offset;
} __attribute__((packed));
};
static int notrace ftrace_calc_offset(long ip, long addr)
{
return (int)(addr - ip);
}
notrace unsigned char *ftrace_nop_replace(void)
{
return (char *)ftrace_nop;
}
notrace unsigned char *ftrace_call_replace(unsigned long ip, unsigned long addr)
{
static union ftrace_code_union calc;
calc.e8 = 0xe8;
calc.offset = ftrace_calc_offset(ip + MCOUNT_INSN_SIZE, addr);
/*
* No locking needed, this must be called via kstop_machine
* which in essence is like running on a uniprocessor machine.
*/
return calc.code;
}
notrace int
ftrace_modify_code(unsigned long ip, unsigned char *old_code,
unsigned char *new_code)
{
unsigned replaced;
unsigned old = *(unsigned *)old_code; /* 4 bytes */
unsigned new = *(unsigned *)new_code; /* 4 bytes */
unsigned char newch = new_code[4];
int faulted = 0;
/*
* Note: Due to modules and __init, code can
* disappear and change, we need to protect against faulting
* as well as code changing.
*
* No real locking needed, this code is run through
* kstop_machine.
*/
asm volatile (
"1: lock\n"
" cmpxchg %3, (%2)\n"
" jnz 2f\n"
" movb %b4, 4(%2)\n"
"2:\n"
".section .fixup, \"ax\"\n"
"3: movl $1, %0\n"
" jmp 2b\n"
".previous\n"
_ASM_EXTABLE(1b, 3b)
: "=r"(faulted), "=a"(replaced)
: "r"(ip), "r"(new), "c"(newch),
"0"(faulted), "a"(old)
: "memory");
sync_core();
if (replaced != old && replaced != new)
faulted = 2;
return faulted;
}
notrace int ftrace_update_ftrace_func(ftrace_func_t func)
{
unsigned long ip = (unsigned long)(&ftrace_call);
unsigned char old[MCOUNT_INSN_SIZE], *new;
int ret;
memcpy(old, &ftrace_call, MCOUNT_INSN_SIZE);
new = ftrace_call_replace(ip, (unsigned long)func);
ret = ftrace_modify_code(ip, old, new);
return ret;
}
notrace int ftrace_mcount_set(unsigned long *data)
{
unsigned long ip = (long)(&mcount_call);
unsigned long *addr = data;
unsigned char old[MCOUNT_INSN_SIZE], *new;
/*
* Replace the mcount stub with a pointer to the
* ip recorder function.
*/
memcpy(old, &mcount_call, MCOUNT_INSN_SIZE);
new = ftrace_call_replace(ip, *addr);
*addr = ftrace_modify_code(ip, old, new);
return 0;
}
int __init ftrace_dyn_arch_init(void *data)
{
const unsigned char *const *noptable = find_nop_table();
/* This is running in kstop_machine */
ftrace_mcount_set(data);
ftrace_nop = (unsigned long *)noptable[MCOUNT_INSN_SIZE];
return 0;
}
#include <linux/module.h> #include <linux/module.h>
#include <asm/checksum.h> #include <asm/checksum.h>
#include <asm/desc.h>
#include <asm/pgtable.h> #include <asm/pgtable.h>
#include <asm/desc.h>
#include <asm/ftrace.h>
#ifdef CONFIG_FTRACE
/* mcount is defined in assembly */
EXPORT_SYMBOL(mcount);
#endif
/* Networking helper routines. */ /* Networking helper routines. */
EXPORT_SYMBOL(csum_partial_copy_generic); EXPORT_SYMBOL(csum_partial_copy_generic);
......
...@@ -11,6 +11,8 @@ ...@@ -11,6 +11,8 @@
#include <linux/delay.h> #include <linux/delay.h>
#include <linux/init.h> #include <linux/init.h>
#include <linux/numa.h> #include <linux/numa.h>
#include <linux/ftrace.h>
#include <asm/pgtable.h> #include <asm/pgtable.h>
#include <asm/pgalloc.h> #include <asm/pgalloc.h>
#include <asm/tlbflush.h> #include <asm/tlbflush.h>
...@@ -107,6 +109,8 @@ NORET_TYPE void machine_kexec(struct kimage *image) ...@@ -107,6 +109,8 @@ NORET_TYPE void machine_kexec(struct kimage *image)
unsigned long page_list[PAGES_NR]; unsigned long page_list[PAGES_NR];
void *control_page; void *control_page;
tracer_disable();
/* Interrupts aren't acceptable while we reboot */ /* Interrupts aren't acceptable while we reboot */
local_irq_disable(); local_irq_disable();
......
...@@ -11,6 +11,8 @@ ...@@ -11,6 +11,8 @@
#include <linux/string.h> #include <linux/string.h>
#include <linux/reboot.h> #include <linux/reboot.h>
#include <linux/numa.h> #include <linux/numa.h>
#include <linux/ftrace.h>
#include <asm/pgtable.h> #include <asm/pgtable.h>
#include <asm/tlbflush.h> #include <asm/tlbflush.h>
#include <asm/mmu_context.h> #include <asm/mmu_context.h>
...@@ -184,6 +186,8 @@ NORET_TYPE void machine_kexec(struct kimage *image) ...@@ -184,6 +186,8 @@ NORET_TYPE void machine_kexec(struct kimage *image)
unsigned long page_list[PAGES_NR]; unsigned long page_list[PAGES_NR];
void *control_page; void *control_page;
tracer_disable();
/* Interrupts aren't acceptable while we reboot */ /* Interrupts aren't acceptable while we reboot */
local_irq_disable(); local_irq_disable();
......
...@@ -142,7 +142,10 @@ void cpu_idle(void) ...@@ -142,7 +142,10 @@ void cpu_idle(void)
local_irq_disable(); local_irq_disable();
__get_cpu_var(irq_stat).idle_timestamp = jiffies; __get_cpu_var(irq_stat).idle_timestamp = jiffies;
/* Don't trace irqs off for idle */
stop_critical_timings();
pm_idle(); pm_idle();
start_critical_timings();
} }
tick_nohz_restart_sched_tick(); tick_nohz_restart_sched_tick();
preempt_enable_no_resched(); preempt_enable_no_resched();
......
...@@ -134,7 +134,10 @@ void cpu_idle(void) ...@@ -134,7 +134,10 @@ void cpu_idle(void)
*/ */
local_irq_disable(); local_irq_disable();
enter_idle(); enter_idle();
/* Don't trace irqs off for idle */
stop_critical_timings();
pm_idle(); pm_idle();
start_critical_timings();
/* In many cases the interrupt that ended idle /* In many cases the interrupt that ended idle
has already called exit_idle. But some idle has already called exit_idle. But some idle
loops can be woken up without interrupt. */ loops can be woken up without interrupt. */
......
...@@ -42,7 +42,8 @@ ...@@ -42,7 +42,8 @@
#include <asm/topology.h> #include <asm/topology.h>
#include <asm/vgtod.h> #include <asm/vgtod.h>
#define __vsyscall(nr) __attribute__ ((unused,__section__(".vsyscall_" #nr))) #define __vsyscall(nr) \
__attribute__ ((unused, __section__(".vsyscall_" #nr))) notrace
#define __syscall_clobber "r11","cx","memory" #define __syscall_clobber "r11","cx","memory"
/* /*
......
...@@ -2,13 +2,20 @@ ...@@ -2,13 +2,20 @@
All C exports should go in the respective C files. */ All C exports should go in the respective C files. */
#include <linux/module.h> #include <linux/module.h>
#include <net/checksum.h>
#include <linux/smp.h> #include <linux/smp.h>
#include <net/checksum.h>
#include <asm/processor.h> #include <asm/processor.h>
#include <asm/uaccess.h>
#include <asm/pgtable.h> #include <asm/pgtable.h>
#include <asm/uaccess.h>
#include <asm/desc.h> #include <asm/desc.h>
#include <asm/ftrace.h>
#ifdef CONFIG_FTRACE
/* mcount is defined in assembly */
EXPORT_SYMBOL(mcount);
#endif
EXPORT_SYMBOL(kernel_thread); EXPORT_SYMBOL(kernel_thread);
......
...@@ -5,6 +5,7 @@ ...@@ -5,6 +5,7 @@
obj-$(CONFIG_SMP) := msr-on-cpu.o obj-$(CONFIG_SMP) := msr-on-cpu.o
lib-y := delay.o lib-y := delay.o
lib-y += thunk_$(BITS).o
lib-y += usercopy_$(BITS).o getuser.o putuser.o lib-y += usercopy_$(BITS).o getuser.o putuser.o
lib-y += memcpy_$(BITS).o lib-y += memcpy_$(BITS).o
......
/*
* Trampoline to trace irqs off. (otherwise CALLER_ADDR1 might crash)
* Copyright 2008 by Steven Rostedt, Red Hat, Inc
* (inspired by Andi Kleen's thunk_64.S)
* Subject to the GNU public license, v.2. No warranty of any kind.
*/
#include <linux/linkage.h>
#define ARCH_TRACE_IRQS_ON \
pushl %eax; \
pushl %ecx; \
pushl %edx; \
call trace_hardirqs_on; \
popl %edx; \
popl %ecx; \
popl %eax;
#define ARCH_TRACE_IRQS_OFF \
pushl %eax; \
pushl %ecx; \
pushl %edx; \
call trace_hardirqs_off; \
popl %edx; \
popl %ecx; \
popl %eax;
#ifdef CONFIG_TRACE_IRQFLAGS
/* put return address in eax (arg1) */
.macro thunk_ra name,func
.globl \name
\name:
pushl %eax
pushl %ecx
pushl %edx
/* Place EIP in the arg1 */
movl 3*4(%esp), %eax
call \func
popl %edx
popl %ecx
popl %eax
ret
.endm
thunk_ra trace_hardirqs_on_thunk,trace_hardirqs_on_caller
thunk_ra trace_hardirqs_off_thunk,trace_hardirqs_off_caller
#endif
...@@ -2,6 +2,7 @@ ...@@ -2,6 +2,7 @@
* Save registers before calling assembly functions. This avoids * Save registers before calling assembly functions. This avoids
* disturbance of register allocation in some inline assembly constructs. * disturbance of register allocation in some inline assembly constructs.
* Copyright 2001,2002 by Andi Kleen, SuSE Labs. * Copyright 2001,2002 by Andi Kleen, SuSE Labs.
* Added trace_hardirqs callers - Copyright 2007 Steven Rostedt, Red Hat, Inc.
* Subject to the GNU public license, v.2. No warranty of any kind. * Subject to the GNU public license, v.2. No warranty of any kind.
*/ */
...@@ -42,8 +43,22 @@ ...@@ -42,8 +43,22 @@
#endif #endif
#ifdef CONFIG_TRACE_IRQFLAGS #ifdef CONFIG_TRACE_IRQFLAGS
thunk trace_hardirqs_on_thunk,trace_hardirqs_on /* put return address in rdi (arg1) */
thunk trace_hardirqs_off_thunk,trace_hardirqs_off .macro thunk_ra name,func
.globl \name
\name:
CFI_STARTPROC
SAVE_ARGS
/* SAVE_ARGS pushes 9 elements */
/* the next element would be the rip */
movq 9*8(%rsp), %rdi
call \func
jmp restore
CFI_ENDPROC
.endm
thunk_ra trace_hardirqs_on_thunk,trace_hardirqs_on_caller
thunk_ra trace_hardirqs_off_thunk,trace_hardirqs_off_caller
#endif #endif
#ifdef CONFIG_DEBUG_LOCK_ALLOC #ifdef CONFIG_DEBUG_LOCK_ALLOC
......
...@@ -8,6 +8,11 @@ obj-$(CONFIG_X86_PTDUMP) += dump_pagetables.o ...@@ -8,6 +8,11 @@ obj-$(CONFIG_X86_PTDUMP) += dump_pagetables.o
obj-$(CONFIG_HIGHMEM) += highmem_32.o obj-$(CONFIG_HIGHMEM) += highmem_32.o
obj-$(CONFIG_MMIOTRACE_HOOKS) += kmmio.o
obj-$(CONFIG_MMIOTRACE) += mmiotrace.o
mmiotrace-y := pf_in.o mmio-mod.o
obj-$(CONFIG_MMIOTRACE_TEST) += testmmiotrace.o
ifeq ($(CONFIG_X86_32),y) ifeq ($(CONFIG_X86_32),y)
obj-$(CONFIG_NUMA) += discontig_32.o obj-$(CONFIG_NUMA) += discontig_32.o
else else
......
...@@ -10,6 +10,7 @@ ...@@ -10,6 +10,7 @@
#include <linux/string.h> #include <linux/string.h>
#include <linux/types.h> #include <linux/types.h>
#include <linux/ptrace.h> #include <linux/ptrace.h>
#include <linux/mmiotrace.h>
#include <linux/mman.h> #include <linux/mman.h>
#include <linux/mm.h> #include <linux/mm.h>
#include <linux/smp.h> #include <linux/smp.h>
...@@ -49,6 +50,16 @@ ...@@ -49,6 +50,16 @@
#define PF_RSVD (1<<3) #define PF_RSVD (1<<3)
#define PF_INSTR (1<<4) #define PF_INSTR (1<<4)
static inline int kmmio_fault(struct pt_regs *regs, unsigned long addr)
{
#ifdef CONFIG_MMIOTRACE_HOOKS
if (unlikely(is_kmmio_active()))
if (kmmio_handler(regs, addr) == 1)
return -1;
#endif
return 0;
}
static inline int notify_page_fault(struct pt_regs *regs) static inline int notify_page_fault(struct pt_regs *regs)
{ {
#ifdef CONFIG_KPROBES #ifdef CONFIG_KPROBES
...@@ -598,6 +609,8 @@ void __kprobes do_page_fault(struct pt_regs *regs, unsigned long error_code) ...@@ -598,6 +609,8 @@ void __kprobes do_page_fault(struct pt_regs *regs, unsigned long error_code)
if (notify_page_fault(regs)) if (notify_page_fault(regs))
return; return;
if (unlikely(kmmio_fault(regs, address)))
return;
/* /*
* We fault-in kernel-space virtual memory on-demand. The * We fault-in kernel-space virtual memory on-demand. The
......
...@@ -1035,6 +1035,8 @@ void mark_rodata_ro(void) ...@@ -1035,6 +1035,8 @@ void mark_rodata_ro(void)
unsigned long start = PFN_ALIGN(_text); unsigned long start = PFN_ALIGN(_text);
unsigned long size = PFN_ALIGN(_etext) - start; unsigned long size = PFN_ALIGN(_etext) - start;
#ifndef CONFIG_DYNAMIC_FTRACE
/* Dynamic tracing modifies the kernel text section */
set_pages_ro(virt_to_page(start), size >> PAGE_SHIFT); set_pages_ro(virt_to_page(start), size >> PAGE_SHIFT);
printk(KERN_INFO "Write protecting the kernel text: %luk\n", printk(KERN_INFO "Write protecting the kernel text: %luk\n",
size >> 10); size >> 10);
...@@ -1047,6 +1049,8 @@ void mark_rodata_ro(void) ...@@ -1047,6 +1049,8 @@ void mark_rodata_ro(void)
printk(KERN_INFO "Testing CPA: write protecting again\n"); printk(KERN_INFO "Testing CPA: write protecting again\n");
set_pages_ro(virt_to_page(start), size>>PAGE_SHIFT); set_pages_ro(virt_to_page(start), size>>PAGE_SHIFT);
#endif #endif
#endif /* CONFIG_DYNAMIC_FTRACE */
start += size; start += size;
size = (unsigned long)__end_rodata - start; size = (unsigned long)__end_rodata - start;
set_pages_ro(virt_to_page(start), size >> PAGE_SHIFT); set_pages_ro(virt_to_page(start), size >> PAGE_SHIFT);
......
@@ -991,6 +991,13 @@ EXPORT_SYMBOL_GPL(rodata_test_data);
 void mark_rodata_ro(void)
 {
 	unsigned long start = PFN_ALIGN(_stext), end = PFN_ALIGN(__end_rodata);
+	unsigned long rodata_start =
+		((unsigned long)__start_rodata + PAGE_SIZE - 1) & PAGE_MASK;
+
+#ifdef CONFIG_DYNAMIC_FTRACE
+	/* Dynamic tracing modifies the kernel text section */
+	start = rodata_start;
+#endif
 
 	printk(KERN_INFO "Write protecting the kernel read-only data: %luk\n",
 	       (end - start) >> 10);
@@ -1000,8 +1007,7 @@ void mark_rodata_ro(void)
 	 * The rodata section (but not the kernel text!) should also be
 	 * not-executable.
 	 */
-	start = ((unsigned long)__start_rodata + PAGE_SIZE - 1) & PAGE_MASK;
-	set_memory_nx(start, (end - start) >> PAGE_SHIFT);
+	set_memory_nx(rodata_start, (end - rodata_start) >> PAGE_SHIFT);
 
 	rodata_test();
...
@@ -12,6 +12,7 @@
 #include <linux/module.h>
 #include <linux/slab.h>
 #include <linux/vmalloc.h>
+#include <linux/mmiotrace.h>
 
 #include <asm/cacheflush.h>
 #include <asm/e820.h>
@@ -122,10 +123,13 @@ static void __iomem *__ioremap_caller(resource_size_t phys_addr,
 {
 	unsigned long pfn, offset, vaddr;
 	resource_size_t last_addr;
+	const resource_size_t unaligned_phys_addr = phys_addr;
+	const unsigned long unaligned_size = size;
 	struct vm_struct *area;
 	unsigned long new_prot_val;
 	pgprot_t prot;
 	int retval;
+	void __iomem *ret_addr;
 
 	/* Don't allow wraparound or zero size */
 	last_addr = phys_addr + size - 1;
@@ -233,7 +237,10 @@ static void __iomem *__ioremap_caller(resource_size_t phys_addr,
 		return NULL;
 	}
 
-	return (void __iomem *) (vaddr + offset);
+	ret_addr = (void __iomem *) (vaddr + offset);
+	mmiotrace_ioremap(unaligned_phys_addr, unaligned_size, ret_addr);
+
+	return ret_addr;
 }
 
 /**
@@ -348,6 +355,8 @@ void iounmap(volatile void __iomem *addr)
 	addr = (volatile void __iomem *)
 		(PAGE_MASK & (unsigned long __force)addr);
 
+	mmiotrace_iounmap(addr);
+
 	/* Use the vm area unlocked, assuming the caller
 	   ensures there isn't another iounmap for the same address
 	   in parallel. Reuse of the virtual address is prevented by
...
@@ -262,6 +262,7 @@ pte_t *lookup_address(unsigned long address, unsigned int *level)
 
 	return pte_offset_kernel(pmd, address);
 }
+EXPORT_SYMBOL_GPL(lookup_address);
 
 /*
  * Set the new pmd in all the pgds we know about:
...
/*
* Fault Injection Test harness (FI)
* Copyright (C) Intel Crop.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307,
* USA.
*
*/
#ifndef __PF_H_
#define __PF_H_
enum reason_type {
NOT_ME, /* page fault is not in regions */
NOTHING, /* access others point in regions */
REG_READ, /* read from addr to reg */
REG_WRITE, /* write from reg to addr */
IMM_WRITE, /* write from imm to addr */
OTHERS /* Other instructions can not intercept */
};
enum reason_type get_ins_type(unsigned long ins_addr);
unsigned int get_ins_mem_width(unsigned long ins_addr);
unsigned long get_ins_reg_val(unsigned long ins_addr, struct pt_regs *regs);
unsigned long get_ins_imm_val(unsigned long ins_addr);
#endif /* __PF_H_ */
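The helpers declared above let a fault handler decode the instruction that touched an MMIO address. As a rough sketch only (not part of the patch; mmio-mod.c is the real user, and the function name and physical-address handling here are invented), a post-fault hook could classify the access and hand it to the tracer via the mmiotrace_rw record declared further below in the mmiotrace header:

/* Illustrative sketch, names assumed; real code lives in mmio-mod.c. */
static void sketch_log_access(unsigned long ip, struct pt_regs *regs,
			      resource_size_t phys)
{
	struct mmiotrace_rw rw = {
		.phys	= phys,
		.pc	= ip,
		.width	= get_ins_mem_width(ip),
	};

	switch (get_ins_type(ip)) {
	case REG_WRITE:
		rw.opcode = MMIO_WRITE;
		rw.value  = get_ins_reg_val(ip, regs);
		break;
	case IMM_WRITE:
		rw.opcode = MMIO_WRITE;
		rw.value  = get_ins_imm_val(ip);
		break;
	case REG_READ:
		/* the read value is only known after the access completes */
		rw.opcode = MMIO_READ;
		break;
	default:
		rw.opcode = MMIO_UNKNOWN_OP;
		break;
	}
	mmio_trace_rw(&rw);
}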
/*
* Written by Pekka Paalanen, 2008 <pq@iki.fi>
*/
#include <linux/module.h>
#include <linux/io.h>
#define MODULE_NAME "testmmiotrace"
static unsigned long mmio_address;
module_param(mmio_address, ulong, 0);
MODULE_PARM_DESC(mmio_address, "Start address of the mapping of 16 kB.");
static void do_write_test(void __iomem *p)
{
unsigned int i;
for (i = 0; i < 256; i++)
iowrite8(i, p + i);
for (i = 1024; i < (5 * 1024); i += 2)
iowrite16(i * 12 + 7, p + i);
for (i = (5 * 1024); i < (16 * 1024); i += 4)
iowrite32(i * 212371 + 13, p + i);
}
static void do_read_test(void __iomem *p)
{
unsigned int i;
for (i = 0; i < 256; i++)
ioread8(p + i);
for (i = 1024; i < (5 * 1024); i += 2)
ioread16(p + i);
for (i = (5 * 1024); i < (16 * 1024); i += 4)
ioread32(p + i);
}
static void do_test(void)
{
void __iomem *p = ioremap_nocache(mmio_address, 0x4000);
if (!p) {
pr_err(MODULE_NAME ": could not ioremap, aborting.\n");
return;
}
do_write_test(p);
do_read_test(p);
iounmap(p);
}
static int __init init(void)
{
if (mmio_address == 0) {
pr_err(MODULE_NAME ": you have to use the module argument "
"mmio_address.\n");
pr_err(MODULE_NAME ": DO NOT LOAD THIS MODULE UNLESS"
" YOU REALLY KNOW WHAT YOU ARE DOING!\n");
return -ENXIO;
}
pr_warning(MODULE_NAME ": WARNING: mapping 16 kB @ 0x%08lx "
"in PCI address space, and writing "
"rubbish in there.\n", mmio_address);
do_test();
return 0;
}
static void __exit cleanup(void)
{
pr_debug(MODULE_NAME ": unloaded.\n");
}
module_init(init);
module_exit(cleanup);
MODULE_LICENSE("GPL");
...@@ -23,7 +23,7 @@ ...@@ -23,7 +23,7 @@
#define gtod vdso_vsyscall_gtod_data #define gtod vdso_vsyscall_gtod_data
static long vdso_fallback_gettime(long clock, struct timespec *ts) notrace static long vdso_fallback_gettime(long clock, struct timespec *ts)
{ {
long ret; long ret;
asm("syscall" : "=a" (ret) : asm("syscall" : "=a" (ret) :
...@@ -31,7 +31,7 @@ static long vdso_fallback_gettime(long clock, struct timespec *ts) ...@@ -31,7 +31,7 @@ static long vdso_fallback_gettime(long clock, struct timespec *ts)
return ret; return ret;
} }
static inline long vgetns(void) notrace static inline long vgetns(void)
{ {
long v; long v;
cycles_t (*vread)(void); cycles_t (*vread)(void);
...@@ -40,7 +40,7 @@ static inline long vgetns(void) ...@@ -40,7 +40,7 @@ static inline long vgetns(void)
return (v * gtod->clock.mult) >> gtod->clock.shift; return (v * gtod->clock.mult) >> gtod->clock.shift;
} }
static noinline int do_realtime(struct timespec *ts) notrace static noinline int do_realtime(struct timespec *ts)
{ {
unsigned long seq, ns; unsigned long seq, ns;
do { do {
...@@ -54,7 +54,8 @@ static noinline int do_realtime(struct timespec *ts) ...@@ -54,7 +54,8 @@ static noinline int do_realtime(struct timespec *ts)
} }
/* Copy of the version in kernel/time.c which we cannot directly access */ /* Copy of the version in kernel/time.c which we cannot directly access */
static void vset_normalized_timespec(struct timespec *ts, long sec, long nsec) notrace static void
vset_normalized_timespec(struct timespec *ts, long sec, long nsec)
{ {
while (nsec >= NSEC_PER_SEC) { while (nsec >= NSEC_PER_SEC) {
nsec -= NSEC_PER_SEC; nsec -= NSEC_PER_SEC;
...@@ -68,7 +69,7 @@ static void vset_normalized_timespec(struct timespec *ts, long sec, long nsec) ...@@ -68,7 +69,7 @@ static void vset_normalized_timespec(struct timespec *ts, long sec, long nsec)
ts->tv_nsec = nsec; ts->tv_nsec = nsec;
} }
static noinline int do_monotonic(struct timespec *ts) notrace static noinline int do_monotonic(struct timespec *ts)
{ {
unsigned long seq, ns, secs; unsigned long seq, ns, secs;
do { do {
...@@ -82,7 +83,7 @@ static noinline int do_monotonic(struct timespec *ts) ...@@ -82,7 +83,7 @@ static noinline int do_monotonic(struct timespec *ts)
return 0; return 0;
} }
int __vdso_clock_gettime(clockid_t clock, struct timespec *ts) notrace int __vdso_clock_gettime(clockid_t clock, struct timespec *ts)
{ {
if (likely(gtod->sysctl_enabled && gtod->clock.vread)) if (likely(gtod->sysctl_enabled && gtod->clock.vread))
switch (clock) { switch (clock) {
...@@ -96,7 +97,7 @@ int __vdso_clock_gettime(clockid_t clock, struct timespec *ts) ...@@ -96,7 +97,7 @@ int __vdso_clock_gettime(clockid_t clock, struct timespec *ts)
int clock_gettime(clockid_t, struct timespec *) int clock_gettime(clockid_t, struct timespec *)
__attribute__((weak, alias("__vdso_clock_gettime"))); __attribute__((weak, alias("__vdso_clock_gettime")));
int __vdso_gettimeofday(struct timeval *tv, struct timezone *tz) notrace int __vdso_gettimeofday(struct timeval *tv, struct timezone *tz)
{ {
long ret; long ret;
if (likely(gtod->sysctl_enabled && gtod->clock.vread)) { if (likely(gtod->sysctl_enabled && gtod->clock.vread)) {
......
@@ -13,7 +13,8 @@
 #include <asm/vgtod.h>
 #include "vextern.h"
 
-long __vdso_getcpu(unsigned *cpu, unsigned *node, struct getcpu_cache *unused)
+notrace long
+__vdso_getcpu(unsigned *cpu, unsigned *node, struct getcpu_cache *unused)
 {
 	unsigned int p;
...
#ifndef _ASM_ARM_FTRACE
#define _ASM_ARM_FTRACE
#ifdef CONFIG_FTRACE
#define MCOUNT_ADDR ((long)(mcount))
#define MCOUNT_INSN_SIZE 4 /* sizeof mcount call */
#ifndef __ASSEMBLY__
extern void mcount(void);
#endif
#endif
#endif /* _ASM_ARM_FTRACE */
@@ -59,6 +59,7 @@ struct kprobe_ctlblk {
 };
 
 void arch_remove_kprobe(struct kprobe *);
+void kretprobe_trampoline(void);
 
 int kprobe_trap_handler(struct pt_regs *regs, unsigned int instr);
 int kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr);
...
#ifndef _ASM_POWERPC_FTRACE
#define _ASM_POWERPC_FTRACE
#ifdef CONFIG_FTRACE
#define MCOUNT_ADDR ((long)(_mcount))
#define MCOUNT_INSN_SIZE 4 /* sizeof mcount call */
#ifndef __ASSEMBLY__
extern void _mcount(void);
#endif
#endif
#endif /* _ASM_POWERPC_FTRACE */
@@ -59,6 +59,11 @@ extern void iseries_handle_interrupts(void);
 	get_paca()->hard_enabled = 0;	\
 } while(0)
 
+static inline int irqs_disabled_flags(unsigned long flags)
+{
+	return flags == 0;
+}
+
 #else
 
 #if defined(CONFIG_BOOKE)
@@ -113,6 +118,11 @@ static inline void local_irq_save_ptr(unsigned long *flags)
 #define hard_irq_enable()	local_irq_enable()
 #define hard_irq_disable()	local_irq_disable()
 
+static inline int irqs_disabled_flags(unsigned long flags)
+{
+	return (flags & MSR_EE) == 0;
+}
+
 #endif /* CONFIG_PPC64 */
 
 /*
...
#ifndef _ASM_SPARC64_FTRACE
#define _ASM_SPARC64_FTRACE
#ifdef CONFIG_MCOUNT
#define MCOUNT_ADDR ((long)(_mcount))
#define MCOUNT_INSN_SIZE 4 /* sizeof mcount call */
#ifndef __ASSEMBLY__
extern void _mcount(void);
#endif
#endif
#endif /* _ASM_SPARC64_FTRACE */
@@ -72,6 +72,8 @@ static inline void alternatives_smp_module_del(struct module *mod) {}
 static inline void alternatives_smp_switch(int smp) {}
 #endif	/* CONFIG_SMP */
 
+const unsigned char *const *find_nop_table(void);
+
 /*
  * Alternative instructions for different CPU types or capabilities.
  *
...
#ifndef _ASM_X86_FTRACE
#define _ASM_X86_FTRACE
#ifdef CONFIG_FTRACE
#define MCOUNT_ADDR ((long)(mcount))
#define MCOUNT_INSN_SIZE 5 /* sizeof mcount call */
#ifndef __ASSEMBLY__
extern void mcount(void);
#endif
#endif /* CONFIG_FTRACE */
#endif /* _ASM_X86_FTRACE */
@@ -190,8 +190,6 @@ static inline void trace_hardirqs_fixup(void)
 #else
 
 #ifdef CONFIG_X86_64
-#define ARCH_TRACE_IRQS_ON		call trace_hardirqs_on_thunk
-#define ARCH_TRACE_IRQS_OFF		call trace_hardirqs_off_thunk
 #define ARCH_LOCKDEP_SYS_EXIT		call lockdep_sys_exit_thunk
 #define ARCH_LOCKDEP_SYS_EXIT_IRQ	\
 	TRACE_IRQS_ON; \
@@ -203,24 +201,6 @@ static inline void trace_hardirqs_fixup(void)
 	TRACE_IRQS_OFF;
 
 #else
-#define ARCH_TRACE_IRQS_ON			\
-	pushl %eax;				\
-	pushl %ecx;				\
-	pushl %edx;				\
-	call trace_hardirqs_on;			\
-	popl %edx;				\
-	popl %ecx;				\
-	popl %eax;
-
-#define ARCH_TRACE_IRQS_OFF			\
-	pushl %eax;				\
-	pushl %ecx;				\
-	pushl %edx;				\
-	call trace_hardirqs_off;		\
-	popl %edx;				\
-	popl %ecx;				\
-	popl %eax;
-
 #define ARCH_LOCKDEP_SYS_EXIT			\
 	pushl %eax;				\
 	pushl %ecx;				\
@@ -234,8 +214,8 @@ static inline void trace_hardirqs_fixup(void)
 #endif
 
 #ifdef CONFIG_TRACE_IRQFLAGS
-#  define TRACE_IRQS_ON		ARCH_TRACE_IRQS_ON
-#  define TRACE_IRQS_OFF	ARCH_TRACE_IRQS_OFF
+#  define TRACE_IRQS_ON		call trace_hardirqs_on_thunk;
+#  define TRACE_IRQS_OFF	call trace_hardirqs_off_thunk;
 #else
 #  define TRACE_IRQS_ON
 #  define TRACE_IRQS_OFF
...
@@ -24,7 +24,8 @@ enum vsyscall_num {
 	((unused, __section__ (".vsyscall_gtod_data"),aligned(16)))
 #define __section_vsyscall_clock __attribute__ \
 	((unused, __section__ (".vsyscall_clock"),aligned(16)))
-#define __vsyscall_fn __attribute__ ((unused,__section__(".vsyscall_fn")))
+#define __vsyscall_fn \
+	__attribute__ ((unused, __section__(".vsyscall_fn"))) notrace
 
 #define VGETCPU_RDTSCP	1
 #define VGETCPU_LSL	2
...
#ifndef _LINUX_FTRACE_H
#define _LINUX_FTRACE_H
#ifdef CONFIG_FTRACE
#include <linux/linkage.h>
#include <linux/fs.h>
extern int ftrace_enabled;
extern int
ftrace_enable_sysctl(struct ctl_table *table, int write,
struct file *filp, void __user *buffer, size_t *lenp,
loff_t *ppos);
typedef void (*ftrace_func_t)(unsigned long ip, unsigned long parent_ip);
struct ftrace_ops {
ftrace_func_t func;
struct ftrace_ops *next;
};
/*
* The ftrace_ops must be a static and should also
* be read_mostly. These functions do modify read_mostly variables
* so use them sparely. Never free an ftrace_op or modify the
* next pointer after it has been registered. Even after unregistering
* it, the next pointer may still be used internally.
*/
int register_ftrace_function(struct ftrace_ops *ops);
int unregister_ftrace_function(struct ftrace_ops *ops);
void clear_ftrace_function(void);
extern void ftrace_stub(unsigned long a0, unsigned long a1);
#else /* !CONFIG_FTRACE */
# define register_ftrace_function(ops) do { } while (0)
# define unregister_ftrace_function(ops) do { } while (0)
# define clear_ftrace_function(ops) do { } while (0)
#endif /* CONFIG_FTRACE */
#ifdef CONFIG_DYNAMIC_FTRACE
# define FTRACE_HASHBITS 10
# define FTRACE_HASHSIZE (1<<FTRACE_HASHBITS)
enum {
FTRACE_FL_FREE = (1 << 0),
FTRACE_FL_FAILED = (1 << 1),
FTRACE_FL_FILTER = (1 << 2),
FTRACE_FL_ENABLED = (1 << 3),
FTRACE_FL_NOTRACE = (1 << 4),
FTRACE_FL_CONVERTED = (1 << 5),
FTRACE_FL_FROZEN = (1 << 6),
};
struct dyn_ftrace {
struct hlist_node node;
unsigned long ip; /* address of mcount call-site */
unsigned long flags;
};
int ftrace_force_update(void);
void ftrace_set_filter(unsigned char *buf, int len, int reset);
/* defined in arch */
extern int ftrace_ip_converted(unsigned long ip);
extern unsigned char *ftrace_nop_replace(void);
extern unsigned char *ftrace_call_replace(unsigned long ip, unsigned long addr);
extern int ftrace_dyn_arch_init(void *data);
extern int ftrace_mcount_set(unsigned long *data);
extern int ftrace_modify_code(unsigned long ip, unsigned char *old_code,
unsigned char *new_code);
extern int ftrace_update_ftrace_func(ftrace_func_t func);
extern void ftrace_caller(void);
extern void ftrace_call(void);
extern void mcount_call(void);
extern int skip_trace(unsigned long ip);
void ftrace_disable_daemon(void);
void ftrace_enable_daemon(void);
#else
# define skip_trace(ip) ({ 0; })
# define ftrace_force_update() ({ 0; })
# define ftrace_set_filter(buf, len, reset) do { } while (0)
# define ftrace_disable_daemon() do { } while (0)
# define ftrace_enable_daemon() do { } while (0)
#endif /* CONFIG_DYNAMIC_FTRACE */
/* totally disable ftrace - can not re-enable after this */
void ftrace_kill(void);
void ftrace_kill_atomic(void);
static inline void tracer_disable(void)
{
#ifdef CONFIG_FTRACE
ftrace_enabled = 0;
#endif
}
#ifdef CONFIG_FRAME_POINTER
/* TODO: need to fix this for ARM */
# define CALLER_ADDR0 ((unsigned long)__builtin_return_address(0))
# define CALLER_ADDR1 ((unsigned long)__builtin_return_address(1))
# define CALLER_ADDR2 ((unsigned long)__builtin_return_address(2))
# define CALLER_ADDR3 ((unsigned long)__builtin_return_address(3))
# define CALLER_ADDR4 ((unsigned long)__builtin_return_address(4))
# define CALLER_ADDR5 ((unsigned long)__builtin_return_address(5))
# define CALLER_ADDR6 ((unsigned long)__builtin_return_address(6))
#else
# define CALLER_ADDR0 ((unsigned long)__builtin_return_address(0))
# define CALLER_ADDR1 0UL
# define CALLER_ADDR2 0UL
# define CALLER_ADDR3 0UL
# define CALLER_ADDR4 0UL
# define CALLER_ADDR5 0UL
# define CALLER_ADDR6 0UL
#endif
#ifdef CONFIG_IRQSOFF_TRACER
extern void time_hardirqs_on(unsigned long a0, unsigned long a1);
extern void time_hardirqs_off(unsigned long a0, unsigned long a1);
#else
# define time_hardirqs_on(a0, a1) do { } while (0)
# define time_hardirqs_off(a0, a1) do { } while (0)
#endif
#ifdef CONFIG_PREEMPT_TRACER
extern void trace_preempt_on(unsigned long a0, unsigned long a1);
extern void trace_preempt_off(unsigned long a0, unsigned long a1);
#else
# define trace_preempt_on(a0, a1) do { } while (0)
# define trace_preempt_off(a0, a1) do { } while (0)
#endif
#ifdef CONFIG_TRACING
extern void
ftrace_special(unsigned long arg1, unsigned long arg2, unsigned long arg3);
#else
static inline void
ftrace_special(unsigned long arg1, unsigned long arg2, unsigned long arg3) { }
#endif
#endif /* _LINUX_FTRACE_H */
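For reference, the function-trace hook declared above is used by registering a static ftrace_ops whose ->func is called with the traced instruction pointer and its caller. A minimal sketch, with made-up names and not taken from this commit:

static void my_trace_func(unsigned long ip, unsigned long parent_ip)
{
	/* called on every traced function entry while registered */
}

/* must be static and never freed, as the comment in ftrace.h requires */
static struct ftrace_ops my_trace_ops __read_mostly = {
	.func = my_trace_func,
};

static int __init my_tracer_init(void)
{
	return register_ftrace_function(&my_trace_ops);
}

static void __exit my_tracer_exit(void)
{
	unregister_ftrace_function(&my_trace_ops);
}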
@@ -12,10 +12,10 @@
 #define _LINUX_TRACE_IRQFLAGS_H
 
 #ifdef CONFIG_TRACE_IRQFLAGS
-  extern void trace_hardirqs_on(void);
-  extern void trace_hardirqs_off(void);
   extern void trace_softirqs_on(unsigned long ip);
   extern void trace_softirqs_off(unsigned long ip);
+  extern void trace_hardirqs_on(void);
+  extern void trace_hardirqs_off(void);
 # define trace_hardirq_context(p)	((p)->hardirq_context)
 # define trace_softirq_context(p)	((p)->softirq_context)
 # define trace_hardirqs_enabled(p)	((p)->hardirqs_enabled)
@@ -41,6 +41,15 @@
 # define INIT_TRACE_IRQFLAGS
 #endif
 
+#if defined(CONFIG_IRQSOFF_TRACER) || \
+	defined(CONFIG_PREEMPT_TRACER)
+ extern void stop_critical_timings(void);
+ extern void start_critical_timings(void);
+#else
+# define stop_critical_timings() do { } while (0)
+# define start_critical_timings() do { } while (0)
+#endif
+
 #ifdef CONFIG_TRACE_IRQFLAGS_SUPPORT
 #include <asm/irqflags.h>
...
@@ -259,6 +259,10 @@ void recycle_rp_inst(struct kretprobe_instance *ri, struct hlist_head *head);
 struct jprobe;
 struct kretprobe;
 
+static inline struct kprobe *get_kprobe(void *addr)
+{
+	return NULL;
+}
 
 static inline struct kprobe *kprobe_running(void)
 {
 	return NULL;
...
@@ -4,6 +4,8 @@
 #include <linux/compiler.h>
 #include <asm/linkage.h>
 
+#define notrace __attribute__((no_instrument_function))
+
 #ifdef __cplusplus
 #define CPP_ASMLINKAGE extern "C"
 #else
...
@@ -44,8 +44,8 @@ struct marker {
 	 */
 	char state;		/* Marker state. */
 	char ptype;		/* probe type : 0 : single, 1 : multi */
-	void (*call)(const struct marker *mdata,	/* Probe wrapper */
-		void *call_private, const char *fmt, ...);
+				/* Probe wrapper */
+	void (*call)(const struct marker *mdata, void *call_private, ...);
 	struct marker_probe_closure single;
 	struct marker_probe_closure *multi;
 } __attribute__((aligned(8)));
@@ -58,8 +58,12 @@ struct marker {
  * Make sure the alignment of the structure in the __markers section will
  * not add unwanted padding between the beginning of the section and the
  * structure. Force alignment to the same alignment as the section start.
+ *
+ * The "generic" argument controls which marker enabling mechanism must be used.
+ * If generic is true, a variable read is used.
+ * If generic is false, immediate values are used.
  */
-#define __trace_mark(name, call_private, format, args...) \
+#define __trace_mark(generic, name, call_private, format, args...) \
 	do { \
 		static const char __mstrtab_##name[] \
 		__attribute__((section("__markers_strings"))) \
@@ -72,15 +76,14 @@ struct marker {
 			__mark_check_format(format, ## args); \
 		if (unlikely(__mark_##name.state)) { \
 			(*__mark_##name.call) \
-				(&__mark_##name, call_private, \
-				format, ## args); \
+				(&__mark_##name, call_private, ## args);\
 		} \
 	} while (0)
 
 extern void marker_update_probe_range(struct marker *begin,
 	struct marker *end);
 #else /* !CONFIG_MARKERS */
-#define __trace_mark(name, call_private, format, args...) \
+#define __trace_mark(generic, name, call_private, format, args...) \
 		__mark_check_format(format, ## args)
 static inline void marker_update_probe_range(struct marker *begin,
 	struct marker *end)
@@ -88,15 +91,30 @@ static inline void marker_update_probe_range(struct marker *begin,
 #endif /* CONFIG_MARKERS */
 
 /**
- * trace_mark - Marker
+ * trace_mark - Marker using code patching
  * @name: marker name, not quoted.
  * @format: format string
  * @args...: variable argument list
  *
- * Places a marker.
+ * Places a marker using optimized code patching technique (imv_read())
+ * to be enabled when immediate values are present.
  */
 #define trace_mark(name, format, args...) \
-	__trace_mark(name, NULL, format, ## args)
+	__trace_mark(0, name, NULL, format, ## args)
+
+/**
+ * _trace_mark - Marker using variable read
+ * @name: marker name, not quoted.
+ * @format: format string
+ * @args...: variable argument list
+ *
+ * Places a marker using a standard memory read (_imv_read()) to be
+ * enabled. Should be used for markers in code paths where instruction
+ * modification based enabling is not welcome. (__init and __exit functions,
+ * lockdep, some traps, printk).
+ */
+#define _trace_mark(name, format, args...) \
+	__trace_mark(1, name, NULL, format, ## args)
 
 /**
  * MARK_NOARGS - Format string for a marker with no argument.
@@ -117,9 +135,9 @@ static inline void __printf(1, 2) ___mark_check_format(const char *fmt, ...)
 
 extern marker_probe_func __mark_empty_function;
 
 extern void marker_probe_cb(const struct marker *mdata,
-	void *call_private, const char *fmt, ...);
+	void *call_private, ...);
 extern void marker_probe_cb_noarg(const struct marker *mdata,
-	void *call_private, const char *fmt, ...);
+	void *call_private, ...);
 
 /*
  * Connect a probe to a marker.
...
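A marker is placed with trace_mark() at the instrumentation site and consumed by a probe registered against the marker name and format string; the scheduler hunks further below (kernel_sched_wakeup and friends) are real users. A rough sketch follows; the probe signature and the marker_probe_register() call are assumptions based on the marker API of this kernel and are not shown in this excerpt:

/* instrumentation site (names invented for illustration) */
static void submit_request(int tag, void *req)
{
	trace_mark(mydrv_submit, "tag %d req %p", tag, req);
	/* ... */
}

/* probe side: assumed to follow the marker_probe_func signature */
static void mydrv_probe(void *probe_private, void *call_private,
			const char *fmt, va_list *args)
{
	int tag = va_arg(*args, int);
	void *req = va_arg(*args, void *);

	/* record or print tag/req here */
}

/* registration, e.g. from module init (assumed API):
 *	marker_probe_register("mydrv_submit", "tag %d req %p",
 *			      mydrv_probe, NULL);
 */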
#ifndef MMIOTRACE_H
#define MMIOTRACE_H
#include <linux/types.h>
#include <linux/list.h>
struct kmmio_probe;
struct pt_regs;
typedef void (*kmmio_pre_handler_t)(struct kmmio_probe *,
struct pt_regs *, unsigned long addr);
typedef void (*kmmio_post_handler_t)(struct kmmio_probe *,
unsigned long condition, struct pt_regs *);
struct kmmio_probe {
struct list_head list; /* kmmio internal list */
unsigned long addr; /* start location of the probe point */
unsigned long len; /* length of the probe region */
kmmio_pre_handler_t pre_handler; /* Called before addr is executed. */
kmmio_post_handler_t post_handler; /* Called after addr is executed */
void *private;
};
/* Is kmmio active, i.e. are any kmmio_probes currently registered? */
static inline int is_kmmio_active(void)
{
extern unsigned int kmmio_count;
return kmmio_count;
}
extern int register_kmmio_probe(struct kmmio_probe *p);
extern void unregister_kmmio_probe(struct kmmio_probe *p);
/* Called from page fault handler. */
extern int kmmio_handler(struct pt_regs *regs, unsigned long addr);
/* Called from ioremap.c */
#ifdef CONFIG_MMIOTRACE
extern void mmiotrace_ioremap(resource_size_t offset, unsigned long size,
void __iomem *addr);
extern void mmiotrace_iounmap(volatile void __iomem *addr);
#else
static inline void mmiotrace_ioremap(resource_size_t offset,
unsigned long size, void __iomem *addr)
{
}
static inline void mmiotrace_iounmap(volatile void __iomem *addr)
{
}
#endif /* CONFIG_MMIOTRACE */
enum mm_io_opcode {
MMIO_READ = 0x1, /* struct mmiotrace_rw */
MMIO_WRITE = 0x2, /* struct mmiotrace_rw */
MMIO_PROBE = 0x3, /* struct mmiotrace_map */
MMIO_UNPROBE = 0x4, /* struct mmiotrace_map */
MMIO_MARKER = 0x5, /* raw char data */
MMIO_UNKNOWN_OP = 0x6, /* struct mmiotrace_rw */
};
struct mmiotrace_rw {
resource_size_t phys; /* PCI address of register */
unsigned long value;
unsigned long pc; /* optional program counter */
int map_id;
unsigned char opcode; /* one of MMIO_{READ,WRITE,UNKNOWN_OP} */
unsigned char width; /* size of register access in bytes */
};
struct mmiotrace_map {
resource_size_t phys; /* base address in PCI space */
unsigned long virt; /* base virtual address */
unsigned long len; /* mapping size */
int map_id;
unsigned char opcode; /* MMIO_PROBE or MMIO_UNPROBE */
};
/* in kernel/trace/trace_mmiotrace.c */
extern void enable_mmiotrace(void);
extern void disable_mmiotrace(void);
extern void mmio_trace_rw(struct mmiotrace_rw *rw);
extern void mmio_trace_mapping(struct mmiotrace_map *map);
#endif /* MMIOTRACE_H */
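Beyond mmiotrace itself, the kmmio interface above can be used directly to watch an ioremapped window: arm a kmmio_probe over the area and the pre/post handlers run around every access that faults on it. A minimal sketch, with handler bodies and the mapped address as placeholders:

static void my_pre(struct kmmio_probe *p, struct pt_regs *regs,
		   unsigned long addr)
{
	/* runs before the trapped access is single-stepped */
}

static void my_post(struct kmmio_probe *p, unsigned long condition,
		    struct pt_regs *regs)
{
	/* runs after the access has completed */
}

static struct kmmio_probe my_probe = {
	.len		= PAGE_SIZE,
	.pre_handler	= my_pre,
	.post_handler	= my_post,
};

/* once the region is ioremapped:
 *	my_probe.addr = (unsigned long)iomem_base;
 *	err = register_kmmio_probe(&my_probe);
 * and on teardown:
 *	unregister_kmmio_probe(&my_probe);
 */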
@@ -10,7 +10,7 @@
 #include <linux/linkage.h>
 #include <linux/list.h>
 
-#ifdef CONFIG_DEBUG_PREEMPT
+#if defined(CONFIG_DEBUG_PREEMPT) || defined(CONFIG_PREEMPT_TRACER)
   extern void add_preempt_count(int val);
   extern void sub_preempt_count(int val);
 #else
@@ -52,6 +52,34 @@ do { \
 	preempt_check_resched(); \
 } while (0)
 
+/* For debugging and tracer internals only! */
+#define add_preempt_count_notrace(val) \
+	do { preempt_count() += (val); } while (0)
+#define sub_preempt_count_notrace(val) \
+	do { preempt_count() -= (val); } while (0)
+#define inc_preempt_count_notrace() add_preempt_count_notrace(1)
+#define dec_preempt_count_notrace() sub_preempt_count_notrace(1)
+
+#define preempt_disable_notrace() \
+do { \
+	inc_preempt_count_notrace(); \
+	barrier(); \
+} while (0)
+
+#define preempt_enable_no_resched_notrace() \
+do { \
+	barrier(); \
+	dec_preempt_count_notrace(); \
+} while (0)
+
+/* preempt_check_resched is OK to trace */
+#define preempt_enable_notrace() \
+do { \
+	preempt_enable_no_resched_notrace(); \
+	barrier(); \
+	preempt_check_resched(); \
+} while (0)
+
 #else
 
 #define preempt_disable()		do { } while (0)
@@ -59,6 +87,10 @@ do { \
 #define preempt_enable()		do { } while (0)
 #define preempt_check_resched()		do { } while (0)
 
+#define preempt_disable_notrace()		do { } while (0)
+#define preempt_enable_no_resched_notrace()	do { } while (0)
+#define preempt_enable_notrace()		do { } while (0)
+
 #endif
 
 #ifdef CONFIG_PREEMPT_NOTIFIERS
...
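The _notrace variants exist so that the latency tracers can disable preemption around their own bookkeeping without recursively generating preempt-off events. Tracer-internal code uses them along these lines (sketch only, function name invented):

static void tracer_internal_record(void)
{
	preempt_disable_notrace();
	/* touch per-CPU tracer state without emitting preempt-off events */
	preempt_enable_notrace();
}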
@@ -245,6 +245,8 @@ extern asmlinkage void schedule_tail(struct task_struct *prev);
 extern void init_idle(struct task_struct *idle, int cpu);
 extern void init_idle_bootup_task(struct task_struct *idle);
 
+extern int runqueue_is_locked(void);
+
 extern cpumask_t nohz_cpu_mask;
 #if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ)
 extern int select_nohz_load_balancer(int cpu);
@@ -2132,6 +2134,18 @@ static inline void arch_pick_mmap_layout(struct mm_struct *mm)
 }
 #endif
 
+#ifdef CONFIG_TRACING
+extern void
+__trace_special(void *__tr, void *__data,
+		unsigned long arg1, unsigned long arg2, unsigned long arg3);
+#else
+static inline void
+__trace_special(void *__tr, void *__data,
+		unsigned long arg1, unsigned long arg2, unsigned long arg3)
+{
+}
+#endif
+
 extern long sched_setaffinity(pid_t pid, const cpumask_t *new_mask);
 extern long sched_getaffinity(pid_t pid, cpumask_t *mask);
@@ -2226,6 +2240,8 @@ static inline void mm_init_owner(struct mm_struct *mm, struct task_struct *p)
 }
 #endif /* CONFIG_MM_OWNER */
 
+#define TASK_STATE_TO_CHAR_STR "RSDTtZX"
+
 #endif /* __KERNEL__ */
 
 #endif
@@ -105,6 +105,8 @@ extern int vm_highmem_is_dirtyable;
 extern int block_dump;
 extern int laptop_mode;
 
+extern unsigned long determine_dirtyable_memory(void);
+
 extern int dirty_ratio_handler(struct ctl_table *table, int write,
 		struct file *filp, void __user *buffer, size_t *lenp,
 		loff_t *ppos);
...
@@ -11,6 +11,18 @@ obj-y = sched.o fork.o exec_domain.o panic.o printk.o profile.o \
 	    hrtimer.o rwsem.o nsproxy.o srcu.o semaphore.o \
 	    notifier.o ksysfs.o pm_qos_params.o sched_clock.o
 
+CFLAGS_REMOVE_sched.o = -mno-spe
+
+ifdef CONFIG_FTRACE
+# Do not trace debug files and internal ftrace files
+CFLAGS_REMOVE_lockdep.o = -pg
+CFLAGS_REMOVE_lockdep_proc.o = -pg
+CFLAGS_REMOVE_mutex-debug.o = -pg
+CFLAGS_REMOVE_rtmutex-debug.o = -pg
+CFLAGS_REMOVE_cgroup-debug.o = -pg
+CFLAGS_REMOVE_sched_clock.o = -pg
+endif
+
 obj-$(CONFIG_SYSCTL_SYSCALL_CHECK) += sysctl_check.o
 obj-$(CONFIG_STACKTRACE) += stacktrace.o
 obj-y += time/
@@ -69,6 +81,8 @@ obj-$(CONFIG_TASK_DELAY_ACCT) += delayacct.o
 obj-$(CONFIG_TASKSTATS) += taskstats.o tsacct.o
 obj-$(CONFIG_MARKERS) += marker.o
 obj-$(CONFIG_LATENCYTOP) += latencytop.o
+obj-$(CONFIG_FTRACE) += trace/
+obj-$(CONFIG_TRACING) += trace/
 obj-$(CONFIG_SMP) += sched_cpupri.o
 ifneq ($(CONFIG_SCHED_NO_NO_OMIT_FRAME_POINTER),y)
...
@@ -909,7 +909,7 @@ static struct task_struct *copy_process(unsigned long clone_flags,
 
 	rt_mutex_init_task(p);
 
-#ifdef CONFIG_TRACE_IRQFLAGS
+#if defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_LOCKDEP)
 	DEBUG_LOCKS_WARN_ON(!p->hardirqs_enabled);
 	DEBUG_LOCKS_WARN_ON(!p->softirqs_enabled);
 #endif
...
...@@ -39,6 +39,7 @@ ...@@ -39,6 +39,7 @@
#include <linux/irqflags.h> #include <linux/irqflags.h>
#include <linux/utsname.h> #include <linux/utsname.h>
#include <linux/hash.h> #include <linux/hash.h>
#include <linux/ftrace.h>
#include <asm/sections.h> #include <asm/sections.h>
...@@ -81,6 +82,8 @@ static int graph_lock(void) ...@@ -81,6 +82,8 @@ static int graph_lock(void)
__raw_spin_unlock(&lockdep_lock); __raw_spin_unlock(&lockdep_lock);
return 0; return 0;
} }
/* prevent any recursions within lockdep from causing deadlocks */
current->lockdep_recursion++;
return 1; return 1;
} }
...@@ -89,6 +92,7 @@ static inline int graph_unlock(void) ...@@ -89,6 +92,7 @@ static inline int graph_unlock(void)
if (debug_locks && !__raw_spin_is_locked(&lockdep_lock)) if (debug_locks && !__raw_spin_is_locked(&lockdep_lock))
return DEBUG_LOCKS_WARN_ON(1); return DEBUG_LOCKS_WARN_ON(1);
current->lockdep_recursion--;
__raw_spin_unlock(&lockdep_lock); __raw_spin_unlock(&lockdep_lock);
return 0; return 0;
} }
...@@ -982,7 +986,7 @@ check_noncircular(struct lock_class *source, unsigned int depth) ...@@ -982,7 +986,7 @@ check_noncircular(struct lock_class *source, unsigned int depth)
return 1; return 1;
} }
#ifdef CONFIG_TRACE_IRQFLAGS #if defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_PROVE_LOCKING)
/* /*
* Forwards and backwards subgraph searching, for the purposes of * Forwards and backwards subgraph searching, for the purposes of
* proving that two subgraphs can be connected by a new dependency * proving that two subgraphs can be connected by a new dependency
...@@ -1680,7 +1684,7 @@ valid_state(struct task_struct *curr, struct held_lock *this, ...@@ -1680,7 +1684,7 @@ valid_state(struct task_struct *curr, struct held_lock *this,
static int mark_lock(struct task_struct *curr, struct held_lock *this, static int mark_lock(struct task_struct *curr, struct held_lock *this,
enum lock_usage_bit new_bit); enum lock_usage_bit new_bit);
#ifdef CONFIG_TRACE_IRQFLAGS #if defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_PROVE_LOCKING)
/* /*
* print irq inversion bug: * print irq inversion bug:
...@@ -2013,11 +2017,13 @@ void early_boot_irqs_on(void) ...@@ -2013,11 +2017,13 @@ void early_boot_irqs_on(void)
/* /*
* Hardirqs will be enabled: * Hardirqs will be enabled:
*/ */
void trace_hardirqs_on(void) void trace_hardirqs_on_caller(unsigned long a0)
{ {
struct task_struct *curr = current; struct task_struct *curr = current;
unsigned long ip; unsigned long ip;
time_hardirqs_on(CALLER_ADDR0, a0);
if (unlikely(!debug_locks || current->lockdep_recursion)) if (unlikely(!debug_locks || current->lockdep_recursion))
return; return;
...@@ -2055,16 +2061,23 @@ void trace_hardirqs_on(void) ...@@ -2055,16 +2061,23 @@ void trace_hardirqs_on(void)
curr->hardirq_enable_event = ++curr->irq_events; curr->hardirq_enable_event = ++curr->irq_events;
debug_atomic_inc(&hardirqs_on_events); debug_atomic_inc(&hardirqs_on_events);
} }
EXPORT_SYMBOL(trace_hardirqs_on_caller);
void trace_hardirqs_on(void)
{
trace_hardirqs_on_caller(CALLER_ADDR0);
}
EXPORT_SYMBOL(trace_hardirqs_on); EXPORT_SYMBOL(trace_hardirqs_on);
/* /*
* Hardirqs were disabled: * Hardirqs were disabled:
*/ */
void trace_hardirqs_off(void) void trace_hardirqs_off_caller(unsigned long a0)
{ {
struct task_struct *curr = current; struct task_struct *curr = current;
time_hardirqs_off(CALLER_ADDR0, a0);
if (unlikely(!debug_locks || current->lockdep_recursion)) if (unlikely(!debug_locks || current->lockdep_recursion))
return; return;
...@@ -2082,7 +2095,12 @@ void trace_hardirqs_off(void) ...@@ -2082,7 +2095,12 @@ void trace_hardirqs_off(void)
} else } else
debug_atomic_inc(&redundant_hardirqs_off); debug_atomic_inc(&redundant_hardirqs_off);
} }
EXPORT_SYMBOL(trace_hardirqs_off_caller);
void trace_hardirqs_off(void)
{
trace_hardirqs_off_caller(CALLER_ADDR0);
}
EXPORT_SYMBOL(trace_hardirqs_off); EXPORT_SYMBOL(trace_hardirqs_off);
/* /*
...@@ -2246,7 +2264,7 @@ static inline int separate_irq_context(struct task_struct *curr, ...@@ -2246,7 +2264,7 @@ static inline int separate_irq_context(struct task_struct *curr,
* Mark a lock with a usage bit, and validate the state transition: * Mark a lock with a usage bit, and validate the state transition:
*/ */
static int mark_lock(struct task_struct *curr, struct held_lock *this, static int mark_lock(struct task_struct *curr, struct held_lock *this,
enum lock_usage_bit new_bit) enum lock_usage_bit new_bit)
{ {
unsigned int new_mask = 1 << new_bit, ret = 1; unsigned int new_mask = 1 << new_bit, ret = 1;
...@@ -2686,7 +2704,7 @@ static void check_flags(unsigned long flags) ...@@ -2686,7 +2704,7 @@ static void check_flags(unsigned long flags)
* and also avoid lockdep recursion: * and also avoid lockdep recursion:
*/ */
void lock_acquire(struct lockdep_map *lock, unsigned int subclass, void lock_acquire(struct lockdep_map *lock, unsigned int subclass,
int trylock, int read, int check, unsigned long ip) int trylock, int read, int check, unsigned long ip)
{ {
unsigned long flags; unsigned long flags;
...@@ -2708,7 +2726,8 @@ void lock_acquire(struct lockdep_map *lock, unsigned int subclass, ...@@ -2708,7 +2726,8 @@ void lock_acquire(struct lockdep_map *lock, unsigned int subclass,
EXPORT_SYMBOL_GPL(lock_acquire); EXPORT_SYMBOL_GPL(lock_acquire);
void lock_release(struct lockdep_map *lock, int nested, unsigned long ip) void lock_release(struct lockdep_map *lock, int nested,
unsigned long ip)
{ {
unsigned long flags; unsigned long flags;
......
...@@ -55,8 +55,8 @@ static DEFINE_MUTEX(markers_mutex); ...@@ -55,8 +55,8 @@ static DEFINE_MUTEX(markers_mutex);
struct marker_entry { struct marker_entry {
struct hlist_node hlist; struct hlist_node hlist;
char *format; char *format;
void (*call)(const struct marker *mdata, /* Probe wrapper */ /* Probe wrapper */
void *call_private, const char *fmt, ...); void (*call)(const struct marker *mdata, void *call_private, ...);
struct marker_probe_closure single; struct marker_probe_closure single;
struct marker_probe_closure *multi; struct marker_probe_closure *multi;
int refcount; /* Number of times armed. 0 if disarmed. */ int refcount; /* Number of times armed. 0 if disarmed. */
...@@ -91,15 +91,13 @@ EXPORT_SYMBOL_GPL(__mark_empty_function); ...@@ -91,15 +91,13 @@ EXPORT_SYMBOL_GPL(__mark_empty_function);
* marker_probe_cb Callback that prepares the variable argument list for probes. * marker_probe_cb Callback that prepares the variable argument list for probes.
* @mdata: pointer of type struct marker * @mdata: pointer of type struct marker
* @call_private: caller site private data * @call_private: caller site private data
* @fmt: format string
* @...: Variable argument list. * @...: Variable argument list.
* *
* Since we do not use "typical" pointer based RCU in the 1 argument case, we * Since we do not use "typical" pointer based RCU in the 1 argument case, we
* need to put a full smp_rmb() in this branch. This is why we do not use * need to put a full smp_rmb() in this branch. This is why we do not use
* rcu_dereference() for the pointer read. * rcu_dereference() for the pointer read.
*/ */
void marker_probe_cb(const struct marker *mdata, void *call_private, void marker_probe_cb(const struct marker *mdata, void *call_private, ...)
const char *fmt, ...)
{ {
va_list args; va_list args;
char ptype; char ptype;
...@@ -120,8 +118,9 @@ void marker_probe_cb(const struct marker *mdata, void *call_private, ...@@ -120,8 +118,9 @@ void marker_probe_cb(const struct marker *mdata, void *call_private,
/* Must read the ptr before private data. They are not data /* Must read the ptr before private data. They are not data
* dependant, so we put an explicit smp_rmb() here. */ * dependant, so we put an explicit smp_rmb() here. */
smp_rmb(); smp_rmb();
va_start(args, fmt); va_start(args, call_private);
func(mdata->single.probe_private, call_private, fmt, &args); func(mdata->single.probe_private, call_private, mdata->format,
&args);
va_end(args); va_end(args);
} else { } else {
struct marker_probe_closure *multi; struct marker_probe_closure *multi;
...@@ -136,9 +135,9 @@ void marker_probe_cb(const struct marker *mdata, void *call_private, ...@@ -136,9 +135,9 @@ void marker_probe_cb(const struct marker *mdata, void *call_private,
smp_read_barrier_depends(); smp_read_barrier_depends();
multi = mdata->multi; multi = mdata->multi;
for (i = 0; multi[i].func; i++) { for (i = 0; multi[i].func; i++) {
va_start(args, fmt); va_start(args, call_private);
multi[i].func(multi[i].probe_private, call_private, fmt, multi[i].func(multi[i].probe_private, call_private,
&args); mdata->format, &args);
va_end(args); va_end(args);
} }
} }
...@@ -150,13 +149,11 @@ EXPORT_SYMBOL_GPL(marker_probe_cb); ...@@ -150,13 +149,11 @@ EXPORT_SYMBOL_GPL(marker_probe_cb);
* marker_probe_cb Callback that does not prepare the variable argument list. * marker_probe_cb Callback that does not prepare the variable argument list.
* @mdata: pointer of type struct marker * @mdata: pointer of type struct marker
* @call_private: caller site private data * @call_private: caller site private data
* @fmt: format string
* @...: Variable argument list. * @...: Variable argument list.
* *
* Should be connected to markers "MARK_NOARGS". * Should be connected to markers "MARK_NOARGS".
*/ */
void marker_probe_cb_noarg(const struct marker *mdata, void marker_probe_cb_noarg(const struct marker *mdata, void *call_private, ...)
void *call_private, const char *fmt, ...)
{ {
va_list args; /* not initialized */ va_list args; /* not initialized */
char ptype; char ptype;
...@@ -172,7 +169,8 @@ void marker_probe_cb_noarg(const struct marker *mdata, ...@@ -172,7 +169,8 @@ void marker_probe_cb_noarg(const struct marker *mdata,
/* Must read the ptr before private data. They are not data /* Must read the ptr before private data. They are not data
* dependant, so we put an explicit smp_rmb() here. */ * dependant, so we put an explicit smp_rmb() here. */
smp_rmb(); smp_rmb();
func(mdata->single.probe_private, call_private, fmt, &args); func(mdata->single.probe_private, call_private, mdata->format,
&args);
} else { } else {
struct marker_probe_closure *multi; struct marker_probe_closure *multi;
int i; int i;
...@@ -186,8 +184,8 @@ void marker_probe_cb_noarg(const struct marker *mdata, ...@@ -186,8 +184,8 @@ void marker_probe_cb_noarg(const struct marker *mdata,
smp_read_barrier_depends(); smp_read_barrier_depends();
multi = mdata->multi; multi = mdata->multi;
for (i = 0; multi[i].func; i++) for (i = 0; multi[i].func; i++)
multi[i].func(multi[i].probe_private, call_private, fmt, multi[i].func(multi[i].probe_private, call_private,
&args); mdata->format, &args);
} }
preempt_enable(); preempt_enable();
} }
......
@@ -1046,7 +1046,9 @@ void release_console_sem(void)
 		_log_end = log_end;
 		con_start = log_end;		/* Flush */
 		spin_unlock(&logbuf_lock);
+		stop_critical_timings();	/* don't trace print latency */
 		call_console_drivers(_con_start, _log_end);
+		start_critical_timings();
 		local_irq_restore(flags);
 	}
 	console_locked = 0;
...
...@@ -70,6 +70,7 @@ ...@@ -70,6 +70,7 @@
#include <linux/bootmem.h> #include <linux/bootmem.h>
#include <linux/debugfs.h> #include <linux/debugfs.h>
#include <linux/ctype.h> #include <linux/ctype.h>
#include <linux/ftrace.h>
#include <asm/tlb.h> #include <asm/tlb.h>
#include <asm/irq_regs.h> #include <asm/irq_regs.h>
...@@ -645,6 +646,24 @@ static inline void update_rq_clock(struct rq *rq) ...@@ -645,6 +646,24 @@ static inline void update_rq_clock(struct rq *rq)
# define const_debug static const # define const_debug static const
#endif #endif
/**
* runqueue_is_locked
*
* Returns true if the current cpu runqueue is locked.
* This interface allows printk to be called with the runqueue lock
* held and know whether or not it is OK to wake up the klogd.
*/
int runqueue_is_locked(void)
{
int cpu = get_cpu();
struct rq *rq = cpu_rq(cpu);
int ret;
ret = spin_is_locked(&rq->lock);
put_cpu();
return ret;
}
/* /*
* Debugging: various feature bits * Debugging: various feature bits
*/ */
...@@ -2318,6 +2337,9 @@ static int try_to_wake_up(struct task_struct *p, unsigned int state, int sync) ...@@ -2318,6 +2337,9 @@ static int try_to_wake_up(struct task_struct *p, unsigned int state, int sync)
success = 1; success = 1;
out_running: out_running:
trace_mark(kernel_sched_wakeup,
"pid %d state %ld ## rq %p task %p rq->curr %p",
p->pid, p->state, rq, p, rq->curr);
check_preempt_curr(rq, p); check_preempt_curr(rq, p);
p->state = TASK_RUNNING; p->state = TASK_RUNNING;
...@@ -2450,6 +2472,9 @@ void wake_up_new_task(struct task_struct *p, unsigned long clone_flags) ...@@ -2450,6 +2472,9 @@ void wake_up_new_task(struct task_struct *p, unsigned long clone_flags)
p->sched_class->task_new(rq, p); p->sched_class->task_new(rq, p);
inc_nr_running(rq); inc_nr_running(rq);
} }
trace_mark(kernel_sched_wakeup_new,
"pid %d state %ld ## rq %p task %p rq->curr %p",
p->pid, p->state, rq, p, rq->curr);
check_preempt_curr(rq, p); check_preempt_curr(rq, p);
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
if (p->sched_class->task_wake_up) if (p->sched_class->task_wake_up)
...@@ -2622,6 +2647,11 @@ context_switch(struct rq *rq, struct task_struct *prev, ...@@ -2622,6 +2647,11 @@ context_switch(struct rq *rq, struct task_struct *prev,
struct mm_struct *mm, *oldmm; struct mm_struct *mm, *oldmm;
prepare_task_switch(rq, prev, next); prepare_task_switch(rq, prev, next);
trace_mark(kernel_sched_schedule,
"prev_pid %d next_pid %d prev_state %ld "
"## rq %p prev %p next %p",
prev->pid, next->pid, prev->state,
rq, prev, next);
mm = next->mm; mm = next->mm;
oldmm = prev->active_mm; oldmm = prev->active_mm;
/* /*
...@@ -4221,26 +4251,44 @@ void scheduler_tick(void) ...@@ -4221,26 +4251,44 @@ void scheduler_tick(void)
#endif #endif
} }
#if defined(CONFIG_PREEMPT) && defined(CONFIG_DEBUG_PREEMPT) #if defined(CONFIG_PREEMPT) && (defined(CONFIG_DEBUG_PREEMPT) || \
defined(CONFIG_PREEMPT_TRACER))
static inline unsigned long get_parent_ip(unsigned long addr)
{
if (in_lock_functions(addr)) {
addr = CALLER_ADDR2;
if (in_lock_functions(addr))
addr = CALLER_ADDR3;
}
return addr;
}
void __kprobes add_preempt_count(int val) void __kprobes add_preempt_count(int val)
{ {
#ifdef CONFIG_DEBUG_PREEMPT
/* /*
* Underflow? * Underflow?
*/ */
if (DEBUG_LOCKS_WARN_ON((preempt_count() < 0))) if (DEBUG_LOCKS_WARN_ON((preempt_count() < 0)))
return; return;
#endif
preempt_count() += val; preempt_count() += val;
#ifdef CONFIG_DEBUG_PREEMPT
/* /*
* Spinlock count overflowing soon? * Spinlock count overflowing soon?
*/ */
DEBUG_LOCKS_WARN_ON((preempt_count() & PREEMPT_MASK) >= DEBUG_LOCKS_WARN_ON((preempt_count() & PREEMPT_MASK) >=
PREEMPT_MASK - 10); PREEMPT_MASK - 10);
#endif
if (preempt_count() == val)
trace_preempt_off(CALLER_ADDR0, get_parent_ip(CALLER_ADDR1));
} }
EXPORT_SYMBOL(add_preempt_count); EXPORT_SYMBOL(add_preempt_count);
void __kprobes sub_preempt_count(int val) void __kprobes sub_preempt_count(int val)
{ {
#ifdef CONFIG_DEBUG_PREEMPT
/* /*
* Underflow? * Underflow?
*/ */
...@@ -4252,7 +4300,10 @@ void __kprobes sub_preempt_count(int val) ...@@ -4252,7 +4300,10 @@ void __kprobes sub_preempt_count(int val)
if (DEBUG_LOCKS_WARN_ON((val < PREEMPT_MASK) && if (DEBUG_LOCKS_WARN_ON((val < PREEMPT_MASK) &&
!(preempt_count() & PREEMPT_MASK))) !(preempt_count() & PREEMPT_MASK)))
return; return;
#endif
if (preempt_count() == val)
trace_preempt_on(CALLER_ADDR0, get_parent_ip(CALLER_ADDR1));
preempt_count() -= val; preempt_count() -= val;
} }
EXPORT_SYMBOL(sub_preempt_count); EXPORT_SYMBOL(sub_preempt_count);
...@@ -5566,7 +5617,7 @@ long sys_sched_rr_get_interval(pid_t pid, struct timespec __user *interval) ...@@ -5566,7 +5617,7 @@ long sys_sched_rr_get_interval(pid_t pid, struct timespec __user *interval)
return retval; return retval;
} }
static const char stat_nam[] = "RSDTtZX"; static const char stat_nam[] = TASK_STATE_TO_CHAR_STR;
void sched_show_task(struct task_struct *p) void sched_show_task(struct task_struct *p)
{ {
......
@@ -31,6 +31,7 @@
 #include <linux/sched.h>
 #include <linux/semaphore.h>
 #include <linux/spinlock.h>
+#include <linux/ftrace.h>
 
 static noinline void __down(struct semaphore *sem);
 static noinline int __down_interruptible(struct semaphore *sem);
...
@@ -436,7 +436,7 @@ int __lockfunc _spin_trylock_bh(spinlock_t *lock)
 }
 EXPORT_SYMBOL(_spin_trylock_bh);
 
-int in_lock_functions(unsigned long addr)
+notrace int in_lock_functions(unsigned long addr)
 {
 	/* Linker adds these: start and end of __lockfunc functions */
 	extern char __lock_text_start[], __lock_text_end[];
...
@@ -46,6 +46,7 @@
 #include <linux/nfs_fs.h>
 #include <linux/acpi.h>
 #include <linux/reboot.h>
+#include <linux/ftrace.h>
 
 #include <asm/uaccess.h>
 #include <asm/processor.h>
@@ -463,6 +464,16 @@ static struct ctl_table kern_table[] = {
 		.mode		= 0644,
 		.proc_handler	= &proc_dointvec,
 	},
+#ifdef CONFIG_FTRACE
+	{
+		.ctl_name	= CTL_UNNUMBERED,
+		.procname	= "ftrace_enabled",
+		.data		= &ftrace_enabled,
+		.maxlen		= sizeof(int),
+		.mode		= 0644,
+		.proc_handler	= &ftrace_enable_sysctl,
+	},
+#endif
 #ifdef CONFIG_KMOD
 	{
 		.ctl_name	= KERN_MODPROBE,
...
#
# Architectures that offer an FTRACE implementation should select HAVE_FTRACE:
#
config HAVE_FTRACE
bool
config HAVE_DYNAMIC_FTRACE
bool
config TRACER_MAX_TRACE
bool
config TRACING
bool
select DEBUG_FS
select STACKTRACE
config FTRACE
bool "Kernel Function Tracer"
depends on HAVE_FTRACE
select FRAME_POINTER
select TRACING
select CONTEXT_SWITCH_TRACER
help
Enable the kernel to trace every kernel function. This is done
by using a compiler feature to insert a small, 5-byte No-Operation
instruction to the beginning of every kernel function, which NOP
sequence is then dynamically patched into a tracer call when
tracing is enabled by the administrator. If it's runtime disabled
(the bootup default), then the overhead of the instructions is very
small and not measurable even in micro-benchmarks.
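Conceptually, this option builds the kernel with -pg, so every function starts with a call to mcount(); at runtime that call site is either left as (or patched to) a NOP or routed to the callback registered through register_ftrace_function(). Roughly, and as an illustration only rather than actual generated code:

/* what the compiler effectively emits for each function under -pg */
void some_kernel_function(void)
{
	mcount();	/* a 5-byte call on x86, NOP-ed out while tracing is off */

	/* original function body follows */
}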
config IRQSOFF_TRACER
bool "Interrupts-off Latency Tracer"
default n
depends on TRACE_IRQFLAGS_SUPPORT
depends on GENERIC_TIME
depends on HAVE_FTRACE
select TRACE_IRQFLAGS
select TRACING
select TRACER_MAX_TRACE
help
This option measures the time spent in irqs-off critical
sections, with microsecond accuracy.
The default measurement method is a maximum search, which is
disabled by default and can be runtime (re-)started
via:
echo 0 > /debugfs/tracing/tracing_max_latency
(Note that kernel size and overhead increases with this option
enabled. This option and the preempt-off timing option can be
used together or separately.)
config PREEMPT_TRACER
bool "Preemption-off Latency Tracer"
default n
depends on GENERIC_TIME
depends on PREEMPT
depends on HAVE_FTRACE
select TRACING
select TRACER_MAX_TRACE
help
This option measures the time spent in preemption off critical
sections, with microsecond accuracy.
The default measurement method is a maximum search, which is
disabled by default and can be runtime (re-)started
via:
echo 0 > /debugfs/tracing/tracing_max_latency
(Note that kernel size and overhead increases with this option
enabled. This option and the irqs-off timing option can be
used together or separately.)
config SYSPROF_TRACER
bool "Sysprof Tracer"
depends on X86
select TRACING
help
This tracer provides the trace needed by the 'Sysprof' userspace
tool.
config SCHED_TRACER
bool "Scheduling Latency Tracer"
depends on HAVE_FTRACE
select TRACING
select CONTEXT_SWITCH_TRACER
select TRACER_MAX_TRACE
help
This tracer tracks the latency of the highest priority task
to be scheduled in, starting from the point it has woken up.
config CONTEXT_SWITCH_TRACER
bool "Trace process context switches"
depends on HAVE_FTRACE
select TRACING
select MARKERS
help
This tracer gets called from the context switch and records
all switching of tasks.
config DYNAMIC_FTRACE
bool "enable/disable ftrace tracepoints dynamically"
depends on FTRACE
depends on HAVE_DYNAMIC_FTRACE
default y
help
This option will modify all the calls to ftrace dynamically
(it will patch them out of the binary image and replace them
with a No-Op instruction) as they are called. A table is
created to dynamically enable them again.
This way a CONFIG_FTRACE kernel is slightly larger, but otherwise
has native performance as long as no tracing is active.
The changes to the code are done by a kernel thread that
wakes up once a second and checks to see if any ftrace calls
were made. If so, it runs stop_machine (stops all CPUS)
and modifies the code to jump over the call to ftrace.
config FTRACE_SELFTEST
bool
config FTRACE_STARTUP_TEST
bool "Perform a startup test on ftrace"
depends on TRACING
select FTRACE_SELFTEST
help
This option performs a series of startup tests on ftrace. On bootup
a series of tests are made to verify that the tracer is
functioning properly. It will do tests on all the configured
tracers of ftrace.
# Do not instrument the tracer itself:

ifdef CONFIG_FTRACE
ORIG_CFLAGS := $(KBUILD_CFLAGS)
KBUILD_CFLAGS = $(subst -pg,,$(ORIG_CFLAGS))

# selftest needs instrumentation
CFLAGS_trace_selftest_dynamic.o = -pg
obj-y += trace_selftest_dynamic.o
endif

obj-$(CONFIG_FTRACE) += libftrace.o

obj-$(CONFIG_TRACING) += trace.o
obj-$(CONFIG_CONTEXT_SWITCH_TRACER) += trace_sched_switch.o
obj-$(CONFIG_SYSPROF_TRACER) += trace_sysprof.o
obj-$(CONFIG_FTRACE) += trace_functions.o
obj-$(CONFIG_IRQSOFF_TRACER) += trace_irqsoff.o
obj-$(CONFIG_PREEMPT_TRACER) += trace_irqsoff.o
obj-$(CONFIG_SCHED_TRACER) += trace_sched_wakeup.o
obj-$(CONFIG_MMIOTRACE) += trace_mmiotrace.o

libftrace-y := ftrace.o
/*
* ring buffer based function tracer
*
* Copyright (C) 2007-2008 Steven Rostedt <srostedt@redhat.com>
* Copyright (C) 2008 Ingo Molnar <mingo@redhat.com>
*
* Based on code from the latency_tracer, that is:
*
* Copyright (C) 2004-2006 Ingo Molnar
* Copyright (C) 2004 William Lee Irwin III
*/
#include <linux/debugfs.h>
#include <linux/uaccess.h>
#include <linux/ftrace.h>
#include <linux/fs.h>

#include "trace.h"

/* Reset the per-CPU buffers and remember when this trace run started. */
static void function_reset(struct trace_array *tr)
{
	int cpu;

	tr->time_start = ftrace_now(tr->cpu);

	for_each_online_cpu(cpu)
		tracing_reset(tr->data[cpu]);
}

static void start_function_trace(struct trace_array *tr)
{
	tr->cpu = get_cpu();
	function_reset(tr);
	put_cpu();

	tracing_start_cmdline_record();
	tracing_start_function_trace();
}

static void stop_function_trace(struct trace_array *tr)
{
	tracing_stop_function_trace();
	tracing_stop_cmdline_record();
}

static void function_trace_init(struct trace_array *tr)
{
	if (tr->ctrl)
		start_function_trace(tr);
}

static void function_trace_reset(struct trace_array *tr)
{
	if (tr->ctrl)
		stop_function_trace(tr);
}

/* Called when the user flips tracing on or off for this tracer. */
static void function_trace_ctrl_update(struct trace_array *tr)
{
	if (tr->ctrl)
		start_function_trace(tr);
	else
		stop_function_trace(tr);
}

static struct tracer function_trace __read_mostly =
{
	.name	     = "ftrace",
	.init	     = function_trace_init,
	.reset	     = function_trace_reset,
	.ctrl_update = function_trace_ctrl_update,
#ifdef CONFIG_FTRACE_SELFTEST
	.selftest    = trace_selftest_startup_function,
#endif
};

static __init int init_function_trace(void)
{
	return register_tracer(&function_trace);
}

device_initcall(init_function_trace);
#include "trace.h"
int DYN_FTRACE_TEST_NAME(void)
{
/* used to call mcount */
return 0;
}
@@ -634,6 +634,8 @@ config LATENCYTOP
 	  Enable this option if you want to use the LatencyTOP tool
 	  to find out which userspace is blocking on what kernel operations.
 
+source kernel/trace/Kconfig
+
 config PROVIDE_OHCI1394_DMA_INIT
 	bool "Remote debugging over FireWire early on boot"
 	depends on PCI && X86
...
@@ -8,6 +8,15 @@ lib-y := ctype.o string.o vsprintf.o cmdline.o \
 	 sha1.o irq_regs.o reciprocal_div.o argv_split.o \
 	 proportions.o prio_heap.o ratelimit.o
 
+ifdef CONFIG_FTRACE
+# Do not profile string.o, since it may be used in early boot or vdso
+CFLAGS_REMOVE_string.o = -pg
+# Also do not profile any debug utilities
+CFLAGS_REMOVE_spinlock_debug.o = -pg
+CFLAGS_REMOVE_list_debug.o = -pg
+CFLAGS_REMOVE_debugobjects.o = -pg
+endif
+
 lib-$(CONFIG_MMU) += ioremap.o
 lib-$(CONFIG_SMP) += cpumask.o
...
@@ -7,7 +7,7 @@
 #include <linux/kallsyms.h>
 #include <linux/sched.h>
 
-unsigned int debug_smp_processor_id(void)
+notrace unsigned int debug_smp_processor_id(void)
 {
 	unsigned long preempt_count = preempt_count();
 	int this_cpu = raw_smp_processor_id();
@@ -37,7 +37,7 @@ unsigned int debug_smp_processor_id(void)
 	/*
 	 * Avoid recursion:
 	 */
-	preempt_disable();
+	preempt_disable_notrace();
 
 	if (!printk_ratelimit())
 		goto out_enable;
@@ -49,7 +49,7 @@ unsigned int debug_smp_processor_id(void)
 	dump_stack();
 
 out_enable:
-	preempt_enable_no_resched();
+	preempt_enable_no_resched_notrace();
 out:
 	return this_cpu;
 }
...
@@ -126,8 +126,6 @@ static void background_writeout(unsigned long _min_pages);
 static struct prop_descriptor vm_completions;
 static struct prop_descriptor vm_dirties;
 
-static unsigned long determine_dirtyable_memory(void);
-
 /*
  * couple the period to the dirty_ratio:
  *
@@ -347,7 +345,13 @@ static unsigned long highmem_dirtyable_memory(unsigned long total)
 #endif
 }
 
-static unsigned long determine_dirtyable_memory(void)
+/**
+ * determine_dirtyable_memory - amount of memory that may be used
+ *
+ * Returns the number of pages that can currently be freed and used
+ * by the kernel for direct mappings.
+ */
+unsigned long determine_dirtyable_memory(void)
 {
 	unsigned long x;
...
@@ -96,7 +96,8 @@ basename_flags = -D"KBUILD_BASENAME=KBUILD_STR($(call name-fix,$(basetarget)))"
 modname_flags  = $(if $(filter 1,$(words $(modname))),\
 		 -D"KBUILD_MODNAME=KBUILD_STR($(call name-fix,$(modname)))")
 
-_c_flags       = $(KBUILD_CFLAGS) $(ccflags-y) $(CFLAGS_$(basetarget).o)
+orig_c_flags   = $(KBUILD_CFLAGS) $(ccflags-y) $(CFLAGS_$(basetarget).o)
+_c_flags       = $(filter-out $(CFLAGS_REMOVE_$(basetarget).o), $(orig_c_flags))
 _a_flags       = $(KBUILD_AFLAGS) $(asflags-y) $(AFLAGS_$(basetarget).o)
 _cpp_flags     = $(KBUILD_CPPFLAGS) $(cppflags-y) $(CPPFLAGS_$(@F))
...