Commit e7f2f991 authored by Linus Torvalds

Merge phase #5 (misc) of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip

Merges oprofile, timers/hpet, x86/traps, x86/time, and x86/core misc items.

* 'x86-core-v4-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (132 commits)
  x86: change early_ioremap to use slots instead of nesting
  x86: adjust dependencies for CONFIG_X86_CMOV
  dumpstack: x86: various small unification steps, fix
  x86: remove additional_cpus
  x86: remove additional_cpus configurability
  x86: improve UP kernel when CPU-hotplug and SMP is enabled
  dumpstack: x86: various small unification steps
  dumpstack: i386: make kstack= an early boot-param and add oops=panic
  dumpstack: x86: use log_lvl and unify trace formatting
  dumptrace: x86: consistently include loglevel, print stack switch
  dumpstack: x86: add "end" parameter to valid_stack_ptr and print_context_stack
  dumpstack: x86: make printk_address equal
  dumpstack: x86: move die_nmi to dumpstack_32.c
  traps: x86: finalize unification of traps.c
  traps: x86: make traps_32.c and traps_64.c equal
  traps: x86: various noop-changes preparing for unification of traps_xx.c
  traps: x86_64: use task_pid_nr(tsk) instead of tsk->pid in do_general_protection
  traps: i386: expand clear_mem_error and remove from mach_traps.h
  traps: x86_64: make io_check_error equal to the one on i386
  traps: i386: use preempt_conditional_sti/cli in do_int3
  ...
parents 11a96d18 c00193f9
...@@ -159,8 +159,6 @@ hayes-esp.txt
	- info on using the Hayes ESP serial driver.
highuid.txt
	- notes on the change from 16 bit to 32 bit user/group IDs.
-hpet.txt
-	- High Precision Event Timer Driver for Linux.
timers/
	- info on the timer related topics
hw_random.txt
......
00-INDEX
- this file
highres.txt
- High resolution timers and dynamic ticks design notes
hpet.txt
- High Precision Event Timer Driver for Linux
hrtimers.txt
- subsystem for high-resolution kernel timers
timer_stats.txt
- timer usage statistics
High Precision Event Timer Driver for Linux

-The High Precision Event Timer (HPET) hardware is the future replacement
-for the 8254 and Real Time Clock (RTC) periodic timer functionality.
-Each HPET can have up to 32 timers. It is possible to configure the
-first two timers as legacy replacements for 8254 and RTC periodic timers.
-A specification done by Intel and Microsoft can be found at
-<http://www.intel.com/technology/architecture/hpetspec.htm>.
+The High Precision Event Timer (HPET) hardware follows a specification
+by Intel and Microsoft which can be found at
+
+	http://www.intel.com/technology/architecture/hpetspec.htm
+
+Each HPET has one fixed-rate counter (at 10+ MHz, hence "High Precision")
+and up to 32 comparators. Normally three or more comparators are provided,
+each of which can generate oneshot interrupts and at least one of which has
+additional hardware to support periodic interrupts. The comparators are
+also called "timers", which can be misleading since usually timers are
+independent of each other ... these share a counter, complicating resets.
+
+HPET devices can support two interrupt routing modes. In one mode, the
+comparators are additional interrupt sources with no particular system
+role. Many x86 BIOS writers don't route HPET interrupts at all, which
+prevents use of that mode. They support the other "legacy replacement"
+mode where the first two comparators block interrupts from 8254 timers
+and from the RTC.

The driver supports detection of HPET driver allocation and initialization
of the HPET before the driver module_init routine is called. This enables
platform code which uses timer 0 or 1 as the main timer to intercept HPET
initialization. An example of this initialization can be found in
-arch/i386/kernel/time_hpet.c.
+arch/x86/kernel/hpet.c.

-The driver provides two APIs which are very similar to the API found in
-the rtc.c driver. There is a user space API and a kernel space API.
-An example user space program is provided below.
+The driver provides a userspace API which resembles the API found in the
+RTC driver framework. An example user space program is provided below.

#include <stdio.h>
#include <stdlib.h>
...@@ -286,15 +297,3 @@ out:
	return;
}
-The kernel API has three interfaces exported from the driver:
-
-	hpet_register(struct hpet_task *tp, int periodic)
-	hpet_unregister(struct hpet_task *tp)
-	hpet_control(struct hpet_task *tp, unsigned int cmd, unsigned long arg)
-
-The kernel module using this interface fills in the ht_func and ht_data
-members of the hpet_task structure before calling hpet_register.
-
-hpet_control simply vectors to the hpet_ioctl routine and has the same
-commands and respective arguments as the user API. hpet_unregister
-is used to terminate usage of the HPET timer reserved by hpet_register.
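For reference, a minimal sketch of the surviving user-space API; the /dev/hpet node and the HPET_INFO, HPET_IRQFREQ and HPET_IE_ON/HPET_IE_OFF ioctls are assumed from the mainline <linux/hpet.h> rather than quoted from this diff, and error handling is trimmed:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/hpet.h>

int main(void)
{
	struct hpet_info info;
	unsigned long data;
	int fd = open("/dev/hpet", O_RDONLY);

	if (fd < 0)
		return 1;
	if (ioctl(fd, HPET_IRQFREQ, 32) < 0)	/* ask for 32 interrupts/sec */
		perror("HPET_IRQFREQ");
	if (ioctl(fd, HPET_INFO, &info) == 0)
		printf("timer %d, flags 0x%lx\n", info.hi_timer, info.hi_flags);
	if (ioctl(fd, HPET_IE_ON, 0) == 0) {
		read(fd, &data, sizeof(data));	/* blocks until an interrupt */
		ioctl(fd, HPET_IE_OFF, 0);
	}
	close(fd);
	return 0;
}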
...@@ -13,6 +13,20 @@ config OPROFILE
	  If unsure, say N.
+config OPROFILE_IBS
+	bool "OProfile AMD IBS support (EXPERIMENTAL)"
+	default n
+	depends on OPROFILE && SMP && X86
+	help
+	  Instruction-Based Sampling (IBS) is a new profiling
+	  technique that provides rich, precise program performance
+	  information. IBS is introduced by AMD Family10h processors
+	  (AMD Opteron Quad-Core processor “Barcelona”) to overcome
+	  the limitations of conventional performance counter
+	  sampling.
+
+	  If unsure, say N.
config HAVE_OPROFILE
	def_bool n
......
...@@ -38,8 +38,7 @@ config M386
	  - "Crusoe" for the Transmeta Crusoe series.
	  - "Efficeon" for the Transmeta Efficeon series.
	  - "Winchip-C6" for original IDT Winchip.
-	  - "Winchip-2" for IDT Winchip 2.
-	  - "Winchip-2A" for IDT Winchips with 3dNow! capabilities.
+	  - "Winchip-2" for IDT Winchips with 3dNow! capabilities.
	  - "GeodeGX1" for Geode GX1 (Cyrix MediaGX).
	  - "Geode GX/LX" For AMD Geode GX and LX processors.
	  - "CyrixIII/VIA C3" for VIA Cyrix III or VIA C3.
...@@ -194,19 +193,11 @@ config MWINCHIPC6
	  treat this chip as a 586TSC with some extended instructions
	  and alignment requirements.
-config MWINCHIP2
-	bool "Winchip-2"
-	depends on X86_32
-	help
-	  Select this for an IDT Winchip-2. Linux and GCC
-	  treat this chip as a 586TSC with some extended instructions
-	  and alignment requirements.
config MWINCHIP3D
-	bool "Winchip-2A/Winchip-3"
+	bool "Winchip-2/Winchip-2A/Winchip-3"
	depends on X86_32
	help
-	  Select this for an IDT Winchip-2A or 3. Linux and GCC
+	  Select this for an IDT Winchip-2, 2A or 3. Linux and GCC
	  treat this chip as a 586TSC with some extended instructions
	  and alignment requirements. Also enable out of order memory
	  stores for this CPU, which can increase performance of some
...@@ -318,7 +309,7 @@ config X86_L1_CACHE_SHIFT
	int
	default "7" if MPENTIUM4 || X86_GENERIC || GENERIC_CPU || MPSC
	default "4" if X86_ELAN || M486 || M386 || MGEODEGX1
-	default "5" if MWINCHIP3D || MWINCHIP2 || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
+	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MVIAC7
config X86_XADD
...@@ -360,7 +351,7 @@ config X86_POPAD_OK
config X86_ALIGNMENT_16
	def_bool y
-	depends on MWINCHIP3D || MWINCHIP2 || MWINCHIPC6 || MCYRIXIII || X86_ELAN || MK6 || M586MMX || M586TSC || M586 || M486 || MVIAC3_2 || MGEODEGX1
+	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || X86_ELAN || MK6 || M586MMX || M586TSC || M586 || M486 || MVIAC3_2 || MGEODEGX1
config X86_INTEL_USERCOPY
	def_bool y
...@@ -368,7 +359,7 @@ config X86_INTEL_USERCOPY
config X86_USE_PPRO_CHECKSUM
	def_bool y
-	depends on MWINCHIP3D || MWINCHIP2 || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MEFFICEON || MGEODE_LX || MCORE2
+	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MEFFICEON || MGEODE_LX || MCORE2
config X86_USE_3DNOW
	def_bool y
...@@ -376,7 +367,7 @@ config X86_USE_3DNOW
config X86_OOSTORE
	def_bool y
-	depends on (MWINCHIP3D || MWINCHIP2 || MWINCHIPC6) && MTRR
+	depends on (MWINCHIP3D || MWINCHIPC6) && MTRR
#
# P6_NOPs are a relatively minor optimization that require a family >=
...@@ -396,7 +387,7 @@ config X86_P6_NOP
config X86_TSC
	def_bool y
-	depends on ((MWINCHIP3D || MWINCHIP2 || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2) && !X86_NUMAQ) || X86_64
+	depends on ((MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2) && !X86_NUMAQ) || X86_64
config X86_CMPXCHG64
	def_bool y
...@@ -406,7 +397,7 @@ config X86_CMPXCHG64
# generates cmov.
config X86_CMOV
	def_bool y
-	depends on (MK7 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || X86_64)
+	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64)
config X86_MINIMUM_CPU_FAMILY
	int
...@@ -417,7 +408,7 @@ config X86_MINIMUM_CPU_FAMILY
config X86_DEBUGCTLMSR
	def_bool y
-	depends on !(MK6 || MWINCHIPC6 || MWINCHIP2 || MWINCHIP3D || MCYRIXIII || M586MMX || M586TSC || M586 || M486 || M386)
+	depends on !(MK6 || MWINCHIPC6 || MWINCHIP3D || MCYRIXIII || M586MMX || M586TSC || M586 || M486 || M386)
menuconfig PROCESSOR_SELECT
	bool "Supported processor vendors" if EMBEDDED
......
...@@ -28,7 +28,6 @@ cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8,-march=athlon)
cflags-$(CONFIG_MCRUSOE)	+= -march=i686 $(align)-functions=0 $(align)-jumps=0 $(align)-loops=0
cflags-$(CONFIG_MEFFICEON)	+= -march=i686 $(call tune,pentium3) $(align)-functions=0 $(align)-jumps=0 $(align)-loops=0
cflags-$(CONFIG_MWINCHIPC6)	+= $(call cc-option,-march=winchip-c6,-march=i586)
-cflags-$(CONFIG_MWINCHIP2)	+= $(call cc-option,-march=winchip2,-march=i586)
cflags-$(CONFIG_MWINCHIP3D)	+= $(call cc-option,-march=winchip2,-march=i586)
cflags-$(CONFIG_MCYRIXIII)	+= $(call cc-option,-march=c3,-march=i486) $(align)-functions=0 $(align)-jumps=0 $(align)-loops=0
cflags-$(CONFIG_MVIAC3_2)	+= $(call cc-option,-march=c3-2,-march=i686)
......
...@@ -213,7 +213,6 @@ CONFIG_M686=y
# CONFIG_MCRUSOE is not set
# CONFIG_MEFFICEON is not set
# CONFIG_MWINCHIPC6 is not set
-# CONFIG_MWINCHIP2 is not set
# CONFIG_MWINCHIP3D is not set
# CONFIG_MGEODEGX1 is not set
# CONFIG_MGEODE_LX is not set
......
...@@ -210,7 +210,6 @@ CONFIG_X86_PC=y
# CONFIG_MCRUSOE is not set
# CONFIG_MEFFICEON is not set
# CONFIG_MWINCHIPC6 is not set
-# CONFIG_MWINCHIP2 is not set
# CONFIG_MWINCHIP3D is not set
# CONFIG_MGEODEGX1 is not set
# CONFIG_MGEODE_LX is not set
......
...@@ -39,11 +39,11 @@
	.endm
	/* clobbers %eax */
-	.macro CLEAR_RREGS
+	.macro CLEAR_RREGS _r9=rax
	xorl	%eax,%eax
	movq	%rax,R11(%rsp)
	movq	%rax,R10(%rsp)
-	movq	%rax,R9(%rsp)
+	movq	%\_r9,R9(%rsp)
	movq	%rax,R8(%rsp)
	.endm
...@@ -52,11 +52,10 @@
 * We don't reload %eax because syscall_trace_enter() returned
 * the value it wants us to use in the table lookup.
 */
-	.macro LOAD_ARGS32 offset
-	movl \offset(%rsp),%r11d
-	movl \offset+8(%rsp),%r10d
+	.macro LOAD_ARGS32 offset, _r9=0
+	.if \_r9
	movl \offset+16(%rsp),%r9d
-	movl \offset+24(%rsp),%r8d
+	.endif
	movl \offset+40(%rsp),%ecx
	movl \offset+48(%rsp),%edx
	movl \offset+56(%rsp),%esi
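As an aside, the rewritten macros lean on two GAS features: a macro parameter with a default value and conditional assembly via .if. A stand-alone sketch (LOAD_R9 is an invented name for illustration only):

	.macro LOAD_R9 offset, _r9=0	/* _r9 defaults to 0 when omitted */
	.if \_r9			/* body assembled only for _r9=1 */
	movl \offset+16(%rsp),%r9d
	.endif
	.endm

	LOAD_R9 0			/* expands to nothing */
	LOAD_R9 0, 1			/* emits the movl */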
...@@ -145,7 +144,7 @@ ENTRY(ia32_sysenter_target)
	SAVE_ARGS 0,0,1
	/* no need to do an access_ok check here because rbp has been
	   32bit zero extended */
-1:	movl	(%rbp),%r9d
+1:	movl	(%rbp),%ebp
	.section __ex_table,"a"
	.quad 1b,ia32_badarg
	.previous
...@@ -157,7 +156,7 @@ ENTRY(ia32_sysenter_target)
	cmpl	$(IA32_NR_syscalls-1),%eax
	ja	ia32_badsys
sysenter_do_call:
-	IA32_ARG_FIXUP 1
+	IA32_ARG_FIXUP
sysenter_dispatch:
	call	*ia32_sys_call_table(,%rax,8)
	movq	%rax,RAX-ARGOFFSET(%rsp)
...@@ -234,20 +233,17 @@ sysexit_audit:
#endif
sysenter_tracesys:
-	xchgl	%r9d,%ebp
#ifdef CONFIG_AUDITSYSCALL
	testl	$(_TIF_WORK_SYSCALL_ENTRY & ~_TIF_SYSCALL_AUDIT),TI_flags(%r10)
	jz	sysenter_auditsys
#endif
	SAVE_REST
	CLEAR_RREGS
-	movq	%r9,R9(%rsp)
	movq	$-ENOSYS,RAX(%rsp)	/* ptrace can change this for a bad syscall */
	movq	%rsp,%rdi		/* &pt_regs -> arg1 */
	call	syscall_trace_enter
	LOAD_ARGS32 ARGOFFSET		/* reload args from stack in case ptrace changed it */
	RESTORE_REST
-	xchgl	%ebp,%r9d
	cmpl	$(IA32_NR_syscalls-1),%eax
	ja	int_ret_from_sys_call	/* sysenter_tracesys has set RAX(%rsp) */
	jmp	sysenter_do_call
...@@ -314,9 +310,9 @@ ENTRY(ia32_cstar_target)
	testl	$_TIF_WORK_SYSCALL_ENTRY,TI_flags(%r10)
	CFI_REMEMBER_STATE
	jnz	cstar_tracesys
-cstar_do_call:
	cmpl	$IA32_NR_syscalls-1,%eax
	ja	ia32_badsys
+cstar_do_call:
	IA32_ARG_FIXUP 1
cstar_dispatch:
	call	*ia32_sys_call_table(,%rax,8)
...@@ -357,15 +353,13 @@ cstar_tracesys:
#endif
	xchgl	%r9d,%ebp
	SAVE_REST
-	CLEAR_RREGS
-	movq	%r9,R9(%rsp)
+	CLEAR_RREGS r9
	movq	$-ENOSYS,RAX(%rsp)	/* ptrace can change this for a bad syscall */
	movq	%rsp,%rdi		/* &pt_regs -> arg1 */
	call	syscall_trace_enter
-	LOAD_ARGS32 ARGOFFSET		/* reload args from stack in case ptrace changed it */
+	LOAD_ARGS32 ARGOFFSET, 1	/* reload args from stack in case ptrace changed it */
	RESTORE_REST
	xchgl	%ebp,%r9d
-	movl	RSP-ARGOFFSET(%rsp), %r8d
	cmpl	$(IA32_NR_syscalls-1),%eax
	ja	int_ret_from_sys_call	/* cstar_tracesys has set RAX(%rsp) */
	jmp	cstar_do_call
......
...@@ -23,7 +23,7 @@ CFLAGS_hpet.o := $(nostackp)
CFLAGS_tsc.o := $(nostackp)
obj-y			:= process_$(BITS).o signal_$(BITS).o entry_$(BITS).o
-obj-y			+= traps_$(BITS).o irq_$(BITS).o
+obj-y			+= traps.o irq_$(BITS).o dumpstack_$(BITS).o
obj-y			+= time_$(BITS).o ioport.o ldt.o
obj-y			+= setup.o i8259.o irqinit_$(BITS).o setup_percpu.o
obj-$(CONFIG_X86_VISWS)	+= visws_quirks.o
......
...@@ -444,7 +444,7 @@ void __init alternative_instructions(void)
						_text, _etext);
	/* Only switch to UP mode if we don't immediately boot others */
-	if (num_possible_cpus() == 1 || setup_max_cpus <= 1)
+	if (num_present_cpus() == 1 || setup_max_cpus <= 1)
		alternatives_smp_switch(0);
}
#endif
......
...@@ -295,6 +295,9 @@ static void __setup_APIC_LVTT(unsigned int clocks, int oneshot, int irqen)
 *
 * Vector mappings are hard coded. On K8 only offset 0 (APIC500) and
 * MCE interrupts are supported. Thus MCE offset must be set to 0.
+ *
+ * If mask=1, the LVT entry does not generate interrupts while mask=0
+ * enables the vector. See also the BKDGs.
 */
#define APIC_EILVT_LVTOFF_MCE 0
...@@ -319,6 +322,7 @@ u8 setup_APIC_eilvt_ibs(u8 vector, u8 msg_type, u8 mask)
	setup_APIC_eilvt(APIC_EILVT_LVTOFF_IBS, vector, msg_type, mask);
	return APIC_EILVT_LVTOFF_IBS;
}
+EXPORT_SYMBOL_GPL(setup_APIC_eilvt_ibs);
/*
 * Program the next event, relative to now
......
...@@ -307,6 +307,9 @@ static void __setup_APIC_LVTT(unsigned int clocks, int oneshot, int irqen)
 *
 * Vector mappings are hard coded. On K8 only offset 0 (APIC500) and
 * MCE interrupts are supported. Thus MCE offset must be set to 0.
+ *
+ * If mask=1, the LVT entry does not generate interrupts while mask=0
+ * enables the vector. See also the BKDGs.
 */
#define APIC_EILVT_LVTOFF_MCE 0
...@@ -331,6 +334,7 @@ u8 setup_APIC_eilvt_ibs(u8 vector, u8 msg_type, u8 mask)
	setup_APIC_eilvt(APIC_EILVT_LVTOFF_IBS, vector, msg_type, mask);
	return APIC_EILVT_LVTOFF_IBS;
}
+EXPORT_SYMBOL_GPL(setup_APIC_eilvt_ibs);
/*
 * Program the next event, relative to now
......
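With setup_APIC_eilvt_ibs() now exported, a module such as the OProfile AMD driver can program the IBS LVT entry itself. A hedged sketch of a caller; APIC_EILVT_MSG_FIX is assumed from <asm/apicdef.h> and the surrounding driver plumbing is omitted:

#include <asm/apic.h>

static u8 ibs_lvt_offset;

static void ibs_irq_enable(u8 vector)
{
	/* mask = 0: the LVT entry delivers interrupts on 'vector' */
	ibs_lvt_offset = setup_APIC_eilvt_ibs(vector, APIC_EILVT_MSG_FIX, 0);
}

static void ibs_irq_disable(void)
{
	/* mask = 1: the LVT entry is masked and generates no interrupts */
	setup_APIC_eilvt_ibs(0, APIC_EILVT_MSG_FIX, 1);
}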
...@@ -124,7 +124,14 @@ static inline int flag_is_changeable_p(u32 flag)
{
	u32 f1, f2;
-	asm("pushfl\n\t"
+	/*
+	 * Cyrix and IDT cpus allow disabling of CPUID
+	 * so the code below may return different results
+	 * when it is executed before and after enabling
+	 * the CPUID. Add "volatile" to not allow gcc to
+	 * optimize the subsequent calls to this function.
+	 */
+	asm volatile ("pushfl\n\t"
		      "pushfl\n\t"
		      "popl %0\n\t"
		      "movl %0,%1\n\t"
...@@ -719,12 +726,24 @@ static void __cpuinit identify_cpu(struct cpuinfo_x86 *c)
#endif
}
+#ifdef CONFIG_X86_64
+static void vgetcpu_set_mode(void)
+{
+	if (cpu_has(&boot_cpu_data, X86_FEATURE_RDTSCP))
+		vgetcpu_mode = VGETCPU_RDTSCP;
+	else
+		vgetcpu_mode = VGETCPU_LSL;
+}
+#endif
void __init identify_boot_cpu(void)
{
	identify_cpu(&boot_cpu_data);
#ifdef CONFIG_X86_32
	sysenter_setup();
	enable_sep_cpu();
+#else
+	vgetcpu_set_mode();
#endif
}
...@@ -797,7 +816,7 @@ void __cpuinit print_cpu_info(struct cpuinfo_x86 *c)
	else if (c->cpuid_level >= 0)
		vendor = c->x86_vendor_id;
-	if (vendor && strncmp(c->x86_model_id, vendor, strlen(vendor)))
+	if (vendor && !strstr(c->x86_model_id, vendor))
		printk(KERN_CONT "%s ", vendor);
	if (c->x86_model_id[0])
......
...@@ -66,6 +66,6 @@ struct tss_struct doublefault_tss __cacheline_aligned = {
		.ds		= __USER_DS,
		.fs		= __KERNEL_PERCPU,
-		.__cr3		= __phys_addr_const((unsigned long)swapper_pg_dir)
+		.__cr3		= __pa_nodebug(swapper_pg_dir),
	}
};
/*
* Copyright (C) 1991, 1992 Linus Torvalds
* Copyright (C) 2000, 2001, 2002 Andi Kleen, SuSE Labs
*/
#include <linux/kallsyms.h>
#include <linux/kprobes.h>
#include <linux/uaccess.h>
#include <linux/utsname.h>
#include <linux/hardirq.h>
#include <linux/kdebug.h>
#include <linux/module.h>
#include <linux/ptrace.h>
#include <linux/kexec.h>
#include <linux/bug.h>
#include <linux/nmi.h>
#include <asm/stacktrace.h>
#define STACKSLOTS_PER_LINE 8
#define get_bp(bp) asm("movl %%ebp, %0" : "=r" (bp) :)
int panic_on_unrecovered_nmi;
int kstack_depth_to_print = 3 * STACKSLOTS_PER_LINE;
static unsigned int code_bytes = 64;
static int die_counter;
void printk_address(unsigned long address, int reliable)
{
printk(" [<%p>] %s%pS\n", (void *) address,
reliable ? "" : "? ", (void *) address);
}
static inline int valid_stack_ptr(struct thread_info *tinfo,
void *p, unsigned int size, void *end)
{
void *t = tinfo;
if (end) {
if (p < end && p >= (end-THREAD_SIZE))
return 1;
else
return 0;
}
return p > t && p < t + THREAD_SIZE - size;
}
/* The form of the top of the frame on the stack */
struct stack_frame {
struct stack_frame *next_frame;
unsigned long return_address;
};
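/*
 * Walk the stack words between 'stack' and the end of the thread stack:
 * every kernel text address found is reported, and when a frame-pointer
 * chain is available (bp != 0) only saved return addresses lying on that
 * chain are marked reliable; the rest are printed with a '?' prefix.
 */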
static inline unsigned long
print_context_stack(struct thread_info *tinfo,
unsigned long *stack, unsigned long bp,
const struct stacktrace_ops *ops, void *data,
unsigned long *end)
{
struct stack_frame *frame = (struct stack_frame *)bp;
while (valid_stack_ptr(tinfo, stack, sizeof(*stack), end)) {
unsigned long addr;
addr = *stack;
if (__kernel_text_address(addr)) {
if ((unsigned long) stack == bp + sizeof(long)) {
ops->address(data, addr, 1);
frame = frame->next_frame;
bp = (unsigned long) frame;
} else {
ops->address(data, addr, bp == 0);
}
}
stack++;
}
return bp;
}
void dump_trace(struct task_struct *task, struct pt_regs *regs,
unsigned long *stack, unsigned long bp,
const struct stacktrace_ops *ops, void *data)
{
if (!task)
task = current;
if (!stack) {
unsigned long dummy;
stack = &dummy;
if (task && task != current)
stack = (unsigned long *)task->thread.sp;
}
#ifdef CONFIG_FRAME_POINTER
if (!bp) {
if (task == current) {
/* Grab bp right from our regs */
get_bp(bp);
} else {
/* bp is the last reg pushed by switch_to */
bp = *(unsigned long *) task->thread.sp;
}
}
#endif
for (;;) {
struct thread_info *context;
context = (struct thread_info *)
((unsigned long)stack & (~(THREAD_SIZE - 1)));
bp = print_context_stack(context, stack, bp, ops, data, NULL);
stack = (unsigned long *)context->previous_esp;
if (!stack)
break;
if (ops->stack(data, "IRQ") < 0)
break;
touch_nmi_watchdog();
}
}
EXPORT_SYMBOL(dump_trace);
static void
print_trace_warning_symbol(void *data, char *msg, unsigned long symbol)
{
printk(data);
print_symbol(msg, symbol);
printk("\n");
}
static void print_trace_warning(void *data, char *msg)
{
printk("%s%s\n", (char *)data, msg);
}
static int print_trace_stack(void *data, char *name)
{
printk("%s <%s> ", (char *)data, name);
return 0;
}
/*
 * Print one address/symbol entry per line.
*/
static void print_trace_address(void *data, unsigned long addr, int reliable)
{
touch_nmi_watchdog();
printk(data);
printk_address(addr, reliable);
}
static const struct stacktrace_ops print_trace_ops = {
.warning = print_trace_warning,
.warning_symbol = print_trace_warning_symbol,
.stack = print_trace_stack,
.address = print_trace_address,
};
static void
show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs,
unsigned long *stack, unsigned long bp, char *log_lvl)
{
printk("%sCall Trace:\n", log_lvl);
dump_trace(task, regs, stack, bp, &print_trace_ops, log_lvl);
}
void show_trace(struct task_struct *task, struct pt_regs *regs,
unsigned long *stack, unsigned long bp)
{
show_trace_log_lvl(task, regs, stack, bp, "");
}
static void
show_stack_log_lvl(struct task_struct *task, struct pt_regs *regs,
unsigned long *sp, unsigned long bp, char *log_lvl)
{
unsigned long *stack;
int i;
if (sp == NULL) {
if (task)
sp = (unsigned long *)task->thread.sp;
else
sp = (unsigned long *)&sp;
}
stack = sp;
for (i = 0; i < kstack_depth_to_print; i++) {
if (kstack_end(stack))
break;
if (i && ((i % STACKSLOTS_PER_LINE) == 0))
printk("\n%s", log_lvl);
printk(" %08lx", *stack++);
touch_nmi_watchdog();
}
printk("\n");
show_trace_log_lvl(task, regs, sp, bp, log_lvl);
}
void show_stack(struct task_struct *task, unsigned long *sp)
{
show_stack_log_lvl(task, NULL, sp, 0, "");
}
/*
* The architecture-independent dump_stack generator
*/
void dump_stack(void)
{
unsigned long bp = 0;
unsigned long stack;
#ifdef CONFIG_FRAME_POINTER
if (!bp)
get_bp(bp);
#endif
printk("Pid: %d, comm: %.20s %s %s %.*s\n",
current->pid, current->comm, print_tainted(),
init_utsname()->release,
(int)strcspn(init_utsname()->version, " "),
init_utsname()->version);
show_trace(NULL, NULL, &stack, bp);
}
EXPORT_SYMBOL(dump_stack);
void show_registers(struct pt_regs *regs)
{
int i;
print_modules();
__show_regs(regs, 0);
printk(KERN_EMERG "Process %.*s (pid: %d, ti=%p task=%p task.ti=%p)\n",
TASK_COMM_LEN, current->comm, task_pid_nr(current),
current_thread_info(), current, task_thread_info(current));
/*
* When in-kernel, we also print out the stack and code at the
* time of the fault..
*/
if (!user_mode_vm(regs)) {
unsigned int code_prologue = code_bytes * 43 / 64;
unsigned int code_len = code_bytes;
unsigned char c;
u8 *ip;
printk(KERN_EMERG "Stack:\n");
show_stack_log_lvl(NULL, regs, &regs->sp,
0, KERN_EMERG);
printk(KERN_EMERG "Code: ");
ip = (u8 *)regs->ip - code_prologue;
if (ip < (u8 *)PAGE_OFFSET || probe_kernel_address(ip, c)) {
/* try starting at IP */
ip = (u8 *)regs->ip;
code_len = code_len - code_prologue + 1;
}
for (i = 0; i < code_len; i++, ip++) {
if (ip < (u8 *)PAGE_OFFSET ||
probe_kernel_address(ip, c)) {
printk(" Bad EIP value.");
break;
}
if (ip == (u8 *)regs->ip)
printk("<%02x> ", c);
else
printk("%02x ", c);
}
}
printk("\n");
}
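/*
 * BUG() compiles to a ud2 instruction; its opcode bytes 0x0f 0x0b read
 * back as the little-endian u16 0x0b0f tested below.
 */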
int is_valid_bugaddr(unsigned long ip)
{
unsigned short ud2;
if (ip < PAGE_OFFSET)
return 0;
if (probe_kernel_address((unsigned short *)ip, ud2))
return 0;
return ud2 == 0x0b0f;
}
static raw_spinlock_t die_lock = __RAW_SPIN_LOCK_UNLOCKED;
static int die_owner = -1;
static unsigned int die_nest_count;
unsigned __kprobes long oops_begin(void)
{
unsigned long flags;
oops_enter();
if (die_owner != raw_smp_processor_id()) {
console_verbose();
raw_local_irq_save(flags);
__raw_spin_lock(&die_lock);
die_owner = smp_processor_id();
die_nest_count = 0;
bust_spinlocks(1);
} else {
raw_local_irq_save(flags);
}
die_nest_count++;
return flags;
}
void __kprobes oops_end(unsigned long flags, struct pt_regs *regs, int signr)
{
bust_spinlocks(0);
die_owner = -1;
add_taint(TAINT_DIE);
__raw_spin_unlock(&die_lock);
raw_local_irq_restore(flags);
if (!regs)
return;
if (kexec_should_crash(current))
crash_kexec(regs);
if (in_interrupt())
panic("Fatal exception in interrupt");
if (panic_on_oops)
panic("Fatal exception");
oops_exit();
do_exit(signr);
}
int __kprobes __die(const char *str, struct pt_regs *regs, long err)
{
unsigned short ss;
unsigned long sp;
printk(KERN_EMERG "%s: %04lx [#%d] ", str, err & 0xffff, ++die_counter);
#ifdef CONFIG_PREEMPT
printk("PREEMPT ");
#endif
#ifdef CONFIG_SMP
printk("SMP ");
#endif
#ifdef CONFIG_DEBUG_PAGEALLOC
printk("DEBUG_PAGEALLOC");
#endif
printk("\n");
if (notify_die(DIE_OOPS, str, regs, err,
current->thread.trap_no, SIGSEGV) == NOTIFY_STOP)
return 1;
show_registers(regs);
/* Executive summary in case the oops scrolled away */
sp = (unsigned long) (&regs->sp);
savesegment(ss, ss);
if (user_mode(regs)) {
sp = regs->sp;
ss = regs->ss & 0xffff;
}
printk(KERN_EMERG "EIP: [<%08lx>] ", regs->ip);
print_symbol("%s", regs->ip);
printk(" SS:ESP %04x:%08lx\n", ss, sp);
return 0;
}
/*
* This is gone through when something in the kernel has done something bad
* and is about to be terminated:
*/
void die(const char *str, struct pt_regs *regs, long err)
{
unsigned long flags = oops_begin();
if (die_nest_count < 3) {
report_bug(regs->ip, regs);
if (__die(str, regs, err))
regs = NULL;
} else {
printk(KERN_EMERG "Recursive die() failure, output suppressed\n");
}
oops_end(flags, regs, SIGSEGV);
}
static DEFINE_SPINLOCK(nmi_print_lock);
void notrace __kprobes
die_nmi(char *str, struct pt_regs *regs, int do_panic)
{
if (notify_die(DIE_NMIWATCHDOG, str, regs, 0, 2, SIGINT) == NOTIFY_STOP)
return;
spin_lock(&nmi_print_lock);
/*
* We are in trouble anyway, lets at least try
* to get a message out:
*/
bust_spinlocks(1);
printk(KERN_EMERG "%s", str);
printk(" on CPU%d, ip %08lx, registers:\n",
smp_processor_id(), regs->ip);
show_registers(regs);
if (do_panic)
panic("Non maskable interrupt");
console_silent();
spin_unlock(&nmi_print_lock);
bust_spinlocks(0);
/*
* If we are in kernel we are probably nested up pretty bad
 * and might as well get out now while we still can:
*/
if (!user_mode_vm(regs)) {
current->thread.trap_no = 2;
crash_kexec(regs);
}
do_exit(SIGSEGV);
}
static int __init oops_setup(char *s)
{
if (!s)
return -EINVAL;
if (!strcmp(s, "panic"))
panic_on_oops = 1;
return 0;
}
early_param("oops", oops_setup);
static int __init kstack_setup(char *s)
{
if (!s)
return -EINVAL;
kstack_depth_to_print = simple_strtoul(s, NULL, 0);
return 0;
}
early_param("kstack", kstack_setup);
static int __init code_bytes_setup(char *s)
{
code_bytes = simple_strtoul(s, NULL, 0);
if (code_bytes > 8192)
code_bytes = 8192;
return 1;
}
__setup("code_bytes=", code_bytes_setup);
/*
 * Copyright (C) 1991, 1992 Linus Torvalds
 * Copyright (C) 2000, 2001, 2002 Andi Kleen, SuSE Labs
- *
- * Pentium III FXSR, SSE support
- * Gareth Hughes <gareth@valinux.com>, May 2000
 */
-/*
- * 'Traps.c' handles hardware traps and faults after we have saved some
- * state in 'entry.S'.
- */
-#include <linux/moduleparam.h>
-#include <linux/interrupt.h>
#include <linux/kallsyms.h>
-#include <linux/spinlock.h>
#include <linux/kprobes.h>
#include <linux/uaccess.h>
#include <linux/utsname.h>
+#include <linux/hardirq.h>
#include <linux/kdebug.h>
-#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/ptrace.h>
-#include <linux/string.h>
-#include <linux/unwind.h>
-#include <linux/delay.h>
-#include <linux/errno.h>
#include <linux/kexec.h>
-#include <linux/sched.h>
-#include <linux/timer.h>
-#include <linux/init.h>
#include <linux/bug.h>
#include <linux/nmi.h>
-#include <linux/mm.h>
-#include <linux/smp.h>
-#include <linux/io.h>
-#if defined(CONFIG_EDAC)
-#include <linux/edac.h>
-#endif
#include <asm/stacktrace.h>
-#include <asm/processor.h>
-#include <asm/debugreg.h>
-#include <asm/atomic.h>
-#include <asm/system.h>
-#include <asm/unwind.h>
-#include <asm/desc.h>
-#include <asm/i387.h>
-#include <asm/pgalloc.h>
-#include <asm/proto.h>
-#include <asm/pda.h>
-#include <asm/traps.h>
-#include <mach_traps.h>
+#define STACKSLOTS_PER_LINE 4
+#define get_bp(bp) asm("movq %%rbp, %0" : "=r" (bp) :)
int panic_on_unrecovered_nmi;
-int kstack_depth_to_print = 12;
+int kstack_depth_to_print = 3 * STACKSLOTS_PER_LINE;
static unsigned int code_bytes = 64;
-static int ignore_nmis;
static int die_counter;
-static inline void conditional_sti(struct pt_regs *regs)
-{
-	if (regs->flags & X86_EFLAGS_IF)
-		local_irq_enable();
-}
-static inline void preempt_conditional_sti(struct pt_regs *regs)
-{
-	inc_preempt_count();
-	if (regs->flags & X86_EFLAGS_IF)
-		local_irq_enable();
-}
-static inline void preempt_conditional_cli(struct pt_regs *regs)
-{
-	if (regs->flags & X86_EFLAGS_IF)
-		local_irq_disable();
-	/* Make sure to not schedule here because we could be running
-	   on an exception stack. */
-	dec_preempt_count();
-}
void printk_address(unsigned long address, int reliable)
{
-	printk(" [<%016lx>] %s%pS\n",
-			address, reliable ? "" : "? ", (void *) address);
+	printk(" [<%p>] %s%pS\n", (void *) address,
+		reliable ? "" : "? ", (void *) address);
}
static unsigned long *in_exception_stack(unsigned cpu, unsigned long stack,
...@@ -202,7 +144,7 @@ print_context_stack(struct thread_info *tinfo,
		addr = *stack;
		if (__kernel_text_address(addr)) {
-			if ((unsigned long) stack == bp + 8) {
+			if ((unsigned long) stack == bp + sizeof(long)) {
				ops->address(data, addr, 1);
				frame = frame->next_frame;
				bp = (unsigned long) frame;
...@@ -238,7 +180,7 @@ void dump_trace(struct task_struct *task, struct pt_regs *regs,
	if (!bp) {
		if (task == current) {
			/* Grab bp right from our regs */
-			asm("movq %%rbp, %0" : "=r" (bp) : );
+			get_bp(bp);
		} else {
			/* bp is the last reg pushed by switch_to */
			bp = *(unsigned long *) task->thread.sp;
...@@ -308,24 +250,29 @@ EXPORT_SYMBOL(dump_trace);
static void
print_trace_warning_symbol(void *data, char *msg, unsigned long symbol)
{
+	printk(data);
	print_symbol(msg, symbol);
	printk("\n");
}
static void print_trace_warning(void *data, char *msg)
{
-	printk("%s\n", msg);
+	printk("%s%s\n", (char *)data, msg);
}
static int print_trace_stack(void *data, char *name)
{
-	printk(" <%s> ", name);
+	printk("%s <%s> ", (char *)data, name);
	return 0;
}
+/*
+ * Print one address/symbol entry per line.
+ */
static void print_trace_address(void *data, unsigned long addr, int reliable)
{
	touch_nmi_watchdog();
+	printk(data);
	printk_address(addr, reliable);
}
...@@ -340,7 +287,7 @@ static void
show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs,
		unsigned long *stack, unsigned long bp, char *log_lvl)
{
-	printk("Call Trace:\n");
+	printk("%sCall Trace:\n", log_lvl);
	dump_trace(task, regs, stack, bp, &print_trace_ops, log_lvl);
}
...@@ -385,8 +332,8 @@ show_stack_log_lvl(struct task_struct *task, struct pt_regs *regs,
		if (((long) stack & (THREAD_SIZE-1)) == 0)
			break;
	}
-	if (i && ((i % 4) == 0))
-		printk("\n");
+	if (i && ((i % STACKSLOTS_PER_LINE) == 0))
+		printk("\n%s", log_lvl);
	printk(" %016lx", *stack++);
	touch_nmi_watchdog();
}
...@@ -409,7 +356,7 @@ void dump_stack(void)
#ifdef CONFIG_FRAME_POINTER
	if (!bp)
-		asm("movq %%rbp, %0" : "=r" (bp) : );
+		get_bp(bp);
#endif
	printk("Pid: %d, comm: %.20s %s %s %.*s\n",
...@@ -430,7 +377,7 @@ void show_registers(struct pt_regs *regs)
	sp = regs->sp;
	printk("CPU %d ", cpu);
-	__show_regs(regs);
+	__show_regs(regs, 1);
	printk("Process %s (pid: %d, threadinfo %p, task %p)\n",
		cur->comm, cur->pid, task_thread_info(cur), cur);
...@@ -444,15 +391,15 @@ void show_registers(struct pt_regs *regs)
		unsigned char c;
		u8 *ip;
-		printk("Stack: ");
+		printk(KERN_EMERG "Stack:\n");
		show_stack_log_lvl(NULL, regs, (unsigned long *)sp,
-				regs->bp, "");
+				regs->bp, KERN_EMERG);
		printk(KERN_EMERG "Code: ");
		ip = (u8 *)regs->ip - code_prologue;
		if (ip < (u8 *)PAGE_OFFSET || probe_kernel_address(ip, c)) {
-			/* try starting at RIP */
+			/* try starting at IP */
			ip = (u8 *)regs->ip;
			code_len = code_len - code_prologue + 1;
		}
...@@ -521,6 +468,8 @@ void __kprobes oops_end(unsigned long flags, struct pt_regs *regs, int signr)
		oops_exit();
		return;
	}
+	if (in_interrupt())
+		panic("Fatal exception in interrupt");
	if (panic_on_oops)
		panic("Fatal exception");
	oops_exit();
...@@ -529,7 +478,7 @@ void __kprobes oops_end(unsigned long flags, struct pt_regs *regs, int signr)
int __kprobes __die(const char *str, struct pt_regs *regs, long err)
{
-	printk(KERN_EMERG "%s: %04lx [%u] ", str, err & 0xffff, ++die_counter);
+	printk(KERN_EMERG "%s: %04lx [#%d] ", str, err & 0xffff, ++die_counter);
#ifdef CONFIG_PREEMPT
	printk("PREEMPT ");
#endif
...@@ -594,596 +543,6 @@ die_nmi(char *str, struct pt_regs *regs, int do_panic)
	do_exit(SIGBUS);
}
static void __kprobes
do_trap(int trapnr, int signr, char *str, struct pt_regs *regs,
long error_code, siginfo_t *info)
{
struct task_struct *tsk = current;
if (!user_mode(regs))
goto kernel_trap;
/*
* We want error_code and trap_no set for userspace faults and
* kernelspace faults which result in die(), but not
* kernelspace faults which are fixed up. die() gives the
* process no chance to handle the signal and notice the
* kernel fault information, so that won't result in polluting
* the information about previously queued, but not yet
* delivered, faults. See also do_general_protection below.
*/
tsk->thread.error_code = error_code;
tsk->thread.trap_no = trapnr;
if (show_unhandled_signals && unhandled_signal(tsk, signr) &&
printk_ratelimit()) {
printk(KERN_INFO
"%s[%d] trap %s ip:%lx sp:%lx error:%lx",
tsk->comm, tsk->pid, str,
regs->ip, regs->sp, error_code);
print_vma_addr(" in ", regs->ip);
printk("\n");
}
if (info)
force_sig_info(signr, info, tsk);
else
force_sig(signr, tsk);
return;
kernel_trap:
if (!fixup_exception(regs)) {
tsk->thread.error_code = error_code;
tsk->thread.trap_no = trapnr;
die(str, regs, error_code);
}
return;
}
#define DO_ERROR(trapnr, signr, str, name) \
asmlinkage void do_##name(struct pt_regs *regs, long error_code) \
{ \
if (notify_die(DIE_TRAP, str, regs, error_code, trapnr, signr) \
== NOTIFY_STOP) \
return; \
conditional_sti(regs); \
do_trap(trapnr, signr, str, regs, error_code, NULL); \
}
#define DO_ERROR_INFO(trapnr, signr, str, name, sicode, siaddr) \
asmlinkage void do_##name(struct pt_regs *regs, long error_code) \
{ \
siginfo_t info; \
info.si_signo = signr; \
info.si_errno = 0; \
info.si_code = sicode; \
info.si_addr = (void __user *)siaddr; \
trace_hardirqs_fixup(); \
if (notify_die(DIE_TRAP, str, regs, error_code, trapnr, signr) \
== NOTIFY_STOP) \
return; \
conditional_sti(regs); \
do_trap(trapnr, signr, str, regs, error_code, &info); \
}
DO_ERROR_INFO(0, SIGFPE, "divide error", divide_error, FPE_INTDIV, regs->ip)
DO_ERROR(4, SIGSEGV, "overflow", overflow)
DO_ERROR(5, SIGSEGV, "bounds", bounds)
DO_ERROR_INFO(6, SIGILL, "invalid opcode", invalid_op, ILL_ILLOPN, regs->ip)
DO_ERROR(9, SIGFPE, "coprocessor segment overrun", coprocessor_segment_overrun)
DO_ERROR(10, SIGSEGV, "invalid TSS", invalid_TSS)
DO_ERROR(11, SIGBUS, "segment not present", segment_not_present)
DO_ERROR_INFO(17, SIGBUS, "alignment check", alignment_check, BUS_ADRALN, 0)
/* Runs on IST stack */
asmlinkage void do_stack_segment(struct pt_regs *regs, long error_code)
{
if (notify_die(DIE_TRAP, "stack segment", regs, error_code,
12, SIGBUS) == NOTIFY_STOP)
return;
preempt_conditional_sti(regs);
do_trap(12, SIGBUS, "stack segment", regs, error_code, NULL);
preempt_conditional_cli(regs);
}
asmlinkage void do_double_fault(struct pt_regs *regs, long error_code)
{
static const char str[] = "double fault";
struct task_struct *tsk = current;
	/* Return not checked because a double fault cannot be ignored */
notify_die(DIE_TRAP, str, regs, error_code, 8, SIGSEGV);
tsk->thread.error_code = error_code;
tsk->thread.trap_no = 8;
/* This is always a kernel trap and never fixable (and thus must
never return). */
for (;;)
die(str, regs, error_code);
}
asmlinkage void __kprobes
do_general_protection(struct pt_regs *regs, long error_code)
{
struct task_struct *tsk;
conditional_sti(regs);
tsk = current;
if (!user_mode(regs))
goto gp_in_kernel;
tsk->thread.error_code = error_code;
tsk->thread.trap_no = 13;
if (show_unhandled_signals && unhandled_signal(tsk, SIGSEGV) &&
printk_ratelimit()) {
printk(KERN_INFO
"%s[%d] general protection ip:%lx sp:%lx error:%lx",
tsk->comm, tsk->pid,
regs->ip, regs->sp, error_code);
print_vma_addr(" in ", regs->ip);
printk("\n");
}
force_sig(SIGSEGV, tsk);
return;
gp_in_kernel:
if (fixup_exception(regs))
return;
tsk->thread.error_code = error_code;
tsk->thread.trap_no = 13;
if (notify_die(DIE_GPF, "general protection fault", regs,
error_code, 13, SIGSEGV) == NOTIFY_STOP)
return;
die("general protection fault", regs, error_code);
}
static notrace __kprobes void
mem_parity_error(unsigned char reason, struct pt_regs *regs)
{
printk(KERN_EMERG "Uhhuh. NMI received for unknown reason %02x.\n",
reason);
printk(KERN_EMERG "You have some hardware problem, likely on the PCI bus.\n");
#if defined(CONFIG_EDAC)
if (edac_handler_set()) {
edac_atomic_assert_error();
return;
}
#endif
if (panic_on_unrecovered_nmi)
panic("NMI: Not continuing");
printk(KERN_EMERG "Dazed and confused, but trying to continue\n");
/* Clear and disable the memory parity error line. */
reason = (reason & 0xf) | 4;
outb(reason, 0x61);
}
static notrace __kprobes void
io_check_error(unsigned char reason, struct pt_regs *regs)
{
printk("NMI: IOCK error (debug interrupt?)\n");
show_registers(regs);
/* Re-enable the IOCK line, wait for a few seconds */
reason = (reason & 0xf) | 8;
outb(reason, 0x61);
mdelay(2000);
reason &= ~8;
outb(reason, 0x61);
}
static notrace __kprobes void
unknown_nmi_error(unsigned char reason, struct pt_regs *regs)
{
if (notify_die(DIE_NMIUNKNOWN, "nmi", regs, reason, 2, SIGINT) ==
NOTIFY_STOP)
return;
printk(KERN_EMERG "Uhhuh. NMI received for unknown reason %02x.\n",
reason);
printk(KERN_EMERG "Do you have a strange power saving mode enabled?\n");
if (panic_on_unrecovered_nmi)
panic("NMI: Not continuing");
printk(KERN_EMERG "Dazed and confused, but trying to continue\n");
}
/* Runs on IST stack. This code must keep interrupts off all the time.
Nested NMIs are prevented by the CPU. */
asmlinkage notrace __kprobes void default_do_nmi(struct pt_regs *regs)
{
unsigned char reason = 0;
int cpu;
cpu = smp_processor_id();
/* Only the BSP gets external NMIs from the system. */
if (!cpu)
reason = get_nmi_reason();
if (!(reason & 0xc0)) {
if (notify_die(DIE_NMI_IPI, "nmi_ipi", regs, reason, 2, SIGINT)
== NOTIFY_STOP)
return;
/*
* Ok, so this is none of the documented NMI sources,
* so it must be the NMI watchdog.
*/
if (nmi_watchdog_tick(regs, reason))
return;
if (!do_nmi_callback(regs, cpu))
unknown_nmi_error(reason, regs);
return;
}
if (notify_die(DIE_NMI, "nmi", regs, reason, 2, SIGINT) == NOTIFY_STOP)
return;
/* AK: following checks seem to be broken on modern chipsets. FIXME */
if (reason & 0x80)
mem_parity_error(reason, regs);
if (reason & 0x40)
io_check_error(reason, regs);
}
asmlinkage notrace __kprobes void
do_nmi(struct pt_regs *regs, long error_code)
{
nmi_enter();
add_pda(__nmi_count, 1);
if (!ignore_nmis)
default_do_nmi(regs);
nmi_exit();
}
void stop_nmi(void)
{
acpi_nmi_disable();
ignore_nmis++;
}
void restart_nmi(void)
{
ignore_nmis--;
acpi_nmi_enable();
}
/* runs on IST stack. */
asmlinkage void __kprobes do_int3(struct pt_regs *regs, long error_code)
{
trace_hardirqs_fixup();
if (notify_die(DIE_INT3, "int3", regs, error_code, 3, SIGTRAP)
== NOTIFY_STOP)
return;
preempt_conditional_sti(regs);
do_trap(3, SIGTRAP, "int3", regs, error_code, NULL);
preempt_conditional_cli(regs);
}
/* Help handler running on IST stack to switch back to user stack
for scheduling or signal handling. The actual stack switch is done in
entry.S */
asmlinkage __kprobes struct pt_regs *sync_regs(struct pt_regs *eregs)
{
struct pt_regs *regs = eregs;
/* Did already sync */
if (eregs == (struct pt_regs *)eregs->sp)
;
/* Exception from user space */
else if (user_mode(eregs))
regs = task_pt_regs(current);
/* Exception from kernel and interrupts are enabled. Move to
kernel process stack. */
else if (eregs->flags & X86_EFLAGS_IF)
regs = (struct pt_regs *)(eregs->sp -= sizeof(struct pt_regs));
if (eregs != regs)
*regs = *eregs;
return regs;
}
/* runs on IST stack. */
asmlinkage void __kprobes do_debug(struct pt_regs *regs,
unsigned long error_code)
{
struct task_struct *tsk = current;
unsigned long condition;
siginfo_t info;
trace_hardirqs_fixup();
get_debugreg(condition, 6);
/*
* The processor cleared BTF, so don't mark that we need it set.
*/
clear_tsk_thread_flag(tsk, TIF_DEBUGCTLMSR);
tsk->thread.debugctlmsr = 0;
if (notify_die(DIE_DEBUG, "debug", regs, condition, error_code,
SIGTRAP) == NOTIFY_STOP)
return;
preempt_conditional_sti(regs);
/* Mask out spurious debug traps due to lazy DR7 setting */
if (condition & (DR_TRAP0|DR_TRAP1|DR_TRAP2|DR_TRAP3)) {
if (!tsk->thread.debugreg7)
goto clear_dr7;
}
tsk->thread.debugreg6 = condition;
/*
* Single-stepping through TF: make sure we ignore any events in
* kernel space (but re-enable TF when returning to user mode).
*/
if (condition & DR_STEP) {
if (!user_mode(regs))
goto clear_TF_reenable;
}
/* Ok, finally something we can handle */
tsk->thread.trap_no = 1;
tsk->thread.error_code = error_code;
info.si_signo = SIGTRAP;
info.si_errno = 0;
info.si_code = get_si_code(condition);
info.si_addr = user_mode(regs) ? (void __user *)regs->ip : NULL;
force_sig_info(SIGTRAP, &info, tsk);
clear_dr7:
set_debugreg(0, 7);
preempt_conditional_cli(regs);
return;
clear_TF_reenable:
set_tsk_thread_flag(tsk, TIF_SINGLESTEP);
regs->flags &= ~X86_EFLAGS_TF;
preempt_conditional_cli(regs);
return;
}
static int kernel_math_error(struct pt_regs *regs, const char *str, int trapnr)
{
if (fixup_exception(regs))
return 1;
notify_die(DIE_GPF, str, regs, 0, trapnr, SIGFPE);
/* Illegal floating point operation in the kernel */
current->thread.trap_no = trapnr;
die(str, regs, 0);
return 0;
}
/*
* Note that we play around with the 'TS' bit in an attempt to get
* the correct behaviour even in the presence of the asynchronous
* IRQ13 behaviour
*/
asmlinkage void do_coprocessor_error(struct pt_regs *regs)
{
void __user *ip = (void __user *)(regs->ip);
struct task_struct *task;
siginfo_t info;
unsigned short cwd, swd;
conditional_sti(regs);
if (!user_mode(regs) &&
kernel_math_error(regs, "kernel x87 math error", 16))
return;
/*
* Save the info for the exception handler and clear the error.
*/
task = current;
save_init_fpu(task);
task->thread.trap_no = 16;
task->thread.error_code = 0;
info.si_signo = SIGFPE;
info.si_errno = 0;
info.si_code = __SI_FAULT;
info.si_addr = ip;
/*
* (~cwd & swd) will mask out exceptions that are not set to unmasked
* status. 0x3f is the exception bits in these regs, 0x200 is the
* C1 reg you need in case of a stack fault, 0x040 is the stack
* fault bit. We should only be taking one exception at a time,
* so if this combination doesn't produce any single exception,
* then we have a bad program that isn't synchronizing its FPU usage
* and it will suffer the consequences since we won't be able to
* fully reproduce the context of the exception
*/
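	/*
	 * Worked example: with cwd 0x037e (only the invalid-op exception
	 * unmasked) and the IE bit 0x0001 set in swd, swd & ~cwd & 0x3f
	 * evaluates to 0x001 and the switch below picks FPE_FLTINV.
	 */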
cwd = get_fpu_cwd(task);
swd = get_fpu_swd(task);
switch (swd & ~cwd & 0x3f) {
case 0x000: /* No unmasked exception */
default: /* Multiple exceptions */
break;
case 0x001: /* Invalid Op */
/*
* swd & 0x240 == 0x040: Stack Underflow
* swd & 0x240 == 0x240: Stack Overflow
* User must clear the SF bit (0x40) if set
*/
info.si_code = FPE_FLTINV;
break;
case 0x002: /* Denormalize */
case 0x010: /* Underflow */
info.si_code = FPE_FLTUND;
break;
case 0x004: /* Zero Divide */
info.si_code = FPE_FLTDIV;
break;
case 0x008: /* Overflow */
info.si_code = FPE_FLTOVF;
break;
case 0x020: /* Precision */
info.si_code = FPE_FLTRES;
break;
}
force_sig_info(SIGFPE, &info, task);
}
asmlinkage void bad_intr(void)
{
printk("bad interrupt");
}
asmlinkage void do_simd_coprocessor_error(struct pt_regs *regs)
{
void __user *ip = (void __user *)(regs->ip);
struct task_struct *task;
siginfo_t info;
unsigned short mxcsr;
conditional_sti(regs);
if (!user_mode(regs) &&
kernel_math_error(regs, "kernel simd math error", 19))
return;
/*
* Save the info for the exception handler and clear the error.
*/
task = current;
save_init_fpu(task);
task->thread.trap_no = 19;
task->thread.error_code = 0;
info.si_signo = SIGFPE;
info.si_errno = 0;
info.si_code = __SI_FAULT;
info.si_addr = ip;
/*
* The SIMD FPU exceptions are handled a little differently, as there
* is only a single status/control register. Thus, to determine which
* unmasked exception was caught we must mask the exception mask bits
* at 0x1f80, and then use these to mask the exception bits at 0x3f.
*/
mxcsr = get_fpu_mxcsr(task);
switch (~((mxcsr & 0x1f80) >> 7) & (mxcsr & 0x3f)) {
case 0x000:
default:
break;
case 0x001: /* Invalid Op */
info.si_code = FPE_FLTINV;
break;
case 0x002: /* Denormalize */
case 0x010: /* Underflow */
info.si_code = FPE_FLTUND;
break;
case 0x004: /* Zero Divide */
info.si_code = FPE_FLTDIV;
break;
case 0x008: /* Overflow */
info.si_code = FPE_FLTOVF;
break;
case 0x020: /* Precision */
info.si_code = FPE_FLTRES;
break;
}
force_sig_info(SIGFPE, &info, task);
}
asmlinkage void do_spurious_interrupt_bug(struct pt_regs *regs)
{
}
asmlinkage void __attribute__((weak)) smp_thermal_interrupt(void)
{
}
asmlinkage void __attribute__((weak)) mce_threshold_interrupt(void)
{
}
/*
* 'math_state_restore()' saves the current math information in the
* old math state array, and gets the new ones from the current task
*
* Careful.. There are problems with IBM-designed IRQ13 behaviour.
* Don't touch unless you *really* know how it works.
*/
asmlinkage void math_state_restore(void)
{
struct task_struct *me = current;
if (!used_math()) {
local_irq_enable();
/*
* does a slab alloc which can sleep
*/
if (init_fpu(me)) {
/*
* ran out of memory!
*/
do_group_exit(SIGKILL);
return;
}
local_irq_disable();
}
clts(); /* Allow maths ops (or we recurse) */
/*
* Paranoid restore. send a SIGSEGV if we fail to restore the state.
*/
if (unlikely(restore_fpu_checking(me))) {
stts();
force_sig(SIGSEGV, me);
return;
}
task_thread_info(me)->status |= TS_USEDFPU;
me->fpu_counter++;
}
EXPORT_SYMBOL_GPL(math_state_restore);
void __init trap_init(void)
{
set_intr_gate(0, &divide_error);
set_intr_gate_ist(1, &debug, DEBUG_STACK);
set_intr_gate_ist(2, &nmi, NMI_STACK);
/* int3 can be called from all */
set_system_gate_ist(3, &int3, DEBUG_STACK);
/* int4 can be called from all */
set_system_gate(4, &overflow);
set_intr_gate(5, &bounds);
set_intr_gate(6, &invalid_op);
set_intr_gate(7, &device_not_available);
set_intr_gate_ist(8, &double_fault, DOUBLEFAULT_STACK);
set_intr_gate(9, &coprocessor_segment_overrun);
set_intr_gate(10, &invalid_TSS);
set_intr_gate(11, &segment_not_present);
set_intr_gate_ist(12, &stack_segment, STACKFAULT_STACK);
set_intr_gate(13, &general_protection);
set_intr_gate(14, &page_fault);
set_intr_gate(15, &spurious_interrupt_bug);
set_intr_gate(16, &coprocessor_error);
set_intr_gate(17, &alignment_check);
#ifdef CONFIG_X86_MCE
set_intr_gate_ist(18, &machine_check, MCE_STACK);
#endif
set_intr_gate(19, &simd_coprocessor_error);
#ifdef CONFIG_IA32_EMULATION
set_system_gate(IA32_SYSCALL_VECTOR, ia32_syscall);
#endif
/*
* Should be a barrier for any external CPU state:
*/
cpu_init();
}
static int __init oops_setup(char *s)
{
	if (!s)
...
@@ -730,6 +730,7 @@ error_code:
 	movl $(__USER_DS), %ecx
 	movl %ecx, %ds
 	movl %ecx, %es
+	TRACE_IRQS_OFF
 	movl %esp,%eax			# pt_regs pointer
 	call *%edi
 	jmp ret_from_exception
@@ -760,20 +761,9 @@ ENTRY(device_not_available)
 	RING0_INT_FRAME
 	pushl $-1			# mark this as an int
 	CFI_ADJUST_CFA_OFFSET 4
-	SAVE_ALL
-	GET_CR0_INTO_EAX
-	testl $0x4, %eax		# EM (math emulation bit)
-	jne device_not_available_emulate
-	preempt_stop(CLBR_ANY)
-	call math_state_restore
-	jmp ret_from_exception
-device_not_available_emulate:
-	pushl $0			# temporary storage for ORIG_EIP
+	pushl $do_device_not_available
 	CFI_ADJUST_CFA_OFFSET 4
-	call math_emulate
-	addl $4, %esp
-	CFI_ADJUST_CFA_OFFSET -4
-	jmp ret_from_exception
+	jmp error_code
 	CFI_ENDPROC
 END(device_not_available)
@@ -814,6 +804,7 @@ debug_stack_correct:
 	pushl $-1			# mark this as an int
 	CFI_ADJUST_CFA_OFFSET 4
 	SAVE_ALL
+	TRACE_IRQS_OFF
 	xorl %edx,%edx			# error code 0
 	movl %esp,%eax			# pt_regs pointer
 	call do_debug
@@ -858,6 +849,7 @@ nmi_stack_correct:
 	pushl %eax
 	CFI_ADJUST_CFA_OFFSET 4
 	SAVE_ALL
+	TRACE_IRQS_OFF
 	xorl %edx,%edx			# zero error code
 	movl %esp,%eax			# pt_regs pointer
 	call do_nmi
@@ -898,6 +890,7 @@ nmi_espfix_stack:
 	pushl %eax
 	CFI_ADJUST_CFA_OFFSET 4
 	SAVE_ALL
+	TRACE_IRQS_OFF
 	FIXUP_ESPFIX_STACK		# %eax == %esp
 	xorl %edx,%edx			# zero error code
 	call do_nmi
@@ -928,6 +921,7 @@ KPROBE_ENTRY(int3)
 	pushl $-1			# mark this as an int
 	CFI_ADJUST_CFA_OFFSET 4
 	SAVE_ALL
+	TRACE_IRQS_OFF
 	xorl %edx,%edx			# zero error code
 	movl %esp,%eax			# pt_regs pointer
 	call do_int3
@@ -1030,7 +1024,7 @@ ENTRY(machine_check)
 	RING0_INT_FRAME
 	pushl $0
 	CFI_ADJUST_CFA_OFFSET 4
-	pushl machine_check_vector
+	pushl $do_machine_check
 	CFI_ADJUST_CFA_OFFSET 4
 	jmp error_code
 	CFI_ENDPROC
...
@@ -667,6 +667,13 @@ END(stub_rt_sigreturn)
 	SAVE_ARGS
 	leaq -ARGOFFSET(%rsp),%rdi	# arg1 for handler
 	pushq %rbp
+	/*
+	 * Save rbp twice: One is for marking the stack frame, as usual, and the
+	 * other, to fill pt_regs properly. This is because bx comes right
+	 * before the last saved register in that structure, and not bp. If the
+	 * base pointer were in the place bx is today, this would not be needed.
+	 */
+	movq %rbp, -8(%rsp)
 	CFI_ADJUST_CFA_OFFSET 8
 	CFI_REL_OFFSET rbp, 0
 	movq %rsp,%rbp
@@ -932,6 +939,9 @@ END(spurious_interrupt)
 	.if \ist
 	movq	%gs:pda_data_offset, %rbp
 	.endif
+	.if \irqtrace
+	TRACE_IRQS_OFF
+	.endif
 	movq %rsp,%rdi
 	movq ORIG_RAX(%rsp),%rsi
 	movq $-1,ORIG_RAX(%rsp)
@@ -1059,6 +1069,7 @@ KPROBE_ENTRY(error_entry)
 error_swapgs:
 	SWAPGS
 error_sti:
+	TRACE_IRQS_OFF
 	movq %rdi,RDI(%rsp)
 	CFI_REL_OFFSET	rdi,RDI
 	movq %rsp,%rdi
@@ -1232,7 +1243,7 @@ ENTRY(simd_coprocessor_error)
 END(simd_coprocessor_error)

 ENTRY(device_not_available)
-	zeroentry math_state_restore
+	zeroentry do_device_not_available
 END(device_not_available)

 /* runs on exception stack */
...
@@ -109,6 +109,7 @@ struct oem_table {
 };

 extern int find_unisys_acpi_oem_table(unsigned long *oem_addr);
+extern void unmap_unisys_acpi_oem_table(unsigned long oem_addr);
 #endif

 struct mip_reg {
@@ -243,21 +244,38 @@ parse_unisys_oem (char *oemptr)
 }

 #ifdef CONFIG_ACPI
-int __init
-find_unisys_acpi_oem_table(unsigned long *oem_addr)
+static unsigned long oem_addrX;
+static unsigned long oem_size;
+int __init find_unisys_acpi_oem_table(unsigned long *oem_addr)
 {
 	struct acpi_table_header *header = NULL;
 	int i = 0;
-	while (ACPI_SUCCESS(acpi_get_table("OEM1", i++, &header))) {
+	acpi_size tbl_size;
+
+	while (ACPI_SUCCESS(acpi_get_table_with_size("OEM1", i++, &header, &tbl_size))) {
 		if (!memcmp((char *) &header->oem_id, "UNISYS", 6)) {
 			struct oem_table *t = (struct oem_table *)header;
-			*oem_addr = (unsigned long)__acpi_map_table(t->OEMTableAddr,
-								    t->OEMTableSize);
+
+			oem_addrX = t->OEMTableAddr;
+			oem_size = t->OEMTableSize;
+			early_acpi_os_unmap_memory(header, tbl_size);
+
+			*oem_addr = (unsigned long)__acpi_map_table(oem_addrX,
+								    oem_size);
 			return 0;
 		}
+		early_acpi_os_unmap_memory(header, tbl_size);
 	}
 	return -1;
 }
+
+void __init unmap_unisys_acpi_oem_table(unsigned long oem_addr)
+{
+	if (!oem_addr)
+		return;
+	__acpi_unmap_table((char *)oem_addr, oem_size);
+}
 #endif

 static void
...
@@ -114,7 +114,7 @@ static void uv_send_IPI_one(int cpu, int vector)
 	unsigned long val, apicid, lapicid;
 	int pnode;

-	apicid = per_cpu(x86_cpu_to_apicid, cpu); /* ZZZ - cache node-local ? */
+	apicid = per_cpu(x86_cpu_to_apicid, cpu);
 	lapicid = apicid & 0x3f;	/* ZZZ macro needed */
 	pnode = uv_apicid_to_pnode(apicid);
 	val =
@@ -202,12 +202,10 @@ static unsigned int phys_pkg_id(int index_msb)
 	return uv_read_apic_id() >> index_msb;
 }

-#ifdef ZZZ		/* Needs x2apic patch */
 static void uv_send_IPI_self(int vector)
 {
 	apic_write(APIC_SELF_IPI, vector);
 }
-#endif

 struct genapic apic_x2apic_uv_x = {
 	.name = "UV large system",
@@ -215,15 +213,15 @@ struct genapic apic_x2apic_uv_x = {
 	.int_delivery_mode = dest_Fixed,
 	.int_dest_mode = (APIC_DEST_PHYSICAL != 0),
 	.target_cpus = uv_target_cpus,
-	.vector_allocation_domain = uv_vector_allocation_domain,/* Fixme ZZZ */
+	.vector_allocation_domain = uv_vector_allocation_domain,
 	.apic_id_registered = uv_apic_id_registered,
 	.init_apic_ldr = uv_init_apic_ldr,
 	.send_IPI_all = uv_send_IPI_all,
 	.send_IPI_allbutself = uv_send_IPI_allbutself,
 	.send_IPI_mask = uv_send_IPI_mask,
-	/* ZZZ.send_IPI_self = uv_send_IPI_self, */
+	.send_IPI_self = uv_send_IPI_self,
 	.cpu_mask_to_apicid = uv_cpu_mask_to_apicid,
-	.phys_pkg_id = phys_pkg_id,	/* Fixme ZZZ */
+	.phys_pkg_id = phys_pkg_id,
 	.get_apic_id = get_apic_id,
 	.set_apic_id = set_apic_id,
 	.apic_id_mask = (0xFFFFFFFFu),
@@ -286,12 +284,13 @@ static __init void map_low_mmrs(void)

 enum map_type {map_wb, map_uc};

-static __init void map_high(char *id, unsigned long base, int shift, enum map_type map_type)
+static __init void map_high(char *id, unsigned long base, int shift,
+			    int max_pnode, enum map_type map_type)
 {
 	unsigned long bytes, paddr;

 	paddr = base << shift;
-	bytes = (1UL << shift);
+	bytes = (1UL << shift) * (max_pnode + 1);
 	printk(KERN_INFO "UV: Map %s_HI 0x%lx - 0x%lx\n", id, paddr,
 						paddr + bytes);
 	if (map_type == map_uc)
@@ -307,7 +306,7 @@ static __init void map_gru_high(int max_pnode)
 	gru.v = uv_read_local_mmr(UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR);
 	if (gru.s.enable)
-		map_high("GRU", gru.s.base, shift, map_wb);
+		map_high("GRU", gru.s.base, shift, max_pnode, map_wb);
 }

 static __init void map_config_high(int max_pnode)
@@ -317,7 +316,7 @@ static __init void map_config_high(int max_pnode)
 	cfg.v = uv_read_local_mmr(UVH_RH_GAM_CFG_OVERLAY_CONFIG_MMR);
 	if (cfg.s.enable)
-		map_high("CONFIG", cfg.s.base, shift, map_uc);
+		map_high("CONFIG", cfg.s.base, shift, max_pnode, map_uc);
 }

 static __init void map_mmr_high(int max_pnode)
@@ -327,7 +326,7 @@ static __init void map_mmr_high(int max_pnode)
 	mmr.v = uv_read_local_mmr(UVH_RH_GAM_MMR_OVERLAY_CONFIG_MMR);
 	if (mmr.s.enable)
-		map_high("MMR", mmr.s.base, shift, map_uc);
+		map_high("MMR", mmr.s.base, shift, max_pnode, map_uc);
 }

 static __init void map_mmioh_high(int max_pnode)
@@ -337,7 +336,7 @@ static __init void map_mmioh_high(int max_pnode)
 	mmioh.v = uv_read_local_mmr(UVH_RH_GAM_MMIOH_OVERLAY_CONFIG_MMR);
 	if (mmioh.s.enable)
-		map_high("MMIOH", mmioh.s.base, shift, map_uc);
+		map_high("MMIOH", mmioh.s.base, shift, max_pnode, map_uc);
 }

 static __init void uv_rtc_init(void)
...
@@ -35,6 +35,7 @@ void __init reserve_ebda_region(void)

 	/* start of EBDA area */
 	ebda_addr = get_bios_ebda();
+	printk(KERN_INFO "BIOS EBDA/lowmem at: %08x/%08x\n", ebda_addr, lowmem);

 	/* Fixup: bios puts an EBDA in the top 64K segment */
 	/* of conventional memory, but does not adjust lowmem. */
...
@@ -115,13 +115,17 @@ static void hpet_reserve_platform_timers(unsigned long id)
 	hd.hd_phys_address = hpet_address;
 	hd.hd_address = hpet;
 	hd.hd_nirqs = nrtimers;
-	hd.hd_flags = HPET_DATA_PLATFORM;
 	hpet_reserve_timer(&hd, 0);

 #ifdef CONFIG_HPET_EMULATE_RTC
 	hpet_reserve_timer(&hd, 1);
 #endif

+	/*
+	 * NOTE that hd_irq[] reflects IOAPIC input pins (LEGACY_8254
+	 * is wrong for i8259!) not the output IRQ.  Many BIOS writers
+	 * don't bother configuring *any* comparator interrupts.
+	 */
 	hd.hd_irq[0] = HPET_LEGACY_8254;
 	hd.hd_irq[1] = HPET_LEGACY_RTC;
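
[Editor's note: the comparators reserved here are what the /dev/hpet driver later hands out to user space. A quick way to poke at that interface (the HPET_INFO ioctl and struct hpet_info come from linux/hpet.h; see the HPET driver documentation for the full API) looks roughly like this -- illustration only, not part of the patch:]

#include <linux/hpet.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
	struct hpet_info info;
	int fd = open("/dev/hpet", O_RDONLY);

	if (fd < 0)
		return 1;
	if (ioctl(fd, HPET_INFO, &info) < 0)	/* query this comparator */
		return 1;
	printf("timer %hu, flags %#lx, irq freq %lu\n",
	       info.hi_timer, info.hi_flags, info.hi_ireqfreq);
	return 0;
}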
...
@@ -135,7 +135,7 @@ DEFINE_PER_CPU(vector_irq_t, vector_irq) = {
 	[IRQ15_VECTOR + 1 ... NR_VECTORS - 1] = -1
 };

-static void __init init_ISA_irqs (void)
+void __init init_ISA_irqs(void)
 {
 	int i;
@@ -164,22 +164,8 @@ static void __init init_ISA_irqs (void)

 void init_IRQ(void) __attribute__((weak, alias("native_init_IRQ")));

-void __init native_init_IRQ(void)
+static void __init smp_intr_init(void)
 {
-	int i;
-
-	init_ISA_irqs();
-	/*
-	 * Cover the whole vector space, no vector can escape
-	 * us. (some of these will be overridden and become
-	 * 'special' SMP interrupts)
-	 */
-	for (i = 0; i < (NR_VECTORS - FIRST_EXTERNAL_VECTOR); i++) {
-		int vector = FIRST_EXTERNAL_VECTOR + i;
-		if (vector != IA32_SYSCALL_VECTOR)
-			set_intr_gate(vector, interrupt[i]);
-	}
-
 #ifdef CONFIG_SMP
 	/*
 	 * The reschedule interrupt is a CPU-to-CPU reschedule-helper
@@ -207,6 +193,12 @@ void __init native_init_IRQ(void)
 	/* Low priority IPI to cleanup after moving an irq */
 	set_intr_gate(IRQ_MOVE_CLEANUP_VECTOR, irq_move_cleanup_interrupt);
 #endif
+}
+
+static void __init apic_intr_init(void)
+{
+	smp_intr_init();
+
 	alloc_intr_gate(THERMAL_APIC_VECTOR, thermal_interrupt);
 	alloc_intr_gate(THRESHOLD_APIC_VECTOR, threshold_interrupt);
@@ -216,6 +208,25 @@ void __init native_init_IRQ(void)
 	/* IPI vectors for APIC spurious and error interrupts */
 	alloc_intr_gate(SPURIOUS_APIC_VECTOR, spurious_interrupt);
 	alloc_intr_gate(ERROR_APIC_VECTOR, error_interrupt);
+}
+
+void __init native_init_IRQ(void)
+{
+	int i;
+
+	init_ISA_irqs();
+	/*
+	 * Cover the whole vector space, no vector can escape
+	 * us. (some of these will be overridden and become
+	 * 'special' SMP interrupts)
+	 */
+	for (i = 0; i < (NR_VECTORS - FIRST_EXTERNAL_VECTOR); i++) {
+		int vector = FIRST_EXTERNAL_VECTOR + i;
+		if (vector != IA32_SYSCALL_VECTOR)
+			set_intr_gate(vector, interrupt[i]);
+	}
+
+	apic_intr_init();
+
 	if (!acpi_ioapic)
 		setup_irq(2, &irq2);
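
[Editor's note: the refactored native_init_IRQ() keeps the classic pattern intact: point every vector at a generic stub first, then let the SMP/APIC init override the special ones. A toy model of the idiom -- table size and handler names invented purely for illustration:]

#include <stdio.h>

#define NR_VECTORS 16			/* tiny table for illustration */

static void default_stub(int v) { printf("vector %d: default stub\n", v); }
static void reschedule(int v)   { printf("vector %d: reschedule IPI\n", v); }

static void (*vector_table[NR_VECTORS])(int);

int main(void)
{
	int i;

	/* cover the whole vector space, no vector can escape us */
	for (i = 0; i < NR_VECTORS; i++)
		vector_table[i] = default_stub;

	/* some of these become 'special' SMP interrupts */
	vector_table[2] = reschedule;

	vector_table[2](2);
	vector_table[3](3);
	return 0;
}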
...
@@ -123,7 +123,7 @@ void cpu_idle(void)
 	}
 }

-void __show_registers(struct pt_regs *regs, int all)
+void __show_regs(struct pt_regs *regs, int all)
 {
 	unsigned long cr0 = 0L, cr2 = 0L, cr3 = 0L, cr4 = 0L;
 	unsigned long d0, d1, d2, d3, d6, d7;
@@ -189,7 +189,7 @@ void __show_registers(struct pt_regs *regs, int all)

 void show_regs(struct pt_regs *regs)
 {
-	__show_registers(regs, 1);
+	__show_regs(regs, 1);
 	show_trace(NULL, regs, &regs->sp, regs->bp);
 }
...
@@ -136,7 +136,7 @@ void cpu_idle(void)
 }

 /* Prints also some state that isn't saved in the pt_regs */
-void __show_regs(struct pt_regs *regs)
+void __show_regs(struct pt_regs *regs, int all)
 {
 	unsigned long cr0 = 0L, cr2 = 0L, cr3 = 0L, cr4 = 0L, fs, gs, shadowgs;
 	unsigned long d0, d1, d2, d3, d6, d7;
@@ -175,6 +175,9 @@ void __show_regs(struct pt_regs *regs)
 	rdmsrl(MSR_GS_BASE, gs);
 	rdmsrl(MSR_KERNEL_GS_BASE, shadowgs);

+	if (!all)
+		return;
+
 	cr0 = read_cr0();
 	cr2 = read_cr2();
 	cr3 = read_cr3();
@@ -200,7 +203,7 @@ void __show_regs(struct pt_regs *regs)
 void show_regs(struct pt_regs *regs)
 {
 	printk(KERN_INFO "CPU %d:", smp_processor_id());
-	__show_regs(regs);
+	__show_regs(regs, 1);
 	show_trace(NULL, regs, (void *)(regs + 1), regs->bp);
 }
...
@@ -354,9 +354,27 @@ static void ati_force_hpet_resume(void)
 	printk(KERN_DEBUG "Force enabled HPET at resume\n");
 }

+static u32 ati_ixp4x0_rev(struct pci_dev *dev)
+{
+	u32 d;
+	u8 b;
+
+	pci_read_config_byte(dev, 0xac, &b);
+	b &= ~(1<<5);
+	pci_write_config_byte(dev, 0xac, b);
+	pci_read_config_dword(dev, 0x70, &d);
+	d |= 1<<8;
+	pci_write_config_dword(dev, 0x70, d);
+	pci_read_config_dword(dev, 0x8, &d);
+	d &= 0xff;
+	dev_printk(KERN_DEBUG, &dev->dev, "SB4X0 revision 0x%x\n", d);
+
+	return d;
+}
+
 static void ati_force_enable_hpet(struct pci_dev *dev)
 {
-	u32 uninitialized_var(val);
+	u32 d, val;
+	u8 b;

 	if (hpet_address || force_hpet_address)
 		return;
@@ -366,14 +384,33 @@ static void ati_force_enable_hpet(struct pci_dev *dev)
 		return;
 	}

+	d = ati_ixp4x0_rev(dev);
+	if (d < 0x82)
+		return;
+
+	/* base address */
 	pci_write_config_dword(dev, 0x14, 0xfed00000);
 	pci_read_config_dword(dev, 0x14, &val);
+
+	/* enable interrupt */
+	outb(0x72, 0xcd6); b = inb(0xcd7);
+	b |= 0x1;
+	outb(0x72, 0xcd6); outb(b, 0xcd7);
+	outb(0x72, 0xcd6); b = inb(0xcd7);
+	if (!(b & 0x1))
+		return;
+	pci_read_config_dword(dev, 0x64, &d);
+	d |= (1<<10);
+	pci_write_config_dword(dev, 0x64, d);
+	pci_read_config_dword(dev, 0x64, &d);
+	if (!(d & (1<<10)))
+		return;
+
 	force_hpet_address = val;
 	force_hpet_resume_type = ATI_FORCE_HPET_RESUME;
 	dev_printk(KERN_DEBUG, &dev->dev, "Force enabled HPET at 0x%lx\n",
 		   force_hpet_address);
 	cached_dev = dev;
-	return;
 }
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATI, PCI_DEVICE_ID_ATI_IXP400_SMBUS,
 			 ati_force_enable_hpet);
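
[Editor's note: the quirk reaches the SB400's power-management registers through the classic index/data port pair at 0xcd6/0xcd7 -- write the register index to 0xcd6, then read or write its value at 0xcd7. A minimal user-space equivalent of one such access; register 0x72 is the one the quirk touches, requires root, illustration only:]

#include <sys/io.h>
#include <stdio.h>

int main(void)
{
	if (ioperm(0xcd6, 2, 1))	/* gain access to 0xcd6-0xcd7 */
		return 1;
	outb(0x72, 0xcd6);		/* select PM register 0x72 */
	printf("PM reg 0x72 = %#x\n", inb(0xcd7));
	return 0;
}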
...
@@ -302,7 +302,7 @@ static void __init relocate_initrd(void)
 		if (clen > MAX_MAP_CHUNK-slop)
 			clen = MAX_MAP_CHUNK-slop;
 		mapaddr = ramdisk_image & PAGE_MASK;
-		p = early_ioremap(mapaddr, clen+slop);
+		p = early_memremap(mapaddr, clen+slop);
 		memcpy(q, p+slop, clen);
 		early_iounmap(p, clen+slop);
 		q += clen;
@@ -379,7 +379,7 @@ static void __init parse_setup_data(void)
 		return;
 	pa_data = boot_params.hdr.setup_data;
 	while (pa_data) {
-		data = early_ioremap(pa_data, PAGE_SIZE);
+		data = early_memremap(pa_data, PAGE_SIZE);
 		switch (data->type) {
 		case SETUP_E820_EXT:
 			parse_e820_ext(data, pa_data);
@@ -402,7 +402,7 @@ static void __init e820_reserve_setup_data(void)
 		return;
 	pa_data = boot_params.hdr.setup_data;
 	while (pa_data) {
-		data = early_ioremap(pa_data, sizeof(*data));
+		data = early_memremap(pa_data, sizeof(*data));
 		e820_update_range(pa_data, sizeof(*data)+data->len,
 			 E820_RAM, E820_RESERVED_KERN);
 		found = 1;
@@ -428,7 +428,7 @@ static void __init reserve_early_setup_data(void)
 		return;
 	pa_data = boot_params.hdr.setup_data;
 	while (pa_data) {
-		data = early_ioremap(pa_data, sizeof(*data));
+		data = early_memremap(pa_data, sizeof(*data));
 		sprintf(buf, "setup data %x", data->type);
 		reserve_early(pa_data, pa_data+sizeof(*data)+data->len, buf);
 		pa_data = data->next;
@@ -998,6 +998,8 @@ void __init setup_arch(char **cmdline_p)
 	 */
 	acpi_boot_table_init();

+	early_acpi_boot_init();
+
 #ifdef CONFIG_ACPI_NUMA
 	/*
 	 * Parse SRAT to discover nodes.
...
@@ -334,14 +334,17 @@ static void __cpuinit start_secondary(void *unused)
 	 * does not change while we are assigning vectors to cpus.  Holding
 	 * this lock ensures we don't half assign or remove an irq from a cpu.
 	 */
-	ipi_call_lock_irq();
+	ipi_call_lock();
 	lock_vector_lock();
 	__setup_vector_irq(smp_processor_id());
 	cpu_set(smp_processor_id(), cpu_online_map);
 	unlock_vector_lock();
-	ipi_call_unlock_irq();
+	ipi_call_unlock();
 	per_cpu(cpu_state, smp_processor_id()) = CPU_ONLINE;

+	/* enable local interrupts */
+	local_irq_enable();
+
 	setup_secondary_clock();

 	wmb();
@@ -596,10 +599,12 @@ wakeup_secondary_cpu(int logical_apicid, unsigned long start_eip)
 	 * Give the other CPU some time to accept the IPI.
 	 */
 	udelay(200);
+	if (APIC_INTEGRATED(apic_version[phys_apicid])) {
 		maxlvt = lapic_get_maxlvt();
 		if (maxlvt > 3)			/* Due to the Pentium erratum 3AP.  */
 			apic_write(APIC_ESR, 0);
 		accept_status = (apic_read(APIC_ESR) & 0xEF);
+	}
 	pr_debug("NMI sent.\n");

 	if (send_status)
@@ -1256,39 +1261,6 @@ void __init native_smp_cpus_done(unsigned int max_cpus)
 	check_nmi_watchdog();
 }

-#ifdef CONFIG_HOTPLUG_CPU
-
-static void remove_siblinginfo(int cpu)
-{
-	int sibling;
-	struct cpuinfo_x86 *c = &cpu_data(cpu);
-
-	for_each_cpu_mask_nr(sibling, per_cpu(cpu_core_map, cpu)) {
-		cpu_clear(cpu, per_cpu(cpu_core_map, sibling));
-		/*/
-		 * last thread sibling in this cpu core going down
-		 */
-		if (cpus_weight(per_cpu(cpu_sibling_map, cpu)) == 1)
-			cpu_data(sibling).booted_cores--;
-	}
-
-	for_each_cpu_mask_nr(sibling, per_cpu(cpu_sibling_map, cpu))
-		cpu_clear(cpu, per_cpu(cpu_sibling_map, sibling));
-	cpus_clear(per_cpu(cpu_sibling_map, cpu));
-	cpus_clear(per_cpu(cpu_core_map, cpu));
-	c->phys_proc_id = 0;
-	c->cpu_core_id = 0;
-	cpu_clear(cpu, cpu_sibling_setup_map);
-}
-
-static int additional_cpus __initdata = -1;
-
-static __init int setup_additional_cpus(char *s)
-{
-	return s && get_option(&s, &additional_cpus) ? 0 : -EINVAL;
-}
-early_param("additional_cpus", setup_additional_cpus);
-
 /*
  * cpu_possible_map should be static, it cannot change as cpu's
  * are onlined, or offlined. The reason is per-cpu data-structures
@@ -1308,21 +1280,13 @@ early_param("additional_cpus", setup_additional_cpus);
  */
 __init void prefill_possible_map(void)
 {
-	int i;
-	int possible;
+	int i, possible;

 	/* no processor from mptable or madt */
 	if (!num_processors)
 		num_processors = 1;

-	if (additional_cpus == -1) {
-		if (disabled_cpus > 0)
-			additional_cpus = disabled_cpus;
-		else
-			additional_cpus = 0;
-	}
-
-	possible = num_processors + additional_cpus;
+	possible = num_processors + disabled_cpus;
 	if (possible > NR_CPUS)
 		possible = NR_CPUS;
@@ -1335,6 +1299,31 @@ __init void prefill_possible_map(void)
 	nr_cpu_ids = possible;
 }

+#ifdef CONFIG_HOTPLUG_CPU
+
+static void remove_siblinginfo(int cpu)
+{
+	int sibling;
+	struct cpuinfo_x86 *c = &cpu_data(cpu);
+
+	for_each_cpu_mask_nr(sibling, per_cpu(cpu_core_map, cpu)) {
+		cpu_clear(cpu, per_cpu(cpu_core_map, sibling));
+		/*/
+		 * last thread sibling in this cpu core going down
+		 */
+		if (cpus_weight(per_cpu(cpu_sibling_map, cpu)) == 1)
+			cpu_data(sibling).booted_cores--;
+	}
+
+	for_each_cpu_mask_nr(sibling, per_cpu(cpu_sibling_map, cpu))
+		cpu_clear(cpu, per_cpu(cpu_sibling_map, sibling));
+	cpus_clear(per_cpu(cpu_sibling_map, cpu));
+	cpus_clear(per_cpu(cpu_core_map, cpu));
+	c->phys_proc_id = 0;
+	c->cpu_core_id = 0;
+	cpu_clear(cpu, cpu_sibling_setup_map);
+}
+
 static void __ref remove_cpu_from_maps(int cpu)
 {
 	cpu_clear(cpu, cpu_online_map);
...
@@ -47,10 +47,9 @@ unsigned long profile_pc(struct pt_regs *regs)
 	unsigned long pc = instruction_pointer(regs);

 #ifdef CONFIG_SMP
-	if (!v8086_mode(regs) && SEGMENT_IS_KERNEL_CODE(regs->cs) &&
-	    in_lock_functions(pc)) {
+	if (!user_mode_vm(regs) && in_lock_functions(pc)) {
 #ifdef CONFIG_FRAME_POINTER
-		return *(unsigned long *)(regs->bp + 4);
+		return *(unsigned long *)(regs->bp + sizeof(long));
 #else
 		unsigned long *sp = (unsigned long *)&regs->sp;
@@ -95,6 +94,7 @@ irqreturn_t timer_interrupt(int irq, void *dev_id)

 	do_timer_interrupt_hook();

+#ifdef CONFIG_MCA
 	if (MCA_bus) {
 		/* The PS/2 uses level-triggered interrupts.  You can't
 		turn them off, nor would you want to (any attempt to
@@ -108,6 +108,7 @@ irqreturn_t timer_interrupt(int irq, void *dev_id)
 		u8 irq_v = inb_p( 0x61 );	/* read the current state */
 		outb_p( irq_v|0x80, 0x61 );	/* reset the IRQ */
 	}
+#endif

 	return IRQ_HANDLED;
 }
...
@@ -16,6 +16,7 @@
 #include <linux/interrupt.h>
 #include <linux/module.h>
 #include <linux/time.h>
+#include <linux/mca.h>

 #include <asm/i8253.h>
 #include <asm/hpet.h>
@@ -33,23 +34,34 @@ unsigned long profile_pc(struct pt_regs *regs)
 	/* Assume the lock function has either no stack frame or a copy
 	   of flags from PUSHF
 	   Eflags always has bits 22 and up cleared unlike kernel addresses. */
-	if (!user_mode(regs) && in_lock_functions(pc)) {
+	if (!user_mode_vm(regs) && in_lock_functions(pc)) {
+#ifdef CONFIG_FRAME_POINTER
+		return *(unsigned long *)(regs->bp + sizeof(long));
+#else
 		unsigned long *sp = (unsigned long *)regs->sp;
 		if (sp[0] >> 22)
 			return sp[0];
 		if (sp[1] >> 22)
 			return sp[1];
+#endif
 	}
 	return pc;
 }
 EXPORT_SYMBOL(profile_pc);

-static irqreturn_t timer_event_interrupt(int irq, void *dev_id)
+irqreturn_t timer_interrupt(int irq, void *dev_id)
 {
 	add_pda(irq0_irqs, 1);

 	global_clock_event->event_handler(global_clock_event);

+#ifdef CONFIG_MCA
+	if (MCA_bus) {
+		u8 irq_v = inb_p(0x61);		/* read the current state */
+		outb_p(irq_v|0x80, 0x61);	/* reset the IRQ */
+	}
+#endif
+
 	return IRQ_HANDLED;
 }
@@ -100,7 +112,7 @@ unsigned long __init calibrate_cpu(void)
 }

 static struct irqaction irq0 = {
-	.handler	= timer_event_interrupt,
+	.handler	= timer_interrupt,
 	.flags		= IRQF_DISABLED | IRQF_IRQPOLL | IRQF_NOBALANCING,
 	.mask		= CPU_MASK_NONE,
 	.name		= "timer"
@@ -111,16 +123,13 @@ void __init hpet_time_init(void)
 	if (!hpet_enable())
 		setup_pit_timer();

-	irq0.mask = cpumask_of_cpu(0);
 	setup_irq(0, &irq0);
 }

 void __init time_init(void)
 {
 	tsc_init();
-	if (cpu_has(&boot_cpu_data, X86_FEATURE_RDTSCP))
-		vgetcpu_mode = VGETCPU_RDTSCP;
-	else
-		vgetcpu_mode = VGETCPU_LSL;

 	late_time_init = choose_time_init();
 }
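
[Editor's note: the bits-22-and-up test in profile_pc() above is the whole trick: a saved EFLAGS word on the stack is a small integer, while any kernel text address has its top bits set. A two-line demonstration, with made-up example values, not part of the patch:]

#include <stdio.h>

int main(void)
{
	unsigned long eflags = 0x246;			/* typical saved EFLAGS image */
	unsigned long text = 0xffffffff80212345UL;	/* made-up kernel text address */

	printf("eflags >> 22 = %#lx (rejected as a PC)\n", eflags >> 22);
	printf("text   >> 22 = %#lx (accepted as a PC)\n", text >> 22);
	return 0;
}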
@@ -7,13 +7,11 @@
  */

 /*
- * 'Traps.c' handles hardware traps and faults after we have saved some
- * state in 'asm.s'.
+ * Handle hardware traps and faults.
  */
 #include <linux/interrupt.h>
 #include <linux/kallsyms.h>
 #include <linux/spinlock.h>
-#include <linux/highmem.h>
 #include <linux/kprobes.h>
 #include <linux/uaccess.h>
 #include <linux/utsname.h>
@@ -32,6 +30,8 @@
 #include <linux/bug.h>
 #include <linux/nmi.h>
 #include <linux/mm.h>
+#include <linux/smp.h>
+#include <linux/io.h>

 #ifdef CONFIG_EISA
 #include <linux/ioport.h>
@@ -46,21 +46,31 @@
 #include <linux/edac.h>
 #endif

-#include <asm/arch_hooks.h>
 #include <asm/stacktrace.h>
 #include <asm/processor.h>
 #include <asm/debugreg.h>
 #include <asm/atomic.h>
 #include <asm/system.h>
 #include <asm/unwind.h>
+#include <asm/traps.h>
 #include <asm/desc.h>
 #include <asm/i387.h>

+#include <mach_traps.h>
+
+#ifdef CONFIG_X86_64
+#include <asm/pgalloc.h>
+#include <asm/proto.h>
+#include <asm/pda.h>
+#else
+#include <asm/processor-flags.h>
+#include <asm/arch_hooks.h>
 #include <asm/nmi.h>
 #include <asm/smp.h>
 #include <asm/io.h>
 #include <asm/traps.h>

-#include "mach_traps.h"
+#include "cpu/mcheck/mce.h"

 DECLARE_BITMAP(used_vectors, NR_VECTORS);
 EXPORT_SYMBOL_GPL(used_vectors);
@@ -77,418 +87,104 @@ char ignore_fpu_irq;
  */
 gate_desc idt_table[256]
 	__attribute__((__section__(".data.idt"))) = { { { { 0, 0 } } }, };
-
-int panic_on_unrecovered_nmi;
-int kstack_depth_to_print = 24;
-static unsigned int code_bytes = 64;
-static int ignore_nmis;
-static int die_counter;
-
-void printk_address(unsigned long address, int reliable)
-{
-#ifdef CONFIG_KALLSYMS
-	unsigned long offset = 0;
-	unsigned long symsize;
-	const char *symname;
-	char *modname;
-	char *delim = ":";
-	char namebuf[KSYM_NAME_LEN];
-	char reliab[4] = "";
-
-	symname = kallsyms_lookup(address, &symsize, &offset,
-					&modname, namebuf);
-	if (!symname) {
-		printk(" [<%08lx>]\n", address);
-		return;
-	}
-	if (!reliable)
-		strcpy(reliab, "? ");
-
-	if (!modname)
-		modname = delim = "";
-	printk(" [<%08lx>] %s%s%s%s%s+0x%lx/0x%lx\n",
-		address, reliab, delim, modname, delim, symname, offset, symsize);
-#else
-	printk(" [<%08lx>]\n", address);
-#endif
-}
-
-static inline int valid_stack_ptr(struct thread_info *tinfo,
-			void *p, unsigned int size)
-{
-	void *t = tinfo;
-	return p > t && p <= t + THREAD_SIZE - size;
-}
-
-/* The form of the top of the frame on the stack */
-struct stack_frame {
-	struct stack_frame *next_frame;
-	unsigned long return_address;
-};
-
-static inline unsigned long
-print_context_stack(struct thread_info *tinfo,
-		unsigned long *stack, unsigned long bp,
-		const struct stacktrace_ops *ops, void *data)
-{
-	struct stack_frame *frame = (struct stack_frame *)bp;
-
-	while (valid_stack_ptr(tinfo, stack, sizeof(*stack))) {
-		unsigned long addr;
-
-		addr = *stack;
-		if (__kernel_text_address(addr)) {
-			if ((unsigned long) stack == bp + 4) {
-				ops->address(data, addr, 1);
-				frame = frame->next_frame;
-				bp = (unsigned long) frame;
-			} else {
-				ops->address(data, addr, bp == 0);
-			}
-		}
-		stack++;
-	}
-	return bp;
-}
-
-void dump_trace(struct task_struct *task, struct pt_regs *regs,
-		unsigned long *stack, unsigned long bp,
-		const struct stacktrace_ops *ops, void *data)
-{
-	if (!task)
-		task = current;
-
-	if (!stack) {
-		unsigned long dummy;
-		stack = &dummy;
-		if (task != current)
-			stack = (unsigned long *)task->thread.sp;
-	}
-
-#ifdef CONFIG_FRAME_POINTER
-	if (!bp) {
-		if (task == current) {
-			/* Grab bp right from our regs */
-			asm("movl %%ebp, %0" : "=r" (bp) :);
-		} else {
-			/* bp is the last reg pushed by switch_to */
-			bp = *(unsigned long *) task->thread.sp;
-		}
-	}
-#endif
-
-	for (;;) {
-		struct thread_info *context;
-
-		context = (struct thread_info *)
-			((unsigned long)stack & (~(THREAD_SIZE - 1)));
-		bp = print_context_stack(context, stack, bp, ops, data);
-		/*
-		 * Should be after the line below, but somewhere
-		 * in early boot context comes out corrupted and we
-		 * can't reference it:
-		 */
-		if (ops->stack(data, "IRQ") < 0)
-			break;
-		stack = (unsigned long *)context->previous_esp;
-		if (!stack)
-			break;
-		touch_nmi_watchdog();
-	}
-}
-EXPORT_SYMBOL(dump_trace);
-
-static void
-print_trace_warning_symbol(void *data, char *msg, unsigned long symbol)
-{
-	printk(data);
-	print_symbol(msg, symbol);
-	printk("\n");
-}
-
-static void print_trace_warning(void *data, char *msg)
-{
-	printk("%s%s\n", (char *)data, msg);
-}
-
-static int print_trace_stack(void *data, char *name)
-{
-	return 0;
-}
-
-/*
- * Print one address/symbol entries per line.
- */
-static void print_trace_address(void *data, unsigned long addr, int reliable)
-{
-	printk("%s [<%08lx>] ", (char *)data, addr);
-	if (!reliable)
-		printk("? ");
-	print_symbol("%s\n", addr);
-	touch_nmi_watchdog();
-}
-
-static const struct stacktrace_ops print_trace_ops = {
-	.warning = print_trace_warning,
-	.warning_symbol = print_trace_warning_symbol,
-	.stack = print_trace_stack,
-	.address = print_trace_address,
-};
-
-static void
-show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs,
-		unsigned long *stack, unsigned long bp, char *log_lvl)
-{
-	dump_trace(task, regs, stack, bp, &print_trace_ops, log_lvl);
-	printk("%s =======================\n", log_lvl);
-}
-
-void show_trace(struct task_struct *task, struct pt_regs *regs,
-		unsigned long *stack, unsigned long bp)
-{
-	show_trace_log_lvl(task, regs, stack, bp, "");
-}
-
-static void
-show_stack_log_lvl(struct task_struct *task, struct pt_regs *regs,
-		   unsigned long *sp, unsigned long bp, char *log_lvl)
-{
-	unsigned long *stack;
-	int i;
-
-	if (sp == NULL) {
-		if (task)
-			sp = (unsigned long *)task->thread.sp;
-		else
-			sp = (unsigned long *)&sp;
-	}
-
-	stack = sp;
-	for (i = 0; i < kstack_depth_to_print; i++) {
-		if (kstack_end(stack))
-			break;
-		if (i && ((i % 8) == 0))
-			printk("\n%s ", log_lvl);
-		printk("%08lx ", *stack++);
-	}
-	printk("\n%sCall Trace:\n", log_lvl);
-
-	show_trace_log_lvl(task, regs, sp, bp, log_lvl);
-}
-
-void show_stack(struct task_struct *task, unsigned long *sp)
-{
-	printk(" ");
-	show_stack_log_lvl(task, NULL, sp, 0, "");
-}
-
-/*
- * The architecture-independent dump_stack generator
- */
-void dump_stack(void)
-{
-	unsigned long bp = 0;
-	unsigned long stack;
-
-#ifdef CONFIG_FRAME_POINTER
-	if (!bp)
-		asm("movl %%ebp, %0" : "=r" (bp):);
-#endif
-
-	printk("Pid: %d, comm: %.20s %s %s %.*s\n",
-		current->pid, current->comm, print_tainted(),
-		init_utsname()->release,
-		(int)strcspn(init_utsname()->version, " "),
-		init_utsname()->version);
-
-	show_trace(current, NULL, &stack, bp);
-}
-EXPORT_SYMBOL(dump_stack);
-
-void show_registers(struct pt_regs *regs)
-{
-	int i;
-
-	print_modules();
-	__show_registers(regs, 0);
-
-	printk(KERN_EMERG "Process %.*s (pid: %d, ti=%p task=%p task.ti=%p)",
-		TASK_COMM_LEN, current->comm, task_pid_nr(current),
-		current_thread_info(), current, task_thread_info(current));
-	/*
-	 * When in-kernel, we also print out the stack and code at the
-	 * time of the fault..
-	 */
-	if (!user_mode_vm(regs)) {
-		unsigned int code_prologue = code_bytes * 43 / 64;
-		unsigned int code_len = code_bytes;
-		unsigned char c;
-		u8 *ip;
-
-		printk("\n" KERN_EMERG "Stack: ");
-		show_stack_log_lvl(NULL, regs, &regs->sp, 0, KERN_EMERG);
-
-		printk(KERN_EMERG "Code: ");
-
-		ip = (u8 *)regs->ip - code_prologue;
-		if (ip < (u8 *)PAGE_OFFSET || probe_kernel_address(ip, c)) {
-			/* try starting at EIP */
-			ip = (u8 *)regs->ip;
-			code_len = code_len - code_prologue + 1;
-		}
-		for (i = 0; i < code_len; i++, ip++) {
-			if (ip < (u8 *)PAGE_OFFSET ||
-					probe_kernel_address(ip, c)) {
-				printk(" Bad EIP value.");
-				break;
-			}
-			if (ip == (u8 *)regs->ip)
-				printk("<%02x> ", c);
-			else
-				printk("%02x ", c);
-		}
-	}
-	printk("\n");
-}
-
-int is_valid_bugaddr(unsigned long ip)
-{
-	unsigned short ud2;
-
-	if (ip < PAGE_OFFSET)
-		return 0;
-	if (probe_kernel_address((unsigned short *)ip, ud2))
-		return 0;
-
-	return ud2 == 0x0b0f;
-}
-
-static raw_spinlock_t die_lock = __RAW_SPIN_LOCK_UNLOCKED;
-static int die_owner = -1;
-static unsigned int die_nest_count;
-
-unsigned __kprobes long oops_begin(void)
-{
-	unsigned long flags;
-
-	oops_enter();
-
-	if (die_owner != raw_smp_processor_id()) {
-		console_verbose();
-		raw_local_irq_save(flags);
-		__raw_spin_lock(&die_lock);
-		die_owner = smp_processor_id();
-		die_nest_count = 0;
-		bust_spinlocks(1);
-	} else {
-		raw_local_irq_save(flags);
-	}
-	die_nest_count++;
-	return flags;
-}
-
-void __kprobes oops_end(unsigned long flags, struct pt_regs *regs, int signr)
-{
-	bust_spinlocks(0);
-	die_owner = -1;
-	add_taint(TAINT_DIE);
-	__raw_spin_unlock(&die_lock);
-	raw_local_irq_restore(flags);
-
-	if (!regs)
-		return;
-
-	if (kexec_should_crash(current))
-		crash_kexec(regs);
-	if (in_interrupt())
-		panic("Fatal exception in interrupt");
-	if (panic_on_oops)
-		panic("Fatal exception");
-	oops_exit();
-	do_exit(signr);
-}
-
-int __kprobes __die(const char *str, struct pt_regs *regs, long err)
-{
-	unsigned short ss;
-	unsigned long sp;
-
-	printk(KERN_EMERG "%s: %04lx [#%d] ", str, err & 0xffff, ++die_counter);
-#ifdef CONFIG_PREEMPT
-	printk("PREEMPT ");
-#endif
-#ifdef CONFIG_SMP
-	printk("SMP ");
-#endif
-#ifdef CONFIG_DEBUG_PAGEALLOC
-	printk("DEBUG_PAGEALLOC");
-#endif
-	printk("\n");
-	if (notify_die(DIE_OOPS, str, regs, err,
-			current->thread.trap_no, SIGSEGV) == NOTIFY_STOP)
-		return 1;
-
-	show_registers(regs);
-	/* Executive summary in case the oops scrolled away */
-	sp = (unsigned long) (&regs->sp);
-	savesegment(ss, ss);
-	if (user_mode(regs)) {
-		sp = regs->sp;
-		ss = regs->ss & 0xffff;
-	}
-	printk(KERN_EMERG "EIP: [<%08lx>] ", regs->ip);
-	print_symbol("%s", regs->ip);
-	printk(" SS:ESP %04x:%08lx\n", ss, sp);
-	return 0;
-}
-
-/*
- * This is gone through when something in the kernel has done something bad
- * and is about to be terminated:
- */
-void die(const char *str, struct pt_regs *regs, long err)
-{
-	unsigned long flags = oops_begin();
-
-	if (die_nest_count < 3) {
-		report_bug(regs->ip, regs);
-
-		if (__die(str, regs, err))
-			regs = NULL;
-	} else {
-		printk(KERN_EMERG "Recursive die() failure, output suppressed\n");
-	}
-
-	oops_end(flags, regs, SIGSEGV);
-}
-
-static inline void
-die_if_kernel(const char *str, struct pt_regs *regs, long err)
-{
-	if (!user_mode_vm(regs))
-		die(str, regs, err);
-}
+#endif
+
+static int ignore_nmis;
+
+static inline void conditional_sti(struct pt_regs *regs)
+{
+	if (regs->flags & X86_EFLAGS_IF)
+		local_irq_enable();
+}
+
+static inline void preempt_conditional_sti(struct pt_regs *regs)
+{
+	inc_preempt_count();
+	if (regs->flags & X86_EFLAGS_IF)
+		local_irq_enable();
+}
+
+static inline void preempt_conditional_cli(struct pt_regs *regs)
+{
+	if (regs->flags & X86_EFLAGS_IF)
+		local_irq_disable();
+	dec_preempt_count();
+}
+
+#ifdef CONFIG_X86_32
+static inline void
+die_if_kernel(const char *str, struct pt_regs *regs, long err)
+{
+	if (!user_mode_vm(regs))
+		die(str, regs, err);
+}
+
+/*
+ * Perform the lazy TSS's I/O bitmap copy. If the TSS has an
+ * invalid offset set (the LAZY one) and the faulting thread has
+ * a valid I/O bitmap pointer, we copy the I/O bitmap in the TSS,
+ * we set the offset field correctly and return 1.
+ */
+static int lazy_iobitmap_copy(void)
+{
+	struct thread_struct *thread;
+	struct tss_struct *tss;
+	int cpu;
+
+	cpu = get_cpu();
+	tss = &per_cpu(init_tss, cpu);
+	thread = &current->thread;
+
+	if (tss->x86_tss.io_bitmap_base == INVALID_IO_BITMAP_OFFSET_LAZY &&
+	    thread->io_bitmap_ptr) {
+		memcpy(tss->io_bitmap, thread->io_bitmap_ptr,
+		       thread->io_bitmap_max);
+		/*
+		 * If the previously set map was extending to higher ports
+		 * than the current one, pad extra space with 0xff (no access).
+		 */
+		if (thread->io_bitmap_max < tss->io_bitmap_max) {
+			memset((char *) tss->io_bitmap +
+				thread->io_bitmap_max, 0xff,
+				tss->io_bitmap_max - thread->io_bitmap_max);
+		}
+		tss->io_bitmap_max = thread->io_bitmap_max;
+		tss->x86_tss.io_bitmap_base = IO_BITMAP_OFFSET;
+		tss->io_bitmap_owner = thread;
+		put_cpu();
+
+		return 1;
+	}
+	put_cpu();
+
+	return 0;
+}
+#endif
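
[Editor's note: the lazy copy that do_general_protection() now delegates to lazy_iobitmap_copy() is exactly what makes plain ioperm() users work: the syscall only updates the thread's private bitmap, and the first faulting port access triggers the copy into the TSS. From user space (needs root; the port number is arbitrary; illustration only):]

#include <sys/io.h>
#include <stdio.h>

int main(void)
{
	if (ioperm(0x80, 1, 1))		/* grant access to one I/O port */
		return 1;
	outb(0, 0x80);			/* first access faults -> lazy TSS copy */
	puts("port 0x80 write succeeded");
	return 0;
}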
 static void __kprobes
-do_trap(int trapnr, int signr, char *str, int vm86, struct pt_regs *regs,
+do_trap(int trapnr, int signr, char *str, struct pt_regs *regs,
 	long error_code, siginfo_t *info)
 {
 	struct task_struct *tsk = current;

+#ifdef CONFIG_X86_32
 	if (regs->flags & X86_VM_MASK) {
-		if (vm86)
+		/*
+		 * traps 0, 1, 3, 4, and 5 should be forwarded to vm86.
+		 * On nmi (interrupt 2), do_trap should not be called.
+		 */
+		if (trapnr < 6)
 			goto vm86_trap;
 		goto trap_signal;
 	}
+#endif

 	if (!user_mode(regs))
 		goto kernel_trap;

+#ifdef CONFIG_X86_32
 trap_signal:
+#endif
 	/*
 	 * We want error_code and trap_no set for userspace faults and
 	 * kernelspace faults which result in die(), but not
@@ -501,6 +197,18 @@ do_trap(int trapnr, int signr, char *str, struct pt_regs *regs,
 	tsk->thread.error_code = error_code;
 	tsk->thread.trap_no = trapnr;

+#ifdef CONFIG_X86_64
+	if (show_unhandled_signals && unhandled_signal(tsk, signr) &&
+	    printk_ratelimit()) {
+		printk(KERN_INFO
+		       "%s[%d] trap %s ip:%lx sp:%lx error:%lx",
+		       tsk->comm, tsk->pid, str,
+		       regs->ip, regs->sp, error_code);
+		print_vma_addr(" in ", regs->ip);
+		printk("\n");
+	}
+#endif
+
 	if (info)
 		force_sig_info(signr, info, tsk);
 	else
@@ -515,29 +223,29 @@ do_trap(int trapnr, int signr, char *str, struct pt_regs *regs,
 	}
 	return;

+#ifdef CONFIG_X86_32
 vm86_trap:
 	if (handle_vm86_trap((struct kernel_vm86_regs *) regs,
 						error_code, trapnr))
 		goto trap_signal;
 	return;
+#endif
 }

 #define DO_ERROR(trapnr, signr, str, name)				\
-void do_##name(struct pt_regs *regs, long error_code)			\
+dotraplinkage void do_##name(struct pt_regs *regs, long error_code)	\
 {									\
-	trace_hardirqs_fixup();						\
 	if (notify_die(DIE_TRAP, str, regs, error_code, trapnr, signr)	\
 							== NOTIFY_STOP)	\
 		return;							\
-	do_trap(trapnr, signr, str, 0, regs, error_code, NULL);	\
+	conditional_sti(regs);						\
+	do_trap(trapnr, signr, str, regs, error_code, NULL);		\
 }

-#define DO_ERROR_INFO(trapnr, signr, str, name, sicode, siaddr, irq)	\
-void do_##name(struct pt_regs *regs, long error_code)			\
+#define DO_ERROR_INFO(trapnr, signr, str, name, sicode, siaddr)	\
+dotraplinkage void do_##name(struct pt_regs *regs, long error_code)	\
 {									\
 	siginfo_t info;							\
-	if (irq)							\
-		local_irq_enable();					\
 	info.si_signo = signr;						\
 	info.si_errno = 0;						\
 	info.si_code = sicode;						\
@@ -545,90 +253,68 @@ void do_##name(struct pt_regs *regs, long error_code) \
 	if (notify_die(DIE_TRAP, str, regs, error_code, trapnr, signr)	\
 							== NOTIFY_STOP)	\
 		return;							\
-	do_trap(trapnr, signr, str, 0, regs, error_code, &info);	\
+	conditional_sti(regs);						\
+	do_trap(trapnr, signr, str, regs, error_code, &info);		\
 }

-#define DO_VM86_ERROR(trapnr, signr, str, name)				\
-void do_##name(struct pt_regs *regs, long error_code)			\
-{									\
-	if (notify_die(DIE_TRAP, str, regs, error_code, trapnr, signr)	\
-							== NOTIFY_STOP)	\
-		return;							\
-	do_trap(trapnr, signr, str, 1, regs, error_code, NULL);	\
-}
-
-#define DO_VM86_ERROR_INFO(trapnr, signr, str, name, sicode, siaddr)	\
-void do_##name(struct pt_regs *regs, long error_code)			\
-{									\
-	siginfo_t info;							\
-	info.si_signo = signr;						\
-	info.si_errno = 0;						\
-	info.si_code = sicode;						\
-	info.si_addr = (void __user *)siaddr;				\
-	trace_hardirqs_fixup();						\
-	if (notify_die(DIE_TRAP, str, regs, error_code, trapnr, signr)	\
-							== NOTIFY_STOP)	\
-		return;							\
-	do_trap(trapnr, signr, str, 1, regs, error_code, &info);	\
-}
-
-DO_VM86_ERROR_INFO(0, SIGFPE, "divide error", divide_error, FPE_INTDIV, regs->ip)
-#ifndef CONFIG_KPROBES
-DO_VM86_ERROR(3, SIGTRAP, "int3", int3)
-#endif
-DO_VM86_ERROR(4, SIGSEGV, "overflow", overflow)
-DO_VM86_ERROR(5, SIGSEGV, "bounds", bounds)
-DO_ERROR_INFO(6, SIGILL, "invalid opcode", invalid_op, ILL_ILLOPN, regs->ip, 0)
+DO_ERROR_INFO(0, SIGFPE, "divide error", divide_error, FPE_INTDIV, regs->ip)
+DO_ERROR(4, SIGSEGV, "overflow", overflow)
+DO_ERROR(5, SIGSEGV, "bounds", bounds)
+DO_ERROR_INFO(6, SIGILL, "invalid opcode", invalid_op, ILL_ILLOPN, regs->ip)
 DO_ERROR(9, SIGFPE, "coprocessor segment overrun", coprocessor_segment_overrun)
 DO_ERROR(10, SIGSEGV, "invalid TSS", invalid_TSS)
 DO_ERROR(11, SIGBUS, "segment not present", segment_not_present)
+#ifdef CONFIG_X86_32
 DO_ERROR(12, SIGBUS, "stack segment", stack_segment)
-DO_ERROR_INFO(17, SIGBUS, "alignment check", alignment_check, BUS_ADRALN, 0, 0)
-DO_ERROR_INFO(32, SIGILL, "iret exception", iret_error, ILL_BADSTK, 0, 1)
+#endif
+DO_ERROR_INFO(17, SIGBUS, "alignment check", alignment_check, BUS_ADRALN, 0)
+
+#ifdef CONFIG_X86_64
+/* Runs on IST stack */
+dotraplinkage void do_stack_segment(struct pt_regs *regs, long error_code)
+{
+	if (notify_die(DIE_TRAP, "stack segment", regs, error_code,
+			12, SIGBUS) == NOTIFY_STOP)
+		return;
+	preempt_conditional_sti(regs);
+	do_trap(12, SIGBUS, "stack segment", regs, error_code, NULL);
+	preempt_conditional_cli(regs);
+}
+
+dotraplinkage void do_double_fault(struct pt_regs *regs, long error_code)
+{
+	static const char str[] = "double fault";
+	struct task_struct *tsk = current;
+
+	/* Return not checked because double check cannot be ignored */
+	notify_die(DIE_TRAP, str, regs, error_code, 8, SIGSEGV);
+
+	tsk->thread.error_code = error_code;
+	tsk->thread.trap_no = 8;
+
+	/* This is always a kernel trap and never fixable (and thus must
+	   never return). */
+	for (;;)
+		die(str, regs, error_code);
+}
+#endif

-void __kprobes
+dotraplinkage void __kprobes
 do_general_protection(struct pt_regs *regs, long error_code)
 {
 	struct task_struct *tsk;
-	struct thread_struct *thread;
-	struct tss_struct *tss;
-	int cpu;

-	cpu = get_cpu();
-	tss = &per_cpu(init_tss, cpu);
-	thread = &current->thread;
-
-	/*
-	 * Perform the lazy TSS's I/O bitmap copy. If the TSS has an
-	 * invalid offset set (the LAZY one) and the faulting thread has
-	 * a valid I/O bitmap pointer, we copy the I/O bitmap in the TSS
-	 * and we set the offset field correctly. Then we let the CPU to
-	 * restart the faulting instruction.
-	 */
-	if (tss->x86_tss.io_bitmap_base == INVALID_IO_BITMAP_OFFSET_LAZY &&
-	    thread->io_bitmap_ptr) {
-		memcpy(tss->io_bitmap, thread->io_bitmap_ptr,
-		       thread->io_bitmap_max);
-		/*
-		 * If the previously set map was extending to higher ports
-		 * than the current one, pad extra space with 0xff (no access).
-		 */
-		if (thread->io_bitmap_max < tss->io_bitmap_max) {
-			memset((char *) tss->io_bitmap +
-				thread->io_bitmap_max, 0xff,
-				tss->io_bitmap_max - thread->io_bitmap_max);
-		}
-		tss->io_bitmap_max = thread->io_bitmap_max;
-		tss->x86_tss.io_bitmap_base = IO_BITMAP_OFFSET;
-		tss->io_bitmap_owner = thread;
-		put_cpu();
+	conditional_sti(regs);

+#ifdef CONFIG_X86_32
+	if (lazy_iobitmap_copy()) {
+		/* restart the faulting instruction */
 		return;
 	}
-	put_cpu();

 	if (regs->flags & X86_VM_MASK)
 		goto gp_in_vm86;
+#endif

 	tsk = current;
 	if (!user_mode(regs))
@@ -650,10 +336,12 @@ do_general_protection(struct pt_regs *regs, long error_code)
 		force_sig(SIGSEGV, tsk);
 	return;

+#ifdef CONFIG_X86_32
 gp_in_vm86:
 	local_irq_enable();
 	handle_vm86_fault((struct kernel_vm86_regs *) regs, error_code);
 	return;
+#endif

 gp_in_kernel:
 	if (fixup_exception(regs))
@@ -690,7 +378,8 @@ mem_parity_error(unsigned char reason, struct pt_regs *regs)
 	printk(KERN_EMERG "Dazed and confused, but trying to continue\n");

 	/* Clear and disable the memory parity error line. */
-	clear_mem_error(reason);
+	reason = (reason & 0xf) | 4;
+	outb(reason, 0x61);
 }
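
[Editor's note: the expanded clear_mem_error() above pokes System Control Port B (0x61). Under the assumed legacy PC/AT semantics, writing bit 2 disables and clears the RAM parity error latch, while reading back exposes the status in bits 6 (IOCHK) and 7 (parity). A hedged user-space sketch of the same access, needs ioperm() and root, illustration only:]

#include <sys/io.h>
#include <stdio.h>

int main(void)
{
	if (ioperm(0x61, 1, 1))
		return 1;
	unsigned char reason = inb(0x61);
	printf("port 0x61 = %#x (bit7=parity err, bit6=iochk)\n", reason);
	outb((reason & 0xf) | 4, 0x61);	/* clear + disable parity latch */
	return 0;
}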
static notrace __kprobes void static notrace __kprobes void
...@@ -716,7 +405,8 @@ io_check_error(unsigned char reason, struct pt_regs *regs) ...@@ -716,7 +405,8 @@ io_check_error(unsigned char reason, struct pt_regs *regs)
static notrace __kprobes void static notrace __kprobes void
unknown_nmi_error(unsigned char reason, struct pt_regs *regs) unknown_nmi_error(unsigned char reason, struct pt_regs *regs)
{ {
if (notify_die(DIE_NMIUNKNOWN, "nmi", regs, reason, 2, SIGINT) == NOTIFY_STOP) if (notify_die(DIE_NMIUNKNOWN, "nmi", regs, reason, 2, SIGINT) ==
NOTIFY_STOP)
return; return;
#ifdef CONFIG_MCA #ifdef CONFIG_MCA
/* /*
...@@ -739,41 +429,6 @@ unknown_nmi_error(unsigned char reason, struct pt_regs *regs) ...@@ -739,41 +429,6 @@ unknown_nmi_error(unsigned char reason, struct pt_regs *regs)
printk(KERN_EMERG "Dazed and confused, but trying to continue\n"); printk(KERN_EMERG "Dazed and confused, but trying to continue\n");
} }
static DEFINE_SPINLOCK(nmi_print_lock);
void notrace __kprobes die_nmi(char *str, struct pt_regs *regs, int do_panic)
{
if (notify_die(DIE_NMIWATCHDOG, str, regs, 0, 2, SIGINT) == NOTIFY_STOP)
return;
spin_lock(&nmi_print_lock);
/*
* We are in trouble anyway, lets at least try
* to get a message out:
*/
bust_spinlocks(1);
printk(KERN_EMERG "%s", str);
printk(" on CPU%d, ip %08lx, registers:\n",
smp_processor_id(), regs->ip);
show_registers(regs);
if (do_panic)
panic("Non maskable interrupt");
console_silent();
spin_unlock(&nmi_print_lock);
bust_spinlocks(0);
/*
* If we are in kernel we are probably nested up pretty bad
* and might aswell get out now while we still can:
*/
if (!user_mode_vm(regs)) {
current->thread.trap_no = 2;
crash_kexec(regs);
}
do_exit(SIGSEGV);
}
static notrace __kprobes void default_do_nmi(struct pt_regs *regs) static notrace __kprobes void default_do_nmi(struct pt_regs *regs)
{ {
unsigned char reason = 0; unsigned char reason = 0;
...@@ -812,22 +467,25 @@ static notrace __kprobes void default_do_nmi(struct pt_regs *regs) ...@@ -812,22 +467,25 @@ static notrace __kprobes void default_do_nmi(struct pt_regs *regs)
mem_parity_error(reason, regs); mem_parity_error(reason, regs);
if (reason & 0x40) if (reason & 0x40)
io_check_error(reason, regs); io_check_error(reason, regs);
#ifdef CONFIG_X86_32
/* /*
* Reassert NMI in case it became active meanwhile * Reassert NMI in case it became active meanwhile
* as it's edge-triggered: * as it's edge-triggered:
*/ */
reassert_nmi(); reassert_nmi();
#endif
} }
notrace __kprobes void do_nmi(struct pt_regs *regs, long error_code) dotraplinkage notrace __kprobes void
do_nmi(struct pt_regs *regs, long error_code)
{ {
int cpu;
nmi_enter(); nmi_enter();
cpu = smp_processor_id(); #ifdef CONFIG_X86_32
{ int cpu; cpu = smp_processor_id(); ++nmi_count(cpu); }
++nmi_count(cpu); #else
add_pda(__nmi_count, 1);
#endif
if (!ignore_nmis) if (!ignore_nmis)
default_do_nmi(regs); default_do_nmi(regs);
...@@ -847,21 +505,44 @@ void restart_nmi(void) ...@@ -847,21 +505,44 @@ void restart_nmi(void)
acpi_nmi_enable(); acpi_nmi_enable();
} }
#ifdef CONFIG_KPROBES /* May run on IST stack. */
void __kprobes do_int3(struct pt_regs *regs, long error_code) dotraplinkage void __kprobes do_int3(struct pt_regs *regs, long error_code)
{ {
trace_hardirqs_fixup(); #ifdef CONFIG_KPROBES
if (notify_die(DIE_INT3, "int3", regs, error_code, 3, SIGTRAP) if (notify_die(DIE_INT3, "int3", regs, error_code, 3, SIGTRAP)
== NOTIFY_STOP) == NOTIFY_STOP)
return; return;
/* #else
* This is an interrupt gate, because kprobes wants interrupts if (notify_die(DIE_TRAP, "int3", regs, error_code, 3, SIGTRAP)
* disabled. Normal trap handlers don't. == NOTIFY_STOP)
*/ return;
restore_interrupts(regs); #endif
preempt_conditional_sti(regs);
do_trap(3, SIGTRAP, "int3", regs, error_code, NULL);
preempt_conditional_cli(regs);
}
do_trap(3, SIGTRAP, "int3", 1, regs, error_code, NULL); #ifdef CONFIG_X86_64
/* Help a handler running on the IST stack switch back to the process
stack for scheduling or signal handling. The actual stack switch is done
in entry.S */
asmlinkage __kprobes struct pt_regs *sync_regs(struct pt_regs *eregs)
{
struct pt_regs *regs = eregs;
/* Already synced? */
if (eregs == (struct pt_regs *)eregs->sp)
;
/* Exception from user space */
else if (user_mode(eregs))
regs = task_pt_regs(current);
/* Exception from kernel and interrupts are enabled. Move to
kernel process stack. */
else if (eregs->flags & X86_EFLAGS_IF)
regs = (struct pt_regs *)(eregs->sp -= sizeof(struct pt_regs));
if (eregs != regs)
*regs = *eregs;
return regs;
} }
#endif #endif
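A minimal userspace model of the frame copy that sync_regs() performs above; 0x200 stands in for the X86_EFLAGS_IF bit, and all values here are fabricated for illustration:

#include <stdio.h>
#include <string.h>

struct frame { unsigned long sp; unsigned long flags; int data; };

int main(void)
{
	unsigned char stack[256];
	struct frame src = {
		(unsigned long)(stack + sizeof(stack)),	/* saved sp */
		0x200,					/* "interrupts were on" */
		42,
	};

	/* interrupts were enabled at the exception: carve a new frame
	 * just below the saved sp, as sync_regs() does */
	struct frame *dst = (struct frame *)(src.sp -= sizeof(struct frame));
	memcpy(dst, &src, sizeof(src));	/* the "*regs = *eregs" copy */
	printf("frame moved, data=%d\n", dst->data);
	return 0;
}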
...@@ -886,15 +567,15 @@ void __kprobes do_int3(struct pt_regs *regs, long error_code) ...@@ -886,15 +567,15 @@ void __kprobes do_int3(struct pt_regs *regs, long error_code)
* about restoring all the debug state, and ptrace doesn't have to * about restoring all the debug state, and ptrace doesn't have to
* find every occurrence of the TF bit that could be saved away even * find every occurrence of the TF bit that could be saved away even
* by user code) * by user code)
*
* May run on IST stack.
*/ */
void __kprobes do_debug(struct pt_regs *regs, long error_code) dotraplinkage void __kprobes do_debug(struct pt_regs *regs, long error_code)
{ {
struct task_struct *tsk = current; struct task_struct *tsk = current;
unsigned int condition; unsigned long condition;
int si_code; int si_code;
trace_hardirqs_fixup();
get_debugreg(condition, 6); get_debugreg(condition, 6);
/* /*
...@@ -906,9 +587,9 @@ void __kprobes do_debug(struct pt_regs *regs, long error_code) ...@@ -906,9 +587,9 @@ void __kprobes do_debug(struct pt_regs *regs, long error_code)
if (notify_die(DIE_DEBUG, "debug", regs, condition, error_code, if (notify_die(DIE_DEBUG, "debug", regs, condition, error_code,
SIGTRAP) == NOTIFY_STOP) SIGTRAP) == NOTIFY_STOP)
return; return;
/* It's safe to allow irq's after DR6 has been saved */ /* It's safe to allow irq's after DR6 has been saved */
if (regs->flags & X86_EFLAGS_IF) preempt_conditional_sti(regs);
local_irq_enable();
/* Mask out spurious debug traps due to lazy DR7 setting */ /* Mask out spurious debug traps due to lazy DR7 setting */
if (condition & (DR_TRAP0|DR_TRAP1|DR_TRAP2|DR_TRAP3)) { if (condition & (DR_TRAP0|DR_TRAP1|DR_TRAP2|DR_TRAP3)) {
...@@ -916,8 +597,10 @@ void __kprobes do_debug(struct pt_regs *regs, long error_code) ...@@ -916,8 +597,10 @@ void __kprobes do_debug(struct pt_regs *regs, long error_code)
goto clear_dr7; goto clear_dr7;
} }
#ifdef CONFIG_X86_32
if (regs->flags & X86_VM_MASK) if (regs->flags & X86_VM_MASK)
goto debug_vm86; goto debug_vm86;
#endif
/* Save debug status register where ptrace can see it */ /* Save debug status register where ptrace can see it */
tsk->thread.debugreg6 = condition; tsk->thread.debugreg6 = condition;
...@@ -927,16 +610,11 @@ void __kprobes do_debug(struct pt_regs *regs, long error_code) ...@@ -927,16 +610,11 @@ void __kprobes do_debug(struct pt_regs *regs, long error_code)
* kernel space (but re-enable TF when returning to user mode). * kernel space (but re-enable TF when returning to user mode).
*/ */
if (condition & DR_STEP) { if (condition & DR_STEP) {
/*
* We already checked v86 mode above, so we can
* check for kernel mode by just checking the CPL
* of CS.
*/
if (!user_mode(regs)) if (!user_mode(regs))
goto clear_TF_reenable; goto clear_TF_reenable;
} }
si_code = get_si_code((unsigned long)condition); si_code = get_si_code(condition);
/* Ok, finally something we can handle */ /* Ok, finally something we can handle */
send_sigtrap(tsk, regs, error_code, si_code); send_sigtrap(tsk, regs, error_code, si_code);
...@@ -946,18 +624,37 @@ void __kprobes do_debug(struct pt_regs *regs, long error_code) ...@@ -946,18 +624,37 @@ void __kprobes do_debug(struct pt_regs *regs, long error_code)
*/ */
clear_dr7: clear_dr7:
set_debugreg(0, 7); set_debugreg(0, 7);
preempt_conditional_cli(regs);
return; return;
#ifdef CONFIG_X86_32
debug_vm86: debug_vm86:
handle_vm86_trap((struct kernel_vm86_regs *) regs, error_code, 1); handle_vm86_trap((struct kernel_vm86_regs *) regs, error_code, 1);
preempt_conditional_cli(regs);
return; return;
#endif
clear_TF_reenable: clear_TF_reenable:
set_tsk_thread_flag(tsk, TIF_SINGLESTEP); set_tsk_thread_flag(tsk, TIF_SINGLESTEP);
regs->flags &= ~X86_EFLAGS_TF; regs->flags &= ~X86_EFLAGS_TF;
preempt_conditional_cli(regs);
return; return;
} }
#ifdef CONFIG_X86_64
static int kernel_math_error(struct pt_regs *regs, const char *str, int trapnr)
{
if (fixup_exception(regs))
return 1;
notify_die(DIE_GPF, str, regs, 0, trapnr, SIGFPE);
/* Illegal floating point operation in the kernel */
current->thread.trap_no = trapnr;
die(str, regs, 0);
return 0;
}
#endif
/* /*
* Note that we play around with the 'TS' bit in an attempt to get * Note that we play around with the 'TS' bit in an attempt to get
* the correct behaviour even in the presence of the asynchronous * the correct behaviour even in the presence of the asynchronous
...@@ -994,7 +691,9 @@ void math_error(void __user *ip) ...@@ -994,7 +691,9 @@ void math_error(void __user *ip)
swd = get_fpu_swd(task); swd = get_fpu_swd(task);
switch (swd & ~cwd & 0x3f) { switch (swd & ~cwd & 0x3f) {
case 0x000: /* No unmasked exception */ case 0x000: /* No unmasked exception */
#ifdef CONFIG_X86_32
return; return;
#endif
default: /* Multiple exceptions */ default: /* Multiple exceptions */
break; break;
case 0x001: /* Invalid Op */ case 0x001: /* Invalid Op */
...@@ -1022,9 +721,18 @@ void math_error(void __user *ip) ...@@ -1022,9 +721,18 @@ void math_error(void __user *ip)
force_sig_info(SIGFPE, &info, task); force_sig_info(SIGFPE, &info, task);
} }
void do_coprocessor_error(struct pt_regs *regs, long error_code) dotraplinkage void do_coprocessor_error(struct pt_regs *regs, long error_code)
{ {
conditional_sti(regs);
#ifdef CONFIG_X86_32
ignore_fpu_irq = 1; ignore_fpu_irq = 1;
#else
if (!user_mode(regs) &&
kernel_math_error(regs, "kernel x87 math error", 16))
return;
#endif
math_error((void __user *)regs->ip); math_error((void __user *)regs->ip);
} }
...@@ -1076,8 +784,12 @@ static void simd_math_error(void __user *ip) ...@@ -1076,8 +784,12 @@ static void simd_math_error(void __user *ip)
force_sig_info(SIGFPE, &info, task); force_sig_info(SIGFPE, &info, task);
} }
void do_simd_coprocessor_error(struct pt_regs *regs, long error_code) dotraplinkage void
do_simd_coprocessor_error(struct pt_regs *regs, long error_code)
{ {
conditional_sti(regs);
#ifdef CONFIG_X86_32
if (cpu_has_xmm) { if (cpu_has_xmm) {
/* Handle SIMD FPU exceptions on PIII+ processors. */ /* Handle SIMD FPU exceptions on PIII+ processors. */
ignore_fpu_irq = 1; ignore_fpu_irq = 1;
...@@ -1096,16 +808,25 @@ void do_simd_coprocessor_error(struct pt_regs *regs, long error_code) ...@@ -1096,16 +808,25 @@ void do_simd_coprocessor_error(struct pt_regs *regs, long error_code)
current->thread.error_code = error_code; current->thread.error_code = error_code;
die_if_kernel("cache flush denied", regs, error_code); die_if_kernel("cache flush denied", regs, error_code);
force_sig(SIGSEGV, current); force_sig(SIGSEGV, current);
#else
if (!user_mode(regs) &&
kernel_math_error(regs, "kernel simd math error", 19))
return;
simd_math_error((void __user *)regs->ip);
#endif
} }
void do_spurious_interrupt_bug(struct pt_regs *regs, long error_code) dotraplinkage void
do_spurious_interrupt_bug(struct pt_regs *regs, long error_code)
{ {
conditional_sti(regs);
#if 0 #if 0
/* No need to warn about this any longer. */ /* No need to warn about this any longer. */
printk(KERN_INFO "Ignoring P6 Local APIC Spurious Interrupt Bug...\n"); printk(KERN_INFO "Ignoring P6 Local APIC Spurious Interrupt Bug...\n");
#endif #endif
} }
#ifdef CONFIG_X86_32
unsigned long patch_espfix_desc(unsigned long uesp, unsigned long kesp) unsigned long patch_espfix_desc(unsigned long uesp, unsigned long kesp)
{ {
struct desc_struct *gdt = get_cpu_gdt_table(smp_processor_id()); struct desc_struct *gdt = get_cpu_gdt_table(smp_processor_id());
...@@ -1124,6 +845,15 @@ unsigned long patch_espfix_desc(unsigned long uesp, unsigned long kesp) ...@@ -1124,6 +845,15 @@ unsigned long patch_espfix_desc(unsigned long uesp, unsigned long kesp)
return new_kesp; return new_kesp;
} }
#else
asmlinkage void __attribute__((weak)) smp_thermal_interrupt(void)
{
}
asmlinkage void __attribute__((weak)) mce_threshold_interrupt(void)
{
}
#endif
/* /*
* 'math_state_restore()' saves the current math information in the * 'math_state_restore()' saves the current math information in the
...@@ -1156,14 +886,24 @@ asmlinkage void math_state_restore(void) ...@@ -1156,14 +886,24 @@ asmlinkage void math_state_restore(void)
} }
clts(); /* Allow maths ops (or we recurse) */ clts(); /* Allow maths ops (or we recurse) */
#ifdef CONFIG_X86_32
restore_fpu(tsk); restore_fpu(tsk);
#else
/*
* Paranoid restore. send a SIGSEGV if we fail to restore the state.
*/
if (unlikely(restore_fpu_checking(tsk))) {
stts();
force_sig(SIGSEGV, tsk);
return;
}
#endif
thread->status |= TS_USEDFPU; /* So we fnsave on switch_to() */ thread->status |= TS_USEDFPU; /* So we fnsave on switch_to() */
tsk->fpu_counter++; tsk->fpu_counter++;
} }
EXPORT_SYMBOL_GPL(math_state_restore); EXPORT_SYMBOL_GPL(math_state_restore);
#ifndef CONFIG_MATH_EMULATION #ifndef CONFIG_MATH_EMULATION
asmlinkage void math_emulate(long arg) asmlinkage void math_emulate(long arg)
{ {
printk(KERN_EMERG printk(KERN_EMERG
...@@ -1172,12 +912,54 @@ asmlinkage void math_emulate(long arg) ...@@ -1172,12 +912,54 @@ asmlinkage void math_emulate(long arg)
force_sig(SIGFPE, current); force_sig(SIGFPE, current);
schedule(); schedule();
} }
#endif /* CONFIG_MATH_EMULATION */ #endif /* CONFIG_MATH_EMULATION */
dotraplinkage void __kprobes
do_device_not_available(struct pt_regs *regs, long error)
{
#ifdef CONFIG_X86_32
if (read_cr0() & X86_CR0_EM) {
conditional_sti(regs);
math_emulate(0);
} else {
math_state_restore(); /* interrupts still off */
conditional_sti(regs);
}
#else
math_state_restore();
#endif
}
#ifdef CONFIG_X86_32
#ifdef CONFIG_X86_MCE
dotraplinkage void __kprobes do_machine_check(struct pt_regs *regs, long error)
{
conditional_sti(regs);
machine_check_vector(regs, error);
}
#endif
dotraplinkage void do_iret_error(struct pt_regs *regs, long error_code)
{
siginfo_t info;
local_irq_enable();
info.si_signo = SIGILL;
info.si_errno = 0;
info.si_code = ILL_BADSTK;
info.si_addr = 0;
if (notify_die(DIE_TRAP, "iret exception",
regs, error_code, 32, SIGILL) == NOTIFY_STOP)
return;
do_trap(32, SIGILL, "iret exception", regs, error_code, &info);
}
#endif
void __init trap_init(void) void __init trap_init(void)
{ {
#ifdef CONFIG_X86_32
int i; int i;
#endif
#ifdef CONFIG_EISA #ifdef CONFIG_EISA
void __iomem *p = early_ioremap(0x0FFFD9, 4); void __iomem *p = early_ioremap(0x0FFFD9, 4);
...@@ -1187,29 +969,40 @@ void __init trap_init(void) ...@@ -1187,29 +969,40 @@ void __init trap_init(void)
early_iounmap(p, 4); early_iounmap(p, 4);
#endif #endif
set_trap_gate(0, &divide_error); set_intr_gate(0, &divide_error);
set_intr_gate(1, &debug); set_intr_gate_ist(1, &debug, DEBUG_STACK);
set_intr_gate(2, &nmi); set_intr_gate_ist(2, &nmi, NMI_STACK);
set_system_intr_gate(3, &int3); /* int3 can be called from all */ /* int3 can be called from all */
set_system_gate(4, &overflow); /* int4 can be called from all */ set_system_intr_gate_ist(3, &int3, DEBUG_STACK);
set_trap_gate(5, &bounds); /* int4 can be called from all */
set_trap_gate(6, &invalid_op); set_system_intr_gate(4, &overflow);
set_trap_gate(7, &device_not_available); set_intr_gate(5, &bounds);
set_intr_gate(6, &invalid_op);
set_intr_gate(7, &device_not_available);
#ifdef CONFIG_X86_32
set_task_gate(8, GDT_ENTRY_DOUBLEFAULT_TSS); set_task_gate(8, GDT_ENTRY_DOUBLEFAULT_TSS);
set_trap_gate(9, &coprocessor_segment_overrun); #else
set_trap_gate(10, &invalid_TSS); set_intr_gate_ist(8, &double_fault, DOUBLEFAULT_STACK);
set_trap_gate(11, &segment_not_present); #endif
set_trap_gate(12, &stack_segment); set_intr_gate(9, &coprocessor_segment_overrun);
set_trap_gate(13, &general_protection); set_intr_gate(10, &invalid_TSS);
set_intr_gate(11, &segment_not_present);
set_intr_gate_ist(12, &stack_segment, STACKFAULT_STACK);
set_intr_gate(13, &general_protection);
set_intr_gate(14, &page_fault); set_intr_gate(14, &page_fault);
set_trap_gate(15, &spurious_interrupt_bug); set_intr_gate(15, &spurious_interrupt_bug);
set_trap_gate(16, &coprocessor_error); set_intr_gate(16, &coprocessor_error);
set_trap_gate(17, &alignment_check); set_intr_gate(17, &alignment_check);
#ifdef CONFIG_X86_MCE #ifdef CONFIG_X86_MCE
set_trap_gate(18, &machine_check); set_intr_gate_ist(18, &machine_check, MCE_STACK);
#endif
set_intr_gate(19, &simd_coprocessor_error);
#ifdef CONFIG_IA32_EMULATION
set_system_intr_gate(IA32_SYSCALL_VECTOR, ia32_syscall);
#endif #endif
set_trap_gate(19, &simd_coprocessor_error);
#ifdef CONFIG_X86_32
if (cpu_has_fxsr) { if (cpu_has_fxsr) {
printk(KERN_INFO "Enabling fast FPU save and restore... "); printk(KERN_INFO "Enabling fast FPU save and restore... ");
set_in_cr4(X86_CR4_OSFXSR); set_in_cr4(X86_CR4_OSFXSR);
...@@ -1222,36 +1015,20 @@ void __init trap_init(void) ...@@ -1222,36 +1015,20 @@ void __init trap_init(void)
printk("done.\n"); printk("done.\n");
} }
set_system_gate(SYSCALL_VECTOR, &system_call); set_system_trap_gate(SYSCALL_VECTOR, &system_call);
/* Reserve all the builtin and the syscall vector: */ /* Reserve all the builtin and the syscall vector: */
for (i = 0; i < FIRST_EXTERNAL_VECTOR; i++) for (i = 0; i < FIRST_EXTERNAL_VECTOR; i++)
set_bit(i, used_vectors); set_bit(i, used_vectors);
set_bit(SYSCALL_VECTOR, used_vectors); set_bit(SYSCALL_VECTOR, used_vectors);
#endif
/* /*
* Should be a barrier for any external CPU state: * Should be a barrier for any external CPU state:
*/ */
cpu_init(); cpu_init();
#ifdef CONFIG_X86_32
trap_init_hook(); trap_init_hook();
#endif
} }
static int __init kstack_setup(char *s)
{
kstack_depth_to_print = simple_strtoul(s, NULL, 0);
return 1;
}
__setup("kstack=", kstack_setup);
static int __init code_bytes_setup(char *s)
{
code_bytes = simple_strtoul(s, NULL, 0);
if (code_bytes > 8192)
code_bytes = 8192;
return 1;
}
__setup("code_bytes=", code_bytes_setup);
...@@ -47,16 +47,26 @@ static __init int mps_oem_check(struct mp_config_table *mpc, char *oem, ...@@ -47,16 +47,26 @@ static __init int mps_oem_check(struct mp_config_table *mpc, char *oem,
/* Hook from generic ACPI tables.c */ /* Hook from generic ACPI tables.c */
static int __init acpi_madt_oem_check(char *oem_id, char *oem_table_id) static int __init acpi_madt_oem_check(char *oem_id, char *oem_table_id)
{ {
unsigned long oem_addr; unsigned long oem_addr = 0;
int check_dsdt;
int ret = 0;
/* check dsdt first to avoid clearing fix_map for oem_addr */
check_dsdt = es7000_check_dsdt();
if (!find_unisys_acpi_oem_table(&oem_addr)) { if (!find_unisys_acpi_oem_table(&oem_addr)) {
if (es7000_check_dsdt()) if (check_dsdt)
return parse_unisys_oem((char *)oem_addr); ret = parse_unisys_oem((char *)oem_addr);
else { else {
setup_unisys(); setup_unisys();
return 1; ret = 1;
} }
/*
* we need to unmap it
*/
unmap_unisys_acpi_oem_table(oem_addr);
} }
return 0; return ret;
} }
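The reshaped flow above is the classic single-exit cleanup pattern: take the mapping, record the result in ret, and release the mapping on every path. In miniature, with invented names and malloc() standing in for the OEM table mapping:

#include <stdio.h>
#include <stdlib.h>

static int parse(const char *p) { return p[0] == 'x'; }

int main(void)
{
	int ret = 0;
	char *res = malloc(16);		/* stands in for the mapped table */

	if (res) {
		res[0] = 'x';
		ret = parse(res);	/* use the resource */
		free(res);		/* released on every path, as above */
	}
	printf("ret=%d\n", ret);
	return 0;
}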
#else #else
static int __init acpi_madt_oem_check(char *oem_id, char *oem_table_id) static int __init acpi_madt_oem_check(char *oem_id, char *oem_table_id)
......
...@@ -13,12 +13,8 @@ obj-$(CONFIG_MMIOTRACE) += mmiotrace.o ...@@ -13,12 +13,8 @@ obj-$(CONFIG_MMIOTRACE) += mmiotrace.o
mmiotrace-y := pf_in.o mmio-mod.o mmiotrace-y := pf_in.o mmio-mod.o
obj-$(CONFIG_MMIOTRACE_TEST) += testmmiotrace.o obj-$(CONFIG_MMIOTRACE_TEST) += testmmiotrace.o
ifeq ($(CONFIG_X86_32),y) obj-$(CONFIG_NUMA) += numa_$(BITS).o
obj-$(CONFIG_NUMA) += discontig_32.o
else
obj-$(CONFIG_NUMA) += numa_64.o
obj-$(CONFIG_K8_NUMA) += k8topology_64.o obj-$(CONFIG_K8_NUMA) += k8topology_64.o
endif
obj-$(CONFIG_ACPI_NUMA) += srat_$(BITS).o obj-$(CONFIG_ACPI_NUMA) += srat_$(BITS).o
obj-$(CONFIG_MEMTEST) += memtest.o obj-$(CONFIG_MEMTEST) += memtest.o
...@@ -592,11 +592,6 @@ void __kprobes do_page_fault(struct pt_regs *regs, unsigned long error_code) ...@@ -592,11 +592,6 @@ void __kprobes do_page_fault(struct pt_regs *regs, unsigned long error_code)
unsigned long flags; unsigned long flags;
#endif #endif
/*
* We can fault from pretty much anywhere, with unknown IRQ state.
*/
trace_hardirqs_fixup();
tsk = current; tsk = current;
mm = tsk->mm; mm = tsk->mm;
prefetchw(&mm->mmap_sem); prefetchw(&mm->mmap_sem);
......
...@@ -82,7 +82,7 @@ static noinline int gup_pte_range(pmd_t pmd, unsigned long addr, ...@@ -82,7 +82,7 @@ static noinline int gup_pte_range(pmd_t pmd, unsigned long addr,
pte_t pte = gup_get_pte(ptep); pte_t pte = gup_get_pte(ptep);
struct page *page; struct page *page;
if ((pte_val(pte) & (mask | _PAGE_SPECIAL)) != mask) { if ((pte_flags(pte) & (mask | _PAGE_SPECIAL)) != mask) {
pte_unmap(ptep); pte_unmap(ptep);
return 0; return 0;
} }
...@@ -116,10 +116,10 @@ static noinline int gup_huge_pmd(pmd_t pmd, unsigned long addr, ...@@ -116,10 +116,10 @@ static noinline int gup_huge_pmd(pmd_t pmd, unsigned long addr,
mask = _PAGE_PRESENT|_PAGE_USER; mask = _PAGE_PRESENT|_PAGE_USER;
if (write) if (write)
mask |= _PAGE_RW; mask |= _PAGE_RW;
if ((pte_val(pte) & mask) != mask) if ((pte_flags(pte) & mask) != mask)
return 0; return 0;
/* hugepages are never "special" */ /* hugepages are never "special" */
VM_BUG_ON(pte_val(pte) & _PAGE_SPECIAL); VM_BUG_ON(pte_flags(pte) & _PAGE_SPECIAL);
VM_BUG_ON(!pfn_valid(pte_pfn(pte))); VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
refs = 0; refs = 0;
...@@ -173,10 +173,10 @@ static noinline int gup_huge_pud(pud_t pud, unsigned long addr, ...@@ -173,10 +173,10 @@ static noinline int gup_huge_pud(pud_t pud, unsigned long addr,
mask = _PAGE_PRESENT|_PAGE_USER; mask = _PAGE_PRESENT|_PAGE_USER;
if (write) if (write)
mask |= _PAGE_RW; mask |= _PAGE_RW;
if ((pte_val(pte) & mask) != mask) if ((pte_flags(pte) & mask) != mask)
return 0; return 0;
/* hugepages are never "special" */ /* hugepages are never "special" */
VM_BUG_ON(pte_val(pte) & _PAGE_SPECIAL); VM_BUG_ON(pte_flags(pte) & _PAGE_SPECIAL);
VM_BUG_ON(!pfn_valid(pte_pfn(pte))); VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
refs = 0; refs = 0;
......
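The pte_val() to pte_flags() switch above matters because pte_val() reconstructs the whole PTE (and may go through paravirt PFN translation) when only the flag bits are being tested. A toy model with a made-up bit layout:

#include <stdio.h>

typedef unsigned long pteval_t;

#define FLAG_MASK	0xfffUL		/* toy layout: low 12 bits are flags */
#define PTE_PRESENT	0x001UL
#define PTE_USER	0x004UL

static pteval_t pte_flags(pteval_t pte)
{
	return pte & FLAG_MASK;		/* flags only, PFN bits dropped */
}

int main(void)
{
	pteval_t pte  = (0x1234UL << 12) | PTE_PRESENT | PTE_USER;
	pteval_t mask = PTE_PRESENT | PTE_USER;

	printf("%s\n", (pte_flags(pte) & mask) == mask ? "ok" : "fault");
	return 0;
}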
...@@ -558,7 +558,7 @@ void zap_low_mappings(void) ...@@ -558,7 +558,7 @@ void zap_low_mappings(void)
int nx_enabled; int nx_enabled;
pteval_t __supported_pte_mask __read_mostly = ~(_PAGE_NX | _PAGE_GLOBAL); pteval_t __supported_pte_mask __read_mostly = ~(_PAGE_NX | _PAGE_GLOBAL | _PAGE_IOMAP);
EXPORT_SYMBOL_GPL(__supported_pte_mask); EXPORT_SYMBOL_GPL(__supported_pte_mask);
#ifdef CONFIG_X86_PAE #ifdef CONFIG_X86_PAE
......
...@@ -89,7 +89,7 @@ early_param("gbpages", parse_direct_gbpages_on); ...@@ -89,7 +89,7 @@ early_param("gbpages", parse_direct_gbpages_on);
int after_bootmem; int after_bootmem;
unsigned long __supported_pte_mask __read_mostly = ~0UL; pteval_t __supported_pte_mask __read_mostly = ~_PAGE_IOMAP;
EXPORT_SYMBOL_GPL(__supported_pte_mask); EXPORT_SYMBOL_GPL(__supported_pte_mask);
static int do_not_nx __cpuinitdata; static int do_not_nx __cpuinitdata;
...@@ -196,9 +196,6 @@ set_pte_vaddr_pud(pud_t *pud_page, unsigned long vaddr, pte_t new_pte) ...@@ -196,9 +196,6 @@ set_pte_vaddr_pud(pud_t *pud_page, unsigned long vaddr, pte_t new_pte)
} }
pte = pte_offset_kernel(pmd, vaddr); pte = pte_offset_kernel(pmd, vaddr);
if (!pte_none(*pte) && pte_val(new_pte) &&
pte_val(*pte) != (pte_val(new_pte) & __supported_pte_mask))
pte_ERROR(*pte);
set_pte(pte, new_pte); set_pte(pte, new_pte);
/* /*
...@@ -313,7 +310,7 @@ static __ref void *alloc_low_page(unsigned long *phys) ...@@ -313,7 +310,7 @@ static __ref void *alloc_low_page(unsigned long *phys)
if (pfn >= table_top) if (pfn >= table_top)
panic("alloc_low_page: ran out of memory"); panic("alloc_low_page: ran out of memory");
adr = early_ioremap(pfn * PAGE_SIZE, PAGE_SIZE); adr = early_memremap(pfn * PAGE_SIZE, PAGE_SIZE);
memset(adr, 0, PAGE_SIZE); memset(adr, 0, PAGE_SIZE);
*phys = pfn * PAGE_SIZE; *phys = pfn * PAGE_SIZE;
return adr; return adr;
...@@ -749,7 +746,7 @@ unsigned long __init_refok init_memory_mapping(unsigned long start, ...@@ -749,7 +746,7 @@ unsigned long __init_refok init_memory_mapping(unsigned long start,
old_start = mr[i].start; old_start = mr[i].start;
memmove(&mr[i], &mr[i+1], memmove(&mr[i], &mr[i+1],
(nr_range - 1 - i) * sizeof (struct map_range)); (nr_range - 1 - i) * sizeof (struct map_range));
mr[i].start = old_start; mr[i--].start = old_start;
nr_range--; nr_range--;
} }
......
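The mr[i--] change above fixes the classic remove-while-iterating bug: after memmove() shifts the array down, the element now sitting at index i would never be examined without stepping back. In miniature:

#include <stdio.h>
#include <string.h>

int main(void)
{
	int a[] = { 1, 0, 0, 2 };	/* remove the zero entries */
	int n = 4;

	for (int i = 0; i < n; i++) {
		if (a[i] != 0)
			continue;
		memmove(&a[i], &a[i + 1], (n - 1 - i) * sizeof(a[0]));
		n--;
		i--;			/* without this, a[i] is skipped */
	}
	for (int i = 0; i < n; i++)
		printf("%d ", a[i]);	/* prints: 1 2 */
	printf("\n");
	return 0;
}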
...@@ -45,6 +45,27 @@ unsigned long __phys_addr(unsigned long x) ...@@ -45,6 +45,27 @@ unsigned long __phys_addr(unsigned long x)
} }
EXPORT_SYMBOL(__phys_addr); EXPORT_SYMBOL(__phys_addr);
bool __virt_addr_valid(unsigned long x)
{
if (x >= __START_KERNEL_map) {
x -= __START_KERNEL_map;
if (x >= KERNEL_IMAGE_SIZE)
return false;
x += phys_base;
} else {
if (x < PAGE_OFFSET)
return false;
x -= PAGE_OFFSET;
if (system_state == SYSTEM_BOOTING ?
x > MAXMEM : !phys_addr_valid(x)) {
return false;
}
}
return pfn_valid(x >> PAGE_SHIFT);
}
EXPORT_SYMBOL(__virt_addr_valid);
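A userspace model of the 64-bit range logic in __virt_addr_valid() above; the constants are illustrative stand-ins, not the kernel's actual memory layout, and the final pfn_valid() check is omitted:

#include <stdbool.h>
#include <stdio.h>

#define START_KERNEL_MAP	0xffffffff80000000UL	/* assumed layout */
#define PAGE_OFFSET		0xffff880000000000UL	/* assumed layout */
#define KERNEL_IMAGE_SIZE	(512UL << 20)
#define MAXMEM			(64UL << 40)

static bool virt_addr_in_range(unsigned long x)
{
	if (x >= START_KERNEL_MAP)		/* kernel image mapping */
		return (x - START_KERNEL_MAP) < KERNEL_IMAGE_SIZE;
	if (x < PAGE_OFFSET)			/* user space or the hole */
		return false;
	return (x - PAGE_OFFSET) <= MAXMEM;	/* direct mapping */
}

int main(void)
{
	printf("%d\n", virt_addr_in_range(START_KERNEL_MAP + 0x1000)); /* 1 */
	printf("%d\n", virt_addr_in_range(0x400000UL));                /* 0 */
	return 0;
}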
#else #else
static inline int phys_addr_valid(unsigned long addr) static inline int phys_addr_valid(unsigned long addr)
...@@ -56,13 +77,24 @@ static inline int phys_addr_valid(unsigned long addr) ...@@ -56,13 +77,24 @@ static inline int phys_addr_valid(unsigned long addr)
unsigned long __phys_addr(unsigned long x) unsigned long __phys_addr(unsigned long x)
{ {
/* VMALLOC_* aren't constants; not available at the boot time */ /* VMALLOC_* aren't constants; not available at the boot time */
VIRTUAL_BUG_ON(x < PAGE_OFFSET || (system_state != SYSTEM_BOOTING && VIRTUAL_BUG_ON(x < PAGE_OFFSET);
is_vmalloc_addr((void *)x))); VIRTUAL_BUG_ON(system_state != SYSTEM_BOOTING &&
is_vmalloc_addr((void *) x));
return x - PAGE_OFFSET; return x - PAGE_OFFSET;
} }
EXPORT_SYMBOL(__phys_addr); EXPORT_SYMBOL(__phys_addr);
#endif #endif
bool __virt_addr_valid(unsigned long x)
{
if (x < PAGE_OFFSET)
return false;
if (system_state != SYSTEM_BOOTING && is_vmalloc_addr((void *) x))
return false;
return pfn_valid((x - PAGE_OFFSET) >> PAGE_SHIFT);
}
EXPORT_SYMBOL(__virt_addr_valid);
#endif #endif
int page_is_ram(unsigned long pagenr) int page_is_ram(unsigned long pagenr)
...@@ -242,16 +274,16 @@ static void __iomem *__ioremap_caller(resource_size_t phys_addr, ...@@ -242,16 +274,16 @@ static void __iomem *__ioremap_caller(resource_size_t phys_addr,
switch (prot_val) { switch (prot_val) {
case _PAGE_CACHE_UC: case _PAGE_CACHE_UC:
default: default:
prot = PAGE_KERNEL_NOCACHE; prot = PAGE_KERNEL_IO_NOCACHE;
break; break;
case _PAGE_CACHE_UC_MINUS: case _PAGE_CACHE_UC_MINUS:
prot = PAGE_KERNEL_UC_MINUS; prot = PAGE_KERNEL_IO_UC_MINUS;
break; break;
case _PAGE_CACHE_WC: case _PAGE_CACHE_WC:
prot = PAGE_KERNEL_WC; prot = PAGE_KERNEL_IO_WC;
break; break;
case _PAGE_CACHE_WB: case _PAGE_CACHE_WB:
prot = PAGE_KERNEL; prot = PAGE_KERNEL_IO;
break; break;
} }
...@@ -568,12 +600,12 @@ static void __init __early_set_fixmap(enum fixed_addresses idx, ...@@ -568,12 +600,12 @@ static void __init __early_set_fixmap(enum fixed_addresses idx,
} }
static inline void __init early_set_fixmap(enum fixed_addresses idx, static inline void __init early_set_fixmap(enum fixed_addresses idx,
unsigned long phys) unsigned long phys, pgprot_t prot)
{ {
if (after_paging_init) if (after_paging_init)
set_fixmap(idx, phys); __set_fixmap(idx, phys, prot);
else else
__early_set_fixmap(idx, phys, PAGE_KERNEL); __early_set_fixmap(idx, phys, prot);
} }
static inline void __init early_clear_fixmap(enum fixed_addresses idx) static inline void __init early_clear_fixmap(enum fixed_addresses idx)
...@@ -584,16 +616,22 @@ static inline void __init early_clear_fixmap(enum fixed_addresses idx) ...@@ -584,16 +616,22 @@ static inline void __init early_clear_fixmap(enum fixed_addresses idx)
__early_set_fixmap(idx, 0, __pgprot(0)); __early_set_fixmap(idx, 0, __pgprot(0));
} }
static void *prev_map[FIX_BTMAPS_SLOTS] __initdata;
static int __initdata early_ioremap_nested; static unsigned long prev_size[FIX_BTMAPS_SLOTS] __initdata;
static int __init check_early_ioremap_leak(void) static int __init check_early_ioremap_leak(void)
{ {
if (!early_ioremap_nested) int count = 0;
int i;
for (i = 0; i < FIX_BTMAPS_SLOTS; i++)
if (prev_map[i])
count++;
if (!count)
return 0; return 0;
WARN(1, KERN_WARNING WARN(1, KERN_WARNING
"Debug warning: early ioremap leak of %d areas detected.\n", "Debug warning: early ioremap leak of %d areas detected.\n",
early_ioremap_nested); count);
printk(KERN_WARNING printk(KERN_WARNING
"please boot with early_ioremap_debug and report the dmesg.\n"); "please boot with early_ioremap_debug and report the dmesg.\n");
...@@ -601,18 +639,33 @@ static int __init check_early_ioremap_leak(void) ...@@ -601,18 +639,33 @@ static int __init check_early_ioremap_leak(void)
} }
late_initcall(check_early_ioremap_leak); late_initcall(check_early_ioremap_leak);
void __init *early_ioremap(unsigned long phys_addr, unsigned long size) static void __init *__early_ioremap(unsigned long phys_addr, unsigned long size, pgprot_t prot)
{ {
unsigned long offset, last_addr; unsigned long offset, last_addr;
unsigned int nrpages, nesting; unsigned int nrpages;
enum fixed_addresses idx0, idx; enum fixed_addresses idx0, idx;
int i, slot;
WARN_ON(system_state != SYSTEM_BOOTING); WARN_ON(system_state != SYSTEM_BOOTING);
nesting = early_ioremap_nested; slot = -1;
for (i = 0; i < FIX_BTMAPS_SLOTS; i++) {
if (!prev_map[i]) {
slot = i;
break;
}
}
if (slot < 0) {
printk(KERN_INFO "early_iomap(%08lx, %08lx) not found slot\n",
phys_addr, size);
WARN_ON(1);
return NULL;
}
if (early_ioremap_debug) { if (early_ioremap_debug) {
printk(KERN_INFO "early_ioremap(%08lx, %08lx) [%d] => ", printk(KERN_INFO "early_ioremap(%08lx, %08lx) [%d] => ",
phys_addr, size, nesting); phys_addr, size, slot);
dump_stack(); dump_stack();
} }
...@@ -623,11 +676,7 @@ void __init *early_ioremap(unsigned long phys_addr, unsigned long size) ...@@ -623,11 +676,7 @@ void __init *early_ioremap(unsigned long phys_addr, unsigned long size)
return NULL; return NULL;
} }
if (nesting >= FIX_BTMAPS_NESTING) { prev_size[slot] = size;
WARN_ON(1);
return NULL;
}
early_ioremap_nested++;
/* /*
* Mappings have to be page-aligned * Mappings have to be page-aligned
*/ */
...@@ -647,10 +696,10 @@ void __init *early_ioremap(unsigned long phys_addr, unsigned long size) ...@@ -647,10 +696,10 @@ void __init *early_ioremap(unsigned long phys_addr, unsigned long size)
/* /*
* Ok, go for it.. * Ok, go for it..
*/ */
idx0 = FIX_BTMAP_BEGIN - NR_FIX_BTMAPS*nesting; idx0 = FIX_BTMAP_BEGIN - NR_FIX_BTMAPS*slot;
idx = idx0; idx = idx0;
while (nrpages > 0) { while (nrpages > 0) {
early_set_fixmap(idx, phys_addr); early_set_fixmap(idx, phys_addr, prot);
phys_addr += PAGE_SIZE; phys_addr += PAGE_SIZE;
--idx; --idx;
--nrpages; --nrpages;
...@@ -658,7 +707,20 @@ void __init *early_ioremap(unsigned long phys_addr, unsigned long size) ...@@ -658,7 +707,20 @@ void __init *early_ioremap(unsigned long phys_addr, unsigned long size)
if (early_ioremap_debug) if (early_ioremap_debug)
printk(KERN_CONT "%08lx + %08lx\n", offset, fix_to_virt(idx0)); printk(KERN_CONT "%08lx + %08lx\n", offset, fix_to_virt(idx0));
return (void *) (offset + fix_to_virt(idx0)); prev_map[slot] = (void *) (offset + fix_to_virt(idx0));
return prev_map[slot];
}
/* Remap an IO device */
void __init *early_ioremap(unsigned long phys_addr, unsigned long size)
{
return __early_ioremap(phys_addr, size, PAGE_KERNEL_IO);
}
/* Remap memory */
void __init *early_memremap(unsigned long phys_addr, unsigned long size)
{
return __early_ioremap(phys_addr, size, PAGE_KERNEL);
} }
void __init early_iounmap(void *addr, unsigned long size) void __init early_iounmap(void *addr, unsigned long size)
...@@ -667,15 +729,33 @@ void __init early_iounmap(void *addr, unsigned long size) ...@@ -667,15 +729,33 @@ void __init early_iounmap(void *addr, unsigned long size)
unsigned long offset; unsigned long offset;
unsigned int nrpages; unsigned int nrpages;
enum fixed_addresses idx; enum fixed_addresses idx;
int nesting; int i, slot;
slot = -1;
for (i = 0; i < FIX_BTMAPS_SLOTS; i++) {
if (prev_map[i] == addr) {
slot = i;
break;
}
}
nesting = --early_ioremap_nested; if (slot < 0) {
if (WARN_ON(nesting < 0)) printk(KERN_INFO "early_iounmap(%p, %08lx): slot not found\n",
addr, size);
WARN_ON(1);
return;
}
if (prev_size[slot] != size) {
printk(KERN_INFO "early_iounmap(%p, %08lx) [%d] size not consistent %08lx\n",
addr, size, slot, prev_size[slot]);
WARN_ON(1);
return; return;
}
if (early_ioremap_debug) { if (early_ioremap_debug) {
printk(KERN_INFO "early_iounmap(%p, %08lx) [%d]\n", addr, printk(KERN_INFO "early_iounmap(%p, %08lx) [%d]\n", addr,
size, nesting); size, slot);
dump_stack(); dump_stack();
} }
...@@ -687,12 +767,13 @@ void __init early_iounmap(void *addr, unsigned long size) ...@@ -687,12 +767,13 @@ void __init early_iounmap(void *addr, unsigned long size)
offset = virt_addr & ~PAGE_MASK; offset = virt_addr & ~PAGE_MASK;
nrpages = PAGE_ALIGN(offset + size - 1) >> PAGE_SHIFT; nrpages = PAGE_ALIGN(offset + size - 1) >> PAGE_SHIFT;
idx = FIX_BTMAP_BEGIN - NR_FIX_BTMAPS*nesting; idx = FIX_BTMAP_BEGIN - NR_FIX_BTMAPS*slot;
while (nrpages > 0) { while (nrpages > 0) {
early_clear_fixmap(idx); early_clear_fixmap(idx);
--idx; --idx;
--nrpages; --nrpages;
} }
prev_map[slot] = 0;
} }
void __this_fixmap_does_not_exist(void) void __this_fixmap_does_not_exist(void)
......
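With the nesting counter gone, early_ioremap()/early_iounmap() now pair through a small fixed slot table, so mappings can be released in any order rather than strictly LIFO. A minimal userspace sketch of that scheme (slot count and names are illustrative):

#include <stdio.h>

#define SLOTS 4
static void *slot_map[SLOTS];

static int slot_alloc(void *mapping)
{
	for (int i = 0; i < SLOTS; i++)
		if (!slot_map[i]) {
			slot_map[i] = mapping;
			return i;
		}
	return -1;		/* no free slot: caller must bail out */
}

static int slot_free(void *mapping)
{
	for (int i = 0; i < SLOTS; i++)
		if (slot_map[i] == mapping) {
			slot_map[i] = NULL;
			return i;
		}
	return -1;		/* not found: mismatched unmap */
}

int main(void)
{
	int a = slot_alloc((void *)0x1000);
	int b = slot_alloc((void *)0x2000);

	slot_free((void *)0x1000);	/* out-of-order release now works */
	printf("a=%d b=%d\n", a, b);
	return 0;
}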
...@@ -138,7 +138,7 @@ acpi_numa_processor_affinity_init(struct acpi_srat_cpu_affinity *pa) ...@@ -138,7 +138,7 @@ acpi_numa_processor_affinity_init(struct acpi_srat_cpu_affinity *pa)
return; return;
} }
if (is_uv_system()) if (get_uv_system_type() >= UV_X2APIC)
apic_id = (pa->apic_id << 8) | pa->local_sapic_eid; apic_id = (pa->apic_id << 8) | pa->local_sapic_eid;
else else
apic_id = pa->apic_id; apic_id = pa->apic_id;
......
...@@ -7,6 +7,6 @@ DRIVER_OBJS = $(addprefix ../../../drivers/oprofile/, \ ...@@ -7,6 +7,6 @@ DRIVER_OBJS = $(addprefix ../../../drivers/oprofile/, \
timer_int.o ) timer_int.o )
oprofile-y := $(DRIVER_OBJS) init.o backtrace.o oprofile-y := $(DRIVER_OBJS) init.o backtrace.o
oprofile-$(CONFIG_X86_LOCAL_APIC) += nmi_int.o op_model_athlon.o \ oprofile-$(CONFIG_X86_LOCAL_APIC) += nmi_int.o op_model_amd.o \
op_model_ppro.o op_model_p4.o op_model_ppro.o op_model_p4.o
oprofile-$(CONFIG_X86_IO_APIC) += nmi_timer_int.o oprofile-$(CONFIG_X86_IO_APIC) += nmi_timer_int.o
/** /**
* @file nmi_int.c * @file nmi_int.c
* *
* @remark Copyright 2002 OProfile authors * @remark Copyright 2002-2008 OProfile authors
* @remark Read the file COPYING * @remark Read the file COPYING
* *
* @author John Levon <levon@movementarian.org> * @author John Levon <levon@movementarian.org>
* @author Robert Richter <robert.richter@amd.com>
*/ */
#include <linux/init.h> #include <linux/init.h>
...@@ -439,6 +440,7 @@ int __init op_nmi_init(struct oprofile_operations *ops) ...@@ -439,6 +440,7 @@ int __init op_nmi_init(struct oprofile_operations *ops)
__u8 vendor = boot_cpu_data.x86_vendor; __u8 vendor = boot_cpu_data.x86_vendor;
__u8 family = boot_cpu_data.x86; __u8 family = boot_cpu_data.x86;
char *cpu_type; char *cpu_type;
int ret = 0;
if (!cpu_has_apic) if (!cpu_has_apic)
return -ENODEV; return -ENODEV;
...@@ -451,19 +453,23 @@ int __init op_nmi_init(struct oprofile_operations *ops) ...@@ -451,19 +453,23 @@ int __init op_nmi_init(struct oprofile_operations *ops)
default: default:
return -ENODEV; return -ENODEV;
case 6: case 6:
model = &op_athlon_spec; model = &op_amd_spec;
cpu_type = "i386/athlon"; cpu_type = "i386/athlon";
break; break;
case 0xf: case 0xf:
model = &op_athlon_spec; model = &op_amd_spec;
/* Actually it could be i386/hammer too, but give /* Actually it could be i386/hammer too, but give
user space a consistent name. */ user space a consistent name. */
cpu_type = "x86-64/hammer"; cpu_type = "x86-64/hammer";
break; break;
case 0x10: case 0x10:
model = &op_athlon_spec; model = &op_amd_spec;
cpu_type = "x86-64/family10"; cpu_type = "x86-64/family10";
break; break;
case 0x11:
model = &op_amd_spec;
cpu_type = "x86-64/family11h";
break;
} }
break; break;
...@@ -490,17 +496,24 @@ int __init op_nmi_init(struct oprofile_operations *ops) ...@@ -490,17 +496,24 @@ int __init op_nmi_init(struct oprofile_operations *ops)
return -ENODEV; return -ENODEV;
} }
init_sysfs();
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
register_cpu_notifier(&oprofile_cpu_nb); register_cpu_notifier(&oprofile_cpu_nb);
#endif #endif
using_nmi = 1; /* default values, can be overwritten by model */
ops->create_files = nmi_create_files; ops->create_files = nmi_create_files;
ops->setup = nmi_setup; ops->setup = nmi_setup;
ops->shutdown = nmi_shutdown; ops->shutdown = nmi_shutdown;
ops->start = nmi_start; ops->start = nmi_start;
ops->stop = nmi_stop; ops->stop = nmi_stop;
ops->cpu_type = cpu_type; ops->cpu_type = cpu_type;
if (model->init)
ret = model->init(ops);
if (ret)
return ret;
init_sysfs();
using_nmi = 1;
printk(KERN_INFO "oprofile: using NMI interrupt.\n"); printk(KERN_INFO "oprofile: using NMI interrupt.\n");
return 0; return 0;
} }
...@@ -513,4 +526,6 @@ void op_nmi_exit(void) ...@@ -513,4 +526,6 @@ void op_nmi_exit(void)
unregister_cpu_notifier(&oprofile_cpu_nb); unregister_cpu_notifier(&oprofile_cpu_nb);
#endif #endif
} }
if (model->exit)
model->exit();
} }
/* /*
* @file op_model_athlon.h * @file op_model_amd.c
* athlon / K7 / K8 / Family 10h model-specific MSR operations * athlon / K7 / K8 / Family 10h model-specific MSR operations
* *
* @remark Copyright 2002 OProfile authors * @remark Copyright 2002-2008 OProfile authors
* @remark Read the file COPYING * @remark Read the file COPYING
* *
* @author John Levon * @author John Levon
* @author Philippe Elie * @author Philippe Elie
* @author Graydon Hoare * @author Graydon Hoare
*/ * @author Robert Richter <robert.richter@amd.com>
* @author Barry Kasindorf
*/
#include <linux/oprofile.h> #include <linux/oprofile.h>
#include <linux/device.h>
#include <linux/pci.h>
#include <asm/ptrace.h> #include <asm/ptrace.h>
#include <asm/msr.h> #include <asm/msr.h>
#include <asm/nmi.h> #include <asm/nmi.h>
...@@ -44,7 +49,83 @@ ...@@ -44,7 +49,83 @@
static unsigned long reset_value[NUM_COUNTERS]; static unsigned long reset_value[NUM_COUNTERS];
static void athlon_fill_in_addresses(struct op_msrs * const msrs) #ifdef CONFIG_OPROFILE_IBS
/* IbsFetchCtl bits/masks */
#define IBS_FETCH_HIGH_VALID_BIT (1UL << 17) /* bit 49 */
#define IBS_FETCH_HIGH_ENABLE (1UL << 16) /* bit 48 */
#define IBS_FETCH_LOW_MAX_CNT_MASK 0x0000FFFFUL /* MaxCnt mask */
/* IbsOpCtl bits */
#define IBS_OP_LOW_VALID_BIT (1ULL<<18) /* bit 18 */
#define IBS_OP_LOW_ENABLE (1ULL<<17) /* bit 17 */
/* Codes used in cpu_buffer.c */
/* This produces duplicate code; it needs to be fixed */
#define IBS_FETCH_BEGIN 3
#define IBS_OP_BEGIN 4
/* The function interface needs to be fixed; something like an "add
data" operation should then be added to linux/oprofile.h. */
extern void oprofile_add_ibs_sample(struct pt_regs *const regs,
unsigned int * const ibs_sample, u8 code);
struct ibs_fetch_sample {
/* MSRC001_1031 IBS Fetch Linear Address Register */
unsigned int ibs_fetch_lin_addr_low;
unsigned int ibs_fetch_lin_addr_high;
/* MSRC001_1030 IBS Fetch Control Register */
unsigned int ibs_fetch_ctl_low;
unsigned int ibs_fetch_ctl_high;
/* MSRC001_1032 IBS Fetch Physical Address Register */
unsigned int ibs_fetch_phys_addr_low;
unsigned int ibs_fetch_phys_addr_high;
};
struct ibs_op_sample {
/* MSRC001_1034 IBS Op Logical Address Register (IbsRIP) */
unsigned int ibs_op_rip_low;
unsigned int ibs_op_rip_high;
/* MSRC001_1035 IBS Op Data Register */
unsigned int ibs_op_data1_low;
unsigned int ibs_op_data1_high;
/* MSRC001_1036 IBS Op Data 2 Register */
unsigned int ibs_op_data2_low;
unsigned int ibs_op_data2_high;
/* MSRC001_1037 IBS Op Data 3 Register */
unsigned int ibs_op_data3_low;
unsigned int ibs_op_data3_high;
/* MSRC001_1038 IBS DC Linear Address Register (IbsDcLinAd) */
unsigned int ibs_dc_linear_low;
unsigned int ibs_dc_linear_high;
/* MSRC001_1039 IBS DC Physical Address Register (IbsDcPhysAd) */
unsigned int ibs_dc_phys_low;
unsigned int ibs_dc_phys_high;
};
/*
* uninitialize the APIC for the IBS interrupts if needed on AMD Family10h+
*/
static void clear_ibs_nmi(void);
static int ibs_allowed; /* AMD Family10h and later */
struct op_ibs_config {
unsigned long op_enabled;
unsigned long fetch_enabled;
unsigned long max_cnt_fetch;
unsigned long max_cnt_op;
unsigned long rand_en;
unsigned long dispatched_ops;
};
static struct op_ibs_config ibs_config;
#endif
/* functions for op_amd_spec */
static void op_amd_fill_in_addresses(struct op_msrs * const msrs)
{ {
int i; int i;
...@@ -64,7 +145,7 @@ static void athlon_fill_in_addresses(struct op_msrs * const msrs) ...@@ -64,7 +145,7 @@ static void athlon_fill_in_addresses(struct op_msrs * const msrs)
} }
static void athlon_setup_ctrs(struct op_msrs const * const msrs) static void op_amd_setup_ctrs(struct op_msrs const * const msrs)
{ {
unsigned int low, high; unsigned int low, high;
int i; int i;
...@@ -112,8 +193,84 @@ static void athlon_setup_ctrs(struct op_msrs const * const msrs) ...@@ -112,8 +193,84 @@ static void athlon_setup_ctrs(struct op_msrs const * const msrs)
} }
} }
#ifdef CONFIG_OPROFILE_IBS
static int athlon_check_ctrs(struct pt_regs * const regs, static inline int
op_amd_handle_ibs(struct pt_regs * const regs,
struct op_msrs const * const msrs)
{
unsigned int low, high;
struct ibs_fetch_sample ibs_fetch;
struct ibs_op_sample ibs_op;
if (!ibs_allowed)
return 1;
if (ibs_config.fetch_enabled) {
rdmsr(MSR_AMD64_IBSFETCHCTL, low, high);
if (high & IBS_FETCH_HIGH_VALID_BIT) {
ibs_fetch.ibs_fetch_ctl_high = high;
ibs_fetch.ibs_fetch_ctl_low = low;
rdmsr(MSR_AMD64_IBSFETCHLINAD, low, high);
ibs_fetch.ibs_fetch_lin_addr_high = high;
ibs_fetch.ibs_fetch_lin_addr_low = low;
rdmsr(MSR_AMD64_IBSFETCHPHYSAD, low, high);
ibs_fetch.ibs_fetch_phys_addr_high = high;
ibs_fetch.ibs_fetch_phys_addr_low = low;
oprofile_add_ibs_sample(regs,
(unsigned int *)&ibs_fetch,
IBS_FETCH_BEGIN);
/* reenable the IRQ */
rdmsr(MSR_AMD64_IBSFETCHCTL, low, high);
high &= ~IBS_FETCH_HIGH_VALID_BIT;
high |= IBS_FETCH_HIGH_ENABLE;
low &= IBS_FETCH_LOW_MAX_CNT_MASK;
wrmsr(MSR_AMD64_IBSFETCHCTL, low, high);
}
}
if (ibs_config.op_enabled) {
rdmsr(MSR_AMD64_IBSOPCTL, low, high);
if (low & IBS_OP_LOW_VALID_BIT) {
rdmsr(MSR_AMD64_IBSOPRIP, low, high);
ibs_op.ibs_op_rip_low = low;
ibs_op.ibs_op_rip_high = high;
rdmsr(MSR_AMD64_IBSOPDATA, low, high);
ibs_op.ibs_op_data1_low = low;
ibs_op.ibs_op_data1_high = high;
rdmsr(MSR_AMD64_IBSOPDATA2, low, high);
ibs_op.ibs_op_data2_low = low;
ibs_op.ibs_op_data2_high = high;
rdmsr(MSR_AMD64_IBSOPDATA3, low, high);
ibs_op.ibs_op_data3_low = low;
ibs_op.ibs_op_data3_high = high;
rdmsr(MSR_AMD64_IBSDCLINAD, low, high);
ibs_op.ibs_dc_linear_low = low;
ibs_op.ibs_dc_linear_high = high;
rdmsr(MSR_AMD64_IBSDCPHYSAD, low, high);
ibs_op.ibs_dc_phys_low = low;
ibs_op.ibs_dc_phys_high = high;
/* reenable the IRQ */
oprofile_add_ibs_sample(regs,
(unsigned int *)&ibs_op,
IBS_OP_BEGIN);
rdmsr(MSR_AMD64_IBSOPCTL, low, high);
high = 0;
low &= ~IBS_OP_LOW_VALID_BIT;
low |= IBS_OP_LOW_ENABLE;
wrmsr(MSR_AMD64_IBSOPCTL, low, high);
}
}
return 1;
}
#endif
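The fetch re-arm sequence above is pure bit surgery on the two MSR halves: acknowledge the sample, re-enable sampling, and keep only the MaxCnt field. Modeled in userspace with the masks from this patch (the MSR contents are fabricated):

#include <stdio.h>

#define IBS_FETCH_HIGH_VALID_BIT	(1UL << 17)	/* bit 49 */
#define IBS_FETCH_HIGH_ENABLE		(1UL << 16)	/* bit 48 */
#define IBS_FETCH_LOW_MAX_CNT_MASK	0x0000FFFFUL

int main(void)
{
	unsigned long high = IBS_FETCH_HIGH_VALID_BIT | 0x3; /* sample taken */
	unsigned long low  = 0x1234ABCDUL;		     /* counter state */

	high &= ~IBS_FETCH_HIGH_VALID_BIT;	/* acknowledge the sample */
	high |= IBS_FETCH_HIGH_ENABLE;		/* re-arm fetch sampling */
	low  &= IBS_FETCH_LOW_MAX_CNT_MASK;	/* keep only MaxCnt */

	printf("high=%#lx low=%#lx\n", high, low);
	return 0;
}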
static int op_amd_check_ctrs(struct pt_regs * const regs,
struct op_msrs const * const msrs) struct op_msrs const * const msrs)
{ {
unsigned int low, high; unsigned int low, high;
...@@ -129,12 +286,15 @@ static int athlon_check_ctrs(struct pt_regs * const regs, ...@@ -129,12 +286,15 @@ static int athlon_check_ctrs(struct pt_regs * const regs,
} }
} }
#ifdef CONFIG_OPROFILE_IBS
op_amd_handle_ibs(regs, msrs);
#endif
/* See op_model_ppro.c */ /* See op_model_ppro.c */
return 1; return 1;
} }
static void op_amd_start(struct op_msrs const * const msrs)
static void athlon_start(struct op_msrs const * const msrs)
{ {
unsigned int low, high; unsigned int low, high;
int i; int i;
...@@ -145,10 +305,24 @@ static void athlon_start(struct op_msrs const * const msrs) ...@@ -145,10 +305,24 @@ static void athlon_start(struct op_msrs const * const msrs)
CTRL_WRITE(low, high, msrs, i); CTRL_WRITE(low, high, msrs, i);
} }
} }
#ifdef CONFIG_OPROFILE_IBS
if (ibs_allowed && ibs_config.fetch_enabled) {
low = (ibs_config.max_cnt_fetch >> 4) & 0xFFFF;
high = IBS_FETCH_HIGH_ENABLE;
wrmsr(MSR_AMD64_IBSFETCHCTL, low, high);
}
if (ibs_allowed && ibs_config.op_enabled) {
low = ((ibs_config.max_cnt_op >> 4) & 0xFFFF) + IBS_OP_LOW_ENABLE;
high = 0;
wrmsr(MSR_AMD64_IBSOPCTL, low, high);
}
#endif
} }
static void athlon_stop(struct op_msrs const * const msrs) static void op_amd_stop(struct op_msrs const * const msrs)
{ {
unsigned int low, high; unsigned int low, high;
int i; int i;
...@@ -162,9 +336,23 @@ static void athlon_stop(struct op_msrs const * const msrs) ...@@ -162,9 +336,23 @@ static void athlon_stop(struct op_msrs const * const msrs)
CTRL_SET_INACTIVE(low); CTRL_SET_INACTIVE(low);
CTRL_WRITE(low, high, msrs, i); CTRL_WRITE(low, high, msrs, i);
} }
#ifdef CONFIG_OPROFILE_IBS
if (ibs_allowed && ibs_config.fetch_enabled) {
low = 0; /* clear max count and enable */
high = 0;
wrmsr(MSR_AMD64_IBSFETCHCTL, low, high);
}
if (ibs_allowed && ibs_config.op_enabled) {
low = 0; /* clear max count and enable */
high = 0;
wrmsr(MSR_AMD64_IBSOPCTL, low, high);
}
#endif
} }
static void athlon_shutdown(struct op_msrs const * const msrs) static void op_amd_shutdown(struct op_msrs const * const msrs)
{ {
int i; int i;
...@@ -178,13 +366,178 @@ static void athlon_shutdown(struct op_msrs const * const msrs) ...@@ -178,13 +366,178 @@ static void athlon_shutdown(struct op_msrs const * const msrs)
} }
} }
struct op_x86_model_spec const op_athlon_spec = { #ifndef CONFIG_OPROFILE_IBS
/* no IBS support */
static int op_amd_init(struct oprofile_operations *ops)
{
return 0;
}
static void op_amd_exit(void) {}
#else
static u8 ibs_eilvt_off;
static inline void apic_init_ibs_nmi_per_cpu(void *arg)
{
ibs_eilvt_off = setup_APIC_eilvt_ibs(0, APIC_EILVT_MSG_NMI, 0);
}
static inline void apic_clear_ibs_nmi_per_cpu(void *arg)
{
setup_APIC_eilvt_ibs(0, APIC_EILVT_MSG_FIX, 1);
}
static int pfm_amd64_setup_eilvt(void)
{
#define IBSCTL_LVTOFFSETVAL (1 << 8)
#define IBSCTL 0x1cc
struct pci_dev *cpu_cfg;
int nodes;
u32 value = 0;
/* per CPU setup */
on_each_cpu(apic_init_ibs_nmi_per_cpu, NULL, 1);
nodes = 0;
cpu_cfg = NULL;
do {
cpu_cfg = pci_get_device(PCI_VENDOR_ID_AMD,
PCI_DEVICE_ID_AMD_10H_NB_MISC,
cpu_cfg);
if (!cpu_cfg)
break;
++nodes;
pci_write_config_dword(cpu_cfg, IBSCTL, ibs_eilvt_off
| IBSCTL_LVTOFFSETVAL);
pci_read_config_dword(cpu_cfg, IBSCTL, &value);
if (value != (ibs_eilvt_off | IBSCTL_LVTOFFSETVAL)) {
printk(KERN_DEBUG "Failed to setup IBS LVT offset, "
"IBSCTL = 0x%08x", value);
return 1;
}
} while (1);
if (!nodes) {
printk(KERN_DEBUG "No CPU node configured for IBS");
return 1;
}
#ifdef CONFIG_NUMA
/* Sanity check */
/* Works only on 64-bit with a proper NUMA implementation. */
if (nodes != num_possible_nodes()) {
printk(KERN_DEBUG "Failed to setup CPU node(s) for IBS, "
"found: %d, expected %d",
nodes, num_possible_nodes());
return 1;
}
#endif
return 0;
}
/*
* initialize the APIC for the IBS interrupts
* if available (AMD Family10h rev B0 and later)
*/
static void setup_ibs(void)
{
ibs_allowed = boot_cpu_has(X86_FEATURE_IBS);
if (!ibs_allowed)
return;
if (pfm_amd64_setup_eilvt()) {
ibs_allowed = 0;
return;
}
printk(KERN_INFO "oprofile: AMD IBS detected\n");
}
/*
* uninitialize the APIC for the IBS interrupts if needed on AMD Family10h
* rev B0 and later */
static void clear_ibs_nmi(void)
{
if (ibs_allowed)
on_each_cpu(apic_clear_ibs_nmi_per_cpu, NULL, 1);
}
static int (*create_arch_files)(struct super_block * sb, struct dentry * root);
static int setup_ibs_files(struct super_block * sb, struct dentry * root)
{
char buf[12];
struct dentry *dir;
int ret = 0;
/* architecture specific files */
if (create_arch_files)
ret = create_arch_files(sb, root);
if (ret)
return ret;
if (!ibs_allowed)
return ret;
/* model specific files */
/* setup some reasonable defaults */
ibs_config.max_cnt_fetch = 250000;
ibs_config.fetch_enabled = 0;
ibs_config.max_cnt_op = 250000;
ibs_config.op_enabled = 0;
ibs_config.dispatched_ops = 1;
snprintf(buf, sizeof(buf), "ibs_fetch");
dir = oprofilefs_mkdir(sb, root, buf);
oprofilefs_create_ulong(sb, dir, "rand_enable",
&ibs_config.rand_en);
oprofilefs_create_ulong(sb, dir, "enable",
&ibs_config.fetch_enabled);
oprofilefs_create_ulong(sb, dir, "max_count",
&ibs_config.max_cnt_fetch);
snprintf(buf, sizeof(buf), "ibs_uops");
dir = oprofilefs_mkdir(sb, root, buf);
oprofilefs_create_ulong(sb, dir, "enable",
&ibs_config.op_enabled);
oprofilefs_create_ulong(sb, dir, "max_count",
&ibs_config.max_cnt_op);
oprofilefs_create_ulong(sb, dir, "dispatched_ops",
&ibs_config.dispatched_ops);
return 0;
}
static int op_amd_init(struct oprofile_operations *ops)
{
setup_ibs();
create_arch_files = ops->create_files;
ops->create_files = setup_ibs_files;
return 0;
}
static void op_amd_exit(void)
{
clear_ibs_nmi();
}
#endif
struct op_x86_model_spec const op_amd_spec = {
.init = op_amd_init,
.exit = op_amd_exit,
.num_counters = NUM_COUNTERS, .num_counters = NUM_COUNTERS,
.num_controls = NUM_CONTROLS, .num_controls = NUM_CONTROLS,
.fill_in_addresses = &athlon_fill_in_addresses, .fill_in_addresses = &op_amd_fill_in_addresses,
.setup_ctrs = &athlon_setup_ctrs, .setup_ctrs = &op_amd_setup_ctrs,
.check_ctrs = &athlon_check_ctrs, .check_ctrs = &op_amd_check_ctrs,
.start = &athlon_start, .start = &op_amd_start,
.stop = &athlon_stop, .stop = &op_amd_stop,
.shutdown = &athlon_shutdown .shutdown = &op_amd_shutdown
}; };
...@@ -32,6 +32,8 @@ struct pt_regs; ...@@ -32,6 +32,8 @@ struct pt_regs;
* various x86 CPU models' perfctr support. * various x86 CPU models' perfctr support.
*/ */
struct op_x86_model_spec { struct op_x86_model_spec {
int (*init)(struct oprofile_operations *ops);
void (*exit)(void);
unsigned int const num_counters; unsigned int const num_counters;
unsigned int const num_controls; unsigned int const num_controls;
void (*fill_in_addresses)(struct op_msrs * const msrs); void (*fill_in_addresses)(struct op_msrs * const msrs);
...@@ -46,6 +48,6 @@ struct op_x86_model_spec { ...@@ -46,6 +48,6 @@ struct op_x86_model_spec {
extern struct op_x86_model_spec const op_ppro_spec; extern struct op_x86_model_spec const op_ppro_spec;
extern struct op_x86_model_spec const op_p4_spec; extern struct op_x86_model_spec const op_p4_spec;
extern struct op_x86_model_spec const op_p4_ht2_spec; extern struct op_x86_model_spec const op_p4_ht2_spec;
extern struct op_x86_model_spec const op_athlon_spec; extern struct op_x86_model_spec const op_amd_spec;
#endif /* OP_X86_MODEL_H */ #endif /* OP_X86_MODEL_H */
...@@ -511,3 +511,31 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AMD, 0x1201, fam10h_pci_cfg_space_size); ...@@ -511,3 +511,31 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AMD, 0x1201, fam10h_pci_cfg_space_size);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AMD, 0x1202, fam10h_pci_cfg_space_size); DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AMD, 0x1202, fam10h_pci_cfg_space_size);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AMD, 0x1203, fam10h_pci_cfg_space_size); DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AMD, 0x1203, fam10h_pci_cfg_space_size);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AMD, 0x1204, fam10h_pci_cfg_space_size); DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AMD, 0x1204, fam10h_pci_cfg_space_size);
/*
* SB600: Disable BAR1 on device 14.0 to avoid HPET resources from
* confusing the PCI engine:
*/
static void sb600_disable_hpet_bar(struct pci_dev *dev)
{
u8 val;
/*
* The SB600 and SB700 both share the same device
* ID, but the PM register 0x55 does something different
* for the SB700, so make sure we are dealing with the
* SB600 before touching the bit:
*/
pci_read_config_byte(dev, 0x08, &val);
if (val < 0x2F) {
outb(0x55, 0xCD6);
val = inb(0xCD7);
/* Set bit 7 in PM register 0x55 */
outb(0x55, 0xCD6);
outb(val | 0x80, 0xCD7);
}
}
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_ATI, 0x4385, sb600_disable_hpet_bar);
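The quirk drives the SB600's PM index/data pair: write the register index to port 0xCD6, then read or write its value through 0xCD7. A userspace sketch of the same idiom -- it needs root and ioperm(), and would poke real hardware, so treat it as illustration only:

#include <stdio.h>
#include <sys/io.h>

int main(void)
{
	if (ioperm(0xCD6, 2, 1)) {	/* map the two PM ports */
		perror("ioperm");
		return 1;
	}
	outb(0x55, 0xCD6);		/* select PM register 0x55 */
	unsigned char val = inb(0xCD7);	/* read its current value */
	outb(0x55, 0xCD6);
	outb(val | 0x80, 0xCD7);	/* set bit 7, as the quirk does */
	return 0;
}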
...@@ -53,6 +53,11 @@ ...@@ -53,6 +53,11 @@
#define HPET_RANGE_SIZE 1024 /* from HPET spec */ #define HPET_RANGE_SIZE 1024 /* from HPET spec */
/* WARNING -- don't get confused. These macros are never used
* to write the (single) counter, and rarely to read it.
* They're badly named and should be renamed someday.
*/
#if BITS_PER_LONG == 64 #if BITS_PER_LONG == 64
#define write_counter(V, MC) writeq(V, MC) #define write_counter(V, MC) writeq(V, MC)
#define read_counter(MC) readq(MC) #define read_counter(MC) readq(MC)
...@@ -77,7 +82,7 @@ static struct clocksource clocksource_hpet = { ...@@ -77,7 +82,7 @@ static struct clocksource clocksource_hpet = {
.rating = 250, .rating = 250,
.read = read_hpet, .read = read_hpet,
.mask = CLOCKSOURCE_MASK(64), .mask = CLOCKSOURCE_MASK(64),
.mult = 0, /*to be caluclated*/ .mult = 0, /* to be calculated */
.shift = 10, .shift = 10,
.flags = CLOCK_SOURCE_IS_CONTINUOUS, .flags = CLOCK_SOURCE_IS_CONTINUOUS,
}; };
...@@ -86,8 +91,6 @@ static struct clocksource *hpet_clocksource; ...@@ -86,8 +91,6 @@ static struct clocksource *hpet_clocksource;
/* A lock for concurrent access by app and isr hpet activity. */ /* A lock for concurrent access by app and isr hpet activity. */
static DEFINE_SPINLOCK(hpet_lock); static DEFINE_SPINLOCK(hpet_lock);
/* A lock for concurrent intermodule access to hpet and isr hpet activity. */
static DEFINE_SPINLOCK(hpet_task_lock);
#define HPET_DEV_NAME (7) #define HPET_DEV_NAME (7)
...@@ -99,7 +102,6 @@ struct hpet_dev { ...@@ -99,7 +102,6 @@ struct hpet_dev {
unsigned long hd_irqdata; unsigned long hd_irqdata;
wait_queue_head_t hd_waitqueue; wait_queue_head_t hd_waitqueue;
struct fasync_struct *hd_async_queue; struct fasync_struct *hd_async_queue;
struct hpet_task *hd_task;
unsigned int hd_flags; unsigned int hd_flags;
unsigned int hd_irq; unsigned int hd_irq;
unsigned int hd_hdwirq; unsigned int hd_hdwirq;
...@@ -173,11 +175,6 @@ static irqreturn_t hpet_interrupt(int irq, void *data) ...@@ -173,11 +175,6 @@ static irqreturn_t hpet_interrupt(int irq, void *data)
writel(isr, &devp->hd_hpet->hpet_isr); writel(isr, &devp->hd_hpet->hpet_isr);
spin_unlock(&hpet_lock); spin_unlock(&hpet_lock);
spin_lock(&hpet_task_lock);
if (devp->hd_task)
devp->hd_task->ht_func(devp->hd_task->ht_data);
spin_unlock(&hpet_task_lock);
wake_up_interruptible(&devp->hd_waitqueue); wake_up_interruptible(&devp->hd_waitqueue);
kill_fasync(&devp->hd_async_queue, SIGIO, POLL_IN); kill_fasync(&devp->hd_async_queue, SIGIO, POLL_IN);
...@@ -185,6 +182,67 @@ static irqreturn_t hpet_interrupt(int irq, void *data) ...@@ -185,6 +182,67 @@ static irqreturn_t hpet_interrupt(int irq, void *data)
return IRQ_HANDLED; return IRQ_HANDLED;
} }
static void hpet_timer_set_irq(struct hpet_dev *devp)
{
unsigned long v;
int irq, gsi;
struct hpet_timer __iomem *timer;
spin_lock_irq(&hpet_lock);
if (devp->hd_hdwirq) {
spin_unlock_irq(&hpet_lock);
return;
}
timer = devp->hd_timer;
/* we prefer level triggered mode */
v = readl(&timer->hpet_config);
if (!(v & Tn_INT_TYPE_CNF_MASK)) {
v |= Tn_INT_TYPE_CNF_MASK;
writel(v, &timer->hpet_config);
}
spin_unlock_irq(&hpet_lock);
v = (readq(&timer->hpet_config) & Tn_INT_ROUTE_CAP_MASK) >>
Tn_INT_ROUTE_CAP_SHIFT;
/*
* In PIC mode, skip IRQ0-4, IRQ6-9 and IRQ12-15, which are always used by
* legacy devices. In IO APIC mode, we skip all the legacy IRQs.
*/
if (acpi_irq_model == ACPI_IRQ_MODEL_PIC)
v &= ~0xf3df;
else
v &= ~0xffff;
for (irq = find_first_bit(&v, HPET_MAX_IRQ); irq < HPET_MAX_IRQ;
irq = find_next_bit(&v, HPET_MAX_IRQ, 1 + irq)) {
if (irq >= NR_IRQS) {
irq = HPET_MAX_IRQ;
break;
}
gsi = acpi_register_gsi(irq, ACPI_LEVEL_SENSITIVE,
ACPI_ACTIVE_LOW);
if (gsi > 0)
break;
/* FIXME: Setup interrupt source table */
}
if (irq < HPET_MAX_IRQ) {
spin_lock_irq(&hpet_lock);
v = readl(&timer->hpet_config);
v |= irq << Tn_INT_ROUTE_CNF_SHIFT;
writel(v, &timer->hpet_config);
devp->hd_hdwirq = gsi;
spin_unlock_irq(&hpet_lock);
}
return;
}
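The ~0xf3df mask above encodes the comment directly: of the low 16 routing bits, only IRQs 5, 10 and 11 survive in PIC mode. A quick check:

#include <stdio.h>

int main(void)
{
	unsigned long v = 0xffffUL;	/* pretend every route is supported */

	v &= ~0xf3dfUL;			/* PIC mode: drop the legacy IRQs */
	for (int irq = 0; irq < 16; irq++)
		if (v & (1UL << irq))
			printf("IRQ %d usable\n", irq);	/* 5, 10, 11 */
	return 0;
}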
static int hpet_open(struct inode *inode, struct file *file) static int hpet_open(struct inode *inode, struct file *file)
{ {
struct hpet_dev *devp; struct hpet_dev *devp;
...@@ -199,8 +257,7 @@ static int hpet_open(struct inode *inode, struct file *file) ...@@ -199,8 +257,7 @@ static int hpet_open(struct inode *inode, struct file *file)
for (devp = NULL, hpetp = hpets; hpetp && !devp; hpetp = hpetp->hp_next) for (devp = NULL, hpetp = hpets; hpetp && !devp; hpetp = hpetp->hp_next)
for (i = 0; i < hpetp->hp_ntimer; i++) for (i = 0; i < hpetp->hp_ntimer; i++)
if (hpetp->hp_dev[i].hd_flags & HPET_OPEN if (hpetp->hp_dev[i].hd_flags & HPET_OPEN)
|| hpetp->hp_dev[i].hd_task)
continue; continue;
else { else {
devp = &hpetp->hp_dev[i]; devp = &hpetp->hp_dev[i];
...@@ -219,6 +276,8 @@ static int hpet_open(struct inode *inode, struct file *file) ...@@ -219,6 +276,8 @@ static int hpet_open(struct inode *inode, struct file *file)
spin_unlock_irq(&hpet_lock); spin_unlock_irq(&hpet_lock);
unlock_kernel(); unlock_kernel();
hpet_timer_set_irq(devp);
return 0; return 0;
} }
...@@ -441,7 +500,11 @@ static int hpet_ioctl_ieon(struct hpet_dev *devp) ...@@ -441,7 +500,11 @@ static int hpet_ioctl_ieon(struct hpet_dev *devp)
devp->hd_irq = irq; devp->hd_irq = irq;
t = devp->hd_ireqfreq; t = devp->hd_ireqfreq;
v = readq(&timer->hpet_config); v = readq(&timer->hpet_config);
g = v | Tn_INT_ENB_CNF_MASK;
/* 64-bit comparators are not yet supported through the ioctls,
* so force this into 32-bit mode if it supports both modes
*/
g = v | Tn_32MODE_CNF_MASK | Tn_INT_ENB_CNF_MASK;
if (devp->hd_flags & HPET_PERIODIC) { if (devp->hd_flags & HPET_PERIODIC) {
write_counter(t, &timer->hpet_compare); write_counter(t, &timer->hpet_compare);
...@@ -451,6 +514,12 @@ static int hpet_ioctl_ieon(struct hpet_dev *devp) ...@@ -451,6 +514,12 @@ static int hpet_ioctl_ieon(struct hpet_dev *devp)
v |= Tn_VAL_SET_CNF_MASK; v |= Tn_VAL_SET_CNF_MASK;
writeq(v, &timer->hpet_config); writeq(v, &timer->hpet_config);
local_irq_save(flags); local_irq_save(flags);
/* NOTE: what we modify here is a hidden accumulator
* register supported by periodic-capable comparators.
* We never want to modify the (single) counter; that
* would affect all the comparators.
*/
m = read_counter(&hpet->hpet_mc); m = read_counter(&hpet->hpet_mc);
write_counter(t + m + hpetp->hp_delta, &timer->hpet_compare); write_counter(t + m + hpetp->hp_delta, &timer->hpet_compare);
} else { } else {
...@@ -604,57 +673,6 @@ static int hpet_is_known(struct hpet_data *hdp) ...@@ -604,57 +673,6 @@ static int hpet_is_known(struct hpet_data *hdp)
return 0; return 0;
} }
static inline int hpet_tpcheck(struct hpet_task *tp)
{
struct hpet_dev *devp;
struct hpets *hpetp;
devp = tp->ht_opaque;
if (!devp)
return -ENXIO;
for (hpetp = hpets; hpetp; hpetp = hpetp->hp_next)
if (devp >= hpetp->hp_dev
&& devp < (hpetp->hp_dev + hpetp->hp_ntimer)
&& devp->hd_hpet == hpetp->hp_hpet)
return 0;
return -ENXIO;
}
#if 0
int hpet_unregister(struct hpet_task *tp)
{
struct hpet_dev *devp;
struct hpet_timer __iomem *timer;
int err;
if ((err = hpet_tpcheck(tp)))
return err;
spin_lock_irq(&hpet_task_lock);
spin_lock(&hpet_lock);
devp = tp->ht_opaque;
if (devp->hd_task != tp) {
spin_unlock(&hpet_lock);
spin_unlock_irq(&hpet_task_lock);
return -ENXIO;
}
timer = devp->hd_timer;
writeq((readq(&timer->hpet_config) & ~Tn_INT_ENB_CNF_MASK),
&timer->hpet_config);
devp->hd_flags &= ~(HPET_IE | HPET_PERIODIC);
devp->hd_task = NULL;
spin_unlock(&hpet_lock);
spin_unlock_irq(&hpet_task_lock);
return 0;
}
#endif /* 0 */
static ctl_table hpet_table[] = { static ctl_table hpet_table[] = {
{ {
.ctl_name = CTL_UNNUMBERED, .ctl_name = CTL_UNNUMBERED,
...@@ -746,6 +764,7 @@ int hpet_alloc(struct hpet_data *hdp) ...@@ -746,6 +764,7 @@ int hpet_alloc(struct hpet_data *hdp)
static struct hpets *last = NULL; static struct hpets *last = NULL;
unsigned long period; unsigned long period;
unsigned long long temp; unsigned long long temp;
u32 remainder;
/* /*
* hpet_alloc can be called by platform dependent code. * hpet_alloc can be called by platform dependent code.
...@@ -809,9 +828,13 @@ int hpet_alloc(struct hpet_data *hdp) ...@@ -809,9 +828,13 @@ int hpet_alloc(struct hpet_data *hdp)
printk("%s %d", i > 0 ? "," : "", hdp->hd_irq[i]); printk("%s %d", i > 0 ? "," : "", hdp->hd_irq[i]);
printk("\n"); printk("\n");
printk(KERN_INFO "hpet%u: %u %d-bit timers, %Lu Hz\n", temp = hpetp->hp_tick_freq;
remainder = do_div(temp, 1000000);
printk(KERN_INFO
"hpet%u: %u comparators, %d-bit %u.%06u MHz counter\n",
hpetp->hp_which, hpetp->hp_ntimer, hpetp->hp_which, hpetp->hp_ntimer,
cap & HPET_COUNTER_SIZE_MASK ? 64 : 32, hpetp->hp_tick_freq); cap & HPET_COUNTER_SIZE_MASK ? 64 : 32,
(unsigned) temp, remainder);
mcfg = readq(&hpet->hpet_config); mcfg = readq(&hpet->hpet_config);
if ((mcfg & HPET_ENABLE_CNF_MASK) == 0) { if ((mcfg & HPET_ENABLE_CNF_MASK) == 0) {
...@@ -874,8 +897,6 @@ static acpi_status hpet_resources(struct acpi_resource *res, void *data) ...@@ -874,8 +897,6 @@ static acpi_status hpet_resources(struct acpi_resource *res, void *data)
hdp->hd_address = ioremap(addr.minimum, addr.address_length); hdp->hd_address = ioremap(addr.minimum, addr.address_length);
if (hpet_is_known(hdp)) { if (hpet_is_known(hdp)) {
printk(KERN_DEBUG "%s: 0x%lx is busy\n",
__func__, hdp->hd_phys_address);
iounmap(hdp->hd_address); iounmap(hdp->hd_address);
return AE_ALREADY_EXISTS; return AE_ALREADY_EXISTS;
} }
...@@ -891,8 +912,6 @@ static acpi_status hpet_resources(struct acpi_resource *res, void *data) ...@@ -891,8 +912,6 @@ static acpi_status hpet_resources(struct acpi_resource *res, void *data)
HPET_RANGE_SIZE); HPET_RANGE_SIZE);
if (hpet_is_known(hdp)) { if (hpet_is_known(hdp)) {
printk(KERN_DEBUG "%s: 0x%lx is busy\n",
__func__, hdp->hd_phys_address);
iounmap(hdp->hd_address); iounmap(hdp->hd_address);
return AE_ALREADY_EXISTS; return AE_ALREADY_EXISTS;
} }
......
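The banner now prints the counter rate in MHz using do_div(), which divides the u64 in place and hands back the remainder for the fractional digits. A standalone sketch of the same formatting, using plain C division in place of do_div() and the common 14.31818 MHz HPET rate as an example:

	#include <stdio.h>
	#include <inttypes.h>

	int main(void)
	{
		uint64_t tick_freq = 14318180;		/* a common HPET rate, in Hz */
		uint32_t remainder = tick_freq % 1000000;  /* do_div() returns this */
		uint64_t mhz = tick_freq / 1000000;	/* do_div() leaves this in temp */

		/* matches: "hpet0: 3 comparators, 64-bit 14.318180 MHz counter" */
		printf("hpet%u: %u comparators, %d-bit %u.%06u MHz counter\n",
		       0, 3, 64, (unsigned)mhz, remainder);
		return 0;
	}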
...@@ -5,6 +5,7 @@ ...@@ -5,6 +5,7 @@
* @remark Read the file COPYING * @remark Read the file COPYING
* *
* @author John Levon <levon@movementarian.org> * @author John Levon <levon@movementarian.org>
* @author Barry Kasindorf
* *
* This is the core of the buffer management. Each * This is the core of the buffer management. Each
* CPU buffer is processed and entered into the * CPU buffer is processed and entered into the
...@@ -48,10 +49,11 @@ static void process_task_mortuary(void); ...@@ -48,10 +49,11 @@ static void process_task_mortuary(void);
* Can be invoked from softirq via RCU callback due to * Can be invoked from softirq via RCU callback due to
* call_rcu() of the task struct, hence the _irqsave. * call_rcu() of the task struct, hence the _irqsave.
*/ */
static int task_free_notify(struct notifier_block * self, unsigned long val, void * data) static int
task_free_notify(struct notifier_block *self, unsigned long val, void *data)
{ {
unsigned long flags; unsigned long flags;
struct task_struct * task = data; struct task_struct *task = data;
spin_lock_irqsave(&task_mortuary, flags); spin_lock_irqsave(&task_mortuary, flags);
list_add(&task->tasks, &dying_tasks); list_add(&task->tasks, &dying_tasks);
spin_unlock_irqrestore(&task_mortuary, flags); spin_unlock_irqrestore(&task_mortuary, flags);
...@@ -62,7 +64,8 @@ static int task_free_notify(struct notifier_block * self, unsigned long val, voi ...@@ -62,7 +64,8 @@ static int task_free_notify(struct notifier_block * self, unsigned long val, voi
/* The task is on its way out. A sync of the buffer means we can catch /* The task is on its way out. A sync of the buffer means we can catch
* any remaining samples for this task. * any remaining samples for this task.
*/ */
static int task_exit_notify(struct notifier_block * self, unsigned long val, void * data) static int
task_exit_notify(struct notifier_block *self, unsigned long val, void *data)
{ {
/* To avoid latency problems, we only process the current CPU, /* To avoid latency problems, we only process the current CPU,
* hoping that most samples for the task are on this CPU * hoping that most samples for the task are on this CPU
...@@ -77,11 +80,12 @@ static int task_exit_notify(struct notifier_block * self, unsigned long val, voi ...@@ -77,11 +80,12 @@ static int task_exit_notify(struct notifier_block * self, unsigned long val, voi
* we don't lose any. This does not have to be exact, it's a QoI issue * we don't lose any. This does not have to be exact, it's a QoI issue
* only. * only.
*/ */
static int munmap_notify(struct notifier_block * self, unsigned long val, void * data) static int
munmap_notify(struct notifier_block *self, unsigned long val, void *data)
{ {
unsigned long addr = (unsigned long)data; unsigned long addr = (unsigned long)data;
struct mm_struct * mm = current->mm; struct mm_struct *mm = current->mm;
struct vm_area_struct * mpnt; struct vm_area_struct *mpnt;
down_read(&mm->mmap_sem); down_read(&mm->mmap_sem);
...@@ -103,7 +107,8 @@ static int munmap_notify(struct notifier_block * self, unsigned long val, void * ...@@ -103,7 +107,8 @@ static int munmap_notify(struct notifier_block * self, unsigned long val, void *
/* We need to be told about new modules so we don't attribute to a previously /* We need to be told about new modules so we don't attribute to a previously
* loaded module, or drop the samples on the floor. * loaded module, or drop the samples on the floor.
*/ */
static int module_load_notify(struct notifier_block * self, unsigned long val, void * data) static int
module_load_notify(struct notifier_block *self, unsigned long val, void *data)
{ {
#ifdef CONFIG_MODULES #ifdef CONFIG_MODULES
if (val != MODULE_STATE_COMING) if (val != MODULE_STATE_COMING)
...@@ -208,10 +213,10 @@ static inline unsigned long fast_get_dcookie(struct path *path) ...@@ -208,10 +213,10 @@ static inline unsigned long fast_get_dcookie(struct path *path)
* not strictly necessary but allows oprofile to associate * not strictly necessary but allows oprofile to associate
* shared-library samples with particular applications * shared-library samples with particular applications
*/ */
static unsigned long get_exec_dcookie(struct mm_struct * mm) static unsigned long get_exec_dcookie(struct mm_struct *mm)
{ {
unsigned long cookie = NO_COOKIE; unsigned long cookie = NO_COOKIE;
struct vm_area_struct * vma; struct vm_area_struct *vma;
if (!mm) if (!mm)
goto out; goto out;
...@@ -235,10 +240,11 @@ static unsigned long get_exec_dcookie(struct mm_struct * mm) ...@@ -235,10 +240,11 @@ static unsigned long get_exec_dcookie(struct mm_struct * mm)
* sure to do this lookup before a mm->mmap modification happens so * sure to do this lookup before a mm->mmap modification happens so
* we don't lose track. * we don't lose track.
*/ */
static unsigned long lookup_dcookie(struct mm_struct * mm, unsigned long addr, off_t * offset) static unsigned long
lookup_dcookie(struct mm_struct *mm, unsigned long addr, off_t *offset)
{ {
unsigned long cookie = NO_COOKIE; unsigned long cookie = NO_COOKIE;
struct vm_area_struct * vma; struct vm_area_struct *vma;
for (vma = find_vma(mm, addr); vma; vma = vma->vm_next) { for (vma = find_vma(mm, addr); vma; vma = vma->vm_next) {
...@@ -263,6 +269,17 @@ static unsigned long lookup_dcookie(struct mm_struct * mm, unsigned long addr, o ...@@ -263,6 +269,17 @@ static unsigned long lookup_dcookie(struct mm_struct * mm, unsigned long addr, o
return cookie; return cookie;
} }
static void increment_tail(struct oprofile_cpu_buffer *b)
{
unsigned long new_tail = b->tail_pos + 1;
rmb(); /* be sure FIFO pointers are synchronized */
if (new_tail < b->buffer_size)
b->tail_pos = new_tail;
else
b->tail_pos = 0;
}
static unsigned long last_cookie = INVALID_COOKIE; static unsigned long last_cookie = INVALID_COOKIE;
...@@ -284,7 +301,7 @@ static void add_kernel_ctx_switch(unsigned int in_kernel) ...@@ -284,7 +301,7 @@ static void add_kernel_ctx_switch(unsigned int in_kernel)
} }
static void static void
add_user_ctx_switch(struct task_struct const * task, unsigned long cookie) add_user_ctx_switch(struct task_struct const *task, unsigned long cookie)
{ {
add_event_entry(ESCAPE_CODE); add_event_entry(ESCAPE_CODE);
add_event_entry(CTX_SWITCH_CODE); add_event_entry(CTX_SWITCH_CODE);
...@@ -311,6 +328,71 @@ static void add_trace_begin(void) ...@@ -311,6 +328,71 @@ static void add_trace_begin(void)
add_event_entry(TRACE_BEGIN_CODE); add_event_entry(TRACE_BEGIN_CODE);
} }
#ifdef CONFIG_OPROFILE_IBS
#define IBS_FETCH_CODE_SIZE 2
#define IBS_OP_CODE_SIZE 5
#define IBS_EIP(offset) \
(((struct op_sample *)&cpu_buf->buffer[(offset)])->eip)
#define IBS_EVENT(offset) \
(((struct op_sample *)&cpu_buf->buffer[(offset)])->event)
/*
* Add IBS fetch and op entries to event buffer
*/
static void add_ibs_begin(struct oprofile_cpu_buffer *cpu_buf, int code,
int in_kernel, struct mm_struct *mm)
{
unsigned long rip;
int i, count;
unsigned long ibs_cookie = 0;
off_t offset;
increment_tail(cpu_buf); /* move to RIP entry */
rip = IBS_EIP(cpu_buf->tail_pos);
#ifdef __LP64__
rip += IBS_EVENT(cpu_buf->tail_pos) << 32;
#endif
if (mm) {
ibs_cookie = lookup_dcookie(mm, rip, &offset);
if (ibs_cookie == NO_COOKIE)
offset = rip;
if (ibs_cookie == INVALID_COOKIE) {
atomic_inc(&oprofile_stats.sample_lost_no_mapping);
offset = rip;
}
if (ibs_cookie != last_cookie) {
add_cookie_switch(ibs_cookie);
last_cookie = ibs_cookie;
}
} else
offset = rip;
add_event_entry(ESCAPE_CODE);
add_event_entry(code);
add_event_entry(offset); /* Offset from Dcookie */
/* we send the Dcookie offset, but also send the raw linear address */
add_event_entry(IBS_EIP(cpu_buf->tail_pos));
add_event_entry(IBS_EVENT(cpu_buf->tail_pos));
if (code == IBS_FETCH_CODE)
count = IBS_FETCH_CODE_SIZE; /* IBS FETCH is 2 int64s */
else
count = IBS_OP_CODE_SIZE; /* IBS OP is 5 int64s */
for (i = 0; i < count; i++) {
increment_tail(cpu_buf);
add_event_entry(IBS_EIP(cpu_buf->tail_pos));
add_event_entry(IBS_EVENT(cpu_buf->tail_pos));
}
}
#endif
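Worth noting in add_ibs_begin(): the 64-bit RIP arrives split across one (eip, event) buffer pair, low half in eip, high half in event. A tiny standalone model of that reassembly (the struct mirrors op_sample; the values are invented):

	#include <stdio.h>
	#include <stdint.h>

	struct op_sample { unsigned long eip; unsigned long event; };

	int main(void)
	{
		/* one IBS RIP entry as it sits in the per-CPU buffer */
		struct op_sample s = { .eip = 0x0040112f, .event = 0x00007f3a };
		uint64_t rip = s.eip;
	#ifdef __LP64__
		rip += (uint64_t)s.event << 32;	/* high half only exists on 64-bit */
	#endif
		printf("rip = %#llx\n", (unsigned long long)rip);
		return 0;
	}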
static void add_sample_entry(unsigned long offset, unsigned long event) static void add_sample_entry(unsigned long offset, unsigned long event)
{ {
...@@ -319,7 +401,7 @@ static void add_sample_entry(unsigned long offset, unsigned long event) ...@@ -319,7 +401,7 @@ static void add_sample_entry(unsigned long offset, unsigned long event)
} }
static int add_us_sample(struct mm_struct * mm, struct op_sample * s) static int add_us_sample(struct mm_struct *mm, struct op_sample *s)
{ {
unsigned long cookie; unsigned long cookie;
off_t offset; off_t offset;
...@@ -347,7 +429,7 @@ static int add_us_sample(struct mm_struct * mm, struct op_sample * s) ...@@ -347,7 +429,7 @@ static int add_us_sample(struct mm_struct * mm, struct op_sample * s)
* for later lookup from userspace. * for later lookup from userspace.
*/ */
static int static int
add_sample(struct mm_struct * mm, struct op_sample * s, int in_kernel) add_sample(struct mm_struct *mm, struct op_sample *s, int in_kernel)
{ {
if (in_kernel) { if (in_kernel) {
add_sample_entry(s->eip, s->event); add_sample_entry(s->eip, s->event);
...@@ -361,7 +443,7 @@ add_sample(struct mm_struct * mm, struct op_sample * s, int in_kernel) ...@@ -361,7 +443,7 @@ add_sample(struct mm_struct * mm, struct op_sample * s, int in_kernel)
} }
static void release_mm(struct mm_struct * mm) static void release_mm(struct mm_struct *mm)
{ {
if (!mm) if (!mm)
return; return;
...@@ -370,9 +452,9 @@ static void release_mm(struct mm_struct * mm) ...@@ -370,9 +452,9 @@ static void release_mm(struct mm_struct * mm)
} }
static struct mm_struct * take_tasks_mm(struct task_struct * task) static struct mm_struct *take_tasks_mm(struct task_struct *task)
{ {
struct mm_struct * mm = get_task_mm(task); struct mm_struct *mm = get_task_mm(task);
if (mm) if (mm)
down_read(&mm->mmap_sem); down_read(&mm->mmap_sem);
return mm; return mm;
...@@ -386,7 +468,7 @@ static inline int is_code(unsigned long val) ...@@ -386,7 +468,7 @@ static inline int is_code(unsigned long val)
/* "acquire" as many cpu buffer slots as we can */ /* "acquire" as many cpu buffer slots as we can */
static unsigned long get_slots(struct oprofile_cpu_buffer * b) static unsigned long get_slots(struct oprofile_cpu_buffer *b)
{ {
unsigned long head = b->head_pos; unsigned long head = b->head_pos;
unsigned long tail = b->tail_pos; unsigned long tail = b->tail_pos;
...@@ -412,19 +494,6 @@ static unsigned long get_slots(struct oprofile_cpu_buffer * b) ...@@ -412,19 +494,6 @@ static unsigned long get_slots(struct oprofile_cpu_buffer * b)
} }
static void increment_tail(struct oprofile_cpu_buffer * b)
{
unsigned long new_tail = b->tail_pos + 1;
rmb();
if (new_tail < b->buffer_size)
b->tail_pos = new_tail;
else
b->tail_pos = 0;
}
/* Move tasks along towards death. Any tasks on dead_tasks /* Move tasks along towards death. Any tasks on dead_tasks
* will definitely have no remaining references in any * will definitely have no remaining references in any
* CPU buffers at this point, because we use two lists, * CPU buffers at this point, because we use two lists,
...@@ -435,8 +504,8 @@ static void process_task_mortuary(void) ...@@ -435,8 +504,8 @@ static void process_task_mortuary(void)
{ {
unsigned long flags; unsigned long flags;
LIST_HEAD(local_dead_tasks); LIST_HEAD(local_dead_tasks);
struct task_struct * task; struct task_struct *task;
struct task_struct * ttask; struct task_struct *ttask;
spin_lock_irqsave(&task_mortuary, flags); spin_lock_irqsave(&task_mortuary, flags);
...@@ -493,7 +562,7 @@ void sync_buffer(int cpu) ...@@ -493,7 +562,7 @@ void sync_buffer(int cpu)
{ {
struct oprofile_cpu_buffer *cpu_buf = &per_cpu(cpu_buffer, cpu); struct oprofile_cpu_buffer *cpu_buf = &per_cpu(cpu_buffer, cpu);
struct mm_struct *mm = NULL; struct mm_struct *mm = NULL;
struct task_struct * new; struct task_struct *new;
unsigned long cookie = 0; unsigned long cookie = 0;
int in_kernel = 1; int in_kernel = 1;
unsigned int i; unsigned int i;
...@@ -509,7 +578,7 @@ void sync_buffer(int cpu) ...@@ -509,7 +578,7 @@ void sync_buffer(int cpu)
available = get_slots(cpu_buf); available = get_slots(cpu_buf);
for (i = 0; i < available; ++i) { for (i = 0; i < available; ++i) {
struct op_sample * s = &cpu_buf->buffer[cpu_buf->tail_pos]; struct op_sample *s = &cpu_buf->buffer[cpu_buf->tail_pos];
if (is_code(s->eip)) { if (is_code(s->eip)) {
if (s->event <= CPU_IS_KERNEL) { if (s->event <= CPU_IS_KERNEL) {
...@@ -521,8 +590,18 @@ void sync_buffer(int cpu) ...@@ -521,8 +590,18 @@ void sync_buffer(int cpu)
} else if (s->event == CPU_TRACE_BEGIN) { } else if (s->event == CPU_TRACE_BEGIN) {
state = sb_bt_start; state = sb_bt_start;
add_trace_begin(); add_trace_begin();
#ifdef CONFIG_OPROFILE_IBS
} else if (s->event == IBS_FETCH_BEGIN) {
state = sb_bt_start;
add_ibs_begin(cpu_buf,
IBS_FETCH_CODE, in_kernel, mm);
} else if (s->event == IBS_OP_BEGIN) {
state = sb_bt_start;
add_ibs_begin(cpu_buf,
IBS_OP_CODE, in_kernel, mm);
#endif
} else { } else {
struct mm_struct * oldmm = mm; struct mm_struct *oldmm = mm;
/* userspace context switch */ /* userspace context switch */
new = (struct task_struct *)s->event; new = (struct task_struct *)s->event;
...@@ -533,15 +612,13 @@ void sync_buffer(int cpu) ...@@ -533,15 +612,13 @@ void sync_buffer(int cpu)
cookie = get_exec_dcookie(mm); cookie = get_exec_dcookie(mm);
add_user_ctx_switch(new, cookie); add_user_ctx_switch(new, cookie);
} }
} else { } else if (state >= sb_bt_start &&
if (state >= sb_bt_start &&
!add_sample(mm, s, in_kernel)) { !add_sample(mm, s, in_kernel)) {
if (state == sb_bt_start) { if (state == sb_bt_start) {
state = sb_bt_ignore; state = sb_bt_ignore;
atomic_inc(&oprofile_stats.bt_lost_no_mapping); atomic_inc(&oprofile_stats.bt_lost_no_mapping);
} }
} }
}
increment_tail(cpu_buf); increment_tail(cpu_buf);
} }
......
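The moved increment_tail() and the get_slots() consumer it pairs with implement a simple single-producer/single-consumer ring. A hedged userspace sketch of the same accounting, with a compiler barrier standing in for rmb() and the wraparound case assumed from the surrounding code:

	#include <stdio.h>

	#define BARRIER() __asm__ __volatile__("" ::: "memory")

	struct ring { unsigned long head_pos, tail_pos, buffer_size; };

	/* slots the consumer may read: everything between tail and head */
	static unsigned long get_slots(const struct ring *b)
	{
		unsigned long head = b->head_pos, tail = b->tail_pos;
		return head >= tail ? head - tail : head + (b->buffer_size - tail);
	}

	static void increment_tail(struct ring *b)
	{
		unsigned long new_tail = b->tail_pos + 1;
		BARRIER();	/* stands in for the kernel's rmb() */
		b->tail_pos = new_tail < b->buffer_size ? new_tail : 0;
	}

	int main(void)
	{
		struct ring b = { .head_pos = 2, .tail_pos = 6, .buffer_size = 8 };
		printf("available = %lu\n", get_slots(&b));	/* 2 + (8 - 6) = 4 */
		increment_tail(&b);
		printf("tail = %lu\n", b.tail_pos);		/* 7 */
		return 0;
	}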
...@@ -5,6 +5,7 @@ ...@@ -5,6 +5,7 @@
* @remark Read the file COPYING * @remark Read the file COPYING
* *
* @author John Levon <levon@movementarian.org> * @author John Levon <levon@movementarian.org>
* @author Barry Kasindorf <barry.kasindorf@amd.com>
* *
* Each CPU has a local buffer that stores PC value/event * Each CPU has a local buffer that stores PC value/event
* pairs. We also log context switches when we notice them. * pairs. We also log context switches when we notice them.
...@@ -209,7 +210,7 @@ static int log_sample(struct oprofile_cpu_buffer * cpu_buf, unsigned long pc, ...@@ -209,7 +210,7 @@ static int log_sample(struct oprofile_cpu_buffer * cpu_buf, unsigned long pc,
return 1; return 1;
} }
static int oprofile_begin_trace(struct oprofile_cpu_buffer * cpu_buf) static int oprofile_begin_trace(struct oprofile_cpu_buffer *cpu_buf)
{ {
if (nr_available_slots(cpu_buf) < 4) { if (nr_available_slots(cpu_buf) < 4) {
cpu_buf->sample_lost_overflow++; cpu_buf->sample_lost_overflow++;
...@@ -254,6 +255,75 @@ void oprofile_add_sample(struct pt_regs * const regs, unsigned long event) ...@@ -254,6 +255,75 @@ void oprofile_add_sample(struct pt_regs * const regs, unsigned long event)
oprofile_add_ext_sample(pc, regs, event, is_kernel); oprofile_add_ext_sample(pc, regs, event, is_kernel);
} }
#ifdef CONFIG_OPROFILE_IBS
#define MAX_IBS_SAMPLE_SIZE 14
static int log_ibs_sample(struct oprofile_cpu_buffer *cpu_buf,
unsigned long pc, int is_kernel, unsigned int *ibs, int ibs_code)
{
struct task_struct *task;
cpu_buf->sample_received++;
if (nr_available_slots(cpu_buf) < MAX_IBS_SAMPLE_SIZE) {
cpu_buf->sample_lost_overflow++;
return 0;
}
is_kernel = !!is_kernel;
/* notice a switch from user->kernel or vice versa */
if (cpu_buf->last_is_kernel != is_kernel) {
cpu_buf->last_is_kernel = is_kernel;
add_code(cpu_buf, is_kernel);
}
/* notice a task switch */
if (!is_kernel) {
task = current;
if (cpu_buf->last_task != task) {
cpu_buf->last_task = task;
add_code(cpu_buf, (unsigned long)task);
}
}
add_code(cpu_buf, ibs_code);
add_sample(cpu_buf, ibs[0], ibs[1]);
add_sample(cpu_buf, ibs[2], ibs[3]);
add_sample(cpu_buf, ibs[4], ibs[5]);
if (ibs_code == IBS_OP_BEGIN) {
add_sample(cpu_buf, ibs[6], ibs[7]);
add_sample(cpu_buf, ibs[8], ibs[9]);
add_sample(cpu_buf, ibs[10], ibs[11]);
}
return 1;
}
void oprofile_add_ibs_sample(struct pt_regs *const regs,
unsigned int * const ibs_sample, u8 code)
{
int is_kernel = !user_mode(regs);
unsigned long pc = profile_pc(regs);
struct oprofile_cpu_buffer *cpu_buf =
&per_cpu(cpu_buffer, smp_processor_id());
if (!backtrace_depth) {
log_ibs_sample(cpu_buf, pc, is_kernel, ibs_sample, code);
return;
}
/* if log_ibs_sample() fails we can't backtrace since we lost the source
* of this event */
if (log_ibs_sample(cpu_buf, pc, is_kernel, ibs_sample, code))
oprofile_ops.backtrace(regs, backtrace_depth);
}
#endif
void oprofile_add_pc(unsigned long pc, int is_kernel, unsigned long event) void oprofile_add_pc(unsigned long pc, int is_kernel, unsigned long event)
{ {
struct oprofile_cpu_buffer *cpu_buf = &__get_cpu_var(cpu_buffer); struct oprofile_cpu_buffer *cpu_buf = &__get_cpu_var(cpu_buffer);
...@@ -296,7 +366,7 @@ static void wq_sync_buffer(struct work_struct *work) ...@@ -296,7 +366,7 @@ static void wq_sync_buffer(struct work_struct *work)
struct oprofile_cpu_buffer * b = struct oprofile_cpu_buffer * b =
container_of(work, struct oprofile_cpu_buffer, work.work); container_of(work, struct oprofile_cpu_buffer, work.work);
if (b->cpu != smp_processor_id()) { if (b->cpu != smp_processor_id()) {
printk("WQ on CPU%d, prefer CPU%d\n", printk(KERN_DEBUG "WQ on CPU%d, prefer CPU%d\n",
smp_processor_id(), b->cpu); smp_processor_id(), b->cpu);
} }
sync_buffer(b->cpu); sync_buffer(b->cpu);
......
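log_ibs_sample() feeds the raw IBS words into the CPU buffer two at a time as (eip, event) pairs: three pairs per fetch sample, six per op sample, matching what add_ibs_begin() later pulls back out. A toy sketch of that pairing (payload values invented):

	#include <stdio.h>

	struct op_sample { unsigned long eip; unsigned long event; };

	/* pack 2*n words into n buffer pairs, as log_ibs_sample() does */
	static void pack_pairs(struct op_sample *out, const unsigned int *ibs, int n)
	{
		for (int i = 0; i < n; i++) {
			out[i].eip   = ibs[2 * i];
			out[i].event = ibs[2 * i + 1];
		}
	}

	int main(void)
	{
		unsigned int fetch[6] = { 1, 2, 3, 4, 5, 6 };	/* invented payload */
		struct op_sample buf[3];

		pack_pairs(buf, fetch, 3);	/* fetch: 3 pairs; op would be 6 */
		for (int i = 0; i < 3; i++)
			printf("pair %d: eip=%lu event=%lu\n",
			       i, buf[i].eip, buf[i].event);
		return 0;
	}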
...@@ -55,5 +55,7 @@ void cpu_buffer_reset(struct oprofile_cpu_buffer * cpu_buf); ...@@ -55,5 +55,7 @@ void cpu_buffer_reset(struct oprofile_cpu_buffer * cpu_buf);
/* transient events for the CPU buffer -> event buffer */ /* transient events for the CPU buffer -> event buffer */
#define CPU_IS_KERNEL 1 #define CPU_IS_KERNEL 1
#define CPU_TRACE_BEGIN 2 #define CPU_TRACE_BEGIN 2
#define IBS_FETCH_BEGIN 3
#define IBS_OP_BEGIN 4
#endif /* OPROFILE_CPU_BUFFER_H */ #endif /* OPROFILE_CPU_BUFFER_H */
...@@ -351,20 +351,16 @@ static inline void set_system_intr_gate(unsigned int n, void *addr) ...@@ -351,20 +351,16 @@ static inline void set_system_intr_gate(unsigned int n, void *addr)
_set_gate(n, GATE_INTERRUPT, addr, 0x3, 0, __KERNEL_CS); _set_gate(n, GATE_INTERRUPT, addr, 0x3, 0, __KERNEL_CS);
} }
static inline void set_trap_gate(unsigned int n, void *addr) static inline void set_system_trap_gate(unsigned int n, void *addr)
{ {
BUG_ON((unsigned)n > 0xFF); BUG_ON((unsigned)n > 0xFF);
_set_gate(n, GATE_TRAP, addr, 0, 0, __KERNEL_CS); _set_gate(n, GATE_TRAP, addr, 0x3, 0, __KERNEL_CS);
} }
static inline void set_system_gate(unsigned int n, void *addr) static inline void set_trap_gate(unsigned int n, void *addr)
{ {
BUG_ON((unsigned)n > 0xFF); BUG_ON((unsigned)n > 0xFF);
#ifdef CONFIG_X86_32 _set_gate(n, GATE_TRAP, addr, 0, 0, __KERNEL_CS);
_set_gate(n, GATE_TRAP, addr, 0x3, 0, __KERNEL_CS);
#else
_set_gate(n, GATE_INTERRUPT, addr, 0x3, 0, __KERNEL_CS);
#endif
} }
static inline void set_task_gate(unsigned int n, unsigned int gdt_entry) static inline void set_task_gate(unsigned int n, unsigned int gdt_entry)
...@@ -379,7 +375,7 @@ static inline void set_intr_gate_ist(int n, void *addr, unsigned ist) ...@@ -379,7 +375,7 @@ static inline void set_intr_gate_ist(int n, void *addr, unsigned ist)
_set_gate(n, GATE_INTERRUPT, addr, 0, ist, __KERNEL_CS); _set_gate(n, GATE_INTERRUPT, addr, 0, ist, __KERNEL_CS);
} }
static inline void set_system_gate_ist(int n, void *addr, unsigned ist) static inline void set_system_intr_gate_ist(int n, void *addr, unsigned ist)
{ {
BUG_ON((unsigned)n > 0xFF); BUG_ON((unsigned)n > 0xFF);
_set_gate(n, GATE_INTERRUPT, addr, 0x3, ist, __KERNEL_CS); _set_gate(n, GATE_INTERRUPT, addr, 0x3, ist, __KERNEL_CS);
......
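The renames encode the DPL in the helper name: "system" gates get DPL 3 so userspace can raise them (e.g. int3), the others get DPL 0, and trap gates differ from interrupt gates only in the type field (interrupt gates clear IF on entry). For reference, a standalone sketch of the access byte these parameters produce, based on the generic x86 gate-descriptor layout rather than the kernel's _set_gate() internals:

	#include <stdio.h>

	#define GATE_INTERRUPT 0xE	/* interrupt gate: IF cleared on entry */
	#define GATE_TRAP      0xF	/* trap gate: IF left as-is */

	/* access byte of an IDT descriptor: P | DPL | 0 | type */
	static unsigned char gate_access(unsigned type, unsigned dpl)
	{
		return 0x80 | (dpl << 5) | type;	/* P = 1 */
	}

	int main(void)
	{
		printf("set_system_intr_gate: %#x\n", gate_access(GATE_INTERRUPT, 3)); /* 0xee */
		printf("set_system_trap_gate: %#x\n", gate_access(GATE_TRAP, 3));      /* 0xef */
		printf("set_trap_gate:        %#x\n", gate_access(GATE_TRAP, 0));      /* 0x8f */
		return 0;
	}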
...@@ -5,6 +5,7 @@ ...@@ -5,6 +5,7 @@
extern int parse_unisys_oem (char *oemptr); extern int parse_unisys_oem (char *oemptr);
extern int find_unisys_acpi_oem_table(unsigned long *oem_addr); extern int find_unisys_acpi_oem_table(unsigned long *oem_addr);
extern void unmap_unisys_acpi_oem_table(unsigned long oem_addr);
extern void setup_unisys(void); extern void setup_unisys(void);
#ifndef CONFIG_X86_GENERICARCH #ifndef CONFIG_X86_GENERICARCH
......
...@@ -94,10 +94,10 @@ enum fixed_addresses { ...@@ -94,10 +94,10 @@ enum fixed_addresses {
* can have a single pgd entry and a single pte table: * can have a single pgd entry and a single pte table:
*/ */
#define NR_FIX_BTMAPS 64 #define NR_FIX_BTMAPS 64
#define FIX_BTMAPS_NESTING 4 #define FIX_BTMAPS_SLOTS 4
FIX_BTMAP_END = __end_of_permanent_fixed_addresses + 256 - FIX_BTMAP_END = __end_of_permanent_fixed_addresses + 256 -
(__end_of_permanent_fixed_addresses & 255), (__end_of_permanent_fixed_addresses & 255),
FIX_BTMAP_BEGIN = FIX_BTMAP_END + NR_FIX_BTMAPS*FIX_BTMAPS_NESTING - 1, FIX_BTMAP_BEGIN = FIX_BTMAP_END + NR_FIX_BTMAPS*FIX_BTMAPS_SLOTS - 1,
FIX_WP_TEST, FIX_WP_TEST,
#ifdef CONFIG_ACPI #ifdef CONFIG_ACPI
FIX_ACPI_BEGIN, FIX_ACPI_BEGIN,
......
...@@ -49,6 +49,7 @@ enum fixed_addresses { ...@@ -49,6 +49,7 @@ enum fixed_addresses {
#ifdef CONFIG_PARAVIRT #ifdef CONFIG_PARAVIRT
FIX_PARAVIRT_BOOTMAP, FIX_PARAVIRT_BOOTMAP,
#endif #endif
__end_of_permanent_fixed_addresses,
#ifdef CONFIG_ACPI #ifdef CONFIG_ACPI
FIX_ACPI_BEGIN, FIX_ACPI_BEGIN,
FIX_ACPI_END = FIX_ACPI_BEGIN + FIX_ACPI_PAGES - 1, FIX_ACPI_END = FIX_ACPI_BEGIN + FIX_ACPI_PAGES - 1,
...@@ -56,19 +57,18 @@ enum fixed_addresses { ...@@ -56,19 +57,18 @@ enum fixed_addresses {
#ifdef CONFIG_PROVIDE_OHCI1394_DMA_INIT #ifdef CONFIG_PROVIDE_OHCI1394_DMA_INIT
FIX_OHCI1394_BASE, FIX_OHCI1394_BASE,
#endif #endif
__end_of_permanent_fixed_addresses,
/* /*
* 256 temporary boot-time mappings, used by early_ioremap(), * 256 temporary boot-time mappings, used by early_ioremap(),
* before ioremap() is functional. * before ioremap() is functional.
* *
* We round it up to the next 512 pages boundary so that we * We round it up to the next 256 pages boundary so that we
* can have a single pgd entry and a single pte table: * can have a single pgd entry and a single pte table:
*/ */
#define NR_FIX_BTMAPS 64 #define NR_FIX_BTMAPS 64
#define FIX_BTMAPS_NESTING 4 #define FIX_BTMAPS_SLOTS 4
FIX_BTMAP_END = __end_of_permanent_fixed_addresses + 512 - FIX_BTMAP_END = __end_of_permanent_fixed_addresses + 256 -
(__end_of_permanent_fixed_addresses & 511), (__end_of_permanent_fixed_addresses & 255),
FIX_BTMAP_BEGIN = FIX_BTMAP_END + NR_FIX_BTMAPS*FIX_BTMAPS_NESTING - 1, FIX_BTMAP_BEGIN = FIX_BTMAP_END + NR_FIX_BTMAPS*FIX_BTMAPS_SLOTS - 1,
__end_of_fixed_addresses __end_of_fixed_addresses
}; };
......
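Both fixmap headers now carve out the same early-ioremap window: FIX_BTMAPS_SLOTS slots of NR_FIX_BTMAPS pages each, with FIX_BTMAP_END rounded up to a 256-index boundary so the window fits under a single pgd entry and pte table. The rounding, as a standalone check (the starting index is an invented example):

	#include <stdio.h>

	#define NR_FIX_BTMAPS    64
	#define FIX_BTMAPS_SLOTS 4

	int main(void)
	{
		unsigned end_of_permanent = 137; /* invented __end_of_permanent_fixed_addresses */

		/* round up to the next 256-index boundary, as fixmap.h does */
		unsigned btmap_end = end_of_permanent + 256 - (end_of_permanent & 255);
		unsigned btmap_begin = btmap_end + NR_FIX_BTMAPS * FIX_BTMAPS_SLOTS - 1;

		printf("FIX_BTMAP_END=%u FIX_BTMAP_BEGIN=%u (%u pages)\n",
		       btmap_end, btmap_begin, btmap_begin - btmap_end + 1);
		return 0;
	}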
...@@ -5,20 +5,6 @@ ...@@ -5,20 +5,6 @@
#include <linux/compiler.h> #include <linux/compiler.h>
/*
* early_ioremap() and early_iounmap() are for temporary early boot-time
* mappings, before the real ioremap() is functional.
* A boot-time mapping is currently limited to at most 16 pages.
*/
#ifndef __ASSEMBLY__
extern void early_ioremap_init(void);
extern void early_ioremap_clear(void);
extern void early_ioremap_reset(void);
extern void *early_ioremap(unsigned long offset, unsigned long size);
extern void early_iounmap(void *addr, unsigned long size);
extern void __iomem *fix_ioremap(unsigned idx, unsigned long phys);
#endif
#define build_mmio_read(name, size, type, reg, barrier) \ #define build_mmio_read(name, size, type, reg, barrier) \
static inline type name(const volatile void __iomem *addr) \ static inline type name(const volatile void __iomem *addr) \
{ type ret; asm volatile("mov" size " %1,%0":reg (ret) \ { type ret; asm volatile("mov" size " %1,%0":reg (ret) \
...@@ -97,6 +83,7 @@ extern void early_ioremap_init(void); ...@@ -97,6 +83,7 @@ extern void early_ioremap_init(void);
extern void early_ioremap_clear(void); extern void early_ioremap_clear(void);
extern void early_ioremap_reset(void); extern void early_ioremap_reset(void);
extern void *early_ioremap(unsigned long offset, unsigned long size); extern void *early_ioremap(unsigned long offset, unsigned long size);
extern void *early_memremap(unsigned long offset, unsigned long size);
extern void early_iounmap(void *addr, unsigned long size); extern void early_iounmap(void *addr, unsigned long size);
extern void __iomem *fix_ioremap(unsigned idx, unsigned long phys); extern void __iomem *fix_ioremap(unsigned idx, unsigned long phys);
......
...@@ -165,9 +165,6 @@ static inline void *phys_to_virt(unsigned long address) ...@@ -165,9 +165,6 @@ static inline void *phys_to_virt(unsigned long address)
#include <asm-generic/iomap.h> #include <asm-generic/iomap.h>
extern void *early_ioremap(unsigned long addr, unsigned long size);
extern void early_iounmap(void *addr, unsigned long size);
/* /*
* This one maps high address device memory and turns off caching for that area. * This one maps high address device memory and turns off caching for that area.
* it's useful if some control registers are in such an area and write combining * it's useful if some control registers are in such an area and write combining
......
...@@ -166,27 +166,6 @@ static inline int raw_irqs_disabled(void) ...@@ -166,27 +166,6 @@ static inline int raw_irqs_disabled(void)
return raw_irqs_disabled_flags(flags); return raw_irqs_disabled_flags(flags);
} }
/*
* makes the traced hardirq state match with the machine state
*
* should be a rarely used function, only in places where its
* otherwise impossible to know the irq state, like in traps.
*/
static inline void trace_hardirqs_fixup_flags(unsigned long flags)
{
if (raw_irqs_disabled_flags(flags))
trace_hardirqs_off();
else
trace_hardirqs_on();
}
static inline void trace_hardirqs_fixup(void)
{
unsigned long flags = __raw_local_save_flags();
trace_hardirqs_fixup_flags(flags);
}
#else #else
#ifdef CONFIG_X86_64 #ifdef CONFIG_X86_64
......
...@@ -27,10 +27,9 @@ extern void printk_address(unsigned long address, int reliable); ...@@ -27,10 +27,9 @@ extern void printk_address(unsigned long address, int reliable);
extern void die(const char *, struct pt_regs *,long); extern void die(const char *, struct pt_regs *,long);
extern int __must_check __die(const char *, struct pt_regs *, long); extern int __must_check __die(const char *, struct pt_regs *, long);
extern void show_registers(struct pt_regs *regs); extern void show_registers(struct pt_regs *regs);
extern void __show_registers(struct pt_regs *, int all);
extern void show_trace(struct task_struct *t, struct pt_regs *regs, extern void show_trace(struct task_struct *t, struct pt_regs *regs,
unsigned long *sp, unsigned long bp); unsigned long *sp, unsigned long bp);
extern void __show_regs(struct pt_regs *regs); extern void __show_regs(struct pt_regs *regs, int all);
extern void show_regs(struct pt_regs *regs); extern void show_regs(struct pt_regs *regs);
extern unsigned long oops_begin(void); extern unsigned long oops_begin(void);
extern void oops_end(unsigned long, struct pt_regs *, int signr); extern void oops_end(unsigned long, struct pt_regs *, int signr);
......
...@@ -82,15 +82,6 @@ struct kprobe_ctlblk { ...@@ -82,15 +82,6 @@ struct kprobe_ctlblk {
struct prev_kprobe prev_kprobe; struct prev_kprobe prev_kprobe;
}; };
/* trap3/1 are intr gates for kprobes. So, restore the status of IF,
* if necessary, before executing the original int3/1 (trap) handler.
*/
static inline void restore_interrupts(struct pt_regs *regs)
{
if (regs->flags & X86_EFLAGS_IF)
local_irq_enable();
}
extern int kprobe_fault_handler(struct pt_regs *regs, int trapnr); extern int kprobe_fault_handler(struct pt_regs *regs, int trapnr);
extern int kprobe_exceptions_notify(struct notifier_block *self, extern int kprobe_exceptions_notify(struct notifier_block *self,
unsigned long val, void *data); unsigned long val, void *data);
......
...@@ -7,12 +7,6 @@ ...@@ -7,12 +7,6 @@
#include <asm/mc146818rtc.h> #include <asm/mc146818rtc.h>
static inline void clear_mem_error(unsigned char reason)
{
reason = (reason & 0xf) | 4;
outb(reason, 0x61);
}
static inline unsigned char get_nmi_reason(void) static inline unsigned char get_nmi_reason(void)
{ {
return inb(0x61); return inb(0x61);
......
...@@ -52,8 +52,6 @@ struct mod_arch_specific {}; ...@@ -52,8 +52,6 @@ struct mod_arch_specific {};
#define MODULE_PROC_FAMILY "EFFICEON " #define MODULE_PROC_FAMILY "EFFICEON "
#elif defined CONFIG_MWINCHIPC6 #elif defined CONFIG_MWINCHIPC6
#define MODULE_PROC_FAMILY "WINCHIPC6 " #define MODULE_PROC_FAMILY "WINCHIPC6 "
#elif defined CONFIG_MWINCHIP2
#define MODULE_PROC_FAMILY "WINCHIP2 "
#elif defined CONFIG_MWINCHIP3D #elif defined CONFIG_MWINCHIP3D
#define MODULE_PROC_FAMILY "WINCHIP3D " #define MODULE_PROC_FAMILY "WINCHIP3D "
#elif defined CONFIG_MCYRIXIII #elif defined CONFIG_MCYRIXIII
......
...@@ -15,10 +15,6 @@ ...@@ -15,10 +15,6 @@
*/ */
int do_nmi_callback(struct pt_regs *regs, int cpu); int do_nmi_callback(struct pt_regs *regs, int cpu);
#ifdef CONFIG_X86_64
extern void default_do_nmi(struct pt_regs *);
#endif
extern void die_nmi(char *str, struct pt_regs *regs, int do_panic); extern void die_nmi(char *str, struct pt_regs *regs, int do_panic);
extern int check_nmi_watchdog(void); extern int check_nmi_watchdog(void);
extern int nmi_watchdog_enabled; extern int nmi_watchdog_enabled;
......
...@@ -179,6 +179,7 @@ static inline pteval_t native_pte_flags(pte_t pte) ...@@ -179,6 +179,7 @@ static inline pteval_t native_pte_flags(pte_t pte)
#endif /* CONFIG_PARAVIRT */ #endif /* CONFIG_PARAVIRT */
#define __pa(x) __phys_addr((unsigned long)(x)) #define __pa(x) __phys_addr((unsigned long)(x))
#define __pa_nodebug(x) __phys_addr_nodebug((unsigned long)(x))
/* __pa_symbol should be used for C visible symbols. /* __pa_symbol should be used for C visible symbols.
This seems to be the official gcc blessed way to do such arithmetic. */ This seems to be the official gcc blessed way to do such arithmetic. */
#define __pa_symbol(x) __pa(__phys_reloc_hide((unsigned long)(x))) #define __pa_symbol(x) __pa(__phys_reloc_hide((unsigned long)(x)))
...@@ -188,9 +189,14 @@ static inline pteval_t native_pte_flags(pte_t pte) ...@@ -188,9 +189,14 @@ static inline pteval_t native_pte_flags(pte_t pte)
#define __boot_va(x) __va(x) #define __boot_va(x) __va(x)
#define __boot_pa(x) __pa(x) #define __boot_pa(x) __pa(x)
/*
* virt_to_page(kaddr) returns a valid pointer if and only if
* virt_addr_valid(kaddr) returns true.
*/
#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT) #define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
#define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT) #define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT)
#define virt_addr_valid(kaddr) pfn_valid(__pa(kaddr) >> PAGE_SHIFT) extern bool __virt_addr_valid(unsigned long kaddr);
#define virt_addr_valid(kaddr) __virt_addr_valid((unsigned long) (kaddr))
#endif /* __ASSEMBLY__ */ #endif /* __ASSEMBLY__ */
......
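virt_addr_valid() becomes a real function call, and the new comment pins down its contract: virt_to_page() is only defined for addresses it accepts. A rough userspace model of what such a check involves; PAGE_OFFSET, PAGE_SHIFT and the pfn limit below are made-up stand-ins, not the kernel's actual logic:

	#include <stdio.h>
	#include <stdbool.h>

	#define PAGE_OFFSET 0xc0000000UL	/* classic 32-bit 3G/1G split */
	#define PAGE_SHIFT  12
	#define MAX_PFN     0x40000UL		/* pretend 1 GiB of RAM */

	static bool virt_addr_valid(unsigned long kaddr)
	{
		if (kaddr < PAGE_OFFSET)	/* not a lowmem linear address */
			return false;
		return ((kaddr - PAGE_OFFSET) >> PAGE_SHIFT) < MAX_PFN; /* ~pfn_valid() */
	}

	int main(void)
	{
		printf("%d %d\n", virt_addr_valid(0xc1000000UL),   /* 1 */
				  virt_addr_valid(0x08048000UL));  /* 0: user address */
		return 0;
	}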
...@@ -20,6 +20,12 @@ ...@@ -20,6 +20,12 @@
#endif #endif
#define THREAD_SIZE (PAGE_SIZE << THREAD_ORDER) #define THREAD_SIZE (PAGE_SIZE << THREAD_ORDER)
#define STACKFAULT_STACK 0
#define DOUBLEFAULT_STACK 1
#define NMI_STACK 0
#define DEBUG_STACK 0
#define MCE_STACK 0
#define N_EXCEPTION_STACKS 1
#ifdef CONFIG_X86_PAE #ifdef CONFIG_X86_PAE
/* 44=32+12, the limit we can fit into an unsigned long pfn */ /* 44=32+12, the limit we can fit into an unsigned long pfn */
...@@ -73,11 +79,11 @@ typedef struct page *pgtable_t; ...@@ -73,11 +79,11 @@ typedef struct page *pgtable_t;
#endif #endif
#ifndef __ASSEMBLY__ #ifndef __ASSEMBLY__
#define __phys_addr_const(x) ((x) - PAGE_OFFSET) #define __phys_addr_nodebug(x) ((x) - PAGE_OFFSET)
#ifdef CONFIG_DEBUG_VIRTUAL #ifdef CONFIG_DEBUG_VIRTUAL
extern unsigned long __phys_addr(unsigned long); extern unsigned long __phys_addr(unsigned long);
#else #else
#define __phys_addr(x) ((x) - PAGE_OFFSET) #define __phys_addr(x) __phys_addr_nodebug(x)
#endif #endif
#define __phys_reloc_hide(x) RELOC_HIDE((x), 0) #define __phys_reloc_hide(x) RELOC_HIDE((x), 0)
......
...@@ -15,7 +15,7 @@ ...@@ -15,7 +15,7 @@
#define _PAGE_BIT_PAT 7 /* on 4KB pages */ #define _PAGE_BIT_PAT 7 /* on 4KB pages */
#define _PAGE_BIT_GLOBAL 8 /* Global TLB entry PPro+ */ #define _PAGE_BIT_GLOBAL 8 /* Global TLB entry PPro+ */
#define _PAGE_BIT_UNUSED1 9 /* available for programmer */ #define _PAGE_BIT_UNUSED1 9 /* available for programmer */
#define _PAGE_BIT_UNUSED2 10 #define _PAGE_BIT_IOMAP 10 /* flag used to indicate IO mapping */
#define _PAGE_BIT_UNUSED3 11 #define _PAGE_BIT_UNUSED3 11
#define _PAGE_BIT_PAT_LARGE 12 /* On 2MB or 1GB pages */ #define _PAGE_BIT_PAT_LARGE 12 /* On 2MB or 1GB pages */
#define _PAGE_BIT_SPECIAL _PAGE_BIT_UNUSED1 #define _PAGE_BIT_SPECIAL _PAGE_BIT_UNUSED1
...@@ -32,7 +32,7 @@ ...@@ -32,7 +32,7 @@
#define _PAGE_PSE (_AT(pteval_t, 1) << _PAGE_BIT_PSE) #define _PAGE_PSE (_AT(pteval_t, 1) << _PAGE_BIT_PSE)
#define _PAGE_GLOBAL (_AT(pteval_t, 1) << _PAGE_BIT_GLOBAL) #define _PAGE_GLOBAL (_AT(pteval_t, 1) << _PAGE_BIT_GLOBAL)
#define _PAGE_UNUSED1 (_AT(pteval_t, 1) << _PAGE_BIT_UNUSED1) #define _PAGE_UNUSED1 (_AT(pteval_t, 1) << _PAGE_BIT_UNUSED1)
#define _PAGE_UNUSED2 (_AT(pteval_t, 1) << _PAGE_BIT_UNUSED2) #define _PAGE_IOMAP (_AT(pteval_t, 1) << _PAGE_BIT_IOMAP)
#define _PAGE_UNUSED3 (_AT(pteval_t, 1) << _PAGE_BIT_UNUSED3) #define _PAGE_UNUSED3 (_AT(pteval_t, 1) << _PAGE_BIT_UNUSED3)
#define _PAGE_PAT (_AT(pteval_t, 1) << _PAGE_BIT_PAT) #define _PAGE_PAT (_AT(pteval_t, 1) << _PAGE_BIT_PAT)
#define _PAGE_PAT_LARGE (_AT(pteval_t, 1) << _PAGE_BIT_PAT_LARGE) #define _PAGE_PAT_LARGE (_AT(pteval_t, 1) << _PAGE_BIT_PAT_LARGE)
...@@ -99,6 +99,11 @@ ...@@ -99,6 +99,11 @@
#define __PAGE_KERNEL_LARGE_NOCACHE (__PAGE_KERNEL | _PAGE_CACHE_UC | _PAGE_PSE) #define __PAGE_KERNEL_LARGE_NOCACHE (__PAGE_KERNEL | _PAGE_CACHE_UC | _PAGE_PSE)
#define __PAGE_KERNEL_LARGE_EXEC (__PAGE_KERNEL_EXEC | _PAGE_PSE) #define __PAGE_KERNEL_LARGE_EXEC (__PAGE_KERNEL_EXEC | _PAGE_PSE)
#define __PAGE_KERNEL_IO (__PAGE_KERNEL | _PAGE_IOMAP)
#define __PAGE_KERNEL_IO_NOCACHE (__PAGE_KERNEL_NOCACHE | _PAGE_IOMAP)
#define __PAGE_KERNEL_IO_UC_MINUS (__PAGE_KERNEL_UC_MINUS | _PAGE_IOMAP)
#define __PAGE_KERNEL_IO_WC (__PAGE_KERNEL_WC | _PAGE_IOMAP)
#define PAGE_KERNEL __pgprot(__PAGE_KERNEL) #define PAGE_KERNEL __pgprot(__PAGE_KERNEL)
#define PAGE_KERNEL_RO __pgprot(__PAGE_KERNEL_RO) #define PAGE_KERNEL_RO __pgprot(__PAGE_KERNEL_RO)
#define PAGE_KERNEL_EXEC __pgprot(__PAGE_KERNEL_EXEC) #define PAGE_KERNEL_EXEC __pgprot(__PAGE_KERNEL_EXEC)
...@@ -113,6 +118,11 @@ ...@@ -113,6 +118,11 @@
#define PAGE_KERNEL_VSYSCALL __pgprot(__PAGE_KERNEL_VSYSCALL) #define PAGE_KERNEL_VSYSCALL __pgprot(__PAGE_KERNEL_VSYSCALL)
#define PAGE_KERNEL_VSYSCALL_NOCACHE __pgprot(__PAGE_KERNEL_VSYSCALL_NOCACHE) #define PAGE_KERNEL_VSYSCALL_NOCACHE __pgprot(__PAGE_KERNEL_VSYSCALL_NOCACHE)
#define PAGE_KERNEL_IO __pgprot(__PAGE_KERNEL_IO)
#define PAGE_KERNEL_IO_NOCACHE __pgprot(__PAGE_KERNEL_IO_NOCACHE)
#define PAGE_KERNEL_IO_UC_MINUS __pgprot(__PAGE_KERNEL_IO_UC_MINUS)
#define PAGE_KERNEL_IO_WC __pgprot(__PAGE_KERNEL_IO_WC)
/* xwr */ /* xwr */
#define __P000 PAGE_NONE #define __P000 PAGE_NONE
#define __P001 PAGE_READONLY #define __P001 PAGE_READONLY
...@@ -196,7 +206,7 @@ static inline int pte_exec(pte_t pte) ...@@ -196,7 +206,7 @@ static inline int pte_exec(pte_t pte)
static inline int pte_special(pte_t pte) static inline int pte_special(pte_t pte)
{ {
return pte_val(pte) & _PAGE_SPECIAL; return pte_flags(pte) & _PAGE_SPECIAL;
} }
static inline unsigned long pte_pfn(pte_t pte) static inline unsigned long pte_pfn(pte_t pte)
......
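The __PAGE_KERNEL_IO* values are just the existing kernel protections with the repurposed bit 10 OR'd in. A one-line check of that composition (the __PAGE_KERNEL base value is illustrative):

	#include <stdio.h>

	#define _PAGE_BIT_IOMAP  10
	#define _PAGE_IOMAP      (1UL << _PAGE_BIT_IOMAP)
	#define __PAGE_KERNEL    0x163UL	/* illustrative: P|RW|A|D|G */
	#define __PAGE_KERNEL_IO (__PAGE_KERNEL | _PAGE_IOMAP)

	int main(void)
	{
		printf("__PAGE_KERNEL_IO = %#lx (IOMAP bit set: %lu)\n",
		       __PAGE_KERNEL_IO, (__PAGE_KERNEL_IO >> _PAGE_BIT_IOMAP) & 1);
		return 0;
	}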
...@@ -174,12 +174,8 @@ extern unsigned long profile_pc(struct pt_regs *regs); ...@@ -174,12 +174,8 @@ extern unsigned long profile_pc(struct pt_regs *regs);
extern unsigned long extern unsigned long
convert_ip_to_linear(struct task_struct *child, struct pt_regs *regs); convert_ip_to_linear(struct task_struct *child, struct pt_regs *regs);
#ifdef CONFIG_X86_32
extern void send_sigtrap(struct task_struct *tsk, struct pt_regs *regs, extern void send_sigtrap(struct task_struct *tsk, struct pt_regs *regs,
int error_code, int si_code); int error_code, int si_code);
#endif
void signal_fault(struct pt_regs *regs, void __user *frame, char *where); void signal_fault(struct pt_regs *regs, void __user *frame, char *where);
extern long syscall_trace_enter(struct pt_regs *); extern long syscall_trace_enter(struct pt_regs *);
......
...@@ -131,12 +131,6 @@ ...@@ -131,12 +131,6 @@
* Matching rules for certain types of segments. * Matching rules for certain types of segments.
*/ */
/* Matches only __KERNEL_CS, ignoring PnP / USER / APM segments */
#define SEGMENT_IS_KERNEL_CODE(x) (((x) & 0xfc) == GDT_ENTRY_KERNEL_CS * 8)
/* Matches __KERNEL_CS and __USER_CS (they must be 2 entries apart) */
#define SEGMENT_IS_FLAT_CODE(x) (((x) & 0xec) == GDT_ENTRY_KERNEL_CS * 8)
/* Matches PNP_CS32 and PNP_CS16 (they must be consecutive) */ /* Matches PNP_CS32 and PNP_CS16 (they must be consecutive) */
#define SEGMENT_IS_PNP_CODE(x) (((x) & 0xf4) == GDT_ENTRY_PNPBIOS_BASE * 8) #define SEGMENT_IS_PNP_CODE(x) (((x) & 0xf4) == GDT_ENTRY_PNPBIOS_BASE * 8)
......
...@@ -141,6 +141,8 @@ void play_dead_common(void); ...@@ -141,6 +141,8 @@ void play_dead_common(void);
void native_send_call_func_ipi(cpumask_t mask); void native_send_call_func_ipi(cpumask_t mask);
void native_send_call_func_single_ipi(int cpu); void native_send_call_func_single_ipi(int cpu);
extern void prefill_possible_map(void);
void smp_store_cpu_info(int id); void smp_store_cpu_info(int id);
#define cpu_physical_id(cpu) per_cpu(x86_cpu_to_apicid, cpu) #define cpu_physical_id(cpu) per_cpu(x86_cpu_to_apicid, cpu)
...@@ -149,15 +151,11 @@ static inline int num_booting_cpus(void) ...@@ -149,15 +151,11 @@ static inline int num_booting_cpus(void)
{ {
return cpus_weight(cpu_callout_map); return cpus_weight(cpu_callout_map);
} }
#endif /* CONFIG_SMP */
#if defined(CONFIG_SMP) && defined(CONFIG_HOTPLUG_CPU)
extern void prefill_possible_map(void);
#else #else
static inline void prefill_possible_map(void) static inline void prefill_possible_map(void)
{ {
} }
#endif #endif /* CONFIG_SMP */
extern unsigned disabled_cpus __cpuinitdata; extern unsigned disabled_cpus __cpuinitdata;
......
...@@ -64,7 +64,10 @@ do { \ ...@@ -64,7 +64,10 @@ do { \
\ \
/* regparm parameters for __switch_to(): */ \ /* regparm parameters for __switch_to(): */ \
[prev] "a" (prev), \ [prev] "a" (prev), \
[next] "d" (next)); \ [next] "d" (next) \
\
: /* reloaded segment registers */ \
"memory"); \
} while (0) } while (0)
/* /*
......
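The new "memory" clobber forbids GCC from keeping memory values cached in registers across the context switch, which reloads segment registers and switches stacks behind the compiler's back. A minimal standalone illustration of what such a clobber does:

	#include <stdio.h>

	int shared = 1;

	int main(void)
	{
		int a = shared;
		/* without the "memory" clobber the compiler could assume
		 * 'shared' is still whatever it cached in a register */
		__asm__ __volatile__("" : : : "memory");
		int b = shared;	/* must be reloaded from memory after the barrier */
		printf("%d %d\n", a, b);
		return 0;
	}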
...@@ -3,7 +3,12 @@ ...@@ -3,7 +3,12 @@
#include <asm/debugreg.h> #include <asm/debugreg.h>
/* Common in X86_32 and X86_64 */ #ifdef CONFIG_X86_32
#define dotraplinkage
#else
#define dotraplinkage asmlinkage
#endif
asmlinkage void divide_error(void); asmlinkage void divide_error(void);
asmlinkage void debug(void); asmlinkage void debug(void);
asmlinkage void nmi(void); asmlinkage void nmi(void);
...@@ -12,31 +17,47 @@ asmlinkage void overflow(void); ...@@ -12,31 +17,47 @@ asmlinkage void overflow(void);
asmlinkage void bounds(void); asmlinkage void bounds(void);
asmlinkage void invalid_op(void); asmlinkage void invalid_op(void);
asmlinkage void device_not_available(void); asmlinkage void device_not_available(void);
#ifdef CONFIG_X86_64
asmlinkage void double_fault(void);
#endif
asmlinkage void coprocessor_segment_overrun(void); asmlinkage void coprocessor_segment_overrun(void);
asmlinkage void invalid_TSS(void); asmlinkage void invalid_TSS(void);
asmlinkage void segment_not_present(void); asmlinkage void segment_not_present(void);
asmlinkage void stack_segment(void); asmlinkage void stack_segment(void);
asmlinkage void general_protection(void); asmlinkage void general_protection(void);
asmlinkage void page_fault(void); asmlinkage void page_fault(void);
asmlinkage void spurious_interrupt_bug(void);
asmlinkage void coprocessor_error(void); asmlinkage void coprocessor_error(void);
asmlinkage void simd_coprocessor_error(void);
asmlinkage void alignment_check(void); asmlinkage void alignment_check(void);
asmlinkage void spurious_interrupt_bug(void);
#ifdef CONFIG_X86_MCE #ifdef CONFIG_X86_MCE
asmlinkage void machine_check(void); asmlinkage void machine_check(void);
#endif /* CONFIG_X86_MCE */ #endif /* CONFIG_X86_MCE */
asmlinkage void simd_coprocessor_error(void);
void do_divide_error(struct pt_regs *, long); dotraplinkage void do_divide_error(struct pt_regs *, long);
void do_overflow(struct pt_regs *, long); dotraplinkage void do_debug(struct pt_regs *, long);
void do_bounds(struct pt_regs *, long); dotraplinkage void do_nmi(struct pt_regs *, long);
void do_coprocessor_segment_overrun(struct pt_regs *, long); dotraplinkage void do_int3(struct pt_regs *, long);
void do_invalid_TSS(struct pt_regs *, long); dotraplinkage void do_overflow(struct pt_regs *, long);
void do_segment_not_present(struct pt_regs *, long); dotraplinkage void do_bounds(struct pt_regs *, long);
void do_stack_segment(struct pt_regs *, long); dotraplinkage void do_invalid_op(struct pt_regs *, long);
void do_alignment_check(struct pt_regs *, long); dotraplinkage void do_device_not_available(struct pt_regs *, long);
void do_invalid_op(struct pt_regs *, long); dotraplinkage void do_coprocessor_segment_overrun(struct pt_regs *, long);
void do_general_protection(struct pt_regs *, long); dotraplinkage void do_invalid_TSS(struct pt_regs *, long);
void do_nmi(struct pt_regs *, long); dotraplinkage void do_segment_not_present(struct pt_regs *, long);
dotraplinkage void do_stack_segment(struct pt_regs *, long);
dotraplinkage void do_general_protection(struct pt_regs *, long);
dotraplinkage void do_page_fault(struct pt_regs *, unsigned long);
dotraplinkage void do_spurious_interrupt_bug(struct pt_regs *, long);
dotraplinkage void do_coprocessor_error(struct pt_regs *, long);
dotraplinkage void do_alignment_check(struct pt_regs *, long);
#ifdef CONFIG_X86_MCE
dotraplinkage void do_machine_check(struct pt_regs *, long);
#endif
dotraplinkage void do_simd_coprocessor_error(struct pt_regs *, long);
#ifdef CONFIG_X86_32
dotraplinkage void do_iret_error(struct pt_regs *, long);
#endif
static inline int get_si_code(unsigned long condition) static inline int get_si_code(unsigned long condition)
{ {
...@@ -52,31 +73,9 @@ extern int panic_on_unrecovered_nmi; ...@@ -52,31 +73,9 @@ extern int panic_on_unrecovered_nmi;
extern int kstack_depth_to_print; extern int kstack_depth_to_print;
#ifdef CONFIG_X86_32 #ifdef CONFIG_X86_32
void do_iret_error(struct pt_regs *, long);
void do_int3(struct pt_regs *, long);
void do_debug(struct pt_regs *, long);
void math_error(void __user *); void math_error(void __user *);
void do_coprocessor_error(struct pt_regs *, long);
void do_simd_coprocessor_error(struct pt_regs *, long);
void do_spurious_interrupt_bug(struct pt_regs *, long);
unsigned long patch_espfix_desc(unsigned long, unsigned long); unsigned long patch_espfix_desc(unsigned long, unsigned long);
asmlinkage void math_emulate(long); asmlinkage void math_emulate(long);
#endif
void do_page_fault(struct pt_regs *regs, unsigned long error_code);
#else /* CONFIG_X86_32 */
asmlinkage void double_fault(void);
asmlinkage void do_int3(struct pt_regs *, long);
asmlinkage void do_stack_segment(struct pt_regs *, long);
asmlinkage void do_debug(struct pt_regs *, unsigned long);
asmlinkage void do_coprocessor_error(struct pt_regs *);
asmlinkage void do_simd_coprocessor_error(struct pt_regs *);
asmlinkage void do_spurious_interrupt_bug(struct pt_regs *);
asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long error_code);
#endif /* CONFIG_X86_32 */
#endif /* ASM_X86__TRAPS_H */ #endif /* ASM_X86__TRAPS_H */
...@@ -37,6 +37,7 @@ struct hpet { ...@@ -37,6 +37,7 @@ struct hpet {
#define hpet_compare _u1._hpet_compare #define hpet_compare _u1._hpet_compare
#define HPET_MAX_TIMERS (32) #define HPET_MAX_TIMERS (32)
#define HPET_MAX_IRQ (32)
/* /*
* HPET general capabilities register * HPET general capabilities register
...@@ -64,7 +65,7 @@ struct hpet { ...@@ -64,7 +65,7 @@ struct hpet {
*/ */
#define Tn_INT_ROUTE_CAP_MASK (0xffffffff00000000ULL) #define Tn_INT_ROUTE_CAP_MASK (0xffffffff00000000ULL)
#define Tn_INI_ROUTE_CAP_SHIFT (32UL) #define Tn_INT_ROUTE_CAP_SHIFT (32UL)
#define Tn_FSB_INT_DELCAP_MASK (0x8000UL) #define Tn_FSB_INT_DELCAP_MASK (0x8000UL)
#define Tn_FSB_INT_DELCAP_SHIFT (15) #define Tn_FSB_INT_DELCAP_SHIFT (15)
#define Tn_FSB_EN_CNF_MASK (0x4000UL) #define Tn_FSB_EN_CNF_MASK (0x4000UL)
...@@ -91,23 +92,14 @@ struct hpet { ...@@ -91,23 +92,14 @@ struct hpet {
* exported interfaces * exported interfaces
*/ */
struct hpet_task {
void (*ht_func) (void *);
void *ht_data;
void *ht_opaque;
};
struct hpet_data { struct hpet_data {
unsigned long hd_phys_address; unsigned long hd_phys_address;
void __iomem *hd_address; void __iomem *hd_address;
unsigned short hd_nirqs; unsigned short hd_nirqs;
unsigned short hd_flags;
unsigned int hd_state; /* timer allocated */ unsigned int hd_state; /* timer allocated */
unsigned int hd_irq[HPET_MAX_TIMERS]; unsigned int hd_irq[HPET_MAX_TIMERS];
}; };
#define HPET_DATA_PLATFORM 0x0001 /* platform call to hpet_alloc */
static inline void hpet_reserve_timer(struct hpet_data *hd, int timer) static inline void hpet_reserve_timer(struct hpet_data *hd, int timer)
{ {
hd->hd_state |= (1 << timer); hd->hd_state |= (1 << timer);
...@@ -125,7 +117,7 @@ struct hpet_info { ...@@ -125,7 +117,7 @@ struct hpet_info {
unsigned short hi_timer; unsigned short hi_timer;
}; };
#define HPET_INFO_PERIODIC 0x0001 /* timer is periodic */ #define HPET_INFO_PERIODIC 0x0010 /* periodic-capable comparator */
#define HPET_IE_ON _IO('h', 0x01) /* interrupt on */ #define HPET_IE_ON _IO('h', 0x01) /* interrupt on */
#define HPET_IE_OFF _IO('h', 0x02) /* interrupt off */ #define HPET_IE_OFF _IO('h', 0x02) /* interrupt off */
......
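With struct hpet_task and the in-kernel callback hooks gone, /dev/hpet's ioctl interface is the remaining way to drive a comparator. A sketch of the usual open/configure/wait sequence; HPET_IRQFREQ is not shown in this hunk, error handling is trimmed, and the 64 Hz rate is an arbitrary example:

	#include <stdio.h>
	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <linux/hpet.h>

	int main(void)
	{
		unsigned long data, freq = 64;	/* arbitrary example rate, in Hz */
		int fd = open("/dev/hpet", O_RDONLY);

		if (fd < 0)
			return 1;
		ioctl(fd, HPET_IRQFREQ, freq);	/* request 64 interrupts/second */
		ioctl(fd, HPET_IE_ON, 0);	/* enable the comparator's interrupt */
		/* read blocks until an interrupt fires; data is the driver's count */
		if (read(fd, &data, sizeof(data)) == sizeof(data))
			printf("interrupts: %lu\n", data);
		ioctl(fd, HPET_IE_OFF, 0);
		close(fd);
		return 0;
	}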
...@@ -36,6 +36,8 @@ ...@@ -36,6 +36,8 @@
#define XEN_ENTER_SWITCH_CODE 10 #define XEN_ENTER_SWITCH_CODE 10
#define SPU_PROFILING_CODE 11 #define SPU_PROFILING_CODE 11
#define SPU_CTX_SWITCH_CODE 12 #define SPU_CTX_SWITCH_CODE 12
#define IBS_FETCH_CODE 13
#define IBS_OP_CODE 14
struct super_block; struct super_block;
struct dentry; struct dentry;
......