Commit 34e7724c authored by Ingo Molnar


Merge branches 'x86/mm', 'x86/build', 'x86/apic' and 'x86/platform' into x86/core, to apply dependent patch
Signed-off-by: Ingo Molnar <mingo@kernel.org>
@@ -746,6 +746,12 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 	cpuidle.off=1	[CPU_IDLE]
 			disable the cpuidle sub-system
 
+	cpu_init_udelay=N
+			[X86] Delay for N microsec between assert and de-assert
+			of APIC INIT to start processors.  This delay occurs
+			on every CPU online, such as boot, and resume from suspend.
+			Default: 10000
+
 	cpcihp_generic=	[HW,PCI] Generic port I/O CompactPCI driver
 			Format:
 			<first_slot>,<last_slot>,<port>,<enum_bit>[,<debug>]
......
 MTRR (Memory Type Range Register) control
-3 Jun 1999
-Richard Gooch
-<rgooch@atnf.csiro.au>
+
+Richard Gooch <rgooch@atnf.csiro.au> - 3 Jun 1999
+Luis R. Rodriguez <mcgrof@do-not-panic.com> - April 9, 2015
+
+===============================================================================
+Phasing out MTRR use
+
+MTRR use is replaced on modern x86 hardware with PAT. Over time the only type
+of effective MTRR that is expected to be supported will be for write-combining.
+As MTRR use is phased out device drivers should use arch_phys_wc_add() to make
+MTRR effective on non-PAT systems while a no-op on PAT enabled systems.
+
+For details refer to Documentation/x86/pat.txt.
+===============================================================================
 
 On Intel P6 family processors (Pentium Pro, Pentium II and later)
 the Memory Type Range Registers (MTRRs) may be used to control
......
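For illustration, a minimal sketch of the driver pattern recommended above (hypothetical code, not part of this patch; assumes a prefetchable framebuffer-style BAR):

#include <linux/io.h>

/*
 * Hypothetical driver sketch: ioremap_wc() provides a write-combining
 * mapping via PAT where available, and arch_phys_wc_add() adds a WC MTRR
 * only on non-PAT systems (it is a no-op when PAT is enabled).
 */
static void __iomem *fb_base;
static int fb_wc_handle;

static int example_map_framebuffer(unsigned long phys, unsigned long size)
{
	fb_base = ioremap_wc(phys, size);
	if (!fb_base)
		return -ENOMEM;

	/* A negative return just means no MTRR was added; not fatal. */
	fb_wc_handle = arch_phys_wc_add(phys, size);
	return 0;
}

static void example_unmap_framebuffer(void)
{
	arch_phys_wc_del(fb_wc_handle);
	iounmap(fb_base);
}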
@@ -34,6 +34,8 @@ ioremap                |    --      |    UC-     |       UC-        |
                        |            |            |                  |
 ioremap_cache          |    --      |    WB      |       WB         |
                        |            |            |                  |
+ioremap_uc             |    --      |    UC      |       UC         |
+                       |            |            |                  |
 ioremap_nocache        |    --      |    UC-     |       UC-        |
                        |            |            |                  |
 ioremap_wc             |    --      |    --      |       WC         |
+MTRR effects on PAT / non-PAT systems
+-------------------------------------
+
+The following table provides the effects of using write-combining MTRRs when
+using ioremap*() calls on x86 for both non-PAT and PAT systems. Ideally
+mtrr_add() usage will be phased out in favor of arch_phys_wc_add() which will
+be a no-op on PAT enabled systems. The region over which arch_phys_wc_add()
+is made should already have been ioremapped with WC attributes or PAT entries;
+this can be done by using ioremap_wc() / set_memory_wc(). Devices which
+combine areas of IO memory desired to remain uncacheable with areas where
+write-combining is desirable should consider use of ioremap_uc() followed by
+set_memory_wc() to white-list effective write-combined areas. Such use is
+nevertheless discouraged as the effective memory type is considered
+implementation defined, yet this strategy can be used as a last resort on
+devices with size-constrained regions where MTRR write-combining would
+otherwise not be effective.
+
+----------------------------------------------------------------------
+MTRR Non-PAT   PAT   Linux ioremap value        Effective memory type
+----------------------------------------------------------------------
+                                                  Non-PAT |  PAT
+     PAT
+     |PCD
+     ||PWT
+     |||
+WC   000      WB      _PAGE_CACHE_MODE_WB            WC   |   WC
+WC   001      WC      _PAGE_CACHE_MODE_WC            WC*  |   WC
+WC   010      UC-     _PAGE_CACHE_MODE_UC_MINUS      WC*  |   UC
+WC   011      UC      _PAGE_CACHE_MODE_UC            UC   |   UC
+----------------------------------------------------------------------
+
+(*) denotes implementation defined and is discouraged
+
 Notes:
......
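To make the white-listing strategy above concrete, a hedged sketch of the (discouraged, last-resort) pattern — offsets, sizes and names are illustrative only:

#include <linux/io.h>
#include <asm/cacheflush.h>	/* set_memory_wc() in this era */

/*
 * Hypothetical device with one size-constrained BAR: map the whole BAR
 * strongly uncacheable, then white-list only the page-aligned window
 * that benefits from write-combining.
 */
static int example_map_regs(unsigned long bar_phys, unsigned long bar_size)
{
	void __iomem *regs = ioremap_uc(bar_phys, bar_size);	/* all UC */

	if (!regs)
		return -ENOMEM;

	/* WC_WIN_OFF and WC_WIN_PAGES are made-up, page-aligned constants. */
	set_memory_wc((unsigned long)regs + WC_WIN_OFF, WC_WIN_PAGES);
	return 0;
}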
 #ifndef __IA64_INTR_REMAPPING_H
 #define __IA64_INTR_REMAPPING_H
 #define irq_remapping_enabled 0
+#define dmar_alloc_hwirq	create_irq
+#define dmar_free_hwirq		destroy_irq
 #endif
@@ -165,7 +165,7 @@ static struct irq_chip dmar_msi_type = {
 	.irq_retrigger = ia64_msi_retrigger_irq,
 };
 
-static int
+static void
 msi_compose_msg(struct pci_dev *pdev, unsigned int irq, struct msi_msg *msg)
 {
 	struct irq_cfg *cfg = irq_cfg + irq;
@@ -186,21 +186,29 @@ msi_compose_msg(struct pci_dev *pdev, unsigned int irq, struct msi_msg *msg)
 		MSI_DATA_LEVEL_ASSERT |
 		MSI_DATA_DELIVERY_FIXED |
 		MSI_DATA_VECTOR(cfg->vector);
-	return 0;
 }
-int arch_setup_dmar_msi(unsigned int irq)
+int dmar_alloc_hwirq(int id, int node, void *arg)
 {
-	int ret;
+	int irq;
 	struct msi_msg msg;
 
-	ret = msi_compose_msg(NULL, irq, &msg);
-	if (ret < 0)
-		return ret;
-	dmar_msi_write(irq, &msg);
-	irq_set_chip_and_handler_name(irq, &dmar_msi_type, handle_edge_irq,
-				      "edge");
-	return 0;
+	irq = create_irq();
+	if (irq > 0) {
+		irq_set_handler_data(irq, arg);
+		irq_set_chip_and_handler_name(irq, &dmar_msi_type,
+					      handle_edge_irq, "edge");
+		msi_compose_msg(NULL, irq, &msg);
+		dmar_msi_write(irq, &msg);
+	}
+
+	return irq;
+}
+
+void dmar_free_hwirq(int irq)
+{
+	irq_set_handler_data(irq, NULL);
+	destroy_irq(irq);
 }
 
 #endif /* CONFIG_INTEL_IOMMU */
@@ -100,7 +100,7 @@ config X86
 	select IRQ_FORCED_THREADING
 	select HAVE_BPF_JIT if X86_64
 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE
-	select HAVE_ARCH_HUGE_VMAP if X86_64 || (X86_32 && X86_PAE)
+	select HAVE_ARCH_HUGE_VMAP if X86_64 || X86_PAE
 	select ARCH_HAS_SG_CHAIN
 	select CLKEVT_I8253
 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
@@ -341,7 +341,7 @@ config X86_FEATURE_NAMES
 config X86_X2APIC
 	bool "Support x2apic"
-	depends on X86_LOCAL_APIC && X86_64 && IRQ_REMAP
+	depends on X86_LOCAL_APIC && X86_64 && (IRQ_REMAP || HYPERVISOR_GUEST)
 	---help---
 	  This enables x2apic support on CPUs that have this feature.
@@ -441,6 +441,7 @@ config X86_UV
 	depends on X86_EXTENDED_PLATFORM
 	depends on NUMA
 	depends on X86_X2APIC
+	depends on PCI
 	---help---
 	  This option is needed in order to support SGI Ultraviolet systems.
 	  If you don't have one of these, you should say N here.
@@ -466,7 +467,6 @@ config X86_INTEL_CE
 	select X86_REBOOTFIXUPS
 	select OF
 	select OF_EARLY_FLATTREE
-	select IRQ_DOMAIN
 	---help---
 	  Select for the Intel CE media processor (CE4100) SOC.
 	  This option compiles in support for the CE4100 SOC for settop
@@ -851,11 +851,12 @@ config NR_CPUS
 	default "1" if !SMP
 	default "8192" if MAXSMP
 	default "32" if SMP && X86_BIGSMP
-	default "8" if SMP
+	default "8" if SMP && X86_32
+	default "64" if SMP
 	---help---
 	  This allows you to specify the maximum number of CPUs which this
 	  kernel will support.  If CPUMASK_OFFSTACK is enabled, the maximum
-	  supported value is 4096, otherwise the maximum value is 512.  The
+	  supported value is 8192, otherwise the maximum value is 512.  The
 	  minimum value which makes sense is 2.
 
 	  This is purely to save memory - each supported CPU adds
@@ -914,12 +915,12 @@ config X86_UP_IOAPIC
 
 config X86_LOCAL_APIC
 	def_bool y
 	depends on X86_64 || SMP || X86_32_NON_STANDARD || X86_UP_APIC || PCI_MSI
-	select GENERIC_IRQ_LEGACY_ALLOC_HWIRQ
+	select IRQ_DOMAIN_HIERARCHY
+	select PCI_MSI_IRQ_DOMAIN if PCI_MSI
 
 config X86_IO_APIC
 	def_bool y
 	depends on X86_LOCAL_APIC || X86_UP_IOAPIC
-	select IRQ_DOMAIN
 
 config X86_REROUTE_FOR_BROKEN_BOOT_IRQS
 	bool "Reroute for broken boot IRQs"
......
@@ -332,4 +332,15 @@ config X86_DEBUG_STATIC_CPU_HAS
 
 	  If unsure, say N.
 
+config PUNIT_ATOM_DEBUG
+	tristate "ATOM Punit debug driver"
+	select DEBUG_FS
+	select IOSF_MBI
+	---help---
+	  This is a debug driver, which gets the power states
+	  of all Punit North Complex devices. The power states of
+	  each device are exposed as part of the debugfs interface.
+	  The current power state can be read from
+	  /sys/kernel/debug/punit_atom/dev_power_state
+
 endmenu
@@ -77,6 +77,12 @@ else
         KBUILD_AFLAGS += -m64
         KBUILD_CFLAGS += -m64
 
+        # Align jump targets to 1 byte, not the default 16 bytes:
+        KBUILD_CFLAGS += -falign-jumps=1
+
+        # Pack loops tightly as well:
+        KBUILD_CFLAGS += -falign-loops=1
+
         # Don't autogenerate traditional x87 instructions
         KBUILD_CFLAGS += $(call cc-option,-mno-80387)
         KBUILD_CFLAGS += $(call cc-option,-mno-fp-ret-in-387)
@@ -84,6 +90,9 @@ else
         # Use -mpreferred-stack-boundary=3 if supported.
         KBUILD_CFLAGS += $(call cc-option,-mpreferred-stack-boundary=3)
 
+        # Use -mskip-rax-setup if supported.
+        KBUILD_CFLAGS += $(call cc-option,-mskip-rax-setup)
+
         # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
         cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
         cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
......
@@ -77,12 +77,6 @@ ENTRY(native_usergs_sysret32)
 	swapgs
 	sysretl
 ENDPROC(native_usergs_sysret32)
-
-ENTRY(native_irq_enable_sysexit)
-	swapgs
-	sti
-	sysexit
-ENDPROC(native_irq_enable_sysexit)
 #endif
 
 /*
@@ -119,7 +113,7 @@ ENTRY(ia32_sysenter_target)
 	 * it is too small to ever cause noticeable irq latency.
 	 */
 	SWAPGS_UNSAFE_STACK
-	movq	PER_CPU_VAR(cpu_tss + TSS_sp0), %rsp
+	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rsp
 	ENABLE_INTERRUPTS(CLBR_NONE)
 
 	/* Zero-extending 32-bit regs, do not remove */
@@ -142,7 +136,7 @@ ENTRY(ia32_sysenter_target)
 	pushq_cfi_reg	rsi		/* pt_regs->si */
 	pushq_cfi_reg	rdx		/* pt_regs->dx */
 	pushq_cfi_reg	rcx		/* pt_regs->cx */
-	pushq_cfi_reg	rax		/* pt_regs->ax */
+	pushq_cfi	$-ENOSYS	/* pt_regs->ax */
 	cld
 	sub	$(10*8),%rsp /* pt_regs->r8-11,bp,bx,r12-15 not saved */
 	CFI_ADJUST_CFA_OFFSET 10*8
@@ -169,8 +163,6 @@ sysenter_flags_fixed:
 	testl	$_TIF_WORK_SYSCALL_ENTRY, ASM_THREAD_INFO(TI_flags, %rsp, SIZEOF_PTREGS)
 	CFI_REMEMBER_STATE
 	jnz	sysenter_tracesys
-	cmpq	$(IA32_NR_syscalls-1),%rax
-	ja	ia32_badsys
 sysenter_do_call:
 	/* 32bit syscall -> 64bit C ABI argument conversion */
 	movl	%edi,%r8d	/* arg5 */
@@ -179,8 +171,11 @@ sysenter_do_call:
 	movl	%ebx,%edi	/* arg1 */
 	movl	%edx,%edx	/* arg3 (zero extension) */
 sysenter_dispatch:
+	cmpq	$(IA32_NR_syscalls-1),%rax
+	ja	1f
 	call	*ia32_sys_call_table(,%rax,8)
 	movq	%rax,RAX(%rsp)
+1:
 	DISABLE_INTERRUPTS(CLBR_NONE)
 	TRACE_IRQS_OFF
 	testl	$_TIF_ALLWORK_MASK, ASM_THREAD_INFO(TI_flags, %rsp, SIZEOF_PTREGS)
@@ -247,9 +242,7 @@ sysexit_from_sys_call:
 	movl	%ebx,%esi	/* 2nd arg: 1st syscall arg */
 	movl	%eax,%edi	/* 1st arg: syscall number */
 	call	__audit_syscall_entry
-	movl	RAX(%rsp),%eax		/* reload syscall number */
-	cmpq	$(IA32_NR_syscalls-1),%rax
-	ja	ia32_badsys
+	movl	ORIG_RAX(%rsp),%eax	/* reload syscall number */
 	movl	%ebx,%edi	/* reload 1st syscall arg */
 	movl	RCX(%rsp),%esi	/* reload 2nd syscall arg */
 	movl	RDX(%rsp),%edx	/* reload 3rd syscall arg */
@@ -300,13 +293,10 @@ sysenter_tracesys:
 #endif
 	SAVE_EXTRA_REGS
 	CLEAR_RREGS
-	movq	$-ENOSYS,RAX(%rsp)/* ptrace can change this for a bad syscall */
 	movq	%rsp,%rdi        /* &pt_regs -> arg1 */
 	call	syscall_trace_enter
 	LOAD_ARGS32  /* reload args from stack in case ptrace changed it */
 	RESTORE_EXTRA_REGS
-	cmpq	$(IA32_NR_syscalls-1),%rax
-	ja	int_ret_from_sys_call /* sysenter_tracesys has set RAX(%rsp) */
 	jmp	sysenter_do_call
 	CFI_ENDPROC
 ENDPROC(ia32_sysenter_target)
@@ -356,7 +346,7 @@ ENTRY(ia32_cstar_target)
 	SWAPGS_UNSAFE_STACK
 	movl	%esp,%r8d
 	CFI_REGISTER	rsp,r8
-	movq	PER_CPU_VAR(kernel_stack),%rsp
+	movq	PER_CPU_VAR(cpu_current_top_of_stack),%rsp
 	ENABLE_INTERRUPTS(CLBR_NONE)
 
 	/* Zero-extending 32-bit regs, do not remove */
@@ -376,7 +366,7 @@ ENTRY(ia32_cstar_target)
 	pushq_cfi_reg	rdx		/* pt_regs->dx */
 	pushq_cfi_reg	rbp		/* pt_regs->cx */
 	movl	%ebp,%ecx
-	pushq_cfi_reg	rax		/* pt_regs->ax */
+	pushq_cfi	$-ENOSYS	/* pt_regs->ax */
 	sub	$(10*8),%rsp /* pt_regs->r8-11,bp,bx,r12-15 not saved */
 	CFI_ADJUST_CFA_OFFSET 10*8
@@ -392,8 +382,6 @@ ENTRY(ia32_cstar_target)
 	testl	$_TIF_WORK_SYSCALL_ENTRY, ASM_THREAD_INFO(TI_flags, %rsp, SIZEOF_PTREGS)
 	CFI_REMEMBER_STATE
 	jnz	cstar_tracesys
-	cmpq	$IA32_NR_syscalls-1,%rax
-	ja	ia32_badsys
 cstar_do_call:
 	/* 32bit syscall -> 64bit C ABI argument conversion */
 	movl	%edi,%r8d	/* arg5 */
@@ -402,8 +390,11 @@ cstar_do_call:
 	movl	%ebx,%edi	/* arg1 */
 	movl	%edx,%edx	/* arg3 (zero extension) */
 cstar_dispatch:
+	cmpq	$(IA32_NR_syscalls-1),%rax
+	ja	1f
 	call	*ia32_sys_call_table(,%rax,8)
 	movq	%rax,RAX(%rsp)
+1:
 	DISABLE_INTERRUPTS(CLBR_NONE)
 	TRACE_IRQS_OFF
 	testl	$_TIF_ALLWORK_MASK, ASM_THREAD_INFO(TI_flags, %rsp, SIZEOF_PTREGS)
@@ -457,14 +448,11 @@ cstar_tracesys:
 	xchgl	%r9d,%ebp
 	SAVE_EXTRA_REGS
 	CLEAR_RREGS r9
-	movq	$-ENOSYS,RAX(%rsp)	/* ptrace can change this for a bad syscall */
 	movq	%rsp,%rdi        /* &pt_regs -> arg1 */
 	call	syscall_trace_enter
 	LOAD_ARGS32 1	/* reload args from stack in case ptrace changed it */
 	RESTORE_EXTRA_REGS
 	xchgl	%ebp,%r9d
-	cmpq	$(IA32_NR_syscalls-1),%rax
-	ja	int_ret_from_sys_call /* cstar_tracesys has set RAX(%rsp) */
 	jmp	cstar_do_call
 END(ia32_cstar_target)
@@ -523,7 +511,7 @@ ENTRY(ia32_syscall)
 	pushq_cfi_reg	rsi		/* pt_regs->si */
 	pushq_cfi_reg	rdx		/* pt_regs->dx */
 	pushq_cfi_reg	rcx		/* pt_regs->cx */
-	pushq_cfi_reg	rax		/* pt_regs->ax */
+	pushq_cfi	$-ENOSYS	/* pt_regs->ax */
 	cld
 	sub	$(10*8),%rsp /* pt_regs->r8-11,bp,bx,r12-15 not saved */
 	CFI_ADJUST_CFA_OFFSET 10*8
@@ -531,8 +519,6 @@ ENTRY(ia32_syscall)
 	orl	$TS_COMPAT, ASM_THREAD_INFO(TI_status, %rsp, SIZEOF_PTREGS)
 	testl	$_TIF_WORK_SYSCALL_ENTRY, ASM_THREAD_INFO(TI_flags, %rsp, SIZEOF_PTREGS)
 	jnz	ia32_tracesys
-	cmpq	$(IA32_NR_syscalls-1),%rax
-	ja	ia32_badsys
 ia32_do_call:
 	/* 32bit syscall -> 64bit C ABI argument conversion */
 	movl	%edi,%r8d	/* arg5 */
@@ -540,9 +526,12 @@ ia32_do_call:
 	xchg	%ecx,%esi	/* rsi:arg2, rcx:arg4 */
 	movl	%ebx,%edi	/* arg1 */
 	movl	%edx,%edx	/* arg3 (zero extension) */
+	cmpq	$(IA32_NR_syscalls-1),%rax
+	ja	1f
 	call	*ia32_sys_call_table(,%rax,8) # xxx: rip relative
 ia32_sysret:
 	movq	%rax,RAX(%rsp)
+1:
 ia32_ret_from_sys_call:
 	CLEAR_RREGS
 	jmp	int_ret_from_sys_call
@@ -550,22 +539,13 @@ ia32_ret_from_sys_call:
 ia32_tracesys:
 	SAVE_EXTRA_REGS
 	CLEAR_RREGS
-	movq	$-ENOSYS,RAX(%rsp)	/* ptrace can change this for a bad syscall */
 	movq	%rsp,%rdi        /* &pt_regs -> arg1 */
 	call	syscall_trace_enter
 	LOAD_ARGS32	/* reload args from stack in case ptrace changed it */
 	RESTORE_EXTRA_REGS
-	cmpq	$(IA32_NR_syscalls-1),%rax
-	ja	int_ret_from_sys_call /* ia32_tracesys has set RAX(%rsp) */
 	jmp	ia32_do_call
+END(ia32_syscall)
 
-ia32_badsys:
-	movq	$0,ORIG_RAX(%rsp)
-	movq	$-ENOSYS,%rax
-	jmp	ia32_sysret
-
 	CFI_ENDPROC
-END(ia32_syscall)
 
 	.macro PTREGSCALL label, func
 	ALIGN
......
@@ -18,6 +18,12 @@
 .endm
 #endif
 
+/*
+ * Issue one struct alt_instr descriptor entry (need to put it into
+ * the section .altinstructions, see below). This entry contains
+ * enough information for the alternatives patching code to patch an
+ * instruction. See apply_alternatives().
+ */
 .macro altinstruction_entry orig alt feature orig_len alt_len pad_len
 	.long \orig - .
 	.long \alt - .
@@ -27,6 +33,12 @@
 	.byte \pad_len
 .endm
 
+/*
+ * Define an alternative between two instructions. If @feature is
+ * present, early code in apply_alternatives() replaces @oldinstr with
+ * @newinstr. ".skip" directive takes care of proper instruction padding
+ * in case @newinstr is longer than @oldinstr.
+ */
 .macro ALTERNATIVE oldinstr, newinstr, feature
 140:
 	\oldinstr
@@ -55,6 +67,12 @@
  */
 #define alt_max_short(a, b)	((a) ^ (((a) ^ (b)) & -(-((a) < (b)))))
 
+/*
+ * Same as ALTERNATIVE macro above but for two alternatives. If CPU
+ * has @feature1, it replaces @oldinstr with @newinstr1. If CPU has
+ * @feature2, it replaces @oldinstr with @newinstr2.
+ */
 .macro ALTERNATIVE_2 oldinstr, newinstr1, feature1, newinstr2, feature2
 140:
 	\oldinstr
......
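For context, this is roughly how the documented macros are used in .S files — an illustrative pattern in the style of arch/x86/lib/memcpy_64.S from this same kernel window, not code from this patch:

	/* Pick a memcpy variant by CPU feature at patch time. */
	ALTERNATIVE_2 "jmp memcpy_orig", "", X86_FEATURE_REP_GOOD, \
		      "jmp memcpy_erms", X86_FEATURE_ERMS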
@@ -644,6 +644,12 @@ static inline void entering_ack_irq(void)
 	entering_irq();
 }
 
+static inline void ipi_entering_ack_irq(void)
+{
+	ack_APIC_irq();
+	irq_enter();
+}
+
 static inline void exiting_irq(void)
 {
 	irq_exit();
......
@@ -63,6 +63,31 @@
 	_ASM_ALIGN ;						\
 	_ASM_PTR (entry);					\
 	.popsection
+
+.macro ALIGN_DESTINATION
+	/* check for bad alignment of destination */
+	movl %edi,%ecx
+	andl $7,%ecx
+	jz 102f				/* already aligned */
+	subl $8,%ecx
+	negl %ecx
+	subl %ecx,%edx
+100:	movb (%rsi),%al
+101:	movb %al,(%rdi)
+	incq %rsi
+	incq %rdi
+	decl %ecx
+	jnz 100b
+102:
+	.section .fixup,"ax"
+103:	addl %ecx,%edx			/* ecx is zerorest also */
+	jmp copy_user_handle_tail
+	.previous
+
+	_ASM_EXTABLE(100b,103b)
+	_ASM_EXTABLE(101b,103b)
+.endm
+
 #else
 # define _ASM_EXTABLE(from,to)					\
 	" .pushsection \"__ex_table\",\"a\"\n"			\
......
@@ -22,7 +22,7 @@
  *
  * Atomically reads the value of @v.
  */
-static inline int atomic_read(const atomic_t *v)
+static __always_inline int atomic_read(const atomic_t *v)
 {
 	return ACCESS_ONCE((v)->counter);
 }
@@ -34,7 +34,7 @@ static inline int atomic_read(const atomic_t *v)
  *
  * Atomically sets the value of @v to @i.
  */
-static inline void atomic_set(atomic_t *v, int i)
+static __always_inline void atomic_set(atomic_t *v, int i)
 {
 	v->counter = i;
 }
@@ -46,7 +46,7 @@ static inline void atomic_set(atomic_t *v, int i)
  *
  * Atomically adds @i to @v.
  */
-static inline void atomic_add(int i, atomic_t *v)
+static __always_inline void atomic_add(int i, atomic_t *v)
 {
 	asm volatile(LOCK_PREFIX "addl %1,%0"
 		     : "+m" (v->counter)
@@ -60,7 +60,7 @@ static inline void atomic_add(int i, atomic_t *v)
  *
  * Atomically subtracts @i from @v.
  */
-static inline void atomic_sub(int i, atomic_t *v)
+static __always_inline void atomic_sub(int i, atomic_t *v)
 {
 	asm volatile(LOCK_PREFIX "subl %1,%0"
 		     : "+m" (v->counter)
@@ -76,7 +76,7 @@ static inline void atomic_sub(int i, atomic_t *v)
  * true if the result is zero, or false for all
  * other cases.
  */
-static inline int atomic_sub_and_test(int i, atomic_t *v)
+static __always_inline int atomic_sub_and_test(int i, atomic_t *v)
 {
 	GEN_BINARY_RMWcc(LOCK_PREFIX "subl", v->counter, "er", i, "%0", "e");
 }
@@ -87,7 +87,7 @@ static inline int atomic_sub_and_test(int i, atomic_t *v)
  *
 * Atomically increments @v by 1.
  */
-static inline void atomic_inc(atomic_t *v)
+static __always_inline void atomic_inc(atomic_t *v)
 {
 	asm volatile(LOCK_PREFIX "incl %0"
 		     : "+m" (v->counter));
@@ -99,7 +99,7 @@ static inline void atomic_inc(atomic_t *v)
  *
 * Atomically decrements @v by 1.
  */
-static inline void atomic_dec(atomic_t *v)
+static __always_inline void atomic_dec(atomic_t *v)
 {
 	asm volatile(LOCK_PREFIX "decl %0"
 		     : "+m" (v->counter));
@@ -113,7 +113,7 @@ static inline void atomic_dec(atomic_t *v)
  * returns true if the result is 0, or false for all other
 * cases.
  */
-static inline int atomic_dec_and_test(atomic_t *v)
+static __always_inline int atomic_dec_and_test(atomic_t *v)
 {
 	GEN_UNARY_RMWcc(LOCK_PREFIX "decl", v->counter, "%0", "e");
 }
@@ -126,7 +126,7 @@ static inline int atomic_dec_and_test(atomic_t *v)
  * and returns true if the result is zero, or false for all
 * other cases.
  */
-static inline int atomic_inc_and_test(atomic_t *v)
+static __always_inline int atomic_inc_and_test(atomic_t *v)
 {
 	GEN_UNARY_RMWcc(LOCK_PREFIX "incl", v->counter, "%0", "e");
 }
@@ -140,7 +140,7 @@ static inline int atomic_inc_and_test(atomic_t *v)
  * if the result is negative, or false when
 * result is greater than or equal to zero.
  */
-static inline int atomic_add_negative(int i, atomic_t *v)
+static __always_inline int atomic_add_negative(int i, atomic_t *v)
 {
 	GEN_BINARY_RMWcc(LOCK_PREFIX "addl", v->counter, "er", i, "%0", "s");
 }
@@ -152,7 +152,7 @@ static inline int atomic_add_negative(int i, atomic_t *v)
  *
 * Atomically adds @i to @v and returns @i + @v
  */
-static inline int atomic_add_return(int i, atomic_t *v)
+static __always_inline int atomic_add_return(int i, atomic_t *v)
 {
 	return i + xadd(&v->counter, i);
 }
@@ -164,7 +164,7 @@ static inline int atomic_add_return(int i, atomic_t *v)
  *
 * Atomically subtracts @i from @v and returns @v - @i
  */
-static inline int atomic_sub_return(int i, atomic_t *v)
+static __always_inline int atomic_sub_return(int i, atomic_t *v)
 {
 	return atomic_add_return(-i, v);
 }
@@ -172,7 +172,7 @@ static inline int atomic_sub_return(int i, atomic_t *v)
 #define atomic_inc_return(v)  (atomic_add_return(1, v))
 #define atomic_dec_return(v)  (atomic_sub_return(1, v))
 
-static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
+static __always_inline int atomic_cmpxchg(atomic_t *v, int old, int new)
 {
 	return cmpxchg(&v->counter, old, new);
 }
@@ -191,7 +191,7 @@ static inline int atomic_xchg(atomic_t *v, int new)
 * Atomically adds @a to @v, so long as @v was not already @u.
 * Returns the old value of @v.
  */
-static inline int __atomic_add_unless(atomic_t *v, int a, int u)
+static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u)
 {
 	int c, old;
 	c = atomic_read(v);
@@ -213,7 +213,7 @@ static inline int __atomic_add_unless(atomic_t *v, int a, int u)
 * Atomically adds 1 to @v
 * Returns the new value of @u
  */
-static inline short int atomic_inc_short(short int *v)
+static __always_inline short int atomic_inc_short(short int *v)
 {
 	asm(LOCK_PREFIX "addw $1, %0" : "+m" (*v));
 	return *v;
......
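As a usage note (illustrative, not part of this patch): __atomic_add_unless() backs the generic atomic_add_unless() helper, the classic way to take a reference only while an object is still live:

/* Hypothetical refcount-get: increment unless the count is already zero. */
static __always_inline int example_get_ref(atomic_t *refcount)
{
	/* atomic_add_unless() returns non-zero iff it performed the add. */
	return atomic_add_unless(refcount, 1, 0);
}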
@@ -40,7 +40,7 @@ static inline void atomic64_set(atomic64_t *v, long i)
  *
 * Atomically adds @i to @v.
  */
-static inline void atomic64_add(long i, atomic64_t *v)
+static __always_inline void atomic64_add(long i, atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "addq %1,%0"
 		     : "=m" (v->counter)
@@ -81,7 +81,7 @@ static inline int atomic64_sub_and_test(long i, atomic64_t *v)
  *
 * Atomically increments @v by 1.
  */
-static inline void atomic64_inc(atomic64_t *v)
+static __always_inline void atomic64_inc(atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "incq %0"
 		     : "=m" (v->counter)
@@ -94,7 +94,7 @@ static inline void atomic64_inc(atomic64_t *v)
  *
 * Atomically decrements @v by 1.
  */
-static inline void atomic64_dec(atomic64_t *v)
+static __always_inline void atomic64_dec(atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "decq %0"
 		     : "=m" (v->counter)
@@ -148,7 +148,7 @@ static inline int atomic64_add_negative(long i, atomic64_t *v)
  *
 * Atomically adds @i to @v and returns @i + @v
  */
-static inline long atomic64_add_return(long i, atomic64_t *v)
+static __always_inline long atomic64_add_return(long i, atomic64_t *v)
 {
 	return i + xadd(&v->counter, i);
 }
......
@@ -23,6 +23,8 @@ BUILD_INTERRUPT(x86_platform_ipi, X86_PLATFORM_IPI_VECTOR)
 #ifdef CONFIG_HAVE_KVM
 BUILD_INTERRUPT3(kvm_posted_intr_ipi, POSTED_INTR_VECTOR,
 		 smp_kvm_posted_intr_ipi)
+BUILD_INTERRUPT3(kvm_posted_intr_wakeup_ipi, POSTED_INTR_WAKEUP_VECTOR,
+		 smp_kvm_posted_intr_wakeup_ipi)
 #endif
 
 /*
......
@@ -14,6 +14,7 @@ typedef struct {
 #endif
 #ifdef CONFIG_HAVE_KVM
 	unsigned int kvm_posted_intr_ipis;
+	unsigned int kvm_posted_intr_wakeup_ipis;
 #endif
 	unsigned int x86_platform_ipis;	/* arch dependent */
 	unsigned int apic_perf_irqs;
......
@@ -74,20 +74,16 @@ extern unsigned int hpet_readl(unsigned int a);
 extern void force_hpet_resume(void);
 
 struct irq_data;
+struct hpet_dev;
+struct irq_domain;
+
 extern void hpet_msi_unmask(struct irq_data *data);
 extern void hpet_msi_mask(struct irq_data *data);
-struct hpet_dev;
 extern void hpet_msi_write(struct hpet_dev *hdev, struct msi_msg *msg);
 extern void hpet_msi_read(struct hpet_dev *hdev, struct msi_msg *msg);
 
-#ifdef CONFIG_PCI_MSI
-extern int default_setup_hpet_msi(unsigned int irq, unsigned int id);
-#else
-static inline int default_setup_hpet_msi(unsigned int irq, unsigned int id)
-{
-	return -EINVAL;
-}
-#endif
+extern struct irq_domain *hpet_create_irq_domain(int hpet_id);
+extern int hpet_assign_irq(struct irq_domain *domain,
+			   struct hpet_dev *dev, int dev_num);
 
 #ifdef CONFIG_HPET_EMULATE_RTC
......
@@ -29,6 +29,7 @@
 extern asmlinkage void apic_timer_interrupt(void);
 extern asmlinkage void x86_platform_ipi(void);
 extern asmlinkage void kvm_posted_intr_ipi(void);
+extern asmlinkage void kvm_posted_intr_wakeup_ipi(void);
 extern asmlinkage void error_interrupt(void);
 extern asmlinkage void irq_work_interrupt(void);
@@ -36,40 +37,6 @@ extern asmlinkage void spurious_interrupt(void);
 extern asmlinkage void thermal_interrupt(void);
 extern asmlinkage void reschedule_interrupt(void);
 
-extern asmlinkage void invalidate_interrupt(void);
-extern asmlinkage void invalidate_interrupt0(void);
-extern asmlinkage void invalidate_interrupt1(void);
-extern asmlinkage void invalidate_interrupt2(void);
-extern asmlinkage void invalidate_interrupt3(void);
-extern asmlinkage void invalidate_interrupt4(void);
-extern asmlinkage void invalidate_interrupt5(void);
-extern asmlinkage void invalidate_interrupt6(void);
-extern asmlinkage void invalidate_interrupt7(void);
-extern asmlinkage void invalidate_interrupt8(void);
-extern asmlinkage void invalidate_interrupt9(void);
-extern asmlinkage void invalidate_interrupt10(void);
-extern asmlinkage void invalidate_interrupt11(void);
-extern asmlinkage void invalidate_interrupt12(void);
-extern asmlinkage void invalidate_interrupt13(void);
-extern asmlinkage void invalidate_interrupt14(void);
-extern asmlinkage void invalidate_interrupt15(void);
-extern asmlinkage void invalidate_interrupt16(void);
-extern asmlinkage void invalidate_interrupt17(void);
-extern asmlinkage void invalidate_interrupt18(void);
-extern asmlinkage void invalidate_interrupt19(void);
-extern asmlinkage void invalidate_interrupt20(void);
-extern asmlinkage void invalidate_interrupt21(void);
-extern asmlinkage void invalidate_interrupt22(void);
-extern asmlinkage void invalidate_interrupt23(void);
-extern asmlinkage void invalidate_interrupt24(void);
-extern asmlinkage void invalidate_interrupt25(void);
-extern asmlinkage void invalidate_interrupt26(void);
-extern asmlinkage void invalidate_interrupt27(void);
-extern asmlinkage void invalidate_interrupt28(void);
-extern asmlinkage void invalidate_interrupt29(void);
-extern asmlinkage void invalidate_interrupt30(void);
-extern asmlinkage void invalidate_interrupt31(void);
-
 extern asmlinkage void irq_move_cleanup_interrupt(void);
 extern asmlinkage void reboot_interrupt(void);
 extern asmlinkage void threshold_interrupt(void);
@@ -92,55 +59,87 @@ extern void trace_call_function_single_interrupt(void);
 #define trace_irq_move_cleanup_interrupt  irq_move_cleanup_interrupt
 #define trace_reboot_interrupt  reboot_interrupt
 #define trace_kvm_posted_intr_ipi kvm_posted_intr_ipi
+#define trace_kvm_posted_intr_wakeup_ipi kvm_posted_intr_wakeup_ipi
 #endif /* CONFIG_TRACING */
 
+#ifdef	CONFIG_IRQ_REMAP
+/* Intel specific interrupt remapping information */
+struct irq_2_iommu {
+	struct intel_iommu *iommu;
+	u16 irte_index;
+	u16 sub_handle;
+	u8  irte_mask;
+};
+
+/* AMD specific interrupt remapping information */
+struct irq_2_irte {
+	u16 devid; /* Device ID for IRTE table */
+	u16 index; /* Index into IRTE table*/
+};
+#endif	/* CONFIG_IRQ_REMAP */
 #ifdef	CONFIG_X86_LOCAL_APIC
 struct irq_data;
+struct pci_dev;
+struct msi_desc;
+
+enum irq_alloc_type {
+	X86_IRQ_ALLOC_TYPE_IOAPIC = 1,
+	X86_IRQ_ALLOC_TYPE_HPET,
+	X86_IRQ_ALLOC_TYPE_MSI,
+	X86_IRQ_ALLOC_TYPE_MSIX,
+	X86_IRQ_ALLOC_TYPE_DMAR,
+	X86_IRQ_ALLOC_TYPE_UV,
+};
 
-struct irq_cfg {
-	cpumask_var_t		domain;
-	cpumask_var_t		old_domain;
-	u8			vector;
-	u8			move_in_progress : 1;
-#ifdef CONFIG_IRQ_REMAP
-	u8			remapped : 1;
-	union {
-		struct irq_2_iommu irq_2_iommu;
-		struct irq_2_irte  irq_2_irte;
-	};
-#endif
-	union {
-#ifdef CONFIG_X86_IO_APIC
-		struct {
-			struct list_head	irq_2_pin;
-		};
-#endif
-	};
-};
+struct irq_alloc_info {
+	enum irq_alloc_type	type;
+	u32			flags;
+	const struct cpumask	*mask;	/* CPU mask for vector allocation */
+	union {
+		int		unused;
+#ifdef	CONFIG_HPET_TIMER
+		struct {
+			int		hpet_id;
+			int		hpet_index;
+			void		*hpet_data;
+		};
+#endif
+#ifdef	CONFIG_PCI_MSI
+		struct {
+			struct pci_dev	*msi_dev;
+			irq_hw_number_t	msi_hwirq;
+		};
+#endif
+#ifdef	CONFIG_X86_IO_APIC
+		struct {
+			int		ioapic_id;
+			int		ioapic_pin;
+			int		ioapic_node;
+			u32		ioapic_trigger : 1;
+			u32		ioapic_polarity : 1;
+			u32		ioapic_valid : 1;
+			struct IO_APIC_route_entry *ioapic_entry;
+		};
+#endif
+#ifdef	CONFIG_DMAR_TABLE
+		struct {
+			int		dmar_id;
+			void		*dmar_data;
+		};
+#endif
+#ifdef	CONFIG_HT_IRQ
+		struct {
+			int		ht_pos;
+			int		ht_idx;
+			struct pci_dev	*ht_dev;
+			void		*ht_update;
+		};
+#endif
+#ifdef	CONFIG_X86_UV
+		struct {
+			int		uv_limit;
+			int		uv_blade;
+			unsigned long	uv_offset;
+			char		*uv_name;
+		};
+#endif
+	};
+};
+
+struct irq_cfg {
+	unsigned int		dest_apicid;
+	u8			vector;
+};
 extern struct irq_cfg *irq_cfg(unsigned int irq);
 extern struct irq_cfg *irqd_cfg(struct irq_data *irq_data);
-extern struct irq_cfg *alloc_irq_and_cfg_at(unsigned int at, int node);
 extern void lock_vector_lock(void);
 extern void unlock_vector_lock(void);
-extern int assign_irq_vector(int, struct irq_cfg *, const struct cpumask *);
-extern void clear_irq_vector(int irq, struct irq_cfg *cfg);
 extern void setup_vector_irq(int cpu);
 #ifdef CONFIG_SMP
 extern void send_cleanup_vector(struct irq_cfg *);
@@ -150,10 +149,7 @@ static inline void send_cleanup_vector(struct irq_cfg *c) { }
 static inline void irq_complete_move(struct irq_cfg *c) { }
 #endif
 
-extern int apic_retrigger_irq(struct irq_data *data);
 extern void apic_ack_edge(struct irq_data *data);
-extern int apic_set_affinity(struct irq_data *data, const struct cpumask *mask,
-			     unsigned int *dest_id);
 #else	/* CONFIG_X86_LOCAL_APIC */
 static inline void lock_vector_lock(void) {}
 static inline void unlock_vector_lock(void) {}
@@ -163,8 +159,7 @@ static inline void unlock_vector_lock(void) {}
 extern atomic_t irq_err_count;
 extern atomic_t irq_mis_count;
 
-/* EISA */
-extern void eisa_set_level_irq(unsigned int irq);
+extern void elcr_set_level_irq(unsigned int irq);
 
 /* SMP */
 extern __visible void smp_apic_timer_interrupt(struct pt_regs *);
@@ -178,7 +173,6 @@ extern asmlinkage void smp_irq_move_cleanup_interrupt(void);
 extern __visible void smp_reschedule_interrupt(struct pt_regs *);
 extern __visible void smp_call_function_interrupt(struct pt_regs *);
 extern __visible void smp_call_function_single_interrupt(struct pt_regs *);
-extern __visible void smp_invalidate_interrupt(struct pt_regs *);
 #endif
 
 extern char irq_entries_start[];
......
@@ -177,6 +177,7 @@ static inline unsigned int isa_virt_to_bus(volatile void *address)
  * look at pci_iomap().
  */
 extern void __iomem *ioremap_nocache(resource_size_t offset, unsigned long size);
+extern void __iomem *ioremap_uc(resource_size_t offset, unsigned long size);
 extern void __iomem *ioremap_cache(resource_size_t offset, unsigned long size);
 extern void __iomem *ioremap_prot(resource_size_t offset, unsigned long size,
 				unsigned long prot_val);
@@ -338,6 +339,9 @@ extern bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
 #define IO_SPACE_LIMIT 0xffff
 
 #ifdef CONFIG_MTRR
+extern int __must_check arch_phys_wc_index(int handle);
+#define arch_phys_wc_index arch_phys_wc_index
+
 extern int __must_check arch_phys_wc_add(unsigned long base,
 					 unsigned long size);
 extern void arch_phys_wc_del(int handle);
......
@@ -95,9 +95,22 @@ struct IR_IO_APIC_route_entry {
 		index		: 15;
 } __attribute__ ((packed));
 
+struct irq_alloc_info;
+struct ioapic_domain_cfg;
+
 #define IOAPIC_AUTO			-1
 #define IOAPIC_EDGE			0
 #define IOAPIC_LEVEL			1
 
+#define IOAPIC_MASKED			1
+#define IOAPIC_UNMASKED			0
+
+#define IOAPIC_POL_HIGH			0
+#define IOAPIC_POL_LOW			1
+
+#define IOAPIC_DEST_MODE_PHYSICAL	0
+#define IOAPIC_DEST_MODE_LOGICAL	1
+
 #define	IOAPIC_MAP_ALLOC		0x1
 #define	IOAPIC_MAP_CHECK		0x2
@@ -110,9 +123,6 @@ extern int nr_ioapics;
 
 extern int mpc_ioapic_id(int ioapic);
 extern unsigned int mpc_ioapic_addr(int ioapic);
-extern struct mp_ioapic_gsi *mp_ioapic_gsi_routing(int ioapic);
-
-#define MP_MAX_IOAPIC_PIN 127
 
 /* # of MP IRQ source entries */
 extern int mp_irq_entries;
@@ -120,9 +130,6 @@ extern int mp_irq_entries;
 /* MP IRQ source entries */
 extern struct mpc_intsrc mp_irqs[MAX_IRQ_SOURCES];
 
-/* Older SiS APIC requires we rewrite the index register */
-extern int sis_apic_bug;
-
 /* 1 if "noapic" boot option passed */
 extern int skip_ioapic_setup;
@@ -132,6 +139,8 @@ extern int noioapicquirk;
 /* -1 if "noapic" boot option passed */
 extern int noioapicreroute;
 
+extern u32 gsi_top;
+
 extern unsigned long io_apic_irqs;
 
 #define IO_APIC_IRQ(x)	(((x) >= NR_IRQS_LEGACY) || ((1 << (x)) & io_apic_irqs))
@@ -147,13 +156,6 @@ struct irq_cfg;
 extern void ioapic_insert_resources(void);
 extern int arch_early_ioapic_init(void);
 
-extern int native_setup_ioapic_entry(int, struct IO_APIC_route_entry *,
-				     unsigned int, int,
-				     struct io_apic_irq_attr *);
-extern void eoi_ioapic_irq(unsigned int irq, struct irq_cfg *cfg);
-
-extern void native_eoi_ioapic_pin(int apic, int pin, int vector);
-
 extern int save_ioapic_entries(void);
 extern void mask_ioapic_entries(void);
 extern int restore_ioapic_entries(void);
@@ -161,82 +163,32 @@ extern int restore_ioapic_entries(void);
 extern void setup_ioapic_ids_from_mpc(void);
 extern void setup_ioapic_ids_from_mpc_nocheck(void);
 
-struct io_apic_irq_attr {
-	int ioapic;
-	int ioapic_pin;
-	int trigger;
-	int polarity;
-};
-
-enum ioapic_domain_type {
-	IOAPIC_DOMAIN_INVALID,
-	IOAPIC_DOMAIN_LEGACY,
-	IOAPIC_DOMAIN_STRICT,
-	IOAPIC_DOMAIN_DYNAMIC,
-};
-
-struct device_node;
-struct irq_domain;
-struct irq_domain_ops;
-
-struct ioapic_domain_cfg {
-	enum ioapic_domain_type	type;
-	const struct irq_domain_ops *ops;
-	struct device_node	*dev;
-};
-
-struct mp_ioapic_gsi {
-	u32 gsi_base;
-	u32 gsi_end;
-};
-
-extern u32 gsi_top;
-
 extern int mp_find_ioapic(u32 gsi);
 extern int mp_find_ioapic_pin(int ioapic, u32 gsi);
-extern u32 mp_pin_to_gsi(int ioapic, int pin);
-extern int mp_map_gsi_to_irq(u32 gsi, unsigned int flags);
+extern int mp_map_gsi_to_irq(u32 gsi, unsigned int flags,
+			     struct irq_alloc_info *info);
 extern void mp_unmap_irq(int irq);
 extern int mp_register_ioapic(int id, u32 address, u32 gsi_base,
 			      struct ioapic_domain_cfg *cfg);
 extern int mp_unregister_ioapic(u32 gsi_base);
 extern int mp_ioapic_registered(u32 gsi_base);
-extern int mp_irqdomain_map(struct irq_domain *domain, unsigned int virq,
-			    irq_hw_number_t hwirq);
-extern void mp_irqdomain_unmap(struct irq_domain *domain, unsigned int virq);
-extern int mp_set_gsi_attr(u32 gsi, int trigger, int polarity, int node);
-extern void __init pre_init_apic_IRQ0(void);
+
+extern void ioapic_set_alloc_attr(struct irq_alloc_info *info,
+				  int node, int trigger, int polarity);
 
 extern void mp_save_irq(struct mpc_intsrc *m);
 
 extern void disable_ioapic_support(void);
 
-extern void __init native_io_apic_init_mappings(void);
+extern void __init io_apic_init_mappings(void);
 extern unsigned int native_io_apic_read(unsigned int apic, unsigned int reg);
-extern void native_io_apic_write(unsigned int apic, unsigned int reg, unsigned int val);
-extern void native_io_apic_modify(unsigned int apic, unsigned int reg, unsigned int val);
 extern void native_disable_io_apic(void);
-extern void native_io_apic_print_entries(unsigned int apic, unsigned int nr_entries);
-extern void intel_ir_io_apic_print_entries(unsigned int apic, unsigned int nr_entries);
-extern int native_ioapic_set_affinity(struct irq_data *,
-				      const struct cpumask *,
-				      bool);
 
 static inline unsigned int io_apic_read(unsigned int apic, unsigned int reg)
 {
 	return x86_io_apic_ops.read(apic, reg);
 }
 
-static inline void io_apic_write(unsigned int apic, unsigned int reg, unsigned int value)
-{
-	x86_io_apic_ops.write(apic, reg, value);
-}
-
-static inline void io_apic_modify(unsigned int apic, unsigned int reg, unsigned int value)
-{
-	x86_io_apic_ops.modify(apic, reg, value);
-}
-
-extern void io_apic_eoi(unsigned int apic, unsigned int vector);
-
 extern void setup_IO_APIC(void);
 extern void enable_IO_APIC(void);
 extern void disable_IO_APIC(void);
@@ -253,8 +205,12 @@ static inline int arch_early_ioapic_init(void) { return 0; }
 static inline void print_IO_APICs(void) {}
 #define gsi_top (NR_IRQS_LEGACY)
 static inline int mp_find_ioapic(u32 gsi) { return 0; }
-static inline u32 mp_pin_to_gsi(int ioapic, int pin) { return UINT_MAX; }
-static inline int mp_map_gsi_to_irq(u32 gsi, unsigned int flags) { return gsi; }
+static inline int mp_map_gsi_to_irq(u32 gsi, unsigned int flags,
+				    struct irq_alloc_info *info)
+{
+	return gsi;
+}
 static inline void mp_unmap_irq(int irq) { }
 
 static inline int save_ioapic_entries(void)
@@ -268,17 +224,11 @@ static inline int restore_ioapic_entries(void)
 	return -ENOMEM;
 }
 
-static inline void mp_save_irq(struct mpc_intsrc *m) { };
+static inline void mp_save_irq(struct mpc_intsrc *m) { }
 static inline void disable_ioapic_support(void) { }
-#define native_io_apic_init_mappings	NULL
+static inline void io_apic_init_mappings(void) { }
 #define native_io_apic_read		NULL
-#define native_io_apic_write		NULL
-#define native_io_apic_modify		NULL
 #define native_disable_io_apic		NULL
-#define native_io_apic_print_entries	NULL
-#define native_ioapic_set_affinity	NULL
-#define native_setup_ioapic_entry	NULL
-#define native_eoi_ioapic_pin		NULL
 
 static inline void setup_IO_APIC(void) { }
 static inline void enable_IO_APIC(void) { }
......
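For orientation, this is roughly how the reworked GSI-mapping API above is driven by callers such as the ACPI code — a sketch based only on the declarations shown, with error handling omitted and the function name hypothetical:

/* Hypothetical caller: register a level-triggered, active-low GSI. */
static int example_register_gsi(u32 gsi, int node)
{
	struct irq_alloc_info info;

	/* Describe the pin: NUMA node, trigger mode, polarity. */
	ioapic_set_alloc_attr(&info, node, 1, 1);
	return mp_map_gsi_to_irq(gsi, IOAPIC_MAP_ALLOC | IOAPIC_MAP_CHECK,
				 &info);
}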
@@ -30,6 +30,10 @@ extern void fixup_irqs(void);
 extern void irq_force_complete_move(int);
 #endif
 
+#ifdef CONFIG_HAVE_KVM
+extern void kvm_set_posted_intr_wakeup_handler(void (*handler)(void));
+#endif
+
 extern void (*x86_platform_ipi_callback)(void);
 extern void native_init_IRQ(void);
 extern bool handle_irq(unsigned irq, struct pt_regs *regs);
......
@@ -22,14 +22,12 @@
 #ifndef __X86_IRQ_REMAPPING_H
 #define __X86_IRQ_REMAPPING_H
 
+#include <asm/irqdomain.h>
+#include <asm/hw_irq.h>
 #include <asm/io_apic.h>
 
-struct IO_APIC_route_entry;
-struct io_apic_irq_attr;
-struct irq_chip;
 struct msi_msg;
-struct pci_dev;
-struct irq_cfg;
+struct irq_alloc_info;
 
 #ifdef CONFIG_IRQ_REMAP
@@ -39,22 +37,21 @@ extern int irq_remapping_enable(void);
 extern void irq_remapping_disable(void);
 extern int irq_remapping_reenable(int);
 extern int irq_remap_enable_fault_handling(void);
-extern int setup_ioapic_remapped_entry(int irq,
-				       struct IO_APIC_route_entry *entry,
-				       unsigned int destination,
-				       int vector,
-				       struct io_apic_irq_attr *attr);
-extern void free_remapped_irq(int irq);
-extern void compose_remapped_msi_msg(struct pci_dev *pdev,
-				     unsigned int irq, unsigned int dest,
-				     struct msi_msg *msg, u8 hpet_id);
-extern int setup_hpet_msi_remapped(unsigned int irq, unsigned int id);
 extern void panic_if_irq_remap(const char *msg);
-extern bool setup_remapped_irq(int irq,
-			       struct irq_cfg *cfg,
-			       struct irq_chip *chip);
-void irq_remap_modify_chip_defaults(struct irq_chip *chip);
+
+extern struct irq_domain *
+irq_remapping_get_ir_irq_domain(struct irq_alloc_info *info);
+extern struct irq_domain *
+irq_remapping_get_irq_domain(struct irq_alloc_info *info);
+
+/* Create PCI MSI/MSIx irqdomain, use @parent as the parent irqdomain. */
+extern struct irq_domain *arch_create_msi_irq_domain(struct irq_domain *parent);
+
+/* Get parent irqdomain for interrupt remapping irqdomain */
+static inline struct irq_domain *arch_get_ir_parent_domain(void)
+{
+	return x86_vector_domain;
+}
 
 #else  /* CONFIG_IRQ_REMAP */
@@ -64,42 +61,22 @@ static inline int irq_remapping_enable(void) { return -ENODEV; }
 static inline void irq_remapping_disable(void) { }
 static inline int irq_remapping_reenable(int eim) { return -ENODEV; }
 static inline int irq_remap_enable_fault_handling(void) { return -ENODEV; }
-static inline int setup_ioapic_remapped_entry(int irq,
-					      struct IO_APIC_route_entry *entry,
-					      unsigned int destination,
-					      int vector,
-					      struct io_apic_irq_attr *attr)
-{
-	return -ENODEV;
-}
-static inline void free_remapped_irq(int irq) { }
-static inline void compose_remapped_msi_msg(struct pci_dev *pdev,
-					    unsigned int irq, unsigned int dest,
-					    struct msi_msg *msg, u8 hpet_id)
-{
-}
-static inline int setup_hpet_msi_remapped(unsigned int irq, unsigned int id)
-{
-	return -ENODEV;
-}
 
 static inline void panic_if_irq_remap(const char *msg)
 {
 }
 
-static inline void irq_remap_modify_chip_defaults(struct irq_chip *chip)
+static inline struct irq_domain *
+irq_remapping_get_ir_irq_domain(struct irq_alloc_info *info)
 {
+	return NULL;
 }
 
-static inline bool setup_remapped_irq(int irq,
-				      struct irq_cfg *cfg,
-				      struct irq_chip *chip)
+static inline struct irq_domain *
+irq_remapping_get_irq_domain(struct irq_alloc_info *info)
 {
-	return false;
+	return NULL;
 }
-#endif /* CONFIG_IRQ_REMAP */
 
-#define dmar_alloc_hwirq()	irq_alloc_hwirq(-1)
-#define dmar_free_hwirq		irq_free_hwirq
-
+#endif /* CONFIG_IRQ_REMAP */
 #endif /* __X86_IRQ_REMAPPING_H */
@@ -47,31 +47,12 @@
#define IRQ_MOVE_CLEANUP_VECTOR		FIRST_EXTERNAL_VECTOR

#define IA32_SYSCALL_VECTOR		0x80
-#ifdef CONFIG_X86_32
-# define SYSCALL_VECTOR			0x80
-#endif

/*
 * Vectors 0x30-0x3f are used for ISA interrupts.
 *   round up to the next 16-vector boundary
 */
-#define IRQ0_VECTOR			((FIRST_EXTERNAL_VECTOR + 16) & ~15)
-#define IRQ1_VECTOR			(IRQ0_VECTOR + 1)
-#define IRQ2_VECTOR			(IRQ0_VECTOR + 2)
-#define IRQ3_VECTOR			(IRQ0_VECTOR + 3)
-#define IRQ4_VECTOR			(IRQ0_VECTOR + 4)
-#define IRQ5_VECTOR			(IRQ0_VECTOR + 5)
-#define IRQ6_VECTOR			(IRQ0_VECTOR + 6)
-#define IRQ7_VECTOR			(IRQ0_VECTOR + 7)
-#define IRQ8_VECTOR			(IRQ0_VECTOR + 8)
-#define IRQ9_VECTOR			(IRQ0_VECTOR + 9)
-#define IRQ10_VECTOR			(IRQ0_VECTOR + 10)
-#define IRQ11_VECTOR			(IRQ0_VECTOR + 11)
-#define IRQ12_VECTOR			(IRQ0_VECTOR + 12)
-#define IRQ13_VECTOR			(IRQ0_VECTOR + 13)
-#define IRQ14_VECTOR			(IRQ0_VECTOR + 14)
-#define IRQ15_VECTOR			(IRQ0_VECTOR + 15)
+#define ISA_IRQ_VECTOR(irq)		(((FIRST_EXTERNAL_VECTOR + 16) & ~15) + irq)

/*
 * Special IRQ vectors used by the SMP architecture, 0xf0-0xff
@@ -105,6 +86,7 @@
/* Vector for KVM to deliver posted interrupt IPI */
#ifdef CONFIG_HAVE_KVM
#define POSTED_INTR_VECTOR		0xf2
+#define POSTED_INTR_WAKEUP_VECTOR	0xf1
#endif

/*
@@ -157,16 +139,20 @@ static inline int invalid_vm86_irq(int irq)
#define NR_IRQS_LEGACY			16

-#define IO_APIC_VECTOR_LIMIT		( 32 * MAX_IO_APICS )
+#define CPU_VECTOR_LIMIT		(64 * NR_CPUS)
+#define IO_APIC_VECTOR_LIMIT		(32 * MAX_IO_APICS)

-#ifdef CONFIG_X86_IO_APIC
-# define CPU_VECTOR_LIMIT		(64 * NR_CPUS)
-# define NR_IRQS					\
+#if defined(CONFIG_X86_IO_APIC) && defined(CONFIG_PCI_MSI)
+#define NR_IRQS						\
	(CPU_VECTOR_LIMIT > IO_APIC_VECTOR_LIMIT ?	\
		(NR_VECTORS + CPU_VECTOR_LIMIT)  :	\
		(NR_VECTORS + IO_APIC_VECTOR_LIMIT))
-#else /* !CONFIG_X86_IO_APIC: */
-# define NR_IRQS			NR_IRQS_LEGACY
+#elif defined(CONFIG_X86_IO_APIC)
+#define NR_IRQS				(NR_VECTORS + IO_APIC_VECTOR_LIMIT)
+#elif defined(CONFIG_PCI_MSI)
+#define NR_IRQS				(NR_VECTORS + CPU_VECTOR_LIMIT)
+#else
+#define NR_IRQS				NR_IRQS_LEGACY
#endif

#endif /* _ASM_X86_IRQ_VECTORS_H */
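As a rough illustration of what the reworked macros evaluate to — a minimal sketch, not part of the commit, assuming FIRST_EXTERNAL_VECTOR is 0x20 and NR_VECTORS is 256 as in mainline of this era, with a hypothetical NR_CPUS of 64:

	#include <stdio.h>

	#define FIRST_EXTERNAL_VECTOR	0x20	/* assumed */
	#define NR_VECTORS		256	/* assumed */
	#define NR_CPUS			64	/* hypothetical config */
	#define MAX_IO_APICS		128	/* assumed */

	#define ISA_IRQ_VECTOR(irq)	(((FIRST_EXTERNAL_VECTOR + 16) & ~15) + irq)
	#define CPU_VECTOR_LIMIT	(64 * NR_CPUS)
	#define IO_APIC_VECTOR_LIMIT	(32 * MAX_IO_APICS)

	int main(void)
	{
		/* ISA IRQ 0 lands on vector 0x30, ISA IRQ 15 on 0x3f */
		printf("ISA_IRQ_VECTOR(0)  = 0x%x\n", ISA_IRQ_VECTOR(0));
		printf("ISA_IRQ_VECTOR(15) = 0x%x\n", ISA_IRQ_VECTOR(15));

		/* IO-APIC + MSI case: the larger of the two limits wins;
		 * here both are 4096, so NR_IRQS = 256 + 4096 = 4352 */
		printf("NR_IRQS = %d\n",
		       NR_VECTORS + (CPU_VECTOR_LIMIT > IO_APIC_VECTOR_LIMIT ?
				     CPU_VECTOR_LIMIT : IO_APIC_VECTOR_LIMIT));
		return 0;
	}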
#ifndef _ASM_IRQDOMAIN_H
#define _ASM_IRQDOMAIN_H
#include <linux/irqdomain.h>
#include <asm/hw_irq.h>
#ifdef CONFIG_X86_LOCAL_APIC
enum {
/* Allocate contiguous CPU vectors */
X86_IRQ_ALLOC_CONTIGUOUS_VECTORS = 0x1,
};
extern struct irq_domain *x86_vector_domain;
extern void init_irq_alloc_info(struct irq_alloc_info *info,
const struct cpumask *mask);
extern void copy_irq_alloc_info(struct irq_alloc_info *dst,
struct irq_alloc_info *src);
#endif /* CONFIG_X86_LOCAL_APIC */
#ifdef CONFIG_X86_IO_APIC
struct device_node;
struct irq_data;
enum ioapic_domain_type {
IOAPIC_DOMAIN_INVALID,
IOAPIC_DOMAIN_LEGACY,
IOAPIC_DOMAIN_STRICT,
IOAPIC_DOMAIN_DYNAMIC,
};
struct ioapic_domain_cfg {
enum ioapic_domain_type type;
const struct irq_domain_ops *ops;
struct device_node *dev;
};
extern const struct irq_domain_ops mp_ioapic_irqdomain_ops;
extern int mp_irqdomain_alloc(struct irq_domain *domain, unsigned int virq,
unsigned int nr_irqs, void *arg);
extern void mp_irqdomain_free(struct irq_domain *domain, unsigned int virq,
unsigned int nr_irqs);
extern void mp_irqdomain_activate(struct irq_domain *domain,
struct irq_data *irq_data);
extern void mp_irqdomain_deactivate(struct irq_domain *domain,
struct irq_data *irq_data);
extern int mp_irqdomain_ioapic_idx(struct irq_domain *domain);
#endif /* CONFIG_X86_IO_APIC */
#ifdef CONFIG_PCI_MSI
extern void arch_init_msi_domain(struct irq_domain *domain);
#else
static inline void arch_init_msi_domain(struct irq_domain *domain) { }
#endif
#ifdef CONFIG_HT_IRQ
extern void arch_init_htirq_domain(struct irq_domain *domain);
#else
static inline void arch_init_htirq_domain(struct irq_domain *domain) { }
#endif
#endif
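A minimal sketch of how a platform enumerator is expected to wire these declarations together (mirroring the ACPI hunk later in this commit; the variable name example_cfg is made up):

	static struct ioapic_domain_cfg example_cfg = {
		.type = IOAPIC_DOMAIN_DYNAMIC,		/* pins allocated on demand */
		.ops  = &mp_ioapic_irqdomain_ops,	/* alloc/free/activate/deactivate */
	};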
#ifndef _ASM_X86_MSI_H
#define _ASM_X86_MSI_H
#include <asm/hw_irq.h>
typedef struct irq_alloc_info msi_alloc_info_t;
#endif /* _ASM_X86_MSI_H */
@@ -31,7 +31,7 @@
 * arch_phys_wc_add and arch_phys_wc_del.
 */
# ifdef CONFIG_MTRR
-extern u8 mtrr_type_lookup(u64 addr, u64 end);
+extern u8 mtrr_type_lookup(u64 addr, u64 end, u8 *uniform);
extern void mtrr_save_fixed_ranges(void *);
extern void mtrr_save_state(void);
extern int mtrr_add(unsigned long base, unsigned long size,
@@ -48,14 +48,13 @@ extern void mtrr_aps_init(void);
extern void mtrr_bp_restore(void);
extern int mtrr_trim_uncached_memory(unsigned long end_pfn);
extern int amd_special_default_mtrr(void);
-extern int phys_wc_to_mtrr_index(int handle);
# else
-static inline u8 mtrr_type_lookup(u64 addr, u64 end)
+static inline u8 mtrr_type_lookup(u64 addr, u64 end, u8 *uniform)
{
	/*
	 * Return no-MTRRs:
	 */
-	return 0xff;
+	return MTRR_TYPE_INVALID;
}
#define mtrr_save_fixed_ranges(arg) do {} while (0)
#define mtrr_save_state() do {} while (0)
@@ -84,10 +83,6 @@ static inline int mtrr_trim_uncached_memory(unsigned long end_pfn)
static inline void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi)
{
}
-static inline int phys_wc_to_mtrr_index(int handle)
-{
-	return -1;
-}
#define mtrr_ap_init() do {} while (0)
#define mtrr_bp_init() do {} while (0)
@@ -127,4 +122,8 @@ struct mtrr_gentry32 {
		_IOW(MTRR_IOCTL_BASE, 9, struct mtrr_sentry32)
#endif /* CONFIG_COMPAT */

+/* Bit fields for enabled in struct mtrr_state_type */
+#define MTRR_STATE_MTRR_FIXED_ENABLED	0x01
+#define MTRR_STATE_MTRR_ENABLED		0x02
+
#endif /* _ASM_X86_MTRR_H */
@@ -160,13 +160,14 @@ struct pv_cpu_ops {
	u64 (*read_pmc)(int counter);
	unsigned long long (*read_tscp)(unsigned int *aux);

+#ifdef CONFIG_X86_32
	/*
	 * Atomically enable interrupts and return to userspace.  This
-	 * is only ever used to return to 32-bit processes; in a
-	 * 64-bit kernel, it's used for 32-on-64 compat processes, but
-	 * never native 64-bit processes.  (Jump, not call.)
+	 * is only used in 32-bit kernels.  64-bit kernels use
+	 * usergs_sysret32 instead.
	 */
	void (*irq_enable_sysexit)(void);
+#endif

	/*
	 * Switch to usermode gs and return to 64-bit usermode using
...
@@ -4,12 +4,7 @@
#include <linux/types.h>
#include <asm/pgtable_types.h>

-#ifdef CONFIG_X86_PAT
-extern int pat_enabled;
-#else
-static const int pat_enabled;
-#endif
+bool pat_enabled(void);

extern void pat_init(void);
void pat_init_cache_modes(void);
...
@@ -96,15 +96,10 @@ extern void pci_iommu_alloc(void);
#ifdef CONFIG_PCI_MSI
/* implemented in arch/x86/kernel/apic/io_apic. */
struct msi_desc;
-void native_compose_msi_msg(struct pci_dev *pdev, unsigned int irq,
-			    unsigned int dest, struct msi_msg *msg, u8 hpet_id);
int native_setup_msi_irqs(struct pci_dev *dev, int nvec, int type);
void native_teardown_msi_irq(unsigned int irq);
void native_restore_msi_irqs(struct pci_dev *dev);
-int setup_msi_irq(struct pci_dev *dev, struct msi_desc *msidesc,
-		  unsigned int irq_base, unsigned int irq_offset);
#else
-#define native_compose_msi_msg		NULL
#define native_setup_msi_irqs		NULL
#define native_teardown_msi_irq		NULL
#endif
...
@@ -215,6 +215,44 @@ static inline void clwb(volatile void *__p)
		: [pax] "a" (p));
}
/**
* pcommit_sfence() - persistent commit and fence
*
* The PCOMMIT instruction ensures that data that has been flushed from the
* processor's cache hierarchy with CLWB, CLFLUSHOPT or CLFLUSH is accepted to
* memory and is durable on the DIMM. The primary use case for this is
* persistent memory.
*
* This function shows how to properly use CLWB/CLFLUSHOPT/CLFLUSH and PCOMMIT
* with appropriate fencing.
*
* Example:
* void flush_and_commit_buffer(void *vaddr, unsigned int size)
* {
* unsigned long clflush_mask = boot_cpu_data.x86_clflush_size - 1;
* void *vend = vaddr + size;
* void *p;
*
* for (p = (void *)((unsigned long)vaddr & ~clflush_mask);
* p < vend; p += boot_cpu_data.x86_clflush_size)
* clwb(p);
*
* // SFENCE to order CLWB/CLFLUSHOPT/CLFLUSH cache flushes
* // MFENCE via mb() also works
* wmb();
*
* // PCOMMIT and the required SFENCE for ordering
* pcommit_sfence();
* }
*
* After this function completes the data pointed to by 'vaddr' has been
* accepted to memory and will be durable if the 'vaddr' points to persistent
* memory.
*
* PCOMMIT must always be ordered by an MFENCE or SFENCE, so to help simplify
* things we include both the PCOMMIT and the required SFENCE in the
* alternatives generated by pcommit_sfence().
*/
static inline void pcommit_sfence(void)
{
	alternative(ASM_NOP7,
...
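The new kernel-doc above already embeds a usage example; for convenience, here is the same flow as a self-contained sketch, assuming kernel context where boot_cpu_data and the clwb()/wmb()/pcommit_sfence() helpers from this header are available:

	/* Flush a buffer's cache lines, then make the data durable. */
	static void flush_and_commit_buffer(void *vaddr, unsigned int size)
	{
		unsigned long clflush_mask = boot_cpu_data.x86_clflush_size - 1;
		void *vend = vaddr + size;
		void *p;

		/* Write back every cache line that overlaps the buffer */
		for (p = (void *)((unsigned long)vaddr & ~clflush_mask);
		     p < vend; p += boot_cpu_data.x86_clflush_size)
			clwb(p);

		/* SFENCE (via wmb()) orders the CLWBs before PCOMMIT */
		wmb();

		/* PCOMMIT plus its required trailing SFENCE */
		pcommit_sfence();
	}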
@@ -177,8 +177,6 @@ struct thread_info {
 */
#ifndef __ASSEMBLY__

-DECLARE_PER_CPU(unsigned long, kernel_stack);
-
static inline struct thread_info *current_thread_info(void)
{
	return (struct thread_info *)(current_top_of_stack() - THREAD_SIZE);
@@ -197,9 +195,13 @@ static inline unsigned long current_stack_pointer(void)

#else /* !__ASSEMBLY__ */

+#ifdef CONFIG_X86_64
+# define cpu_current_top_of_stack (cpu_tss + TSS_sp0)
+#endif
+
/* Load thread_info address into "reg" */
#define GET_THREAD_INFO(reg) \
-	_ASM_MOV PER_CPU_VAR(kernel_stack),reg ; \
+	_ASM_MOV PER_CPU_VAR(cpu_current_top_of_stack),reg ; \
	_ASM_SUB $(THREAD_SIZE),reg ;

/*
...
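A hedged sketch of what the 64-bit alias above means: the assembly now reads the same per-CPU slot that the C helper current_top_of_stack() reads. The field path cpu_tss.x86_tss.sp0 is an assumption based on kernels of this era, and the helper name below is made up:

	/* C-level equivalent of GET_THREAD_INFO's first load (sketch) */
	static inline unsigned long sketch_top_of_stack(void)
	{
		/* TSS.sp0 holds the top of the current kernel stack */
		return this_cpu_read_stable(cpu_tss.x86_tss.sp0);
	}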
@@ -59,6 +59,10 @@ __copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
			__put_user_size(*(u32 *)from, (u32 __user *)to,
					4, ret, 4);
			return ret;
+		case 8:
+			__put_user_size(*(u64 *)from, (u64 __user *)to,
+					8, ret, 8);
+			return ret;
		}
	}
	return __copy_to_user_ll(to, from, n);
...
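With the new case 8, a constant-sized 8-byte copy can stay on the inlined fast path instead of falling back to __copy_to_user_ll(). A minimal sketch (the wrapper name and 'uptr' are hypothetical):

	/* Returns the number of bytes NOT copied: 0 on success. */
	static inline unsigned long put_u64_inatomic(u64 __user *uptr, u64 val)
	{
		return __copy_to_user_inatomic(uptr, &val, sizeof(val));
	}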
@@ -171,38 +171,17 @@ struct x86_platform_ops {
};

struct pci_dev;
-struct msi_msg;

struct x86_msi_ops {
	int (*setup_msi_irqs)(struct pci_dev *dev, int nvec, int type);
-	void (*compose_msi_msg)(struct pci_dev *dev, unsigned int irq,
-				unsigned int dest, struct msi_msg *msg,
-				u8 hpet_id);
	void (*teardown_msi_irq)(unsigned int irq);
	void (*teardown_msi_irqs)(struct pci_dev *dev);
	void (*restore_msi_irqs)(struct pci_dev *dev);
-	int  (*setup_hpet_msi)(unsigned int irq, unsigned int id);
};

-struct IO_APIC_route_entry;
-struct io_apic_irq_attr;
-struct irq_data;
-struct cpumask;
-
struct x86_io_apic_ops {
-	void		(*init)   (void);
	unsigned int	(*read)   (unsigned int apic, unsigned int reg);
-	void		(*write)  (unsigned int apic, unsigned int reg, unsigned int value);
-	void		(*modify) (unsigned int apic, unsigned int reg, unsigned int value);
	void		(*disable)(void);
-	void		(*print_entries)(unsigned int apic, unsigned int nr_entries);
-	int		(*set_affinity)(struct irq_data *data,
-					const struct cpumask *mask,
-					bool force);
-	int		(*setup_entry)(int irq, struct IO_APIC_route_entry *entry,
-				       unsigned int destination, int vector,
-				       struct io_apic_irq_attr *attr);
-	void		(*eoi_ioapic_pin)(int apic, int pin, int vector);
};

extern struct x86_init_ops x86_init;
...
@@ -103,7 +103,7 @@ struct mtrr_state_type {
#define MTRRIOC_GET_PAGE_ENTRY   _IOWR(MTRR_IOCTL_BASE,  8, struct mtrr_gentry)
#define MTRRIOC_KILL_PAGE_ENTRY  _IOW(MTRR_IOCTL_BASE,   9, struct mtrr_sentry)

-/* These are the region types */
+/* MTRR memory types, which are defined in SDM */
#define MTRR_TYPE_UNCACHABLE 0
#define MTRR_TYPE_WRCOMB     1
/*#define MTRR_TYPE_         2*/
@@ -113,5 +113,11 @@ struct mtrr_state_type {
#define MTRR_TYPE_WRBACK     6
#define MTRR_NUM_TYPES       7

+/*
+ * Invalid MTRR memory type.  mtrr_type_lookup() returns this value when
+ * MTRRs are disabled.  Note, this value is allocated from the reserved
+ * values (0x7-0xff) of the MTRR memory types.
+ */
+#define MTRR_TYPE_INVALID    0xff
+
#endif /* _UAPI_ASM_X86_MTRR_H */
@@ -31,12 +31,12 @@
#include <linux/module.h>
#include <linux/dmi.h>
#include <linux/irq.h>
-#include <linux/irqdomain.h>
#include <linux/slab.h>
#include <linux/bootmem.h>
#include <linux/ioport.h>
#include <linux/pci.h>

+#include <asm/irqdomain.h>
#include <asm/pci_x86.h>
#include <asm/pgtable.h>
#include <asm/io_apic.h>
@@ -400,57 +400,13 @@ static int mp_config_acpi_gsi(struct device *dev, u32 gsi, int trigger,
	return 0;
}

-static int mp_register_gsi(struct device *dev, u32 gsi, int trigger,
-			   int polarity)
-{
-	int irq, node;
-
-	if (acpi_irq_model != ACPI_IRQ_MODEL_IOAPIC)
-		return gsi;
-
-	trigger = trigger == ACPI_EDGE_SENSITIVE ? 0 : 1;
-	polarity = polarity == ACPI_ACTIVE_HIGH ? 0 : 1;
-	node = dev ? dev_to_node(dev) : NUMA_NO_NODE;
-	if (mp_set_gsi_attr(gsi, trigger, polarity, node)) {
-		pr_warn("Failed to set pin attr for GSI%d\n", gsi);
-		return -1;
-	}
-
-	irq = mp_map_gsi_to_irq(gsi, IOAPIC_MAP_ALLOC);
-	if (irq < 0)
-		return irq;
-
-	/* Don't set up the ACPI SCI because it's already set up */
-	if (enable_update_mptable && acpi_gbl_FADT.sci_interrupt != gsi)
-		mp_config_acpi_gsi(dev, gsi, trigger, polarity);
-
-	return irq;
-}
-
-static void mp_unregister_gsi(u32 gsi)
-{
-	int irq;
-
-	if (acpi_irq_model != ACPI_IRQ_MODEL_IOAPIC)
-		return;
-
-	irq = mp_map_gsi_to_irq(gsi, 0);
-	if (irq > 0)
-		mp_unmap_irq(irq);
-}
-
-static struct irq_domain_ops acpi_irqdomain_ops = {
-	.map = mp_irqdomain_map,
-	.unmap = mp_irqdomain_unmap,
-};
-
static int __init
acpi_parse_ioapic(struct acpi_subtable_header * header, const unsigned long end)
{
	struct acpi_madt_io_apic *ioapic = NULL;
	struct ioapic_domain_cfg cfg = {
		.type = IOAPIC_DOMAIN_DYNAMIC,
-		.ops = &acpi_irqdomain_ops,
+		.ops = &mp_ioapic_irqdomain_ops,
	};

	ioapic = (struct acpi_madt_io_apic *)header;
@@ -652,7 +608,7 @@ static int acpi_register_gsi_pic(struct device *dev, u32 gsi,
	 * Make sure all (legacy) PCI IRQs are set as level-triggered.
	 */
	if (trigger == ACPI_LEVEL_SENSITIVE)
-		eisa_set_level_irq(gsi);
+		elcr_set_level_irq(gsi);
#endif

	return gsi;
@@ -663,10 +619,21 @@ static int acpi_register_gsi_ioapic(struct device *dev, u32 gsi,
				    int trigger, int polarity)
{
	int irq = gsi;
#ifdef CONFIG_X86_IO_APIC
+	int node;
+	struct irq_alloc_info info;
+
+	node = dev ? dev_to_node(dev) : NUMA_NO_NODE;
+	trigger = trigger == ACPI_EDGE_SENSITIVE ? 0 : 1;
+	polarity = polarity == ACPI_ACTIVE_HIGH ? 0 : 1;
+	ioapic_set_alloc_attr(&info, node, trigger, polarity);
+
	mutex_lock(&acpi_ioapic_lock);
-	irq = mp_register_gsi(dev, gsi, trigger, polarity);
+	irq = mp_map_gsi_to_irq(gsi, IOAPIC_MAP_ALLOC, &info);
+	/* Don't set up the ACPI SCI because it's already set up */
+	if (irq >= 0 && enable_update_mptable &&
+	    acpi_gbl_FADT.sci_interrupt != gsi)
+		mp_config_acpi_gsi(dev, gsi, trigger, polarity);
	mutex_unlock(&acpi_ioapic_lock);
#endif

@@ -676,8 +643,12 @@ static int acpi_register_gsi_ioapic(struct device *dev, u32 gsi,
static void acpi_unregister_gsi_ioapic(u32 gsi)
{
#ifdef CONFIG_X86_IO_APIC
+	int irq;
+
	mutex_lock(&acpi_ioapic_lock);
-	mp_unregister_gsi(gsi);
+	irq = mp_map_gsi_to_irq(gsi, 0, NULL);
+	if (irq > 0)
+		mp_unmap_irq(irq);
	mutex_unlock(&acpi_ioapic_lock);
#endif
}
@@ -786,7 +757,7 @@ int acpi_register_ioapic(acpi_handle handle, u64 phys_addr, u32 gsi_base)
	u64 addr;
	struct ioapic_domain_cfg cfg = {
		.type = IOAPIC_DOMAIN_DYNAMIC,
-		.ops = &acpi_irqdomain_ops,
+		.ops = &mp_ioapic_irqdomain_ops,
	};

	ioapic_id = acpi_get_ioapic_id(handle, gsi_base, &addr);
...
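A hedged sketch of the new allocation path, mirroring acpi_register_gsi_ioapic() above — the pin attributes travel in an irq_alloc_info instead of a global GSI attribute table ('dev' and 'gsi' are placeholders):

	struct irq_alloc_info info;
	int node = dev ? dev_to_node(dev) : NUMA_NO_NODE;
	int irq;

	/* 1 = level trigger, 1 = active low, per the conversion above */
	ioapic_set_alloc_attr(&info, node, 1, 1);
	irq = mp_map_gsi_to_irq(gsi, IOAPIC_MAP_ALLOC, &info);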
@@ -62,7 +62,7 @@ ENTRY(do_suspend_lowlevel)
	pushfq
	popq	pt_regs_flags(%rax)

-	movq	$resume_point, saved_rip(%rip)
+	movq	$.Lresume_point, saved_rip(%rip)

	movq	%rsp, saved_rsp
	movq	%rbp, saved_rbp
@@ -75,10 +75,10 @@ ENTRY(do_suspend_lowlevel)
	xorl	%eax, %eax
	call	x86_acpi_enter_sleep_state
	/* in case something went wrong, restore the machine status and go on */
-	jmp	resume_point
+	jmp	.Lresume_point

	.align 4
-resume_point:
+.Lresume_point:
	/* We don't restore %rax, it must be 0 anyway */
	movq	$saved_context, %rax
	movq	saved_context_cr4(%rax), %rbx
...
@@ -227,6 +227,15 @@ void __init arch_init_ideal_nops(void)
#endif
		}
		break;
+
+	case X86_VENDOR_AMD:
+		if (boot_cpu_data.x86 > 0xf) {
+			ideal_nops = p6_nops;
+			return;
+		}
+
+		/* fall through */
+
	default:
#ifdef CONFIG_X86_64
		ideal_nops = k8_nops;
...
@@ -171,10 +171,6 @@ static int __init apbt_clockevent_register(void)

static void apbt_setup_irq(struct apbt_dev *adev)
{
-	/* timer0 irq has been setup early */
-	if (adev->irq == 0)
-		return;
-
	irq_modify_status(adev->irq, 0, IRQ_MOVE_PCNTXT);
	irq_set_affinity(adev->irq, cpumask_of(adev->cpu));
}
...
@@ -3,6 +3,8 @@
 *
 * Copyright (C) 1997, 1998, 1999, 2000, 2009 Ingo Molnar, Hajnalka Szabo
 *	Moved from arch/x86/kernel/apic/io_apic.c.
+ * Jiang Liu <jiang.liu@linux.intel.com>
+ *	Add support of hierarchical irqdomain
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
@@ -14,78 +16,112 @@
#include <linux/device.h>
#include <linux/pci.h>
#include <linux/htirq.h>
+#include <asm/irqdomain.h>
#include <asm/hw_irq.h>
#include <asm/apic.h>
#include <asm/hypertransport.h>

+static struct irq_domain *htirq_domain;
+
/*
 * Hypertransport interrupt support
 */
-static void target_ht_irq(unsigned int irq, unsigned int dest, u8 vector)
-{
-	struct ht_irq_msg msg;
-
-	fetch_ht_irq_msg(irq, &msg);
-
-	msg.address_lo &= ~(HT_IRQ_LOW_VECTOR_MASK | HT_IRQ_LOW_DEST_ID_MASK);
-	msg.address_hi &= ~(HT_IRQ_HIGH_DEST_ID_MASK);
-
-	msg.address_lo |= HT_IRQ_LOW_VECTOR(vector) | HT_IRQ_LOW_DEST_ID(dest);
-	msg.address_hi |= HT_IRQ_HIGH_DEST_ID(dest);
-
-	write_ht_irq_msg(irq, &msg);
-}
static int
ht_set_affinity(struct irq_data *data, const struct cpumask *mask, bool force)
{
-	struct irq_cfg *cfg = irqd_cfg(data);
-	unsigned int dest;
+	struct irq_data *parent = data->parent_data;
	int ret;

-	ret = apic_set_affinity(data, mask, &dest);
-	if (ret)
-		return ret;
-
-	target_ht_irq(data->irq, dest, cfg->vector);
-	return IRQ_SET_MASK_OK_NOCOPY;
+	ret = parent->chip->irq_set_affinity(parent, mask, force);
+	if (ret >= 0) {
+		struct ht_irq_msg msg;
+		struct irq_cfg *cfg = irqd_cfg(data);
+
+		fetch_ht_irq_msg(data->irq, &msg);
+		msg.address_lo &= ~(HT_IRQ_LOW_VECTOR_MASK |
+				    HT_IRQ_LOW_DEST_ID_MASK);
+		msg.address_lo |= HT_IRQ_LOW_VECTOR(cfg->vector) |
+				  HT_IRQ_LOW_DEST_ID(cfg->dest_apicid);
+		msg.address_hi &= ~(HT_IRQ_HIGH_DEST_ID_MASK);
+		msg.address_hi |= HT_IRQ_HIGH_DEST_ID(cfg->dest_apicid);
+		write_ht_irq_msg(data->irq, &msg);
+	}
+
+	return ret;
}

static struct irq_chip ht_irq_chip = {
	.name			= "PCI-HT",
	.irq_mask		= mask_ht_irq,
	.irq_unmask		= unmask_ht_irq,
-	.irq_ack		= apic_ack_edge,
+	.irq_ack		= irq_chip_ack_parent,
	.irq_set_affinity	= ht_set_affinity,
-	.irq_retrigger		= apic_retrigger_irq,
+	.irq_retrigger		= irq_chip_retrigger_hierarchy,
	.flags			= IRQCHIP_SKIP_SET_WAKE,
};
-int arch_setup_ht_irq(unsigned int irq, struct pci_dev *dev)
+static int htirq_domain_alloc(struct irq_domain *domain, unsigned int virq,
+			      unsigned int nr_irqs, void *arg)
{
-	struct irq_cfg *cfg;
-	struct ht_irq_msg msg;
-	unsigned dest;
-	int err;
+	struct ht_irq_cfg *ht_cfg;
+	struct irq_alloc_info *info = arg;
+	struct pci_dev *dev;
+	irq_hw_number_t hwirq;
+	int ret;

-	if (disable_apic)
-		return -ENXIO;
+	if (nr_irqs > 1 || !info)
+		return -EINVAL;

-	cfg = irq_cfg(irq);
-	err = assign_irq_vector(irq, cfg, apic->target_cpus());
-	if (err)
-		return err;
+	dev = info->ht_dev;
+	hwirq = (info->ht_idx & 0xFF) |
+		PCI_DEVID(dev->bus->number, dev->devfn) << 8 |
+		(pci_domain_nr(dev->bus) & 0xFFFFFFFF) << 24;
+	if (irq_find_mapping(domain, hwirq) > 0)
+		return -EEXIST;

-	err = apic->cpu_mask_to_apicid_and(cfg->domain,
-					   apic->target_cpus(), &dest);
-	if (err)
-		return err;
+	ht_cfg = kmalloc(sizeof(*ht_cfg), GFP_KERNEL);
+	if (!ht_cfg)
+		return -ENOMEM;

-	msg.address_hi = HT_IRQ_HIGH_DEST_ID(dest);
+	ret = irq_domain_alloc_irqs_parent(domain, virq, nr_irqs, info);
+	if (ret < 0) {
+		kfree(ht_cfg);
+		return ret;
+	}
+
+	/* Initialize msg to a value that will never match the first write. */
+	ht_cfg->msg.address_lo = 0xffffffff;
+	ht_cfg->msg.address_hi = 0xffffffff;
+	ht_cfg->dev = info->ht_dev;
+	ht_cfg->update = info->ht_update;
+	ht_cfg->pos = info->ht_pos;
+	ht_cfg->idx = 0x10 + (info->ht_idx * 2);
+	irq_domain_set_info(domain, virq, hwirq, &ht_irq_chip, ht_cfg,
+			    handle_edge_irq, ht_cfg, "edge");
+
+	return 0;
+}
+
+static void htirq_domain_free(struct irq_domain *domain, unsigned int virq,
+			      unsigned int nr_irqs)
+{
+	struct irq_data *irq_data = irq_domain_get_irq_data(domain, virq);
+
+	BUG_ON(nr_irqs != 1);
+	kfree(irq_data->chip_data);
+	irq_domain_free_irqs_top(domain, virq, nr_irqs);
+}
+
+static void htirq_domain_activate(struct irq_domain *domain,
+				  struct irq_data *irq_data)
+{
+	struct ht_irq_msg msg;
+	struct irq_cfg *cfg = irqd_cfg(irq_data);

+	msg.address_hi = HT_IRQ_HIGH_DEST_ID(cfg->dest_apicid);
	msg.address_lo =
		HT_IRQ_LOW_BASE |
-		HT_IRQ_LOW_DEST_ID(dest) |
+		HT_IRQ_LOW_DEST_ID(cfg->dest_apicid) |
		HT_IRQ_LOW_VECTOR(cfg->vector) |
		((apic->irq_dest_mode == 0) ?
			HT_IRQ_LOW_DM_PHYSICAL :
@@ -95,13 +131,56 @@ int arch_setup_ht_irq(unsigned int irq, struct pci_dev *dev)
			HT_IRQ_LOW_MT_FIXED :
			HT_IRQ_LOW_MT_ARBITRATED) |
		HT_IRQ_LOW_IRQ_MASKED;
+	write_ht_irq_msg(irq_data->irq, &msg);
+}

-	write_ht_irq_msg(irq, &msg);
+static void htirq_domain_deactivate(struct irq_domain *domain,
+				    struct irq_data *irq_data)
+{
+	struct ht_irq_msg msg;

-	irq_set_chip_and_handler_name(irq, &ht_irq_chip,
-				      handle_edge_irq, "edge");
+	memset(&msg, 0, sizeof(msg));
+	write_ht_irq_msg(irq_data->irq, &msg);
+}

-	dev_dbg(&dev->dev, "irq %d for HT\n", irq);
+static const struct irq_domain_ops htirq_domain_ops = {
+	.alloc		= htirq_domain_alloc,
+	.free		= htirq_domain_free,
+	.activate	= htirq_domain_activate,
+	.deactivate	= htirq_domain_deactivate,
+};

-	return 0;
+void arch_init_htirq_domain(struct irq_domain *parent)
+{
+	if (disable_apic)
+		return;
+
+	htirq_domain = irq_domain_add_tree(NULL, &htirq_domain_ops, NULL);
+	if (!htirq_domain)
+		pr_warn("failed to initialize irqdomain for HTIRQ.\n");
+	else
+		htirq_domain->parent = parent;
+}
+
+int arch_setup_ht_irq(int idx, int pos, struct pci_dev *dev,
+		      ht_irq_update_t *update)
+{
+	struct irq_alloc_info info;
+
+	if (!htirq_domain)
+		return -ENOSYS;
+
+	init_irq_alloc_info(&info, NULL);
+	info.ht_idx = idx;
+	info.ht_pos = pos;
+	info.ht_dev = dev;
+	info.ht_update = update;
+
+	return irq_domain_alloc_irqs(htirq_domain, 1, dev_to_node(&dev->dev),
+				     &info);
+}
+
+void arch_teardown_ht_irq(unsigned int irq)
+{
+	irq_domain_free_irqs(irq, 1);
}
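A worked example (values hypothetical, not from the commit) of how htirq_domain_alloc() packs a unique hwirq for HT transaction index 1 on PCI bus 0x40, devfn 0x08, domain 0:

	/*
	 * hwirq = (1 & 0xFF)                    -> 0x000001
	 *       | PCI_DEVID(0x40, 0x08) << 8    -> 0x400800   (0x4008 << 8)
	 *       | (0 & 0xFFFFFFFF) << 24        -> 0x000000
	 *       = 0x400801
	 *
	 * so each (domain, device, HT index) triple maps to a distinct
	 * hardware interrupt number within the HTIRQ irqdomain.
	 */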
@@ -21,11 +21,13 @@ early_param("x2apic_phys", set_x2apic_phys_mode);

static bool x2apic_fadt_phys(void)
{
+#ifdef CONFIG_ACPI
	if ((acpi_gbl_FADT.header.revision >= FADT2_REVISION_ID) &&
	    (acpi_gbl_FADT.flags & ACPI_FADT_APIC_PHYSICAL)) {
		printk(KERN_DEBUG "System requires x2apic physical mode\n");
		return true;
	}
+#endif
	return false;
}
...
@@ -41,6 +41,25 @@ void common(void) {
	OFFSET(pbe_orig_address, pbe, orig_address);
	OFFSET(pbe_next, pbe, next);

+#if defined(CONFIG_X86_32) || defined(CONFIG_IA32_EMULATION)
+	BLANK();
+	OFFSET(IA32_SIGCONTEXT_ax, sigcontext_ia32, ax);
+	OFFSET(IA32_SIGCONTEXT_bx, sigcontext_ia32, bx);
+	OFFSET(IA32_SIGCONTEXT_cx, sigcontext_ia32, cx);
+	OFFSET(IA32_SIGCONTEXT_dx, sigcontext_ia32, dx);
+	OFFSET(IA32_SIGCONTEXT_si, sigcontext_ia32, si);
+	OFFSET(IA32_SIGCONTEXT_di, sigcontext_ia32, di);
+	OFFSET(IA32_SIGCONTEXT_bp, sigcontext_ia32, bp);
+	OFFSET(IA32_SIGCONTEXT_sp, sigcontext_ia32, sp);
+	OFFSET(IA32_SIGCONTEXT_ip, sigcontext_ia32, ip);
+
+	BLANK();
+	OFFSET(TI_sysenter_return, thread_info, sysenter_return);
+
+	BLANK();
+	OFFSET(IA32_RT_SIGFRAME_sigcontext, rt_sigframe_ia32, uc.uc_mcontext);
+#endif
+
#ifdef CONFIG_PARAVIRT
	BLANK();
	OFFSET(PARAVIRT_enabled, pv_info, paravirt_enabled);
@@ -49,7 +68,9 @@ void common(void) {
	OFFSET(PV_IRQ_irq_disable, pv_irq_ops, irq_disable);
	OFFSET(PV_IRQ_irq_enable, pv_irq_ops, irq_enable);
	OFFSET(PV_CPU_iret, pv_cpu_ops, iret);
+#ifdef CONFIG_X86_32
	OFFSET(PV_CPU_irq_enable_sysexit, pv_cpu_ops, irq_enable_sysexit);
+#endif
	OFFSET(PV_CPU_read_cr0, pv_cpu_ops, read_cr0);
	OFFSET(PV_MMU_read_cr2, pv_mmu_ops, read_cr2);
#endif
...
@@ -17,17 +17,6 @@ void foo(void);

void foo(void)
{
-	OFFSET(IA32_SIGCONTEXT_ax, sigcontext, ax);
-	OFFSET(IA32_SIGCONTEXT_bx, sigcontext, bx);
-	OFFSET(IA32_SIGCONTEXT_cx, sigcontext, cx);
-	OFFSET(IA32_SIGCONTEXT_dx, sigcontext, dx);
-	OFFSET(IA32_SIGCONTEXT_si, sigcontext, si);
-	OFFSET(IA32_SIGCONTEXT_di, sigcontext, di);
-	OFFSET(IA32_SIGCONTEXT_bp, sigcontext, bp);
-	OFFSET(IA32_SIGCONTEXT_sp, sigcontext, sp);
-	OFFSET(IA32_SIGCONTEXT_ip, sigcontext, ip);
-	BLANK();
-
	OFFSET(CPUINFO_x86, cpuinfo_x86, x86);
	OFFSET(CPUINFO_x86_vendor, cpuinfo_x86, x86_vendor);
	OFFSET(CPUINFO_x86_model, cpuinfo_x86, x86_model);
@@ -37,10 +26,6 @@ void foo(void)
	OFFSET(CPUINFO_x86_vendor_id, cpuinfo_x86, x86_vendor_id);
	BLANK();

-	OFFSET(TI_sysenter_return, thread_info, sysenter_return);
-	OFFSET(TI_cpu, thread_info, cpu);
-	BLANK();
-
	OFFSET(PT_EBX, pt_regs, bx);
	OFFSET(PT_ECX, pt_regs, cx);
	OFFSET(PT_EDX, pt_regs, dx);
@@ -60,9 +45,6 @@ void foo(void)
	OFFSET(PT_OLDSS, pt_regs, ss);
	BLANK();

-	OFFSET(IA32_RT_SIGFRAME_sigcontext, rt_sigframe, uc.uc_mcontext);
-	BLANK();
-
	OFFSET(saved_context_gdt_desc, saved_context, gdt_desc);
	BLANK();
...
@@ -29,27 +29,6 @@ int main(void)
	BLANK();
#endif

-#ifdef CONFIG_IA32_EMULATION
-	OFFSET(TI_sysenter_return, thread_info, sysenter_return);
-	BLANK();
-
-#define ENTRY(entry) OFFSET(IA32_SIGCONTEXT_ ## entry, sigcontext_ia32, entry)
-	ENTRY(ax);
-	ENTRY(bx);
-	ENTRY(cx);
-	ENTRY(dx);
-	ENTRY(si);
-	ENTRY(di);
-	ENTRY(bp);
-	ENTRY(sp);
-	ENTRY(ip);
-	BLANK();
-#undef ENTRY
-
-	OFFSET(IA32_RT_SIGFRAME_sigcontext, rt_sigframe_ia32, uc.uc_mcontext);
-	BLANK();
-#endif
-
#define ENTRY(entry) OFFSET(pt_regs_ ## entry, pt_regs, entry)
	ENTRY(bx);
	ENTRY(cx);
...
@@ -1155,10 +1155,6 @@ static __init int setup_disablecpuid(char *arg)
}
__setup("clearcpuid=", setup_disablecpuid);

-DEFINE_PER_CPU(unsigned long, kernel_stack) =
-	(unsigned long)&init_thread_union + THREAD_SIZE;
-EXPORT_PER_CPU_SYMBOL(kernel_stack);
-
#ifdef CONFIG_X86_64
struct desc_ptr idt_descr = { NR_VECTORS * 16 - 1, (unsigned long) idt_table };
struct desc_ptr debug_idt_descr = { NR_VECTORS * 16 - 1,
...
@@ -39,14 +39,12 @@ void hyperv_vector_handler(struct pt_regs *regs)
{
	struct pt_regs *old_regs = set_irq_regs(regs);

-	irq_enter();
-	exit_idle();
-
+	entering_irq();
	inc_irq_stat(irq_hv_callback_count);
	if (vmbus_handler)
		vmbus_handler();

-	irq_exit();
+	exiting_irq();
	set_irq_regs(old_regs);
}
...
@@ -98,7 +98,8 @@ x86_get_mtrr_mem_range(struct range *range, int nr_range,
			continue;
		base = range_state[i].base_pfn;
		if (base < (1<<(20-PAGE_SHIFT)) && mtrr_state.have_fixed &&
-		    (mtrr_state.enabled & 1)) {
+		    (mtrr_state.enabled & MTRR_STATE_MTRR_ENABLED) &&
+		    (mtrr_state.enabled & MTRR_STATE_MTRR_FIXED_ENABLED)) {
			/* Var MTRR contains UC entry below 1M? Skip it: */
			printk(BIOS_BUG_MSG, i);
			if (base + size <= (1<<(20-PAGE_SHIFT)))
...
@@ -102,59 +102,76 @@ static int check_type_overlap(u8 *prev, u8 *curr)
	return 0;
}

-/*
- * Error/Semi-error returns:
- * 0xFF - when MTRR is not enabled
- * *repeat == 1 implies [start:end] spanned across MTRR range and type returned
- *	corresponds only to [start:*partial_end].  Caller has
- *	to lookup again for [*partial_end:end].
+/**
+ * mtrr_type_lookup_fixed - look up memory type in MTRR fixed entries
+ *
+ * Return the MTRR fixed memory type of 'start'.
+ *
+ * MTRR fixed entries are divided as follows:
+ *  0x00000 - 0x7FFFF : This range is divided into eight 64KB sub-ranges
+ *  0x80000 - 0xBFFFF : This range is divided into sixteen 16KB sub-ranges
+ *  0xC0000 - 0xFFFFF : This range is divided into sixty-four 4KB sub-ranges
+ *
+ * Return Values:
+ * MTRR_TYPE_(type)  - Matched memory type
+ * MTRR_TYPE_INVALID - Unmatched
 */
-static u8 __mtrr_type_lookup(u64 start, u64 end, u64 *partial_end, int *repeat)
+static u8 mtrr_type_lookup_fixed(u64 start, u64 end)
{
-	int i;
-	u64 base, mask;
-	u8 prev_match, curr_match;
-
-	*repeat = 0;
-	if (!mtrr_state_set)
-		return 0xFF;
-	if (!mtrr_state.enabled)
-		return 0xFF;
-
-	/* Make end inclusive end, instead of exclusive */
-	end--;
-
-	/* Look in fixed ranges. Just return the type as per start */
-	if (mtrr_state.have_fixed && (start < 0x100000)) {
-		int idx;
+	int idx;
+
+	if (start >= 0x100000)
+		return MTRR_TYPE_INVALID;

-		if (start < 0x80000) {
-			idx = 0;
-			idx += (start >> 16);
-			return mtrr_state.fixed_ranges[idx];
-		} else if (start < 0xC0000) {
-			idx = 1 * 8;
-			idx += ((start - 0x80000) >> 14);
-			return mtrr_state.fixed_ranges[idx];
-		} else if (start < 0x1000000) {
-			idx = 3 * 8;
-			idx += ((start - 0xC0000) >> 12);
-			return mtrr_state.fixed_ranges[idx];
-		}
+	/* 0x0 - 0x7FFFF */
+	if (start < 0x80000) {
+		idx = 0;
+		idx += (start >> 16);
+		return mtrr_state.fixed_ranges[idx];
+	/* 0x80000 - 0xBFFFF */
+	} else if (start < 0xC0000) {
+		idx = 1 * 8;
+		idx += ((start - 0x80000) >> 14);
+		return mtrr_state.fixed_ranges[idx];
	}
+
+	/* 0xC0000 - 0xFFFFF */
+	idx = 3 * 8;
+	idx += ((start - 0xC0000) >> 12);
+	return mtrr_state.fixed_ranges[idx];
+}
-/*
- * Look in variable ranges
- * Look of multiple ranges matching this address and pick type
- * as per MTRR precedence
+/**
+ * mtrr_type_lookup_variable - look up memory type in MTRR variable entries
+ *
+ * Return Value:
+ * MTRR_TYPE_(type) - Matched memory type or default memory type (unmatched)
+ *
+ * Output Arguments:
+ * repeat - Set to 1 when [start:end] spanned across MTRR range and type
+ *	    returned corresponds only to [start:*partial_end].  Caller has
+ *	    to lookup again for [*partial_end:end].
+ *
+ * uniform - Set to 1 when an MTRR covers the region uniformly, i.e. the
+ *	     region is fully covered by a single MTRR entry or the default
+ *	     type.
 */
-	if (!(mtrr_state.enabled & 2))
-		return mtrr_state.def_type;
+static u8 mtrr_type_lookup_variable(u64 start, u64 end, u64 *partial_end,
+				    int *repeat, u8 *uniform)
+{
+	int i;
+	u64 base, mask;
+	u8 prev_match, curr_match;

-	prev_match = 0xFF;
+	*repeat = 0;
+	*uniform = 1;
+
+	/* Make end inclusive instead of exclusive */
+	end--;
+
+	prev_match = MTRR_TYPE_INVALID;
	for (i = 0; i < num_var_ranges; ++i) {
-		unsigned short start_state, end_state;
+		unsigned short start_state, end_state, inclusive;

		if (!(mtrr_state.var_ranges[i].mask_lo & (1 << 11)))
			continue;
@@ -166,20 +183,29 @@ static u8 __mtrr_type_lookup(u64 start, u64 end, u64 *partial_end, int *repeat)
		start_state = ((start & mask) == (base & mask));
		end_state = ((end & mask) == (base & mask));
+		inclusive = ((start < base) && (end > base));

-		if (start_state != end_state) {
+		if ((start_state != end_state) || inclusive) {
			/*
			 * We have start:end spanning across an MTRR.
-			 * We split the region into
-			 * either
-			 * (start:mtrr_end) (mtrr_end:end)
-			 * or
-			 * (start:mtrr_start) (mtrr_start:end)
+			 * We split the region into either
+			 *
+			 * - start_state:1
+			 *     (start:mtrr_end)(mtrr_end:end)
+			 * - end_state:1
+			 *     (start:mtrr_start)(mtrr_start:end)
+			 * - inclusive:1
+			 *     (start:mtrr_start)(mtrr_start:mtrr_end)(mtrr_end:end)
+			 *
			 * depending on kind of overlap.
-			 * Return the type for first region and a pointer to
-			 * the start of second region so that caller will
-			 * lookup again on the second region.
-			 * Note: This way we handle multiple overlaps as well.
+			 *
+			 * Return the type of the first region and a pointer
+			 * to the start of next region so that caller will be
+			 * advised to lookup again after having adjusted start
+			 * and end.
+			 *
+			 * Note: This way we handle overlaps with multiple
+			 * entries and the default type properly.
			 */
			if (start_state)
				*partial_end = base + get_mtrr_size(mask);
@@ -193,59 +219,94 @@ static u8 __mtrr_type_lookup(u64 start, u64 end, u64 *partial_end, int *repeat)
				end = *partial_end - 1; /* end is inclusive */
			*repeat = 1;
+			*uniform = 0;
		}

		if ((start & mask) != (base & mask))
			continue;

		curr_match = mtrr_state.var_ranges[i].base_lo & 0xff;
-		if (prev_match == 0xFF) {
+		if (prev_match == MTRR_TYPE_INVALID) {
			prev_match = curr_match;
			continue;
		}

+		*uniform = 0;
		if (check_type_overlap(&prev_match, &curr_match))
			return curr_match;
	}

-	if (mtrr_tom2) {
-		if (start >= (1ULL<<32) && (end < mtrr_tom2))
-			return MTRR_TYPE_WRBACK;
-	}
-
-	if (prev_match != 0xFF)
+	if (prev_match != MTRR_TYPE_INVALID)
		return prev_match;

	return mtrr_state.def_type;
}
-/*
- * Returns the effective MTRR type for the region
- * Error return:
- * 0xFF - when MTRR is not enabled
+/**
+ * mtrr_type_lookup - look up memory type in MTRR
+ *
+ * Return Values:
+ * MTRR_TYPE_(type)  - The effective MTRR type for the region
+ * MTRR_TYPE_INVALID - MTRR is disabled
+ *
+ * Output Argument:
+ * uniform - Set to 1 when an MTRR covers the region uniformly, i.e. the
+ *	     region is fully covered by a single MTRR entry or the default
+ *	     type.
 */
-u8 mtrr_type_lookup(u64 start, u64 end)
+u8 mtrr_type_lookup(u64 start, u64 end, u8 *uniform)
{
-	u8 type, prev_type;
+	u8 type, prev_type, is_uniform = 1, dummy;
	int repeat;
	u64 partial_end;

-	type = __mtrr_type_lookup(start, end, &partial_end, &repeat);
+	if (!mtrr_state_set)
+		return MTRR_TYPE_INVALID;
+
+	if (!(mtrr_state.enabled & MTRR_STATE_MTRR_ENABLED))
+		return MTRR_TYPE_INVALID;
+
+	/*
+	 * Look up the fixed ranges first, which take priority over
+	 * the variable ranges.
+	 */
+	if ((start < 0x100000) &&
+	    (mtrr_state.have_fixed) &&
+	    (mtrr_state.enabled & MTRR_STATE_MTRR_FIXED_ENABLED)) {
+		is_uniform = 0;
+		type = mtrr_type_lookup_fixed(start, end);
+		goto out;
+	}
+
+	/*
+	 * Look up the variable ranges.  Look for multiple ranges matching
+	 * this address and pick type as per MTRR precedence.
+	 */
+	type = mtrr_type_lookup_variable(start, end, &partial_end,
+					 &repeat, &is_uniform);

	/*
	 * Common path is with repeat = 0.
	 * However, we can have cases where [start:end] spans across some
-	 * MTRR range. Do repeated lookups for that case here.
+	 * MTRR ranges and/or the default type.  Do repeated lookups for
+	 * that case here.
	 */
	while (repeat) {
		prev_type = type;
		start = partial_end;
-		type = __mtrr_type_lookup(start, end, &partial_end, &repeat);
+		is_uniform = 0;
+		type = mtrr_type_lookup_variable(start, end, &partial_end,
+						 &repeat, &dummy);

		if (check_type_overlap(&prev_type, &type))
-			return type;
+			goto out;
	}

+	if (mtrr_tom2 && (start >= (1ULL<<32)) && (end < mtrr_tom2))
+		type = MTRR_TYPE_WRBACK;
+
+out:
+	*uniform = is_uniform;
	return type;
}
@@ -347,7 +408,9 @@ static void __init print_mtrr_state(void)
		 mtrr_attrib_to_str(mtrr_state.def_type));
	if (mtrr_state.have_fixed) {
		pr_debug("MTRR fixed ranges %sabled:\n",
-			 mtrr_state.enabled & 1 ? "en" : "dis");
+			((mtrr_state.enabled & MTRR_STATE_MTRR_ENABLED) &&
+			 (mtrr_state.enabled & MTRR_STATE_MTRR_FIXED_ENABLED)) ?
+			 "en" : "dis");
		print_fixed(0x00000, 0x10000, mtrr_state.fixed_ranges + 0);
		for (i = 0; i < 2; ++i)
			print_fixed(0x80000 + i * 0x20000, 0x04000,
@@ -360,7 +423,7 @@ static void __init print_mtrr_state(void)
		print_fixed_last();
	}
	pr_debug("MTRR variable ranges %sabled:\n",
-		 mtrr_state.enabled & 2 ? "en" : "dis");
+		 mtrr_state.enabled & MTRR_STATE_MTRR_ENABLED ? "en" : "dis");
	high_width = (__ffs64(size_or_mask) - (32 - PAGE_SHIFT) + 3) / 4;

	for (i = 0; i < num_var_ranges; ++i) {
@@ -382,7 +445,7 @@ static void __init print_mtrr_state(void)
}

/* Grab all of the MTRR state for this CPU into *state */
-void __init get_mtrr_state(void)
+bool __init get_mtrr_state(void)
{
	struct mtrr_var_range *vrs;
	unsigned long flags;
@@ -426,6 +489,8 @@ void __init get_mtrr_state(void)

	post_set();
	local_irq_restore(flags);
+
+	return !!(mtrr_state.enabled & MTRR_STATE_MTRR_ENABLED);
}

/* Some BIOS's are messed up and don't set all MTRRs the same! */
...
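A worked example (not from the commit) of the fixed-range indexing above: where does 0xA0000, the legacy VGA window, land in mtrr_state.fixed_ranges[]?

	/*
	 * 0xA0000 falls in 0x80000 - 0xBFFFF, the sixteen 16KB sub-ranges:
	 *
	 *   idx = 1 * 8 + ((0xA0000 - 0x80000) >> 14)
	 *       = 8 + (0x20000 >> 14)
	 *       = 8 + 8 = 16
	 *
	 * so mtrr_type_lookup_fixed(0xA0000, ...) returns fixed_ranges[16].
	 */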
@@ -59,6 +59,12 @@
#define MTRR_TO_PHYS_WC_OFFSET 1000

u32 num_var_ranges;
+static bool __mtrr_enabled;
+
+static bool mtrr_enabled(void)
+{
+	return __mtrr_enabled;
+}

unsigned int mtrr_usage_table[MTRR_MAX_VAR_RANGES];
static DEFINE_MUTEX(mtrr_mutex);
@@ -286,7 +292,7 @@ int mtrr_add_page(unsigned long base, unsigned long size,
	int i, replace, error;
	mtrr_type ltype;

-	if (!mtrr_if)
+	if (!mtrr_enabled())
		return -ENXIO;

	error = mtrr_if->validate_add_page(base, size, type);
@@ -435,6 +441,8 @@ static int mtrr_check(unsigned long base, unsigned long size)
int mtrr_add(unsigned long base, unsigned long size, unsigned int type,
	     bool increment)
{
+	if (!mtrr_enabled())
+		return -ENODEV;
	if (mtrr_check(base, size))
		return -EINVAL;
	return mtrr_add_page(base >> PAGE_SHIFT, size >> PAGE_SHIFT, type,
@@ -463,8 +471,8 @@ int mtrr_del_page(int reg, unsigned long base, unsigned long size)
	unsigned long lbase, lsize;
	int error = -EINVAL;

-	if (!mtrr_if)
-		return -ENXIO;
+	if (!mtrr_enabled())
+		return -ENODEV;

	max = num_var_ranges;
	/* No CPU hotplug when we change MTRR entries */
@@ -523,6 +531,8 @@ int mtrr_del_page(int reg, unsigned long base, unsigned long size)
 */
int mtrr_del(int reg, unsigned long base, unsigned long size)
{
+	if (!mtrr_enabled())
+		return -ENODEV;
	if (mtrr_check(base, size))
		return -EINVAL;
	return mtrr_del_page(reg, base >> PAGE_SHIFT, size >> PAGE_SHIFT);
@@ -538,6 +548,9 @@ EXPORT_SYMBOL(mtrr_del);
 * attempts to add a WC MTRR covering size bytes starting at base and
 * logs an error if this fails.
 *
+ * The caller should provide a power of two size on an equivalent
+ * power of two boundary.
+ *
 * Drivers must store the return value to pass to mtrr_del_wc_if_needed,
 * but drivers should not try to interpret that return value.
 */
@@ -545,7 +558,7 @@ int arch_phys_wc_add(unsigned long base, unsigned long size)
{
	int ret;

-	if (pat_enabled)
+	if (pat_enabled() || !mtrr_enabled())
		return 0;  /* Success!  (We don't need to do anything.) */

	ret = mtrr_add(base, size, MTRR_TYPE_WRCOMB, true);
@@ -577,7 +590,7 @@ void arch_phys_wc_del(int handle)
EXPORT_SYMBOL(arch_phys_wc_del);

/*
- * phys_wc_to_mtrr_index - translates arch_phys_wc_add's return value
+ * arch_phys_wc_index - translates arch_phys_wc_add's return value
 * @handle: Return value from arch_phys_wc_add
 *
 * This will turn the return value from arch_phys_wc_add into an mtrr
@@ -587,14 +600,14 @@ EXPORT_SYMBOL(arch_phys_wc_del);
 * in printk line.  Alas there is an illegitimate use in some ancient
 * drm ioctls.
 */
-int phys_wc_to_mtrr_index(int handle)
+int arch_phys_wc_index(int handle)
{
	if (handle < MTRR_TO_PHYS_WC_OFFSET)
		return -1;
	else
		return handle - MTRR_TO_PHYS_WC_OFFSET;
}
-EXPORT_SYMBOL_GPL(phys_wc_to_mtrr_index);
+EXPORT_SYMBOL_GPL(arch_phys_wc_index);

/*
 * HACK ALERT!
@@ -734,10 +747,12 @@ void __init mtrr_bp_init(void)
	}

	if (mtrr_if) {
+		__mtrr_enabled = true;
		set_num_var_ranges();
		init_table();
		if (use_intel()) {
-			get_mtrr_state();
+			/* BIOS may override */
+			__mtrr_enabled = get_mtrr_state();

			if (mtrr_cleanup(phys_addr)) {
				changed_by_mtrr_cleanup = 1;
@@ -745,10 +760,16 @@ void __init mtrr_bp_init(void)
			}
		}
	}
+
+	if (!mtrr_enabled())
+		pr_info("MTRR: Disabled\n");
}

void mtrr_ap_init(void)
{
+	if (!mtrr_enabled())
+		return;
+
	if (!use_intel() || mtrr_aps_delayed_init)
		return;
	/*
@@ -774,6 +795,9 @@ void mtrr_save_state(void)
{
	int first_cpu;

+	if (!mtrr_enabled())
+		return;
+
	get_online_cpus();
	first_cpu = cpumask_first(cpu_online_mask);
	smp_call_function_single(first_cpu, mtrr_save_fixed_ranges, NULL, 1);
@@ -782,6 +806,8 @@ void mtrr_save_state(void)

void set_mtrr_aps_delayed_init(void)
{
+	if (!mtrr_enabled())
+		return;
	if (!use_intel())
		return;

@@ -793,7 +819,7 @@ void set_mtrr_aps_delayed_init(void)
 */
void mtrr_aps_init(void)
{
-	if (!use_intel())
+	if (!use_intel() || !mtrr_enabled())
		return;

	/*
@@ -810,7 +836,7 @@ void mtrr_aps_init(void)
void mtrr_bp_restore(void)
{
-	if (!use_intel())
+	if (!use_intel() || !mtrr_enabled())
		return;

	mtrr_if->set_all();
@@ -818,7 +844,7 @@ void mtrr_bp_restore(void)
static int __init mtrr_init_finialize(void)
{
-	if (!mtrr_if)
+	if (!mtrr_enabled())
		return 0;

	if (use_intel()) {
...
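A hedged sketch of the driver-side pattern the comments above describe — store the cookie from arch_phys_wc_add(), never interpret it, and hand it back on teardown ('fb_base' and 'fb_size' are hypothetical MMIO aperture values):

	int wc_cookie;

	/* No-op (returns 0) on PAT systems or when MTRRs are disabled */
	wc_cookie = arch_phys_wc_add(fb_base, fb_size);

	/* ... use the aperture through an ioremap_wc() mapping ... */

	arch_phys_wc_del(wc_cookie);	/* safe for any stored cookie */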
@@ -51,7 +51,7 @@ void set_mtrr_prepare_save(struct set_mtrr_context *ctxt);

void fill_mtrr_var_range(unsigned int index,
		u32 base_lo, u32 base_hi, u32 mask_lo, u32 mask_hi);
-void get_mtrr_state(void);
+bool get_mtrr_state(void);

extern void set_mtrr_ops(const struct mtrr_ops *ops);
...
@@ -4,7 +4,6 @@
#include <linux/bootmem.h>
#include <linux/export.h>
#include <linux/io.h>
-#include <linux/irqdomain.h>
#include <linux/interrupt.h>
#include <linux/list.h>
#include <linux/of.h>
@@ -17,6 +16,7 @@
#include <linux/of_pci.h>
#include <linux/initrd.h>

+#include <asm/irqdomain.h>
#include <asm/hpet.h>
#include <asm/apic.h>
#include <asm/pci_x86.h>
@@ -196,38 +196,31 @@ static struct of_ioapic_type of_ioapic_type[] =
	},
};

-static int ioapic_xlate(struct irq_domain *domain,
-			struct device_node *controller,
-			const u32 *intspec, u32 intsize,
-			irq_hw_number_t *out_hwirq, u32 *out_type)
+static int dt_irqdomain_alloc(struct irq_domain *domain, unsigned int virq,
+			      unsigned int nr_irqs, void *arg)
{
+	struct of_phandle_args *irq_data = (void *)arg;
	struct of_ioapic_type *it;
-	u32 line, idx, gsi;
+	struct irq_alloc_info tmp;

-	if (WARN_ON(intsize < 2))
+	if (WARN_ON(irq_data->args_count < 2))
		return -EINVAL;
-
-	line = intspec[0];
-
-	if (intspec[1] >= ARRAY_SIZE(of_ioapic_type))
+	if (irq_data->args[1] >= ARRAY_SIZE(of_ioapic_type))
		return -EINVAL;

-	it = &of_ioapic_type[intspec[1]];
+	it = &of_ioapic_type[irq_data->args[1]];
+	ioapic_set_alloc_attr(&tmp, NUMA_NO_NODE, it->trigger, it->polarity);
+	tmp.ioapic_id = mpc_ioapic_id(mp_irqdomain_ioapic_idx(domain));
+	tmp.ioapic_pin = irq_data->args[0];

-	idx = (u32)(long)domain->host_data;
-	gsi = mp_pin_to_gsi(idx, line);
-	if (mp_set_gsi_attr(gsi, it->trigger, it->polarity, cpu_to_node(0)))
-		return -EBUSY;
-
-	*out_hwirq = line;
-	*out_type = it->out_type;
-	return 0;
+	return mp_irqdomain_alloc(domain, virq, nr_irqs, &tmp);
}

-const struct irq_domain_ops ioapic_irq_domain_ops = {
-	.map = mp_irqdomain_map,
-	.unmap = mp_irqdomain_unmap,
-	.xlate = ioapic_xlate,
+static const struct irq_domain_ops ioapic_irq_domain_ops = {
+	.alloc		= dt_irqdomain_alloc,
+	.free		= mp_irqdomain_free,
+	.activate	= mp_irqdomain_activate,
+	.deactivate	= mp_irqdomain_deactivate,
};

static void __init dtb_add_ioapic(struct device_node *dn)
...
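For context, a minimal sketch of what dt_irqdomain_alloc() receives when a device tree says "interrupts = <4 1>" — cell 0 is the IO-APIC pin, cell 1 indexes of_ioapic_type for trigger/polarity (the values are illustrative):

	struct of_phandle_args irq_data = {
		.args_count = 2,
		.args	    = { 4, 1 },	/* args[0] = pin, args[1] = type index */
	};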
@@ -21,12 +21,6 @@
#include <asm/apic.h>

-DEFINE_PER_CPU_SHARED_ALIGNED(irq_cpustat_t, irq_stat);
-EXPORT_PER_CPU_SYMBOL(irq_stat);
-
-DEFINE_PER_CPU(struct pt_regs *, irq_regs);
-EXPORT_PER_CPU_SYMBOL(irq_regs);
-
#ifdef CONFIG_DEBUG_STACKOVERFLOW
int sysctl_panic_on_stackoverflow __read_mostly;
...