Commit 9e9abecf authored by Linus Torvalds

Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/x86/linux-2.6-x86

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/x86/linux-2.6-x86: (613 commits)
  x86: standalone trampoline code
  x86: move suspend wakeup code to C
  x86: coding style fixes to arch/x86/kernel/acpi/sleep.c
  x86: setup_trampoline() - fix section mismatch warning
  x86: section mismatch fixes, #1
  x86: fix paranoia about using BIOS quickboot mechanism.
  x86: print out buggy mptable
  x86: use cpu_online()
  x86: use cpumask_of_cpu()
  x86: remove unnecessary tmp local variable
  x86: remove unnecessary memset()
  x86: use ioapic_read_entry() and ioapic_write_entry()
  x86: avoid redundant loop in io_apic_level_ack_pending()
  x86: remove superfluous initialisation in boot code.
  x86: merge mpparse_{32,64}.c
  x86: unify mp_register_gsi
  x86: unify mp_config_acpi_legacy_irqs
  x86: unify mp_register_ioapic
  x86: unify uniq_io_apic_id
  x86: unify smp_scan_config
  ...
parents d7bb545d 77ad386e
@@ -212,7 +212,7 @@ Who: Stephen Hemminger <shemminger@linux-foundation.org>
---------------------------
What:	i386/x86_64 bzImage symlinks
-When:	April 2008
+When:	April 2010
Why:	The i386/x86_64 merge provides a symlink to the old bzImage
	location so not yet updated user space tools, e.g. package
......
@@ -170,6 +170,8 @@ Offset	Proto	Name		Meaning
0238/4	2.06+	cmdline_size	Maximum size of the kernel command line
023C/4	2.07+	hardware_subarch	Hardware subarchitecture
0240/8	2.07+	hardware_subarch_data	Subarchitecture-specific data
0248/4 2.08+ payload_offset Offset of kernel payload
024C/4 2.08+ payload_length Length of kernel payload
(1) For backwards compatibility, if the setup_sects field contains 0, the
    real value is 4.
@@ -512,6 +514,32 @@ Protocol: 2.07+
  A pointer to data that is specific to hardware subarch
Field name: payload_offset
Type: read
Offset/size: 0x248/4
Protocol: 2.08+
If non-zero then this field contains the offset from the end of the
real-mode code to the payload.
The payload may be compressed. The format of both the compressed and
uncompressed data should be determined using the standard magic
numbers. Currently only gzip compressed ELF is used.
Field name: payload_length
Type: read
Offset/size: 0x24c/4
Protocol: 2.08+
The length of the payload.
**** THE IMAGE CHECKSUM
From boot protocol version 2.08 onwards the CRC-32 is calculated over
the entire file using the characteristic polynomial 0x04C11DB7 and an
initial remainder of 0xffffffff. The checksum is appended to the
file; therefore the CRC of the file up to the limit specified in the
syssize field of the header is always 0.
**** THE KERNEL COMMAND LINE
......
@@ -812,6 +812,19 @@ and is between 256 and 4096 characters. It is defined in the file
	inttest=	[IA64]
iommu= [x86]
off
force
noforce
biomerge
panic
nopanic
merge
nomerge
forcesac
soft
	intel_iommu=	[DMAR] Intel IOMMU driver (DMAR) option
		off
			Disable intel iommu driver.
@@ -1134,6 +1147,11 @@ and is between 256 and 4096 characters. It is defined in the file
		or
		memmap=0x10000$0x18690000
memtest= [KNL,X86_64] Enable memtest
Format: <integer>
range: 0,4 : pattern number
default : 0 <disable>
	meye.*=		[HW] Set MotionEye Camera parameters
			See Documentation/video4linux/meye.txt.
......
PAT (Page Attribute Table)
The x86 Page Attribute Table (PAT) allows setting memory attributes at
page-level granularity. PAT is complementary to the MTRR settings, which
allow setting memory types over physical address ranges. However, PAT is
more flexible than MTRR due to its capability to set attributes at the page
level and also due to the fact that there is no hardware limitation on the
number of such attribute settings allowed. The added flexibility comes with
guidelines for avoiding memory type aliasing of the same physical memory
through multiple virtual addresses.
PAT allows for different types of memory attributes. The most commonly used
ones that will be supported at this time are Write-back, Uncached,
Write-combined and Uncached Minus.
There are many different APIs in the kernel that allow setting memory
attributes at the page level. In order to avoid aliasing, these interfaces
should be used thoughtfully. Below is a table of the available interfaces,
their intended usage and their memory attribute relationships. Internally,
these APIs use a reserve_memtype()/free_memtype() interface on the physical
address range to avoid any aliasing.
-------------------------------------------------------------------
API | RAM | ACPI,... | Reserved/Holes |
-----------------------|----------|------------|------------------|
| | | |
ioremap | -- | UC | UC |
| | | |
ioremap_cache | -- | WB | WB |
| | | |
ioremap_nocache | -- | UC | UC |
| | | |
ioremap_wc | -- | -- | WC |
| | | |
set_memory_uc | UC | -- | -- |
set_memory_wb | | | |
| | | |
set_memory_wc | WC | -- | -- |
set_memory_wb | | | |
| | | |
pci sysfs resource | -- | -- | UC |
| | | |
pci sysfs resource_wc | -- | -- | WC |
is IORESOURCE_PREFETCH| | | |
| | | |
pci proc | -- | -- | UC |
!PCIIOC_WRITE_COMBINE | | | |
| | | |
pci proc | -- | -- | WC |
PCIIOC_WRITE_COMBINE | | | |
| | | |
/dev/mem | -- | UC | UC |
read-write | | | |
| | | |
/dev/mem | -- | UC | UC |
mmap SYNC flag | | | |
| | | |
/dev/mem | -- | WB/WC/UC | WB/WC/UC |
mmap !SYNC flag | |(from exist-| (from exist- |
and | | ing alias)| ing alias) |
any alias to this area| | | |
| | | |
/dev/mem | -- | WB | WB |
mmap !SYNC flag | | | |
no alias to this area | | | |
and | | | |
MTRR says WB | | | |
| | | |
/dev/mem | -- | -- | UC_MINUS |
mmap !SYNC flag | | | |
no alias to this area | | | |
and | | | |
MTRR says !WB | | | |
| | | |
-------------------------------------------------------------------
Notes:

-- in the above table means "not a suggested usage for the API". Some of
the --'s are strictly enforced by the kernel. Others are not really
enforced today, but may be enforced in the future.
For ioremap and PCI access through /sys or /proc: the actual type returned
can be more restrictive if there is existing aliasing for that address.
For example, if there is an existing uncached mapping, a new ioremap_wc can
return an uncached mapping in place of the requested write-combine.
set_memory_[uc|wc] and set_memory_wb should be used in pairs: the driver
first makes a region uc or wc and switches it back to wb after use.
Over time, writes to /proc/mtrr will be deprecated in favor of PAT-based
interfaces. Users writing to /proc/mtrr are encouraged to use the
interfaces above.
Drivers should use ioremap_[uc|wc] to access PCI BARs with [uc|wc] access
types.
Drivers should use set_memory_[uc|wc] to set access type for RAM ranges.
@@ -307,3 +307,8 @@ Debugging
		stuck (default)

Miscellaneous
nogbpages
Do not use GB pages for kernel direct mappings.
gbpages
Use GB pages for kernel direct mappings.
@@ -114,7 +114,7 @@ config ARCH_HAS_CPU_RELAX
	def_bool y

config HAVE_SETUP_PER_CPU_AREA
-	def_bool X86_64
+	def_bool X86_64 || (X86_SMP && !X86_VOYAGER)

config ARCH_HIBERNATION_POSSIBLE
	def_bool y
@@ -168,7 +168,7 @@ config X86_64_SMP
config X86_HT
	bool
	depends on SMP
-	depends on (X86_32 && !(X86_VISWS || X86_VOYAGER)) || (X86_64 && !MK8)
+	depends on (X86_32 && !(X86_VISWS || X86_VOYAGER)) || X86_64
	default y
config X86_BIOS_REBOOT
@@ -178,7 +178,7 @@ config X86_BIOS_REBOOT
config X86_TRAMPOLINE
	bool
-	depends on X86_SMP || (X86_VOYAGER && SMP)
+	depends on X86_SMP || (X86_VOYAGER && SMP) || (64BIT && ACPI_SLEEP)
	default y

config KTIME_SCALAR
@@ -238,8 +238,7 @@ config X86_ELAN
config X86_VOYAGER
	bool "Voyager (NCR)"
-	depends on X86_32
-	select SMP if !BROKEN
+	depends on X86_32 && (SMP || BROKEN)
	help
	  Voyager is an MCA-based 32-way capable SMP architecture proprietary
	  to NCR Corp. Machine classes 345x/35xx/4100/51xx are Voyager-based.
@@ -251,9 +250,8 @@ config X86_VOYAGER
config X86_NUMAQ
	bool "NUMAQ (IBM/Sequent)"
-	select SMP
+	depends on SMP && X86_32
	select NUMA
-	depends on X86_32
	help
	  This option is used for getting Linux to run on a (IBM/Sequent) NUMA
	  multiquad box. This changes the way that processors are bootstrapped,
@@ -324,8 +322,9 @@ config X86_RDC321X
config X86_VSMP
	bool "Support for ScaleMP vSMP"
-	depends on X86_64 && PCI
+	select PARAVIRT
+	depends on X86_64
	help
	  Support for ScaleMP vSMP systems. Say 'Y' here if this kernel is
	  supposed to run on these EM64T-based machines. Only choose this option
	  if you have one of these machines.
@@ -380,6 +379,35 @@ config PARAVIRT
endif
config MEMTEST_BOOTPARAM
bool "Memtest boot parameter"
depends on X86_64
default y
help
This option adds a kernel parameter 'memtest', which allows memtest
to be disabled at boot. If this option is selected, memtest
functionality can be disabled with memtest=0 on the kernel
command line. The purpose of this option is to allow a single
kernel image to be distributed with memtest built in, but not
necessarily enabled.
If you are unsure how to answer this question, answer Y.
config MEMTEST_BOOTPARAM_VALUE
int "Memtest boot parameter default value (0-4)"
depends on MEMTEST_BOOTPARAM
range 0 4
default 0
help
This option sets the default value for the kernel parameter
'memtest', which allows memtest to be disabled at boot. If this
option is set to 0 (zero), the memtest kernel parameter will
default to 0, disabling memtest at bootup. If this option is
set to 4, the memtest kernel parameter will default to 4,
enabling memtest at bootup, and use that as pattern number.
If you are unsure how to answer this question, answer 0.
config ACPI_SRAT
	def_bool y
	depends on X86_32 && ACPI && NUMA && (X86_SUMMIT || X86_GENERICARCH)
@@ -504,7 +532,7 @@ config NR_CPUS
config SCHED_SMT
	bool "SMT (Hyperthreading) scheduler support"
-	depends on (X86_64 && SMP) || (X86_32 && X86_HT)
+	depends on X86_HT
	help
	  SMT scheduler support improves the CPU scheduler's decision making
	  when dealing with Intel Pentium 4 chips with HyperThreading at a
@@ -514,7 +542,7 @@ config SCHED_SMT
config SCHED_MC
	def_bool y
	prompt "Multi-core scheduler support"
-	depends on (X86_64 && SMP) || (X86_32 && X86_HT)
+	depends on X86_HT
	help
	  Multi-core scheduler support improves the CPU scheduler's decision
	  making when dealing with multi-core CPU chips at a cost of slightly
@@ -883,7 +911,7 @@ config NUMA_EMU
	  number of nodes. This is only useful for debugging.

config NODES_SHIFT
-	int
+	int "Max num nodes shift(1-15)"
	range 1 15 if X86_64
	default "6" if X86_64
	default "4" if X86_NUMAQ
@@ -1007,6 +1035,21 @@ config MTRR
	  See <file:Documentation/mtrr.txt> for more information.
config X86_PAT
def_bool y
prompt "x86 PAT support"
depends on MTRR && NONPROMISC_DEVMEM
help
Use PAT attributes to setup page level cache control.
PATs are the modern equivalents of MTRRs and are much more
flexible than MTRRs.
Say N here if you see bootup problems (boot crash, boot hang,
spontaneous reboots) or a non-working video driver.
If unsure, say Y.
config EFI
	def_bool n
	prompt "EFI runtime service support"
@@ -1075,6 +1118,7 @@ source kernel/Kconfig.hz
config KEXEC
	bool "kexec system call"
	depends on X86_64 || X86_BIOS_REBOOT
	help
	  kexec is a system call that implements the ability to shutdown your
	  current kernel, and to start another kernel. It is like a reboot
@@ -1376,7 +1420,7 @@ endmenu
menu "Bus options (PCI etc.)"

config PCI
-	bool "PCI support" if !X86_VISWS
+	bool "PCI support" if !X86_VISWS && !X86_VSMP
	depends on !X86_VOYAGER
	default y
	select ARCH_SUPPORTS_MSI if (X86_LOCAL_APIC && X86_IO_APIC)
......
@@ -388,7 +388,7 @@ config X86_OOSTORE
#
config X86_P6_NOP
	def_bool y
-	depends on (X86_64 || !X86_GENERIC) && (M686 || MPENTIUMII || MPENTIUMIII || MPENTIUMM || MCORE2 || MPENTIUM4)
+	depends on (X86_64 || !X86_GENERIC) && (M686 || MPENTIUMII || MPENTIUMIII || MPENTIUMM || MCORE2 || MPENTIUM4 || MPSC)

config X86_TSC
	def_bool y
......
@@ -54,6 +54,18 @@ config DEBUG_PER_CPU_MAPS
	  Say N if unsure.
config X86_PTDUMP
bool "Export kernel pagetable layout to userspace via debugfs"
depends on DEBUG_KERNEL
select DEBUG_FS
help
Say Y here if you want to show the kernel pagetable layout in a
debugfs file. This information is only useful for kernel developers
who are working in architecture specific areas of the kernel.
It is probably not a good idea to enable this feature in a production
kernel.
If in doubt, say "N"
config DEBUG_RODATA
	bool "Write protect kernel read-only data structures"
	default y
@@ -64,6 +76,18 @@ config DEBUG_RODATA
	  data. This is recommended so that we can catch kernel bugs sooner.
	  If in doubt, say "Y".
config DIRECT_GBPAGES
bool "Enable gbpages-mapped kernel pagetables"
depends on DEBUG_KERNEL && EXPERIMENTAL && X86_64
help
Enable gigabyte pages support (if the CPU supports it). This can
improve the kernel's performance a tiny bit by reducing TLB
pressure.
This is experimental code.
If in doubt, say "N".
config DEBUG_RODATA_TEST
	bool "Testcase for the DEBUG_RODATA feature"
	depends on DEBUG_RODATA
@@ -82,8 +106,8 @@ config DEBUG_NX_TEST
config 4KSTACKS
	bool "Use 4Kb for kernel stacks instead of 8Kb"
-	depends on DEBUG_KERNEL
	depends on X86_32
+	default y
	help
	  If you say Y here the kernel will use a 4Kb stacksize for the
	  kernel stack attached to each process/thread. This facilitates
......
@@ -151,7 +151,6 @@ mflags-y += -Iinclude/asm-x86/mach-default
# 64 bit does not support subarch support - clear sub arch variables
fcore-$(CONFIG_X86_64) :=
mcore-$(CONFIG_X86_64) :=
-mflags-$(CONFIG_X86_64) :=

KBUILD_CFLAGS += $(mflags-y)
KBUILD_AFLAGS += $(mflags-y)
@@ -159,9 +158,9 @@ KBUILD_AFLAGS += $(mflags-y)
###
# Kernel objects

head-y := arch/x86/kernel/head_$(BITS).o
-head-$(CONFIG_X86_64) += arch/x86/kernel/head64.o
+head-y += arch/x86/kernel/head$(BITS).o
head-y += arch/x86/kernel/init_task.o

libs-y += arch/x86/lib/
......
@@ -30,7 +30,7 @@ subdir- := compressed
setup-y += a20.o cmdline.o copy.o cpu.o cpucheck.o edd.o
setup-y += header.o main.o mca.o memory.o pm.o pmjump.o
-setup-y += printf.o string.o tty.o video.o version.o
+setup-y += printf.o string.o tty.o video.o video-mode.o version.o
setup-$(CONFIG_X86_APM_BOOT) += apm.o
setup-$(CONFIG_X86_VOYAGER) += voyager.o
@@ -94,6 +94,20 @@ $(obj)/vmlinux.bin: $(obj)/compressed/vmlinux FORCE
SETUP_OBJS = $(addprefix $(obj)/,$(setup-y))
sed-offsets := -e 's/^00*/0/' \
-e 's/^\([0-9a-fA-F]*\) . \(input_data\|input_data_end\)$$/\#define \2 0x\1/p'
quiet_cmd_offsets = OFFSETS $@
cmd_offsets = $(NM) $< | sed -n $(sed-offsets) > $@
$(obj)/offsets.h: $(obj)/compressed/vmlinux FORCE
$(call if_changed,offsets)
targets += offsets.h
AFLAGS_header.o += -I$(obj)
$(obj)/header.o: $(obj)/offsets.h
LDFLAGS_setup.elf := -T
$(obj)/setup.elf: $(src)/setup.ld $(SETUP_OBJS) FORCE
	$(call if_changed,ld)
......
@@ -286,6 +286,11 @@ int getchar_timeout(void);
/* video.c */
void set_video(void);
/* video-mode.c */
int set_mode(u16 mode);
int mode_defined(u16 mode);
void probe_cards(int unsafe);
/* video-vesa.c */
void vesa_store_edid(void);
......
@@ -22,7 +22,7 @@ $(obj)/vmlinux: $(src)/vmlinux_$(BITS).lds $(obj)/head_$(BITS).o $(obj)/misc.o $
	$(call if_changed,ld)
	@:

-OBJCOPYFLAGS_vmlinux.bin := -O binary -R .note -R .comment -S
+OBJCOPYFLAGS_vmlinux.bin := -R .comment -S
$(obj)/vmlinux.bin: vmlinux FORCE
	$(call if_changed,objcopy)
......
@@ -56,27 +56,27 @@ static const u32 req_flags[NCAPINTS] =
	REQUIRED_MASK7,
};
-#define A32(a,b,c,d) (((d) << 24)+((c) << 16)+((b) << 8)+(a))
+#define A32(a, b, c, d) (((d) << 24)+((c) << 16)+((b) << 8)+(a))

static int is_amd(void)
{
-	return cpu_vendor[0] == A32('A','u','t','h') &&
-	       cpu_vendor[1] == A32('e','n','t','i') &&
-	       cpu_vendor[2] == A32('c','A','M','D');
+	return cpu_vendor[0] == A32('A', 'u', 't', 'h') &&
+	       cpu_vendor[1] == A32('e', 'n', 't', 'i') &&
+	       cpu_vendor[2] == A32('c', 'A', 'M', 'D');
}

static int is_centaur(void)
{
-	return cpu_vendor[0] == A32('C','e','n','t') &&
-	       cpu_vendor[1] == A32('a','u','r','H') &&
-	       cpu_vendor[2] == A32('a','u','l','s');
+	return cpu_vendor[0] == A32('C', 'e', 'n', 't') &&
+	       cpu_vendor[1] == A32('a', 'u', 'r', 'H') &&
+	       cpu_vendor[2] == A32('a', 'u', 'l', 's');
}

static int is_transmeta(void)
{
-	return cpu_vendor[0] == A32('G','e','n','u') &&
-	       cpu_vendor[1] == A32('i','n','e','T') &&
-	       cpu_vendor[2] == A32('M','x','8','6');
+	return cpu_vendor[0] == A32('G', 'e', 'n', 'u') &&
+	       cpu_vendor[1] == A32('i', 'n', 'e', 'T') &&
+	       cpu_vendor[2] == A32('M', 'x', '8', '6');
}

static int has_fpu(void)
......
@@ -22,6 +22,7 @@
#include <asm/page.h>
#include <asm/setup.h>
#include "boot.h"
#include "offsets.h"

SETUPSECTS	= 4			/* default nr of setup-sectors */
BOOTSEG		= 0x07C0		/* original address of boot-sector */
@@ -119,7 +120,7 @@ _start:
	# Part 2 of the header, from the old setup.S
		.ascii	"HdrS"		# header signature
-		.word	0x0207		# header version number (>= 0x0105)
+		.word	0x0208		# header version number (>= 0x0105)
					# or else old loadlin-1.5 will fail)
		.globl realmode_swtch
realmode_swtch:	.word	0, 0		# default_switch, SETUPSEG
@@ -223,6 +224,9 @@ hardware_subarch:	.long 0			# subarchitecture, added with 2.07
hardware_subarch_data:	.quad 0
payload_offset: .long input_data
payload_length: .long input_data_end-input_data
# End of setup header #####################################################

	.section ".inittext", "ax"
......
@@ -100,7 +100,7 @@ static void reset_coprocessor(void)
/*
 * Set up the GDT
 */
-#define GDT_ENTRY(flags,base,limit)		\
+#define GDT_ENTRY(flags, base, limit)		\
	(((u64)(base & 0xff000000) << 32) |	\
	 ((u64)flags << 40) |			\
	 ((u64)(limit & 0x00ff0000) << 32) |	\
......
@@ -50,6 +50,75 @@ typedef unsigned long u32;
u8 buf[SETUP_SECT_MAX*512];
int is_big_kernel;
/*----------------------------------------------------------------------*/
static const u32 crctab32[] = {
0x00000000, 0x77073096, 0xee0e612c, 0x990951ba, 0x076dc419,
0x706af48f, 0xe963a535, 0x9e6495a3, 0x0edb8832, 0x79dcb8a4,
0xe0d5e91e, 0x97d2d988, 0x09b64c2b, 0x7eb17cbd, 0xe7b82d07,
0x90bf1d91, 0x1db71064, 0x6ab020f2, 0xf3b97148, 0x84be41de,
0x1adad47d, 0x6ddde4eb, 0xf4d4b551, 0x83d385c7, 0x136c9856,
0x646ba8c0, 0xfd62f97a, 0x8a65c9ec, 0x14015c4f, 0x63066cd9,
0xfa0f3d63, 0x8d080df5, 0x3b6e20c8, 0x4c69105e, 0xd56041e4,
0xa2677172, 0x3c03e4d1, 0x4b04d447, 0xd20d85fd, 0xa50ab56b,
0x35b5a8fa, 0x42b2986c, 0xdbbbc9d6, 0xacbcf940, 0x32d86ce3,
0x45df5c75, 0xdcd60dcf, 0xabd13d59, 0x26d930ac, 0x51de003a,
0xc8d75180, 0xbfd06116, 0x21b4f4b5, 0x56b3c423, 0xcfba9599,
0xb8bda50f, 0x2802b89e, 0x5f058808, 0xc60cd9b2, 0xb10be924,
0x2f6f7c87, 0x58684c11, 0xc1611dab, 0xb6662d3d, 0x76dc4190,
0x01db7106, 0x98d220bc, 0xefd5102a, 0x71b18589, 0x06b6b51f,
0x9fbfe4a5, 0xe8b8d433, 0x7807c9a2, 0x0f00f934, 0x9609a88e,
0xe10e9818, 0x7f6a0dbb, 0x086d3d2d, 0x91646c97, 0xe6635c01,
0x6b6b51f4, 0x1c6c6162, 0x856530d8, 0xf262004e, 0x6c0695ed,
0x1b01a57b, 0x8208f4c1, 0xf50fc457, 0x65b0d9c6, 0x12b7e950,
0x8bbeb8ea, 0xfcb9887c, 0x62dd1ddf, 0x15da2d49, 0x8cd37cf3,
0xfbd44c65, 0x4db26158, 0x3ab551ce, 0xa3bc0074, 0xd4bb30e2,
0x4adfa541, 0x3dd895d7, 0xa4d1c46d, 0xd3d6f4fb, 0x4369e96a,
0x346ed9fc, 0xad678846, 0xda60b8d0, 0x44042d73, 0x33031de5,
0xaa0a4c5f, 0xdd0d7cc9, 0x5005713c, 0x270241aa, 0xbe0b1010,
0xc90c2086, 0x5768b525, 0x206f85b3, 0xb966d409, 0xce61e49f,
0x5edef90e, 0x29d9c998, 0xb0d09822, 0xc7d7a8b4, 0x59b33d17,
0x2eb40d81, 0xb7bd5c3b, 0xc0ba6cad, 0xedb88320, 0x9abfb3b6,
0x03b6e20c, 0x74b1d29a, 0xead54739, 0x9dd277af, 0x04db2615,
0x73dc1683, 0xe3630b12, 0x94643b84, 0x0d6d6a3e, 0x7a6a5aa8,
0xe40ecf0b, 0x9309ff9d, 0x0a00ae27, 0x7d079eb1, 0xf00f9344,
0x8708a3d2, 0x1e01f268, 0x6906c2fe, 0xf762575d, 0x806567cb,
0x196c3671, 0x6e6b06e7, 0xfed41b76, 0x89d32be0, 0x10da7a5a,
0x67dd4acc, 0xf9b9df6f, 0x8ebeeff9, 0x17b7be43, 0x60b08ed5,
0xd6d6a3e8, 0xa1d1937e, 0x38d8c2c4, 0x4fdff252, 0xd1bb67f1,
0xa6bc5767, 0x3fb506dd, 0x48b2364b, 0xd80d2bda, 0xaf0a1b4c,
0x36034af6, 0x41047a60, 0xdf60efc3, 0xa867df55, 0x316e8eef,
0x4669be79, 0xcb61b38c, 0xbc66831a, 0x256fd2a0, 0x5268e236,
0xcc0c7795, 0xbb0b4703, 0x220216b9, 0x5505262f, 0xc5ba3bbe,
0xb2bd0b28, 0x2bb45a92, 0x5cb36a04, 0xc2d7ffa7, 0xb5d0cf31,
0x2cd99e8b, 0x5bdeae1d, 0x9b64c2b0, 0xec63f226, 0x756aa39c,
0x026d930a, 0x9c0906a9, 0xeb0e363f, 0x72076785, 0x05005713,
0x95bf4a82, 0xe2b87a14, 0x7bb12bae, 0x0cb61b38, 0x92d28e9b,
0xe5d5be0d, 0x7cdcefb7, 0x0bdbdf21, 0x86d3d2d4, 0xf1d4e242,
0x68ddb3f8, 0x1fda836e, 0x81be16cd, 0xf6b9265b, 0x6fb077e1,
0x18b74777, 0x88085ae6, 0xff0f6a70, 0x66063bca, 0x11010b5c,
0x8f659eff, 0xf862ae69, 0x616bffd3, 0x166ccf45, 0xa00ae278,
0xd70dd2ee, 0x4e048354, 0x3903b3c2, 0xa7672661, 0xd06016f7,
0x4969474d, 0x3e6e77db, 0xaed16a4a, 0xd9d65adc, 0x40df0b66,
0x37d83bf0, 0xa9bcae53, 0xdebb9ec5, 0x47b2cf7f, 0x30b5ffe9,
0xbdbdf21c, 0xcabac28a, 0x53b39330, 0x24b4a3a6, 0xbad03605,
0xcdd70693, 0x54de5729, 0x23d967bf, 0xb3667a2e, 0xc4614ab8,
0x5d681b02, 0x2a6f2b94, 0xb40bbe37, 0xc30c8ea1, 0x5a05df1b,
0x2d02ef8d
};
static u32 partial_crc32_one(u8 c, u32 crc)
{
return crctab32[(crc ^ c) & 0xff] ^ (crc >> 8);
}
static u32 partial_crc32(const u8 *s, int len, u32 crc)
{
while (len--)
crc = partial_crc32_one(*s++, crc);
return crc;
}
static void die(const char * str, ...)
{
	va_list args;
@@ -74,6 +143,7 @@ int main(int argc, char ** argv)
	FILE *file;
	int fd;
	void *kernel;
u32 crc = 0xffffffffUL;
	if (argc > 2 && !strcmp(argv[1], "-b"))
	{
@@ -144,7 +214,8 @@ int main(int argc, char ** argv)
	kernel = mmap(NULL, sz, PROT_READ, MAP_SHARED, fd, 0);
	if (kernel == MAP_FAILED)
		die("Unable to mmap '%s': %m", argv[2]);
-	sys_size = (sz + 15) / 16;
+	/* Number of 16-byte paragraphs, including space for a 4-byte CRC */
+	sys_size = (sz + 15 + 4) / 16;
	if (!is_big_kernel && sys_size > DEF_SYSSIZE)
		die("System is too big. Try using bzImage or modules.");
@@ -155,12 +226,27 @@ int main(int argc, char ** argv)
	buf[0x1f6] = sys_size >> 16;
	buf[0x1f7] = sys_size >> 24;
crc = partial_crc32(buf, i, crc);
	if (fwrite(buf, 1, i, stdout) != i)
		die("Writing setup failed");
	/* Copy the kernel code */
crc = partial_crc32(kernel, sz, crc);
	if (fwrite(kernel, 1, sz, stdout) != sz)
		die("Writing kernel failed");
/* Add padding leaving 4 bytes for the checksum */
while (sz++ < (sys_size*16) - 4) {
crc = partial_crc32_one('\0', crc);
if (fwrite("\0", 1, 1, stdout) != 1)
die("Writing padding failed");
}
/* Write the CRC */
fprintf(stderr, "CRC %lx\n", crc);
if (fwrite(&crc, 1, 4, stdout) != 4)
die("Writing CRC failed");
	close(fd);

	/* Everything is OK */
......
@@ -50,6 +50,7 @@ static int set_bios_mode(u8 mode)
	if (new_mode == mode)
		return 0;	/* Mode change OK */
#ifndef _WAKEUP
	if (new_mode != boot_params.screen_info.orig_video_mode) {
		/* Mode setting failed, but we didn't end up where we
		   started.  That's bad.  Try to revert to the original
@@ -59,13 +60,18 @@ static int set_bios_mode(u8 mode)
		: "+a" (ax)
		: : "ebx", "ecx", "edx", "esi", "edi");
	}
#endif
	return -1;
}
static int bios_probe(void)
{
	u8 mode;
#ifdef _WAKEUP
u8 saved_mode = 0x03;
#else
	u8 saved_mode = boot_params.screen_info.orig_video_mode;
#endif
	u16 crtc;
	struct mode_info *mi;
	int nmodes = 0;
......
/* -*- linux-c -*- ------------------------------------------------------- *
*
* Copyright (C) 1991, 1992 Linus Torvalds
* Copyright 2007-2008 rPath, Inc. - All Rights Reserved
*
* This file is part of the Linux kernel, and is made available under
* the terms of the GNU General Public License version 2.
*
* ----------------------------------------------------------------------- */
/*
* arch/i386/boot/video-mode.c
*
* Set the video mode. This is separated out into a different
* file in order to be shared with the ACPI wakeup code.
*/
#include "boot.h"
#include "video.h"
#include "vesa.h"
/*
* Common variables
*/
int adapter; /* 0=CGA/MDA/HGC, 1=EGA, 2=VGA+ */
u16 video_segment;
int force_x, force_y; /* Don't query the BIOS for cols/rows */
int do_restore; /* Screen contents changed during mode flip */
int graphic_mode; /* Graphic mode with linear frame buffer */
/* Probe the video drivers and have them generate their mode lists. */
void probe_cards(int unsafe)
{
struct card_info *card;
static u8 probed[2];
if (probed[unsafe])
return;
probed[unsafe] = 1;
for (card = video_cards; card < video_cards_end; card++) {
if (card->unsafe == unsafe) {
if (card->probe)
card->nmodes = card->probe();
else
card->nmodes = 0;
}
}
}
/* Test if a mode is defined */
int mode_defined(u16 mode)
{
struct card_info *card;
struct mode_info *mi;
int i;
for (card = video_cards; card < video_cards_end; card++) {
mi = card->modes;
for (i = 0; i < card->nmodes; i++, mi++) {
if (mi->mode == mode)
return 1;
}
}
return 0;
}
/* Set mode (without recalc) */
static int raw_set_mode(u16 mode, u16 *real_mode)
{
int nmode, i;
struct card_info *card;
struct mode_info *mi;
/* Drop the recalc bit if set */
mode &= ~VIDEO_RECALC;
/* Scan for mode based on fixed ID, position, or resolution */
nmode = 0;
for (card = video_cards; card < video_cards_end; card++) {
mi = card->modes;
for (i = 0; i < card->nmodes; i++, mi++) {
int visible = mi->x || mi->y;
if ((mode == nmode && visible) ||
mode == mi->mode ||
mode == (mi->y << 8)+mi->x) {
*real_mode = mi->mode;
return card->set_mode(mi);
}
if (visible)
nmode++;
}
}
/* Nothing found? Is it an "exceptional" (unprobed) mode? */
for (card = video_cards; card < video_cards_end; card++) {
if (mode >= card->xmode_first &&
mode < card->xmode_first+card->xmode_n) {
struct mode_info mix;
*real_mode = mix.mode = mode;
mix.x = mix.y = 0;
return card->set_mode(&mix);
}
}
/* Otherwise, failure... */
return -1;
}
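raw_set_mode() above accepts a mode request in three forms: a menu index counting only visible modes, a driver's fixed mode ID, or a resolution packed as `(y << 8) + x`. A standalone sketch of that packed encoding (the helper names here are illustrative, not kernel symbols):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustration of the packed-resolution mode number matched by
 * raw_set_mode(): text rows in the high byte, columns in the low byte.
 */
static uint16_t encode_resolution(uint8_t cols, uint8_t rows)
{
	return (uint16_t)((rows << 8) + cols);
}

static uint8_t resolution_cols(uint16_t mode)
{
	return (uint8_t)mode;		/* low byte = columns */
}

static uint8_t resolution_rows(uint16_t mode)
{
	return (uint8_t)(mode >> 8);	/* high byte = rows */
}
```

So a request for a standard 80x25 text screen encodes as 0x1950, which the scan loop compares against `(mi->y << 8) + mi->x` for each known mode.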
/*
* Recalculate the vertical video cutoff (hack!)
*/
static void vga_recalc_vertical(void)
{
unsigned int font_size, rows;
u16 crtc;
u8 pt, ov;
set_fs(0);
font_size = rdfs8(0x485); /* BIOS: font size (pixels) */
rows = force_y ? force_y : rdfs8(0x484)+1; /* Text rows */
rows *= font_size; /* Visible scan lines */
rows--; /* ... minus one */
crtc = vga_crtc();
pt = in_idx(crtc, 0x11);
pt &= ~0x80; /* Unlock CR0-7 */
out_idx(pt, crtc, 0x11);
out_idx((u8)rows, crtc, 0x12); /* Lower height register */
ov = in_idx(crtc, 0x07); /* Overflow register */
ov &= 0xbd;
ov |= (rows >> (8-1)) & 0x02;
ov |= (rows >> (9-6)) & 0x40;
out_idx(ov, crtc, 0x07);
}
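The bit shuffling in vga_recalc_vertical() exists because the VGA "vertical display end" value is 10 bits wide but the CRTC registers are 8 bits: bits 0-7 go in CR12, while bit 8 lands in bit 1 and bit 9 in bit 6 of the CR07 overflow register (hence the `0xbd` mask, which clears exactly those two bits). A standalone sketch of the overflow-register update (`merge_overflow` is an illustrative helper, not kernel code):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Recompute the CR07 overflow bits for a 10-bit vertical display end
 * value, mirroring the arithmetic in vga_recalc_vertical().
 */
static uint8_t merge_overflow(uint8_t ov, unsigned int rows)
{
	ov &= 0xbd;			/* clear bits 1 and 6 */
	ov |= (rows >> (8 - 1)) & 0x02;	/* value bit 8 -> ov bit 1 */
	ov |= (rows >> (9 - 6)) & 0x40;	/* value bit 9 -> ov bit 6 */
	return ov;
}
```

For a 400-scan-line screen (rows - 1 = 399) only value bit 8 is set, so only overflow bit 1 changes; a 600-line value would set overflow bit 6 instead.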
/* Set mode (with recalc if specified) */
int set_mode(u16 mode)
{
int rv;
u16 real_mode;
/* Very special mode numbers... */
if (mode == VIDEO_CURRENT_MODE)
return 0; /* Nothing to do... */
else if (mode == NORMAL_VGA)
mode = VIDEO_80x25;
else if (mode == EXTENDED_VGA)
mode = VIDEO_8POINT;
rv = raw_set_mode(mode, &real_mode);
if (rv)
return rv;
if (mode & VIDEO_RECALC)
vga_recalc_vertical();
/* Save the canonical mode number for the kernel, not
an alias, size specification or menu position */
#ifndef _WAKEUP
boot_params.hdr.vid_mode = real_mode;
#endif
return 0;
}
@@ -24,7 +24,11 @@ static struct vesa_mode_info vminfo;

__videocard video_vesa;

+#ifndef _WAKEUP
static void vesa_store_mode_params_graphics(void);
+#else /* _WAKEUP */
+static inline void vesa_store_mode_params_graphics(void) {}
+#endif /* _WAKEUP */

static int vesa_probe(void)
{
@@ -165,6 +169,8 @@ static int vesa_set_mode(struct mode_info *mode)
}

+#ifndef _WAKEUP
+
/* Switch DAC to 8-bit mode */
static void vesa_dac_set_8bits(void)
{
@@ -288,6 +294,8 @@ void vesa_store_edid(void)
#endif /* CONFIG_FIRMWARE_EDID */
}

+#endif /* not _WAKEUP */
+
__videocard video_vesa =
{
	.card_name = "VESA",
...
@@ -210,6 +210,8 @@ static int vga_set_mode(struct mode_info *mode)
 */
static int vga_probe(void)
{
+	u16 ega_bx;
	static const char *card_name[] = {
		"CGA/MDA/HGC", "EGA", "VGA"
	};
@@ -226,12 +228,16 @@ static int vga_probe(void)
	u8 vga_flag;

	asm(INT10
-	    : "=b" (boot_params.screen_info.orig_video_ega_bx)
+	    : "=b" (ega_bx)
	    : "a" (0x1200), "b" (0x10) /* Check EGA/VGA */
	    : "ecx", "edx", "esi", "edi");

+#ifndef _WAKEUP
+	boot_params.screen_info.orig_video_ega_bx = ega_bx;
+#endif
+
	/* If we have MDA/CGA/HGC then BL will be unchanged at 0x10 */
-	if ((u8)boot_params.screen_info.orig_video_ega_bx != 0x10) {
+	if ((u8)ega_bx != 0x10) {
		/* EGA/VGA */
		asm(INT10
		    : "=a" (vga_flag)
@@ -240,7 +246,9 @@ static int vga_probe(void)
		if (vga_flag == 0x1a) {
			adapter = ADAPTER_VGA;
+#ifndef _WAKEUP
			boot_params.screen_info.orig_video_isVGA = 1;
+#endif
		} else {
			adapter = ADAPTER_EGA;
		}
...
@@ -18,21 +18,6 @@
#include "video.h"
#include "vesa.h"
/*
* Mode list variables
*/
static struct card_info cards[]; /* List of cards to probe for */
/*
* Common variables
*/
int adapter; /* 0=CGA/MDA/HGC, 1=EGA, 2=VGA+ */
u16 video_segment;
int force_x, force_y; /* Don't query the BIOS for cols/rows */
int do_restore = 0; /* Screen contents changed during mode flip */
int graphic_mode; /* Graphic mode with linear frame buffer */
static void store_cursor_position(void)
{
	u16 curpos;
@@ -107,147 +92,6 @@ static void store_mode_params(void)
	boot_params.screen_info.orig_video_lines = y;
}
/* Probe the video drivers and have them generate their mode lists. */
static void probe_cards(int unsafe)
{
struct card_info *card;
static u8 probed[2];
if (probed[unsafe])
return;
probed[unsafe] = 1;
for (card = video_cards; card < video_cards_end; card++) {
if (card->unsafe == unsafe) {
if (card->probe)
card->nmodes = card->probe();
else
card->nmodes = 0;
}
}
}
/* Test if a mode is defined */
int mode_defined(u16 mode)
{
struct card_info *card;
struct mode_info *mi;
int i;
for (card = video_cards; card < video_cards_end; card++) {
mi = card->modes;
for (i = 0; i < card->nmodes; i++, mi++) {
if (mi->mode == mode)
return 1;
}
}
return 0;
}
/* Set mode (without recalc) */
static int raw_set_mode(u16 mode, u16 *real_mode)
{
int nmode, i;
struct card_info *card;
struct mode_info *mi;
/* Drop the recalc bit if set */
mode &= ~VIDEO_RECALC;
/* Scan for mode based on fixed ID, position, or resolution */
nmode = 0;
for (card = video_cards; card < video_cards_end; card++) {
mi = card->modes;
for (i = 0; i < card->nmodes; i++, mi++) {
int visible = mi->x || mi->y;
if ((mode == nmode && visible) ||
mode == mi->mode ||
mode == (mi->y << 8)+mi->x) {
*real_mode = mi->mode;
return card->set_mode(mi);
}
if (visible)
nmode++;
}
}
/* Nothing found? Is it an "exceptional" (unprobed) mode? */
for (card = video_cards; card < video_cards_end; card++) {
if (mode >= card->xmode_first &&
mode < card->xmode_first+card->xmode_n) {
struct mode_info mix;
*real_mode = mix.mode = mode;
mix.x = mix.y = 0;
return card->set_mode(&mix);
}
}
/* Otherwise, failure... */
return -1;
}
/*
* Recalculate the vertical video cutoff (hack!)
*/
static void vga_recalc_vertical(void)
{
unsigned int font_size, rows;
u16 crtc;
u8 pt, ov;
set_fs(0);
font_size = rdfs8(0x485); /* BIOS: font size (pixels) */
rows = force_y ? force_y : rdfs8(0x484)+1; /* Text rows */
rows *= font_size; /* Visible scan lines */
rows--; /* ... minus one */
crtc = vga_crtc();
pt = in_idx(crtc, 0x11);
pt &= ~0x80; /* Unlock CR0-7 */
out_idx(pt, crtc, 0x11);
out_idx((u8)rows, crtc, 0x12); /* Lower height register */
ov = in_idx(crtc, 0x07); /* Overflow register */
ov &= 0xbd;
ov |= (rows >> (8-1)) & 0x02;
ov |= (rows >> (9-6)) & 0x40;
out_idx(ov, crtc, 0x07);
}
/* Set mode (with recalc if specified) */
static int set_mode(u16 mode)
{
int rv;
u16 real_mode;
/* Very special mode numbers... */
if (mode == VIDEO_CURRENT_MODE)
return 0; /* Nothing to do... */
else if (mode == NORMAL_VGA)
mode = VIDEO_80x25;
else if (mode == EXTENDED_VGA)
mode = VIDEO_8POINT;
rv = raw_set_mode(mode, &real_mode);
if (rv)
return rv;
if (mode & VIDEO_RECALC)
vga_recalc_vertical();
/* Save the canonical mode number for the kernel, not
an alias, size specification or menu position */
boot_params.hdr.vid_mode = real_mode;
return 0;
}
static unsigned int get_entry(void)
{
	char entry_buf[4];
@@ -486,6 +330,7 @@ void set_video(void)
		printf("Undefined video mode number: %x\n", mode);
		mode = ASK_VGA;
	}
+	boot_params.hdr.vid_mode = mode;
	vesa_store_edid();
	store_mode_params();
...
@@ -468,7 +468,7 @@ int ia32_setup_frame(int sig, struct k_sigaction *ka,
			restorer = ka->sa.sa_restorer;
		} else {
			/* Return stub is in 32bit vsyscall page */
-			if (current->binfmt->hasvdso)
+			if (current->mm->context.vdso)
				restorer = VDSO32_SYMBOL(current->mm->context.vdso,
							 sigreturn);
			else
...
@@ -162,12 +162,14 @@ sysenter_tracesys:
	SAVE_REST
	CLEAR_RREGS
	movq	%r9,R9(%rsp)
-	movq	$-ENOSYS,RAX(%rsp)	/* really needed? */
+	movq	$-ENOSYS,RAX(%rsp)	/* ptrace can change this for a bad syscall */
	movq	%rsp,%rdi        /* &pt_regs -> arg1 */
	call	syscall_trace_enter
	LOAD_ARGS32 ARGOFFSET  /* reload args from stack in case ptrace changed it */
	RESTORE_REST
	xchgl	%ebp,%r9d
+	cmpl	$(IA32_NR_syscalls-1),%eax
+	ja	int_ret_from_sys_call /* sysenter_tracesys has set RAX(%rsp) */
	jmp	sysenter_do_call
	CFI_ENDPROC
ENDPROC(ia32_sysenter_target)
@@ -261,13 +263,15 @@ cstar_tracesys:
	SAVE_REST
	CLEAR_RREGS
	movq	%r9,R9(%rsp)
-	movq	$-ENOSYS,RAX(%rsp)	/* really needed? */
+	movq	$-ENOSYS,RAX(%rsp)	/* ptrace can change this for a bad syscall */
	movq	%rsp,%rdi        /* &pt_regs -> arg1 */
	call	syscall_trace_enter
	LOAD_ARGS32 ARGOFFSET  /* reload args from stack in case ptrace changed it */
	RESTORE_REST
	xchgl	%ebp,%r9d
	movl	RSP-ARGOFFSET(%rsp), %r8d
+	cmpl	$(IA32_NR_syscalls-1),%eax
+	ja	int_ret_from_sys_call /* cstar_tracesys has set RAX(%rsp) */
	jmp	cstar_do_call
END(ia32_cstar_target)
@@ -325,7 +329,7 @@ ENTRY(ia32_syscall)
	jnz ia32_tracesys
ia32_do_syscall:
	cmpl	$(IA32_NR_syscalls-1),%eax
-	ja	ia32_badsys
+	ja	int_ret_from_sys_call	/* ia32_tracesys has set RAX(%rsp) */
	IA32_ARG_FIXUP
	call *ia32_sys_call_table(,%rax,8) # xxx: rip relative
ia32_sysret:
@@ -335,7 +339,7 @@ ia32_sysret:
ia32_tracesys:
	SAVE_REST
	CLEAR_RREGS
-	movq	$-ENOSYS,RAX(%rsp)	/* really needed? */
+	movq	$-ENOSYS,RAX(%rsp)	/* ptrace can change this for a bad syscall */
	movq	%rsp,%rdi        /* &pt_regs -> arg1 */
	call	syscall_trace_enter
	LOAD_ARGS32 ARGOFFSET  /* reload args from stack in case ptrace changed it */
...
@@ -26,51 +26,27 @@
#include <linux/file.h>
#include <linux/signal.h>
#include <linux/syscalls.h>
-#include <linux/resource.h>
#include <linux/times.h>
#include <linux/utsname.h>
-#include <linux/smp.h>
#include <linux/smp_lock.h>
-#include <linux/sem.h>
-#include <linux/msg.h>
#include <linux/mm.h>
-#include <linux/shm.h>
-#include <linux/slab.h>
#include <linux/uio.h>
-#include <linux/nfs_fs.h>
-#include <linux/quota.h>
-#include <linux/module.h>
-#include <linux/sunrpc/svc.h>
-#include <linux/nfsd/nfsd.h>
-#include <linux/nfsd/cache.h>
-#include <linux/nfsd/xdr.h>
-#include <linux/nfsd/syscall.h>
#include <linux/poll.h>
#include <linux/personality.h>
#include <linux/stat.h>
-#include <linux/ipc.h>
#include <linux/rwsem.h>
-#include <linux/binfmts.h>
-#include <linux/init.h>
-#include <linux/aio_abi.h>
-#include <linux/aio.h>
#include <linux/compat.h>
#include <linux/vfs.h>
#include <linux/ptrace.h>
#include <linux/highuid.h>
-#include <linux/vmalloc.h>
-#include <linux/fsnotify.h>
#include <linux/sysctl.h>
#include <asm/mman.h>
#include <asm/types.h>
#include <asm/uaccess.h>
#include <asm/semaphore.h>
#include <asm/atomic.h>
-#include <asm/ldt.h>
-#include <net/scm.h>
-#include <net/sock.h>
#include <asm/ia32.h>
+#include <asm/vgtod.h>

#define AA(__x) ((unsigned long)(__x))
@@ -804,11 +780,6 @@ asmlinkage long sys32_execve(char __user *name, compat_uptr_t __user *argv,
	if (IS_ERR(filename))
		return error;
	error = compat_do_execve(filename, argv, envp, regs);
-	if (error == 0) {
-		task_lock(current);
-		current->ptrace &= ~PT_DTRACE;
-		task_unlock(current);
-	}
	putname(filename);
	return error;
}
...
@@ -2,8 +2,7 @@
# Makefile for the linux kernel.
#

-extra-y := head_$(BITS).o init_task.o vmlinux.lds
-extra-$(CONFIG_X86_64) += head64.o
+extra-y := head_$(BITS).o head$(BITS).o init_task.o vmlinux.lds

CPPFLAGS_vmlinux.lds += -U$(UTS_MACHINE)
@@ -19,7 +18,7 @@ CFLAGS_tsc_64.o := $(nostackp)
obj-y := process_$(BITS).o signal_$(BITS).o entry_$(BITS).o
obj-y += traps_$(BITS).o irq_$(BITS).o
obj-y += time_$(BITS).o ioport.o ldt.o
-obj-y += setup_$(BITS).o i8259_$(BITS).o
+obj-y += setup_$(BITS).o i8259_$(BITS).o setup.o
obj-$(CONFIG_X86_32) += sys_i386_32.o i386_ksyms_32.o
obj-$(CONFIG_X86_64) += sys_x86_64.o x8664_ksyms_64.o
obj-$(CONFIG_X86_64) += syscall_64.o vsyscall_64.o setup64.o
@@ -29,6 +28,7 @@ obj-y += alternative.o i8253.o
obj-$(CONFIG_X86_64) += pci-nommu_64.o bugs_64.o
obj-y += tsc_$(BITS).o io_delay.o rtc.o

+obj-$(CONFIG_X86_TRAMPOLINE) += trampoline.o
obj-y += i387.o
obj-y += ptrace.o
obj-y += ds.o
@@ -47,11 +47,12 @@ obj-$(CONFIG_MICROCODE) += microcode.o
obj-$(CONFIG_PCI) += early-quirks.o
apm-y := apm_32.o
obj-$(CONFIG_APM) += apm.o
-obj-$(CONFIG_X86_SMP) += smp_$(BITS).o smpboot_$(BITS).o tsc_sync.o
-obj-$(CONFIG_X86_32_SMP) += smpcommon_32.o
-obj-$(CONFIG_X86_64_SMP) += smp_64.o smpboot_64.o tsc_sync.o
+obj-$(CONFIG_X86_SMP) += smp.o
+obj-$(CONFIG_X86_SMP) += smpboot.o tsc_sync.o ipi.o tlb_$(BITS).o
+obj-$(CONFIG_X86_32_SMP) += smpcommon.o
+obj-$(CONFIG_X86_64_SMP) += tsc_sync.o smpcommon.o
obj-$(CONFIG_X86_TRAMPOLINE) += trampoline_$(BITS).o
-obj-$(CONFIG_X86_MPPARSE) += mpparse_$(BITS).o
+obj-$(CONFIG_X86_MPPARSE) += mpparse.o
obj-$(CONFIG_X86_LOCAL_APIC) += apic_$(BITS).o nmi_$(BITS).o
obj-$(CONFIG_X86_IO_APIC) += io_apic_$(BITS).o
obj-$(CONFIG_X86_REBOOTFIXUPS) += reboot_fixups_32.o
@@ -60,7 +61,7 @@ obj-$(CONFIG_KEXEC) += relocate_kernel_$(BITS).o crash.o
obj-$(CONFIG_CRASH_DUMP) += crash_dump_$(BITS).o
obj-$(CONFIG_X86_NUMAQ) += numaq_32.o
obj-$(CONFIG_X86_SUMMIT_NUMA) += summit_32.o
-obj-$(CONFIG_X86_VSMP) += vsmp_64.o
+obj-y += vsmp_64.o
obj-$(CONFIG_KPROBES) += kprobes.o
obj-$(CONFIG_MODULES) += module_$(BITS).o
obj-$(CONFIG_ACPI_SRAT) += srat_32.o
@@ -89,7 +90,7 @@ scx200-y += scx200_32.o
###
# 64 bit specific files
ifeq ($(CONFIG_X86_64),y)
-	obj-y += genapic_64.o genapic_flat_64.o
+	obj-y += genapic_64.o genapic_flat_64.o genx2apic_uv_x.o
	obj-$(CONFIG_X86_PM_TIMER) += pmtimer_64.o
	obj-$(CONFIG_AUDIT) += audit_64.o
...
+subdir-			:= realmode
+
obj-$(CONFIG_ACPI)	+= boot.o
-obj-$(CONFIG_ACPI_SLEEP)	+= sleep.o wakeup_$(BITS).o
+obj-$(CONFIG_ACPI_SLEEP)	+= sleep.o wakeup_rm.o wakeup_$(BITS).o

ifneq ($(CONFIG_ACPI_PROCESSOR),)
obj-y				+= cstate.o processor.o
endif

+$(obj)/wakeup_rm.o:    $(obj)/realmode/wakeup.bin
+
+$(obj)/realmode/wakeup.bin: FORCE
+	$(Q)$(MAKE) $(build)=$(obj)/realmode $@
@@ -39,6 +39,11 @@
#include <asm/apic.h>
#include <asm/io.h>
#include <asm/mpspec.h>
+#include <asm/smp.h>
+
+#ifdef CONFIG_X86_LOCAL_APIC
+# include <mach_apic.h>
+#endif

static int __initdata acpi_force = 0;
@@ -52,9 +57,7 @@ EXPORT_SYMBOL(acpi_disabled);
#ifdef CONFIG_X86_64

#include <asm/proto.h>
-#include <asm/genapic.h>
-
-static inline int acpi_madt_oem_check(char *oem_id, char *oem_table_id) { return 0; }

#else				/* X86 */
@@ -111,7 +114,7 @@ char *__init __acpi_map_table(unsigned long phys_addr, unsigned long size)
	if (!phys_addr || !size)
		return NULL;

-	if (phys_addr+size <= (end_pfn_map << PAGE_SHIFT) + PAGE_SIZE)
+	if (phys_addr+size <= (max_pfn_mapped << PAGE_SHIFT) + PAGE_SIZE)
		return __va(phys_addr);

	return NULL;
@@ -237,6 +240,16 @@ static int __init acpi_parse_madt(struct acpi_table_header *table)
	return 0;
}

+static void __cpuinit acpi_register_lapic(int id, u8 enabled)
+{
+	if (!enabled) {
+		++disabled_cpus;
+		return;
+	}
+
+	generic_processor_info(id, 0);
+}
+
static int __init
acpi_parse_lapic(struct acpi_subtable_header * header, const unsigned long end)
{
@@ -256,8 +269,26 @@ acpi_parse_lapic(struct acpi_subtable_header * header, const unsigned long end)
	 * to not preallocating memory for all NR_CPUS
	 * when we use CPU hotplug.
	 */
-	mp_register_lapic(processor->id,	/* APIC ID */
-			  processor->lapic_flags & ACPI_MADT_ENABLED);	/* Enabled? */
+	acpi_register_lapic(processor->id,	/* APIC ID */
+			    processor->lapic_flags & ACPI_MADT_ENABLED);
+
+	return 0;
+}
+
+static int __init
+acpi_parse_sapic(struct acpi_subtable_header *header, const unsigned long end)
+{
+	struct acpi_madt_local_sapic *processor = NULL;
+
+	processor = (struct acpi_madt_local_sapic *)header;
+
+	if (BAD_MADT_ENTRY(processor, end))
+		return -EINVAL;
+
+	acpi_table_print_madt_entry(header);
+
+	acpi_register_lapic((processor->id << 8) | processor->eid,/* APIC ID */
+			    processor->lapic_flags & ACPI_MADT_ENABLED);

	return 0;
}
@@ -300,6 +331,8 @@ acpi_parse_lapic_nmi(struct acpi_subtable_header * header, const unsigned long e

#ifdef CONFIG_X86_IO_APIC

+struct mp_ioapic_routing mp_ioapic_routing[MAX_IO_APICS];
+
static int __init
acpi_parse_ioapic(struct acpi_subtable_header * header, const unsigned long end)
{
@@ -532,7 +565,7 @@ static int __cpuinit _acpi_map_lsapic(acpi_handle handle, int *pcpu)
	buffer.pointer = NULL;

	tmp_map = cpu_present_map;
-	mp_register_lapic(physid, lapic->lapic_flags & ACPI_MADT_ENABLED);
+	acpi_register_lapic(physid, lapic->lapic_flags & ACPI_MADT_ENABLED);

	/*
	 * If mp_register_lapic successfully generates a new logical cpu
@@ -732,6 +765,16 @@ static int __init acpi_parse_fadt(struct acpi_table_header *table)
 * Parse LAPIC entries in MADT
 * returns 0 on success, < 0 on error
 */

+static void __init acpi_register_lapic_address(unsigned long address)
+{
+	mp_lapic_addr = address;
+
+	set_fixmap_nocache(FIX_APIC_BASE, address);
+	if (boot_cpu_physical_apicid == -1U)
+		boot_cpu_physical_apicid = GET_APIC_ID(read_apic_id());
+}
+
static int __init acpi_parse_madt_lapic_entries(void)
{
	int count;
@@ -753,10 +796,14 @@ static int __init acpi_parse_madt_lapic_entries(void)
		return count;
	}

-	mp_register_lapic_address(acpi_lapic_addr);
+	acpi_register_lapic_address(acpi_lapic_addr);
+
+	count = acpi_table_parse_madt(ACPI_MADT_TYPE_LOCAL_SAPIC,
+				      acpi_parse_sapic, MAX_APICS);

-	count = acpi_table_parse_madt(ACPI_MADT_TYPE_LOCAL_APIC, acpi_parse_lapic,
-				      MAX_APICS);
+	if (!count)
+		count = acpi_table_parse_madt(ACPI_MADT_TYPE_LOCAL_APIC,
+					      acpi_parse_lapic, MAX_APICS);
	if (!count) {
		printk(KERN_ERR PREFIX "No LAPIC entries present\n");
		/* TBD: Cleanup to allow fallback to MPS */
...
#
# arch/x86/kernel/acpi/realmode/Makefile
#
# This file is subject to the terms and conditions of the GNU General Public
# License. See the file "COPYING" in the main directory of this archive
# for more details.
#
targets := wakeup.bin wakeup.elf
wakeup-y += wakeup.o wakemain.o video-mode.o copy.o
# The link order of the video-*.o modules can matter. In particular,
# video-vga.o *must* be listed first, followed by video-vesa.o.
# Hardware-specific drivers should follow in the order they should be
# probed, and video-bios.o should typically be last.
wakeup-y += video-vga.o
wakeup-y += video-vesa.o
wakeup-y += video-bios.o
targets += $(wakeup-y)
bootsrc := $(src)/../../../boot
# ---------------------------------------------------------------------------
# How to compile the 16-bit code. Note we always compile for -march=i386,
# that way we can complain to the user if the CPU is insufficient.
# Compile with _SETUP since this is similar to the boot-time setup code.
KBUILD_CFLAGS := $(LINUXINCLUDE) -g -Os -D_SETUP -D_WAKEUP -D__KERNEL__ \
-I$(srctree)/$(bootsrc) \
$(cflags-y) \
-Wall -Wstrict-prototypes \
-march=i386 -mregparm=3 \
-include $(srctree)/$(bootsrc)/code16gcc.h \
-fno-strict-aliasing -fomit-frame-pointer \
$(call cc-option, -ffreestanding) \
$(call cc-option, -fno-toplevel-reorder,\
$(call cc-option, -fno-unit-at-a-time)) \
$(call cc-option, -fno-stack-protector) \
$(call cc-option, -mpreferred-stack-boundary=2)
KBUILD_CFLAGS += $(call cc-option, -m32)
KBUILD_AFLAGS := $(KBUILD_CFLAGS) -D__ASSEMBLY__
WAKEUP_OBJS = $(addprefix $(obj)/,$(wakeup-y))
LDFLAGS_wakeup.elf := -T
CPPFLAGS_wakeup.lds += -P -C
$(obj)/wakeup.elf: $(src)/wakeup.lds $(WAKEUP_OBJS) FORCE
$(call if_changed,ld)
OBJCOPYFLAGS_wakeup.bin := -O binary
$(obj)/wakeup.bin: $(obj)/wakeup.elf FORCE
$(call if_changed,objcopy)
#include "../../../boot/copy.S"
#include "../../../boot/video-bios.c"
#include "../../../boot/video-mode.c"
#include "../../../boot/video-vesa.c"
#include "../../../boot/video-vga.c"
#include "wakeup.h"
#include "boot.h"
static void udelay(int loops)
{
while (loops--)
io_delay(); /* Approximately 1 us */
}
static void beep(unsigned int hz)
{
u8 enable;
if (!hz) {
enable = 0x00; /* Turn off speaker */
} else {
u16 div = 1193181/hz;
outb(0xb6, 0x43); /* Ctr 2, squarewave, load, binary */
io_delay();
outb(div, 0x42); /* LSB of counter */
io_delay();
outb(div >> 8, 0x42); /* MSB of counter */
io_delay();
enable = 0x03; /* Turn on speaker */
}
inb(0x61); /* Dummy read of System Control Port B */
io_delay();
outb(enable, 0x61); /* Enable timer 2 output to speaker */
io_delay();
}
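beep() programs PIT channel 2 as a square-wave generator: the counter divides the PIT's roughly 1.193182 MHz base clock, so the divisor for a target pitch is the integer quotient computed above. A standalone sketch of that arithmetic (`pit_divisor` is an illustrative helper mirroring the `1193181/hz` expression, not a kernel symbol):

```c
#include <assert.h>
#include <stdint.h>

#define PIT_HZ 1193181u		/* PIT base clock, as used in beep() */

/* Divisor loaded into PIT channel 2 for a given output frequency. */
static uint16_t pit_divisor(unsigned int hz)
{
	return (uint16_t)(PIT_HZ / hz);
}
```

The divisor is written LSB first then MSB to port 0x42, as the two `outb(div, 0x42)` / `outb(div >> 8, 0x42)` calls above show.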
#define DOT_HZ 880
#define DASH_HZ 587
#define US_PER_DOT 125000
/* Okay, this is totally silly, but it's kind of fun. */
static void send_morse(const char *pattern)
{
char s;
while ((s = *pattern++)) {
switch (s) {
case '.':
beep(DOT_HZ);
udelay(US_PER_DOT);
beep(0);
udelay(US_PER_DOT);
break;
case '-':
beep(DASH_HZ);
udelay(US_PER_DOT * 3);
beep(0);
udelay(US_PER_DOT);
break;
default: /* Assume it's a space */
udelay(US_PER_DOT * 3);
break;
}
}
}
void main(void)
{
/* Kill machine if structures are wrong */
if (wakeup_header.real_magic != 0x12345678)
while (1);
if (wakeup_header.realmode_flags & 4)
send_morse("...-");
if (wakeup_header.realmode_flags & 1)
asm volatile("lcallw $0xc000,$3");
if (wakeup_header.realmode_flags & 2) {
/* Need to call BIOS */
probe_cards(0);
set_mode(wakeup_header.video_mode);
}
}
/*
* ACPI wakeup real mode startup stub
*/
#include <asm/segment.h>
#include <asm/msr-index.h>
#include <asm/page.h>
#include <asm/pgtable.h>
.code16
.section ".header", "a"
/* This should match the structure in wakeup.h */
.globl wakeup_header
wakeup_header:
video_mode: .short 0 /* Video mode number */
pmode_return: .byte 0x66, 0xea /* ljmpl */
.long 0 /* offset goes here */
.short __KERNEL_CS
pmode_cr0: .long 0 /* Saved %cr0 */
pmode_cr3: .long 0 /* Saved %cr3 */
pmode_cr4: .long 0 /* Saved %cr4 */
pmode_efer: .quad 0 /* Saved EFER */
pmode_gdt: .quad 0
realmode_flags: .long 0
real_magic: .long 0
trampoline_segment: .word 0
signature: .long 0x51ee1111
.text
.globl _start
.code16
wakeup_code:
_start:
cli
cld
/* Set up segments */
movw %cs, %ax
movw %ax, %ds
movw %ax, %es
movw %ax, %ss
movl $wakeup_stack_end, %esp
/* Clear the EFLAGS */
pushl $0
popfl
/* Check header signature... */
movl signature, %eax
cmpl $0x51ee1111, %eax
jne bogus_real_magic
/* Check we really have everything... */
movl end_signature, %eax
cmpl $0x65a22c82, %eax
jne bogus_real_magic
/* Call the C code */
calll main
/* Do any other stuff... */
#ifndef CONFIG_64BIT
/* This could also be done in C code... */
movl pmode_cr3, %eax
movl %eax, %cr3
movl pmode_cr4, %ecx
jecxz 1f
movl %ecx, %cr4
1:
movl pmode_efer, %eax
movl pmode_efer + 4, %edx
movl %eax, %ecx
orl %edx, %ecx
jz 1f
movl $0xc0000080, %ecx
wrmsr
1:
lgdtl pmode_gdt
/* This really couldn't... */
movl pmode_cr0, %eax
movl %eax, %cr0
jmp pmode_return
#else
pushw $0
pushw trampoline_segment
pushw $0
lret
#endif
bogus_real_magic:
1:
hlt
jmp 1b
.data
.balign 4
.globl HEAP, heap_end
HEAP:
.long wakeup_heap
heap_end:
.long wakeup_stack
.bss
wakeup_heap:
.space 2048
wakeup_stack:
.space 2048
wakeup_stack_end:
/*
* Definitions for the wakeup data structure at the head of the
* wakeup code.
*/
#ifndef ARCH_X86_KERNEL_ACPI_RM_WAKEUP_H
#define ARCH_X86_KERNEL_ACPI_RM_WAKEUP_H
#ifndef __ASSEMBLY__
#include <linux/types.h>
/* This must match data at wakeup.S */
struct wakeup_header {
u16 video_mode; /* Video mode number */
u16 _jmp1; /* ljmpl opcode, 32-bit only */
u32 pmode_entry; /* Protected mode resume point, 32-bit only */
u16 _jmp2; /* CS value, 32-bit only */
u32 pmode_cr0; /* Protected mode cr0 */
u32 pmode_cr3; /* Protected mode cr3 */
u32 pmode_cr4; /* Protected mode cr4 */
u32 pmode_efer_low; /* Protected mode EFER */
u32 pmode_efer_high;
u64 pmode_gdt;
u32 realmode_flags;
u32 real_magic;
u16 trampoline_segment; /* segment with trampoline code, 64-bit only */
u32 signature; /* To check we have correct structure */
} __attribute__((__packed__));
extern struct wakeup_header wakeup_header;
#endif
#define HEADER_OFFSET 0x3f00
#define WAKEUP_SIZE 0x4000
#endif /* ARCH_X86_KERNEL_ACPI_RM_WAKEUP_H */
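Since this packed struct must match the `.header` section emitted by wakeup.S byte for byte, a quick offsetof/sizeof check makes the correspondence explicit. This sketch duplicates the struct rather than including the kernel header, and assumes a GCC-compatible compiler for the packed attribute:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint16_t u16;
typedef uint32_t u32;
typedef uint64_t u64;

/* Local copy of the wakeup header layout, for illustration only. */
struct wakeup_header {
	u16 video_mode;		/* Video mode number */
	u16 _jmp1;		/* ljmpl opcode, 32-bit only */
	u32 pmode_entry;	/* Protected mode resume point, 32-bit only */
	u16 _jmp2;		/* CS value, 32-bit only */
	u32 pmode_cr0;
	u32 pmode_cr3;
	u32 pmode_cr4;
	u32 pmode_efer_low;
	u32 pmode_efer_high;
	u64 pmode_gdt;
	u32 realmode_flags;
	u32 real_magic;
	u16 trampoline_segment;
	u32 signature;		/* 0x51ee1111, checked by the stub */
} __attribute__((__packed__));
```

With the packed attribute every field lands at the cumulative byte offset of the preceding fields, which is exactly how wakeup.S lays the data out with `.short`/`.long`/`.quad`/`.byte` directives.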
/*
* wakeup.ld
*
* Linker script for the real-mode wakeup code
*/
#undef i386
#include "wakeup.h"
OUTPUT_FORMAT("elf32-i386", "elf32-i386", "elf32-i386")
OUTPUT_ARCH(i386)
ENTRY(_start)
SECTIONS
{
. = HEADER_OFFSET;
.header : {
*(.header)
}
. = 0;
.text : {
*(.text*)
}
. = ALIGN(16);
.rodata : {
*(.rodata*)
}
.videocards : {
video_cards = .;
*(.videocards)
video_cards_end = .;
}
. = ALIGN(16);
.data : {
*(.data*)
}
.signature : {
end_signature = .;
LONG(0x65a22c82)
}
. = ALIGN(16);
.bss : {
__bss_start = .;
*(.bss)
__bss_end = .;
}
. = ALIGN(16);
_end = .;
/DISCARD/ : {
*(.note*)
}
. = ASSERT(_end <= WAKEUP_SIZE, "Wakeup too big!");
}
...@@ -10,30 +10,72 @@ ...@@ -10,30 +10,72 @@
#include <linux/dmi.h> #include <linux/dmi.h>
#include <linux/cpumask.h> #include <linux/cpumask.h>
#include <asm/smp.h> #include "realmode/wakeup.h"
#include "sleep.h"
/* address in low memory of the wakeup routine. */ unsigned long acpi_wakeup_address;
unsigned long acpi_wakeup_address = 0;
unsigned long acpi_realmode_flags; unsigned long acpi_realmode_flags;
extern char wakeup_start, wakeup_end;
extern unsigned long acpi_copy_wakeup_routine(unsigned long); /* address in low memory of the wakeup routine. */
static unsigned long acpi_realmode;
#ifdef CONFIG_64BIT
static char temp_stack[10240];
#endif
/** /**
* acpi_save_state_mem - save kernel state * acpi_save_state_mem - save kernel state
* *
* Create an identity mapped page table and copy the wakeup routine to * Create an identity mapped page table and copy the wakeup routine to
* low memory. * low memory.
*
* Note that this is too late to change acpi_wakeup_address.
*/ */
int acpi_save_state_mem(void) int acpi_save_state_mem(void)
{ {
if (!acpi_wakeup_address) { struct wakeup_header *header;
printk(KERN_ERR "Could not allocate memory during boot, S3 disabled\n");
if (!acpi_realmode) {
printk(KERN_ERR "Could not allocate memory during boot, "
"S3 disabled\n");
return -ENOMEM; return -ENOMEM;
} }
memcpy((void *)acpi_wakeup_address, &wakeup_start, memcpy((void *)acpi_realmode, &wakeup_code_start, WAKEUP_SIZE);
&wakeup_end - &wakeup_start);
acpi_copy_wakeup_routine(acpi_wakeup_address); header = (struct wakeup_header *)(acpi_realmode + HEADER_OFFSET);
if (header->signature != 0x51ee1111) {
printk(KERN_ERR "wakeup header does not match\n");
return -EINVAL;
}
header->video_mode = saved_video_mode;
#ifndef CONFIG_64BIT
store_gdt((struct desc_ptr *)&header->pmode_gdt);
header->pmode_efer_low = nx_enabled;
if (header->pmode_efer_low & 1) {
/* This is strange; why not always save EFER? */
rdmsr(MSR_EFER, header->pmode_efer_low,
header->pmode_efer_high);
}
#endif /* !CONFIG_64BIT */
header->pmode_cr0 = read_cr0();
header->pmode_cr4 = read_cr4();
header->realmode_flags = acpi_realmode_flags;
header->real_magic = 0x12345678;
#ifndef CONFIG_64BIT
header->pmode_entry = (u32)&wakeup_pmode_return;
header->pmode_cr3 = (u32)(swsusp_pg_dir - __PAGE_OFFSET);
saved_magic = 0x12345678;
#else /* CONFIG_64BIT */
header->trampoline_segment = setup_trampoline() >> 4;
init_rsp = (unsigned long)temp_stack + 4096;
initial_code = (unsigned long)wakeup_long64;
saved_magic = 0x123456789abcdef0;
#endif /* CONFIG_64BIT */
return 0; return 0;
} }
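The function above only trusts the copied realmode image after matching a magic signature at a fixed offset set by the linker script. A minimal userspace sketch of that check — the struct is abbreviated to the one field used here, the function name `check_wakeup_image` is mine, and the field offset is illustrative rather than the real header layout:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define HEADER_OFFSET 0x3f00
#define WAKEUP_SIZE   0x4000
#define WAKEUP_SIG    0x51ee1111u

/* Abbreviated: the real wakeup_header has many more fields */
struct wakeup_header_min {
	uint32_t signature;
};

/* Return 0 if the blob carries a valid header at the fixed offset */
static int check_wakeup_image(const uint8_t *image)
{
	struct wakeup_header_min h;

	memcpy(&h, image + HEADER_OFFSET, sizeof(h));
	return h.signature == WAKEUP_SIG ? 0 : -1;
}
```

The point of the fixed offset is that both sides — the linker script placing `.header` and the C code reading it — agree on `HEADER_OFFSET` without any relocation.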
...@@ -56,15 +98,20 @@ void acpi_restore_state_mem(void) ...@@ -56,15 +98,20 @@ void acpi_restore_state_mem(void)
*/ */
void __init acpi_reserve_bootmem(void) void __init acpi_reserve_bootmem(void)
{ {
if ((&wakeup_end - &wakeup_start) > PAGE_SIZE*2) { if ((&wakeup_code_end - &wakeup_code_start) > WAKEUP_SIZE) {
printk(KERN_ERR printk(KERN_ERR
"ACPI: Wakeup code way too big, S3 disabled.\n"); "ACPI: Wakeup code way too big, S3 disabled.\n");
return; return;
} }
acpi_wakeup_address = (unsigned long)alloc_bootmem_low(PAGE_SIZE*2); acpi_realmode = (unsigned long)alloc_bootmem_low(WAKEUP_SIZE);
if (!acpi_wakeup_address)
if (!acpi_realmode) {
printk(KERN_ERR "ACPI: Cannot allocate lowmem, S3 disabled.\n"); printk(KERN_ERR "ACPI: Cannot allocate lowmem, S3 disabled.\n");
return;
}
acpi_wakeup_address = acpi_realmode;
} }
......
/*
* Variables and functions used by the code in sleep.c
*/
#include <asm/trampoline.h>
extern char wakeup_code_start, wakeup_code_end;
extern unsigned long saved_video_mode;
extern long saved_magic;
extern int wakeup_pmode_return;
extern char swsusp_pg_dir[PAGE_SIZE];
extern unsigned long acpi_copy_wakeup_routine(unsigned long);
extern void wakeup_long64(void);
/*
* sleep.c - x86-specific ACPI sleep support.
*
* Copyright (C) 2001-2003 Patrick Mochel
* Copyright (C) 2001-2003 Pavel Machek <pavel@suse.cz>
*/
#include <linux/acpi.h>
#include <linux/bootmem.h>
#include <linux/dmi.h>
#include <linux/cpumask.h>
#include <asm/smp.h>
/* Ouch, we want to delete this. We already have a better version in userspace, in
   s2ram from the suspend.sf.net project */
static __init int reset_videomode_after_s3(const struct dmi_system_id *d)
{
acpi_realmode_flags |= 2;
return 0;
}
static __initdata struct dmi_system_id acpisleep_dmi_table[] = {
{ /* Reset video mode after returning from ACPI S3 sleep */
.callback = reset_videomode_after_s3,
.ident = "Toshiba Satellite 4030cdt",
.matches = {
DMI_MATCH(DMI_PRODUCT_NAME, "S4030CDT/4.3"),
},
},
{}
};
static int __init acpisleep_dmi_init(void)
{
dmi_check_system(acpisleep_dmi_table);
return 0;
}
core_initcall(acpisleep_dmi_init);
...@@ -3,178 +3,12 @@ ...@@ -3,178 +3,12 @@
#include <asm/segment.h> #include <asm/segment.h>
#include <asm/page.h> #include <asm/page.h>
# # Copyright 2003, 2008 Pavel Machek <pavel@suse.cz>, distribute under GPLv2
# wakeup_code runs in real mode, at an unknown address (determined at run time).
# Therefore it must only use relative jumps/calls.
#
# Do we need to deal with A20? It is okay: the ACPI spec says A20 must be enabled.
#
# If physical address of wakeup_code is 0x12345, BIOS should call us with
# cs = 0x1234, eip = 0x05
#
#define BEEP \
inb $97, %al; \
outb %al, $0x80; \
movb $3, %al; \
outb %al, $97; \
outb %al, $0x80; \
movb $-74, %al; \
outb %al, $67; \
outb %al, $0x80; \
movb $-119, %al; \
outb %al, $66; \
outb %al, $0x80; \
movb $15, %al; \
outb %al, $66;
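The BEEP macro programs PIT channel 2 through the command port 0x43 and data port 0x42 (written above in decimal as 67 and 66), gated via port 0x61 (97). A sketch of the arithmetic behind the byte constants — the signed literals `$-74` and `$-119` are just 0xB6 and 0x89; the helper name `pit_freq` is mine:

```c
#include <assert.h>
#include <stdint.h>

#define PIT_HZ 1193182u  /* PIT input clock, Hz */

/* Command byte 0xB6: channel 2, lobyte/hibyte access, mode 3 (square wave) */
static const uint8_t pit_cmd = (2 << 6) | (3 << 4) | (3 << 1);

/* Divisor loaded by the macro: low byte 0x89 ($-119), high byte 0x0F */
static const uint16_t pit_divisor = (0x0F << 8) | 0x89;

/* Approximate speaker frequency for a given 16-bit divisor */
static unsigned pit_freq(uint16_t divisor)
{
	return PIT_HZ / divisor;
}
```

With divisor 0x0F89 (3977) this works out to roughly a 300 Hz tone.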
ALIGN
.align 4096
ENTRY(wakeup_start)
wakeup_code:
wakeup_code_start = .
.code16
cli
cld
# setup data segment
movw %cs, %ax
movw %ax, %ds # Make ds:0 point to wakeup_start
movw %ax, %ss
testl $4, realmode_flags - wakeup_code
jz 1f
BEEP
1:
mov $(wakeup_stack - wakeup_code), %sp # Private stack is needed for ASUS board
pushl $0 # Kill any dangerous flags
popfl
movl real_magic - wakeup_code, %eax
cmpl $0x12345678, %eax
jne bogus_real_magic
testl $1, realmode_flags - wakeup_code
jz 1f
lcall $0xc000,$3
movw %cs, %ax
movw %ax, %ds # The BIOS might have changed it
movw %ax, %ss
1:
testl $2, realmode_flags - wakeup_code
jz 1f
mov video_mode - wakeup_code, %ax
call mode_set
1:
# set up page table
movl $swsusp_pg_dir-__PAGE_OFFSET, %eax
movl %eax, %cr3
testl $1, real_efer_save_restore - wakeup_code
jz 4f
# restore efer setting
movl real_save_efer_edx - wakeup_code, %edx
movl real_save_efer_eax - wakeup_code, %eax
mov $0xc0000080, %ecx
wrmsr
4:
# make sure %cr4 is set correctly (features, etc)
movl real_save_cr4 - wakeup_code, %eax
movl %eax, %cr4
# need a gdt -- use lgdtl to force 32-bit operands, in case
# the GDT is located past 16 megabytes.
lgdtl real_save_gdt - wakeup_code
movl real_save_cr0 - wakeup_code, %eax
movl %eax, %cr0
jmp 1f
1:
movl real_magic - wakeup_code, %eax
cmpl $0x12345678, %eax
jne bogus_real_magic
testl $8, realmode_flags - wakeup_code
jz 1f
BEEP
1:
ljmpl $__KERNEL_CS, $wakeup_pmode_return
real_save_gdt: .word 0
.long 0
real_save_cr0: .long 0
real_save_cr3: .long 0
real_save_cr4: .long 0
real_magic: .long 0
video_mode: .long 0
realmode_flags: .long 0
real_efer_save_restore: .long 0
real_save_efer_edx: .long 0
real_save_efer_eax: .long 0
bogus_real_magic:
jmp bogus_real_magic
/* This code uses an extended set of video mode numbers. These include:
* Aliases for standard modes
* NORMAL_VGA (-1)
* EXTENDED_VGA (-2)
* ASK_VGA (-3)
* Video modes numbered by menu position -- NOT RECOMMENDED because of lack
* of compatibility when extending the table. These are between 0x00 and 0xff.
*/
#define VIDEO_FIRST_MENU 0x0000
/* Standard BIOS video modes (BIOS number + 0x0100) */
#define VIDEO_FIRST_BIOS 0x0100
/* VESA BIOS video modes (VESA number + 0x0200) */
#define VIDEO_FIRST_VESA 0x0200
/* Video7 special modes (BIOS number + 0x0900) */
#define VIDEO_FIRST_V7 0x0900
# Setting of user mode (AX=mode ID) => CF=success
# For now, we only handle VESA modes (0x0200..0x03ff). To handle other
# modes, we should probably compile in the video code from the boot
# directory.
mode_set:
movw %ax, %bx
subb $VIDEO_FIRST_VESA>>8, %bh
cmpb $2, %bh
jb check_vesa
setbad:
clc
ret
check_vesa:
orw $0x4000, %bx # Use linear frame buffer
movw $0x4f02, %ax # VESA BIOS mode set call
int $0x10
cmpw $0x004f, %ax # AL=4f if implemented
jnz setbad # AH=0 if OK
stc
ret
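mode_set above accepts the extended mode numbering (VESA modes live at 0x0200..0x03ff) and builds the BX argument for VESA BIOS call AX=0x4F02, setting bit 14 to request a linear frame buffer. The mapping in C — the function name `vesa_bx` is mine; the subtraction mirrors the `subb $VIDEO_FIRST_VESA>>8, %bh` above:

```c
#include <assert.h>
#include <stdint.h>

#define VIDEO_FIRST_VESA 0x0200
#define VESA_LFB         0x4000  /* bit 14: use linear frame buffer */

/* BX value for int 0x10 / AX=0x4F02, or -1 if not a handled VESA mode */
static int vesa_bx(uint16_t mode)
{
	uint16_t bx = mode - VIDEO_FIRST_VESA;  /* recover raw VESA number */

	if ((bx >> 8) >= 2)  /* only extended modes 0x0200..0x03ff handled */
		return -1;
	return bx | VESA_LFB;
}
```

For the common VESA mode 0x101 (extended number 0x0301) this yields BX = 0x4101.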
.code32 .code32
ALIGN ALIGN
.org 0x800 ENTRY(wakeup_pmode_return)
wakeup_stack_begin: # Stack grows down
.org 0xff0 # Just below end of page
wakeup_stack:
ENTRY(wakeup_end)
.org 0x1000
wakeup_pmode_return: wakeup_pmode_return:
movw $__KERNEL_DS, %ax movw $__KERNEL_DS, %ax
movw %ax, %ss movw %ax, %ss
...@@ -187,7 +21,7 @@ wakeup_pmode_return: ...@@ -187,7 +21,7 @@ wakeup_pmode_return:
lgdt saved_gdt lgdt saved_gdt
lidt saved_idt lidt saved_idt
lldt saved_ldt lldt saved_ldt
ljmp $(__KERNEL_CS),$1f ljmp $(__KERNEL_CS), $1f
1: 1:
movl %cr3, %eax movl %cr3, %eax
movl %eax, %cr3 movl %eax, %cr3
...@@ -201,82 +35,41 @@ wakeup_pmode_return: ...@@ -201,82 +35,41 @@ wakeup_pmode_return:
jne bogus_magic jne bogus_magic
# jump to place where we left off # jump to place where we left off
movl saved_eip,%eax movl saved_eip, %eax
jmp *%eax jmp *%eax
bogus_magic: bogus_magic:
jmp bogus_magic jmp bogus_magic
##
# acpi_copy_wakeup_routine
#
# Copy the above routine to low memory.
#
# Parameters:
# %eax: place to copy wakeup routine to
#
# The returned address is the location of the code in low memory (past data and stack)
#
ENTRY(acpi_copy_wakeup_routine)
pushl %ebx save_registers:
sgdt saved_gdt sgdt saved_gdt
sidt saved_idt sidt saved_idt
sldt saved_ldt sldt saved_ldt
str saved_tss str saved_tss
movl nx_enabled, %edx
movl %edx, real_efer_save_restore - wakeup_start (%eax)
testl $1, real_efer_save_restore - wakeup_start (%eax)
jz 2f
# save efer setting
pushl %eax
movl %eax, %ebx
mov $0xc0000080, %ecx
rdmsr
movl %edx, real_save_efer_edx - wakeup_start (%ebx)
movl %eax, real_save_efer_eax - wakeup_start (%ebx)
popl %eax
2:
movl %cr3, %edx
movl %edx, real_save_cr3 - wakeup_start (%eax)
movl %cr4, %edx
movl %edx, real_save_cr4 - wakeup_start (%eax)
movl %cr0, %edx
movl %edx, real_save_cr0 - wakeup_start (%eax)
sgdt real_save_gdt - wakeup_start (%eax)
movl saved_videomode, %edx
movl %edx, video_mode - wakeup_start (%eax)
movl acpi_realmode_flags, %edx
movl %edx, realmode_flags - wakeup_start (%eax)
movl $0x12345678, real_magic - wakeup_start (%eax)
movl $0x12345678, saved_magic
popl %ebx
ret
save_registers:
leal 4(%esp), %eax leal 4(%esp), %eax
movl %eax, saved_context_esp movl %eax, saved_context_esp
movl %ebx, saved_context_ebx movl %ebx, saved_context_ebx
movl %ebp, saved_context_ebp movl %ebp, saved_context_ebp
movl %esi, saved_context_esi movl %esi, saved_context_esi
movl %edi, saved_context_edi movl %edi, saved_context_edi
pushfl ; popl saved_context_eflags pushfl
popl saved_context_eflags
movl $ret_point, saved_eip
movl $ret_point, saved_eip
ret ret
restore_registers: restore_registers:
movl saved_context_ebp, %ebp movl saved_context_ebp, %ebp
movl saved_context_ebx, %ebx movl saved_context_ebx, %ebx
movl saved_context_esi, %esi movl saved_context_esi, %esi
movl saved_context_edi, %edi movl saved_context_edi, %edi
pushl saved_context_eflags ; popfl pushl saved_context_eflags
ret popfl
ret
ENTRY(do_suspend_lowlevel) ENTRY(do_suspend_lowlevel)
call save_processor_state call save_processor_state
......
...@@ -7,191 +7,18 @@ ...@@ -7,191 +7,18 @@
#include <asm/asm-offsets.h> #include <asm/asm-offsets.h>
# Copyright 2003 Pavel Machek <pavel@suse.cz>, distribute under GPLv2 # Copyright 2003 Pavel Machek <pavel@suse.cz>, distribute under GPLv2
#
# wakeup_code runs in real mode, at an unknown address (determined at run time).
# Therefore it must only use relative jumps/calls.
#
# Do we need to deal with A20? It is okay: the ACPI spec says A20 must be enabled.
#
# If physical address of wakeup_code is 0x12345, BIOS should call us with
# cs = 0x1234, eip = 0x05
#
#define BEEP \
inb $97, %al; \
outb %al, $0x80; \
movb $3, %al; \
outb %al, $97; \
outb %al, $0x80; \
movb $-74, %al; \
outb %al, $67; \
outb %al, $0x80; \
movb $-119, %al; \
outb %al, $66; \
outb %al, $0x80; \
movb $15, %al; \
outb %al, $66;
ALIGN
.align 16
ENTRY(wakeup_start)
wakeup_code:
wakeup_code_start = .
.code16
# Running in a *copy* of this code, somewhere in the low 1 MB.
cli
cld
# setup data segment
movw %cs, %ax
movw %ax, %ds # Make ds:0 point to wakeup_start
movw %ax, %ss
# Data segment must be set up before we can see whether to beep.
testl $4, realmode_flags - wakeup_code
jz 1f
BEEP
1:
# Private stack is needed for ASUS board
mov $(wakeup_stack - wakeup_code), %sp
pushl $0 # Kill any dangerous flags
popfl
movl real_magic - wakeup_code, %eax
cmpl $0x12345678, %eax
jne bogus_real_magic
testl $1, realmode_flags - wakeup_code
jz 1f
lcall $0xc000,$3
movw %cs, %ax
movw %ax, %ds # The BIOS might have changed it
movw %ax, %ss
1:
testl $2, realmode_flags - wakeup_code
jz 1f
mov video_mode - wakeup_code, %ax
call mode_set
1:
mov %ds, %ax # Find 32bit wakeup_code addr
movzx %ax, %esi # (Convert %ds:gdt to a linear ptr)
shll $4, %esi
# Fix up the vectors
addl %esi, wakeup_32_vector - wakeup_code
addl %esi, wakeup_long64_vector - wakeup_code
addl %esi, gdt_48a + 2 - wakeup_code # Fixup the gdt pointer
lidtl %ds:idt_48a - wakeup_code
lgdtl %ds:gdt_48a - wakeup_code # load gdt with whatever is
# appropriate
movl $1, %eax # protected mode (PE) bit
lmsw %ax # This is it!
jmp 1f
1:
ljmpl *(wakeup_32_vector - wakeup_code)
.balign 4
wakeup_32_vector:
.long wakeup_32 - wakeup_code
.word __KERNEL32_CS, 0
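wakeup_32_vector is the 6-byte operand of the indirect `ljmpl`: a 32-bit offset followed by a 16-bit selector (the trailing `.word 0` above is only padding). Sketched as a packed struct — the type name is mine, and the layout assumes the GCC packed attribute:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Operand of an indirect 32-bit far jump: offset first, selector second */
struct far_ptr32 {
	uint32_t offset;    /* fixed up at run time by "addl %esi, ..." */
	uint16_t selector;  /* e.g. __KERNEL32_CS */
} __attribute__((packed));
```

Because the code is copied to an unknown low-memory address, the offset field must be patched at run time with the segment's linear base before the jump.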
.code32
wakeup_32:
# Running in this code, but at low address; paging is not yet turned on.
movl $__KERNEL_DS, %eax
movl %eax, %ds
/*
* Prepare for entering 64bits mode
*/
/* Enable PAE */
xorl %eax, %eax
btsl $5, %eax
movl %eax, %cr4
/* Setup early boot stage 4 level pagetables */
leal (wakeup_level4_pgt - wakeup_code)(%esi), %eax
movl %eax, %cr3
/* Check if nx is implemented */
movl $0x80000001, %eax
cpuid
movl %edx,%edi
/* Enable Long Mode */
xorl %eax, %eax
btsl $_EFER_LME, %eax
/* No Execute supported? */
btl $20,%edi
jnc 1f
btsl $_EFER_NX, %eax
/* Make changes effective */
1: movl $MSR_EFER, %ecx
xorl %edx, %edx
wrmsr
xorl %eax, %eax
btsl $31, %eax /* Enable paging and in turn activate Long Mode */
btsl $0, %eax /* Enable protected mode */
/* Make changes effective */
movl %eax, %cr0
/* At this point:
CR4.PAE must be 1
CS.L must be 0
CR3 must point to PML4
Next instruction must be a branch
This must be on identity-mapped page
*/
/*
* At this point we're in long mode but in 32bit compatibility mode
* with EFER.LME = 1, CS.L = 0, CS.D = 1 (and in turn
* EFER.LMA = 1). Now we want to jump in 64bit mode, to do that we load
* the new gdt/idt that has __KERNEL_CS with CS.L = 1.
*/
/* Finally jump in 64bit mode */
ljmp *(wakeup_long64_vector - wakeup_code)(%esi)
.balign 4
wakeup_long64_vector:
.long wakeup_long64 - wakeup_code
.word __KERNEL_CS, 0
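The 32-bit stub above flips individual architectural bits with `btsl`. The same constants written out in C (bit positions per the x86 architecture manuals; the helper name `efer_value` is mine):

```c
#include <assert.h>
#include <stdint.h>

#define X86_CR4_PAE (1u << 5)   /* "btsl $5, %eax" before loading %cr4 */
#define _EFER_LME   8           /* long mode enable, EFER bit 8 */
#define _EFER_NX    11          /* no-execute enable, EFER bit 11 */
#define X86_CR0_PE  (1u << 0)   /* protected mode */
#define X86_CR0_PG  (1u << 31)  /* paging; activates long mode once LME is set */

/* EFER value the stub writes: LME always, NX only if CPUID reports it */
static uint32_t efer_value(int nx_supported)
{
	uint32_t efer = 1u << _EFER_LME;

	if (nx_supported)  /* CPUID 0x80000001, EDX bit 20 */
		efer |= 1u << _EFER_NX;
	return efer;
}
```

The ordering matters: PAE in CR4 and a PML4 in CR3 must be in place before CR0.PG is set, which is exactly the sequence the assembly follows.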
.code64 .code64
/* Hooray, we are in Long 64-bit mode (but still running in
* low memory)
*/
wakeup_long64:
/* /*
* We must switch to a new descriptor in kernel space for the GDT * Hooray, we are in Long 64-bit mode (but still running in low memory)
* because soon the kernel will no longer have access to the userspace
* addresses where we're currently running. We have to do that here
* because in 32bit we couldn't load a 64bit linear address.
*/ */
lgdt cpu_gdt_descr ENTRY(wakeup_long64)
wakeup_long64:
movq saved_magic, %rax movq saved_magic, %rax
movq $0x123456789abcdef0, %rdx movq $0x123456789abcdef0, %rdx
cmpq %rdx, %rax cmpq %rdx, %rax
jne bogus_64_magic jne bogus_64_magic
nop
nop
movw $__KERNEL_DS, %ax movw $__KERNEL_DS, %ax
movw %ax, %ss movw %ax, %ss
movw %ax, %ds movw %ax, %ds
...@@ -208,130 +35,8 @@ wakeup_long64: ...@@ -208,130 +35,8 @@ wakeup_long64:
movq saved_rip, %rax movq saved_rip, %rax
jmp *%rax jmp *%rax
.code32
.align 64
gdta:
/* It's good to keep this gdt in sync with the one in trampoline.S */
.word 0, 0, 0, 0 # dummy
/* ??? Why does the accessed bit need to be set for this to work? */
.quad 0x00cf9b000000ffff # __KERNEL32_CS
.quad 0x00af9b000000ffff # __KERNEL_CS
.quad 0x00cf93000000ffff # __KERNEL_DS
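The three quadwords above encode flat 4 GiB segments. A sketch of decoding one entry (e.g. the __KERNEL32_CS descriptor 0x00cf9b000000ffff) following the standard x86 descriptor layout — the struct and function names are mine:

```c
#include <assert.h>
#include <stdint.h>

struct seg_info {
	uint32_t base;
	uint32_t limit;   /* raw 20-bit limit, before granularity scaling */
	uint8_t  access;  /* P/DPL/S/type byte */
	int      gran_4k; /* limit counts 4 KiB pages when set */
};

static struct seg_info decode_desc(uint64_t d)
{
	struct seg_info s;

	s.limit   = (d & 0xffff) | ((d >> 32) & 0xf0000);
	s.base    = ((d >> 16) & 0xffffff) | (((d >> 56) & 0xff) << 24);
	s.access  = (d >> 40) & 0xff;
	s.gran_4k = (d >> 55) & 1;
	return s;
}
```

For 0x00cf9b000000ffff this gives base 0, limit 0xfffff with 4 KiB granularity (a flat 4 GiB segment), and access byte 0x9b — present, DPL 0, executable, accessed.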
idt_48a:
.word 0 # idt limit = 0
.word 0, 0 # idt base = 0L
gdt_48a:
.word 0x800 # gdt limit=2048,
# 256 GDT entries
.long gdta - wakeup_code # gdt base (relocated later)
real_magic: .quad 0
video_mode: .quad 0
realmode_flags: .quad 0
.code16
bogus_real_magic:
jmp bogus_real_magic
.code64
bogus_64_magic: bogus_64_magic:
jmp bogus_64_magic jmp bogus_64_magic
/* This code uses an extended set of video mode numbers. These include:
* Aliases for standard modes
* NORMAL_VGA (-1)
* EXTENDED_VGA (-2)
* ASK_VGA (-3)
* Video modes numbered by menu position -- NOT RECOMMENDED because of lack
* of compatibility when extending the table. These are between 0x00 and 0xff.
*/
#define VIDEO_FIRST_MENU 0x0000
/* Standard BIOS video modes (BIOS number + 0x0100) */
#define VIDEO_FIRST_BIOS 0x0100
/* VESA BIOS video modes (VESA number + 0x0200) */
#define VIDEO_FIRST_VESA 0x0200
/* Video7 special modes (BIOS number + 0x0900) */
#define VIDEO_FIRST_V7 0x0900
# Setting of user mode (AX=mode ID) => CF=success
# For now, we only handle VESA modes (0x0200..0x03ff). To handle other
# modes, we should probably compile in the video code from the boot
# directory.
.code16
mode_set:
movw %ax, %bx
subb $VIDEO_FIRST_VESA>>8, %bh
cmpb $2, %bh
jb check_vesa
setbad:
clc
ret
check_vesa:
orw $0x4000, %bx # Use linear frame buffer
movw $0x4f02, %ax # VESA BIOS mode set call
int $0x10
cmpw $0x004f, %ax # AL=4f if implemented
jnz setbad # AH=0 if OK
stc
ret
wakeup_stack_begin: # Stack grows down
.org 0xff0
wakeup_stack: # Just below end of page
.org 0x1000
ENTRY(wakeup_level4_pgt)
.quad level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE
.fill 510,8,0
/* (2^48-(2*1024*1024*1024))/(2^39) = 511 */
.quad level3_kernel_pgt - __START_KERNEL_map + _KERNPG_TABLE
ENTRY(wakeup_end)
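The `.fill 510,8,0` plus the comment above rely on the kernel text living in the last PML4 slot: each PML4 entry covers 2^39 bytes, so __START_KERNEL_map (-2 GiB, i.e. 0xffffffff80000000) indexes entry 511. A one-line check of that arithmetic — the function name is mine:

```c
#include <assert.h>
#include <stdint.h>

/* PML4 index of a canonical virtual address: bits 47..39 */
static unsigned pml4_index(uint64_t vaddr)
{
	return (vaddr >> 39) & 0x1ff;
}
```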
##
# acpi_copy_wakeup_routine
#
# Copy the above routine to low memory.
#
# Parameters:
# %rdi: place to copy wakeup routine to
#
# The returned address is the location of the code in low memory (past data and stack)
#
.code64
ENTRY(acpi_copy_wakeup_routine)
pushq %rax
pushq %rdx
movl saved_video_mode, %edx
movl %edx, video_mode - wakeup_start (,%rdi)
movl acpi_realmode_flags, %edx
movl %edx, realmode_flags - wakeup_start (,%rdi)
movq $0x12345678, real_magic - wakeup_start (,%rdi)
movq $0x123456789abcdef0, %rdx
movq %rdx, saved_magic
movq saved_magic, %rax
movq $0x123456789abcdef0, %rdx
cmpq %rdx, %rax
jne bogus_64_magic
# restore the regs we used
popq %rdx
popq %rax
ENTRY(do_suspend_lowlevel_s4bios)
ret
.align 2 .align 2
.p2align 4,,15 .p2align 4,,15
...@@ -414,7 +119,7 @@ do_suspend_lowlevel: ...@@ -414,7 +119,7 @@ do_suspend_lowlevel:
jmp restore_processor_state jmp restore_processor_state
.LFE5: .LFE5:
.Lfe5: .Lfe5:
.size do_suspend_lowlevel,.Lfe5-do_suspend_lowlevel .size do_suspend_lowlevel, .Lfe5-do_suspend_lowlevel
.data .data
ALIGN ALIGN
......
/*
* Wrapper script for the realmode binary as a transport object
* before copying to low memory.
*/
.section ".rodata","a"
.globl wakeup_code_start, wakeup_code_end
wakeup_code_start:
.incbin "arch/x86/kernel/acpi/realmode/wakeup.bin"
wakeup_code_end:
.size wakeup_code_start, .-wakeup_code_start
...@@ -11,6 +11,8 @@ ...@@ -11,6 +11,8 @@
#include <asm/mce.h> #include <asm/mce.h>
#include <asm/nmi.h> #include <asm/nmi.h>
#include <asm/vsyscall.h> #include <asm/vsyscall.h>
#include <asm/cacheflush.h>
#include <asm/io.h>
#define MAX_PATCH_LEN (255-1) #define MAX_PATCH_LEN (255-1)
...@@ -177,7 +179,7 @@ static const unsigned char*const * find_nop_table(void) ...@@ -177,7 +179,7 @@ static const unsigned char*const * find_nop_table(void)
#endif /* CONFIG_X86_64 */ #endif /* CONFIG_X86_64 */
/* Use this to add nops to a buffer, then text_poke the whole buffer. */ /* Use this to add nops to a buffer, then text_poke the whole buffer. */
static void add_nops(void *insns, unsigned int len) void add_nops(void *insns, unsigned int len)
{ {
const unsigned char *const *noptable = find_nop_table(); const unsigned char *const *noptable = find_nop_table();
...@@ -190,6 +192,7 @@ static void add_nops(void *insns, unsigned int len) ...@@ -190,6 +192,7 @@ static void add_nops(void *insns, unsigned int len)
len -= noplen; len -= noplen;
} }
} }
EXPORT_SYMBOL_GPL(add_nops);
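add_nops repeatedly takes the longest NOP encoding that still fits, so a pad of any length uses the fewest instructions. A userspace sketch of the same loop — the NOP table here is illustrative (real kernels pick vendor-specific tables in find_nop_table()), and the name `pad_nops` is mine:

```c
#include <assert.h>
#include <string.h>

#define ASM_NOP_MAX 4

/* Illustrative x86 NOP encodings, indexed by length */
static const unsigned char nop1[] = {0x90};
static const unsigned char nop2[] = {0x66, 0x90};
static const unsigned char nop3[] = {0x0f, 0x1f, 0x00};
static const unsigned char nop4[] = {0x0f, 0x1f, 0x40, 0x00};
static const unsigned char *const nops[ASM_NOP_MAX + 1] = {
	NULL, nop1, nop2, nop3, nop4,
};

/* Fill a buffer with the fewest NOP instructions covering len bytes */
static void pad_nops(unsigned char *insns, unsigned int len)
{
	while (len > 0) {
		unsigned int noplen = len > ASM_NOP_MAX ? ASM_NOP_MAX : len;

		memcpy(insns, nops[noplen], noplen);
		insns += noplen;
		len -= noplen;
	}
}
```

Padding 7 bytes, for instance, emits one 4-byte NOP followed by one 3-byte NOP rather than seven single-byte ones.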
extern struct alt_instr __alt_instructions[], __alt_instructions_end[]; extern struct alt_instr __alt_instructions[], __alt_instructions_end[];
extern u8 *__smp_locks[], *__smp_locks_end[]; extern u8 *__smp_locks[], *__smp_locks_end[];
...@@ -205,7 +208,7 @@ void apply_alternatives(struct alt_instr *start, struct alt_instr *end) ...@@ -205,7 +208,7 @@ void apply_alternatives(struct alt_instr *start, struct alt_instr *end)
struct alt_instr *a; struct alt_instr *a;
char insnbuf[MAX_PATCH_LEN]; char insnbuf[MAX_PATCH_LEN];
DPRINTK("%s: alt table %p -> %p\n", __FUNCTION__, start, end); DPRINTK("%s: alt table %p -> %p\n", __func__, start, end);
for (a = start; a < end; a++) { for (a = start; a < end; a++) {
u8 *instr = a->instr; u8 *instr = a->instr;
BUG_ON(a->replacementlen > a->instrlen); BUG_ON(a->replacementlen > a->instrlen);
...@@ -217,13 +220,13 @@ void apply_alternatives(struct alt_instr *start, struct alt_instr *end) ...@@ -217,13 +220,13 @@ void apply_alternatives(struct alt_instr *start, struct alt_instr *end)
if (instr >= (u8 *)VSYSCALL_START && instr < (u8*)VSYSCALL_END) { if (instr >= (u8 *)VSYSCALL_START && instr < (u8*)VSYSCALL_END) {
instr = __va(instr - (u8*)VSYSCALL_START + (u8*)__pa_symbol(&__vsyscall_0)); instr = __va(instr - (u8*)VSYSCALL_START + (u8*)__pa_symbol(&__vsyscall_0));
DPRINTK("%s: vsyscall fixup: %p => %p\n", DPRINTK("%s: vsyscall fixup: %p => %p\n",
__FUNCTION__, a->instr, instr); __func__, a->instr, instr);
} }
#endif #endif
memcpy(insnbuf, a->replacement, a->replacementlen); memcpy(insnbuf, a->replacement, a->replacementlen);
add_nops(insnbuf + a->replacementlen, add_nops(insnbuf + a->replacementlen,
a->instrlen - a->replacementlen); a->instrlen - a->replacementlen);
text_poke(instr, insnbuf, a->instrlen); text_poke_early(instr, insnbuf, a->instrlen);
} }
} }
...@@ -284,7 +287,6 @@ void alternatives_smp_module_add(struct module *mod, char *name, ...@@ -284,7 +287,6 @@ void alternatives_smp_module_add(struct module *mod, char *name,
void *text, void *text_end) void *text, void *text_end)
{ {
struct smp_alt_module *smp; struct smp_alt_module *smp;
unsigned long flags;
if (noreplace_smp) if (noreplace_smp)
return; return;
...@@ -307,42 +309,40 @@ void alternatives_smp_module_add(struct module *mod, char *name, ...@@ -307,42 +309,40 @@ void alternatives_smp_module_add(struct module *mod, char *name,
smp->text = text; smp->text = text;
smp->text_end = text_end; smp->text_end = text_end;
DPRINTK("%s: locks %p -> %p, text %p -> %p, name %s\n", DPRINTK("%s: locks %p -> %p, text %p -> %p, name %s\n",
__FUNCTION__, smp->locks, smp->locks_end, __func__, smp->locks, smp->locks_end,
smp->text, smp->text_end, smp->name); smp->text, smp->text_end, smp->name);
spin_lock_irqsave(&smp_alt, flags); spin_lock(&smp_alt);
list_add_tail(&smp->next, &smp_alt_modules); list_add_tail(&smp->next, &smp_alt_modules);
if (boot_cpu_has(X86_FEATURE_UP)) if (boot_cpu_has(X86_FEATURE_UP))
alternatives_smp_unlock(smp->locks, smp->locks_end, alternatives_smp_unlock(smp->locks, smp->locks_end,
smp->text, smp->text_end); smp->text, smp->text_end);
spin_unlock_irqrestore(&smp_alt, flags); spin_unlock(&smp_alt);
} }
void alternatives_smp_module_del(struct module *mod) void alternatives_smp_module_del(struct module *mod)
{ {
struct smp_alt_module *item; struct smp_alt_module *item;
unsigned long flags;
if (smp_alt_once || noreplace_smp) if (smp_alt_once || noreplace_smp)
return; return;
spin_lock_irqsave(&smp_alt, flags); spin_lock(&smp_alt);
list_for_each_entry(item, &smp_alt_modules, next) { list_for_each_entry(item, &smp_alt_modules, next) {
if (mod != item->mod) if (mod != item->mod)
continue; continue;
list_del(&item->next); list_del(&item->next);
spin_unlock_irqrestore(&smp_alt, flags); spin_unlock(&smp_alt);
DPRINTK("%s: %s\n", __FUNCTION__, item->name); DPRINTK("%s: %s\n", __func__, item->name);
kfree(item); kfree(item);
return; return;
} }
spin_unlock_irqrestore(&smp_alt, flags); spin_unlock(&smp_alt);
} }
void alternatives_smp_switch(int smp) void alternatives_smp_switch(int smp)
{ {
struct smp_alt_module *mod; struct smp_alt_module *mod;
unsigned long flags;
#ifdef CONFIG_LOCKDEP #ifdef CONFIG_LOCKDEP
/* /*
...@@ -359,7 +359,7 @@ void alternatives_smp_switch(int smp) ...@@ -359,7 +359,7 @@ void alternatives_smp_switch(int smp)
return; return;
BUG_ON(!smp && (num_online_cpus() > 1)); BUG_ON(!smp && (num_online_cpus() > 1));
spin_lock_irqsave(&smp_alt, flags); spin_lock(&smp_alt);
/* /*
* Avoid unnecessary switches because it forces JIT based VMs to * Avoid unnecessary switches because it forces JIT based VMs to
...@@ -383,7 +383,7 @@ void alternatives_smp_switch(int smp) ...@@ -383,7 +383,7 @@ void alternatives_smp_switch(int smp)
mod->text, mod->text_end); mod->text, mod->text_end);
} }
smp_mode = smp; smp_mode = smp;
spin_unlock_irqrestore(&smp_alt, flags); spin_unlock(&smp_alt);
} }
#endif #endif
...@@ -411,7 +411,7 @@ void apply_paravirt(struct paravirt_patch_site *start, ...@@ -411,7 +411,7 @@ void apply_paravirt(struct paravirt_patch_site *start,
/* Pad the rest with nops */ /* Pad the rest with nops */
add_nops(insnbuf + used, p->len - used); add_nops(insnbuf + used, p->len - used);
text_poke(p->instr, insnbuf, p->len); text_poke_early(p->instr, insnbuf, p->len);
} }
} }
extern struct paravirt_patch_site __start_parainstructions[], extern struct paravirt_patch_site __start_parainstructions[],
...@@ -420,8 +420,6 @@ extern struct paravirt_patch_site __start_parainstructions[], ...@@ -420,8 +420,6 @@ extern struct paravirt_patch_site __start_parainstructions[],
void __init alternative_instructions(void) void __init alternative_instructions(void)
{ {
unsigned long flags;
/* The patching is not fully atomic, so try to avoid local interruptions /* The patching is not fully atomic, so try to avoid local interruptions
that might execute the to be patched code. that might execute the to be patched code.
Other CPUs are not running. */ Other CPUs are not running. */
...@@ -430,7 +428,6 @@ void __init alternative_instructions(void) ...@@ -430,7 +428,6 @@ void __init alternative_instructions(void)
stop_mce(); stop_mce();
#endif #endif
local_irq_save(flags);
apply_alternatives(__alt_instructions, __alt_instructions_end); apply_alternatives(__alt_instructions, __alt_instructions_end);
/* switch to patch-once-at-boottime-only mode and free the /* switch to patch-once-at-boottime-only mode and free the
...@@ -462,7 +459,6 @@ void __init alternative_instructions(void) ...@@ -462,7 +459,6 @@ void __init alternative_instructions(void)
} }
#endif #endif
apply_paravirt(__parainstructions, __parainstructions_end); apply_paravirt(__parainstructions, __parainstructions_end);
local_irq_restore(flags);
if (smp_alt_once) if (smp_alt_once)
free_init_pages("SMP alternatives", free_init_pages("SMP alternatives",
...@@ -475,18 +471,71 @@ void __init alternative_instructions(void) ...@@ -475,18 +471,71 @@ void __init alternative_instructions(void)
#endif #endif
} }
/* /**
* Warning: * text_poke_early - Update instructions on a live kernel at boot time
* @addr: address to modify
* @opcode: source of the copy
* @len: length to copy
*
* When you use this code to patch more than one byte of an instruction * When you use this code to patch more than one byte of an instruction
* you need to make sure that other CPUs cannot execute this code in parallel. * you need to make sure that other CPUs cannot execute this code in parallel.
* Also no thread must be currently preempted in the middle of these instructions. * Also no thread must be currently preempted in the middle of these
* And on the local CPU you need to be protected against NMI or MCE * instructions. And on the local CPU you need to be protected against NMI or MCE
* seeing an inconsistent instruction while you patch. * handlers seeing an inconsistent instruction while you patch.
*/ */
void __kprobes text_poke(void *addr, unsigned char *opcode, int len) void *text_poke_early(void *addr, const void *opcode, size_t len)
{ {
unsigned long flags;
local_irq_save(flags);
memcpy(addr, opcode, len); memcpy(addr, opcode, len);
local_irq_restore(flags);
sync_core();
/* Could also do a CLFLUSH here to speed up CPU recovery; but
that causes hangs on some VIA CPUs. */
return addr;
}
/**
* text_poke - Update instructions on a live kernel
* @addr: address to modify
* @opcode: source of the copy
* @len: length to copy
*
* Only atomic text poke/set should be allowed when not doing early patching.
* It means the patched bytes must be writable atomically and the address must be aligned
* in a way that permits an atomic write. It also makes sure we fit on a single
* page.
*/
void *__kprobes text_poke(void *addr, const void *opcode, size_t len)
{
unsigned long flags;
char *vaddr;
int nr_pages = 2;
BUG_ON(len > sizeof(long));
BUG_ON((((long)addr + len - 1) & ~(sizeof(long) - 1))
- ((long)addr & ~(sizeof(long) - 1)));
if (kernel_text_address((unsigned long)addr)) {
struct page *pages[2] = { virt_to_page(addr),
virt_to_page(addr + PAGE_SIZE) };
if (!pages[1])
nr_pages = 1;
vaddr = vmap(pages, nr_pages, VM_MAP, PAGE_KERNEL);
BUG_ON(!vaddr);
local_irq_save(flags);
memcpy(&vaddr[(unsigned long)addr & ~PAGE_MASK], opcode, len);
local_irq_restore(flags);
vunmap(vaddr);
} else {
/*
* modules are in vmalloc'ed memory, always writable.
*/
local_irq_save(flags);
memcpy(addr, opcode, len);
local_irq_restore(flags);
}
sync_core(); sync_core();
/* Could also do a CLFLUSH here to speed up CPU recovery; but /* Could also do a CLFLUSH here to speed up CPU recovery; but
that causes hangs on some VIA CPUs. */ that causes hangs on some VIA CPUs. */
return addr;
} }
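text_poke maps one or two pages depending on whether the patched bytes straddle a page boundary. Isolating that span arithmetic in a sketch — the kernel's version decides via `virt_to_page(addr + PAGE_SIZE)` instead, and the function name here is mine:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096u
#define PAGE_MASK (~(uintptr_t)(PAGE_SIZE - 1))

/* Pages touched by a write of [addr, addr+len); len <= sizeof(long) */
static int poke_nr_pages(uintptr_t addr, size_t len)
{
	uintptr_t first = addr & PAGE_MASK;
	uintptr_t last  = (addr + len - 1) & PAGE_MASK;

	return first == last ? 1 : 2;
}
```

Since text_poke caps len at `sizeof(long)`, a write can cross at most one page boundary, which is why mapping two pages always suffices.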
...@@ -27,11 +27,11 @@ ...@@ -27,11 +27,11 @@
#include <asm/k8.h> #include <asm/k8.h>
int gart_iommu_aperture; int gart_iommu_aperture;
int gart_iommu_aperture_disabled __initdata = 0; int gart_iommu_aperture_disabled __initdata;
int gart_iommu_aperture_allowed __initdata = 0; int gart_iommu_aperture_allowed __initdata;
int fallback_aper_order __initdata = 1; /* 64MB */ int fallback_aper_order __initdata = 1; /* 64MB */
int fallback_aper_force __initdata = 0; int fallback_aper_force __initdata;
int fix_aperture __initdata = 1; int fix_aperture __initdata = 1;
......
...@@ -50,6 +50,11 @@ ...@@ -50,6 +50,11 @@
# error SPURIOUS_APIC_VECTOR definition error # error SPURIOUS_APIC_VECTOR definition error
#endif #endif
unsigned long mp_lapic_addr;
DEFINE_PER_CPU(u16, x86_bios_cpu_apicid) = BAD_APICID;
EXPORT_PER_CPU_SYMBOL(x86_bios_cpu_apicid);
/* /*
* Knob to control our willingness to enable the local APIC. * Knob to control our willingness to enable the local APIC.
* *
...@@ -620,6 +625,35 @@ int setup_profiling_timer(unsigned int multiplier) ...@@ -620,6 +625,35 @@ int setup_profiling_timer(unsigned int multiplier)
return -EINVAL; return -EINVAL;
} }
/*
* Setup extended LVT, AMD specific (K8, family 10h)
*
* Vector mappings are hard coded. On K8 only offset 0 (APIC500) and
* MCE interrupts are supported. Thus MCE offset must be set to 0.
*/
#define APIC_EILVT_LVTOFF_MCE 0
#define APIC_EILVT_LVTOFF_IBS 1
static void setup_APIC_eilvt(u8 lvt_off, u8 vector, u8 msg_type, u8 mask)
{
unsigned long reg = (lvt_off << 4) + APIC_EILVT0;
unsigned int v = (mask << 16) | (msg_type << 8) | vector;
apic_write(reg, v);
}
u8 setup_APIC_eilvt_mce(u8 vector, u8 msg_type, u8 mask)
{
setup_APIC_eilvt(APIC_EILVT_LVTOFF_MCE, vector, msg_type, mask);
return APIC_EILVT_LVTOFF_MCE;
}
u8 setup_APIC_eilvt_ibs(u8 vector, u8 msg_type, u8 mask)
{
setup_APIC_eilvt(APIC_EILVT_LVTOFF_IBS, vector, msg_type, mask);
return APIC_EILVT_LVTOFF_IBS;
}
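setup_APIC_eilvt packs the vector, message type, and mask into one 32-bit extended-LVT register and derives the MMIO offset from the entry number (APIC_EILVT0 is 0x500 in the kernel headers). The bit layout, written out as standalone helpers — the function names here are mine:

```c
#include <assert.h>
#include <stdint.h>

#define APIC_EILVT0 0x500  /* offset of the first extended-LVT register */

/* MMIO register offset for extended-LVT entry lvt_off */
static uint32_t eilvt_reg(uint8_t lvt_off)
{
	return ((uint32_t)lvt_off << 4) + APIC_EILVT0;
}

/* Register value: mask in bits 16.., message type in 15..8, vector in 7..0 */
static uint32_t eilvt_val(uint8_t vector, uint8_t msg_type, uint8_t mask)
{
	return ((uint32_t)mask << 16) | ((uint32_t)msg_type << 8) | vector;
}
```

So the IBS entry (offset 1) lives at APIC register 0x510, and a masked entry with vector 0xf9 encodes as 0x100f9.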
/* /*
* Local APIC start and shutdown * Local APIC start and shutdown
*/ */
...@@ -868,12 +902,50 @@ void __init init_bsp_APIC(void) ...@@ -868,12 +902,50 @@ void __init init_bsp_APIC(void)
apic_write_around(APIC_LVT1, value); apic_write_around(APIC_LVT1, value);
} }
void __cpuinit lapic_setup_esr(void)
{
unsigned long oldvalue, value, maxlvt;
if (lapic_is_integrated() && !esr_disable) {
/* !82489DX */
maxlvt = lapic_get_maxlvt();
if (maxlvt > 3) /* Due to the Pentium erratum 3AP. */
apic_write(APIC_ESR, 0);
oldvalue = apic_read(APIC_ESR);
/* enables sending errors */
value = ERROR_APIC_VECTOR;
apic_write_around(APIC_LVTERR, value);
/*
* spec says clear errors after enabling vector.
*/
if (maxlvt > 3)
apic_write(APIC_ESR, 0);
value = apic_read(APIC_ESR);
if (value != oldvalue)
apic_printk(APIC_VERBOSE, "ESR value before enabling "
"vector: 0x%08lx after: 0x%08lx\n",
oldvalue, value);
} else {
if (esr_disable)
/*
* Something untraceable is creating bad interrupts on
* secondary quads ... for the moment, just leave the
* ESR disabled - we can't do anything useful with the
* errors anyway - mbligh
*/
printk(KERN_INFO "Leaving ESR disabled.\n");
else
printk(KERN_INFO "No ESR for 82489DX.\n");
}
}
/**
 * setup_local_APIC - setup the local APIC
 */
void __cpuinit setup_local_APIC(void)
{
-	unsigned long oldvalue, value, maxlvt, integrated;
+	unsigned long value, integrated;
	int i, j;
	/* Pound the ESR really hard over the head with a big hammer - mbligh */
@@ -997,40 +1069,13 @@ void __cpuinit setup_local_APIC(void)
	if (!integrated)		/* 82489DX */
		value |= APIC_LVT_LEVEL_TRIGGER;
	apic_write_around(APIC_LVT1, value);
+}
+
+void __cpuinit end_local_APIC_setup(void)
+{
+	unsigned long value;
-	if (integrated && !esr_disable) {
-		/* !82489DX */
-		maxlvt = lapic_get_maxlvt();
-		if (maxlvt > 3)		/* Due to the Pentium erratum 3AP. */
-			apic_write(APIC_ESR, 0);
-		oldvalue = apic_read(APIC_ESR);
-		/* enables sending errors */
-		value = ERROR_APIC_VECTOR;
-		apic_write_around(APIC_LVTERR, value);
-		/*
-		 * spec says clear errors after enabling vector.
-		 */
-		if (maxlvt > 3)
-			apic_write(APIC_ESR, 0);
-		value = apic_read(APIC_ESR);
-		if (value != oldvalue)
-			apic_printk(APIC_VERBOSE, "ESR value before enabling "
-				"vector: 0x%08lx after: 0x%08lx\n",
-				oldvalue, value);
-	} else {
-		if (esr_disable)
-			/*
-			 * Something untraceable is creating bad interrupts on
-			 * secondary quads ... for the moment, just leave the
-			 * ESR disabled - we can't do anything useful with the
-			 * errors anyway - mbligh
-			 */
-			printk(KERN_INFO "Leaving ESR disabled.\n");
-		else
-			printk(KERN_INFO "No ESR for 82489DX.\n");
-	}
+	lapic_setup_esr();
	/* Disable the local apic timer */
	value = apic_read(APIC_LVTT);
	value |= (APIC_LVT_MASKED | LOCAL_TIMER_VECTOR);
@@ -1147,7 +1192,7 @@ void __init init_apic_mappings(void)
	 * default configuration (or the MP table is broken).
	 */
	if (boot_cpu_physical_apicid == -1U)
-		boot_cpu_physical_apicid = GET_APIC_ID(apic_read(APIC_ID));
+		boot_cpu_physical_apicid = GET_APIC_ID(read_apic_id());
#ifdef CONFIG_X86_IO_APIC
	{
@@ -1185,6 +1230,9 @@ void __init init_apic_mappings(void)
 * This initializes the IO-APIC and APIC hardware if this is
 * a UP kernel.
 */
int apic_version[MAX_APICS];
int __init APIC_init_uniprocessor(void)
{
	if (enable_local_apic < 0)
@@ -1214,12 +1262,13 @@ int __init APIC_init_uniprocessor(void)
	 * might be zero if read from MP tables. Get it from LAPIC.
	 */
#ifdef CONFIG_CRASH_DUMP
-	boot_cpu_physical_apicid = GET_APIC_ID(apic_read(APIC_ID));
+	boot_cpu_physical_apicid = GET_APIC_ID(read_apic_id());
#endif
	phys_cpu_present_map = physid_mask_of_physid(boot_cpu_physical_apicid);
	setup_local_APIC();
+	end_local_APIC_setup();
#ifdef CONFIG_X86_IO_APIC
	if (smp_found_config)
		if (!skip_ioapic_setup && nr_ioapics)
@@ -1288,6 +1337,29 @@ void smp_error_interrupt(struct pt_regs *regs)
	irq_exit();
}
#ifdef CONFIG_SMP
void __init smp_intr_init(void)
{
/*
* IRQ0 must be given a fixed assignment and initialized,
* because it's used before the IO-APIC is set up.
*/
set_intr_gate(FIRST_DEVICE_VECTOR, interrupt[0]);
/*
* The reschedule interrupt is a CPU-to-CPU reschedule-helper
* IPI, driven by wakeup.
*/
set_intr_gate(RESCHEDULE_VECTOR, reschedule_interrupt);
/* IPI for invalidation */
set_intr_gate(INVALIDATE_TLB_VECTOR, invalidate_interrupt);
/* IPI for generic function call */
set_intr_gate(CALL_FUNCTION_VECTOR, call_function_interrupt);
}
#endif
/*
 * Initialize APIC interrupts
 */
@@ -1394,6 +1466,88 @@ void disconnect_bsp_APIC(int virt_wire_setup)
	}
}
unsigned int __cpuinitdata maxcpus = NR_CPUS;
void __cpuinit generic_processor_info(int apicid, int version)
{
int cpu;
cpumask_t tmp_map;
physid_mask_t phys_cpu;
/*
* Validate version
*/
if (version == 0x0) {
printk(KERN_WARNING "BIOS bug, APIC version is 0 for CPU#%d! "
"fixing up to 0x10. (tell your hw vendor)\n",
version);
version = 0x10;
}
apic_version[apicid] = version;
phys_cpu = apicid_to_cpu_present(apicid);
physids_or(phys_cpu_present_map, phys_cpu_present_map, phys_cpu);
if (num_processors >= NR_CPUS) {
printk(KERN_WARNING "WARNING: NR_CPUS limit of %i reached."
" Processor ignored.\n", NR_CPUS);
return;
}
if (num_processors >= maxcpus) {
printk(KERN_WARNING "WARNING: maxcpus limit of %i reached."
" Processor ignored.\n", maxcpus);
return;
}
num_processors++;
cpus_complement(tmp_map, cpu_present_map);
cpu = first_cpu(tmp_map);
if (apicid == boot_cpu_physical_apicid)
/*
* x86_bios_cpu_apicid is required to have processors listed
* in same order as logical cpu numbers. Hence the first
* entry is BSP, and so on.
*/
cpu = 0;
/*
* Would be preferable to switch to bigsmp when CONFIG_HOTPLUG_CPU=y
* but we need to work other dependencies like SMP_SUSPEND etc
* before this can be done without some confusion.
* if (CPU_HOTPLUG_ENABLED || num_processors > 8)
* - Ashok Raj <ashok.raj@intel.com>
*/
if (num_processors > 8) {
switch (boot_cpu_data.x86_vendor) {
case X86_VENDOR_INTEL:
if (!APIC_XAPIC(version)) {
def_to_bigsmp = 0;
break;
}
/* If P4 and above fall through */
case X86_VENDOR_AMD:
def_to_bigsmp = 1;
}
}
#ifdef CONFIG_SMP
/* are we being called early in kernel startup? */
if (x86_cpu_to_apicid_early_ptr) {
u16 *cpu_to_apicid = x86_cpu_to_apicid_early_ptr;
u16 *bios_cpu_apicid = x86_bios_cpu_apicid_early_ptr;
cpu_to_apicid[cpu] = apicid;
bios_cpu_apicid[cpu] = apicid;
} else {
per_cpu(x86_cpu_to_apicid, cpu) = apicid;
per_cpu(x86_bios_cpu_apicid, cpu) = apicid;
}
#endif
cpu_set(cpu, cpu_possible_map);
cpu_set(cpu, cpu_present_map);
}
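The cpu-number selection at the top of generic_processor_info() above (complement the present map, take the first set bit) can be sketched with a plain 32-bit mask standing in for cpumask_t; first_free_cpu() is a hypothetical name for illustration, not a kernel API.

```c
#include <assert.h>

/*
 * Stand-in for cpus_complement() + first_cpu() on a 32-bit mask:
 * returns the lowest cpu number not yet marked present, or -1 if
 * every one of the 32 slots is already taken.
 */
static int first_free_cpu(unsigned int present_map)
{
	unsigned int tmp = ~present_map;	/* cpus_complement() */
	int cpu;

	for (cpu = 0; cpu < 32; cpu++)		/* first_cpu() */
		if (tmp & (1u << cpu))
			return cpu;
	return -1;
}
```

As in the kernel code, the boot CPU bypasses this and is forced to slot 0 so that logical cpu numbers stay in enumeration order.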
/*
 * Power management
 */
@@ -34,13 +34,15 @@
#include <asm/mpspec.h>
#include <asm/hpet.h>
#include <asm/pgalloc.h>
-#include <asm/mach_apic.h>
#include <asm/nmi.h>
#include <asm/idle.h>
#include <asm/proto.h>
#include <asm/timex.h>
#include <asm/apic.h>
+#include <mach_ipi.h>
+#include <mach_apic.h>
int disable_apic_timer __cpuinitdata;
static int apic_calibrate_pmtmr __initdata;
int disable_apic;
@@ -83,6 +85,12 @@ static DEFINE_PER_CPU(struct clock_event_device, lapic_events);
static unsigned long apic_phys;
unsigned long mp_lapic_addr;
DEFINE_PER_CPU(u16, x86_bios_cpu_apicid) = BAD_APICID;
EXPORT_PER_CPU_SYMBOL(x86_bios_cpu_apicid);
unsigned int __cpuinitdata maxcpus = NR_CPUS;
/*
 * Get the LAPIC version
 */
@@ -431,7 +439,8 @@ void __cpuinit check_boot_apic_timer_broadcast(void)
	lapic_clockevent.features |= CLOCK_EVT_FEAT_DUMMY;
	local_irq_enable();
-	clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_FORCE, &boot_cpu_id);
+	clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_FORCE,
+			   &boot_cpu_physical_apicid);
	local_irq_disable();
}
@@ -640,10 +649,10 @@ int __init verify_local_APIC(void)
/*
 * The ID register is read/write in a real APIC.
 */
-	reg0 = apic_read(APIC_ID);
+	reg0 = read_apic_id();
	apic_printk(APIC_DEBUG, "Getting ID: %x\n", reg0);
	apic_write(APIC_ID, reg0 ^ APIC_ID_MASK);
-	reg1 = apic_read(APIC_ID);
+	reg1 = read_apic_id();
	apic_printk(APIC_DEBUG, "Getting ID: %x\n", reg1);
	apic_write(APIC_ID, reg0);
	if (reg1 != (reg0 ^ APIC_ID_MASK))
@@ -728,6 +737,7 @@ void __cpuinit setup_local_APIC(void)
	unsigned int value;
	int i, j;
+	preempt_disable();
	value = apic_read(APIC_LVR);
	BUILD_BUG_ON((SPURIOUS_APIC_VECTOR & 0x0f) != 0x0f);
@@ -821,6 +831,7 @@ void __cpuinit setup_local_APIC(void)
	else
		value = APIC_DM_NMI | APIC_LVT_MASKED;
	apic_write(APIC_LVT1, value);
+	preempt_enable();
}
void __cpuinit lapic_setup_esr(void)
@@ -857,10 +868,34 @@ static int __init detect_init_APIC(void)
	}
	mp_lapic_addr = APIC_DEFAULT_PHYS_BASE;
-	boot_cpu_id = 0;
+	boot_cpu_physical_apicid = 0;
	return 0;
}
void __init early_init_lapic_mapping(void)
{
unsigned long apic_phys;
/*
	 * If no local APIC can be found then bail out:
	 * it means there is no MP table and no MADT
*/
if (!smp_found_config)
return;
apic_phys = mp_lapic_addr;
set_fixmap_nocache(FIX_APIC_BASE, apic_phys);
apic_printk(APIC_VERBOSE, "mapped APIC to %16lx (%16lx)\n",
APIC_BASE, apic_phys);
/*
* Fetch the APIC ID of the BSP in case we have a
* default configuration (or the MP table is broken).
*/
boot_cpu_physical_apicid = GET_APIC_ID(read_apic_id());
}
/**
 * init_apic_mappings - initialize APIC mappings
 */
@@ -881,16 +916,11 @@ void __init init_apic_mappings(void)
	apic_printk(APIC_VERBOSE, "mapped APIC to %16lx (%16lx)\n",
		    APIC_BASE, apic_phys);
-	/* Put local APIC into the resource map. */
-	lapic_resource.start = apic_phys;
-	lapic_resource.end = lapic_resource.start + PAGE_SIZE - 1;
-	insert_resource(&iomem_resource, &lapic_resource);
	/*
	 * Fetch the APIC ID of the BSP in case we have a
	 * default configuration (or the MP table is broken).
	 */
-	boot_cpu_id = GET_APIC_ID(apic_read(APIC_ID));
+	boot_cpu_physical_apicid = GET_APIC_ID(read_apic_id());
}
/*
@@ -911,8 +941,8 @@ int __init APIC_init_uniprocessor(void)
	verify_local_APIC();
-	phys_cpu_present_map = physid_mask_of_physid(boot_cpu_id);
-	apic_write(APIC_ID, SET_APIC_ID(boot_cpu_id));
+	phys_cpu_present_map = physid_mask_of_physid(boot_cpu_physical_apicid);
+	apic_write(APIC_ID, SET_APIC_ID(boot_cpu_physical_apicid));
	setup_local_APIC();
@@ -1029,6 +1059,52 @@ void disconnect_bsp_APIC(int virt_wire_setup)
	apic_write(APIC_LVT1, value);
}
void __cpuinit generic_processor_info(int apicid, int version)
{
int cpu;
cpumask_t tmp_map;
if (num_processors >= NR_CPUS) {
printk(KERN_WARNING "WARNING: NR_CPUS limit of %i reached."
" Processor ignored.\n", NR_CPUS);
return;
}
if (num_processors >= maxcpus) {
printk(KERN_WARNING "WARNING: maxcpus limit of %i reached."
" Processor ignored.\n", maxcpus);
return;
}
num_processors++;
cpus_complement(tmp_map, cpu_present_map);
cpu = first_cpu(tmp_map);
physid_set(apicid, phys_cpu_present_map);
if (apicid == boot_cpu_physical_apicid) {
/*
* x86_bios_cpu_apicid is required to have processors listed
* in same order as logical cpu numbers. Hence the first
* entry is BSP, and so on.
*/
cpu = 0;
}
/* are we being called early in kernel startup? */
if (x86_cpu_to_apicid_early_ptr) {
u16 *cpu_to_apicid = x86_cpu_to_apicid_early_ptr;
u16 *bios_cpu_apicid = x86_bios_cpu_apicid_early_ptr;
cpu_to_apicid[cpu] = apicid;
bios_cpu_apicid[cpu] = apicid;
} else {
per_cpu(x86_cpu_to_apicid, cpu) = apicid;
per_cpu(x86_bios_cpu_apicid, cpu) = apicid;
}
cpu_set(cpu, cpu_possible_map);
cpu_set(cpu, cpu_present_map);
}
/*
 * Power management
 */
@@ -1065,7 +1141,7 @@ static int lapic_suspend(struct sys_device *dev, pm_message_t state)
	maxlvt = lapic_get_maxlvt();
-	apic_pm_state.apic_id = apic_read(APIC_ID);
+	apic_pm_state.apic_id = read_apic_id();
	apic_pm_state.apic_taskpri = apic_read(APIC_TASKPRI);
	apic_pm_state.apic_ldr = apic_read(APIC_LDR);
	apic_pm_state.apic_dfr = apic_read(APIC_DFR);
@@ -1180,9 +1256,19 @@ __cpuinit int apic_is_clustered_box(void)
{
	int i, clusters, zeros;
	unsigned id;
-	u16 *bios_cpu_apicid = x86_bios_cpu_apicid_early_ptr;
+	u16 *bios_cpu_apicid;
	DECLARE_BITMAP(clustermap, NUM_APIC_CLUSTERS);
/*
	 * There are no boxes like this with AMD CPUs yet. Some AMD boxes
	 * with quad-core CPUs and 8 sockets have APIC IDs in [4, 0x23] or
	 * [8, 0x27] and could be mistaken for a vSMP box; this still
	 * needs checking...
*/
if ((boot_cpu_data.x86_vendor == X86_VENDOR_AMD) && !is_vsmp_box())
return 0;
bios_cpu_apicid = x86_bios_cpu_apicid_early_ptr;
	bitmap_zero(clustermap, NUM_APIC_CLUSTERS);
	for (i = 0; i < NR_CPUS; i++) {
@@ -1219,6 +1305,12 @@ __cpuinit int apic_is_clustered_box(void)
			++zeros;
	}
/* ScaleMP vSMPowered boxes have one cluster per board and TSCs are
* not guaranteed to be synced between boards
*/
if (is_vsmp_box() && clusters > 1)
return 1;
	/*
	 * If clusters > 2, then should be multi-chassis.
	 * May have to revisit this when multi-core + hyperthreaded CPUs come
@@ -1290,3 +1382,21 @@ static __init int setup_apicpmtimer(char *s)
}
__setup("apicpmtimer", setup_apicpmtimer);
static int __init lapic_insert_resource(void)
{
if (!apic_phys)
return -1;
/* Put local APIC into the resource map. */
lapic_resource.start = apic_phys;
lapic_resource.end = lapic_resource.start + PAGE_SIZE - 1;
insert_resource(&iomem_resource, &lapic_resource);
return 0;
}
/*
* need call insert after e820_reserve_resources()
* that is using request_resource
*/
late_initcall(lapic_insert_resource);
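lapic_insert_resource() above claims exactly one page for the local APIC, and the inclusive end address it computes can be sanity-checked on its own. A minimal sketch, assuming the usual 4 KiB page size and the conventional 0xFEE00000 APIC base:

```c
#include <assert.h>

#define PAGE_SIZE 4096UL	/* assumed; matches common x86 configurations */

/* struct resource ranges are inclusive on both ends, hence the "- 1". */
static unsigned long lapic_resource_end(unsigned long start)
{
	return start + PAGE_SIZE - 1;
}
```

A one-page resource starting at 0xFEE00000 therefore ends at 0xFEE00FFF, covering 4096 bytes total.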
@@ -2217,7 +2217,6 @@ static struct dmi_system_id __initdata apm_dmi_table[] = {
 */
static int __init apm_init(void)
{
-	struct proc_dir_entry *apm_proc;
	struct desc_struct *gdt;
	int err;
@@ -2322,9 +2321,7 @@ static int __init apm_init(void)
	set_base(gdt[APM_DS >> 3],
		 __va((unsigned long)apm_info.bios.dseg << 4));
-	apm_proc = create_proc_entry("apm", 0, NULL);
-	if (apm_proc)
-		apm_proc->proc_fops = &apm_file_ops;
+	proc_create("apm", 0, NULL, &apm_file_ops);
	kapmd_task = kthread_create(apm, NULL, "kapmd");
	if (IS_ERR(kapmd_task)) {
@@ -10,7 +10,7 @@
#include <linux/personality.h>
#include <linux/suspend.h>
#include <asm/ucontext.h>
-#include "sigframe_32.h"
+#include "sigframe.h"
#include <asm/pgtable.h>
#include <asm/fixmap.h>
#include <asm/processor.h>
@@ -9,13 +9,25 @@
#include <asm/bugs.h>
#include <asm/processor.h>
#include <asm/mtrr.h>
+#include <asm/cacheflush.h>
void __init check_bugs(void)
{
-	identify_cpu(&boot_cpu_data);
+	identify_boot_cpu();
#if !defined(CONFIG_SMP)
	printk("CPU: ");
	print_cpu_info(&boot_cpu_data);
#endif
	alternative_instructions();
/*
* Make sure the first 2MB area is not mapped by huge pages
* There are typically fixed size MTRRs in there and overlapping
* MTRRs into large pages causes slow downs.
*
* Right now we don't do that with gbpages because there seems
* very little benefit for that case.
*/
if (!direct_gbpages)
set_memory_4k((unsigned long)__va(0), 1);
}
@@ -3,9 +3,9 @@
#
obj-y	:= intel_cacheinfo.o addon_cpuid_features.o
-obj-y	+= feature_names.o
+obj-y	+= proc.o feature_names.o
-obj-$(CONFIG_X86_32) += common.o proc.o bugs.o
+obj-$(CONFIG_X86_32) += common.o bugs.o
obj-$(CONFIG_X86_32) += amd.o
obj-$(CONFIG_X86_32) += cyrix.o
obj-$(CONFIG_X86_32) += centaur.o
@@ -4,8 +4,8 @@
#include <asm/io.h>
#include <asm/processor.h>
#include <asm/apic.h>
-#include <asm/mach_apic.h>
+#include <mach_apic.h>
#include "cpu.h"
/*
@@ -20,7 +20,7 @@
 * the chip setting when fixing the bug but they also tweaked some
 * performance at the same time..
 */
extern void vide(void);
__asm__(".align 4\nvide: ret");
@@ -63,12 +63,12 @@ static __cpuinit int amd_apic_timer_broken(void)
int force_mwait __cpuinitdata;
-void __cpuinit early_init_amd(struct cpuinfo_x86 *c)
+static void __cpuinit early_init_amd(struct cpuinfo_x86 *c)
{
	if (cpuid_eax(0x80000000) >= 0x80000007) {
		c->x86_power = cpuid_edx(0x80000007);
		if (c->x86_power & (1<<8))
-			set_bit(X86_FEATURE_CONSTANT_TSC, c->x86_capability);
+			set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
	}
}
@@ -81,7 +81,8 @@ static void __cpuinit init_amd(struct cpuinfo_x86 *c)
#ifdef CONFIG_SMP
	unsigned long long value;
-	/* Disable TLB flush filter by setting HWCR.FFDIS on K8
+	/*
+	 * Disable TLB flush filter by setting HWCR.FFDIS on K8
	 * bit 6 of msr C001_0015
	 *
	 * Errata 63 for SH-B3 steppings
@@ -102,15 +103,16 @@ static void __cpuinit init_amd(struct cpuinfo_x86 *c)
	 * no bus pipeline)
	 */
-	/* Bit 31 in normal CPUID used for nonstandard 3DNow ID;
-	   3DNow is IDd by bit 31 in extended CPUID (1*32+31) anyway */
-	clear_bit(0*32+31, c->x86_capability);
+	/*
+	 * Bit 31 in normal CPUID used for nonstandard 3DNow ID;
+	 * 3DNow is IDd by bit 31 in extended CPUID (1*32+31) anyway
+	 */
+	clear_cpu_cap(c, 0*32+31);
	r = get_model_name(c);
-	switch(c->x86)
-	{
-		case 4:
+	switch (c->x86) {
+	case 4:
	/*
	 * General Systems BIOSen alias the cpu frequency registers
	 * of the Elan at 0x000df000. Unfortuantly, one of the Linux
@@ -120,61 +122,60 @@ static void __cpuinit init_amd(struct cpuinfo_x86 *c)
#define CBAR (0xfffc) /* Configuration Base Address (32-bit) */ #define CBAR (0xfffc) /* Configuration Base Address (32-bit) */
#define CBAR_ENB (0x80000000) #define CBAR_ENB (0x80000000)
#define CBAR_KEY (0X000000CB) #define CBAR_KEY (0X000000CB)
if (c->x86_model==9 || c->x86_model == 10) { if (c->x86_model == 9 || c->x86_model == 10) {
if (inl (CBAR) & CBAR_ENB) if (inl (CBAR) & CBAR_ENB)
outl (0 | CBAR_KEY, CBAR); outl (0 | CBAR_KEY, CBAR);
} }
break; break;
case 5: case 5:
if( c->x86_model < 6 ) if (c->x86_model < 6) {
{
/* Based on AMD doc 20734R - June 2000 */ /* Based on AMD doc 20734R - June 2000 */
if ( c->x86_model == 0 ) { if (c->x86_model == 0) {
clear_bit(X86_FEATURE_APIC, c->x86_capability); clear_cpu_cap(c, X86_FEATURE_APIC);
set_bit(X86_FEATURE_PGE, c->x86_capability); set_cpu_cap(c, X86_FEATURE_PGE);
} }
break; break;
} }
if ( c->x86_model == 6 && c->x86_mask == 1 ) { if (c->x86_model == 6 && c->x86_mask == 1) {
const int K6_BUG_LOOP = 1000000; const int K6_BUG_LOOP = 1000000;
int n; int n;
void (*f_vide)(void); void (*f_vide)(void);
unsigned long d, d2; unsigned long d, d2;
printk(KERN_INFO "AMD K6 stepping B detected - "); printk(KERN_INFO "AMD K6 stepping B detected - ");
/* /*
* It looks like AMD fixed the 2.6.2 bug and improved indirect * It looks like AMD fixed the 2.6.2 bug and improved indirect
* calls at the same time. * calls at the same time.
*/ */
n = K6_BUG_LOOP; n = K6_BUG_LOOP;
f_vide = vide; f_vide = vide;
rdtscl(d); rdtscl(d);
while (n--) while (n--)
f_vide(); f_vide();
rdtscl(d2); rdtscl(d2);
d = d2-d; d = d2-d;
if (d > 20*K6_BUG_LOOP) if (d > 20*K6_BUG_LOOP)
printk("system stability may be impaired when more than 32 MB are used.\n"); printk("system stability may be impaired when more than 32 MB are used.\n");
else else
printk("probably OK (after B9730xxxx).\n"); printk("probably OK (after B9730xxxx).\n");
printk(KERN_INFO "Please see http://membres.lycos.fr/poulot/k6bug.html\n"); printk(KERN_INFO "Please see http://membres.lycos.fr/poulot/k6bug.html\n");
} }
/* K6 with old style WHCR */ /* K6 with old style WHCR */
if (c->x86_model < 8 || if (c->x86_model < 8 ||
(c->x86_model== 8 && c->x86_mask < 8)) { (c->x86_model == 8 && c->x86_mask < 8)) {
/* We can only write allocate on the low 508Mb */ /* We can only write allocate on the low 508Mb */
if(mbytes>508) if (mbytes > 508)
mbytes=508; mbytes = 508;
rdmsr(MSR_K6_WHCR, l, h); rdmsr(MSR_K6_WHCR, l, h);
if ((l&0x0000FFFF)==0) { if ((l&0x0000FFFF) == 0) {
unsigned long flags; unsigned long flags;
l=(1<<0)|((mbytes/4)<<1); l = (1<<0)|((mbytes/4)<<1);
local_irq_save(flags); local_irq_save(flags);
wbinvd(); wbinvd();
wrmsr(MSR_K6_WHCR, l, h); wrmsr(MSR_K6_WHCR, l, h);
...@@ -185,17 +186,17 @@ static void __cpuinit init_amd(struct cpuinfo_x86 *c) ...@@ -185,17 +186,17 @@ static void __cpuinit init_amd(struct cpuinfo_x86 *c)
break; break;
} }
if ((c->x86_model == 8 && c->x86_mask >7) || if ((c->x86_model == 8 && c->x86_mask > 7) ||
c->x86_model == 9 || c->x86_model == 13) { c->x86_model == 9 || c->x86_model == 13) {
/* The more serious chips .. */ /* The more serious chips .. */
if(mbytes>4092) if (mbytes > 4092)
mbytes=4092; mbytes = 4092;
rdmsr(MSR_K6_WHCR, l, h); rdmsr(MSR_K6_WHCR, l, h);
if ((l&0xFFFF0000)==0) { if ((l&0xFFFF0000) == 0) {
unsigned long flags; unsigned long flags;
l=((mbytes>>2)<<22)|(1<<16); l = ((mbytes>>2)<<22)|(1<<16);
local_irq_save(flags); local_irq_save(flags);
wbinvd(); wbinvd();
wrmsr(MSR_K6_WHCR, l, h); wrmsr(MSR_K6_WHCR, l, h);
...@@ -207,7 +208,7 @@ static void __cpuinit init_amd(struct cpuinfo_x86 *c) ...@@ -207,7 +208,7 @@ static void __cpuinit init_amd(struct cpuinfo_x86 *c)
/* Set MTRR capability flag if appropriate */ /* Set MTRR capability flag if appropriate */
if (c->x86_model == 13 || c->x86_model == 9 || if (c->x86_model == 13 || c->x86_model == 9 ||
(c->x86_model == 8 && c->x86_mask >= 8)) (c->x86_model == 8 && c->x86_mask >= 8))
set_bit(X86_FEATURE_K6_MTRR, c->x86_capability); set_cpu_cap(c, X86_FEATURE_K6_MTRR);
break; break;
} }
...@@ -217,10 +218,11 @@ static void __cpuinit init_amd(struct cpuinfo_x86 *c) ...@@ -217,10 +218,11 @@ static void __cpuinit init_amd(struct cpuinfo_x86 *c)
break; break;
} }
break; break;
case 6: /* An Athlon/Duron */ case 6: /* An Athlon/Duron */
/* Bit 15 of Athlon specific MSR 15, needs to be 0 /*
* to enable SSE on Palomino/Morgan/Barton CPU's. * Bit 15 of Athlon specific MSR 15, needs to be 0
* to enable SSE on Palomino/Morgan/Barton CPU's.
* If the BIOS didn't enable it already, enable it here. * If the BIOS didn't enable it already, enable it here.
*/ */
if (c->x86_model >= 6 && c->x86_model <= 10) { if (c->x86_model >= 6 && c->x86_model <= 10) {
...@@ -229,15 +231,16 @@ static void __cpuinit init_amd(struct cpuinfo_x86 *c) ...@@ -229,15 +231,16 @@ static void __cpuinit init_amd(struct cpuinfo_x86 *c)
rdmsr(MSR_K7_HWCR, l, h); rdmsr(MSR_K7_HWCR, l, h);
l &= ~0x00008000; l &= ~0x00008000;
wrmsr(MSR_K7_HWCR, l, h); wrmsr(MSR_K7_HWCR, l, h);
set_bit(X86_FEATURE_XMM, c->x86_capability); set_cpu_cap(c, X86_FEATURE_XMM);
} }
} }
/* It's been determined by AMD that Athlons since model 8 stepping 1 /*
* It's been determined by AMD that Athlons since model 8 stepping 1
* are more robust with CLK_CTL set to 200xxxxx instead of 600xxxxx * are more robust with CLK_CTL set to 200xxxxx instead of 600xxxxx
* As per AMD technical note 27212 0.2 * As per AMD technical note 27212 0.2
*/ */
if ((c->x86_model == 8 && c->x86_mask>=1) || (c->x86_model > 8)) { if ((c->x86_model == 8 && c->x86_mask >= 1) || (c->x86_model > 8)) {
rdmsr(MSR_K7_CLK_CTL, l, h); rdmsr(MSR_K7_CLK_CTL, l, h);
if ((l & 0xfff00000) != 0x20000000) { if ((l & 0xfff00000) != 0x20000000) {
printk ("CPU: CLK_CTL MSR was %x. Reprogramming to %x\n", l, printk ("CPU: CLK_CTL MSR was %x. Reprogramming to %x\n", l,
...@@ -253,20 +256,19 @@ static void __cpuinit init_amd(struct cpuinfo_x86 *c) ...@@ -253,20 +256,19 @@ static void __cpuinit init_amd(struct cpuinfo_x86 *c)
/* Use K8 tuning for Fam10h and Fam11h */ /* Use K8 tuning for Fam10h and Fam11h */
case 0x10: case 0x10:
case 0x11: case 0x11:
set_bit(X86_FEATURE_K8, c->x86_capability); set_cpu_cap(c, X86_FEATURE_K8);
break; break;
case 6: case 6:
set_bit(X86_FEATURE_K7, c->x86_capability); set_cpu_cap(c, X86_FEATURE_K7);
break; break;
} }
if (c->x86 >= 6) if (c->x86 >= 6)
set_bit(X86_FEATURE_FXSAVE_LEAK, c->x86_capability); set_cpu_cap(c, X86_FEATURE_FXSAVE_LEAK);
display_cacheinfo(c); display_cacheinfo(c);
if (cpuid_eax(0x80000000) >= 0x80000008) { if (cpuid_eax(0x80000000) >= 0x80000008)
c->x86_max_cores = (cpuid_ecx(0x80000008) & 0xff) + 1; c->x86_max_cores = (cpuid_ecx(0x80000008) & 0xff) + 1;
}
#ifdef CONFIG_X86_HT #ifdef CONFIG_X86_HT
/* /*
...@@ -302,20 +304,20 @@ static void __cpuinit init_amd(struct cpuinfo_x86 *c) ...@@ -302,20 +304,20 @@ static void __cpuinit init_amd(struct cpuinfo_x86 *c)
/* K6s reports MCEs but don't actually have all the MSRs */ /* K6s reports MCEs but don't actually have all the MSRs */
if (c->x86 < 6) if (c->x86 < 6)
clear_bit(X86_FEATURE_MCE, c->x86_capability); clear_cpu_cap(c, X86_FEATURE_MCE);
if (cpu_has_xmm2) if (cpu_has_xmm2)
set_bit(X86_FEATURE_MFENCE_RDTSC, c->x86_capability); set_cpu_cap(c, X86_FEATURE_MFENCE_RDTSC);
} }
static unsigned int __cpuinit amd_size_cache(struct cpuinfo_x86 * c, unsigned int size) static unsigned int __cpuinit amd_size_cache(struct cpuinfo_x86 *c, unsigned int size)
{ {
/* AMD errata T13 (order #21922) */ /* AMD errata T13 (order #21922) */
if ((c->x86 == 6)) { if ((c->x86 == 6)) {
if (c->x86_model == 3 && c->x86_mask == 0) /* Duron Rev A0 */ if (c->x86_model == 3 && c->x86_mask == 0) /* Duron Rev A0 */
size = 64; size = 64;
if (c->x86_model == 4 && if (c->x86_model == 4 &&
(c->x86_mask==0 || c->x86_mask==1)) /* Tbird rev A1/A2 */ (c->x86_mask == 0 || c->x86_mask == 1)) /* Tbird rev A1/A2 */
size = 256; size = 256;
} }
return size; return size;
...@@ -323,19 +325,20 @@ static unsigned int __cpuinit amd_size_cache(struct cpuinfo_x86 * c, unsigned in ...@@ -323,19 +325,20 @@ static unsigned int __cpuinit amd_size_cache(struct cpuinfo_x86 * c, unsigned in
static struct cpu_dev amd_cpu_dev __cpuinitdata = {
	.c_vendor	= "AMD",
	.c_ident	= { "AuthenticAMD" },
	.c_models = {
		{ .vendor = X86_VENDOR_AMD, .family = 4, .model_names =
		  {
			  [3] = "486 DX/2",
			  [7] = "486 DX/2-WB",
			  [8] = "486 DX/4",
			  [9] = "486 DX/4-WB",
			  [14] = "Am5x86-WT",
			  [15] = "Am5x86-WB"
		  }
		},
	},
	.c_early_init	= early_init_amd,
	.c_init		= init_amd,
	.c_size_cache	= amd_size_cache,
};
@@ -345,3 +348,5 @@ int __init amd_init_cpu(void)
	cpu_devs[X86_VENDOR_AMD] = &amd_cpu_dev;
	return 0;
}
cpu_vendor_dev_register(X86_VENDOR_AMD, &amd_cpu_dev);
@@ -14,6 +14,7 @@ struct cpu_dev {
	struct cpu_model_info	c_models[4];
+	void		(*c_early_init)(struct cpuinfo_x86 *c);
	void		(*c_init)(struct cpuinfo_x86 * c);
	void		(*c_identify)(struct cpuinfo_x86 * c);
	unsigned int	(*c_size_cache)(struct cpuinfo_x86 * c, unsigned int size);
@@ -21,18 +22,17 @@ struct cpu_dev {
extern struct cpu_dev * cpu_devs [X86_VENDOR_NUM];
struct cpu_vendor_dev {
int vendor;
struct cpu_dev *cpu_dev;
};
#define cpu_vendor_dev_register(cpu_vendor_id, cpu_dev) \
static struct cpu_vendor_dev __cpu_vendor_dev_##cpu_vendor_id __used \
__attribute__((__section__(".x86cpuvendor.init"))) = \
{ cpu_vendor_id, cpu_dev }
extern struct cpu_vendor_dev __x86cpuvendor_start[], __x86cpuvendor_end[];
extern int get_model_name(struct cpuinfo_x86 *c);
extern void display_cacheinfo(struct cpuinfo_x86 *c);
extern void early_init_intel(struct cpuinfo_x86 *c);
extern void early_init_amd(struct cpuinfo_x86 *c);
/* Specific CPU type init functions */
int intel_cpu_init(void);
int amd_init_cpu(void);
int cyrix_init_cpu(void);
int nsc_init_cpu(void);
int centaur_init_cpu(void);
int transmeta_init_cpu(void);
int nexgen_init_cpu(void);
int umc_init_cpu(void);
@@ -4,7 +4,7 @@
  * This file must not contain any executable code.
  */
 
-#include "asm/cpufeature.h"
+#include <asm/cpufeature.h>
 
 /*
  * These flag bits must match the definitions in <asm/cpufeature.h>.
...
@@ -30,7 +30,7 @@
 struct movsl_mask movsl_mask __read_mostly;
 #endif
 
-void __cpuinit early_init_intel(struct cpuinfo_x86 *c)
+static void __cpuinit early_init_intel(struct cpuinfo_x86 *c)
 {
 	/* Netburst reports 64 bytes clflush size, but does IO in 128 bytes */
 	if (c->x86 == 15 && c->x86_cache_alignment == 64)
@@ -45,7 +45,7 @@ void __cpuinit early_init_intel(struct cpuinfo_x86 *c)
  *
  *	This is called before we do cpu ident work
  */
 
 int __cpuinit ppro_with_ram_bug(void)
 {
 	/* Uses data from early_cpu_detect now */
@@ -58,7 +58,7 @@ int __cpuinit ppro_with_ram_bug(void)
 	}
 	return 0;
 }
 
 /*
  * P4 Xeon errata 037 workaround.
@@ -69,7 +69,7 @@ static void __cpuinit Intel_errata_workarounds(struct cpuinfo_x86 *c)
 	unsigned long lo, hi;
 
 	if ((c->x86 == 15) && (c->x86_model == 1) && (c->x86_mask == 1)) {
-		rdmsr (MSR_IA32_MISC_ENABLE, lo, hi);
+		rdmsr(MSR_IA32_MISC_ENABLE, lo, hi);
 		if ((lo & (1<<9)) == 0) {
 			printk (KERN_INFO "CPU: C0 stepping P4 Xeon detected.\n");
 			printk (KERN_INFO "CPU: Disabling hardware prefetching (Errata 037)\n");
@@ -127,10 +127,10 @@ static void __cpuinit init_intel(struct cpuinfo_x86 *c)
 	 */
 	c->f00f_bug = 0;
 	if (!paravirt_enabled() && c->x86 == 5) {
-		static int f00f_workaround_enabled = 0;
+		static int f00f_workaround_enabled;
 
 		c->f00f_bug = 1;
-		if ( !f00f_workaround_enabled ) {
+		if (!f00f_workaround_enabled) {
 			trap_init_f00f_bug();
 			printk(KERN_NOTICE "Intel Pentium with F0 0F bug - workaround enabled.\n");
 			f00f_workaround_enabled = 1;
@@ -139,20 +139,22 @@ static void __cpuinit init_intel(struct cpuinfo_x86 *c)
 #endif
 
 	l2 = init_intel_cacheinfo(c);
-	if (c->cpuid_level > 9 ) {
+	if (c->cpuid_level > 9) {
 		unsigned eax = cpuid_eax(10);
 		/* Check for version and the number of counters */
 		if ((eax & 0xff) && (((eax>>8) & 0xff) > 1))
-			set_bit(X86_FEATURE_ARCH_PERFMON, c->x86_capability);
+			set_cpu_cap(c, X86_FEATURE_ARCH_PERFMON);
 	}
 
 	/* SEP CPUID bug: Pentium Pro reports SEP but doesn't have it until model 3 mask 3 */
 	if ((c->x86<<8 | c->x86_model<<4 | c->x86_mask) < 0x633)
-		clear_bit(X86_FEATURE_SEP, c->x86_capability);
+		clear_cpu_cap(c, X86_FEATURE_SEP);
 
-	/* Names for the Pentium II/Celeron processors
-	   detectable only by also checking the cache size.
-	   Dixon is NOT a Celeron. */
+	/*
+	 * Names for the Pentium II/Celeron processors
+	 * detectable only by also checking the cache size.
+	 * Dixon is NOT a Celeron.
+	 */
 	if (c->x86 == 6) {
 		switch (c->x86_model) {
 		case 5:
@@ -163,14 +165,14 @@ static void __cpuinit init_intel(struct cpuinfo_x86 *c)
 				p = "Mobile Pentium II (Dixon)";
 			}
 			break;
 
 		case 6:
 			if (l2 == 128)
 				p = "Celeron (Mendocino)";
 			else if (c->x86_mask == 0 || c->x86_mask == 5)
 				p = "Celeron-A";
 			break;
 
 		case 8:
 			if (l2 == 128)
 				p = "Celeron (Coppermine)";
@@ -178,9 +180,9 @@ static void __cpuinit init_intel(struct cpuinfo_x86 *c)
 		}
 	}
 
-	if ( p )
+	if (p)
 		strcpy(c->x86_model_id, p);
 
 	c->x86_max_cores = num_cpu_cores(c);
 
 	detect_ht(c);
@@ -207,28 +209,29 @@ static void __cpuinit init_intel(struct cpuinfo_x86 *c)
 #endif
 
 	if (cpu_has_xmm2)
-		set_bit(X86_FEATURE_LFENCE_RDTSC, c->x86_capability);
+		set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC);
 	if (c->x86 == 15) {
-		set_bit(X86_FEATURE_P4, c->x86_capability);
+		set_cpu_cap(c, X86_FEATURE_P4);
 	}
 	if (c->x86 == 6)
-		set_bit(X86_FEATURE_P3, c->x86_capability);
+		set_cpu_cap(c, X86_FEATURE_P3);
 	if (cpu_has_ds) {
 		unsigned int l1;
 		rdmsr(MSR_IA32_MISC_ENABLE, l1, l2);
 		if (!(l1 & (1<<11)))
-			set_bit(X86_FEATURE_BTS, c->x86_capability);
+			set_cpu_cap(c, X86_FEATURE_BTS);
 		if (!(l1 & (1<<12)))
-			set_bit(X86_FEATURE_PEBS, c->x86_capability);
+			set_cpu_cap(c, X86_FEATURE_PEBS);
 	}
 
 	if (cpu_has_bts)
 		ds_init_intel(c);
 }
 
-static unsigned int __cpuinit intel_size_cache(struct cpuinfo_x86 * c, unsigned int size)
+static unsigned int __cpuinit intel_size_cache(struct cpuinfo_x86 *c, unsigned int size)
 {
-	/* Intel PIII Tualatin. This comes in two flavours.
+	/*
+	 * Intel PIII Tualatin. This comes in two flavours.
 	 * One has 256kb of cache, the other 512. We have no way
 	 * to determine which, so we use a boottime override
 	 * for the 512kb model, and assume 256 otherwise.
@@ -240,42 +243,42 @@ static unsigned int __cpuinit intel_size_cache(struct cpuinfo_x86 * c, unsigned
 static struct cpu_dev intel_cpu_dev __cpuinitdata = {
 	.c_vendor	= "Intel",
 	.c_ident	= { "GenuineIntel" },
 	.c_models = {
 		{ .vendor = X86_VENDOR_INTEL, .family = 4, .model_names =
 		  {
 			  [0] = "486 DX-25/33",
 			  [1] = "486 DX-50",
 			  [2] = "486 SX",
 			  [3] = "486 DX/2",
 			  [4] = "486 SL",
 			  [5] = "486 SX/2",
 			  [7] = "486 DX/2-WB",
 			  [8] = "486 DX/4",
 			  [9] = "486 DX/4-WB"
 		  }
 		},
 		{ .vendor = X86_VENDOR_INTEL, .family = 5, .model_names =
 		  {
 			  [0] = "Pentium 60/66 A-step",
 			  [1] = "Pentium 60/66",
 			  [2] = "Pentium 75 - 200",
 			  [3] = "OverDrive PODP5V83",
 			  [4] = "Pentium MMX",
 			  [7] = "Mobile Pentium 75 - 200",
 			  [8] = "Mobile Pentium MMX"
 		  }
 		},
 		{ .vendor = X86_VENDOR_INTEL, .family = 6, .model_names =
 		  {
 			  [0] = "Pentium Pro A-step",
 			  [1] = "Pentium Pro",
 			  [3] = "Pentium II (Klamath)",
 			  [4] = "Pentium II (Deschutes)",
 			  [5] = "Pentium II (Deschutes)",
 			  [6] = "Mobile Pentium II",
 			  [7] = "Pentium III (Katmai)",
 			  [8] = "Pentium III (Coppermine)",
 			  [10] = "Pentium III (Cascades)",
 			  [11] = "Pentium III (Tualatin)",
 		  }
@@ -290,15 +293,12 @@ static struct cpu_dev intel_cpu_dev __cpuinitdata = {
 		  }
 		},
 	},
+	.c_early_init	= early_init_intel,
 	.c_init		= init_intel,
 	.c_size_cache	= intel_size_cache,
 };
 
-__init int intel_cpu_init(void)
-{
-	cpu_devs[X86_VENDOR_INTEL] = &intel_cpu_dev;
-	return 0;
-}
+cpu_vendor_dev_register(X86_VENDOR_INTEL, &intel_cpu_dev);
 
 #ifndef CONFIG_X86_CMPXCHG
 unsigned long cmpxchg_386_u8(volatile void *ptr, u8 old, u8 new)
@@ -364,5 +364,5 @@ unsigned long long cmpxchg_486_u64(volatile void *ptr, u64 old, u64 new)
 EXPORT_SYMBOL(cmpxchg_486_u64);
 #endif
 
-// arch_initcall(intel_cpu_init);
+/* arch_initcall(intel_cpu_init); */
@@ -16,7 +16,7 @@
 #include <linux/smp.h>
 #include <linux/module.h>
 
 #include <asm/processor.h>
 #include <asm/system.h>
 #include <asm/msr.h>
@@ -26,23 +26,26 @@ static int firstbank;
 
 #define MCE_RATE	15*HZ	/* timer rate is 15s */
 
-static void mce_checkregs (void *info)
+static void mce_checkregs(void *info)
 {
 	u32 low, high;
 	int i;
 
-	for (i=firstbank; i<nr_mce_banks; i++) {
-		rdmsr (MSR_IA32_MC0_STATUS+i*4, low, high);
+	for (i = firstbank; i < nr_mce_banks; i++) {
+		rdmsr(MSR_IA32_MC0_STATUS+i*4, low, high);
 
 		if (high & (1<<31)) {
 			printk(KERN_INFO "MCE: The hardware reports a non "
 				"fatal, correctable incident occurred on "
 				"CPU %d.\n",
 				smp_processor_id());
-			printk (KERN_INFO "Bank %d: %08x%08x\n", i, high, low);
+			printk(KERN_INFO "Bank %d: %08x%08x\n", i, high, low);
 
-			/* Scrub the error so we don't pick it up in MCE_RATE seconds time. */
-			wrmsr (MSR_IA32_MC0_STATUS+i*4, 0UL, 0UL);
+			/*
+			 * Scrub the error so we don't pick it up in MCE_RATE
+			 * seconds time.
+			 */
+			wrmsr(MSR_IA32_MC0_STATUS+i*4, 0UL, 0UL);
 
 			/* Serialize */
 			wmb();
@@ -55,10 +58,10 @@ static void mce_work_fn(struct work_struct *work);
 static DECLARE_DELAYED_WORK(mce_work, mce_work_fn);
 
 static void mce_work_fn(struct work_struct *work)
 {
 	on_each_cpu(mce_checkregs, NULL, 1, 1);
 	schedule_delayed_work(&mce_work, round_jiffies_relative(MCE_RATE));
 }
 
 static int __init init_nonfatal_mce_checker(void)
 {
...