- 07 Dec, 2006 40 commits
-
-
Rusty Russell authored
1) Each hypervisor writes a probe function to detect whether we are running under that hypervisor. paravirt_probe() registers this function. 2) If vmlinux is booted with ring != 0, we call all the probe functions (with registers except %esp intact) in link order: the winner will not return. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Jeremy Fitzhardinge <jeremy@goop.org> Cc: Zachary Amsden <zach@vmware.com> Signed-off-by: Andrew Morton <akpm@osdl.org>
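A rough sketch of what a probe under this scheme might look like, in kernel-style C; the hypervisor name, detection helper and init function below are hypothetical, and only the paravirt_probe() registration follows the description above:

    /* Hypothetical hypervisor probe: paravirt_probe() registers it, and the
     * boot code calls every registered probe in link order when entered with
     * ring != 0. If the hypervisor is recognised, init takes over and never
     * returns; otherwise we return so the next probe can run. */
    static void __init probe_examplehv(void)
    {
            if (!examplehv_detect())          /* hypothetical detection helper */
                    return;                   /* not us, let the next probe run */
            examplehv_start_kernel();         /* hypothetical: does not return */
    }
    paravirt_probe(probe_examplehv);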
-
Rusty Russell authored
Both lhype and Xen want to call the core of the x86 cpu detect code before calling start_kernel. (extracted from larger patch) AK: folded in start_kernel header patch Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Jeremy Fitzhardinge <jeremy@goop.org> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Rusty Russell authored
It turns out that the most called ops, by several orders of magnitude, are the interrupt manipulation ops. These are obvious candidates for patching, so mark them up and create infrastructure for it. The method used is that the ops structure has a patch function, which is called for each place which needs to be patched: this returns a number of instructions (the rest are NOP-padded). Usually we can spare a register (%eax) for the binary patched code to use, but in a couple of critical places in entry.S we can't: we make the clobbers explicit at the call site, and manually clobber the allowed registers in debug mode as an extra check. And: Don't abuse CONFIG_DEBUG_KERNEL, add CONFIG_DEBUG_PARAVIRT. And: AK: Fix warnings in x86-64 alternative.c build And: AK: Fix compilation with defconfig And: From: Andrew Morton <akpm@osdl.org> Some versions of binutils still like to emit references to __stop_parainstructions and __start_parainstructions. And: AK: Fix warnings about unused variables when PARAVIRT is disabled. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Zachary Amsden <zach@vmware.com> Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org>
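As a rough illustration of the patch-function idea (the site-type constants and buffer handling below are simplified guesses, not the actual paravirt patching code): the backend writes a short inline sequence into the call site and returns how many bytes it used, and the patching core NOP-pads the remainder.

    enum { PV_IRQ_DISABLE_SKETCH, PV_IRQ_ENABLE_SKETCH };  /* hypothetical site types */

    /* Simplified sketch of a per-backend patch hook: for the native case the
     * interrupt ops collapse to single instructions; anything the backend
     * does not handle keeps the indirect call through the ops structure. */
    static unsigned native_patch_sketch(u8 type, u8 *insnbuf, unsigned maxlen)
    {
            switch (type) {
            case PV_IRQ_DISABLE_SKETCH:
                    insnbuf[0] = 0xfa;        /* cli */
                    return 1;                 /* 1 byte used, rest NOP-padded */
            case PV_IRQ_ENABLE_SKETCH:
                    insnbuf[0] = 0xfb;        /* sti */
                    return 1;
            default:
                    return 0;                 /* leave the indirect call in place */
            }
    }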
-
Rusty Russell authored
Create a paravirt.h header for all the critical operations which need to be replaced with hypervisor calls, and include that instead of defining native operations, when CONFIG_PARAVIRT. This patch does the dumbest possible replacement of paravirtualized instructions: calls through a "paravirt_ops" structure. Currently these are function implementations of native hardware: hypervisors will override the ops structure with their own variants. All the pv-ops functions are declared "fastcall" so that a specific register-based ABI is used, to make inlining assembler easier. And: From: Andy Whitcroft <apw@shadowen.org> The paravirt ops introduce a 'weak' attribute onto memory_setup(). Code ordering leads to the following warnings on x86: arch/i386/kernel/setup.c:651: warning: weak declaration of `memory_setup' after first use results in unspecified behavior Move memory_setup() to avoid this. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Jeremy Fitzhardinge <jeremy@goop.org> Cc: Zachary Amsden <zach@vmware.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Andy Whitcroft <apw@shadowen.org>
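The indirection looks roughly like the sketch below (field list abridged and illustrative, not the full structure; per the message above the real members use the fastcall ABI): native functions fill the table by default and a hypervisor overwrites the pointers with its own implementations.

    struct paravirt_ops {
            unsigned long (*save_fl)(void);
            void (*restore_fl)(unsigned long flags);
            void (*irq_disable)(void);
            void (*irq_enable)(void);
            /* ... many more privileged-instruction hooks ... */
    };

    struct paravirt_ops paravirt_ops = {
            .save_fl     = native_save_fl,
            .restore_fl  = native_restore_fl,
            .irq_disable = native_irq_disable,
            .irq_enable  = native_irq_enable,
    };

Callers then go through paravirt_ops.irq_disable() and friends instead of executing cli/sti directly.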
-
Nicolas Kaiser authored
Fix double #includes in arch/i386 Signed-off-by: Nicolas Kaiser <nikai@nikai.net> Signed-off-by: Andi Kleen <ak@suse.de>
-
David Rientjes authored
Substitute the allocate_pgdat virtual address lookup with the pfn_to_kaddr macro. Signed-off-by: David Rientjes <rientjes@cs.washington.edu> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org>
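Illustrative before/after of the substitution (the surrounding variable name is hypothetical; pfn_to_kaddr(pfn) expands to __va(pfn << PAGE_SHIFT), so the two forms are equivalent):

    pgdat_vaddr = (unsigned long)__va(pfn << PAGE_SHIFT);   /* before: open-coded */
    pgdat_vaddr = (unsigned long)pfn_to_kaddr(pfn);         /* after: use the macro */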
-
Andi Kleen authored
It's too big and used by two callers. Calls should be cheap enough anyway. Signed-off-by: Andi Kleen <ak@suse.de>
-
Chuck Ebbert authored
IOPL is implicitly saved and restored on task switch, so the explicit check is no longer needed. Signed-off-by: Chuck Ebbert <76306.1226@compuserve.com> Signed-off-by: Andi Kleen <ak@suse.de>
-
Andi Kleen authored
An interrupt could happen between setting the IO-APIC entry and setting its interrupt data. Pointed out by Linus. Signed-off-by: Andi Kleen <ak@suse.de>
-
Andi Kleen authored
An interrupt could happen between setting the IO-APIC entry and setting its interrupt data. Pointed out by Linus. Signed-off-by: Andi Kleen <ak@suse.de>
-
Paolo 'Blaisorblade' Giarrusso authored
For both i386 and x86_64, copy from arch/$ARCH/lib/delay.c comments about the used magic constants, plus a few other niceties. Signed-off-by: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it> Signed-off-by: Andi Kleen <ak@suse.de> include/asm-i386/delay.h | 5 ++++- include/asm-x86_64/delay.h | 5 ++++- 2 files changed, 8 insertions(+), 2 deletions(-)
-
Paolo 'Blaisorblade' Giarrusso authored
Port two patches from i386 to x86_64 delay.c to make sure all rounding is done upward instead of downward. There is no sign in commit messages that the mismatch was done on purpose, and "delay() guarantees sleeping at least for the specified time" is still a valid rule IMHO. The original x86 patches are both from pre-GIT era, i.e.: "[PATCH] round up in __udelay()" in commit 54c7e1f5cc6771ff644d7bc21a2b829308bd126f "[PATCH] add 1 in __const_udelay()" in commit 42c77a9801b8877d8b90f65f75db758822a0bccc (both commits are from converted BK repository to x86_64). AK: fixed gcc warning linux/arch/x86_64/lib/delay.c:43: warning: suggest parentheses around + or - inside shift (did this actually work?) Signed-off-by: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it> Signed-off-by: Andi Kleen <ak@suse.de>
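A simplified model of why the rounding matters (illustrative, not the actual lib/delay.c code): microseconds are scaled by the 0x10c7 constant (roughly 2^32/10^6, the magic value documented in the delay.h entry above) into 32.32 fixed point and then multiplied by loops-per-second; truncating the final shift could sleep too little, so 1 is added to round up.

    /* Sketch: convert a udelay() request into busy-wait loops, rounding up
     * so the delay is never shorter than requested. lpj is loops_per_jiffy. */
    static unsigned long usecs_to_loops(unsigned long usecs, unsigned long lpj)
    {
            unsigned long long fixp = (unsigned long long)usecs * 0x10c7ULL;
            return (unsigned long)((fixp * lpj * HZ) >> 32) + 1;   /* +1: round up */
    }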
-
Muli Ben-Yehuda authored
This patch makes it possible to compile Calgary in but not use it by default. In this mode, use 'iommu=calgary' to activate it. Signed-off-by: Muli Ben-Yehuda <muli@il.ibm.com> Signed-off-by: Jon Mason <jdmason@kudzu.us> Signed-off-by: Andi Kleen <ak@suse.de>
-
Muli Ben-Yehuda authored
This patch cleans up the previous "Use BIOS supplied BBAR information" patch. Mostly stylistic cleanups, but also check for ioremap failure when we ioremap the BBAR rather than when trying to use it. Signed-off-by: Muli Ben-Yehuda <muli@il.ibm.com> Signed-off-by: Jon Mason <jdmason@kudzu.us> Signed-off-by: Andi Kleen <ak@suse.de> Acked-by: Laurent Vivier <Laurent.Vivier@bull.net>
-
Laurent Vivier authored
Find the BBAR register address of each Calgary using the "Extended BIOS Data Area" rather than calculating it ourselves. Also get the bus topology (what PHB each bus is on) from Calgary rather than calculating it ourselves. This patch fixes http://bugzilla.kernel.org/show_bug.cgi?id=7407. Signed-off-by: Laurent Vivier <Laurent.Vivier@bull.net> Signed-off-by: Muli Ben-Yehuda <muli@il.ibm.com> Signed-off-by: Jon Mason <jdmason@kudzu.us> Signed-off-by: Andi Kleen <ak@suse.de>
-
Muli Ben-Yehuda authored
Signed-off-by: Muli Ben-Yehuda <muli@il.ibm.com> Signed-off-by: Jon Mason <jdmason@kudzu.us> Signed-off-by: Andi Kleen <ak@suse.de>
-
bibo,mao authored
This patch moves the e820 memory map printing and memmap boot parameter parsing functions from setup.c to e820.c, and adds limit_regions and print_memory_map declarations to the header file. Signed-off-by: bibo,mao <bibo.mao@intel.com> Signed-off-by: Andi Kleen <ak@suse.de> arch/i386/kernel/e820.c | 152 ++++++++++++++++++++++++++++++++++++++++++++++ arch/i386/kernel/setup.c | 153 ----------------------------------------------- include/asm-i386/e820.h | 2 3 files changed, 155 insertions(+), 152 deletions(-)
-
bibo,mao authored
This patch moves the e820/EFI memmap table walking function from setup.c to e820.c, and adds an extern declaration to the header file. Signed-off-by: bibo,mao <bibo.mao@intel.com> Signed-off-by: Andi Kleen <ak@suse.de> arch/i386/kernel/e820.c | 115 +++++++++++++++++++++++++++++++++++++++++++++ arch/i386/kernel/setup.c | 118 ----------------------------------------------- include/asm-i386/e820.h | 2 3 files changed, 117 insertions(+), 118 deletions(-)
-
bibo,mao authored
Move more code from setup.c into e820.c Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Andi Kleen <ak@suse.de>
-
bibo,mao authored
This patch moves the BIOS e820 map sanitize and copy functions from setup.c to e820.c. Signed-off-by: bibo,mao <bibo.mao@intel.com> Signed-off-by: Andi Kleen <ak@suse.de> arch/i386/kernel/e820.c | 252 +++++++++++++++++++++++++++++++++++++++++++++++ arch/i386/kernel/setup.c | 240 -------------------------------------------- 2 files changed, 252 insertions(+), 240 deletions(-)
-
bibo,mao authored
This patch creates a new file named e820.c to handle standard I/O and memory resources, moving the request_standard_resources function from setup.c to e820.c. It also modifies the Makefile to compile e820.c. Signed-off-by: bibo,mao <bibo.mao@intel.com> Signed-off-by: Andi Kleen <ak@suse.de> Makefile | 2 arch/i386/kernel/Makefile | 2 arch/i386/kernel/e820.c | 289 ++++++++++++++++++++++++++++++++++++++++++++++ arch/i386/kernel/setup.c | 276 ------------------------------------------- 3 files changed, 293 insertions(+), 274 deletions(-)
-
Albert Cahalan authored
The recent change to make x86_64 support i386 binaries compiled with -mregparm=3 only covered signal handlers without SA_SIGINFO. (the 3-arg "real-time" ones) To be compatible with i386, both types should be supported. Signed-off-by: Albert Cahalan <acahalan@gmail.com> Signed-off-by: Andi Kleen <ak@suse.de>
-
Andi Kleen authored
Instead of adding all kinds of more quirks try various timer routing variants in check_timer. In particular this tries to handle quirks from: - Nvidia NF2-4 reference BIOS: wrong timer override - Asus: Wrong timer override but no HPET table - ATI: require timer disabled in 8259 - Some boards: require timer enabled in 8259 We just try many of the known variants in the hopefully right order in check_timer. Trying pin 0/2 on Nvidia suggested by Tim Hockin. TBD Experimental. Needs a lot of testing Signed-off-by: Andi Kleen <ak@suse.de>
-
Andi Kleen authored
Makes the intention of the code cleaner to read and avoids a potential deadlock on mmap_sem. Also change the types of the arguments to not include __user because they're really not user addresses. Signed-off-by: Andi Kleen <ak@suse.de>
-
Andi Kleen authored
Instead of open-coded __get_user. Signed-off-by: Andi Kleen <ak@suse.de>
-
Andi Kleen authored
Callers of probe_kernel_address shouldn't need to know that it is internally implemented with __get_user, so move the __user cast into probe_kernel_address itself. Signed-off-by: Andi Kleen <ak@suse.de>
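A sketch of the resulting helper, simplified from the linux/uaccess.h of that era (pagefault disabling and __force annotations omitted); the point is only that the __user cast now happens inside the macro, so callers pass plain kernel pointers:

    #define probe_kernel_address(addr, retval)                               \
            ({                                                               \
                    long ret;                                                \
                    mm_segment_t old_fs = get_fs();                          \
                                                                             \
                    set_fs(KERNEL_DS);                                       \
                    ret = __get_user(retval,                                 \
                                     (__typeof__(retval) __user *)(addr));   \
                    set_fs(old_fs);                                          \
                    ret;                                                     \
            })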
-
Yinghai Lu authored
Clear the irq related entries in irq_vector, irq_domain and vector_irq instead of clearing irq_vector only, so when a new irq is created it can reuse that vector (it is actually found by the second loop, scanning from FIRST_DEVICE_VECTOR+8). This avoids running out of vectors after enough module inserting and removing. Cc: Eric W. Biederman <ebiederm@xmission.com> Cc: Muli Ben-Yehuda <muli@il.ibm.com> Signed-off-by: Yinghai Lu <yinghai.lu@amd.com> Signed-off-by: Andi Kleen <ak@suse.de>
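A hedged sketch of the cleanup being described (names and details approximate, not the exact io_apic.c change): when an irq goes away, undo all three mappings so the vector slot becomes reusable.

    static void __clear_irq_vector_sketch(int irq)
    {
            cpumask_t mask;
            int cpu, vector = irq_vector[irq];

            cpus_and(mask, irq_domain[irq], cpu_online_map);
            for_each_cpu_mask(cpu, mask)
                    per_cpu(vector_irq, cpu)[vector] = -1;  /* free the per-CPU slot */

            irq_vector[irq] = 0;                 /* previously the only cleanup done */
            irq_domain[irq] = CPU_MASK_NONE;     /* also forget the CPU domain */
    }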
-
Andi Kleen authored
CLFLUSH is a lot faster than WBINVD so try to use that. Signed-off-by: Andi Kleen <ak@suse.de>
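The approach amounts to flushing only the affected page, one cache line at a time; a simplified sketch (function name hypothetical, not the exact pageattr.c code), with the line size taken from boot_cpu_data.x86_clflush_size as made available by the cpuinfo patch below:

    static void cache_flush_page_sketch(void *addr)
    {
            int i;

            /* flush just this page by cache lines with clflush instead of
             * dumping every cache on every CPU with wbinvd */
            for (i = 0; i < PAGE_SIZE; i += boot_cpu_data.x86_clflush_size)
                    asm volatile("clflush (%0)" :: "r" (addr + i) : "memory");
    }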
-
Andi Kleen authored
Also report it in /proc/cpuinfo, similar to x86-64. Needed for a follow-on patch. Signed-off-by: Andi Kleen <ak@suse.de>
-
Andi Kleen authored
CLFLUSH is a lot faster than WBINVD so avoid the latter if at all possible. Always pass the complete list of pages to other CPUs to cut down the number of IPIs. Other minor cleanup and sync with the i386 version. Signed-off-by: Andi Kleen <ak@suse.de>
-
Joe Korty authored
The entry.S code at work_notifysig is surely wrong: it drops into unrelated code if the branch to work_notifysig_v86 is taken while CONFIG_VM86=n. The offending change, "[PATCH] Make vm86 support optional" (tree 9b5daef5280800a0006343a17f63072658d91a1d, pushed to git Jan 8, 2006), first appears in 2.6.16. The 'fix' here is to also compile out the vm86 test & branch when CONFIG_VM86=n. Signed-off-by: Joe Korty <joe.korty@ccur.com> Signed-off-by: Andi Kleen <ak@suse.de>
-
Avi Kivity authored
Code that wants to use struct desc_struct cannot do so on i386 because desc.h contains other code that will only compile on x86_64. So extract the structure definitions into asm-x86_64/desc_defs.h. Signed-off-by: Avi Kivity <avi@qumranet.com> Signed-off-by: Andi Kleen <ak@suse.de> include/asm-x86_64/desc.h | 53 ------------------------------- include/asm-x86_64/desc_defs.h | 69 +++++++++++++++++++++++++++++++++++++++++ 2 files changed, 70 insertions(+), 52 deletions(-)
-
Vivek Goyal authored
Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Andi Kleen <ak@suse.de>
-
Vivek Goyal authored
Extend the bzImage protocol to enable bootloaders to load a completely relocatable bzImage. Now the protected mode component of the kernel is also relocatable, and a boot loader can load the protected mode component at a different physical address than 1MB (if the kernel was built with CONFIG_RELOCATABLE). Kexec can make use of this to load the kernel at a different physical address to capture kernel crash dumps. Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Vivek Goyal authored
o Now CONFIG_PHYSICAL_START is being replaced with CONFIG_PHYSICAL_ALIGN. Hardcoding the kernel physical start value creates a problem in the relocatable kernel context due to boot loader limitations. For example, if somebody compiles a relocatable kernel to be run from address 4MB, but grub loads the kernel at physical address 1MB, the kernel thinks "I am a relocatable kernel and I should run from the address I have been loaded at" and runs from 1MB. So somebody wanting to run the kernel from a 4MB-aligned location (for improved performance) can't do that. o Hence, Eric proposed that CONFIG_PHYSICAL_ALIGN probably makes more sense in the relocatable kernel context. At run time the kernel will move itself to a physical address which meets the user-specified alignment restriction. Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com> Signed-off-by: Andi Kleen <ak@suse.de>
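A worked example of the alignment rule (the macro below is illustrative, not the kernel's own helper): wherever the boot loader placed the image, the kernel rounds its own run-time address up to the configured alignment.

    #define ALIGN_UP(addr, align)  (((addr) + (align) - 1) & ~((align) - 1))

    /* e.g. grub loads the kernel at 1MB but CONFIG_PHYSICAL_ALIGN is 4MB:
     *   ALIGN_UP(0x00100000, 0x00400000) == 0x00400000
     * so the early boot code moves the kernel to 4MB before continuing. */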
-
Vivek Goyal authored
o Relocations generated w.r.t. absolute symbols are not processed because, by definition, absolute symbols are not to be relocated. Explicitly warn the user about absolute relocations present at compile time. o These relocations get introduced either by linker optimizations or by programming oversights. o Also create a list of symbols which have been audited to be safe and don't emit warnings for these. Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com> Signed-off-by: Andi Kleen <ak@suse.de>
-
Eric W. Biederman authored
This patch modifies the i386 kernel so that if CONFIG_RELOCATABLE is selected it will be able to be loaded at any 4K aligned address below 1G. The technique used is to compile the decompressor with -fPIC and modify it so the decompressor is fully relocatable. For the main kernel, relocations are generated, resulting in a kernel that is relocatable with no runtime overhead and no need to modify the source code. A reserved 32bit word in the parameters has been assigned to serve as a stack so we can figure out where we are running. Signed-off-by: Eric W. Biederman <ebiederm@xmission.com> Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com> Signed-off-by: Andi Kleen <ak@suse.de>
-
Eric W. Biederman authored
Print the addresses of non-absolute symbols relative to _text so that ld will generate relocations, allowing a relocatable kernel to relocate them. We can't actually use the symbol names because kallsyms includes static symbols that are not exported from their object files. Add the _text symbol definition to the architectures which don't define it, otherwise the linker will fail. Signed-off-by: Eric W. Biederman <ebiederm@xmission.com> Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com> Signed-off-by: Andi Kleen <ak@suse.de>
-
Eric W. Biederman authored
Defining __PHYSICAL_START and __KERNEL_START in asm-i386/page.h works but it triggers a full kernel rebuild for the silliest of reasons. This modifies the users to directly use CONFIG_PHYSICAL_START and linux/config.h, which prevents the full rebuild problem and makes the code much more maintainer-friendly and hopefully user-friendly. Signed-off-by: Eric W. Biederman <ebiederm@xmission.com> Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com> Signed-off-by: Andi Kleen <ak@suse.de>
-
Eric W. Biederman authored
Currently when we are reserving the memory the kernel text resides in we start at __PHYSICAL_START which happens to be correct but not very obvious. In addition when we start relocating the kernel __PHYSICAL_START is the wrong value, as it is an absolute symbol that does not get relocated. By starting the reservation at __pa_symbol(_text) the code is clearer and will be correct when relocated. Signed-off-by: Eric W. Biederman <ebiederm@xmission.com> Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com> Signed-off-by: Andi Kleen <ak@suse.de>
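Illustrative sketch of the difference (the size expression is a hypothetical stand-in, not the exact setup.c arguments):

    /* before: start from an absolute constant that is wrong once relocated */
    reserve_bootmem(__PHYSICAL_START, kernel_image_size);
    /* after: start from where the text segment actually ended up */
    reserve_bootmem(__pa_symbol(_text), kernel_image_size);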
-