- 30 Jan, 2008 40 commits
-
Glauber de Oliveira Costa authored
This patch adds room for the read_cr8 and write_cr8 functions back in the pv_cpu_ops struct. Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Glauber de Oliveira Costa authored
With PARAVIRT, we actually have arch_{dup,exit}_mmap functions, so we can't include the generic header. Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Glauber de Oliveira Costa authored
x86_64 lacks a native_init_IRQ() function, so we turn the arch's init_IRQ() function into a native construct. Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Glauber de Oliveira Costa authored
We use a __stringify construct in paravirt_patch_64.c. It's better practice to include the stringify header directly. Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Yinghai Lu authored
Currently, when the gart iommu is already enabled (by the BIOS or previously), we get: "Checking aperture... CPU 0: aperture @4000000 size 64MB CPU 1: aperture @4000000 size 64MB" We should use the node instead, so that we get: "Checking aperture... Node 0: aperture @4000000 size 64MB Node 1: aperture @4000000 size 64MB" Signed-off-by: Yinghai Lu <yinghai.lu@sun.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Hiroshi Shimamoto authored
Move the select_idle_routine() call to after the detect_ht() call in identify_cpu() on 64-bit. This change allows the 'polling idle and HT enabled' warning message to be printed properly. Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Hiroshi Shimamoto authored
The warning message in idle_setup() is never shown because smp_num_siblings hasn't been updated yet at that point. Move the 'polling idle and HT enabled' warning to select_idle_routine(). I also implement this warning on the 64-bit kernel. Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Yinghai Lu authored
Signed-off-by: Yinghai Lu <yinghai.lu@sun.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Michael Opdenacker authored
Do not add the pcspkr platform device if pcspkr support is disabled. Signed-off-by: Michael Opdenacker <michael@free-electrons.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Yinghai Lu authored
Andi's patch " x86: move X86_FEATURE_CONSTANT_TSC into early cpu feature detection Need this in the next patch in time_init and that happens early. This includes a minor fix on i386 where early_intel_workarounds() [which is now called early_init_intel] really executes early as the comments say. " calling early_init_amd in early_identify_cpu and identify_cpu two times. this patch remove the one in identify_cpu Signed-off-by: Yinghai Lu <yinghai.lu@sun.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Jesse Barnes authored
On some machines, buggy BIOSes don't properly set up WB MTRRs to cover all available RAM, meaning the last few megs (or even gigs) of memory will be marked uncached. Since Linux tends to allocate from high memory addresses first, this causes the machine to be unusably slow as soon as the kernel starts really using memory (i.e. right around init time). This patch works around the problem by scanning the MTRRs at boot and figuring out whether the current end_pfn value (set up by early e820 code) goes beyond the highest WB MTRR range, and if so, trimming it to match. A fairly obnoxious KERN_WARNING is printed too, letting the user know that not all of their memory is available due to a likely BIOS bug. Something similar could be done on i386 if needed, but the boot ordering would be slightly different, since the MTRR code on i386 depends on the boot_cpu_data structure being set up. This patch fixes a bug in the last patch that caused the code to run on non-Intel machines (AMD machines apparently don't need it and it's untested on other non-Intel machines, so best keep it off). Further enhancements and fixes from: Yinghai Lu <Yinghai.Lu@Sun.COM> Andi Kleen <ak@suse.de> Signed-off-by: Jesse Barnes <jesse.barnes@intel.com> Tested-by: Justin Piszcz <jpiszcz@lucidpixels.com> Cc: Andi Kleen <andi@firstfloor.org> Cc: "Eric W. Biederman" <ebiederm@xmission.com> Cc: Yinghai Lu <yhlu.kernel@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
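A minimal user-space sketch of the trimming idea described above, with hypothetical structures and example values rather than the actual kernel code:

    #include <stdio.h>

    #define MTRR_TYPE_WRBACK 6

    struct mtrr_range {
        unsigned long base_pfn;  /* first page covered by the range */
        unsigned long size_pfn;  /* number of pages covered */
        int type;                /* memory type (WB, UC, ...) */
    };

    int main(void)
    {
        /* example variable-range MTRRs: 2GB write-back, then an uncached hole */
        struct mtrr_range var[] = {
            { 0x00000, 0x80000, MTRR_TYPE_WRBACK },
            { 0x80000, 0x08000, 0 },
        };
        unsigned long end_pfn = 0x8a000;  /* as derived from the e820 map */
        unsigned long highest_wb = 0;

        for (unsigned int i = 0; i < sizeof(var) / sizeof(var[0]); i++)
            if (var[i].type == MTRR_TYPE_WRBACK &&
                var[i].base_pfn + var[i].size_pfn > highest_wb)
                highest_wb = var[i].base_pfn + var[i].size_pfn;

        if (end_pfn > highest_wb) {
            printf("WARNING: BIOS bug? trimming end_pfn %lx -> %lx\n",
                   end_pfn, highest_wb);
            end_pfn = highest_wb;
        }
        return 0;
    }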
-
Andi Kleen authored
They now look like: hal-resmgr[13791]: segfault at 3c rip 2b9c8caec182 rsp 7fff1e825d30 error 4 in libacl.so.1.1.0[2b9c8caea000+6000] This makes it easier to pinpoint bugs to specific libraries. And printing the offset into a mapping also always makes it possible to find the correct fault point in a library, even with randomized mappings. Previously there was no way to actually find the correct code address inside the randomized mapping. Relies on an earlier patch to shorten the printk formats. The lines are often now longer than 80 characters, but I think that's worth it. [includes a fix from Eric Dumazet to check the d_path error value] Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
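For illustration only, a tiny user-space sketch of why the mapping start and size in the message are useful: the file-relative fault offset is just the fault address minus the mapping start, independent of randomization (values taken from the example line above):

    #include <stdio.h>

    int main(void)
    {
        unsigned long long rip       = 0x2b9c8caec182ULL;  /* faulting address */
        unsigned long long map_start = 0x2b9c8caea000ULL;  /* start of the mapping */
        unsigned long long map_size  = 0x6000ULL;          /* length of the mapping */

        printf("... error 4 in libacl.so.1.1.0[%llx+%llx]\n", map_start, map_size);
        printf("fault offset inside the library: %llx\n", rip - map_start);
        return 0;
    }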
-
Andi Kleen authored
When the kernel panics early for some unrelated reason, there would eventually be an early exception inside panic because clear_local_APIC tried to disable the not yet mapped APIC. Check for that explicitly. Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Andi Kleen authored
On VMs implemented using JITs that cache translated code, changing the lock prefixes is a quite costly operation that forces the JIT to throw away and retranslate a lot of code. Previously an SMP kernel would rewrite the locks once for each CPU, which is quite unnecessary. This patch changes the code to never switch at boot in the normal case (SMP kernel booting with >1 CPU), or to switch only once for an SMP kernel on UP. This makes a significant difference in boot-up performance on AMD SimNow! Also I expect it to be a little faster on native systems too, because an SMP switch does a lot of text_poke()s, each of which synchronizes the pipeline. v1->v2: Rename max_cpus v1->v2: Fix off by one in UP check (Thomas Gleixner) Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
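A hypothetical sketch of the boot-time decision being described; the names and structure are illustrative and do not mirror the kernel's alternatives code:

    #include <stdio.h>

    /* would patch every recorded LOCK prefix to its UP replacement */
    static void switch_lock_prefixes_to_up(void)
    {
        puts("patching lock prefixes for UP");
    }

    /* called once at boot: keep the compiled-in SMP lock prefixes when more
       than one CPU will come up; rewrite them exactly once otherwise */
    static void boot_time_alternatives(int present_cpus)
    {
        if (present_cpus > 1)
            return;                     /* normal SMP boot: nothing to switch */
        switch_lock_prefixes_to_up();   /* SMP kernel on a UP box: switch once */
    }

    int main(void)
    {
        boot_time_alternatives(4);   /* SMP boot: no rewriting */
        boot_time_alternatives(1);   /* UP boot of an SMP kernel: one rewrite */
        return 0;
    }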
-
Andi Kleen authored
On x86-64 there are several memory allocations before bootmem. To avoid them stomping on each other, they all used to be hard coded in bad_area(). Replace this with an array that is filled as needed. This cleans up the code considerably and allows its use to be expanded. Cc: peterz@infradead.org Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
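An illustrative user-space sketch of the table-driven approach, with hypothetical names standing in for the kernel's early reservation code:

    #include <stdio.h>
    #include <stdlib.h>

    struct early_res {
        unsigned long start, end;   /* [start, end) reserved range */
    };

    #define MAX_EARLY_RES 16
    static struct early_res early_res[MAX_EARLY_RES];
    static int nr_early_res;

    static void reserve_early(unsigned long start, unsigned long end)
    {
        if (nr_early_res >= MAX_EARLY_RES) {
            fprintf(stderr, "too many early reservations\n");
            exit(1);
        }
        early_res[nr_early_res].start = start;
        early_res[nr_early_res].end = end;
        nr_early_res++;
    }

    /* does [addr, addr+size) overlap any reserved range? */
    static int overlaps_reservation(unsigned long addr, unsigned long size)
    {
        for (int i = 0; i < nr_early_res; i++)
            if (addr < early_res[i].end && addr + size > early_res[i].start)
                return 1;
        return 0;
    }

    int main(void)
    {
        reserve_early(0x100000, 0x200000);   /* e.g. kernel image */
        reserve_early(0x9f000, 0xa0000);     /* e.g. EBDA */
        printf("%d %d\n", overlaps_reservation(0x150000, 0x1000),
               overlaps_reservation(0x300000, 0x1000));
        return 0;
    }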
-
Harvey Harrison authored
Adding the address of the faulting library missed removing a line ending from X86_32. Also update the shorter printk format for X86_32 in fault_64.c to make it easier to see the remaining differences. Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Andi Kleen authored
Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Andi Kleen authored
Previously there was an AMD-specific quirk to handle the case of AMD Fam10h MWAIT not supporting any C states. But it turns out that CPUID already has ways to directly detect that without using special quirks. The new code simply checks whether MWAIT supports at least C1 and doesn't use it if it doesn't. No more vendor-specific code. Note this does not simply clear MWAIT, because MWAIT can still be useful even without C states. Credit goes to Ben Serebrin for pointing out the (nearly) obvious. Cc: "Andreas Herrmann" <andreas.herrmann3@amd.com> Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
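A small user-space sketch of the detection described above: CPUID leaf 5 reports the number of MWAIT sub-states per C-state in EDX, four bits per C-state, with C1 in bits 7:4 (this mirrors the idea, not the kernel source):

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        if (!__get_cpuid(5, &eax, &ebx, &ecx, &edx)) {
            puts("CPUID leaf 5 (MONITOR/MWAIT) not available");
            return 1;
        }
        if (((edx >> 4) & 0xf) > 0)
            puts("MWAIT advertises at least one C1 sub-state");
        else
            puts("MWAIT without C1 sub-states: skip it for idle");
        return 0;
    }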
-
Andi Kleen authored
Previously it was only run for Intel CPUs, but AMD Fam10h implements MWAIT too. This matches 64bit behaviour. Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Harvey Harrison authored
Choose a less generic name for such a special case. Add a comment explaining the odd use in X86_32. Change the one user of stack_pointer. Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Harvey Harrison authored
Leave the definition of pt_regs in its own section, move all kernel code to the section afterwards, and unify the prototype definitions, with some conditional prototypes to make it clear what was only defined on 32-bit or 64-bit. Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Harvey Harrison authored
Unify the definition of v8086_mode, user_mode, user_mode_vm, stack_pointer, instruction_pointer and frame_pointer in ptrace.h, to make it clear where the differences are between 32- and 64-bit. Changes the macros to static inlines as well. Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
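A simplified, self-contained illustration of the macro-to-static-inline direction, using a stripped-down stand-in for pt_regs; the real layout and checks differ between 32- and 64-bit:

    #include <stdio.h>

    /* stripped-down stand-in for the real pt_regs */
    struct pt_regs {
        unsigned long ip;   /* instruction pointer */
        unsigned long sp;   /* stack pointer */
        unsigned long bp;   /* frame pointer */
        unsigned long cs;   /* code segment selector */
    };

    static inline unsigned long instruction_pointer(struct pt_regs *regs)
    {
        return regs->ip;
    }

    static inline unsigned long frame_pointer(struct pt_regs *regs)
    {
        return regs->bp;
    }

    static inline int user_mode(struct pt_regs *regs)
    {
        return (regs->cs & 3) == 3;   /* requested privilege level 3 == user */
    }

    int main(void)
    {
        struct pt_regs regs = { .ip = 0x400000, .sp = 0x7fff0000,
                                .bp = 0x7fff0010, .cs = 0x33 };
        printf("ip=%lx user=%d\n", instruction_pointer(&regs), user_mode(&regs));
        return 0;
    }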
-
Hiroshi Shimamoto authored
kdump needs ELF_CORE_COPY_REGS in crash_save_cpu(). The lack of this macro causes the following BUG: SysRq : Trigger a crashdump ------------[ cut here ]------------ kernel BUG at include/linux/elfcore.h:105! invalid opcode: 0000 [1] PREEMPT SMP Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Dmitri Vorobiev authored
This is against current x86.git. The size of the system call table for 32-bit x86 kernels is obtained from the compile-time size of the sys_call_table array, not from the value that the NR_syscalls macro expands to. This trivial patch removes the fossil macro. Manually tested by grepping the x86 files for the "NR_syscalls" string. No relevant use cases found. Build-tested using allyesconfig, allnoconfig and a couple of randconfig instances. All builds finished successfully. Runtime test performed using a stripped-down Debian-ish config. The system booted successfully. Signed-off-by: Dmitri Vorobiev <dmitri.vorobiev@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
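A brief sketch of the underlying idea, deriving the entry count from the table itself at compile time instead of maintaining a separate macro (illustrative names only):

    #include <stdio.h>

    typedef long (*syscall_fn)(void);

    static long sys_a(void) { return 0; }
    static long sys_b(void) { return 1; }

    /* the table is the single source of truth ... */
    static const syscall_fn sys_call_table[] = { sys_a, sys_b };

    /* ... and the count is computed from it at compile time */
    #define NR_SYSCALLS (sizeof(sys_call_table) / sizeof(sys_call_table[0]))

    int main(void)
    {
        printf("table has %zu entries\n", NR_SYSCALLS);
        return 0;
    }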
-
Kyle McMartin authored
PSE, PGE, XMM, XMM2, and FXSR are defined as required features, and will be optimized to a constant at compile time. Remove their redundant definitions. Signed-off-by: Kyle McMartin <kyle@mcmartin.ca> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
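A rough sketch of how a required-feature check can fold to a compile-time constant; the macro and mask names here are made up for illustration:

    #include <stdio.h>

    #define FEATURE_PSE   3
    #define FEATURE_FXSR  24
    #define REQUIRED_MASK ((1u << FEATURE_PSE) | (1u << FEATURE_FXSR))

    static unsigned int runtime_caps;   /* would be filled from CPUID; left 0 here */

    /* if the bit is known at compile time and required by the build, the
       whole test folds to the constant 1 and the runtime check disappears */
    #define cpu_has(bit) \
        (__builtin_constant_p(bit) && (REQUIRED_MASK & (1u << (bit))) \
            ? 1 : !!(runtime_caps & (1u << (bit))))

    int main(void)
    {
        printf("required PSE: %d, non-required bit 5: %d\n",
               cpu_has(FEATURE_PSE), cpu_has(5));
        return 0;
    }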
-
travis@sgi.com authored
This patch removes the EXPORT_SYMBOL for: x86_cpu_to_node_map_init x86_cpu_to_node_map_early_ptr ... thus fixing the section mismatch problem. Also, the mem -> node hash lookup is fixed. Signed-off-by: Mike Travis <travis@sgi.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Jeremy Fitzhardinge authored
Remove duplicate set_pud()s. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Jeremy Fitzhardinge authored
Remove duplicate set_pmd()s. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Jeremy Fitzhardinge authored
Remove duplicate set_pte* operations. PAE still needs to have special variants of some of these because it can't atomically update a 64-bit pte, so there's still some duplication. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
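A rough user-space illustration of the PAE constraint mentioned above: a 64-bit entry cannot be stored with a single 32-bit write, so the half holding the present bit is written last (simplified names, not the kernel code):

    #include <stdint.h>
    #include <stdio.h>

    struct pae_pte {
        volatile uint32_t pte_low;    /* holds the present bit */
        volatile uint32_t pte_high;
    };

    static void pae_set_pte(struct pae_pte *ptep, uint64_t val)
    {
        ptep->pte_high = (uint32_t)(val >> 32);
        __sync_synchronize();               /* keep the two stores ordered */
        ptep->pte_low  = (uint32_t)val;     /* present bit becomes visible last */
    }

    int main(void)
    {
        struct pae_pte pte = { 0, 0 };
        pae_set_pte(&pte, 0x123456789ULL | 1);
        printf("%08x%08x\n", pte.pte_high, pte.pte_low);
        return 0;
    }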
-
Jeremy Fitzhardinge authored
Remove duplicate __pmd/pmd_val functions. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Jeremy Fitzhardinge authored
Remove duplicate __pgd/pgd_val functions. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Jeremy Fitzhardinge authored
Remove duplicate __pte/pte_val functions. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Jeremy Fitzhardinge authored
Rearrange the various pagetable mmu_ops to remove duplication. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Jeremy Fitzhardinge authored
Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Andrew Morton authored
WARNING: vmlinux.o(__ksymtab+0x670): Section mismatch: reference to .init.data:x86_cpu_to_node_map_init (between '__ksymtab_x86_cpu_to_node_map_init' and '__ksymtab_node_data') Cc: Matthew Dobson <colpatch@us.ibm.com> Cc: Mike Travis <travis@sgi.com> Cc: Christoph Lameter <clameter@sgi.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Mike Travis authored
Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Jan Beulich authored
Based on a patch from Jan Beulich <jbeulich@novell.com>. Don't rely on kmalloc(PAGE_SIZE) returning PAGE_SIZE-aligned memory (Xen requires the GDT *and* LDT to be page-aligned). Using the page allocator interface also removes the (albeit small) slab allocator overhead. The same change is done for 64-bit for consistency. Further, the Xen hypercall interface expects the LDT address to be virtual, not machine. [ Adjusted to unified ldt.c - Jeremy ] Signed-off-by: Jan Beulich <jbeulich@novell.com> Acked-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
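A user-space analogy of the alignment point, assuming nothing about the kernel internals: a general-purpose allocator does not guarantee page alignment for a page-sized request, while a page-granular interface does:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        long page = sysconf(_SC_PAGESIZE);
        void *heap = malloc(page);      /* no alignment promise beyond malloc's */
        void *aligned = NULL;

        if (posix_memalign(&aligned, page, page))   /* guaranteed page-aligned */
            return 1;

        printf("malloc:         %p (page-aligned: %s)\n", heap,
               ((unsigned long)heap % page) ? "no" : "yes");
        printf("posix_memalign: %p (page-aligned: yes)\n", aligned);

        free(heap);
        free(aligned);
        return 0;
    }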
-
Jan Beulich authored
Remove the dead .text.lock. Move _etext and __{start,stop}___ex_table into their sections. Signed-off-by: Jan Beulich <jbeulich@novell.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Alan Cox authored
I can find no reason for the _p on the serverworks IRQ routing logic, and a review of the documentation contains no indication that any such delay is needed, so let's try this. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Alan Cox authored
Rather than remove and/or mangle inb_p/outb_p, we want to remove the use of them from inappropriate places. For the PIC/PIT this may eventually depend on 32/64-bitism or similar, so start by adding inb/outb_pit and inb/outb_pic so that we can make them use any scheme we settle on without disturbing the existing, correct (for ISA), port 0x80 usage. (e.g. we can make inb_pit use udelay without messing up inb_p.) Floppy already does this for the fdc. That really only leaves the CMOS as a core logic item to tackle, plus bits of parallel port handling in the chipset layers. Signed-off-by: Alan Cox <alan@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
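A sketch of the wrapper seam being introduced, with stubbed port accessors; the real definitions live in the kernel headers and may differ:

    #include <stdio.h>

    /* stand-in for the real port accessor with its ISA bus delay */
    static void outb_p(unsigned char val, unsigned short port)
    {
        printf("outb_p 0x%02x -> port 0x%04x\n", val, port);
    }

    /* today the wrappers simply forward to the _p variants; the point is
       that their definition can change later without touching callers */
    #define outb_pit(val, port) outb_p(val, port)
    #define outb_pic(val, port) outb_p(val, port)

    int main(void)
    {
        outb_pit(0x34, 0x43);   /* PIT mode/command register */
        outb_pic(0x20, 0x20);   /* PIC: end-of-interrupt */
        return 0;
    }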
-