- 30 Jan, 2008 (40 commits)
-
Jeremy Fitzhardinge authored
# HG changeset patch
# User Jeremy Fitzhardinge <jeremy@xensource.com>
# Date 1199319657 28800
# Node ID bba9287641ff90e836d090d80b5c0a846aab7162
# Parent d617b72a0cc9d14bde2087d065c36d4ed3265761

x86: page.h: move remaining bits and pieces

Move the remaining odds and ends into page.h.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Jeremy Fitzhardinge authored
# HG changeset patch
# User Jeremy Fitzhardinge <jeremy@xensource.com>
# Date 1199319656 28800
# Node ID d617b72a0cc9d14bde2087d065c36d4ed3265761
# Parent 3bd7db6e85e66e7f3362874802df26a82fcb2d92

x86: page.h: move pa and va related things

Move and unify the virtual<->physical address space conversion functions.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
based on: Subject: x86: page.h: move and unify types for pagetable entry From: Jeremy Fitzhardinge <jeremy@goop.org> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Jeremy Fitzhardinge authored
# HG changeset patch
# User Jeremy Fitzhardinge <jeremy@xensource.com>
# Date 1199319654 28800
# Node ID 3bd7db6e85e66e7f3362874802df26a82fcb2d92
# Parent f7e7db3facd9406545103164f9be8f9ba1a2b549

x86: page.h: move and unify types for pagetable entry definitions

This patch:

1. Defines arch-specific types for the contents of a pagetable entry.
   That is, 32-bit entries for 32-bit non-PAE, and 64-bit entries for
   32-bit PAE and 64-bit. However, even though the latter two are the
   same size, they're defined with different types in order to retain
   compatibility with printk format strings, etc.

2. Defines an arch-specific pte_t. This is different because 32-bit
   PAE defines it in two halves, whereas 32-bit non-PAE and 64-bit
   define it as a single entry. All the other pagetable levels can be
   defined in a common way. This also defines arch-specific
   pte_val/make_pte functions.

3. Defines PAGETABLE_LEVELS for each architecture variation, for
   later use.

4. Defines common pagetable entry accessors in a paravirt-compatible
   way. (64-bit does not yet use paravirt-ops in any way.)

5. Converts a few instances of using a *_val() as an lvalue where it
   is no longer a macro. There are still places in the 64-bit code
   which use pte_val() as an lvalue.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
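[editor's illustration] A minimal sketch of the type layout point 2 describes, assuming the usual Kconfig symbols; the field and helper names here are a reconstruction, not the exact patch:

====
/* Sketch only: 32-bit PAE splits pte_t into two halves, while
 * 32-bit non-PAE and 64-bit use a single field of the native size. */
#if defined(CONFIG_X86_PAE)
typedef unsigned long long pteval_t;   /* 64-bit entries on a 32-bit kernel */
typedef struct { unsigned long pte_low, pte_high; } pte_t;

static inline pteval_t pte_val(pte_t pte)
{
        /* Reassemble the two 32-bit halves into one 64-bit value. */
        return pte.pte_low | ((pteval_t)pte.pte_high << 32);
}
#else
typedef unsigned long pteval_t;        /* 32-bit non-PAE, or 64-bit */
typedef struct { pteval_t pte; } pte_t;

static inline pteval_t pte_val(pte_t pte)
{
        return pte.pte;
}
#endif
====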
-
Ingo Molnar authored
based on: Subject: x86: page.h: move and unify types for pagetable entry From: Jeremy Fitzhardinge <jeremy@goop.org> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
based on: Subject: x86: page.h: move and unify types for pagetable entry From: Jeremy Fitzhardinge <jeremy@goop.org> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
based on: Subject: x86: page.h: move and unify types for pagetable entry From: Jeremy Fitzhardinge <jeremy@goop.org> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
based on: Subject: x86: page.h: move and unify types for pagetable entry From: Jeremy Fitzhardinge <jeremy@goop.org> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Jeremy Fitzhardinge authored
# HG changeset patch
# User Jeremy Fitzhardinge <jeremy@xensource.com>
# Date 1199317452 28800
# Node ID f7e7db3facd9406545103164f9be8f9ba1a2b549
# Parent 4d9a413a0f4c1d98dbea704f0366457b5117045d

x86: add _AT() macro to conditionally cast

Define _AT(type, value) to conditionally cast a value when compiling
C code, but not when used in assembler.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
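[editor's illustration] A minimal sketch of what such a macro looks like; this mirrors the pattern the kernel uses in its const.h headers, but treat the exact spelling as an assumption:

====
#ifdef __ASSEMBLY__
#define _AT(T, X)  X            /* assembler: no C cast syntax available */
#else
#define _AT(T, X)  ((T)(X))     /* C: cast X to type T */
#endif

/* Usage sketch: constants usable from both C and assembly. */
#define PAGE_SHIFT 12
#define PAGE_SIZE  _AT(unsigned long, 1 << PAGE_SHIFT)
#define PAGE_MASK  _AT(unsigned long, ~(PAGE_SIZE - 1))
====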
-
Jeremy Fitzhardinge authored
# HG changeset patch
# User Jeremy Fitzhardinge <jeremy@xensource.com>
# Date 1199317362 28800
# Node ID 4d9a413a0f4c1d98dbea704f0366457b5117045d
# Parent ba0ec40a50a7aef1a3153cea124c35e261f5a2df

x86: page.h: unify page copying and clearing

Move, and to some extent unify, the various page copying and clearing
functions. The only unification here is that both architectures use
the same function for copying/clearing user and kernel pages.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
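[editor's illustration] A hedged sketch of what "the same function for user and kernel pages" means in practice. The signatures follow the generic page.h convention; the exact placement is an assumption, and clear_page()/copy_page() are the kernel-internal primitives:

====
/* User-page variants just delegate to the kernel-page primitives;
 * the vaddr/page arguments exist for architectures that need them. */
static inline void clear_user_page(void *page, unsigned long vaddr,
                                   struct page *pg)
{
        clear_page(page);
}

static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
                                  struct page *topage)
{
        copy_page(to, from);
}
====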
-
Jeremy Fitzhardinge authored
# HG changeset patch
# User Jeremy Fitzhardinge <jeremy@xensource.com>
# Date 1199317360 28800
# Node ID ba0ec40a50a7aef1a3153cea124c35e261f5a2df
# Parent c45c263179cb78284b6b869c574457df088027d1

x86: page.h: unify constants

There are many constants which are shared by 32-bit and 64-bit;
unify them.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Andreas Herrmann authored
Commits

 - c52f61fcbdb2aa84f0e4d831ef07f375e6b99b2c
   (x86: allow TSC clock source on AMD Fam10h and some cleanup)
 - e30436f05d456efaff77611e4494f607b14c2782
   (x86: move X86_FEATURE_CONSTANT_TSC into early cpu feature detection)

are supposed to fix the detection of constant TSC for AMD CPUs.
Unfortunately, on x86_64 it still does not work with current x86/mm.
For a Phenom I still get:

  ...
  TSC calibrated against PM_TIMER
  Marking TSC unstable due to TSCs unsynchronized
  time.c: Detected 2288.366 MHz processor.
  ...

We have to set c->x86_power in early_identify_cpu to properly detect
the CONSTANT_TSC bit in early_init_amd. The attached patch fixes this
issue. These are the relevant boot messages with the fix applied:

  ...
  TSC calibrated against PM_TIMER
  time.c: Detected 2288.279 MHz processor.
  ...
  Initializing CPU#1
  ...
  checking TSC synchronization [CPU#0 -> CPU#1]: passed.
  ...
  Initializing CPU#2
  ...
  checking TSC synchronization [CPU#0 -> CPU#2]: passed.
  ...
  Booting processor 3/4 APIC 0x3
  ...
  checking TSC synchronization [CPU#0 -> CPU#3]: passed.
  Brought up 4 CPUs
  ...

Patch is against x86/mm (v2.6.24-rc8-672-ga9f7faa). Please apply.

Set c->x86_power in early_identify_cpu. This ensures that
X86_FEATURE_CONSTANT_TSC can properly be set in early_init_amd.

Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
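[editor's illustration] A hedged sketch of the described one-liner: c->x86_power is conventionally filled from CPUID leaf 0x80000007 (EDX), which carries AMD's invariant-TSC bit. The guard and placement here are assumptions, not the verified patch:

====
/* In early_identify_cpu(): populate x86_power early enough that
 * early_init_amd() can test the invariant-TSC bit (leaf 0x80000007,
 * EDX bit 8) when setting X86_FEATURE_CONSTANT_TSC. */
if (c->extended_cpuid_level >= 0x80000007)
        c->x86_power = cpuid_edx(0x80000007);
====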
-
Andi Kleen authored
The ACPI code currently disables TSC use in any C2 and C3 states. But
the AMD Fam10h BKDG documents that the TSC will never stop in any
C state when the CONSTANT_TSC bit is set. So on AMD, make this
disabling conditional on CONSTANT_TSC not being set.

I actually think this is true on Intel too for C2 states on CPUs with
p-state invariant TSC, but that needs further discussion with Len to
really confirm :-) So far it is only enabled on AMD.

Cc: lenb@kernel.org
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Andi Kleen authored
Trust the ACPI code to disable TSC instead when C3 is used. AMD Fam10h
does not disable TSC in any C state, so the check was incorrect there
anyway after the change to handle this like Intel on AMD too. This
allows using the TSC when C3 is disabled in software
(acpi.max_c_state=2) even though the BIOS supports it. Match i386
behaviour.

Cc: lenb@kernel.org
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Andi Kleen authored
After a lot of discussions with AMD it turns out that the TSC on
Fam10h CPUs is synchronized when the CONSTANT_TSC cpuid bit is set.
Or rather: if there are ever systems where that is not true, it would
be their BIOS' task to disable the bit.

So finally use TSC-based gettimeofday() on Fam10h by default. Or
rather: it is now always used on CPUs where the AMD-specific
CONSTANT_TSC bit is set.

This gives a nice speed boost for gettimeofday() on these systems,
which tends to be by far the most common v/syscall. On a Fam10h system
here, TSC gtod uses about 20% of the CPU time of acpi_pm-based gtod().
This was measured on 32-bit; on 64-bit it is even better, because TSC
gtod() can use a vsyscall and stay in ring 3, which acpi_pm doesn't.

The Intel check now simply checks for CONSTANT_TSC too, without
hardcoding the Intel vendor. This is equivalent on 64-bit, because all
64-bit capable Intel CPUs will have CONSTANT_TSC set. On Intel there
is no CPU-supplied CONSTANT_TSC bit currently, but we synthesize one
based on hardcoded knowledge of which steppings have p-state invariant
TSC.

So the new logic is: on CPUs which have the AMD-specific CONSTANT_TSC
bit set, or on Intel CPUs which are new enough to be known to have
p-state invariant TSC, always use TSC-based gettimeofday().

Cc: lenb@kernel.org
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
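[editor's illustration] A minimal sketch of the vendor-independent check this describes, assuming the kernel's feature-flag helpers; the real decision function has more cases and this is not the exact patch:

====
/* Sketch: trust the TSC for gettimeofday() whenever CONSTANT_TSC is
 * set, whether the CPU reported it (AMD Fam10h) or the kernel
 * synthesized it (known Intel steppings). Returns 1 if unusable. */
static int unsynchronized_tsc(void)
{
        if (cpu_has(&boot_cpu_data, X86_FEATURE_CONSTANT_TSC))
                return 0;                       /* TSC is safe for gtod */
        return num_present_cpus() > 1;          /* otherwise distrust on SMP */
}
====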
-
Andi Kleen authored
The next patch needs this in time_init(), which runs early.

This includes a minor fix on i386, where early_intel_workarounds()
[which is now called early_init_intel] really executes early, as the
comments say.

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Andi Kleen authored
We need to know whether RDTSC is synchronous or not. Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
[ andi@firstfloor.org: build fix ] Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Andi Kleen authored
rdtsc is now speculation-safe, so no need for the sync variants of the APIs. [ mingo@elte.hu: removed the nsec_barrier() complication. ] Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Make native_read_tsc() always non-speculative. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Map vsyscalls early enough. This is important if a __vsyscall_fn function is used by other kernel code too. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Move native_read_tsc() offline (out of line). Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Andi Kleen authored
rdtsc_barrier() is a new barrier primitive that stops RDTSC
speculation, to avoid races with timer interrupts on other CPUs. It
expands to either LFENCE (on Intel CPUs) or MFENCE (on AMD CPUs),
which stops RDTSC speculation on all currently known
microarchitectures that implement SSE. On CPUs without SSE there is
generally no RDTSC speculation.

[ mingo@elte.hu: renamed it to rdtsc_barrier() and made it x86-only ]

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
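[editor's illustration] A hedged reconstruction of what such a primitive can look like, using the kernel's alternative() boot-time patching and the two RDTSC-serializing feature bits from the Intel/AMD commits further down this log; treat it as a sketch, not the exact patch:

====
/* Patched at boot to a real fence only on CPUs that set the matching
 * feature bit; everywhere else each site stays a 3-byte NOP. */
static inline void rdtsc_barrier(void)
{
        alternative(ASM_NOP3, "mfence", X86_FEATURE_MFENCE_RDTSC);
        alternative(ASM_NOP3, "lfence", X86_FEATURE_LFENCE_RDTSC);
}
====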
-
WANG Cong authored
Acked-by: Jeff Dike <jdike@addtoit.com> Signed-off-by: WANG Cong <xiyou.wangcong@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Andi Kleen authored
Moving things out of processor.h is always a good thing. It is also needed to avoid an include loop in a later patch. Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
LFENCE is available on Intel CPUs with XMM2 or higher, not XMM or higher. This caused boot failures on CPUs that have XMM but not XMM2. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Andi Kleen authored
According to Intel, RDTSC can always be synchronized with LFENCE on
all current CPUs. Implement the necessary CPUID bit for that. It is
not yet clear whether that will hold for all future CPUs too, but if
it turns out otherwise the kernel can always be updated.

Cc: asit.k.mallick@intel.com
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Andi Kleen authored
According to AMD, RDTSC can be synchronized through MFENCE. Implement
the necessary CPUID bit for that.

Cc: andreas.herrmann3@amd.com
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
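[editor's illustration] A sketch of how this and the preceding Intel commit would plumb the synthetic bits into CPU setup; the feature-bit names match the kernel, but the surrounding code and placement are assumptions:

====
/* In the Intel setup path: LFENCE serializes RDTSC on all current
 * Intel CPUs, per the commit above. */
set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC);

/* In the AMD setup path: MFENCE is the RDTSC-serializing fence. */
set_cpu_cap(c, X86_FEATURE_MFENCE_RDTSC);
====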
-
Andi Kleen authored
Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Carlos R. Mafra authored
This patch fixes all errors pointed out by checkpatch.pl.

                                        errors  lines of code  errors/KLOC
arch/x86/mm/k8topology_64.c (before)        72            185        389.1
arch/x86/mm/k8topology_64.c (after)          0            185            0

No code changed:

   text    data     bss     dec     hex  filename
   1506       0       0    1506     5e2  k8topology_64.o.after
   1506       0       0    1506     5e2  k8topology_64.o.before

md5sum:
   f9f48331a7eca4fc60d2a03369dc5f53  k8topology_64.o.after
   f9f48331a7eca4fc60d2a03369dc5f53  k8topology_64.o.before

Signed-off-by: Carlos R. Mafra <crmafra@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Hiroshi Shimamoto authored
More white space and coding style clean up. Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Temporary debugging; remove before this hits v2.6.25. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Yinghai Lu authored
empty_zero_page is in the .bss section, and it is already cleared in clear_bss() by x86_64_start_kernel(), so don't clear it again in mem_init(). Signed-off-by: Yinghai Lu <yinghai.lu@sun.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Hiroshi Shimamoto authored
White space and coding style clean up. Make apic_32/64.c similar. Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Harvey Harrison authored
Use the force_sig_info_fault helper from X86_32 in X86_64. Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Harvey Harrison authored
Move the X86_32-only get_segment_eip to X86_64.
Move the X86_64-only is_errata93 to X86_32.
Change the X86_32 loop in is_prefetch to highlight the differences
between them. Fold the logic from __is_prefetch in as well on X86_32.

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Harvey Harrison authored
We get die() from kdebug.h, no need for forward declaration. Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Carlos R. Mafra authored
This patch fixes most errors detected by checkpatch.pl.

                                      errors  lines of code  errors/KLOC
arch/x86/oprofile/nmi_int.c (after)        1            461          2.1
arch/x86/oprofile/nmi_int.c (before)      60            477        125.7

No code changed:

size:
   text    data     bss     dec     hex  filename
   2675     264     472    3411     d53  nmi_int.o.after
   2675     264     472    3411     d53  nmi_int.o.before

md5sum:
   847aea0cc68fe1a2b5e7019439f3b4dd  nmi_int.o.after
   847aea0cc68fe1a2b5e7019439f3b4dd  nmi_int.o.before

Signed-off-by: Carlos R. Mafra <crmafra@gmail.com>
Reviewed-by: Jesper Juhl <jesper.juhl@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Quentin Barnes authored
When developing the Kprobes arch code for ARM, I ran across some code
found in the x86 and s390 Kprobes arch code which I didn't consider as
good as it could be. Once I figured out what the code was doing, I
changed the code for ARM Kprobes to work the way I felt was more
appropriate. I've tested the code this way in ARM for about a year and
would like to push the same change to the other affected
architectures.

The code in question is in kprobe_exceptions_notify(), which does:

====
	/* kprobe_running() needs smp_processor_id() */
	preempt_disable();
	if (kprobe_running() &&
	    kprobe_fault_handler(args->regs, args->trapnr))
		ret = NOTIFY_STOP;
	preempt_enable();
====

For the moment, ignore the code having the preempt_disable()/
preempt_enable() pair in it. The problem is that kprobe_running()
needs to call smp_processor_id(), which will assert if preemption is
enabled. That sanity check by smp_processor_id() makes perfect sense,
since calling it with preemption enabled would return an unreliable
result.

But the function kprobe_exceptions_notify() can be called from a
context where preemption could be enabled. If that happens, the
assertion in smp_processor_id() triggers and we're dead. So what the
original author did (speculation on my part!) is put in the
preempt_disable()/preempt_enable() pair simply to defeat the check.

Once I figured out what was going on, I considered this an
inappropriate approach. If kprobe_exceptions_notify() is called from a
preemptible context, we can't be in a kprobe processing context at
that time anyway, since kprobes requires preemption to already be
disabled. So just check whether preemption is enabled, and if so, bail
out before ever calling kprobe_running(). I wrote the ARM kprobe code
like this:

====
	/* To be potentially processing a kprobe fault and to
	 * trust the result from kprobe_running(), we have to
	 * be non-preemptible. */
	if (!preemptible() && kprobe_running() &&
	    kprobe_fault_handler(args->regs, args->trapnr))
		ret = NOTIFY_STOP;
====

The above code has been working fine for ARM Kprobes for a year. So I
changed the x86 code (2.6.24-rc6) to be the same way and ran the
Systemtap tests on that kernel. As on ARM, Systemtap on x86 comes up
with the same test results either way, so it's a neutral external
functional change (as expected).

This issue has been discussed previously on the linux-arm-kernel and
Systemtap mailing lists. Pointers to the two discussions:

http://lists.arm.linux.org.uk/lurker/message/20071219.223225.1f5c2a5e.en.html
http://sourceware.org/ml/systemtap/2007-q1/msg00251.html

Signed-off-by: Quentin Barnes <qbarnes@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
-