- 30 Jan, 2008 40 commits
-
Andi Kleen authored
Intel recommends first flushing the TLBs and then the caches on caching attribute changes; c_p_a() previously did it the other way around. Reorder that. The procedure is still not fully compliant with the Intel documentation, because Intel recommends an all-CPU synchronization step between the TLB flushes and the cache flushes. However, on all newer Intel CPUs this is meaningless anyway, because they support Self-Snoop and can skip the cache flush step. [ mingo@elte.hu: decoupled from clflush and ported it to x86.git ] Signed-off-by: Andi Kleen <ak@suse.de> Acked-by: Jan Beulich <jbeulich@novell.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
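As a minimal sketch of the resulting order (illustrative only; the real code performs the flush on every CPU from global_flush_tlb(), and the helper name below is hypothetical):

    static void cpa_flush_all_sketch(void)
    {
            __flush_tlb_all();      /* 1. invalidate stale TLB entries first */
            wbinvd();               /* 2. then write back and invalidate the caches */
    }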
-
Andi Kleen authored
fix: > hm, i just found a failing 64-bit .config while testing your CPA > patchset: > > [ 1.916541] CPA mapping 4k 0 large 2048 gb 0 x 0[0-0] miss 0 > [ 1.919874] Unable to handle kernel paging request at 000000000335aea8 RIP: > [ 1.919874] [<ffffffff8021d2d3>] change_page_attr+0x3/0x61 > [ 1.919874] PGD 0 > [ 1.919874] Oops: 0000 [1] > [ 1.919874] CPU 0 This handles addresses which don't have a mem_map entry. Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
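The oops comes from touching a struct page that does not exist; a hedged sketch of the kind of guard added (the exact placement inside the CPA path is assumed, not quoted from the patch):

    /* Sketch: bail out for addresses whose pfn has no mem_map entry, so
     * virt_to_page()/pfn_to_page() is never dereferenced for them. */
    unsigned long pfn = __pa(address) >> PAGE_SHIFT;

    if (!pfn_valid(pfn))
            return 0;       /* nothing to do for holes / unmapped ranges */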
-
Andi Kleen authored
The pte_* modifier functions that cleared bits dropped the NX bit on 32bit PAE because they only operated in int, but NX is in bit 63. Fix that by adding appropriate casts so that the arithmetic happens as long long on PAE kernels. I decided to just use 64bit arithmetic instead of open coding like pte_modify() because gcc should generate good enough code for that now. Signed-off-by: Andi Kleen <ak@suse.de> Acked-by: Jan Beulich <jbeulich@novell.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
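To illustrate the bug class: on PAE a pte is 64 bits wide and _PAGE_NX sits in bit 63, so doing the mask arithmetic in a 32-bit type truncates the top half. A hedged before/after sketch; _PAGE_RW_32 and the helper names are hypothetical stand-ins for the real flag constants and pte_*() functions:

    #define _PAGE_RW_32     0x002U  /* hypothetical 32-bit flag constant */

    /* Broken: ~mask is computed as a 32-bit value, zero-extends to 64 bits,
     * and the AND then clears bits 32..63 - including NX in bit 63. */
    static inline pte_t pte_wrprotect_broken(pte_t pte)
    {
            return __pte(pte_val(pte) & ~_PAGE_RW_32);
    }

    /* Fixed: cast so the complement and the AND happen as long long. */
    static inline pte_t pte_wrprotect_fixed(pte_t pte)
    {
            return __pte(pte_val(pte) & ~(unsigned long long)_PAGE_RW_32);
    }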
-
Andi Kleen authored
64bit already had it. Needed for later patches. Signed-off-by: Andi Kleen <ak@suse.de> Acked-by: Jan Beulich <jbeulich@novell.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Andi Kleen authored
No need to make it 64bit there. Signed-off-by: Andi Kleen <ak@suse.de> Acked-by: Jan Beulich <jbeulich@novell.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Andi Kleen authored
virt_to_page does not care about the bits below page granularity, so don't mask them. Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
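For reference, the flat-memory definition as commonly found in the x86 headers of that era makes the point directly; masking the low bits before the call is therefore redundant:

    /* __pa(kaddr) >> PAGE_SHIFT already discards the offset-within-page
     * bits, so virt_to_page(addr) and virt_to_page(addr & PAGE_MASK)
     * resolve to the same struct page. */
    #define virt_to_page(kaddr)     pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)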
-
Andi Kleen authored
Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
enhance the debug warning in early_ioremap_init(). Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
fix a long-standing weakness of the early-ioremap allocator: it uses a single pgd entry for the boot mappings, and was not properly protecting itself against crossing a 2MB (4MB) boundary. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
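A hedged sketch of the sanity check this implies: at early_ioremap_init() time, verify that the whole FIX_BTMAP fixmap range lives under a single pmd (one 2MB/4MB entry), since only one boot-time slot is set up for it; the warning text is illustrative.

    if ((__fix_to_virt(FIX_BTMAP_BEGIN) >> PMD_SHIFT) !=
        (__fix_to_virt(FIX_BTMAP_END) >> PMD_SHIFT)) {
            printk(KERN_WARNING "early_ioremap: FIX_BTMAP range crosses "
                   "a pmd boundary - fixmap layout needs adjusting\n");
    }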
-
Ingo Molnar authored
Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Huang, Ying authored
This patch makes "early_ioremap_debug" an early parameter, because early_ioremap()/early_iounmap() are only used during the early boot stage. Signed-off-by: Huang Ying <ying.huang@intel.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
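This uses the kernel's early_param() hook, which is parsed before the regular __setup() options; a minimal sketch of the registration (the handler body is representative, not copied from the patch):

    #include <linux/init.h>

    static int early_ioremap_debug __initdata;

    static int __init early_ioremap_debug_setup(char *str)
    {
            early_ioremap_debug = 1;
            return 0;
    }
    early_param("early_ioremap_debug", early_ioremap_debug_setup);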
-
Ingo Molnar authored
add early_ioremap() debug printouts via the early_ioremap_debug boot option. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
increase max early_ioremap() remapping size from 64K to 256K. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
- allow nesting of up to 4 levels Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
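One way to support this, sketched under the assumption that a simple depth counter selects which quarter of the boot-time fixmap range a call uses (the helper name is hypothetical):

    #define FIX_BTMAPS_NESTING      4

    static int __initdata early_ioremap_nested;     /* current nesting depth */

    static int __init early_ioremap_get_slot(void)
    {
            if (early_ioremap_nested >= FIX_BTMAPS_NESTING)
                    return -1;                      /* nested too deeply */
            return early_ioremap_nested++;          /* slot index 0..3 */
    }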
-
Huang, Ying authored
This patch fixes a bug in early_ioremap_reset(). Signed-off-by: Huang Ying <ying.huang@intel.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Huang, Ying authored
This patch renames bt_ioremap to early_ioremap, the name already used on x86_64. This makes it easier to merge the i386 and x86_64 usage. [ mingo@elte.hu: fix ] Signed-off-by: Huang Ying <ying.huang@intel.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Huang, Ying authored
This patch replaces the boot_ioremap invocations with bt_ioremap and removes the boot_ioremap implementation. Signed-off-by: Huang Ying <ying.huang@intel.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Huang, Ying authored
This patch makes it possible for bt_ioremap() to be used before paging_init(), by providing an early implementation of set_fixmap() that can be used before paging_init(). This way boot_ioremap() can be replaced by bt_ioremap(). Signed-off-by: Huang Ying <ying.huang@intel.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
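A hedged sketch of the idea (names follow the later mainline code, not necessarily this patch): point the fixmap slot at a statically allocated pte page, so it can be wired up before the real page tables exist.

    /* Boot-time pte page backing the fixmap range used by bt_ioremap(). */
    static pte_t bm_pte[PTRS_PER_PTE] __attribute__((aligned(PAGE_SIZE)));

    static void __init early_set_fixmap(enum fixed_addresses idx, unsigned long phys)
    {
            unsigned long addr = __fix_to_virt(idx);
            pte_t *pte = &bm_pte[pte_index(addr)];

            set_pte(pte, pfn_pte(phys >> PAGE_SHIFT, PAGE_KERNEL));
            __flush_tlb_one(addr);
    }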
-
Siddha, Suresh B authored
Also use _PAGE_PWT for all the mappings which need an uncached mapping. Instead of the existing PAT2, which is UC- (and can be overridden by MTRRs), we now use PAT3, which is strong uncacheable. This makes it consistent with pgprot_noncached(). Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
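For context, the PAT index of a mapping is built from the _PAGE_PAT, _PAGE_PCD and _PAGE_PWT bits; with the power-on default PAT, index 2 (PCD only) is UC- and can be weakened by an MTRR, while index 3 (PCD|PWT) is strong UC. An illustrative definition of the resulting uncached protection (not necessarily the literal hunk from the patch):

    /* PAT index = (PAT << 2) | (PCD << 1) | PWT
     *   index 2 (PCD)     -> UC-  (MTRRs may override, e.g. to WC)
     *   index 3 (PCD|PWT) -> UC   (strong uncacheable)
     * Matching pgprot_noncached(), uncached kernel mappings set both bits: */
    #define __PAGE_KERNEL_NOCACHE   (__PAGE_KERNEL | _PAGE_PCD | _PAGE_PWT)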
-
Joerg Roedel authored
This patch replaces the manual permission setup for pages in ioremap_64.c with the pre-defined __PAGE_KERNEL_EXEC value. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Thomas Gleixner authored
Coding style cleanup before modifying the file. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
fix 15 checkpatch warnings. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Andi Kleen authored
Since change_page_attr() is tricky code, it is good to have some regression test code. This patch maps and unmaps some random pages in the direct mapping at boot, then dumps the state and does some simple sanity checks. Add it behind a CONFIG option. Signed-off-by: Andi Kleen <ak@suse.de> Acked-by: Jan Beulich <jbeulich@novell.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
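To give a feel for the API being exercised, here is a hedged, simplified sketch of such a self-test; the real test works on random direct-mapping pages and does far more checking, while this sketch only flips attributes on pages it allocated itself, using the change_page_attr()/global_flush_tlb() interface of the time:

    static int __init cpa_selftest(void)
    {
            unsigned long buf = __get_free_pages(GFP_KERNEL, 2);    /* 4 pages */
            struct page *page;
            int i, err = 0;

            if (!buf)
                    return -ENOMEM;

            for (i = 0; i < 4; i++) {
                    page = virt_to_page(buf + i * PAGE_SIZE);
                    err |= change_page_attr(page, 1, PAGE_KERNEL_RO);
                    err |= change_page_attr(page, 1, PAGE_KERNEL);
            }
            global_flush_tlb();
            free_pages(buf, 2);
            return err ? -EINVAL : 0;
    }
    late_initcall(cpa_selftest);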
-
Ingo Molnar authored
based on this patch from Andi Kleen: | Subject: CPA: Return the page table level in lookup_address() | From: Andi Kleen <ak@suse.de> | | Needed for the next change. | | And change all the callers. and ported it to x86.git. Signed-off-by: Andi Kleen <ak@suse.de> Acked-by: Jan Beulich <jbeulich@novell.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
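A hedged usage sketch of the changed interface; the exact type of the level argument has varied over time, so int is assumed here:

    int level;
    pte_t *pte = lookup_address(address, &level);

    if (pte && pte_present(*pte))
            printk(KERN_DEBUG "addr %lx is mapped at page-table level %d\n",
                   address, level);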
-
Andi Kleen authored
Needed for some test code. Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Andi Kleen authored
- Rename it to pte_exec() from pte_exec_kernel(). There is nothing kernel specific in there. - Move it into the common file because _PAGE_NX is 0 on !PAE, so pte_exec() will always evaluate to true there. Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
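A sketch of what the shared definition looks like; since !PAE builds define _PAGE_NX as 0, the expression constant-folds to true there:

    static inline int pte_exec(pte_t pte)
    {
            return !(pte_val(pte) & _PAGE_NX);      /* always 1 when _PAGE_NX == 0 */
    }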
-
Harvey Harrison authored
Similar to x86 64-bit. Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Andi Kleen authored
When CONFIG_DEBUG_RODATA is enabled, undo the RO mapping and then redo it. This gives some simple testing for change_page_attr(). Signed-off-by: Andi Kleen <ak@suse.de> Acked-by: Jan Beulich <jbeulich@novell.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
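Roughly, this amounts to the following in mark_rodata_ro(), using the 64-bit CPA interface of the time (reconstructed from the description, so treat it as a sketch):

    unsigned long start = (unsigned long)__start_rodata;
    unsigned long end = (unsigned long)__end_rodata;

    /* Testing CPA: briefly make .rodata writable again, then read-only. */
    change_page_attr_addr(start, (end - start) >> PAGE_SHIFT, PAGE_KERNEL);
    global_flush_tlb();
    change_page_attr_addr(start, (end - start) >> PAGE_SHIFT, PAGE_KERNEL_RO);
    global_flush_tlb();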
-
Ingo Molnar authored
clean up arch/x86/mm/pageattr_64.c. no code changed: text data bss dec hex filename 1751 16 0 1767 6e7 pageattr_64.o.before 1751 16 0 1767 6e7 pageattr_64.o.after Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
clean up arch/x86/mm/pageattr_32.c. no code changed: text data bss dec hex filename 1255 40 0 1295 50f pageattr_32.o.before 1255 40 0 1295 50f pageattr_32.o.after Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Prepare ioremap() to default to uncached. This will be the safest - but first we have to fix CPA. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Yinghai Lu authored
remove the recursion from this function. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Jeremy Fitzhardinge authored
If SHARED_KERNEL_PMD is false, then we need to allocate and initialize the kernel pmd. We can easily piggy-back this onto the existing pmd prepopulation code. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Jeremy Fitzhardinge authored
In PAE mode, an update to the pgd requires a cr3 reload to make sure the processor notices the changes. Since this also has the side-effect of flushing the tlb, it's an expensive operation which we want to avoid where possible. This patch mitigates the cost of installing the initial set of pmds on process creation by preallocating them when the pgd is allocated. This avoids up to three tlb flushes during exec, as it creates the new process address space while the pagetable is in active use. The pmds will be freed as part of the normal pagetable teardown in free_pgtables, which is called in munmap and process exit. However, free_pgtables will only free parts of the pagetable which actually contain mappings, so stray pmds may still be attached to the pgd at pgd_free time. We must mop them up to prevent a memory leak. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Cc: Andi Kleen <ak@suse.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: William Irwin <wli@holomorphy.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
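A hedged sketch of the preallocation step, with helper names following later mainline code rather than this exact patch; error unwinding is elided:

    static int pgd_prepopulate_pmds(struct mm_struct *mm, pgd_t *pgd)
    {
            int i;

            for (i = 0; i < USER_PTRS_PER_PGD; i++) {
                    pmd_t *pmd = pmd_alloc_one(mm, i * PGDIR_SIZE);
                    pud_t *pud = pud_offset(&pgd[i], 0);    /* pud is folded on PAE */

                    if (!pmd)
                            return -ENOMEM;         /* real code frees what was installed */
                    pud_populate(mm, pud, pmd);
            }
            return 0;
    }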
-
Jeremy Fitzhardinge authored
Deal properly with pmd-level pages being allocated and freed dynamically. We can handle them more or less the same as pte pages. Also, deal with early_ioremap pagetable manipulations. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Jeremy Fitzhardinge authored
Convert macros into inline functions, for better type-checking. This patch required a little bit of fiddling with headers in order to make __(pte|pmd)_free_tlb inline rather than macros. asm-generic/tlb.h includes asm/pgalloc.h, though it doesn't directly use any pgalloc definitions. I removed this include to avoid an include cycle, but it may cause secondary compile failures in code that relied on the indirect inclusion; arch/x86/mm/hugetlbpage.c was one such place; there may be others. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
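An example of the kind of conversion, using the pte free helper (the mm argument shown matches later mainline signatures, which may postdate this exact patch):

    /*
     * Before (macro, no argument type checking):
     *   #define pte_free_kernel(mm, pte)    free_page((unsigned long)(pte))
     *
     * After: an inline function, so callers get real type checking on pte_t *.
     */
    static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
    {
            free_page((unsigned long)pte);
    }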
-
Jeremy Fitzhardinge authored
Add mm to paravirt_alloc_pd, partly to make it consistent with paravirt_alloc_pt, and because later changes will make use of it. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Jeremy Fitzhardinge authored
Looks like a mismerge/misapply dropped one of the cases of pte flag masking for Xen. Also, only mask the flags for present ptes. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-