Commit 24603fdd authored by Vineet Gupta

ARC: [mm] optimise icache flush for user mappings

The ARC icache doesn't snoop the dcache, so executable pages need to be
made coherent before being mapped into userspace, in flush_icache_page().

However, the ARC700 CDU (hardware cache flush module) requires both the
vaddr (index into the cache) and the paddr (tag match) to correctly
identify a line in the VIPT cache. A typical ARC700 SoC has an aliasing
icache, so the paddr-only flush_icache_page() API couldn't be implemented
efficiently: it had to loop through all possible alias indexes and perform
the invalidate at each one (of course the cache op only succeeds at the
index(es) where the tag matches, typically just one, but the cost of
visiting all the cache bins has to be paid nevertheless).

It turns out, however, that the vaddr (along with the paddr) is available
in update_mmu_cache(), which therefore better suits ARC icache flush
semantics. With both vaddr and paddr, exactly one flush operation per
line is done.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
parent 8d56bec2
@@ -20,12 +20,20 @@
 #include <linux/mm.h>
 
+/*
+ * Semantically we need this because icache doesn't snoop dcache/dma.
+ * However ARC Cache flush requires paddr as well as vaddr, latter not available
+ * in the flush_icache_page() API. So we no-op it but do the equivalent work
+ * in update_mmu_cache()
+ */
+#define flush_icache_page(vma, page)
+
 void flush_cache_all(void);
 void flush_icache_range(unsigned long start, unsigned long end);
-void flush_icache_page(struct vm_area_struct *vma, struct page *page);
 void flush_icache_range_vaddr(unsigned long paddr, unsigned long u_vaddr,
 			      int len);
+void __inv_icache_page(unsigned long paddr, unsigned long vaddr);
 
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE  1
@@ -716,18 +716,10 @@ void flush_icache_range_vaddr(unsigned long paddr, unsigned long u_vaddr,
 	__dc_line_op(paddr, len, OP_FLUSH);
 }
 
-/*
- * XXX: This also needs to be optim using pg_arch_1
- * This is called when a page-cache page is about to be mapped into a
- * user process' address space. It offers an opportunity for a
- * port to ensure d-cache/i-cache coherency if necessary.
- */
-void flush_icache_page(struct vm_area_struct *vma, struct page *page)
+/* wrapper to compile time eliminate alignment checks in flush loop */
+void __inv_icache_page(unsigned long paddr, unsigned long vaddr)
 {
-	if (!(vma->vm_flags & VM_EXEC))
-		return;
-
-	__ic_line_inv((unsigned long)page_address(page), PAGE_SIZE);
+	__ic_line_inv_vaddr(paddr, vaddr, PAGE_SIZE);
 }
 
 void flush_icache_all(void)
@@ -422,12 +422,18 @@ void create_tlb(struct vm_area_struct *vma, unsigned long address, pte_t *ptep)
  * when a new PTE is entered in Page Tables or an existing one
  * is modified. We aggresively pre-install a TLB entry
  */
-void update_mmu_cache(struct vm_area_struct *vma, unsigned long vaddress,
+void update_mmu_cache(struct vm_area_struct *vma, unsigned long vaddr_unaligned,
 		      pte_t *ptep)
 {
-	create_tlb(vma, vaddress, ptep);
+	unsigned long vaddr = vaddr_unaligned & PAGE_MASK;
+
+	create_tlb(vma, vaddr, ptep);
+
+	/* icache doesn't snoop dcache, thus needs to be made coherent here */
+	if (vma->vm_flags & VM_EXEC) {
+		unsigned long paddr = pte_val(*ptep) & PAGE_MASK;
+		__inv_icache_page(paddr, vaddr);
+	}
 }
 
 /* Read the Cache Build Confuration Registers, Decode them and save into