Commit 3a255267 authored by Matthew Wilcox (Oracle), committed by Andrew Morton

mm: add generic flush_icache_pages() and documentation

flush_icache_page() is deprecated but not yet removed, so add a range
version of it.  Change the documentation to refer to
update_mmu_cache_range() instead of update_mmu_cache().

Link: https://lkml.kernel.org/r/20230802151406.3735276-4-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent a3793220
@@ -88,13 +88,17 @@ changes occur:

 	This is used primarily during fault processing.

-5) ``void update_mmu_cache(struct vm_area_struct *vma,
-   unsigned long address, pte_t *ptep)``
+5) ``void update_mmu_cache_range(struct vm_fault *vmf,
+   struct vm_area_struct *vma, unsigned long address, pte_t *ptep,
+   unsigned int nr)``

-	At the end of every page fault, this routine is invoked to
-	tell the architecture specific code that a translation
-	now exists at virtual address "address" for address space
-	"vma->vm_mm", in the software page tables.
+	At the end of every page fault, this routine is invoked to tell
+	the architecture specific code that translations now exist
+	in the software page tables for address space "vma->vm_mm"
+	at virtual address "address" for "nr" consecutive pages.
+
+	This routine is also invoked in various other places which pass
+	a NULL "vmf".

 	A port may use this information in any way it so chooses.
 	For example, it could use this event to pre-load TLB
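
For illustration only, and not part of the patch itself: the new hook is a
straightforward widening of the old one, so a port whose hardware hook handles
one translation at a time could implement the range interface as a simple
loop.  __update_mmu_cache_one() below is a made-up helper standing in for
whatever the port previously did in its single-page update_mmu_cache().

static inline void update_mmu_cache_range(struct vm_fault *vmf,
		struct vm_area_struct *vma, unsigned long address,
		pte_t *ptep, unsigned int nr)
{
	/* Apply the old single-page update once per consecutive page. */
	for (; nr--; ptep++, address += PAGE_SIZE)
		__update_mmu_cache_one(vma, address, ptep);	/* hypothetical helper */
}
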
@@ -306,17 +310,18 @@ maps this page at its virtual address.
 	private". The kernel guarantees that, for pagecache pages, it will
 	clear this bit when such a page first enters the pagecache.

-	This allows these interfaces to be implemented much more efficiently.
-	It allows one to "defer" (perhaps indefinitely) the actual flush if
-	there are currently no user processes mapping this page. See sparc64's
-	flush_dcache_page and update_mmu_cache implementations for an example
-	of how to go about doing this.
+	This allows these interfaces to be implemented much more
+	efficiently. It allows one to "defer" (perhaps indefinitely) the
+	actual flush if there are currently no user processes mapping this
+	page. See sparc64's flush_dcache_page and update_mmu_cache_range
+	implementations for an example of how to go about doing this.

-	The idea is, first at flush_dcache_page() time, if page_file_mapping()
-	returns a mapping, and mapping_mapped on that mapping returns %false,
-	just mark the architecture private page flag bit. Later, in
-	update_mmu_cache(), a check is made of this flag bit, and if set the
-	flush is done and the flag bit is cleared.
+	The idea is, first at flush_dcache_page() time, if
+	page_file_mapping() returns a mapping, and mapping_mapped on that
+	mapping returns %false, just mark the architecture private page
+	flag bit. Later, in update_mmu_cache_range(), a check is made
+	of this flag bit, and if set the flush is done and the flag bit
+	is cleared.

 	.. important::
@@ -369,7 +374,7 @@ maps this page at its virtual address.
   ``void flush_icache_page(struct vm_area_struct *vma, struct page *page)``

 	All the functionality of flush_icache_page can be implemented in
-	flush_dcache_page and update_mmu_cache. In the future, the hope
+	flush_dcache_page and update_mmu_cache_range. In the future, the hope
 	is to remove this interface completely.

 The final category of APIs is for I/O to deliberately aliased address
...
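
To make the deferral scheme above concrete, a minimal sketch (not any real
port's code) might look like the following.  PG_arch_1 stands in for the
architecture-private page flag the text refers to, and __flush_dcache_page()
is an assumed low-level flush primitive.

void flush_dcache_page(struct page *page)
{
	struct address_space *mapping = page_file_mapping(page);

	/* No user mapping yet: just remember that a flush is owed. */
	if (mapping && !mapping_mapped(mapping)) {
		set_bit(PG_arch_1, &page->flags);
		return;
	}

	__flush_dcache_page(page);		/* assumed low-level helper */
}

void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
			    unsigned long address, pte_t *ptep, unsigned int nr)
{
	unsigned int i;

	for (i = 0; i < nr; i++) {
		struct page *page = pte_page(ptep[i]);

		/* A user mapping now exists: do the deferred flush. */
		if (test_and_clear_bit(PG_arch_1, &page->flags))
			__flush_dcache_page(page);
	}
}
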
@@ -78,6 +78,11 @@ static inline void flush_icache_range(unsigned long start, unsigned long end)
 #endif

 #ifndef flush_icache_page
+static inline void flush_icache_pages(struct vm_area_struct *vma,
+				      struct page *page, unsigned int nr)
+{
+}
+
 static inline void flush_icache_page(struct vm_area_struct *vma,
 				     struct page *page)
 {
...
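
The generic stubs above are intentionally empty, since many architectures keep
the instruction cache coherent in hardware.  A port that does need the flush
might fill them in roughly as below; sync_icache_dcache() is an assumed
arch-internal helper, and the only fixed relationship is that
flush_icache_page() is the nr == 1 case of flush_icache_pages().

static inline void flush_icache_pages(struct vm_area_struct *vma,
				      struct page *page, unsigned int nr)
{
	unsigned int i;

	/* Only executable mappings can execute stale instructions. */
	if (!(vma->vm_flags & VM_EXEC))
		return;

	for (i = 0; i < nr; i++)
		sync_icache_dcache(page + i);	/* assumed arch helper */
}

static inline void flush_icache_page(struct vm_area_struct *vma,
				     struct page *page)
{
	flush_icache_pages(vma, page, 1);
}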