Commit edf20d3a authored by Andrew Morton, committed by James Bottomley

[PATCH] Remove flush_page_to_ram()

From: Hugh Dickins <hugh@veritas.com>

This patch removes the long deprecated flush_page_to_ram.  We have
two different schemes for doing this cache flushing stuff, the old
flush_page_to_ram way and the not so old flush_dcache_page etc. way:
see DaveM's Documentation/cachetlb.txt.  Keeping flush_page_to_ram
around is confusing, and makes it harder to get this done right.

All architectures are updated, but the only ones where it amounts
to more than deleting a line or two are m68k, mips, mips64 and v850.

I followed a prescription from DaveM (though not to the letter), that
those arches with non-nop flush_page_to_ram need to do what it did
in their clear_user_page and copy_user_page and flush_dcache_page.

Dave is concerned that, in the v850 nb85e case, this patch leaves its
flush_dcache_page as it was and uses it in clear_user_page and copy_user_page,
instead of making them all flush the icache as well.  That may be wrong:
I'm just hesitant to add cruft blindly, changing a flush_dcache macro
to flush icache too; and naively hope that the necessary flush_icache
calls are already in place.  Miles, please let us know which way is
right for v850 nb85e - thanks.
parent 831cbe24
......@@ -75,7 +75,7 @@ changes occur:
Platform developers note that generic code will always
invoke this interface with mm->page_table_lock held.
4) void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
4) void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
This time we need to remove the PAGE_SIZE sized translation
from the TLB. The 'vma' is the backing structure used by
......@@ -87,9 +87,9 @@ changes occur:
After running, this interface must make sure that any previous
page table modification for address space 'vma->vm_mm' for
user virtual address 'page' will be visible to the cpu. That
user virtual address 'addr' will be visible to the cpu. That
is, after running, there will be no entries in the TLB for
'vma->vm_mm' for virtual address 'page'.
'vma->vm_mm' for virtual address 'addr'.
This is used primarily during fault processing.
......@@ -144,9 +144,9 @@ the sequence will be in one of the following forms:
change_range_of_page_tables(mm, start, end);
flush_tlb_range(vma, start, end);
3) flush_cache_page(vma, page);
3) flush_cache_page(vma, addr);
set_pte(pte_pointer, new_pte_val);
flush_tlb_page(vma, page);
flush_tlb_page(vma, addr);
The cache level flush will always be first, because this allows
us to properly handle systems whose caches are strict and require
......@@ -200,7 +200,7 @@ Here are the routines, one by one:
call flush_cache_page (see below) for each entry which may be
modified.
4) void flush_cache_page(struct vm_area_struct *vma, unsigned long page)
4) void flush_cache_page(struct vm_area_struct *vma, unsigned long addr)
This time we need to remove a PAGE_SIZE sized range
from the cache. The 'vma' is the backing structure used by
......@@ -211,7 +211,7 @@ Here are the routines, one by one:
"Harvard" type cache layouts).
After running, there will be no entries in the cache for
'vma->vm_mm' for virtual address 'page'.
'vma->vm_mm' for virtual address 'addr'.
This is used primarily during fault processing.
......@@ -235,7 +235,7 @@ this value.
NOTE: This does not fix shared mmaps, check out the sparc64 port for
one way to solve this (in particular SPARC_FLAG_MMAPSHARED).
Next, you have two methods to solve the D-cache aliasing issue for all
Next, you have to solve the D-cache aliasing issue for all
other cases. Please keep in mind the fact that, for a given page
mapped into some user address space, there is always at least one more
mapping, that of the kernel in its linear mapping starting at
......@@ -244,35 +244,8 @@ physical page into its address space, by implication the D-cache
aliasing problem has the potential to exist since the kernel already
maps this page at its virtual address.
First, I describe the old method to deal with this problem. I am
describing it for documentation purposes, but it is deprecated and the
latter method I describe next should be used by all new ports and all
existing ports should move over to the new mechanism as well.
flush_page_to_ram(struct page *page)
The physical page 'page' is about to be placed into the
user address space of a process. If it is possible for
stores done recently by the kernel into this physical
page, to not be visible to an arbitrary mapping in userspace,
you must flush this page from the D-cache.
If the D-cache is writeback in nature, the dirty data (if
any) for this physical page must be written back to main
memory before the cache lines are invalidated.
Admittedly, the author did not think very much when designing this
interface. It does not give the architecture enough information about
what exactly is going on, and there is no context to base a judgment
on about whether an alias is possible at all. The new interfaces to
deal with D-cache aliasing are meant to address this by telling the
architecture specific code exactly what is going on at the proper points
in time.
Here is the new interface:
void copy_user_page(void *to, void *from, unsigned long address)
void clear_user_page(void *to, unsigned long address)
void copy_user_page(void *to, void *from, unsigned long addr, struct page *page)
void clear_user_page(void *to, unsigned long addr, struct page *page)
These two routines store data in user anonymous or COW
pages. It allows a port to efficiently avoid D-cache alias
......@@ -285,8 +258,9 @@ Here is the new interface:
of the same "color" as the user mapping of the page. Sparc64
for example, uses this technique.
The "address" parameter tells the virtual address where the
user will ultimately have this page mapped.
The 'addr' parameter tells the virtual address where the
user will ultimately have this page mapped, and the 'page'
parameter gives a pointer to the struct page of the target.
If D-cache aliasing is not an issue, these two routines may
simply call memcpy/memset directly and do nothing more.
......@@ -363,5 +337,5 @@ Here is the new interface:
void flush_icache_page(struct vm_area_struct *vma, struct page *page)
All the functionality of flush_icache_page can be implemented in
flush_dcache_page and update_mmu_cache. In 2.5 the hope is to
flush_dcache_page and update_mmu_cache. In 2.7 the hope is to
remove this interface completely.
......@@ -251,7 +251,6 @@ put_gate_page (struct page *page, unsigned long address)
pte_unmap(pte);
goto out;
}
flush_page_to_ram(page);
set_pte(pte, mk_pte(page, PAGE_GATE));
pte_unmap(pte);
}
......
......@@ -195,7 +195,7 @@ int copy_strings32(int argc, u32 * argv, struct linux_binprm *bprm)
}
err = copy_from_user(kaddr + offset, (char *)A(str),
bytes_to_copy);
flush_page_to_ram(page);
flush_dcache_page(page);
kunmap(page);
if (err)
......
......@@ -183,7 +183,6 @@ static int copy_strings32(int argc, u32 *argv, struct linux_binprm *bprm)
}
err = copy_from_user(kaddr + offset, (char *)A(str), bytes_to_copy);
flush_dcache_page(page);
flush_page_to_ram(page);
kunmap(page);
if (err)
......
......@@ -2077,7 +2077,6 @@ static int copy_strings32(int argc, u32 * argv, struct linux_binprm *bprm)
err = copy_from_user(kaddr + offset, (char *)A(str),
bytes_to_copy);
flush_page_to_ram(page);
kunmap((unsigned long)kaddr);
if (err)
......
......@@ -1888,7 +1888,6 @@ static int copy_strings32(int argc, u32 * argv, struct linux_binprm *bprm)
err = copy_from_user(kaddr + offset, (char *)A(str),
bytes_to_copy);
flush_page_to_ram(page);
kunmap(page);
if (err)
......
......@@ -1378,7 +1378,6 @@ static int elf_core_dump(long signr, struct pt_regs * regs, struct file * file)
flush_cache_page(vma, addr);
kaddr = kmap(page);
DUMP_WRITE(kaddr, PAGE_SIZE);
flush_page_to_ram(page);
kunmap(page);
}
page_cache_release(page);
......
......@@ -314,7 +314,6 @@ void put_dirty_page(struct task_struct * tsk, struct page *page, unsigned long a
}
lru_cache_add_active(page);
flush_dcache_page(page);
flush_page_to_ram(page);
set_pte(pte, pte_mkdirty(pte_mkwrite(mk_pte(page, PAGE_COPY))));
pte_chain = page_add_rmap(page, pte, pte_chain);
pte_unmap(pte);
......
......@@ -9,7 +9,6 @@
#define flush_cache_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr) do { } while (0)
#define flush_page_to_ram(page) do { } while (0)
#define flush_dcache_page(page) do { } while (0)
/* Note that the following two definitions are _highly_ dependent
......
......@@ -13,7 +13,6 @@
#define flush_cache_mm(mm) do { } while (0)
#define flush_cache_range(vma,start,end) do { } while (0)
#define flush_cache_page(vma,vmaddr) do { } while (0)
#define flush_page_to_ram(page) do { } while (0)
#define invalidate_dcache_range(start,end) do { } while (0)
#define clean_dcache_range(start,end) do { } while (0)
......
......@@ -70,13 +70,6 @@
cpu_cache_clean_invalidate_range((unsigned long)start, \
((unsigned long)start) + size, 0);
/*
* This is an obsolete interface; the functionality that was provided by this
* function is now merged into our flush_dcache_page, flush_icache_page,
* copy_user_page and clear_user_page functions.
*/
#define flush_page_to_ram(page) do { } while (0)
/*
* flush_dcache_page is used when the kernel has written to the page
* cache page at virtual address page->virtual.
......
......@@ -121,7 +121,6 @@ extern void paging_init(void);
#define flush_cache_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr) do { } while (0)
#define flush_page_to_ram(page) do { } while (0)
#define flush_dcache_page(page) do { } while (0)
#define flush_icache_range(start, end) do { } while (0)
#define flush_icache_page(vma,pg) do { } while (0)
......
......@@ -9,7 +9,6 @@
#define flush_cache_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr) do { } while (0)
#define flush_page_to_ram(page) do { } while (0)
#define flush_dcache_page(page) do { } while (0)
#define flush_icache_range(start, end) do { } while (0)
#define flush_icache_page(vma,pg) do { } while (0)
......
......@@ -20,7 +20,6 @@
#define flush_cache_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr) do { } while (0)
#define flush_page_to_ram(page) do { } while (0)
#define flush_icache_page(vma,page) do { } while (0)
#define flush_dcache_page(page) \
......
......@@ -106,7 +106,6 @@ extern inline void flush_cache_page(struct vm_area_struct *vma,
/* Push the page at kernel virtual address and clear the icache */
/* RZ: use cpush %bc instead of cpush %dc, cinv %ic */
#define flush_page_to_ram(page) __flush_page_to_ram(page_address(page))
extern inline void __flush_page_to_ram(void *vaddr)
{
if (CPU_IS_040_OR_060) {
......@@ -125,7 +124,7 @@ extern inline void __flush_page_to_ram(void *vaddr)
}
}
#define flush_dcache_page(page) do { } while (0)
#define flush_dcache_page(page) __flush_page_to_ram(page_address(page))
#define flush_icache_page(vma,pg) do { } while (0)
#define flush_icache_user_range(vma,pg,adr,len) do { } while (0)
......
......@@ -79,8 +79,14 @@ static inline void clear_page(void *page)
#define copy_page(to,from) memcpy((to), (from), PAGE_SIZE)
#endif
#define clear_user_page(page, vaddr, pg) clear_page(page)
#define copy_user_page(to, from, vaddr, pg) copy_page(to, from)
#define clear_user_page(addr, vaddr, page) \
do { clear_page(addr); \
flush_dcache_page(page); \
} while (0)
#define copy_user_page(to, from, vaddr, page) \
do { copy_page(to, from); \
flush_dcache_page(page); \
} while (0)
/*
* These are used to make use of C type-checking..
......
......@@ -10,7 +10,6 @@
#define flush_cache_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr) do { } while (0)
#define flush_page_to_ram(page) do { } while (0)
#define flush_dcache_range(start,len) do { } while (0)
#define flush_dcache_page(page) do { } while (0)
#define flush_icache_range(start,len) __flush_cache_all()
......
......@@ -25,8 +25,15 @@ extern void (*_copy_page)(void * to, void * from);
#define clear_page(page) _clear_page(page)
#define copy_page(to, from) _copy_page(to, from)
#define clear_user_page(page, vaddr) clear_page(page)
#define copy_user_page(to, from, vaddr) copy_page(to, from)
#define clear_user_page(addr, vaddr, page) \
do { clear_page(addr); \
flush_dcache_page(page); \
} while (0)
#define copy_user_page(to, from, vaddr, page) \
do { copy_page(to, from); \
flush_dcache_page(page); \
} while (0)
/*
* These are used to make use of C type-checking..
......
......@@ -24,7 +24,6 @@
* - flush_cache_mm(mm) flushes the specified mm context's cache lines
* - flush_cache_page(mm, vmaddr) flushes a single page
* - flush_cache_range(vma, start, end) flushes a range of pages
* - flush_page_to_ram(page) write back kernel page to ram
* - flush_icache_range(start, end) flush a range of instructions
*/
extern void (*_flush_cache_all)(void);
......@@ -39,15 +38,13 @@ extern void (*_flush_icache_range)(unsigned long start, unsigned long end);
extern void (*_flush_icache_page)(struct vm_area_struct *vma,
struct page *page);
#define flush_dcache_page(page) do { } while (0)
#define flush_cache_all() _flush_cache_all()
#define __flush_cache_all() ___flush_cache_all()
#define flush_cache_mm(mm) _flush_cache_mm(mm)
#define flush_cache_range(vma,start,end) _flush_cache_range(vma,start,end)
#define flush_cache_page(vma,page) _flush_cache_page(vma, page)
#define flush_cache_sigtramp(addr) _flush_cache_sigtramp(addr)
#define flush_page_to_ram(page) _flush_page_to_ram(page)
#define flush_dcache_page(page) _flush_page_to_ram(page)
#define flush_icache_range(start, end) _flush_icache_range(start,end)
#define flush_icache_page(vma, page) _flush_icache_page(vma, page)
......
......@@ -25,8 +25,15 @@ extern void (*_copy_page)(void * to, void * from);
#define clear_page(page) _clear_page(page)
#define copy_page(to, from) _copy_page(to, from)
#define clear_user_page(page, vaddr) clear_page(page)
#define copy_user_page(to, from, vaddr) copy_page(to, from)
#define clear_user_page(addr, vaddr, page) \
do { clear_page(addr); \
flush_dcache_page(page); \
} while (0)
#define copy_user_page(to, from, vaddr, page) \
do { copy_page(to, from); \
flush_dcache_page(page); \
} while (0)
/*
* These are used to make use of C type-checking..
......
......@@ -25,7 +25,6 @@
* - flush_cache_mm(mm) flushes the specified mm context's cache lines
* - flush_cache_page(mm, vmaddr) flushes a single page
* - flush_cache_range(vma, start, end) flushes a range of pages
* - flush_page_to_ram(page) write back kernel page to ram
*/
extern void (*_flush_cache_mm)(struct mm_struct *mm);
extern void (*_flush_cache_range)(struct vm_area_struct *vma, unsigned long start,
......@@ -34,14 +33,12 @@ extern void (*_flush_cache_page)(struct vm_area_struct *vma, unsigned long page)
extern void (*_flush_page_to_ram)(struct page * page);
#define flush_cache_all() do { } while(0)
#define flush_dcache_page(page) do { } while (0)
#ifndef CONFIG_CPU_R10000
#define flush_cache_mm(mm) _flush_cache_mm(mm)
#define flush_cache_range(vma,start,end) _flush_cache_range(vma,start,end)
#define flush_cache_page(vma,page) _flush_cache_page(vma, page)
#define flush_page_to_ram(page) _flush_page_to_ram(page)
#define flush_dcache_page(page) _flush_page_to_ram(page)
#define flush_icache_range(start, end) _flush_cache_l1()
#define flush_icache_user_range(vma, page, addr, len) \
flush_icache_page((vma), (page))
......@@ -66,7 +63,7 @@ extern void andes_flush_icache_page(unsigned long);
#define flush_cache_mm(mm) do { } while(0)
#define flush_cache_range(vma,start,end) do { } while(0)
#define flush_cache_page(vma,page) do { } while(0)
#define flush_page_to_ram(page) do { } while(0)
#define flush_dcache_page(page) do { } while(0)
#define flush_icache_range(start, end) _flush_cache_l1()
#define flush_icache_user_range(vma, page, addr, len) \
flush_icache_page((vma), (page))
......
......@@ -18,11 +18,6 @@
#define flush_kernel_dcache_range(start,size) \
flush_kernel_dcache_range_asm((start), (start)+(size));
static inline void
flush_page_to_ram(struct page *page)
{
}
extern void flush_cache_all_local(void);
static inline void cacheflush_h_tmp_function(void *dummy)
......
......@@ -23,7 +23,6 @@
#define flush_cache_mm(mm) do { } while (0)
#define flush_cache_range(vma, a, b) do { } while (0)
#define flush_cache_page(vma, p) do { } while (0)
#define flush_page_to_ram(page) do { } while (0)
#define flush_icache_page(vma, page) do { } while (0)
extern void flush_dcache_page(struct page *page);
......
......@@ -13,7 +13,6 @@
#define flush_cache_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr) do { } while (0)
#define flush_page_to_ram(page) do { } while (0)
#define flush_icache_page(vma, page) do { } while (0)
extern void flush_dcache_page(struct page *page);
......
......@@ -9,7 +9,6 @@
#define flush_cache_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr) do { } while (0)
#define flush_page_to_ram(page) do { } while (0)
#define flush_dcache_page(page) do { } while (0)
#define flush_icache_range(start, end) do { } while (0)
#define flush_icache_page(vma,pg) do { } while (0)
......
......@@ -9,7 +9,6 @@
#define flush_cache_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr) do { } while (0)
#define flush_page_to_ram(page) do { } while (0)
#define flush_dcache_page(page) do { } while (0)
#define flush_icache_range(start, end) do { } while (0)
#define flush_icache_page(vma,pg) do { } while (0)
......
......@@ -26,7 +26,6 @@ extern void paging_init(void);
* - flush_cache_range(vma, start, end) flushes a range of pages
*
* - flush_dcache_page(pg) flushes(wback&invalidates) a page for dcache
* - flush_page_to_ram(page) write back kernel page to ram
* - flush_icache_range(start, end) flushes(invalidates) a range for icache
* - flush_icache_page(vma, pg) flushes(invalidates) a page for icache
*
......@@ -37,7 +36,6 @@ extern void paging_init(void);
#define flush_cache_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr) do { } while (0)
#define flush_page_to_ram(page) do { } while (0)
#define flush_dcache_page(page) do { } while (0)
#define flush_icache_range(start, end) do { } while (0)
#define flush_icache_page(vma,pg) do { } while (0)
......@@ -63,7 +61,6 @@ extern void flush_dcache_page(struct page *pg);
extern void flush_icache_range(unsigned long start, unsigned long end);
extern void flush_cache_sigtramp(unsigned long addr);
#define flush_page_to_ram(page) do { } while (0)
#define flush_icache_page(vma,pg) do { } while (0)
#define flush_icache_user_range(vma,pg,adr,len) do { } while (0)
......
......@@ -64,7 +64,6 @@ BTFIXUPDEF_CALL(void, flush_sig_insns, struct mm_struct *, unsigned long)
extern void sparc_flush_page_to_ram(struct page *page);
#define flush_page_to_ram(page) do { } while (0)
#define flush_dcache_page(page) sparc_flush_page_to_ram(page)
#endif /* _SPARC_CACHEFLUSH_H */
......@@ -50,7 +50,4 @@ extern void smp_flush_cache_all(void);
extern void flush_dcache_page(struct page *page);
/* This is unnecessary on the SpitFire since D-CACHE is write-through. */
#define flush_page_to_ram(page) do { } while (0)
#endif /* _SPARC64_CACHEFLUSH_H */
......@@ -29,7 +29,6 @@
#define flush_cache_mm(mm) ((void)0)
#define flush_cache_range(vma, start, end) ((void)0)
#define flush_cache_page(vma, vmaddr) ((void)0)
#define flush_page_to_ram(page) ((void)0)
#define flush_dcache_page(page) ((void)0)
#define flush_icache() ((void)0)
#define flush_icache_range(start, end) ((void)0)
......
......@@ -62,7 +62,6 @@ extern void nb85e_cache_flush_icache_user_range (struct vm_area_struct *vma,
unsigned long adr, int len);
extern void nb85e_cache_flush_sigtramp (unsigned long addr);
#define flush_page_to_ram(x) ((void)0)
#define flush_cache_all nb85e_cache_flush_all
#define flush_cache_mm nb85e_cache_flush_mm
#define flush_cache_range nb85e_cache_flush_range
......
......@@ -40,8 +40,14 @@
#define clear_page(page) memset ((void *)(page), 0, PAGE_SIZE)
#define copy_page(to, from) memcpy ((void *)(to), (void *)from, PAGE_SIZE)
#define clear_user_page(page, vaddr, pg) clear_page (page)
#define copy_user_page(to, from, vaddr,pg) copy_page (to, from)
#define clear_user_page(addr, vaddr, page) \
do { clear_page(addr); \
flush_dcache_page(page); \
} while (0)
#define copy_user_page(to, from, vaddr, page) \
do { copy_page(to, from); \
flush_dcache_page(page); \
} while (0)
#ifdef STRICT_MM_TYPECHECKS
/*
......
......@@ -9,7 +9,6 @@
#define flush_cache_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr) do { } while (0)
#define flush_page_to_ram(page) do { } while (0)
#define flush_dcache_page(page) do { } while (0)
#define flush_icache_range(start, end) do { } while (0)
#define flush_icache_page(vma,pg) do { } while (0)
......
......@@ -67,7 +67,6 @@ static inline void memclear_highpage_flush(struct page *page, unsigned int offse
kaddr = kmap_atomic(page, KM_USER0);
memset((char *)kaddr + offset, 0, size);
flush_dcache_page(page);
flush_page_to_ram(page);
kunmap_atomic(kaddr, KM_USER0);
}
......
......@@ -179,14 +179,18 @@ int access_process_vm(struct task_struct *tsk, unsigned long addr, void *buf, in
flush_cache_page(vma, addr);
/*
* FIXME! We used to have flush_page_to_ram() in here, but
* that was wrong. davem says we need a new per-arch primitive
* to handle this correctly.
*/
maddr = kmap(page);
if (write) {
memcpy(maddr + offset, buf, bytes);
flush_page_to_ram(page);
flush_icache_user_range(vma, page, addr, bytes);
} else {
memcpy(buf, maddr + offset, bytes);
flush_page_to_ram(page);
}
kunmap(page);
page_cache_release(page);
......
......@@ -1008,11 +1008,9 @@ struct page * filemap_nopage(struct vm_area_struct * area, unsigned long address
success:
/*
* Found the page and have a reference on it, need to check sharing
* and possibly copy it over to another page..
* Found the page and have a reference on it.
*/
mark_page_accessed(page);
flush_page_to_ram(page);
return page;
no_cached_page:
......@@ -1124,12 +1122,9 @@ static struct page * filemap_getpage(struct file *file, unsigned long pgoff,
success:
/*
* Found the page and have a reference on it, need to check sharing
* and possibly copy it over to another page..
* Found the page and have a reference on it.
*/
mark_page_accessed(page);
flush_page_to_ram(page);
return page;
no_cached_page:
......
......@@ -78,7 +78,6 @@ int install_page(struct mm_struct *mm, struct vm_area_struct *vma,
flush = zap_pte(mm, vma, addr, pte);
mm->rss++;
flush_page_to_ram(page);
flush_icache_page(vma, page);
set_pte(pte, mk_pte(page, prot));
pte_chain = page_add_rmap(page, pte, pte_chain);
......
......@@ -916,7 +916,6 @@ static inline void break_cow(struct vm_area_struct * vma, struct page * new_page
pte_t *page_table)
{
invalidate_vcache(address, vma->vm_mm, new_page);
flush_page_to_ram(new_page);
flush_cache_page(vma, address);
establish_pte(vma, address, page_table, pte_mkwrite(pte_mkdirty(mk_pte(new_page, vma->vm_page_prot))));
}
......@@ -1206,7 +1205,6 @@ static int do_swap_page(struct mm_struct * mm,
pte = pte_mkdirty(pte_mkwrite(pte));
unlock_page(page);
flush_page_to_ram(page);
flush_icache_page(vma, page);
set_pte(page_table, pte);
pte_chain = page_add_rmap(page, page_table, pte_chain);
......@@ -1271,7 +1269,6 @@ do_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma,
goto out;
}
mm->rss++;
flush_page_to_ram(page);
entry = pte_mkwrite(pte_mkdirty(mk_pte(page, vma->vm_page_prot)));
lru_cache_add_active(page);
mark_page_accessed(page);
......@@ -1365,7 +1362,6 @@ do_no_page(struct mm_struct *mm, struct vm_area_struct *vma,
/* Only go through if we didn't race with anybody else... */
if (pte_none(*page_table)) {
++mm->rss;
flush_page_to_ram(new_page);
flush_icache_page(vma, new_page);
entry = mk_pte(new_page, vma->vm_page_prot);
if (write_access)
......
......@@ -832,7 +832,6 @@ static int shmem_getpage(struct inode *inode, unsigned long idx, struct page **p
shmem_swp_unmap(entry);
delete_from_swap_cache(swappage);
spin_unlock(&info->lock);
flush_page_to_ram(swappage);
copy_highpage(filepage, swappage);
unlock_page(swappage);
page_cache_release(swappage);
......@@ -953,7 +952,6 @@ struct page *shmem_nopage(struct vm_area_struct *vma, unsigned long address, int
return (error == -ENOMEM)? NOPAGE_OOM: NOPAGE_SIGBUS;
mark_page_accessed(page);
flush_page_to_ram(page);
return page;
}
......@@ -981,7 +979,6 @@ static int shmem_populate(struct vm_area_struct *vma,
return err;
if (page) {
mark_page_accessed(page);
flush_page_to_ram(page);
err = install_page(mm, vma, addr, page, prot);
if (err) {
page_cache_release(page);
......
......@@ -641,7 +641,6 @@ static int try_to_unuse(unsigned int type)
shmem = 0;
swcount = *swap_map;
if (swcount > 1) {
flush_page_to_ram(page);
if (start_mm == &init_mm)
shmem = shmem_unuse(entry, page);
else
......