Commit a2536452 authored by Andrew Morton, committed by Arnaldo Carvalho de Melo

[PATCH] remove PG_launder

Removal of PG_launder.

It's not obvious (to me) why this ever existed.  If it's to prevent
deadlocks then I'd like to know who was performing __GFP_FS allocations
while holding a page lock?

But in 2.5, the only memory allocations which are performed when the
caller holds PG_writeback against an unsubmitted page are those which
occur inside submit_bh().  There will be no __GFP_FS allocations in
that call chain.

Removing PG_launder means that memory allocators can block on any
PageWriteback() page at all, which reduces the risk of very long list
walks inside pagemap_lru_lock in shrink_cache().
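
To illustrate the effect, here is a minimal user-space sketch (not the
kernel code: the struct definitions, flag values and helper names are
mocked up purely for illustration) of the two tests the patch uses in
place of PG_launder: writepage paths ask directly whether the current
task is under memory pressure, and reclaim may wait on any page under
writeback when the allocation allows __GFP_FS.

    /*
     * Mocked-up stand-ins for the kernel's task flags, gfp mask bits
     * and page state; the numeric values are illustrative only.
     */
    #include <stdbool.h>
    #include <stdio.h>

    #define PF_MEMALLOC 0x00000800  /* task is allocating for reclaim */
    #define __GFP_FS    0x80        /* allocation may call into the FS */

    struct task { unsigned int flags; };
    struct page { bool writeback; };

    /*
     * Reclaim-side test: with PG_launder gone, any __GFP_FS allocator
     * may block on any page under writeback, not just pages reclaim
     * itself had marked.
     */
    static bool reclaim_may_wait(const struct page *p, unsigned int gfp_mask)
    {
            return p->writeback && (gfp_mask & __GFP_FS);
    }

    /*
     * Writepage-side test: callers that used to check PageLaunder()
     * now test PF_MEMALLOC directly, so only writeout triggered by
     * memory pressure proceeds.
     */
    static bool writeout_for_memory_pressure(const struct task *t)
    {
            return t->flags & PF_MEMALLOC;
    }

    int main(void)
    {
            struct task reclaimer = { .flags = PF_MEMALLOC };
            struct task fsyncer = { .flags = 0 };
            struct page p = { .writeback = true };

            printf("reclaim may wait on writeback page: %d\n",
                   reclaim_may_wait(&p, __GFP_FS));
            printf("writeout for reclaim: %d\n",
                   writeout_for_memory_pressure(&reclaimer));
            printf("writeout for fsync caller: %d\n",
                   writeout_for_memory_pressure(&fsyncer));
            return 0;
    }

Compiled as plain C this prints 1/1/0, mirroring the new behaviour of
shrink_cache(), shmem_writepage() and fail_writepage() in the diff below.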
parent 5409c2b5
@@ -62,9 +62,8 @@
 #define PG_arch_1 10
 #define PG_reserved 11
-#define PG_launder 12 /* written out by VM pressure.. */
-#define PG_private 13 /* Has something at ->private */
-#define PG_writeback 14 /* Page is under writeback */
+#define PG_private 12 /* Has something at ->private */
+#define PG_writeback 13 /* Page is under writeback */
 
 /*
  * Global page accounting. One instance per CPU.
@@ -172,10 +171,6 @@ extern void get_page_state(struct page_state *ret);
 #define SetPageReserved(page) set_bit(PG_reserved, &(page)->flags)
 #define ClearPageReserved(page) clear_bit(PG_reserved, &(page)->flags)
-#define PageLaunder(page) test_bit(PG_launder, &(page)->flags)
-#define SetPageLaunder(page) set_bit(PG_launder, &(page)->flags)
-#define ClearPageLaunder(page) clear_bit(PG_launder, &(page)->flags)
 #define SetPagePrivate(page) set_bit(PG_private, &(page)->flags)
 #define ClearPagePrivate(page) clear_bit(PG_private, &(page)->flags)
 #define PagePrivate(page) test_bit(PG_private, &(page)->flags)
......
@@ -432,7 +432,7 @@ void invalidate_inode_pages2(struct address_space * mapping)
 int fail_writepage(struct page *page)
 {
         /* Only activate on memory-pressure, not fsync.. */
-        if (PageLaunder(page)) {
+        if (current->flags & PF_MEMALLOC) {
                 activate_page(page);
                 SetPageReferenced(page);
         }
@@ -652,7 +652,6 @@ void unlock_page(struct page *page)
 void end_page_writeback(struct page *page)
 {
         wait_queue_head_t *waitqueue = page_waitqueue(page);
-        ClearPageLaunder(page);
         smp_mb__before_clear_bit();
         if (!TestClearPageWriteback(page))
                 BUG();
......
@@ -373,8 +373,6 @@ int generic_writeback_mapping(struct address_space *mapping, int *nr_to_write)
                 }
                 spin_unlock(&pagemap_lru_lock);
         }
-        if (current->flags & PF_MEMALLOC)
-                SetPageLaunder(page);
         err = writepage(page);
         if (!ret)
                 ret = err;
......
@@ -438,7 +438,8 @@ static int shmem_writepage(struct page * page)
         if (!PageLocked(page))
                 BUG();
-        if (!PageLaunder(page))
+        if (!(current->flags & PF_MEMALLOC))
                 return fail_writepage(page);
         mapping = page->mapping;
......
@@ -424,11 +424,10 @@ static int shrink_cache(int nr_pages, zone_t * classzone, unsigned int gfp_mask,
                         goto page_mapped;
 
                 /*
-                 * The page is locked. IO in progress?
-                 * Move it to the back of the list.
+                 * IO in progress? Leave it at the back of the list.
                  */
                 if (unlikely(PageWriteback(page))) {
-                        if (PageLaunder(page) && (gfp_mask & __GFP_FS)) {
+                        if (gfp_mask & __GFP_FS) {
                                 page_cache_get(page);
                                 spin_unlock(&pagemap_lru_lock);
                                 wait_on_page_writeback(page);
......