Commit 44260240 authored by Andrew Morton, committed by Linus Torvalds

[PATCH] deferred and batched addition of pages to the LRU

The remaining source of page-at-a-time activity against
pagemap_lru_lock is the anonymous pagefault path, which cannot be
changed to operate against multiple pages at a time.

But what we can do is to batch up just its adding of pages to the LRU,
via buffering and deferral.

This patch is based on work from Bill Irwin.

The patch changes lru_cache_add to put the pages into a per-CPU
pagevec.  They are added to the LRU 16-at-a-time.
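For reference, the pagevec machinery this relies on looks roughly like the sketch below (reconstructed from memory of the 2.5-era API; the exact field layout and helper body are assumptions, not part of this patch):

#define PAGEVEC_SIZE	16

struct pagevec {
	unsigned nr;
	struct page *pages[PAGEVEC_SIZE];
};

/*
 * Stash a page; returns the number of slots still free.  A zero
 * return means the vector just became full and the caller must
 * flush it to the LRU.
 */
static inline unsigned pagevec_add(struct pagevec *pvec, struct page *page)
{
	pvec->pages[pvec->nr++] = page;
	return PAGEVEC_SIZE - pvec->nr;
}

The sixteenth add fills the vector and triggers the flush, so at most 15 pages per CPU are ever left sitting off the LRU between faults.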

And the page reclaim code now purges the local CPU's buffer before it
starts scanning.  This is mainly to decrease the chances of pages staying
off the LRU for very long periods: if the machine is under memory
pressure, CPUs will spill their pages onto the LRU promptly.
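The spill itself is __pagevec_lru_add(), which takes pagemap_lru_lock once per batch instead of once per page.  A sketch of what it does (the body is reconstructed from the surrounding API, so treat the details as assumptions):

void __pagevec_lru_add(struct pagevec *pvec)
{
	int i;

	/* One lock round trip covers up to PAGEVEC_SIZE pages. */
	spin_lock_irq(&pagemap_lru_lock);
	for (i = 0; i < pvec->nr; i++) {
		struct page *page = pvec->pages[i];

		if (!TestSetPageLRU(page))
			add_page_to_inactive_list(page);
	}
	spin_unlock_irq(&pagemap_lru_lock);
	pagevec_release(pvec);	/* drop the refs taken in lru_cache_add() */
}

lru_add_drain() below reuses this same path to force an early spill before reclaim scans the lists.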

A consequence of this change is that we can have up to 15*num_cpus
pages which are not on the LRU.  This could have a slight effect on VM
accuracy, but I find that doubtful.  If the system is under memory
pressure the pages will be added to the LRU promptly, and these pages
are the most-recently-touched ones - the VM isn't very interested in
them anyway.

This optimisation could be made SMP-specific, but I felt it best to
turn it on for UP as well for consistency and better testing coverage.
parent eed29d66
@@ -19,6 +19,7 @@ void __pagevec_release_nonlru(struct pagevec *pvec);
 void __pagevec_free(struct pagevec *pvec);
 void __pagevec_lru_add(struct pagevec *pvec);
 void __pagevec_lru_del(struct pagevec *pvec);
+void lru_add_drain(void);
 void pagevec_deactivate_inactive(struct pagevec *pvec);
 
 static inline void pagevec_init(struct pagevec *pvec)
@@ -50,14 +50,25 @@ void activate_page(struct page * page)
  * lru_cache_add: add a page to the page lists
  * @page: the page to add
  */
-void lru_cache_add(struct page * page)
+static struct pagevec lru_add_pvecs[NR_CPUS];
+
+void lru_cache_add(struct page *page)
 {
-	if (!PageLRU(page)) {
-		spin_lock_irq(&pagemap_lru_lock);
-		if (!TestSetPageLRU(page))
-			add_page_to_inactive_list(page);
-		spin_unlock_irq(&pagemap_lru_lock);
-	}
+	struct pagevec *pvec = &lru_add_pvecs[get_cpu()];
+
+	page_cache_get(page);
+	if (!pagevec_add(pvec, page))
+		__pagevec_lru_add(pvec);
+	put_cpu();
+}
+
+void lru_add_drain(void)
+{
+	struct pagevec *pvec = &lru_add_pvecs[get_cpu()];
+
+	if (pagevec_count(pvec))
+		__pagevec_lru_add(pvec);
+	put_cpu();
 }
 
 /*
@@ -290,6 +290,7 @@ shrink_cache(int nr_pages, zone_t *classzone,
 	pagevec_init(&pvec);
 
+	lru_add_drain();
 	spin_lock_irq(&pagemap_lru_lock);
 	while (max_scan > 0 && nr_pages > 0) {
 		struct page *page;
 
@@ -380,6 +381,7 @@ static /* inline */ void refill_inactive(const int nr_pages_in)
 	struct page *page;
 	struct pagevec pvec;
 
+	lru_add_drain();
 	spin_lock_irq(&pagemap_lru_lock);
 	while (nr_pages && !list_empty(&active_list)) {
 		page = list_entry(active_list.prev, struct page, lru);