Commit cd016d80 authored by Andrew Morton, committed by Arnaldo Carvalho de Melo

[PATCH] reduce lock contention in do_pagecache_readahead

Anton Blanchard has a workload (the SDET benchmark) which is showing some
moderate lock contention in do_pagecache_readahead().

Seems that SDET has many threads performing seeky reads against a
cached file.  The average number of pagecache probes in a single
do_pagecache_readahead() is six, which seems reasonable.

The patch (from Anton) flips the locking around to optimise for the
fast case (page was present).  So the kernel takes the lock less often,
and does more work once it has been acquired.
parent f5737b71
@@ -117,25 +117,27 @@ void do_page_cache_readahead(struct file *file,
 	/*
 	 * Preallocate as many pages as we will need.
 	 */
+	read_lock(&mapping->page_lock);
 	for (page_idx = 0; page_idx < nr_to_read; page_idx++) {
 		unsigned long page_offset = offset + page_idx;
 
 		if (page_offset > end_index)
 			break;
 
-		read_lock(&mapping->page_lock);
 		page = radix_tree_lookup(&mapping->page_tree, page_offset);
-		read_unlock(&mapping->page_lock);
 		if (page)
 			continue;
 
+		read_unlock(&mapping->page_lock);
 		page = page_cache_alloc(mapping);
+		read_lock(&mapping->page_lock);
 		if (!page)
 			break;
 		page->index = page_offset;
 		list_add(&page->list, &page_pool);
 		nr_to_really_read++;
 	}
+	read_unlock(&mapping->page_lock);
 
 	/*
 	 * Now start the IO. We ignore I/O errors - if the page is not
......