Commit c4fa6309 authored by Huang Ying, committed by Linus Torvalds

mm, swap: fix swap readahead marking

In the original implementation, pages already present in the swap cache
(not newly read ahead) could be marked as readahead pages.  This makes
the swap readahead statistics wrong and misleads the swap readahead
algorithm.

This is fixed by marking a page as a readahead page only if it is newly
allocated and read from disk.

In testing with linpack, the swap readahead hit rate increased from
~66% to ~86% after this fix.
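
For readability, here is a minimal sketch of the fixed loop body; it restates the new code from the diff below.  __read_swap_cache_async() reports through its final argument whether the page was newly allocated, and only for a new page does the caller start the read itself via swap_readpage():

	bool page_allocated;

	/* Look up the swap cache; allocate and insert a new page if absent. */
	page = __read_swap_cache_async(swp_entry(swp_type(entry), offset),
				       gfp_mask, vma, addr, &page_allocated);
	if (!page)
		continue;
	if (page_allocated) {
		/* Newly allocated page: start the read from swap ... */
		swap_readpage(page, false);
		/* ... and only such pages count as readahead. */
		if (offset != entry_offset &&
		    likely(!PageTransCompound(page))) {
			SetPageReadahead(page);
			count_vm_event(SWAP_RA);
		}
	}
	/* Pages that were already in the swap cache are left unmarked. */
	put_page(page);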

Link: http://lkml.kernel.org/r/20170807054038.1843-3-ying.huang@intel.com
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Shaohua Li <shli@kernel.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Fengguang Wu <fengguang.wu@intel.com>
Cc: Tim Chen <tim.c.chen@intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent cbc65df2
mm/swap_state.c
@@ -498,7 +498,7 @@ struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	unsigned long start_offset, end_offset;
 	unsigned long mask;
 	struct blk_plug plug;
-	bool do_poll = true;
+	bool do_poll = true, page_allocated;
 
 	mask = swapin_nr_pages(offset) - 1;
 	if (!mask)
@@ -514,14 +514,18 @@ struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	blk_start_plug(&plug);
 	for (offset = start_offset; offset <= end_offset ; offset++) {
 		/* Ok, do the async read-ahead now */
-		page = read_swap_cache_async(swp_entry(swp_type(entry), offset),
-						gfp_mask, vma, addr, false);
+		page = __read_swap_cache_async(
+			swp_entry(swp_type(entry), offset),
+			gfp_mask, vma, addr, &page_allocated);
 		if (!page)
 			continue;
-		if (offset != entry_offset &&
-		    likely(!PageTransCompound(page))) {
-			SetPageReadahead(page);
-			count_vm_event(SWAP_RA);
+		if (page_allocated) {
+			swap_readpage(page, false);
+			if (offset != entry_offset &&
+			    likely(!PageTransCompound(page))) {
+				SetPageReadahead(page);
+				count_vm_event(SWAP_RA);
+			}
 		}
 		put_page(page);
 	}