Commit 5441d490 authored by Matthew Wilcox (Oracle), committed by Andrew Morton

vmscan: move initialisation of mapping down

Now that we don't interrogate the BDI for congestion, we can delay looking
up the folio's mapping until we've got further through the function,
reducing register pressure and saving a call to folio_mapping for folios
we're adding to the swap cache.
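
For illustration, here is a minimal, self-contained C sketch of the pattern
this change applies. It is not the kernel code: object, object_mapping() and
attach_to_cache() are hypothetical stand-ins for folio, folio_mapping() and
add_to_swap(). Looking the mapping up early forces a second call after the
step that changes it; deferring the lookup to the first point of use needs
only one call and shortens the variable's live range, which is what relieves
register pressure in the real function.

	#include <stdbool.h>
	#include <stdio.h>

	struct mapping { const char *name; };
	struct object { struct mapping *map; bool anon; };

	static struct mapping swap_space = { "swapcache" };

	/* Hypothetical stand-in for folio_mapping(). */
	static struct mapping *object_mapping(struct object *obj)
	{
		return obj->map;
	}

	/* Hypothetical stand-in for add_to_swap(): changes the mapping. */
	static bool attach_to_cache(struct object *obj)
	{
		obj->map = &swap_space;
		return true;
	}

	static void process(struct object *obj)
	{
		struct mapping *mapping;

		/*
		 * Old shape: the lookup sat here, so the anonymous path
		 * below had to repeat it after attach_to_cache():
		 *
		 *	mapping = object_mapping(obj);
		 */
		if (obj->anon && !attach_to_cache(obj))
			return;

		/* New shape: one lookup, at the first point of use. */
		mapping = object_mapping(obj);
		printf("mapping: %s\n", mapping ? mapping->name : "(null)");
	}

	int main(void)
	{
		struct object obj = { .map = NULL, .anon = true };

		process(&obj);
		return 0;
	}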

Link: https://lkml.kernel.org/r/20220504182857.4013401-13-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 64daa5d8
@@ -1588,12 +1588,11 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 			stat->nr_unqueued_dirty += nr_pages;
 
 		/*
-		 * Treat this page as congested if the underlying BDI is or if
+		 * Treat this page as congested if
 		 * pages are cycling through the LRU so quickly that the
 		 * pages marked for immediate reclaim are making it to the
 		 * end of the LRU a second time.
 		 */
-		mapping = page_mapping(page);
 		if (writeback && PageReclaim(page))
 			stat->nr_congested += nr_pages;
 
@@ -1744,9 +1743,6 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 				if (!add_to_swap(folio))
 					goto activate_locked_split;
 			}
-
-			/* Adding to swap updated mapping */
-			mapping = page_mapping(page);
 		}
 	} else if (PageSwapBacked(page) && PageTransHuge(page)) {
 		/* Split shmem THP */
@@ -1787,6 +1783,7 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 		}
 	}
 
+	mapping = folio_mapping(folio);
 	if (folio_test_dirty(folio)) {
 		/*
 		 * Only kswapd can writeback filesystem folios