Commit 12eab428 authored by Miaohe Lin, committed by Linus Torvalds

mm/swap.c: fix incomplete comment in lru_cache_add_inactive_or_unevictable()

Since commit 9c4e6b1a ("mm, mlock, vmscan: no more skipping
pagevecs"), unevictable pages no longer go directly back onto the
zone's unevictable list.
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Shakeel Butt <shakeelb@google.com>
Link: https://lkml.kernel.org/r/20200927122209.59328-1-linmiaohe@huawei.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 548d9782
@@ -481,9 +481,7 @@ EXPORT_SYMBOL(lru_cache_add);
  * @vma: vma in which page is mapped for determining reclaimability
  *
  * Place @page on the inactive or unevictable LRU list, depending on its
- * evictability. Note that if the page is not evictable, it goes
- * directly back onto it's zone's unevictable list, it does NOT use a
- * per cpu pagevec.
+ * evictability.
  */
 void lru_cache_add_inactive_or_unevictable(struct page *page,
 					struct vm_area_struct *vma)