Commit 33fc80e2 authored by Minchan Kim, committed by Linus Torvalds

mm: remove SWAP_AGAIN in ttu

In 2002, [1] introduced SWAP_AGAIN.  At that time, try_to_unmap_one()
used spin_trylock(&mm->page_table_lock), so it was easy to contend on
the lock and fail to take it; returning SWAP_AGAIN to preserve the
page's LRU status made sense.

However, the locking has since been changed to a mutex-based lock, so
try_to_unmap_one() can block instead of skipping the pte, and only a
tiny window remains in which SWAP_AGAIN would be returned.  Remove
SWAP_AGAIN and just return SWAP_FAIL.

[1] c48c43e, minimal rmap
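
For illustration only (not part of the commit): a minimal user-space C
sketch of the return-value logic try_to_unmap() ends up with after this
patch.  The enum, page_mapcount_stub() and try_to_unmap_sketch() are
hypothetical stand-ins; only the final ternary mirrors the actual change
in the diff below.

#include <stdio.h>
#include <stdbool.h>

/* Stand-in for the ttu return codes after this patch: SWAP_AGAIN is gone. */
enum ttu_result { SWAP_SUCCESS, SWAP_FAIL };

/* Hypothetical stub: how many mappings remain after the rmap walk. */
static int page_mapcount_stub(bool all_ptes_unmapped)
{
        return all_ptes_unmapped ? 0 : 1;
}

/*
 * Sketch of the new logic: the rmap walk runs to completion (the lock
 * is taken with a blocking acquire, not a trylock, so no pte is
 * skipped), and the result depends only on whether any mappings remain.
 */
static enum ttu_result try_to_unmap_sketch(bool all_ptes_unmapped)
{
        /* rmap_walk(page, &rwc) would run here in the kernel. */
        return !page_mapcount_stub(all_ptes_unmapped) ? SWAP_SUCCESS
                                                      : SWAP_FAIL;
}

int main(void)
{
        printf("all unmapped -> %s\n",
               try_to_unmap_sketch(true) == SWAP_SUCCESS ? "SWAP_SUCCESS" : "SWAP_FAIL");
        printf("mapping left -> %s\n",
               try_to_unmap_sketch(false) == SWAP_SUCCESS ? "SWAP_SUCCESS" : "SWAP_FAIL");
        return 0;
}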

Link: http://lkml.kernel.org/r/1489555493-14659-7-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent ad6b6704
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1504,13 +1504,10 @@ static int page_mapcount_is_zero(struct page *page)
  * Return values are:
  *
  * SWAP_SUCCESS - we succeeded in removing all mappings
- * SWAP_AGAIN  - we missed a mapping, try again later
  * SWAP_FAIL   - the page is unswappable
  */
 int try_to_unmap(struct page *page, enum ttu_flags flags)
 {
-        int ret;
-
         struct rmap_walk_control rwc = {
                 .rmap_one = try_to_unmap_one,
                 .arg = (void *)flags,
@@ -1530,13 +1527,11 @@ int try_to_unmap(struct page *page, enum ttu_flags flags)
                 rwc.invalid_vma = invalid_migration_vma;
 
         if (flags & TTU_RMAP_LOCKED)
-                ret = rmap_walk_locked(page, &rwc);
+                rmap_walk_locked(page, &rwc);
         else
-                ret = rmap_walk(page, &rwc);
+                rmap_walk(page, &rwc);
 
-        if (!page_mapcount(page))
-                ret = SWAP_SUCCESS;
-        return ret;
+        return !page_mapcount(page) ? SWAP_SUCCESS : SWAP_FAIL;
 }
 
 static int page_not_mapped(struct page *page)
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1150,8 +1150,6 @@ static unsigned long shrink_page_list(struct list_head *page_list,
                 case SWAP_FAIL:
                         nr_unmap_fail++;
                         goto activate_locked;
-                case SWAP_AGAIN:
-                        goto keep_locked;
                 case SWAP_SUCCESS:
                         ; /* try to free the page below */
                 }