Commit 5cedf721 authored by Dave Chinner, committed by Al Viro

list_lru: fix broken LRU_RETRY behaviour

The LRU_RETRY code assumes that the list traversal status is unchanged after we
have dropped and regained the list lock.  Unfortunately, this is not a valid
assumption, and it can lead to racing traversals isolating objects that
the other traversal expects to be the next item on the list.

This is causing problems with the inode cache shrinker isolation, with
races resulting in an inode on a dispose list being "isolated" because a
racing traversal still thinks it is on the LRU.  The inode is then never
reclaimed and that causes hangs if a subsequent lookup on that inode
occurs.

Fix it by always restarting the list walk on an LRU_RETRY return from the
isolate callback.  Prevent the livelock the current code was trying to
avoid by always decrementing the nr_to_walk counter on retries, so that
even if we keep hitting the same item on the list we eventually run out of
walk budget and back out of the situation causing the problem.
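
For context, here is a minimal sketch of the isolate-callback pattern that
produces LRU_RETRY, loosely modeled on the inode shrinker's
inode_lru_isolate(); the predicate and work function are hypothetical
stand-ins, not code from this patch.  The point is that returning LRU_RETRY
means the per-node lru lock was dropped, so the walker's saved next pointer
may be stale:

	/*
	 * Illustrative sketch only, not part of this patch.  A callback
	 * that returns LRU_RETRY has dropped and retaken the lru lock, so
	 * the walker's list cursor can no longer be trusted.
	 */
	static enum lru_status
	example_isolate(struct list_head *item, spinlock_t *lru_lock, void *cb_arg)
	{
		struct list_head *dispose = cb_arg;
		struct inode *inode = container_of(item, struct inode, i_lru);

		/* lock order is inverted here, so only ever trylock i_lock */
		if (!spin_trylock(&inode->i_lock))
			return LRU_SKIP;

		if (inode_needs_extra_work(inode)) {	/* hypothetical predicate */
			__iget(inode);
			spin_unlock(&inode->i_lock);
			spin_unlock(lru_lock);		/* list can now change under us */
			do_extra_work(inode);		/* hypothetical */
			iput(inode);
			spin_lock(lru_lock);
			return LRU_RETRY;	/* walker must restart from scratch */
		}

		/* isolate the object onto the caller's private dispose list */
		list_move(item, dispose);
		spin_unlock(&inode->i_lock);
		return LRU_REMOVED;
	}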
Reported-by: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Cc: Glauber Costa <glommer@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
parent 3b1d58a4
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -73,19 +73,19 @@ list_lru_walk_node(struct list_lru *lru, int nid, list_lru_walk_cb isolate,
 	struct list_lru_node *nlru = &lru->node[nid];
 	struct list_head *item, *n;
 	unsigned long isolated = 0;
-	/*
-	 * If we don't keep state of at which pass we are, we can loop at
-	 * LRU_RETRY, since we have no guarantees that the caller will be able
-	 * to do something other than retry on the next pass. We handle this by
-	 * allowing at most one retry per object. This should not be altered
-	 * by any condition other than LRU_RETRY.
-	 */
-	bool first_pass = true;
 
 	spin_lock(&nlru->lock);
 restart:
 	list_for_each_safe(item, n, &nlru->list) {
 		enum lru_status ret;
+
+		/*
+		 * decrement nr_to_walk first so that we don't livelock if we
+		 * get stuck on large numbers of LRU_RETRY items
+		 */
+		if (--(*nr_to_walk) == 0)
+			break;
+
 		ret = isolate(item, &nlru->lock, cb_arg);
 		switch (ret) {
 		case LRU_REMOVED:
@@ -100,19 +100,14 @@ list_lru_walk_node(struct list_lru *lru, int nid, list_lru_walk_cb isolate,
 		case LRU_SKIP:
 			break;
 		case LRU_RETRY:
-			if (!first_pass) {
-				first_pass = true;
-				break;
-			}
-			first_pass = false;
+			/*
+			 * The lru lock has been dropped, our list traversal is
+			 * now invalid and so we have to restart from scratch.
+			 */
 			goto restart;
 		default:
 			BUG();
 		}
-
-		if ((*nr_to_walk)-- == 0)
-			break;
-
 	}
 
 	spin_unlock(&nlru->lock);
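
For reference, a simplified caller sketch, closely modeled on
prune_icache_sb() in fs/inode.c from this same series (setup and error
handling elided).  After this patch, nr_to_walk is consumed by every item
visited, retries included, so the walk is bounded even if objects keep
returning LRU_RETRY:

	/* Simplified caller sketch, modeled on prune_icache_sb() in this series. */
	long prune_icache_sb(struct super_block *sb, unsigned long nr_to_scan, int nid)
	{
		LIST_HEAD(freeable);		/* private list for isolated inodes */
		long freed;

		/*
		 * nr_to_scan is decremented for every inode visited, including
		 * ones that force an LRU_RETRY restart, so this call cannot
		 * livelock on a list full of retrying objects.
		 */
		freed = list_lru_walk_node(&sb->s_inode_lru, nid, inode_lru_isolate,
					   &freeable, &nr_to_scan);
		dispose_list(&freeable);	/* actually reclaim what we isolated */
		return freed;
	}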