Commit fe6a043c authored by Thomas Graf, committed by David S. Miller

rhashtable: rhashtable_remove() must unlink in both tbl and future_tbl

As removals can occur during resizes, entries may be referred to from
both tbl and future_tbl when the removal is requested. Therefore
rhashtable_remove() must unlink the entry in both tables if this is
the case. The existing code did search both tables but stopped when it
hit the first match.

Failing to unlink in both tables resulted in a use-after-free.

Fixes: 97defe1e ("rhashtable: Per bucket locks & deferred expansion/shrinking")
Reported-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent 1dc7b90f
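For context, a minimal user-space sketch of the failure mode described above. This is not the kernel rhashtable code, and every name in it (struct table, unlink_one, remove_obj) is hypothetical. During a resize an entry can be reachable from both the current and the future bucket array, so removal has to unlink it from every table it appears in; stopping at the first match leaves the other table pointing at memory that is about to be freed.

/* Illustration only: a tiny chained hash table with an old and a new
 * bucket array, as exists briefly during a resize. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define NBUCKETS 4

struct node {
	int key;
	struct node *next;
};

struct table {
	struct node *buckets[NBUCKETS];
};

static size_t hash(int key)
{
	return (size_t)key % NBUCKETS;
}

/* Unlink obj from one table; returns true if it was linked there. */
static bool unlink_one(struct table *tbl, struct node *obj)
{
	struct node **pprev = &tbl->buckets[hash(obj->key)];

	for (struct node *he = *pprev; he; pprev = &he->next, he = he->next) {
		if (he == obj) {
			*pprev = obj->next;
			return true;
		}
	}
	return false;
}

/* Correct removal: walk both the current and the future table.
 * With the buggy "return on first match" behaviour, an entry linked
 * in both tables would stay reachable from the one that was skipped. */
static bool remove_obj(struct table *tbl, struct table *future_tbl,
		       struct node *obj)
{
	bool ret = unlink_one(tbl, obj);

	if (future_tbl && future_tbl != tbl)
		ret |= unlink_one(future_tbl, obj);
	return ret;
}

int main(void)
{
	struct table old = { 0 }, new = { 0 };
	struct node n = { .key = 7, .next = NULL };

	/* Mid-resize: the same node is linked in both tables. */
	old.buckets[hash(n.key)] = &n;
	new.buckets[hash(n.key)] = &n;

	printf("removed: %d\n", remove_obj(&old, &new, &n));
	printf("still reachable in new table: %s\n",
	       new.buckets[hash(n.key)] ? "yes" : "no");
	return 0;
}

The patch below applies the same idea to rhashtable_remove(): the bucket walk now records the match and breaks instead of returning, so the existing future_tbl pass still runs, and atomic_dec(&ht->nelems) plus rhashtable_wakeup_worker() are deferred until both tables have been checked, so they run at most once per removal.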
@@ -585,6 +585,7 @@ bool rhashtable_remove(struct rhashtable *ht, struct rhash_head *obj)
 	struct rhash_head *he;
 	spinlock_t *lock;
 	unsigned int hash;
+	bool ret = false;
 
 	rcu_read_lock();
 	tbl = rht_dereference_rcu(ht->tbl, ht);
@@ -602,17 +603,16 @@ bool rhashtable_remove(struct rhashtable *ht, struct rhash_head *obj)
 		}
 
 		rcu_assign_pointer(*pprev, obj->next);
-		atomic_dec(&ht->nelems);
-
-		spin_unlock_bh(lock);
-
-		rhashtable_wakeup_worker(ht);
-
-		rcu_read_unlock();
 
-		return true;
+		ret = true;
+		break;
 	}
 
+	/* The entry may be linked in either 'tbl', 'future_tbl', or both.
+	 * 'future_tbl' only exists for a short period of time during
+	 * resizing. Thus traversing both is fine and the added cost is
+	 * very rare.
+	 */
 	if (tbl != rht_dereference_rcu(ht->future_tbl, ht)) {
 		spin_unlock_bh(lock);
 
@@ -625,9 +625,15 @@ bool rhashtable_remove(struct rhashtable *ht, struct rhash_head *obj)
 	}
 
 	spin_unlock_bh(lock);
 
+	if (ret) {
+		atomic_dec(&ht->nelems);
+		rhashtable_wakeup_worker(ht);
+	}
+
 	rcu_read_unlock();
 
-	return false;
+	return ret;
 }
 EXPORT_SYMBOL_GPL(rhashtable_remove);