Commit ca56489c authored by Domenico Cerasuolo, committed by Andrew Morton

mm: zswap: fix potential memory corruption on duplicate store

While stress-testing zswap, a memory corruption was observed when
writing back pages.  __frontswap_store used to check for duplicate
entries before attempting to store a page in zswap; this was needed
because if the store fails, the old entry isn't removed from the tree
and can later be written back, overriding the new data.  This change
removes any duplicate entry in zswap_store before the actual store
attempt.
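
As a toy illustration only (not part of this patch, and using made-up
helper names such as toy_store/toy_writeback rather than the zswap
API), the user-space sketch below models a tree keyed by swap offset
and a store attempt that can fail: if a stale duplicate for the same
offset is left in the tree and the new store fails, a later writeback
restores the old data over the new page; invalidating the duplicate
up front avoids that.

  /*
   * Illustrative user-space sketch -- toy names, not the zswap API.
   * Models a tree keyed by swap offset and a store that can fail, to
   * show why a stale duplicate must be dropped before the new store.
   */
  #include <stdbool.h>
  #include <stdio.h>

  #define NSLOTS 4

  struct toy_entry {
          bool used;
          char data[16];          /* copy that writeback would restore */
  };

  static struct toy_entry tree[NSLOTS];   /* stand-in for tree->rbroot */

  static void toy_invalidate(int offset)
  {
          tree[offset].used = false;
  }

  /* A store that may fail (e.g. compression or allocation failure). */
  static bool toy_store(int offset, const char *page, bool fail)
  {
          if (fail)
                  return false;
          tree[offset].used = true;
          snprintf(tree[offset].data, sizeof(tree[offset].data), "%s", page);
          return true;
  }

  /* Writeback restores whatever the tree holds for this offset. */
  static void toy_writeback(int offset, char *page, size_t len)
  {
          if (tree[offset].used)
                  snprintf(page, len, "%s", tree[offset].data);
  }

  int main(void)
  {
          char page[16] = "new-data";

          toy_store(0, "old-data", false);        /* earlier store, offset 0 */

          /* The fix: drop the duplicate before trying the new store. */
          toy_invalidate(0);

          if (!toy_store(0, page, /* fail = */ true))
                  printf("store failed, no stale entry left behind\n");

          /* Without the invalidation above, this would restore "old-data". */
          toy_writeback(0, page, sizeof(page));
          printf("page after writeback: %s\n", page);
          return 0;
  }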

[cerasuolodomenico@gmail.com: add a warning and a comment, per Johannes]
  Link: https://lkml.kernel.org/r/20230925130002.1929369-1-cerasuolodomenico@gmail.com
Link: https://lkml.kernel.org/r/20230922172211.1704917-1-cerasuolodomenico@gmail.com
Fixes: 42c06a0e ("mm: kill frontswap")
Signed-off-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Nhat Pham <nphamcs@gmail.com>
Cc: Dan Streetman <ddstreet@ieee.org>
Cc: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
Cc: Seth Jennings <sjenning@redhat.com>
Cc: Vitaly Wool <vitaly.wool@konsulko.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 6f1bace9
@@ -1218,6 +1218,19 @@ bool zswap_store(struct folio *folio)
 	if (!zswap_enabled || !tree)
 		return false;
 
+	/*
+	 * If this is a duplicate, it must be removed before attempting to store
+	 * it, otherwise, if the store fails the old page won't be removed from
+	 * the tree, and it might be written back overriding the new data.
+	 */
+	spin_lock(&tree->lock);
+	dupentry = zswap_rb_search(&tree->rbroot, offset);
+	if (dupentry) {
+		zswap_duplicate_entry++;
+		zswap_invalidate_entry(tree, dupentry);
+	}
+	spin_unlock(&tree->lock);
+
 	/*
 	 * XXX: zswap reclaim does not work with cgroups yet. Without a
 	 * cgroup-aware entry LRU, we will push out entries system-wide based on
@@ -1333,7 +1346,14 @@ bool zswap_store(struct folio *folio)
 
 	/* map */
 	spin_lock(&tree->lock);
+	/*
+	 * A duplicate entry should have been removed at the beginning of this
+	 * function. Since the swap entry should be pinned, if a duplicate is
+	 * found again here it means that something went wrong in the swap
+	 * cache.
+	 */
 	while (zswap_rb_insert(&tree->rbroot, entry, &dupentry) == -EEXIST) {
+		WARN_ON(1);
 		zswap_duplicate_entry++;
 		zswap_invalidate_entry(tree, dupentry);
 	}
...