Commit 25cd2414 authored by Johannes Weiner, committed by Andrew Morton

mm: zswap: fix data loss on SWP_SYNCHRONOUS_IO devices

Zhongkun He reports data corruption when combining zswap with zram.

The issue is the exclusive loads we're doing in zswap.  They assume that
all reads go into the swapcache, which can then take authoritative
ownership of the data, so the zswap copy can be dropped.

However, zram files are marked SWP_SYNCHRONOUS_IO, and faults will try to
bypass the swapcache.  This results in an optimistic read of the swap data
into a page that will be dismissed if the fault fails due to races.  In
this case, zswap mustn't drop its authoritative copy.
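For context, the swapcache bypass happens on the SWP_SYNCHRONOUS_IO fast path of
do_swap_page().  The sketch below is a simplified paraphrase of that path, not the
verbatim kernel code: function names and argument lists are approximate, and locking
and error handling are omitted.  It shows why a lost race simply frees the private
folio, so zswap must still hold the data when the fault is retried.

	/* do_swap_page(), simplified: swapin for an unmapped swap PTE */
	if (data_race(si->flags & SWP_SYNCHRONOUS_IO) && __swap_count(entry) == 1) {
		/* Bypass the swapcache: read into a private folio. */
		folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma, vmf->address, false);
		folio->swap = entry;
		swap_read_folio(folio, true, NULL);	/* may be served by zswap_load() */
	} else {
		/* Common case: the folio is inserted into the swapcache. */
		folio = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE, vmf);
	}

	/* Re-take the PTE lock and recheck for a racing fault. */
	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address, &vmf->ptl);
	if (!pte_same(ptep_get(vmf->pte), vmf->orig_pte))
		goto out_nomap;	/* private folio is freed; the data must survive in zswap */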

Link: https://lore.kernel.org/all/CACSyD1N+dUvsu8=zV9P691B9bVq33erwOXNTmEaUbi9DrDeJzw@mail.gmail.com/
Fixes: b9c91c43 ("mm: zswap: support exclusive loads")
Link: https://lkml.kernel.org/r/20240324210447.956973-1-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reported-by: Zhongkun He <hezhongkun.hzk@bytedance.com>
Tested-by: Zhongkun He <hezhongkun.hzk@bytedance.com>
Acked-by: Yosry Ahmed <yosryahmed@google.com>
Acked-by: Barry Song <baohua@kernel.org>
Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>
Reviewed-by: Nhat Pham <nphamcs@gmail.com>
Acked-by: Chris Li <chrisl@kernel.org>
Cc: <stable@vger.kernel.org>	[6.5+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 8c864371
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1636,6 +1636,7 @@ bool zswap_load(struct folio *folio)
 	swp_entry_t swp = folio->swap;
 	pgoff_t offset = swp_offset(swp);
 	struct page *page = &folio->page;
+	bool swapcache = folio_test_swapcache(folio);
 	struct zswap_tree *tree = swap_zswap_tree(swp);
 	struct zswap_entry *entry;
 	u8 *dst;
@@ -1648,7 +1649,20 @@ bool zswap_load(struct folio *folio)
 		spin_unlock(&tree->lock);
 		return false;
 	}
-	zswap_rb_erase(&tree->rbroot, entry);
+	/*
+	 * When reading into the swapcache, invalidate our entry. The
+	 * swapcache can be the authoritative owner of the page and
+	 * its mappings, and the pressure that results from having two
+	 * in-memory copies outweighs any benefits of caching the
+	 * compression work.
+	 *
+	 * (Most swapins go through the swapcache. The notable
+	 * exception is the singleton fault on SWP_SYNCHRONOUS_IO
+	 * files, which reads into a private page and may free it if
+	 * the fault fails. We remain the primary owner of the entry.)
+	 */
+	if (swapcache)
+		zswap_rb_erase(&tree->rbroot, entry);
 	spin_unlock(&tree->lock);
 
 	if (entry->length)
@@ -1663,9 +1677,10 @@ bool zswap_load(struct folio *folio)
 	if (entry->objcg)
 		count_objcg_event(entry->objcg, ZSWPIN);
 
-	zswap_entry_free(entry);
-
-	folio_mark_dirty(folio);
+	if (swapcache) {
+		zswap_entry_free(entry);
+		folio_mark_dirty(folio);
+	}
 
 	return true;
 }