Commit 644c8c92 authored by Eric Biggers, committed by Jaegeuk Kim

f2fs: fix deadlock allocating bio_post_read_ctx from mempool

Without any form of coordination, any case where multiple allocations
from the same mempool are needed at a time to make forward progress can
deadlock under memory pressure.
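
As a generic illustration of that pattern (not f2fs code; the pool and
function names below are hypothetical), consider a thread that needs two
objects from the same mempool at once.  With a sleeping GFP mask,
mempool_alloc() does not fail but waits for an element to be freed, so once
every preallocated element is held as a "first" object, all such threads
block forever:

	#include <linux/mempool.h>

	/* Hypothetical sketch: one thread, two allocations from one pool. */
	static void needs_two_objects(mempool_t *pool)
	{
		void *outer = mempool_alloc(pool, GFP_NOFS);  /* may take the last element */
		void *inner = mempool_alloc(pool, GFP_NOFS);  /* sleeps until someone frees */

		/* ... use both objects ... */

		mempool_free(inner, pool);
		mempool_free(outer, pool);
	}

If the underlying allocator is failing under memory pressure, the second
mempool_alloc() can only make progress when another thread calls
mempool_free(), which never happens if every thread is stuck at the same
point.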

This is the case for struct bio_post_read_ctx, as one can be allocated
to decrypt a Merkle tree page during fsverity_verify_bio(), which itself
is running from a post-read callback for a data bio which has its own
struct bio_post_read_ctx.
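
Spelled out, the nesting looks roughly like this (intermediate helpers are
omitted; f2fs_read_merkle_tree_page() is the fs-verity read_merkle_tree_page
hook in fs/f2fs/verity.c, and the exact chain is simplified):

	f2fs_verity_work(ctx)                         /* holds bio_post_read_ctx #1 */
	  fsverity_verify_bio(ctx->bio)
	    f2fs_read_merkle_tree_page()              /* Merkle tree page is encrypted */
	      f2fs readpages() path
	        mempool_alloc(bio_post_read_ctx_pool, ...)  /* needs ctx #2 */

Under memory pressure the inner allocation can only be satisfied from the
pool, and ctx #1 is not returned to the pool until fsverity_verify_bio()
returns, so the inner allocation can block indefinitely.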

Fix this by freeing the first bio_post_read_ctx before calling
fsverity_verify_bio().  This works because verity (if enabled) is always
the last post-read step.

This deadlock can be reproduced by trying to read from an encrypted
verity file after reducing NUM_PREALLOC_POST_READ_CTXS to 1 and patching
mempool_alloc() to pretend that pool->alloc() always fails.
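
Such a debug hack could look roughly like this (NUM_PREALLOC_POST_READ_CTXS
is defined in fs/f2fs/data.c and the pool->alloc() call sits in
mempool_alloc() in mm/mempool.c; the exact patch used for the reproduction
is not part of this commit):

	fs/f2fs/data.c:
	-#define NUM_PREALLOC_POST_READ_CTXS	128
	+#define NUM_PREALLOC_POST_READ_CTXS	1

	mm/mempool.c, in mempool_alloc():
	-	element = pool->alloc(gfp_temp, pool->pool_data);
	+	element = NULL;	/* pretend the underlying allocator always fails */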

Note that since NUM_PREALLOC_POST_READ_CTXS is actually 128, hitting this
bug in practice would require reading from lots of encrypted verity files
at the same time.  But it's theoretically possible, as N available objects
doesn't guarantee forward progress when > N/2 threads each need 2 objects
at a time.

Fixes: 95ae251f ("f2fs: add fs-verity support")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
parent e8ce5749
@@ -204,19 +204,32 @@ static void f2fs_verity_work(struct work_struct *work)
 {
 	struct bio_post_read_ctx *ctx =
 		container_of(work, struct bio_post_read_ctx, work);
+	struct bio *bio = ctx->bio;
+#ifdef CONFIG_F2FS_FS_COMPRESSION
+	unsigned int enabled_steps = ctx->enabled_steps;
+#endif
+
+	/*
+	 * fsverity_verify_bio() may call readpages() again, and while verity
+	 * will be disabled for this, decryption may still be needed, resulting
+	 * in another bio_post_read_ctx being allocated. So to prevent
+	 * deadlocks we need to release the current ctx to the mempool first.
+	 * This assumes that verity is the last post-read step.
+	 */
+	mempool_free(ctx, bio_post_read_ctx_pool);
+	bio->bi_private = NULL;
 
 #ifdef CONFIG_F2FS_FS_COMPRESSION
 	/* previous step is decompression */
-	if (ctx->enabled_steps & (1 << STEP_DECOMPRESS)) {
-
-		f2fs_verify_bio(ctx->bio);
-		f2fs_release_read_bio(ctx->bio);
+	if (enabled_steps & (1 << STEP_DECOMPRESS)) {
+		f2fs_verify_bio(bio);
+		f2fs_release_read_bio(bio);
 		return;
 	}
 #endif
 
-	fsverity_verify_bio(ctx->bio);
-	__f2fs_read_end_io(ctx->bio, false, false);
+	fsverity_verify_bio(bio);
+	__f2fs_read_end_io(bio, false, false);
 }
 
 static void f2fs_post_read_work(struct work_struct *work)