Commit 71aa147b authored by Naohiro Aota, committed by David Sterba

btrfs: fix error handling of fallback uncompress write

When cow_file_range() fails in the middle of the allocation loop, it
unlocks the pages but leaves the ordered extents intact. Thus, we need
to call btrfs_cleanup_ordered_extents() to finish the created ordered
extents.

Also, we need to call end_extent_writepage() if locked_page is available
because btrfs_cleanup_ordered_extents() never processes the region on
the locked_page.

Furthermore, we need to set the mapping as error if locked_page is
unavailable before unlocking the pages, so that the errno is properly
propagated to the user space.

CC: stable@vger.kernel.org # 5.18+
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
parent 99826e4c
@@ -928,8 +928,18 @@ static int submit_uncompressed_range(struct btrfs_inode *inode,
 		goto out;
 	}
 	if (ret < 0) {
-		if (locked_page)
+		btrfs_cleanup_ordered_extents(inode, locked_page, start, end - start + 1);
+		if (locked_page) {
+			const u64 page_start = page_offset(locked_page);
+			const u64 page_end = page_start + PAGE_SIZE - 1;
+
+			btrfs_page_set_error(inode->root->fs_info, locked_page,
+					     page_start, PAGE_SIZE);
+			set_page_writeback(locked_page);
+			end_page_writeback(locked_page);
+			end_extent_writepage(locked_page, ret, page_start, page_end);
 			unlock_page(locked_page);
+		}
 		goto out;
 	}
@@ -1378,9 +1388,12 @@ static noinline int cow_file_range(struct btrfs_inode *inode,
 	 * However, in case of unlock == 0, we still need to unlock the pages
	 * (except @locked_page) to ensure all the pages are unlocked.
	 */
-	if (!unlock && orig_start < start)
+	if (!unlock && orig_start < start) {
+		if (!locked_page)
+			mapping_set_error(inode->vfs_inode.i_mapping, ret);
 		extent_clear_unlock_delalloc(inode, orig_start, start - 1,
 					     locked_page, 0, page_ops);
+	}

	/*
	 * For the range (2). If we reserved an extent for our delalloc range