Commit b4c5d8fd authored by Qu Wenruo, committed by David Sterba

btrfs: qgroup: fix wrong qgroup metadata reserve for delayed inode

For the delayed inode facility, qgroup metadata is reserved and later
freed.

However, we're freeing more bytes than we reserved.
In btrfs_delayed_inode_reserve_metadata():

	num_bytes = btrfs_calc_metadata_size(fs_info, 1);
	...
		ret = btrfs_qgroup_reserve_meta_prealloc(root,
				fs_info->nodesize, true);
		...
		if (!ret) {
			node->bytes_reserved = num_bytes;

But in btrfs_delayed_inode_release_metadata():

	if (qgroup_free)
		btrfs_qgroup_free_meta_prealloc(node->root,
				node->bytes_reserved);
	else
		btrfs_qgroup_convert_reserved_meta(node->root,
				node->bytes_reserved);

This means we're always releasing more qgroup metadata rsv than we have
reserved.

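For illustration, below is a minimal user-space sketch of the mismatch.
It assumes btrfs_calc_metadata_size(fs_info, 1) evaluates to
nodesize * BTRFS_MAX_LEVEL (matching the current in-tree helper); the
names and numbers are for demonstration only, not kernel code.

	/* Sketch only, not kernel code. */
	#include <stdio.h>

	#define BTRFS_MAX_LEVEL	8

	int main(void)
	{
		/* Assumed default nodesize, for demonstration. */
		unsigned long long nodesize = 16384;

		/*
		 * What btrfs_delayed_inode_reserve_metadata() stores in
		 * node->bytes_reserved: btrfs_calc_metadata_size(fs_info, 1).
		 */
		unsigned long long num_bytes = nodesize * BTRFS_MAX_LEVEL;

		/* Qgroup meta prealloc actually reserved: one nodesize. */
		unsigned long long reserved = nodesize;

		/* Qgroup meta later freed/converted: node->bytes_reserved. */
		unsigned long long released = num_bytes;

		printf("reserved %llu, released %llu, over-released %llu\n",
		       reserved, released, released - reserved);
		return 0;
	}
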
This won't trigger a selftest warning, as the btrfs qgroup metadata rsv has
extra protection against cases like quota being enabled halfway through.

But we still need to fix this problem anyway.

This patch uses the same num_bytes for the qgroup metadata rsv, so the
amount reserved matches the amount released.

Fixes: f218ea6c ("btrfs: delayed-inode: Remove wrong qgroup meta reservation calls")
CC: stable@vger.kernel.org # 4.19+
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
parent 425c6ed6
@@ -627,8 +627,7 @@ static int btrfs_delayed_inode_reserve_metadata(
 	 */
 	if (!src_rsv || (!trans->bytes_reserved &&
			 src_rsv->type != BTRFS_BLOCK_RSV_DELALLOC)) {
-		ret = btrfs_qgroup_reserve_meta_prealloc(root,
-					fs_info->nodesize, true);
+		ret = btrfs_qgroup_reserve_meta_prealloc(root, num_bytes, true);
 		if (ret < 0)
 			return ret;
 		ret = btrfs_block_rsv_add(root, dst_rsv, num_bytes,
...