Commit 2f3165ec authored by Zygo Blaxell, committed by David Sterba

btrfs: don't force mounts to wait for cleaner_kthread to delete one or more subvolumes

During a mount, we start the cleaner kthread first because the transaction
kthread wants to wake up the cleaner kthread.  We start the transaction
kthread next because everything in btrfs wants transactions.  We do reloc
recovery in the thread that was doing the original mount call once the
transaction kthread is running.  This means that the cleaner kthread
could already be running when reloc recovery happens (e.g. if a snapshot
delete was started before a crash).
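
For orientation, here is a condensed sketch of that ordering as it appears
in open_ctree() (error handling and the unrelated setup between the steps
are omitted; this is a simplification, not the verbatim code):

	fs_info->cleaner_kthread = kthread_run(cleaner_kthread, tree_root,
					       "btrfs-cleaner");

	fs_info->transaction_kthread = kthread_run(transaction_kthread,
						   tree_root, "btrfs-transaction");

	/* still in the task doing the original mount call */
	ret = btrfs_recover_relocation(tree_root);

So the cleaner kthread is free to pick up a pending subvolume deletion
before the mounting task ever reaches btrfs_recover_relocation.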

Relocation does not play well with the cleaner kthread, so a mutex was
added in commit 5f316481 ("Btrfs: fix
race between balance recovery and root deletion") to prevent both from
being active at the same time.

If the cleaner kthread is already holding the mutex by the time we get
to btrfs_recover_relocation, the mount will be blocked until at least
one deleted subvolume is cleaned (possibly more if the mount process
doesn't get the lock right away).  During this time (which could be an
arbitrarily long time on a large/slow filesystem), the mount process is
stuck and the filesystem is unnecessarily inaccessible.

Fix this by locking cleaner_mutex before we start cleaner_kthread, and
unlocking the mutex after mount no longer requires it.  This ensures
that the mounting process will not be blocked by the cleaner kthread.
The cleaner kthread is already prepared for mutex contention and will
just go to sleep until the mutex is available.
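
For reference, that contention handling boils down to a trylock-and-sleep
loop; a condensed sketch of the cleaner loop (not the verbatim kernel code,
the real cleaner_kthread does additional checks and work per pass):

	static int cleaner_kthread(void *arg)
	{
		struct btrfs_root *root = arg;
		struct btrfs_fs_info *fs_info = root->fs_info;

		do {
			/*
			 * If someone else (e.g. the mounting task) holds
			 * cleaner_mutex, skip this pass instead of blocking.
			 */
			if (mutex_trylock(&fs_info->cleaner_mutex)) {
				btrfs_clean_one_deleted_snapshot(root);
				mutex_unlock(&fs_info->cleaner_mutex);
			}

			set_current_state(TASK_INTERRUPTIBLE);
			if (!kthread_should_stop())
				schedule();
			__set_current_state(TASK_RUNNING);
		} while (!kthread_should_stop());

		return 0;
	}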
Signed-off-by: Zygo Blaxell <ce3g8jdj@umail.furryterror.org>
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
parent 58d7bbf8
@@ -2517,6 +2517,7 @@ int open_ctree(struct super_block *sb,
 	int num_backups_tried = 0;
 	int backup_index = 0;
 	int max_active;
+	bool cleaner_mutex_locked = false;
 
 	tree_root = fs_info->tree_root = btrfs_alloc_root(fs_info, GFP_KERNEL);
 	chunk_root = fs_info->chunk_root = btrfs_alloc_root(fs_info, GFP_KERNEL);
@@ -2997,6 +2998,13 @@ int open_ctree(struct super_block *sb,
 		goto fail_sysfs;
 	}
 
+	/*
+	 * Hold the cleaner_mutex thread here so that we don't block
+	 * for a long time on btrfs_recover_relocation. cleaner_kthread
+	 * will wait for us to finish mounting the filesystem.
+	 */
+	mutex_lock(&fs_info->cleaner_mutex);
+	cleaner_mutex_locked = true;
 	fs_info->cleaner_kthread = kthread_run(cleaner_kthread, tree_root,
 					       "btrfs-cleaner");
 	if (IS_ERR(fs_info->cleaner_kthread))
@@ -3056,10 +3064,8 @@ int open_ctree(struct super_block *sb,
 		ret = btrfs_cleanup_fs_roots(fs_info);
 		if (ret)
 			goto fail_qgroup;
-
-		mutex_lock(&fs_info->cleaner_mutex);
+		/* We locked cleaner_mutex before creating cleaner_kthread. */
 		ret = btrfs_recover_relocation(tree_root);
-		mutex_unlock(&fs_info->cleaner_mutex);
 		if (ret < 0) {
 			printk(KERN_WARNING
 			       "BTRFS: failed to recover relocation\n");
@@ -3067,6 +3073,8 @@ int open_ctree(struct super_block *sb,
 			goto fail_qgroup;
 		}
 	}
+	mutex_unlock(&fs_info->cleaner_mutex);
+	cleaner_mutex_locked = false;
 
 	location.objectid = BTRFS_FS_TREE_OBJECTID;
 	location.type = BTRFS_ROOT_ITEM_KEY;
@@ -3180,6 +3188,10 @@ int open_ctree(struct super_block *sb,
 	filemap_write_and_wait(fs_info->btree_inode->i_mapping);
 
 fail_sysfs:
+	if (cleaner_mutex_locked) {
+		mutex_unlock(&fs_info->cleaner_mutex);
+		cleaner_mutex_locked = false;
+	}
 	btrfs_sysfs_remove_mounted(fs_info);
 
 fail_fsdev_sysfs: