Commit 27694091 authored by David Sterba

btrfs: clear defragmented inodes using postorder in btrfs_cleanup_defrag_inodes()

btrfs_cleanup_defrag_inodes() is not called frequently, only on remount
or unmount, but the way it frees the inodes in fs_info->defrag_inodes
is inefficient. Each iteration has to locate the first node, remove it
and potentially rebalance the tree before freeing the entry, repeating
until the tree is empty. What this pattern does allow is a conditional
reschedule between iterations.
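
For reference, the loop being replaced looks like this (reconstructed
from the removed lines in the diff below, with comments added here to
point out the per-entry cost and the reschedule point):

void btrfs_cleanup_defrag_inodes(struct btrfs_fs_info *fs_info)
{
	struct inode_defrag *defrag;
	struct rb_node *node;

	spin_lock(&fs_info->defrag_inodes_lock);
	node = rb_first(&fs_info->defrag_inodes);
	while (node) {
		/* rb_erase() may recolor/rotate nodes to rebalance the tree. */
		rb_erase(node, &fs_info->defrag_inodes);
		defrag = rb_entry(node, struct inode_defrag, rb_node);
		kmem_cache_free(btrfs_inode_defrag_cachep, defrag);

		/* Drop the lock and reschedule here if needed. */
		cond_resched_lock(&fs_info->defrag_inodes_lock);

		/* Descend to the leftmost node again, O(log n) per entry. */
		node = rb_first(&fs_info->defrag_inodes);
	}
	spin_unlock(&fs_info->defrag_inodes_lock);
}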

For cleanups the rbtree_postorder_for_each_entry_safe() iterator is
convenient, but we can't reschedule and restart the iteration because
some of the tree nodes would already be freed.
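
A minimal sketch of how the post-order iterator is typically used
(struct foo, foo_free_all() and the cachep argument are hypothetical
names, not from the btrfs code): post-order visits children before
their parent, so freeing the entry just visited never touches nodes the
walk still needs, but the freed nodes are never erased from the tree,
which is why the iteration cannot be stopped and restarted.

#include <linux/rbtree.h>
#include <linux/slab.h>
#include <linux/types.h>

struct foo {
	struct rb_node rb_node;
	u64 key;
};

static void foo_free_all(struct rb_root *root, struct kmem_cache *cachep)
{
	struct foo *entry, *next;

	/* 'next' is computed from 'entry' before the body runs, and in
	 * post-order that only looks at nodes not yet visited, so it is
	 * safe to free 'entry' here. */
	rbtree_postorder_for_each_entry_safe(entry, next, root, rb_node)
		kmem_cache_free(cachep, entry);

	/* The entries were freed but never erased from the tree; reset
	 * the root if it could be looked at again (not needed for a tree
	 * that is going away, as in btrfs_cleanup_defrag_inodes()). */
	*root = RB_ROOT;
}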

The cleanup operation is kmem_cache_free(), which will likely take the
fast path for most objects, so rescheduling should not be necessary.
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
parent ffc53165
@@ -212,20 +212,14 @@ static struct inode_defrag *btrfs_pick_defrag_inode(
 
 void btrfs_cleanup_defrag_inodes(struct btrfs_fs_info *fs_info)
 {
-	struct inode_defrag *defrag;
-	struct rb_node *node;
+	struct inode_defrag *defrag, *next;
 
 	spin_lock(&fs_info->defrag_inodes_lock);
-	node = rb_first(&fs_info->defrag_inodes);
-	while (node) {
-		rb_erase(node, &fs_info->defrag_inodes);
-		defrag = rb_entry(node, struct inode_defrag, rb_node);
-		kmem_cache_free(btrfs_inode_defrag_cachep, defrag);
 
-		cond_resched_lock(&fs_info->defrag_inodes_lock);
+	rbtree_postorder_for_each_entry_safe(defrag, next,
+					     &fs_info->defrag_inodes, rb_node)
+		kmem_cache_free(btrfs_inode_defrag_cachep, defrag);
 
-		node = rb_first(&fs_info->defrag_inodes);
-	}
 	spin_unlock(&fs_info->defrag_inodes_lock);
 }