btrfs: make hole and data seeking a lot more efficient

    Filipe Manana authored
    The current implementation of hole and data seeking for llseek does not
    scale well with the number of extents or with the distance between the
    start offset and the next hole or extent, due to its high algorithmic
    complexity. We also often get reports of btrfs' hole and data seeking
    (llseek) being too slow, such as at 2017's LSFMM (see the Link tag at
    the bottom).
    
    In order to better understand it, let's consider the case where the
    start offset is 0, we are seeking for a hole and the file size is 16G.
    Between file offset 0 and the first hole in the file there are 100K
    extents - this is common for large files, especially if we have
    compression enabled, since the maximum extent size is limited to 128K.
    The steps taken by the main loop of the current algorithm are the
    following:
    
    1) We start by calling btrfs_get_extent_fiemap(), for file offset 0, which
       calls btrfs_get_extent(). This will first look up an extent map in
       the inode's extent map tree (a red black tree). If the extent map is
       not loaded in memory, then it will do a lookup for the corresponding
       file extent item in the subvolume's b+tree, create an extent map based
       on the contents of the file extent item and then add the extent map to
       the extent map tree of the inode;
    
    2) The second iteration calls btrfs_get_extent_fiemap() again, this time
       with a start offset matching the end offset of the previous extent.
       Again, btrfs_get_extent() will first search the extent map tree, and
       if it doesn't find an extent map there, it will again search in the
       b+tree of the subvolume for a matching file extent item, build an
       extent map based on the file extent item, and add the extent map to
       the extent map tree of the inode;
    
    3) This repeats over and over until we find the first hole (when seeking
       for holes) or until we find the first extent (when seeking for data).
    
       If there are no extent maps loaded in memory, then on each iteration
       we do 1 extent map tree search, 1 b+tree search, plus 1 more extent
       map tree traversal to insert an extent map - plus we allocate memory
       for the extent map.
    
       On each iteration we are growing the size of the extent map tree,
       making each future search slower, and also visiting the same b+tree
       leaves over and over again - taking into account that with the default
       leaf size of 16K we can fit more than 200 file extent items in a leaf,
       we can visit the same b+tree leaf 200+ times, on each visit walking
       down a path from the root to the leaf.
    
    So it's easy to see that what we have now doesn't scale well. Also, it
    loads an extent map for every file extent item into memory, which is not
    efficient - we should add extent maps only when doing IO (writing or
    reading file data).
    
    This change implements a new algorithm which scales much better, and
    works like this:
    
    1) We iterate over the subvolume's b+tree, visiting each leaf that has
       file extent items once and only once;
    
    2) For any file extent items found that don't represent holes or prealloc
       extents, it does not search the extent map tree - there's no need at
       all for that, as an extent map is just an in-memory representation of
       a file extent item;
    
    3) When a hole or a prealloc extent is found, it will check if there's
       delalloc for its range. For this it will search for EXTENT_DELALLOC
       bits in the inode's io tree and check the extent map tree - to
       account for unflushed delalloc and for flushed delalloc (the period
       between running delalloc and ordered extent completion),
       respectively. This is similar to what the current implementation does
       when it finds a hole or prealloc extent, but without creating extent
       maps and adding them to the extent map tree in case they are not
       loaded in memory;
    
    4) It never allocates extent maps, or adds extent maps to the inode's
       extent map tree. This not only saves memory and time (from the tree
       insertions and allocations), but also eliminates the possibility of
       -ENOMEM due to allocating too many extent maps.
    
    Part of this new code will also be used later for fiemap (which also
    suffers similar scalability problems).
    
    The following test example can be used to quickly measure the efficiency
    before and after this patch:
    
        $ cat test-seek-hole.sh
        #!/bin/bash
    
        DEV=/dev/sdi
        MNT=/mnt/sdi
    
        mkfs.btrfs -f $DEV
    
        mount -o compress=lzo $DEV $MNT
    
        # 16G file -> 131073 compressed extents.
        xfs_io -f -c "pwrite -S 0xab -b 1M 0 16G" $MNT/foobar
    
        # Leave a 1M hole at file offset 15G.
        xfs_io -c "fpunch 15G 1M" $MNT/foobar
    
        # Unmount and mount again, so that we can test when there's no
        # metadata cached in memory.
        umount $MNT
        mount -o compress=lzo $DEV $MNT
    
        # Test seeking for hole from offset 0 (hole is at offset 15G).
    
        start=$(date +%s%N)
        xfs_io -c "seek -h 0" $MNT/foobar
        end=$(date +%s%N)
        dur=$(( (end - start) / 1000000 ))
        echo "Took $dur milliseconds to seek first hole (metadata not cached)"
        echo
    
        start=$(date +%s%N)
        xfs_io -c "seek -h 0" $MNT/foobar
        end=$(date +%s%N)
        dur=$(( (end - start) / 1000000 ))
        echo "Took $dur milliseconds to seek first hole (metadata cached)"
        echo
    
        umount $MNT
    
    Before this change:
    
        $ ./test-seek-hole.sh
        (...)
        Whence	Result
        HOLE	16106127360
        Took 176 milliseconds to seek first hole (metadata not cached)
    
        Whence	Result
        HOLE	16106127360
        Took 17 milliseconds to seek first hole (metadata cached)
    
    After this change:
    
        $ ./test-seek-hole.sh
        (...)
        Whence	Result
        HOLE	16106127360
        Took 43 milliseconds to seek first hole (metadata not cached)
    
        Whence	Result
        HOLE	16106127360
        Took 13 milliseconds to seek first hole (metadata cached)
    
    That's about 4x faster when no metadata is cached and about 30% faster
    when all metadata is cached.
    
    In practice the differences may often be significantly higher, either due
    to a higher number of extents in a file or because the subvolume's b+tree
    is much bigger than in this example, where we only have one file.
    
    Link: https://lwn.net/Articles/718805/
    Reviewed-by: Josef Bacik <josef@toxicpanda.com>
    Signed-off-by: Filipe Manana <fdmanana@suse.com>
    Signed-off-by: David Sterba <dsterba@suse.com>