    btrfs: use single bulk copy operations when logging directories · da1b811f
    Filipe Manana authored
    When logging a directory and inserting a batch of directory items, we copy
    the data of each item, one at a time, from a leaf in the fs/subvolume tree
    to a leaf in a log tree. This is not needed, since the data of all the
    items is contiguous both in the source leaf and in the destination leaf,
    so we can copy everything with a single copy operation.
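
    To illustrate the idea outside of the kernel, here is a minimal C sketch
    (hypothetical, not the actual tree-log.c code): when the data of a batch
    of items is contiguous in both the source and the destination buffer, the
    per-item copies collapse into one copy of the summed data sizes. The names
    below (copy_items_one_by_one, copy_items_bulk, 'sizes') are illustrative
    stand-ins, not kernel helpers.

      #include <stddef.h>
      #include <string.h>

      /* before: one copy operation per item */
      void copy_items_one_by_one(char *dst, const char *src,
                                 const size_t *sizes, int nr)
      {
              size_t off = 0;

              for (int i = 0; i < nr; i++) {
                      memcpy(dst + off, src + off, sizes[i]);
                      off += sizes[i];
              }
      }

      /*
       * after: the items' data forms one contiguous region, so a single bulk
       * copy of the total size moves the same bytes with one call
       */
      void copy_items_bulk(char *dst, const char *src,
                           const size_t *sizes, int nr)
      {
              size_t total = 0;

              for (int i = 0; i < nr; i++)
                      total += sizes[i];

              memcpy(dst, src, total);
      }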
    
    This patch is part of a small patchset consisting of the following
    patches:
    
      btrfs: loop only once over data sizes array when inserting an item batch
      btrfs: unexport setup_items_for_insert()
      btrfs: use single bulk copy operations when logging directories
    
    This is patch 3/3.
    
    The following test was used to compare the performance of a branch without
    the patchset against a branch with the whole patchset applied:
    
      $ cat dir-fsync-test.sh
      #!/bin/bash
    
      DEV=/dev/nvme0n1
      MNT=/mnt/nvme0n1
    
      NUM_NEW_FILES=1000000
      NUM_FILE_DELETES=1000
      LEAF_SIZE=16K
    
      mkfs.btrfs -f -n $LEAF_SIZE $DEV
      mount -o ssd $DEV $MNT
    
      mkdir $MNT/testdir
    
      for ((i = 1; i <= $NUM_NEW_FILES; i++)); do
          echo -n > $MNT/testdir/file_$i
      done
    
      # Fsync the directory. This will log the new dir items and the inodes
      # they point to, because these are new inodes.
      start=$(date +%s%N)
      xfs_io -c "fsync" $MNT/testdir
      end=$(date +%s%N)
    
      dur=$(( (end - start) / 1000000 ))
      echo "dir fsync took $dur ms after adding $NUM_NEW_FILES files"
    
      # Sync to force a transaction commit and wipe out the log.
      sync
    
      del_inc=$(( $NUM_NEW_FILES / $NUM_FILE_DELETES ))
      for ((i = 1; i <= $NUM_NEW_FILES; i += $del_inc)); do
          rm -f $MNT/testdir/file_$i
      done
    
      # Fsync the directory. This will only log dir items, since there are no
      # dentries pointing to new inodes.
      start=$(date +%s%N)
      xfs_io -c "fsync" $MNT/testdir
      end=$(date +%s%N)
    
      dur=$(( (end - start) / 1000000 ))
      echo "dir fsync took $dur ms after deleting $NUM_FILE_DELETES files"
    
      umount $MNT
    
    The tests were run on a non-debug kernel (Debian's default kernel config),
    and the results were the following:
    
    *** with a leaf size of 16K, before patchset ***
    
    dir fsync took 8482 ms after adding 1000000 files
    dir fsync took 166 ms after deleting 1000 files
    
    *** with a leaf size of 16K, after patchset ***
    
    dir fsync took 8196 ms after adding 1000000 files  (-3.4%)
    dir fsync took 143 ms after deleting 1000 files    (-14.9%)
    
    *** with a leaf size of 64K, before patchset ***
    
    dir fsync took 12851 ms after adding 1000000 files
    dir fsync took 466 ms after deleting 1000 files
    
    *** with a leaf size of 64K, after patchset ***
    
    dir fsync took 12287 ms after adding 1000000 files (-4.5%)
    dir fsync took 414 ms after deleting 1000 files    (-11.8%)
    Signed-off-by: Filipe Manana <fdmanana@suse.com>
    Signed-off-by: David Sterba <dsterba@suse.com>