26 Oct, 2021 (40 commits)
    • Qu Wenruo
      btrfs: unexport repair_io_failure() · 38d5e541
      Qu Wenruo authored
      Function repair_io_failure() is no longer used out of extent_io.c since
      commit 8b9b6f25 ("btrfs: scrub: cleanup the remaining nodatasum
      fixup code"), which removes the last external caller.
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • Filipe Manana
      btrfs: do not commit delayed inode when logging a file in full sync mode · f6df27dd
      Filipe Manana authored
      When logging a regular file in full sync mode, we are currently committing
      its delayed inode item. This is to ensure that we never miss copying the
      inode item, with its most up-to-date data, into the log tree.
      
      However that is not necessary since commit e4545de5 ("Btrfs: fix fsync
      data loss after append write"), because even if we don't find the leaf
      with the inode item when looking for leaves that changed in the current
      transaction, we end up logging the inode item later using the in-memory
      content. In case we find the leaf containing the inode item, we already
      end up using the in-memory inode for filling the inode item in the log
      tree, and not the inode item that is in the fs/subvolume tree, as it
      might not be up to date (copy_items() -> fill_inode_item()).
      
      So don't commit the delayed inode item, which brings a couple of benefits:
      
      1) Avoid writing the inode item to the fs/subvolume btree, saving time and
         reducing lock contention on the btree;
      
      2) In case no other item for the inode was changed, added or deleted in
         the same leaf where the inode item is located, we ended up copying
         all the items in that leaf to the log tree - it's harmless from a
         functional point of view, but it wastes time and log tree space.
      
      This patch is part of a patch set comprised of the following patches:
      
        btrfs: check if a log tree exists at inode_logged()
        btrfs: remove no longer needed checks for NULL log context
        btrfs: do not log new dentries when logging that a new name exists
        btrfs: always update the logged transaction when logging new names
        btrfs: avoid expensive search when dropping inode items from log
        btrfs: add helper to truncate inode items when logging inode
        btrfs: avoid expensive search when truncating inode items from the log
        btrfs: avoid search for logged i_size when logging inode if possible
        btrfs: avoid attempt to drop extents when logging inode for the first time
        btrfs: do not commit delayed inode when logging a file in full sync mode
      
      This is patch 10/10 and the following test results compare a branch with
      the whole patch set applied versus a branch without any of the patches
      applied.
      
      The following script was used to test dbench with 8 and 16 jobs on a
      machine with 12 cores, 64G of RAM, a NVME device and using a non-debug
      kernel config (Debian's default):
      
        $ cat test.sh
        #!/bin/bash
      
        if [ $# -ne 1 ]; then
            echo "Use $0 NUM_JOBS"
            exit 1
        fi
      
        NUM_JOBS=$1
      
        DEV=/dev/nvme0n1
        MNT=/mnt/nvme0n1
        MOUNT_OPTIONS="-o ssd"
        MKFS_OPTIONS="-m single -d single"
      
        echo "performance" | \
            tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
      
        mkfs.btrfs -f $MKFS_OPTIONS $DEV
        mount $MOUNT_OPTIONS $DEV $MNT
      
        dbench -D $MNT -t 120 $NUM_JOBS
      
        umount $MNT
      
      The results were the following:
      
      8 jobs, before patchset:
      
       Operation      Count    AvgLat    MaxLat
       ----------------------------------------
       NTCreateX    4113896     0.009   238.665
       Close        3021699     0.001     0.590
       Rename        174215     0.082   238.733
       Unlink        830977     0.049   238.642
       Deltree           96     2.232     8.022
       Mkdir             48     0.003     0.005
       Qpathinfo    3729013     0.005     2.672
       Qfileinfo     653206     0.001     0.152
       Qfsinfo       683866     0.002     0.526
       Sfileinfo     335055     0.004     1.571
       Find         1441800     0.016     4.288
       WriteX       2049644     0.010     3.982
       ReadX        6449786     0.003     0.969
       LockX          13400     0.002     0.043
       UnlockX        13400     0.001     0.075
       Flush         288349     2.521   245.516
      
      Throughput 1075.73 MB/sec  8 clients  8 procs  max_latency=245.520 ms
      
      8 jobs, after patchset:
      
       Operation      Count    AvgLat    MaxLat
       ----------------------------------------
       NTCreateX    4154282     0.009   156.675
       Close        3051450     0.001     0.843
       Rename        175912     0.072     4.444
       Unlink        839067     0.048    66.050
       Deltree           96     2.131     5.979
       Mkdir             48     0.002     0.004
       Qpathinfo    3765575     0.005     3.079
       Qfileinfo     659582     0.001     0.099
       Qfsinfo       690474     0.002     0.155
       Sfileinfo     338366     0.004     1.419
       Find         1455816     0.016     3.423
       WriteX       2069538     0.010     4.328
       ReadX        6512429     0.003     0.840
       LockX          13530     0.002     0.078
       UnlockX        13530     0.001     0.051
       Flush         291158     2.500   163.468
      
      Throughput 1105.45 MB/sec  8 clients  8 procs  max_latency=163.474 ms
      
      +2.7% throughput, -40.1% max latency
      
      16 jobs, before patchset:
      
       Operation      Count    AvgLat    MaxLat
       ----------------------------------------
       NTCreateX    5457602     0.033   337.098
       Close        4008979     0.002     2.018
       Rename        231051     0.323   254.054
       Unlink       1102209     0.202   337.243
       Deltree          160     6.521    31.720
       Mkdir             80     0.003     0.007
       Qpathinfo    4946147     0.014     6.988
       Qfileinfo     867440     0.001     1.642
       Qfsinfo       907081     0.003     1.821
       Sfileinfo     444433     0.005     2.053
       Find         1912506     0.067     7.854
       WriteX       2724852     0.018     7.428
       ReadX        8553883     0.003     2.059
       LockX          17770     0.003     0.350
       UnlockX        17770     0.002     0.627
       Flush         382533     2.810   353.691
      
      Throughput 1413.09 MB/sec  16 clients  16 procs  max_latency=353.696 ms
      
      16 jobs, after patchset:
      
       Operation      Count    AvgLat    MaxLat
       ----------------------------------------
       NTCreateX    5393156     0.034   303.181
       Close        3961986     0.002     1.502
       Rename        228359     0.320   253.379
       Unlink       1088920     0.206   303.409
       Deltree          160     6.419    30.088
       Mkdir             80     0.003     0.004
       Qpathinfo    48879676     0.015     7.722
       Qfileinfo     857408     0.001     1.651
       Qfsinfo       896343     0.002     2.147
       Sfileinfo     439317     0.005     4.298
       Find         1890018     0.073     8.347
       WriteX       2693356     0.018     6.373
       ReadX        8453485     0.003     3.836
       LockX          17562     0.003     0.486
       UnlockX        17562     0.002     0.635
       Flush         378023     2.802   315.904
      
      Throughput 1454.46 MB/sec  16 clients  16 procs  max_latency=315.910 ms
      
      +2.9% throughput, -11.3% max latency
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • Filipe Manana
      btrfs: avoid attempt to drop extents when logging inode for the first time · 5328b2a7
      Filipe Manana authored
      When logging an extent, in the fast fsync path, we always attempt to drop
      or trim any existing extents whose ranges match or overlap the range
      of the extent we are about to log. We do that through a call to
      btrfs_drop_extents().
      
      However this is not needed when we are logging the inode for the first
      time in the current transaction, since we have no inode items of the
      inode in the log tree. Calling btrfs_drop_extents() does a deletion search
      on the log tree, which is expensive when we have concurrent tasks
      accessing the log tree because a deletion search always acquires a write
      lock on the extent buffers at levels 2, 1 and 0, adding significant lock
      contention, especially taking into account that the height of a log tree
      rarely (if ever) goes beyond 2 or 3, due to its short life.
      
      So skip the call to btrfs_drop_extents() when the inode was not previously
      logged in the current transaction.
      
      This patch is part of a patch set comprised of the following patches:
      
        btrfs: check if a log tree exists at inode_logged()
        btrfs: remove no longer needed checks for NULL log context
        btrfs: do not log new dentries when logging that a new name exists
        btrfs: always update the logged transaction when logging new names
        btrfs: avoid expensive search when dropping inode items from log
        btrfs: add helper to truncate inode items when logging inode
        btrfs: avoid expensive search when truncating inode items from the log
        btrfs: avoid search for logged i_size when logging inode if possible
        btrfs: avoid attempt to drop extents when logging inode for the first time
        btrfs: do not commit delayed inode when logging a file in full sync mode
      
      This is patch 9/10 and test results are listed in the change log of the
      last patch in the set.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • Filipe Manana
      btrfs: avoid search for logged i_size when logging inode if possible · a5c733a4
      Filipe Manana authored
      If we are logging that an inode exists and the inode was not logged
      before, we can avoid searching in the log tree for the inode item since we
      know it does not exist. Such a search wastes time and adds lock contention
      on the extent buffers of the log tree when other tasks are logging other
      inodes.
      
      This patch is part of a patch set comprised of the following patches:
      
        btrfs: check if a log tree exists at inode_logged()
        btrfs: remove no longer needed checks for NULL log context
        btrfs: do not log new dentries when logging that a new name exists
        btrfs: always update the logged transaction when logging new names
        btrfs: avoid expensive search when dropping inode items from log
        btrfs: add helper to truncate inode items when logging inode
        btrfs: avoid expensive search when truncating inode items from the log
        btrfs: avoid search for logged i_size when logging inode if possible
        btrfs: avoid attempt to drop extents when logging inode for the first time
        btrfs: do not commit delayed inode when logging a file in full sync mode
      
      This is patch 8/10 and test results are listed in the change log of the
      last patch in the set.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • Filipe Manana
      btrfs: avoid expensive search when truncating inode items from the log · 4934a815
      Filipe Manana authored
      Whenever we are logging a file inode in full sync mode we call
      btrfs_truncate_inode_items() to delete items of the inode we may have
      previously logged.
      
      That results in doing a btree search for deletion, which is expensive
      because it always acquires write locks for extent buffers at levels 2, 1
      and 0, and it balances any node that is less than half full. Acquiring
      the write locks can block the task if the extent buffers are already
      locked by another task or block other tasks attempting to lock them,
      which is especially bad in the case of log trees, since they are small due to
      their short life, with a root node at a level typically not greater than
      level 2.
      
      If we know that we are logging the inode for the first time in the current
      transaction, we can skip the call to btrfs_truncate_inode_items(), avoiding
      the deletion search. This change does that.
      
      This patch is part of a patch set comprised of the following patches:
      
        btrfs: check if a log tree exists at inode_logged()
        btrfs: remove no longer needed checks for NULL log context
        btrfs: do not log new dentries when logging that a new name exists
        btrfs: always update the logged transaction when logging new names
        btrfs: avoid expensive search when dropping inode items from log
        btrfs: add helper to truncate inode items when logging inode
        btrfs: avoid expensive search when truncating inode items from the log
        btrfs: avoid search for logged i_size when logging inode if possible
        btrfs: avoid attempt to drop extents when logging inode for the first time
        btrfs: do not commit delayed inode when logging a file in full sync mode
      
      This is patch 7/10 and test results are listed in the change log of the
      last patch in the set.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • Filipe Manana
      btrfs: add helper to truncate inode items when logging inode · 8a2b3da1
      Filipe Manana authored
      Move the call to btrfs_truncate_inode_items(), and the surrounding retry
      loop, into a local helper function. This avoids some repetition and avoids
      making the next change awkward due to too much indentation.
      
      This patch is part of a patch set comprised of the following patches:
      
        btrfs: check if a log tree exists at inode_logged()
        btrfs: remove no longer needed checks for NULL log context
        btrfs: do not log new dentries when logging that a new name exists
        btrfs: always update the logged transaction when logging new names
        btrfs: avoid expensive search when dropping inode items from log
        btrfs: add helper to truncate inode items when logging inode
        btrfs: avoid expensive search when truncating inode items from the log
        btrfs: avoid search for logged i_size when logging inode if possible
        btrfs: avoid attempt to drop extents when logging inode for the first time
        btrfs: do not commit delayed inode when logging a file in full sync mode
      
      This is patch 6/10 and test results are listed in the change log of the
      last patch in the set.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • Filipe Manana
      btrfs: avoid expensive search when dropping inode items from log · 88e221cd
      Filipe Manana authored
      Whenever we are logging a directory inode, logging that an inode exists or
      logging an inode that has changes in its references or xattrs, we attempt
      to delete items of this inode we may have previously logged (through calls
      to drop_objectid_items()).
      
      That attempt does a btree search for deletion, which is expensive because
      it always acquires write locks for extent buffers at levels 2, 1 and 0,
      and it balances any node that is less than half full. Acquiring the write
      locks can block the task if the extent buffers are already locked or block
      other tasks attempting to lock them, which is especially bad in the case of log
      trees since they are small due to their short life, with a root node at a
      level typically not greater than level 2.
      
      If we know that we are logging the inode for the first time in the current
      transaction, we can skip the search. This change does that.
      
      This patch is part of a patch set comprised of the following patches:
      
        btrfs: check if a log tree exists at inode_logged()
        btrfs: remove no longer needed checks for NULL log context
        btrfs: do not log new dentries when logging that a new name exists
        btrfs: always update the logged transaction when logging new names
        btrfs: avoid expensive search when dropping inode items from log
        btrfs: add helper to truncate inode items when logging inode
        btrfs: avoid expensive search when truncating inode items from the log
        btrfs: avoid search for logged i_size when logging inode if possible
        btrfs: avoid attempt to drop extents when logging inode for the first time
        btrfs: do not commit delayed inode when logging a file in full sync mode
      
      This is patch 5/10 and test results are listed in the change log of the
      last patch in the set.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • Filipe Manana
      btrfs: always update the logged transaction when logging new names · 130341be
      Filipe Manana authored
      When we are logging a new name for an inode, due to a link or rename
      operation, if the inode has ancestor inodes that are new, created in the
      current transaction, we need to log that these inodes exist. To ensure
      that a subsequent explicit fsync on one of these ancestor inodes does
      sync the log, we don't set the logged_trans field of these inodes.
      This was done in commit 75b463d2 ("btrfs: do not commit logs and
      transactions during link and rename operations"), to avoid syncing a
      log after a rename or link operation.
      
      In order to allow for future changes to do some optimizations, change
      this behaviour to always update the logged_trans of any logged inode
      and don't update the last_log_commit of the inode if we are logging
      that it exists. This accomplishes that same objective with simpler
      logic, allowing for some optimizations in the next patches.
      
      So just do that simplification.
      
      This patch is part of a patch set comprised of the following patches:
      
        btrfs: check if a log tree exists at inode_logged()
        btrfs: remove no longer needed checks for NULL log context
        btrfs: do not log new dentries when logging that a new name exists
        btrfs: always update the logged transaction when logging new names
        btrfs: avoid expensive search when dropping inode items from log
        btrfs: add helper to truncate inode items when logging inode
        btrfs: avoid expensive search when truncating inode items from the log
        btrfs: avoid search for logged i_size when logging inode if possible
        btrfs: avoid attempt to drop extents when logging inode for the first time
        btrfs: do not commit delayed inode when logging a file in full sync mode
      
      This is patch 4/10 and test results are listed in the change log of the
      last patch in the set.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • Filipe Manana
      btrfs: do not log new dentries when logging that a new name exists · c48792c6
      Filipe Manana authored
      When logging a new name for an inode, due to a link or rename operation,
      we don't need to log all new dentries of the parent directories and their
      subdirectories. We only want to log the names of the inode and that any
      new parent directories exist. So in this case don't trigger logging of
      the new dentries; that is only needed when doing an explicit fsync on a
      directory or on a file that requires logging its parent directories.
      
      This avoids unnecessary work and reduces contention on the extent buffers
      of a log tree.
      
      This patch is part of a patch set comprised of the following patches:
      
        btrfs: check if a log tree exists at inode_logged()
        btrfs: remove no longer needed checks for NULL log context
        btrfs: do not log new dentries when logging that a new name exists
        btrfs: always update the logged transaction when logging new names
        btrfs: avoid expensive search when dropping inode items from log
        btrfs: add helper to truncate inode items when logging inode
        btrfs: avoid expensive search when truncating inode items from the log
        btrfs: avoid search for logged i_size when logging inode if possible
        btrfs: avoid attempt to drop extents when logging inode for the first time
        btrfs: do not commit delayed inode when logging a file in full sync mode
      
      This is patch 3/10 and test results are listed in the change log of the
      last patch in the set.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • Filipe Manana
      btrfs: remove no longer needed checks for NULL log context · 289cffcb
      Filipe Manana authored
      Since commit 75b463d2 ("btrfs: do not commit logs and transactions
      during link and rename operations"), we always pass a non-NULL log context
      to btrfs_log_inode_parent() and therefore to all the functions that it
      calls. So remove the checks we have all over the place that test for a
      NULL log context, making the code shorter and easier to read, as well as
      reducing the size of the generated code.
      
      This patch is part of a patch set comprised of the following patches:
      
        btrfs: check if a log tree exists at inode_logged()
        btrfs: remove no longer needed checks for NULL log context
        btrfs: do not log new dentries when logging that a new name exists
        btrfs: always update the logged transaction when logging new names
        btrfs: avoid expensive search when dropping inode items from log
        btrfs: add helper to truncate inode items when logging inode
        btrfs: avoid expensive search when truncating inode items from the log
        btrfs: avoid search for logged i_size when logging inode if possible
        btrfs: avoid attempt to drop extents when logging inode for the first time
        btrfs: do not commit delayed inode when logging a file in full sync mode
      
      This is patch 2/10 and test results are listed in the change log of the
      last patch in the set.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • Filipe Manana
      btrfs: check if a log tree exists at inode_logged() · 1e0860f3
      Filipe Manana authored
      In case an inode was never logged since it was loaded from disk and was
      modified in the current transaction (its ->last_trans matches the ID of
      the current transaction), inode_logged() returns true even if there's no
      existing log tree. In this case we can simply check if a log tree exists
      and return false if it does not. This avoids a caller of inode_logged()
      doing some unnecessary, but harmless, work.
      
      For btrfs_log_new_name() it avoids it logging an inode in case it was
      never logged since it was loaded from disk and there is currently no log
      tree for the inode's root. For the remaining callers of inode_logged(),
      btrfs_del_dir_entries_in_log() and btrfs_del_inode_ref_in_log(), it has
      no effect since they already check if a log tree exists through their
      calls to join_running_log_trans().
      
      So just add a check to inode_logged() to verify if a log tree exists, and
      return false if it does not.
      
      This patch is part of a patch set comprised of the following patches:
      
        btrfs: check if a log tree exists at inode_logged()
        btrfs: remove no longer needed checks for NULL log context
        btrfs: do not log new dentries when logging that a new name exists
        btrfs: always update the logged transaction when logging new names
        btrfs: avoid expensive search when dropping inode items from log
        btrfs: add helper to truncate inode items when logging inode
        btrfs: avoid expensive search when truncating inode items from the log
        btrfs: avoid search for logged i_size when logging inode if possible
        btrfs: avoid attempt to drop extents when logging inode for the first time
        btrfs: do not commit delayed inode when logging a file in full sync mode
      
      This is patch 1/10 and test results are listed in the change log of the
      last patch in the set.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • Anand Jain
      btrfs: remove stale comment about the btrfs_show_devname · cdccc03a
      Anand Jain authored
      There were a few lockdep warnings because btrfs_show_devname() was using
      device_list_mutex as recorded in the commits:
      
        0ccd0528 ("btrfs: fix a possible umount deadlock")
        779bf3fe ("btrfs: fix lock dep warning, move scratch dev out of device_list_mutex and uuid_mutex")
      
      And finally, commit 88c14590 ("btrfs: use RCU in btrfs_show_devname
      for device list traversal") removed the device_list_mutex from
      btrfs_show_devname for performance reasons.
      
      This patch removes a stale comment about the function
      btrfs_show_devname and device_list_mutex.
      Signed-off-by: Anand Jain <anand.jain@oracle.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • Anand Jain
      btrfs: update latest_dev when we create a sprout device · b7cb29e6
      Anand Jain authored
      When we add a device to the seed filesystem (sprouting) it is a new
      filesystem (and fsid) on the device added. Update the latest_dev so
      that /proc/self/mounts shows the correct device.
      
      Example:
      
        $ btrfstune -S1 /dev/vg/seed
        $ mount /dev/vg/seed /btrfs
        mount: /btrfs: WARNING: device write-protected, mounted read-only.
      
        $ cat /proc/self/mounts | grep btrfs
        /dev/mapper/vg-seed /btrfs btrfs ro,relatime,space_cache,subvolid=5,subvol=/ 0 0
      
        $ btrfs dev add -f /dev/vg/new /btrfs
      
      Before:
      
        $ cat /proc/self/mounts | grep btrfs
        /dev/mapper/vg-seed /btrfs btrfs ro,relatime,space_cache,subvolid=5,subvol=/ 0 0
      
      After:
      
        $ cat /proc/self/mounts | grep btrfs
        /dev/mapper/vg-new /btrfs btrfs ro,relatime,space_cache,subvolid=5,subvol=/ 0 0
      Tested-by: Su Yue <l@damenly.su>
      Signed-off-by: Anand Jain <anand.jain@oracle.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • Anand Jain
      btrfs: use latest_dev in btrfs_show_devname · 6605fd2f
      Anand Jain authored
      The test case btrfs/238 reports the warning below:
      
       WARNING: CPU: 3 PID: 481 at fs/btrfs/super.c:2509 btrfs_show_devname+0x104/0x1e8 [btrfs]
       CPU: 2 PID: 1 Comm: systemd Tainted: G        W  O 5.14.0-rc1-custom #72
       Hardware name: QEMU QEMU Virtual Machine, BIOS 0.0.0 02/06/2015
       Call trace:
         btrfs_show_devname+0x108/0x1b4 [btrfs]
         show_mountinfo+0x234/0x2c4
         m_show+0x28/0x34
         seq_read_iter+0x12c/0x3c4
         vfs_read+0x29c/0x2c8
         ksys_read+0x80/0xec
         __arm64_sys_read+0x28/0x34
         invoke_syscall+0x50/0xf8
         do_el0_svc+0x88/0x138
         el0_svc+0x2c/0x8c
         el0t_64_sync_handler+0x84/0xe4
         el0t_64_sync+0x198/0x19c
      
      Reason:
      While btrfs_prepare_sprout() is moving fs_devices::devices into
      fs_devices::seed_list, btrfs_show_devname() searches for the devices
      and finds none, leading to the warning above.
      
      Fix:
      latest_dev is updated according to the changes to the device list.
      That means we can use latest_dev->name to show the device name in
      /proc/self/mounts; the pointer will always be valid as it's assigned
      before the device is deleted from the list in remove or replace.
      The RCU protection is sufficient as the device structure is freed after
      synchronization.
      Reported-by: Su Yue <l@damenly.su>
      Tested-by: Su Yue <l@damenly.su>
      Signed-off-by: Anand Jain <anand.jain@oracle.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • Anand Jain
      btrfs: convert latest_bdev type to btrfs_device and rename · d24fa5c1
      Anand Jain authored
      In preparation for fixing a bug in btrfs_show_devname(), convert
      fs_devices::latest_bdev from struct block_device to struct btrfs_device
      and rename the member to fs_devices::latest_dev, so that
      btrfs_show_devname() can use fs_devices::latest_dev::name.
      Tested-by: Su Yue <l@damenly.su>
      Signed-off-by: Anand Jain <anand.jain@oracle.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • Naohiro Aota
      btrfs: zoned: finish relocating block group · 7ae9bd18
      Naohiro Aota authored
      We will no longer write to a relocating block group, so we can finish it
      now.
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • Naohiro Aota
      btrfs: zoned: finish fully written block group · be1a1d7a
      Naohiro Aota authored
      If we have written to the zone capacity, the device automatically
      deactivates the zone. Sync up block group side (the active BG list and
      zone_is_active flag) with it.
      
      We need to do it both on data BGs and metadata BGs. On data side, we add a
      hook to btrfs_finish_ordered_io(). On metadata side, we use
      end_extent_buffer_writeback().
      
      To reduce excess lookup of a block group, we mark the last extent buffer in
      a block group with EXTENT_BUFFER_ZONE_FINISH flag. This cannot be done for
      data (ordered_extent), because the address may change due to
      REQ_OP_ZONE_APPEND.
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • Naohiro Aota
      btrfs: zoned: avoid chunk allocation if active block group has enough space · a85f05e5
      Naohiro Aota authored
      The current extent allocator tries to allocate a new block group when the
      existing block groups do not have enough space. On a ZNS device, a new
      block group means a new active zone. If the number of active zones has
      already reached max_active_zones, activating a new zone requires
      finishing an existing zone first, wasting the free space there.
      
      So, instead, it should reuse the existing active block groups as much as
      possible when we can't activate any other zones without sacrificing an
      already activated block group.
      
      While at it, I converted find_free_extent_update_loop() to check the
      found_extent() case early and made the other conditions simpler.
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      a85f05e5
    • Naohiro Aota's avatar
      btrfs: move ffe_ctl one level up · a12b0dc0
      Naohiro Aota authored
      We are already passing too many variables from btrfs_reserve_extent() to
      find_free_extent(). The next commit will add min_alloc_size to ffe_ctl,
      which would mean yet another pass-through argument. Take this opportunity
      to move ffe_ctl one level up and drop the redundant arguments.
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      a12b0dc0
    • Naohiro Aota's avatar
      btrfs: zoned: activate new block group · eb66a010
      Naohiro Aota authored
      Activate a new block group at btrfs_make_block_group(). We do not check
      the return value; if activation fails, we can try again later during the
      actual extent allocation phase.
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      eb66a010
    • Naohiro Aota's avatar
      btrfs: zoned: activate block group on allocation · 2e654e4b
      Naohiro Aota authored
      Activate a block group when trying to allocate an extent from it. We
      check the read-only and no-space-left cases before trying to activate a
      block group, so as not to consume active zones uselessly.
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      2e654e4b
    • Naohiro Aota's avatar
      btrfs: zoned: load active zone info for block group · 68a384b5
      Naohiro Aota authored
      Load the activeness of the underlying zones of a block group. When the
      underlying zones are active, add the block group to the
      fs_info->zone_active_bgs list.
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      68a384b5
    • Naohiro Aota's avatar
      btrfs: zoned: implement active zone tracking · afba2bc0
      Naohiro Aota authored
      Add a zone_is_active flag to btrfs_block_group. This flag indicates that
      the underlying zones are all active. Such zone-active block groups are
      tracked by the fs_info->zone_active_bgs list.

      btrfs_dev_{set,clear}_active_zone() take responsibility for the
      underlying device part. They set/clear the bitmap to indicate zone
      activeness and track the number of zones left that we can activate.

      btrfs_zone_{activate,finish}() take responsibility for the logical part
      and the list management. In addition, btrfs_zone_finish() waits for any
      writes on the zone and sends REQ_OP_ZONE_FINISH to it.
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      afba2bc0
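The per-device bookkeeping described above can be sketched as a bitmap plus a remaining-budget counter. This is an illustrative model, not the kernel code; the names (`zone_dev`, `dev_set_active_zone`, ...) and the 64-zone limit are assumptions made for the sketch.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of per-device active zone accounting: a bitmap marks
 * which zones are active, and a counter tracks how many more zones the
 * device allows us to activate. Limited to 64 zones for simplicity. */
struct zone_dev {
    unsigned long long active_bitmap;  /* bit i set => zone i is active */
    int activatable_left;              /* remaining active-zone budget  */
};

/* Try to mark a zone active; fail when the device's limit is reached. */
static bool dev_set_active_zone(struct zone_dev *dev, int zone)
{
    if (dev->active_bitmap & (1ULL << zone))
        return true;                   /* already active, no budget used */
    if (dev->activatable_left == 0)
        return false;                  /* max_active_zones reached */
    dev->active_bitmap |= 1ULL << zone;
    dev->activatable_left--;
    return true;
}

/* Finishing a zone clears the bit and returns budget to the device. */
static void dev_clear_active_zone(struct zone_dev *dev, int zone)
{
    if (dev->active_bitmap & (1ULL << zone)) {
        dev->active_bitmap &= ~(1ULL << zone);
        dev->activatable_left++;
    }
}
```

This mirrors the split in the message: the device side only manages the bitmap and the budget, while list management for block groups stays at a higher layer.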
    • Naohiro Aota's avatar
      btrfs: zoned: introduce physical_map to btrfs_block_group · dafc340d
      Naohiro Aota authored
      We will use a block group's physical location to track active zones and
      to finish fully written zones in the following commits. Since zone
      activation is done in the extent allocation context, which already holds
      the tree locks, we can't query the chunk tree for the physical locations.
      So, copy the location info into the block group and use it for
      activation.
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      dafc340d
    • Naohiro Aota's avatar
      btrfs: zoned: load active zone information from devices · ea6f8ddc
      Naohiro Aota authored
      The ZNS specification defines a limit on the number of zones that can be
      in the implicit open, explicit open or closed conditions. Any zone in
      such a condition is defined as an active zone and corresponds to a zone
      that is being written or that has been only partially written. If the
      maximum number of active zones is reached, we must either reset or
      finish some active zones before being able to choose other zones for
      storing data.
      
      Load queue_max_active_zones() and track the number of active zones left on
      the device.
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      ea6f8ddc
    • Naohiro Aota's avatar
      btrfs: zoned: finish superblock zone once no space left for new SB · 8376d9e1
      Naohiro Aota authored
      If there is no more space left for a new superblock in a superblock
      zone, then it is better to ZONE_FINISH the zone and free up the active
      zone count.
      
      Since btrfs_advance_sb_log() can now issue REQ_OP_ZONE_FINISH, we also need
      to convert it to return int for the error case.
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      8376d9e1
    • Naohiro Aota's avatar
      btrfs: zoned: locate superblock position using zone capacity · 9658b72e
      Naohiro Aota authored
      sb_write_pointer() returns the write position of the next superblock.
      For READ, we need the previous location. When the pointer is at the head
      of a zone, the previous superblock is the last one in the other zone.
      Calculate its position from the zone capacity.
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      9658b72e
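The calculation described above can be illustrated with a small sketch. This is not sb_write_pointer() itself; `last_sb_offset()` and its parameters are hypothetical, with only the 4K superblock size (BTRFS_SUPER_INFO_SIZE) taken from btrfs.

```c
#include <assert.h>

/* btrfs writes superblocks into dedicated zones in 4K slots. When the
 * write pointer is at the head of the current zone, the most recent
 * superblock is the last slot that fits within the OTHER zone's capacity
 * (not its size, since [capacity, size) is unwritable). */
#define SUPER_INFO_SIZE 4096ULL

/* Byte offset of the last superblock slot that fits below zone_capacity. */
static unsigned long long last_sb_offset(unsigned long long zone_start,
                                         unsigned long long zone_capacity)
{
    unsigned long long nr_slots = zone_capacity / SUPER_INFO_SIZE;
    return zone_start + (nr_slots - 1) * SUPER_INFO_SIZE;
}
```

The point of the commit is exactly this last line: the slot count must be derived from the capacity, because with zone capacity smaller than zone size the last usable slot no longer sits at the end of the zone.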
    • Naohiro Aota's avatar
      btrfs: zoned: consider zone as full when no more SB can be written · 5daaf552
      Naohiro Aota authored
      We cannot write beyond the zone capacity. So, we should consider a zone
      as "full" when the write pointer goes beyond capacity minus the size of
      the super info.

      Also, take this opportunity to replace subtly duplicated code with a
      loop and fix a typo in a comment.
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      5daaf552
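The "full" condition above amounts to a simple predicate: the zone cannot take another superblock once the write pointer would push it past the capacity. A minimal sketch, with an illustrative function name:

```c
#include <assert.h>
#include <stdbool.h>

/* A superblock zone is "full" when one more superblock of sb_size bytes,
 * written at the current write pointer, would cross the zone capacity. */
static bool sb_zone_is_full(unsigned long long write_pointer,
                            unsigned long long zone_capacity,
                            unsigned long long sb_size)
{
    return write_pointer + sb_size > zone_capacity;
}
```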
    • Naohiro Aota's avatar
      btrfs: zoned: tweak reclaim threshold for zone capacity · d8da0e85
      Naohiro Aota authored
      With the introduction of zone capacity, the range [capacity, length] is
      always zone unusable. Counting this region as a reclaim target will
      trigger reclaim too early. Reclaim block groups based on the bytes that
      can become usable after resetting.
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      d8da0e85
    • Naohiro Aota's avatar
      btrfs: zoned: calculate free space from zone capacity · 98173255
      Naohiro Aota authored
      Now that we have introduced capacity in a block group, we need to
      calculate free space using the capacity instead of the length. Thus, we
      account capacity - alloc_pointer as free, and bytes in [capacity,
      length] as zone unusable.
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      98173255
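The accounting described above reduces to two subtractions. A minimal sketch under the assumption of a SINGLE-profile block group whose capacity mirrors the zone capacity; the struct and helper names are illustrative, not the kernel's:

```c
#include <assert.h>

/* Model of a zoned block group: allocation advances sequentially up to
 * the capacity; the tail [capacity, length) can never hold data. */
struct zoned_bg {
    unsigned long long length;       /* block group (zone) size        */
    unsigned long long capacity;     /* usable bytes, <= length        */
    unsigned long long alloc_offset; /* sequential allocation pointer  */
};

/* Free bytes: what remains below the capacity, not below the length. */
static unsigned long long bg_free(const struct zoned_bg *bg)
{
    return bg->capacity - bg->alloc_offset;
}

/* Zone-unusable bytes: the region beyond the capacity. */
static unsigned long long bg_zone_unusable(const struct zoned_bg *bg)
{
    return bg->length - bg->capacity;
}
```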
    • Naohiro Aota's avatar
      btrfs: zoned: move btrfs_free_excluded_extents out of btrfs_calc_zone_unusable · c46c4247
      Naohiro Aota authored
      btrfs_free_excluded_extents() is not necessary for
      btrfs_calc_zone_unusable() and it makes btrfs_calc_zone_unusable()
      difficult to reuse. Move it out and call btrfs_free_excluded_extents()
      in the proper context.
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      c46c4247
    • Naohiro Aota's avatar
      btrfs: zoned: load zone capacity information from devices · 8eae532b
      Naohiro Aota authored
      The ZNS specification introduces the concept of a Zone Capacity.  A zone
      capacity is an additional per-zone attribute that indicates the number of
      usable logical blocks within each zone, starting from the first logical
      block of each zone. It is always smaller or equal to the zone size.
      
      With the SINGLE profile, we can set a block group's "capacity" to be the
      same as the underlying zone's Zone Capacity. We will limit allocations
      so they do not exceed the capacity in a following commit.
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      8eae532b
    • Qu Wenruo's avatar
      btrfs: defrag: enable defrag for subpage case · c22a3572
      Qu Wenruo authored
      With the new infrastructure, which has taken subpage into consideration,
      we should now be safe to allow defrag to work for the subpage case.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      c22a3572
    • Qu Wenruo's avatar
      btrfs: defrag: remove the old infrastructure · c6357573
      Qu Wenruo authored
      Now the old defrag infrastructure can all be removed.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      c6357573
    • Qu Wenruo's avatar
      btrfs: defrag: use defrag_one_cluster() to implement btrfs_defrag_file() · 7b508037
      Qu Wenruo authored
      The function defrag_one_cluster() is able to defrag one range well
      enough; we only need to do the preparation for it, including:
      
      - Clamp and align the defrag range
      - Exclude invalid cases
      - Proper inode locking
      
      The old infrastructure will not be removed in this patch, as it would
      be too noisy to review.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      7b508037
    • Qu Wenruo's avatar
      btrfs: defrag: introduce helper to defrag one cluster · b18c3ab2
      Qu Wenruo authored
      This new helper, defrag_one_cluster(), will defrag one cluster (at most
      256K):
      
      - Collect all initial targets
      
      - Kick in readahead when possible
      
      - Call defrag_one_range() on each initial target,
        with some extra range clamping
      
      - Update @sectors_defragged parameter
      
      This involves one behavior change: the defragged sectors accounting is
      no longer as accurate as the old behavior, as the initial targets are
      not consistent.
      
      We can have new holes punched inside the initial target, and we will
      skip such holes later.
      But the defragged sectors accounting doesn't need to be that accurate
      anyway, thus I don't want to push that extra accounting burden into
      defrag_one_range().
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      b18c3ab2
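The cluster walk above can be modeled by a loop that clamps each step to a 256K boundary. This sketch only counts how many clusters a range would be split into; `count_clusters()` is a hypothetical helper illustrating the clamping, not the kernel's defrag_one_cluster():

```c
#include <assert.h>

/* Defrag processes a file range in clusters of at most 256K. */
#define CLUSTER_SIZE (256 * 1024ULL)

/* Count the clusters needed to cover [start, start + len), clamping each
 * step to the next cluster boundary and to the end of the range. */
static unsigned int count_clusters(unsigned long long start,
                                   unsigned long long len)
{
    unsigned int clusters = 0;
    unsigned long long cur = start;
    unsigned long long end = start + len;

    while (cur < end) {
        unsigned long long cluster_end =
            (cur / CLUSTER_SIZE + 1) * CLUSTER_SIZE;
        if (cluster_end > end)
            cluster_end = end;
        clusters++;
        cur = cluster_end;
    }
    return clusters;
}
```

Note that a range starting mid-cluster gets a short first step, which is why a 256K range that crosses a cluster boundary still needs two iterations.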
    • Qu Wenruo's avatar
      btrfs: defrag: introduce helper to defrag a range · e9eec721
      Qu Wenruo authored
      A new helper, defrag_one_range(), is introduced to defrag one range.
      
      This function will mostly prepare the needed pages and extent status for
      defrag_one_locked_target().
      
      As we can only have a consistent view of extent map with page and extent
      bits locked, we need to re-check the range passed in to get a real
      target list for defrag_one_locked_target().
      
      Since defrag_collect_targets() will call defrag_lookup_extent() and lock
      the extent range, we also need to teach those two functions to skip the
      extent lock.  Thus a new parameter, @locked, is introduced to skip the
      extent lock if the caller has already locked the range.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      e9eec721
    • Qu Wenruo's avatar
      btrfs: defrag: introduce helper to defrag a contiguous prepared range · 22b398ee
      Qu Wenruo authored
      A new helper, defrag_one_locked_target(), is introduced to do the real
      part of defrag.

      The caller needs to ensure both the pages and the extent bits are
      locked, no ordered extent exists for the range, and all writeback is
      finished.
      
      The core defrag part is pretty straightforward:
      
      - Reserve space
      - Set extent bits to defrag
      - Update involved pages to be dirty
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      22b398ee
    • Qu Wenruo's avatar
      btrfs: defrag: introduce helper to collect target file extents · eb793cf8
      Qu Wenruo authored
      Introduce a helper, defrag_collect_targets(), to collect all possible
      targets to be defragged.
      
      This function will not consider things like max_sectors_to_defrag, thus
      the caller is responsible for ensuring we don't exceed the limit.
      
      This function will be the first stage of later defrag rework.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      eb793cf8
    • Qu Wenruo's avatar
      btrfs: defrag: factor out page preparation into a helper · 5767b50c
      Qu Wenruo authored
      In cluster_pages_for_defrag(), we have a complex code block inside one
      for() loop.

      The code block prepares one page for defrag, ensuring:
      
      - The page is locked and set up properly.
      - No ordered extent exists in the page range.
      - The page is uptodate.
      
      This behavior is pretty common and will be reused by later defrag
      rework.
      
      So factor out the code into its own helper, defrag_prepare_one_page(),
      for later usage, and clean up the code a little.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      5767b50c