1. 21 Aug, 2023 40 commits
    • btrfs: tests: add extent_map tests for dropping with odd layouts · 89c37604
      Josef Bacik authored
      While investigating weird problems with the extent_map I wrote a self
      test exercising the various edge cases of btrfs_drop_extent_map_range.
      This function can split extent maps in different ways and behaves
      differently in each case, so test the various edge cases to make sure
      everything is functioning properly.
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: scrub: move write back of repaired sectors to scrub_stripe_read_repair_worker() · 4fe44f9d
      Qu Wenruo authored
      Currently scrub_stripe_read_repair_worker() only does reads to
      rebuild the corrupted sectors; it doesn't do any writeback.

      The design mostly puts the writeback into a more ordered manner, to
      co-operate with dev-replace on zoned mode, which requires every write
      to be submitted in bytenr order.

      However the writeback of repaired sectors into the original mirror
      doesn't need such a strong sync requirement, as it can only happen
      for non-zoned devices.

      This patch moves the writeback of repaired sectors into
      scrub_stripe_read_repair_worker(), which removes two call sites for
      repaired sector writeback (one from flush_scrub_stripes(), one from
      scrub_raid56_parity_stripe()).
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: scrub: don't go ordered workqueue for dev-replace · 39dc7bd9
      Qu Wenruo authored
      The workqueue fs_info->scrub_worker is created as an ordered
      workqueue if it's a device replace operation.

      However scrub relies on multiple workers to do data csum
      verification, and we always submit several read requests in a row.

      Thus there is no need to use an ordered workqueue just for
      dev-replace.  We have extra synchronization (the main thread will
      always submit-and-wait for dev-replace writes) to handle it for
      zoned devices.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: scrub: fix grouping of read IO · ae76d8e3
      Qu Wenruo authored
      [REGRESSION]
      There are several regression reports about scrub performance with
      the v6.4 kernel.

      On a PCIe 3.0 device, the old v6.3 kernel can reach a 3GB/s scrub
      speed, but v6.4 can only reach 1GB/s, an obvious 66% performance
      drop.
      
      [CAUSE]
      Iostat shows a very different behavior between v6.3 and v6.4 kernel:
      
        Device         r/s      rkB/s   rrqm/s  %rrqm r_await rareq-sz aqu-sz  %util
        nvme0n1p3  9731.00 3425544.00 17237.00  63.92    2.18   352.02  21.18 100.00
        nvme0n1p3 15578.00  993616.00     5.00   0.03    0.09    63.78   1.32 100.00
      
      The upper one is v6.3 while the lower one is v6.4.
      
      There are several obvious differences:
      
      - Very few read merges
        This turns out to be caused by a behavior change: we no longer do
        bio plug/unplug.
      
      - Very low aqu-sz
        This is due to the submit-and-wait behavior of flush_scrub_stripes(),
        and extra extent/csum tree search.
      
      Both behaviors are less obvious on SATA SSDs, as SATA SSDs have NCQ
      to merge the reads, and they can not handle a high queue depth well
      anyway.
      
      [FIX]
      For now this patch focuses on the read speed fix. Dev-replace speed
      needs more work.
      
      For the read part, we go two directions to fix the problems:
      
      - Re-introduce blk plug/unplug to merge read requests
        This is pretty simple, and the behavior is pretty easy to observe.
      
        This would enlarge the average read request size to 512K.
      
      - Introduce multi-group reads and no longer wait for each group
        Instead of the old behavior, which submits 8 stripes and waits for
        them, here we enlarge the total number of stripes to 16 * 8, which
        is 8M per device, the same limit as the old scrub in-flight bios
        size limit.
      
        Now every time we fill a group (8 stripes), we submit them and
        continue to next stripes.
      
        Only when the full 16 * 8 stripes are all filled, we submit the
        remaining ones (the last group), and wait for all groups to finish.
        Then submit the repair writes and dev-replace writes.
      
        This should enlarge the queue depth.
      
      This would greatly improve the merge rate (thus read block size) and
      queue depth:
      
      Before (with regression, and cached extent/csum path):
      
       Device         r/s      rkB/s   rrqm/s  %rrqm r_await rareq-sz aqu-sz  %util
       nvme0n1p3 20666.00 1318240.00    10.00   0.05    0.08    63.79   1.63 100.00
      
      After (with all patches applied):
      
       nvme0n1p3  5165.00 2278304.00 30557.00  85.54    0.55   441.10   2.81 100.00
      
      i.e. 1287 to 2224 MB/s.
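
      For reference, the plug/unplug re-introduction mentioned in the first
      point above follows the usual block layer pattern; a minimal sketch
      (the submit helper and loop bounds are illustrative, not the actual
      scrub code; struct blk_plug comes from <linux/blkdev.h>):

        struct blk_plug plug;
        int i;

        /* Batch the per-stripe read submissions so the block layer can
         * merge adjacent bios into bigger requests before they reach the
         * device. */
        blk_start_plug(&plug);
        for (i = 0; i < nr_stripes; i++)
                submit_stripe_read(sctx, &stripes[i]);
        blk_finish_plug(&plug);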
      
      CC: stable@vger.kernel.org # 6.4+
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: scrub: avoid unnecessary csum tree search preparing stripes · 3c771c19
      Qu Wenruo authored
      One of the bottlenecks of the new scrub code is the extra csum tree
      search.

      The old code would only do the csum tree search for each scrub bio,
      which can be as large as 512KiB, thus it can afford to allocate a new
      path each time.

      But the new scrub code is doing a csum tree search for each stripe,
      which is only 64KiB; this means we'd better re-use the same csum path
      during each search.

      This patch introduces a per-sctx path for the csum tree search, so
      that we don't need to re-allocate the path every time we need to do a
      csum tree search.
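
      A minimal sketch of the idea (not the actual patch; the helper name
      and loop are illustrative): keep one btrfs_path allocated for the
      whole run and only release it between per-stripe searches, instead
      of paying for btrfs_alloc_path()/btrfs_free_path() on every stripe.

        struct btrfs_path *path = btrfs_alloc_path();
        struct btrfs_key key;
        int i, ret = 0;

        if (!path)
                return -ENOMEM;

        for (i = 0; i < nr_stripes; i++) {
                /* Fill the csum lookup key for this 64KiB stripe
                 * (illustrative helper). */
                stripe_csum_key(&stripes[i], &key);

                ret = btrfs_search_slot(NULL, csum_root, &key, path, 0, 0);
                if (ret < 0)
                        break;
                /* ... copy the csums covering this stripe ... */

                /* Drop the held tree blocks but keep the path allocated
                 * for the next stripe. */
                btrfs_release_path(path);
        }
        btrfs_free_path(path);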
      
      With this change we can further improve the queue depth and improve the
      scrub read performance:
      
      Before (with regression and cached extent tree path):
      
       Device         r/s      rkB/s   rrqm/s  %rrqm r_await rareq-sz aqu-sz  %util
       nvme0n1p3 15875.00 1013328.00    12.00   0.08    0.08    63.83   1.35 100.00
      
      After (with both cached extent/csum tree path):
      
       nvme0n1p3 17759.00 1133280.00    10.00   0.06    0.08    63.81   1.50 100.00
      
      Fixes: e02ee89b ("btrfs: scrub: switch scrub_simple_mirror() to scrub_stripe infrastructure")
      CC: stable@vger.kernel.org # 6.4+
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: scrub: avoid unnecessary extent tree search preparing stripes · 1dc4888e
      Qu Wenruo authored
      Since commit e02ee89b ("btrfs: scrub: switch scrub_simple_mirror()
      to scrub_stripe infrastructure"), scrub no longer re-uses the same
      path for the extent tree search.

      This can lead to unnecessary extent tree searches, especially for the
      new stripe based scrub, as we have way more stripes to prepare.

      This patch re-introduces a shared path for the extent tree search,
      and properly releases it when the block group is scrubbed.

      This change alone can improve scrub performance slightly by reducing
      the time spent preparing the stripes, thus improving the queue depth.
      
      Before (with regression):
      
       Device         r/s      rkB/s   rrqm/s  %rrqm r_await rareq-sz aqu-sz  %util
       nvme0n1p3 15578.00  993616.00     5.00   0.03    0.09    63.78   1.32 100.00
      
      After (with this patch):
      
       nvme0n1p3 15875.00 1013328.00    12.00   0.08    0.08    63.83   1.35 100.00
      
      Fixes: e02ee89b ("btrfs: scrub: switch scrub_simple_mirror() to scrub_stripe infrastructure")
      CC: stable@vger.kernel.org # 6.4+
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: copy dir permission and time when creating a stub subvolume · 94628ad9
      Lee Trager authored
      btrfs supports creating nested subvolumes, however snapshots are not
      recursive.  When a snapshot is taken of a volume which contains a
      subvolume, the subvolume is replaced with a stub subvolume which has
      the same name and uses inode number 2[1]. The stub subvolume kept the
      directory name but did not set the time or permissions of the stub
      subvolume. This resulted in all time information being the current
      time and ownership defaulting to root. When subvolumes and snapshots
      are created using unshare, this results in a snapshot directory that
      the user created but has no permissions for.
      
      Test case:
      
        [vmuser@archvm ~]# sudo -i
        [root@archvm ~]# mkdir -p /mnt/btrfs/test
        [root@archvm ~]# chown vmuser:users /mnt/btrfs/test/
        [root@archvm ~]# exit
        logout
        [vmuser@archvm ~]$ cd /mnt/btrfs/test
        [vmuser@archvm test]$ unshare --user --keep-caps --map-auto --map-root-user
        [root@archvm test]# btrfs subvolume create subvolume
        Create subvolume './subvolume'
        [root@archvm test]# btrfs subvolume create subvolume/subsubvolume
        Create subvolume 'subvolume/subsubvolume'
        [root@archvm test]# btrfs subvolume snapshot subvolume snapshot
        Create a snapshot of 'subvolume' in './snapshot'
        [root@archvm test]# exit
        logout
        [vmuser@archvm test]$ tree -ug
        [vmuser   users   ]  .
        ├── [vmuser   users   ]  snapshot
        │   └── [vmuser   users   ]  subsubvolume  <-- Without patch perm is root:root
        └── [vmuser   users   ]  subvolume
            └── [vmuser   users   ]  subsubvolume
      
        5 directories, 0 files
      
      [1] https://btrfs.readthedocs.io/en/latest/btrfs-subvolume.html#nested-subvolumes

      Signed-off-by: Lee Trager <lee@trager.us>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: remove pointless empty list check when reading delayed dir indexes · 6b604c9a
      Filipe Manana authored
      At btrfs_readdir_delayed_dir_index(), called when reading a directory,
      we have a check for an empty list in order to return immediately, but
      it's not needed since list_for_each_entry_safe(), called immediately
      after, is prepared to deal with an empty list: it simply does nothing.
      So remove the empty list check.
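
      For illustration, the iteration macro already degrades to a no-op on
      an empty list (generic example, not the btrfs call site; the struct
      and helper are made up):

        LIST_HEAD(list);                /* empty list */
        struct item *cur, *tmp;

        /* With an empty list the loop body never executes, so a prior
         * list_empty() check adds nothing. */
        list_for_each_entry_safe(cur, tmp, &list, link)
                process_item(cur);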
      
      Besides shorter source code, it also slightly reduces the binary text
      size:
      
        Before this change:
      
          $ size fs/btrfs/btrfs.ko
             text	   data	    bss	    dec	    hex	filename
          1609408	 167269	  16864	1793541	 1b5e05	fs/btrfs/btrfs.ko
      
        After this change:
      
          $ size fs/btrfs/btrfs.ko
             text	   data	    bss	    dec	    hex	filename
          1609392	 167269	  16864	1793525	 1b5df5	fs/btrfs/btrfs.ko
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: drop redundant check to use fs_devices::metadata_uuid · 67bc5ad0
      Anand Jain authored
      fs_devices::metadata_uuid value is already updated based on the
      super_block::METADATA_UUID flag for either fsid or metadata_uuid as
      appropriate. So, fs_devices::metadata_uuid can be used directly.
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Tested-by: Guilherme G. Piccoli <gpiccoli@igalia.com>
      Signed-off-by: Anand Jain <anand.jain@oracle.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: compare the correct fsid/metadata_uuid in btrfs_validate_super · 6bfe3959
      Anand Jain authored
      The function btrfs_validate_super() should verify the metadata_uuid
      in the provided superblock argument, because all its callers expect
      it to do that.
      
      Such as in the following stacks:
      
        write_all_supers()
         sb = fs_info->super_for_commit;
         btrfs_validate_write_super(.., sb)
           btrfs_validate_super(.., sb, ..)
      
        scrub_one_super()
      	btrfs_validate_super(.., sb, ..)
      
      And
         check_dev_super()
      	btrfs_validate_super(.., sb, ..)
      
      However, it currently verifies the fs_info::super_copy::metadata_uuid
      instead.  Fix this using the correct metadata_uuid in the superblock
      argument.
      
      CC: stable@vger.kernel.org # 5.4+
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Tested-by: Guilherme G. Piccoli <gpiccoli@igalia.com>
      Signed-off-by: Anand Jain <anand.jain@oracle.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: use the correct superblock to compare fsid in btrfs_validate_super · d167aa76
      Anand Jain authored
      The function btrfs_validate_super() should verify the fsid in the
      provided superblock argument, because all its callers expect it to do
      that.
      
      Such as in the following stack:
      
         write_all_supers()
             sb = fs_info->super_for_commit;
             btrfs_validate_write_super(.., sb)
               btrfs_validate_super(.., sb, ..)
      
         scrub_one_super()
      	btrfs_validate_super(.., sb, ..)
      
      And
         check_dev_super()
      	btrfs_validate_super(.., sb, ..)
      
      However, it currently verifies the fs_info::super_copy::fsid instead,
      which is not correct.  Fix this using the correct fsid in the superblock
      argument.
      
      CC: stable@vger.kernel.org # 5.4+
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Tested-by: Guilherme G. Piccoli <gpiccoli@igalia.com>
      Signed-off-by: Anand Jain <anand.jain@oracle.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: simplify memcpy either of metadata_uuid or fsid · 319baafc
      Anand Jain authored
      There is a helper which provides either metadata_uuid or fsid as per
      METADATA_UUID flag. So use it.
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Tested-by: Guilherme G. Piccoli <gpiccoli@igalia.com>
      Signed-off-by: Anand Jain <anand.jain@oracle.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: add a helper to read the superblock metadata_uuid · 4844c366
      Anand Jain authored
      In some cases we need to read the FSID from the superblock when the
      metadata_uuid is not set, and otherwise read the metadata_uuid. So
      add a helper.
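
      The shape of such a helper is roughly the following (a sketch only;
      the final name and placement in the code may differ):

        /* Return the effective identity stored in a superblock:
         * metadata_uuid if the METADATA_UUID incompat flag is set,
         * otherwise fsid. */
        static const u8 *sb_fsid_ptr(const struct btrfs_super_block *sb)
        {
                bool has_metadata_uuid = btrfs_super_incompat_flags(sb) &
                                BTRFS_FEATURE_INCOMPAT_METADATA_UUID;

                return has_metadata_uuid ? sb->metadata_uuid : sb->fsid;
        }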
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Tested-by: Guilherme G. Piccoli <gpiccoli@igalia.com>
      Signed-off-by: Anand Jain <anand.jain@oracle.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: remove v0 extent handling · 182741d2
      Qu Wenruo authored
      The v0 extent item has been deprecated for a long time, and we don't
      have any reports from the community either. No reports regarding v0
      extents have been seen since the graceful handling was added in 2018.

      So it's time to remove the v0 extent specific error handling, and
      just treat such items as regular extent tree corruption.

      This patch removes the btrfs_print_v0_err() helper, and enhances the
      involved error handling to treat v0 items just like any other extent
      tree corruption.
      
      This involves:
      
      - btrfs_backref_add_tree_node()
        This change is a little tricky: the new code only handles
        BTRFS_TREE_BLOCK_REF_KEY and BTRFS_SHARED_BLOCK_REF_KEY.

        But this is safe, as we have already rejected any unknown inline
        refs through btrfs_get_extent_inline_ref_type().
        For keyed backrefs, we're safe to skip anything we don't know (that
        is, if it can pass the tree-checker in the first place).
      
      - btrfs_lookup_extent_info()
      - lookup_inline_extent_backref()
      - run_delayed_extent_op()
      - __btrfs_free_extent()
      - add_tree_block()
        Regular error handling of unexpected extent tree item, and abort
        transaction (if we have a trans handle).
      
      - remove_extent_data_ref()
        It's pretty much the same as the regular rejection of unknown backref
        key.
        But for this particular case, we can also remove a BUG_ON().
      
      - extent_data_ref_count()
        We can remove the BTRFS_EXTENT_REF_V0_KEY BUG_ON(), as it would be
        rejected by the only caller.
      
      - btrfs_print_leaf()
        Remove the handling for BTRFS_EXTENT_REF_V0_KEY.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: output extra debug info if we failed to find an inline backref · 7f72f505
      Qu Wenruo authored
      [BUG]
      Syzbot reported several warnings triggered inside
      lookup_inline_extent_backref().

      [CAUSE]
      As usual, the reproducer doesn't reliably trigger locally here, but
      at least we know the WARN_ON() is triggered when an inline backref
      can not be found, and it can only be triggered when @insert is true
      (i.e. inserting a new inline backref, which means the backref should
      already exist).
      
      [ENHANCEMENT]
      After the WARN_ON(), dump all the parameters and the extent tree
      leaf to help debug.
      
      Link: https://syzkaller.appspot.com/bug?extid=d6f9ff86c1d804ba2bc6
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: move the !zoned assert into run_delalloc_cow · 76c5126e
      Christoph Hellwig authored
      Having the assert in the actual helper documents the pre-conditions
      much better than having it in the caller, so move it.
      Reviewed-by: Boris Burkov <boris@bur.io>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: consolidate the error handling in run_delalloc_nocow · 38dc8889
      Christoph Hellwig authored
      Share the calls to extent_clear_unlock_delalloc for btrfs_path allocation
      failure handling and the normal exit path.
      
      This relies on btrfs_free_path ignoring a NULL pointer, and the
      initialization of cur_offset to start at the beginning of the function.
      Reviewed-by: Boris Burkov <boris@bur.io>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: cleanup the COW fallback logic in run_delalloc_nocow · 18f62b86
      Christoph Hellwig authored
      Use the block group pointer used to track the outstanding NOCOW writes as
      a boolean to remove the duplicate nocow variable, and keep it contained
      in the main loop to simplify the logic.
      Reviewed-by: Boris Burkov <boris@bur.io>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: fix error handling when in a COW window in run_delalloc_nocow · 953fa5ce
      Christoph Hellwig authored
      When run_delalloc_nocow has cow_start set to a value other than
      (u64)-1, it has delayed COW writeback pending behind cur_offset.
      When an error occurs in such a window, the range going back to
      cow_start, and not just cur_offset, needs to be unlocked, but only
      two error cases handle this correctly.  Move the code that unlocks
      the COW range to the common error handling label and document the
      logic.

      To make things even more complicated, cow_file_range as called by
      fallback_to_cow will unlock the range it is operating on when it
      fails as well, so we need to reset cow_start right after calling
      fallback_to_cow instead of only when it succeeded.
      Reviewed-by: Boris Burkov <boris@bur.io>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: zoned: do not zone finish data relocation block group · 332581bd
      Naohiro Aota authored
      When multiple writes happen at once, we may need to sacrifice a
      currently active block group to be zone finished for a new
      allocation. We choose the block group with the least free space
      left, and zone finish it.

      To do the finishing, we need to send IOs for the already allocated
      region and wait for them and any on-going IOs. Otherwise, these IOs
      fail because the zone is already finished at the time the IO reaches
      the device.

      However, if a block group dedicated to data relocation is zone
      finished, there is a chance of finishing it before an ongoing write
      IO reaches the device. That is because there is a timing gap between
      when an allocation is done (block_group->reservations == 0, as
      pre-allocation is done) and when an ordered extent is created as the
      relocation IO starts. Thus, if we finish the zone between them, we
      can fail the IOs.

      We cannot simply use "fs_info->data_reloc_bg == block_group->start"
      to avoid the zone finishing, because data_reloc_bg may have already
      switched to a new block group while there are still ongoing write
      IOs to the old data_reloc_bg.
      
      So, this patch reworks the BLOCK_GROUP_FLAG_ZONED_DATA_RELOC bit to
      indicate there is a data relocation allocation and/or ongoing write to the
      block group. The bit is set on allocation and cleared in end_io function of
      the last IO for the currently allocated region.
      
      Changing the timing of the bit setting also solves the issue of the
      bit being left set even after there is no IO going on. With the
      current code, if the data_reloc_bg switches after the last IO to the
      current data_reloc_bg, the bit remains set at that point and there is
      no one to clear it. As a result, that block group is kept
      unallocatable for anything.
      
      Fixes: 343d8a30 ("btrfs: zoned: prevent allocation from previous data relocation BG")
      Fixes: 74e91b12 ("btrfs: zoned: zone finish unused block group")
      CC: stable@vger.kernel.org # 6.1+
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: set page extent mapped after read_folio in relocate_one_page · e7f1326c
      Josef Bacik authored
      One of the CI runs triggered the following panic
      
        assertion failed: PagePrivate(page) && page->private, in fs/btrfs/subpage.c:229
        ------------[ cut here ]------------
        kernel BUG at fs/btrfs/subpage.c:229!
        Internal error: Oops - BUG: 00000000f2000800 [#1] SMP
        CPU: 0 PID: 923660 Comm: btrfs Not tainted 6.5.0-rc3+ #1
        pstate: 61400005 (nZCv daif +PAN -UAO -TCO +DIT -SSBS BTYPE=--)
        pc : btrfs_subpage_assert+0xbc/0xf0
        lr : btrfs_subpage_assert+0xbc/0xf0
        sp : ffff800093213720
        x29: ffff800093213720 x28: ffff8000932138b4 x27: 000000000c280000
        x26: 00000001b5d00000 x25: 000000000c281000 x24: 000000000c281fff
        x23: 0000000000001000 x22: 0000000000000000 x21: ffffff42b95bf880
        x20: ffff42b9528e0000 x19: 0000000000001000 x18: ffffffffffffffff
        x17: 667274622f736620 x16: 6e69202c65746176 x15: 0000000000000028
        x14: 0000000000000003 x13: 00000000002672d7 x12: 0000000000000000
        x11: ffffcd3f0ccd9204 x10: ffffcd3f0554ae50 x9 : ffffcd3f0379528c
        x8 : ffff800093213428 x7 : 0000000000000000 x6 : ffffcd3f091771e8
        x5 : ffff42b97f333948 x4 : 0000000000000000 x3 : 0000000000000000
        x2 : 0000000000000000 x1 : ffff42b9556cde80 x0 : 000000000000004f
        Call trace:
         btrfs_subpage_assert+0xbc/0xf0
         btrfs_subpage_set_dirty+0x38/0xa0
         btrfs_page_set_dirty+0x58/0x88
         relocate_one_page+0x204/0x5f0
         relocate_file_extent_cluster+0x11c/0x180
         relocate_data_extent+0xd0/0xf8
         relocate_block_group+0x3d0/0x4e8
         btrfs_relocate_block_group+0x2d8/0x490
         btrfs_relocate_chunk+0x54/0x1a8
         btrfs_balance+0x7f4/0x1150
         btrfs_ioctl+0x10f0/0x20b8
         __arm64_sys_ioctl+0x120/0x11d8
         invoke_syscall.constprop.0+0x80/0xd8
         do_el0_svc+0x6c/0x158
         el0_svc+0x50/0x1b0
         el0t_64_sync_handler+0x120/0x130
         el0t_64_sync+0x194/0x198
        Code: 91098021 b0007fa0 91346000 97e9c6d2 (d4210000)
      
      This is the same problem outlined in 17b17fcd ("btrfs:
      set_page_extent_mapped after read_folio in btrfs_cont_expand"), and
      the fix is the same.  I originally looked for the same pattern
      elsewhere in our code, but mistakenly skipped over this code because
      I saw the page cache readahead before we set_page_extent_mapped, not
      realizing that this was only in the !page case and that we can still
      end up with a !uptodate page and then do the btrfs_read_folio further
      down.

      The fix here is the same as in the above mentioned patch: move the
      set_page_extent_mapped call to after the btrfs_read_folio() block to
      make sure that we have the subpage blocksize stuff set up properly
      before using the page.
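
      In other words, the ordering in relocate_one_page() becomes roughly
      the following (simplified sketch; error handling and the re-check of
      the page after locking are omitted, the label is assumed):

        if (!PageUptodate(page)) {
                btrfs_read_folio(NULL, page_folio(page));
                lock_page(page);
                /* ... verify mapping and uptodate status again ... */
        }

        /* Attach the subpage private structure only after the read, so
         * the subpage blocksize bookkeeping is in place before the page
         * is dirtied. */
        ret = set_page_extent_mapped(page);
        if (ret < 0)
                goto release_page;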
      
      CC: stable@vger.kernel.org # 6.1+
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: wait on uncached block groups on every allocation loop · cd361199
      Josef Bacik authored
      My initial fix for the generic/475 hangs was related to metadata, but
      our CI testing uncovered another case where we hang for similar
      reasons.  We again have a task with a plug that is holding an
      outstanding request that is keeping the dm device from finishing its
      suspend, and that task is stuck in the allocator.
      
      This time it is stuck trying to allocate data, but we do not have a
      block group that matches the size class.  The larger loop in the
      allocator looks like this (simplified of course)
      
        find_free_extent
          for_each_block_group {
            ffe_ctl->cached == btrfs_block_group_cache_done(bg)
            if (!ffe_ctl->cached)
      	ffe_ctl->have_caching_bg = true;
            do_allocation()
      	btrfs_wait_block_group_cache_progress();
          }
      
          if (loop == LOOP_CACHING_WAIT && ffe_ctl->have_caching_bg)
            go search again;
      
      In my earlier fix we were trying to allocate from the block group, but
      we weren't waiting for the progress because we were only waiting for the
      free space to be >= the amount of free space we wanted.  My fix made it
      so we waited for forward progress to be made as well, so we would be
      sure to wait.
      
      This time however we did not have a block group that matched our size
      class, so what was happening was this
      
        find_free_extent
          for_each_block_group {
            ffe_ctl->cached == btrfs_block_group_cache_done(bg)
            if (!ffe_ctl->cached)
      	ffe_ctl->have_caching_bg = true;
            if (size_class_doesn't_match())
      	goto loop;
            do_allocation()
      	btrfs_wait_block_group_cache_progress();
        loop:
            release_block_group(block_group);
          }
      
          if (loop == LOOP_CACHING_WAIT && ffe_ctl->have_caching_bg)
            go search again;
      
      The size_class_doesn't_match() part was true, so we'd just skip this
      block group and never wait for caching, and then because we found a
      caching block group we'd just go back and do the loop again.  We never
      sleep and thus never flush the plug and we have the same deadlock.
      
      Fix the logic for waiting on the block group caching to instead do it
      unconditionally when we goto loop.  This takes the logic out of the
      allocation step, so now the loop looks more like this
      
        find_free_extent
          for_each_block_group {
            ffe_ctl->cached == btrfs_block_group_cache_done(bg)
            if (!ffe_ctl->cached)
      	ffe_ctl->have_caching_bg = true;
            if (size_class_doesn't_match())
      	goto loop;
            do_allocation()
      	btrfs_wait_block_group_cache_progress();
        loop:
            if (loop > LOOP_CACHING_NOWAIT && !ffe_ctl->retry_uncached &&
      	  !ffe_ctl->cached) {
      	 ffe_ctl->retry_uncached = true;
      	 btrfs_wait_block_group_cache_progress();
            }
      
            release_block_group(block_group);
          }
      
          if (loop == LOOP_CACHING_WAIT && ffe_ctl->have_caching_bg)
            go search again;
      
      This simplifies the logic a lot, and makes sure that if we're hitting
      uncached block groups we're always waiting on them at some point.
      
      I ran this through 100 iterations of generic/475, as this particular
      case was harder to hit than the previous one.
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: use LIST_HEAD() to initialize the list_head · 84af994b
      Ruan Jinjie authored
      Use LIST_HEAD() to initialize the list_head instead of open-coding it.
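
      For illustration, the two forms are equivalent (generic example, not
      a specific btrfs call site):

        /* Open-coded initialization that gets replaced: */
        struct list_head pending;
        INIT_LIST_HEAD(&pending);

        /* Declaration plus initialization in one step: */
        LIST_HEAD(pending);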
      Signed-off-by: Ruan Jinjie <ruanjinjie@huawei.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: handle errors properly in update_inline_extent_backref() · 25761430
      Qu Wenruo authored
      [PROBLEM]
      Inside the function update_inline_extent_backref(), we have several
      BUG_ON()s along with some ASSERT()s which can be triggered by a
      corrupted filesystem.

      [ANALYSIS]
      Most of those BUG_ON()s and ASSERT()s are just a way of handling
      unexpected on-disk data.

      Although we have the tree-checker to rule out obviously incorrect
      extent tree blocks, it's not enough for these ones.  Thus we need
      proper error handling for them.
      
      [FIX]
      Thankfully all the callers of update_inline_extent_backref() would
      eventually handle the error by aborting the current transaction.
      So this patch does the proper error handling by:
      
      - Make update_inline_extent_backref() to return int
        The return value would be either 0 or -EUCLEAN.
      
      - Replace BUG_ON()s and ASSERT()s with proper error handling
        This includes:
        * Dump the bad extent tree leaf
        * Output an error message for the cause
          This would include the extent bytenr, num_bytes (if needed), the bad
          values and expected good values.
        * Return -EUCLEAN
      
        Note here we remove all the WARN_ON()s, as eventually the transaction
        would be aborted, thus a backtrace would be triggered anyway.
      
      - Better comments on why we expect refs == 1 and refs_to_mod == -1
        for tree blocks
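
      The general shape of the replacement pattern is the following (the
      condition and message text are illustrative; the real checks and
      wording differ per call site):

        if (unlikely(refs_to_mod != -1 || refs != 1)) {
                btrfs_print_leaf(leaf);
                btrfs_err(fs_info,
        "invalid refs_to_mod for tree block %llu, has %d expect -1",
                          bytenr, refs_to_mod);
                return -EUCLEAN;
        }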
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: zoned: re-enable metadata over-commit for zoned mode · 5b135b38
      Naohiro Aota authored
      Now that we moved the activation from the reservation time to the
      write time, we no longer need to ensure all the reserved bytes are
      properly activated, so we can re-enable metadata over-commit.

      Without the metadata over-commit, performance suffers because we
      need to flush the delalloc items more often and allocate more block
      groups. Re-enabling metadata over-commit will solve the issue.
      
      Fixes: 79417d04 ("btrfs: zoned: disable metadata overcommit for zoned")
      CC: stable@vger.kernel.org # 6.1+
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: zoned: don't activate non-DATA BG on allocation · 5a7d107e
      Naohiro Aota authored
      Now that a non-DATA block group is activated at write time, don't
      activate it at allocation time.
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: zoned: no longer count fresh BG region as zone unusable · 6a8ebc77
      Naohiro Aota authored
      Now that we switched to write time activation, we no longer need to
      (and must not) count the fresh region as zone unusable. This commit
      is similar to a revert of commit fa2068d7 ("btrfs: zoned: count
      fresh BG region as zone unusable").
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: zoned: activate metadata block group on write time · 13bb483d
      Naohiro Aota authored
      In the current implementation, block groups are activated at reservation
      time to ensure that all reserved bytes can be written to an active metadata
      block group. However, this approach has proven to be less efficient, as it
      activates block groups more frequently than necessary, putting pressure on
      the active zone resource and leading to potential issues such as early
      ENOSPC or hung_task.
      
      Another drawback of the current method is that it hampers metadata
      over-commit, and necessitates additional flush operations and block group
      allocations, resulting in decreased overall performance.
      
      To address these issues, this commit introduces write-time activation
      of metadata and system block groups. This involves reserving at least
      one active block group specifically for metadata and system block
      groups.
      
      Since metadata write-out is always allocated sequentially, when we need to
      write to a non-active block group, we can wait for the ongoing IOs to
      complete, activate a new block group, and then proceed with writing to the
      new block group.
      
      Fixes: b0931513 ("btrfs: zoned: activate metadata block group on flush_space")
      CC: stable@vger.kernel.org # 6.1+
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: zoned: reserve zones for an active metadata/system block group · a7e1ac7b
      Naohiro Aota authored
      Ensure a metadata and system block group can be activated at write
      time, by leaving a certain number of active zones when trying to
      activate a data block group.
      
      Zones for two metadata block groups (normal and tree-log) and one system
      block group are reserved, according to the profile type: two zones per
      block group on the DUP profile and one zone per block group otherwise.
      
      The reservation must be freed once a non-data block group is allocated. If
      not, we over-reserve the active zones and data block group activation will
      suffer. For the dynamic reservation count, we need to manage the
      reservation count per device.
      
      The reservation count variable is protected by
      fs_info->zone_active_bgs_lock.
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: zoned: update meta write pointer on zone finish · c1c3c2bc
      Naohiro Aota authored
      On finishing a zone, the meta_write_pointer should be set to the end
      of the zone to reflect the actual write pointer position.
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: zoned: defer advancing meta write pointer · 0356ad41
      Naohiro Aota authored
      We currently advance the meta_write_pointer in
      btrfs_check_meta_write_pointer(). That makes it necessary to revert
      it when locking the buffer fails. Instead, we can advance it just
      before sending the buffer.

      Also, this is necessary for the following commit, which needs to
      release the zoned_meta_io_lock to allow IOs to come in and wait for
      them
      to fill the currently active block group. If we advance the
      meta_write_pointer before locking the extent buffer, the following extent
      buffer can pass the meta_write_pointer check, resulting in an unaligned
      write failure.
      
      Advancing the pointer is still thread-safe as the extent buffer is locked.
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: zoned: return int from btrfs_check_meta_write_pointer · 2ad8c051
      Naohiro Aota authored
      Now that we have writeback_control passed to
      btrfs_check_meta_write_pointer(), we can move the wbc condition in
      submit_eb_page() to btrfs_check_meta_write_pointer() and return int.
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: zoned: introduce block group context to btrfs_eb_write_context · 7db94301
      Naohiro Aota authored
      For metadata write out on zoned mode, we call
      btrfs_check_meta_write_pointer() to check if an extent buffer to be
      written is aligned to the write pointer.

      We look up the block group containing the extent buffer for every
      extent buffer, which takes unnecessary effort as the extent buffers
      being written are mostly contiguous.

      Introduce "zoned_bg" to cache the block group being worked on.  Also,
      while at it, rename "cache" to "block_group".
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: introduce struct to consolidate extent buffer write context · 861093ef
      Naohiro Aota authored
      Introduce btrfs_eb_write_context to consolidate the writeback_control
      and the extent buffer context.  This will help adding a block group
      context as well.

      While at it, move the eb context setting before
      btrfs_check_meta_write_pointer(). We can set it there because we need
      to skip pages in the same eb anyway if that eb is rejected by
      btrfs_check_meta_write_pointer().
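
      The consolidated context is essentially a small wrapper struct (rough
      shape shown below; the block group pointer is only added by the zoned
      follow-up listed above):

        struct btrfs_eb_write_context {
                struct writeback_control *wbc;
                struct extent_buffer *eb;
                /* struct btrfs_block_group *zoned_bg; added later */
        };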
      Suggested-by: Christoph Hellwig <hch@infradead.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: avoid start and commit empty transaction when flushing qgroups · 9c93c238
      Filipe Manana authored
      When flushing qgroups, we try to join a running transaction, with
      btrfs_join_transaction(), and then commit the transaction. However using
      btrfs_join_transaction() will result in creating a new transaction in case
      there isn't any running or if there's an existing one already committing.
      This is pointless as we only need to attach to an existing one that is
      not committing and in case there's an existing one committing, wait for
      its commit to complete. Creating and committing an empty transaction is
      wasteful, pointless IO and unnecessary rotation of the backup roots.
      
      So use btrfs_attach_transaction_barrier() instead, to avoid creating and
      committing empty transactions.
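
      The resulting pattern is roughly the following (simplified sketch;
      the error handling in the real code differs):

        trans = btrfs_attach_transaction_barrier(root);
        if (IS_ERR(trans)) {
                ret = PTR_ERR(trans);
                /* -ENOENT means there is no transaction to commit. */
                return ret == -ENOENT ? 0 : ret;
        }
        return btrfs_commit_transaction(trans);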
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: avoid start and commit empty transaction when starting qgroup rescan · 6705b48a
      Filipe Manana authored
      When starting a qgroup rescan, we try to join a running transaction, with
      btrfs_join_transaction(), and then commit the transaction. However using
      btrfs_join_transaction() will result in creating a new transaction in case
      there isn't any running or if there's an existing one already committing.
      This is pointless as we only need to attach to an existing one that is
      not committing and in case there's an existing one committing, wait for
      its commit to complete. Creating and committing an empty transaction is
      wasteful, pointless IO and unnecessary rotation of the backup roots.
      
      So use btrfs_attach_transaction_barrier() instead, to avoid creating and
      committing empty transactions.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: avoid starting and committing empty transaction when flushing space · 2ee70ed1
      Filipe Manana authored
      When flushing space and we are in the COMMIT_TRANS state, we join a
      transaction with btrfs_join_transaction() and then commit the
      returned transaction. However btrfs_join_transaction() starts a new
      transaction if there is none currently open, which is pointless since
      committing a new, empty transaction doesn't achieve anything; it only
      wastes time, IO and creates an unnecessary rotation of the backup
      roots.
      
      So use btrfs_attach_transaction_barrier() to avoid starting a new
      transaction. This also waits for any ongoing transaction that is
      committing (state >= TRANS_STATE_COMMIT_DOING) to fully complete, and
      therefore wait for all the extents that were pinned during the
      transaction's lifetime to be unpinned.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: avoid starting new transaction when flushing delayed items and refs · 2391245a
      Filipe Manana authored
      When flushing space we join a transaction to flush delayed items and
      delayed references, in order to try to release space. However
      btrfs_join_transaction() not only joins an existing transaction but
      also starts a new transaction if there is none open. If there is no
      transaction open, we have neither delayed items nor delayed
      references, so creating a new transaction is a waste of time and IO
      and creates an unnecessary rotation of the backup roots without
      gaining any benefits (including releasing space).
      
      So use btrfs_join_transaction_nostart() when attempting to flush delayed
      items and references.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: merge find_free_dev_extent() and find_free_dev_extent_start() · ed8947bc
      Filipe Manana authored
      There is no point in having find_free_dev_extent() because it's just a
      simple wrapper around find_free_dev_extent_start() which always passes a
      value of 0 for the search_start argument. Since there are no other callers
      of find_free_dev_extent_start(), remove find_free_dev_extent() and rename
      find_free_dev_extent_start() to find_free_dev_extent(), removing its
      search_start argument because it's always 0.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: make find_free_dev_extent() static · 883647f4
      Filipe Manana authored
      The function find_free_dev_extent() is only used within volumes.c, so make
      it static and remove its prototype from volumes.h.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>