1. 22 Feb, 2024 7 commits
    • xfs: create a static name for the dot entry too · e99bfc9e
      Darrick J. Wong authored
      Create an xfs_name_dot object so that upcoming scrub code can compare
      against that.  Offline repair already has such an object, so we're
      really just hoisting it to the kernel.
      Signed-off-by: Darrick J. Wong <djwong@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
    • xfs: iscan batching should handle unallocated inodes too · 82334a79
      Darrick J. Wong authored
      The inode scanner tries to reduce contention on the AGI header buffer
      lock by grabbing references to consecutive allocated inodes.  Batching
      stops as soon as we encounter an unallocated inode.  This is unfortunate
      because in the worst case performance collapses to the old "one at a
      time" behavior if every other inode is free.
      
      This is correct behavior, but we could do better.  Unallocated inodes by
      definition have nothing to scan, which means the iscan can ignore them
      as long as someone ensures that the scan data will reflect another
      thread allocating the inode and adding interesting metadata to that
      inode.  That mechanism is, of course, the live update hooks.
      
      Therefore, extend the batching mechanism to track unallocated inodes
      adjacent to the scan cursor.  The _want_live_update predicate can tell
      the caller's live update hook to incorporate all live updates to what
      the scanner thinks is an unallocated inode if (after dropping the AGI)
      some other thread allocates one of those inodes and begins using it.
      
      Note that we cannot just copy the ir_free bitmap into the scan cursor
      because the batching stops if iget says the inode is in an intermediate
      state (e.g. on the inactivation list) and cannot be igrabbed.
      Signed-off-by: Darrick J. Wong <djwong@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
    • xfs: cache a bunch of inodes for repair scans · a7a686cb
      Darrick J. Wong authored
      After observing xfs_scrub taking forever to rebuild parent pointers on a
      pptrs enabled filesystem, I decided to profile what the system was
      doing.  It turns out that when there are a lot of threads trying to scan
      the filesystem, most of our time is spent contending on AGI buffer
      locks.  Given that we're walking the inobt records anyway, we can often
      tell ahead of time when there's a bunch of (up to 64) consecutive inodes
      that we could grab all at once.
      
      Do this to amortize the cost of taking the AGI lock across as many
      inodes as we possibly can.  On the author's system this seems to improve
      parallel throughput from barely one and a half cores to slightly
      sublinear scaling.  The obvious antipattern here, of course, is a
      freemask with every other bit set (e.g. all 0xA's).
      Signed-off-by: Darrick J. Wong <djwong@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
    • xfs: stagger the starting AG of scrub iscans to reduce contention · c473a332
      Darrick J. Wong authored
      Online directory and parent repairs on parent-pointer equipped
      filesystems have shown that starting a large number of parallel iscans
      causes a lot of AGI buffer contention.  Try to reduce this by making
      iscans wrap around the end of the filesystem, and by using a
      rotor to stagger where each scanner begins.  Surprisingly, this boosts
      CPU utilization (on the author's test machines) from effectively
      single-threaded to 160%.  Not great, but see the next patch.
      Signed-off-by: Darrick J. Wong <djwong@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
    • xfs: allow scrub to hook metadata updates in other writers · 4e98cc90
      Darrick J. Wong authored
      Certain types of filesystem metadata can only be checked by scanning
      every file in the entire filesystem.  Specific examples of this include
      quota counts, file link counts, and reverse mappings of file extents.
      Directory and parent pointer reconstruction may also fall into this
      category.  File scanning is much trickier than scanning AG metadata
      because we have to take inode locks in the same order as the rest of
      [VX]FS, we can't be holding buffer locks when we do that, and scanning
      the whole filesystem takes time.
      
      Earlier versions of the online repair patchset relied heavily on
      fsfreeze as a means to quiesce the filesystem so that we could take
      locks in the proper order without worrying about concurrent updates from
      other writers.  Reviewers of those patches opined that freezing the
      entire fs to check and repair something was not sufficiently better than
      unmounting to run fsck offline.  I don't agree with that 100%, but the
      message was clear: find a way to repair things that minimizes the
      quiet period where nobody can write to the filesystem.
      
      Generally, building btree indexes online can be split into two phases: a
      collection phase where we compute the records that will be put into the
      new btree; and a construction phase, where we construct the physical
      btree blocks and persist them.  While it's simple to hold resource locks
      for the entirety of the two phases to ensure that the new index is
      consistent with the rest of the system, we don't need to hold resource
      locks during the collection phase if we have a means to receive live
      updates of other work going on elsewhere in the system.
      
      The goal of this patch, then, is to enable online fsck to learn about
      metadata updates going on in other threads while it constructs a shadow
      copy of the metadata records to verify or correct the real metadata.  To
      minimize the overhead when online fsck isn't running, we use srcu
      notifiers because they prioritize fast access to the notifier call chain
      (particularly when the chain is empty) at a cost to configuring
      notifiers.  Online fsck should be relatively infrequent, so this is
      acceptable.
      
      The intended usage model is fairly simple.  Code that modifies a
      metadata structure of interest should declare an xfs_hook_chain structure
      in some well defined place, and call xfs_hook_call whenever an update
      happens.  Online fsck code should define a struct notifier_block and use
      xfs_hook_add to attach the block to the chain, along with a function to
      be called.  This function should synchronize with the fsck scanner to
      update whatever in-memory data the scanner is collecting.  When
      finished, xfs_hook_del removes the notifier from the list and waits
      for any in-progress hook calls to complete.
      
      Originally, I selected srcu notifiers over blocking notifiers to
      implement live hooks because they seemed to have fewer impacts to
      scalability.  The per-call cost of srcu_notifier_call_chain is higher
      (19ns) than blocking_notifier_ (4ns) in the single threaded case, but
      blocking notifiers use an rwsem to stabilize the list.  Cacheline
      bouncing for that rwsem is costly to runtime code when there are a lot
      of CPUs running regular filesystem operations.  If there are no hooks
      installed, this is a total waste of CPU time.
      
      Therefore, I stuck with srcu notifiers, despite trading off single
      threaded performance for multithreaded performance.  I also wasn't
      thrilled with the very high teardown time for srcu notifiers, since the
      caller has to wait for the next rcu grace period.  This can take a long
      time if there are a lot of CPUs.
      
      Then I discovered the jump label implementation of static keys.
      
      Jump labels use kernel code patching to replace a branch with a nop sled
      when the key is disabled.  IOWs, they can eliminate the overhead of
      _call_chain when there are no hooks enabled.  This makes blocking
      notifiers competitive again -- scrub runs faster because teardown of the
      chain is a lot cheaper, and runtime code only pays the rwsem locking
      overhead when scrub is actually running.
      
      With jump labels enabled, calls to empty notifier chains are elided from
      the call sites when there are no hooks registered, which means that the
      overhead is 0.36ns when fsck is not running.  This is perfect for most
      of the architectures that XFS is expected to run on (e.g. x86, powerpc,
      arm64, s390x, riscv).
      
      For architectures that don't support jump labels (e.g. m68k) the runtime
      overhead of checking the static key is an atomic counter read.  This
      isn't great, but it's still cheaper than taking a shared rwsem.
      Signed-off-by: Darrick J. Wong <djwong@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
    • xfs: implement live inode scan for scrub · 8660c7b7
      Darrick J. Wong authored
      This patch implements a live file scanner for online fsck functions that
      require the ability to walk a filesystem to gather metadata records and
      stay informed about metadata changes to files that have already been
      visited.
      
      The iscan structure consists of two inode number cursors: one to track
      which inode we want to visit next, and a second one to track which
      inodes have already been visited.  This second cursor is key to
      capturing live updates to files previously scanned while the main thread
      continues scanning -- any inode greater than this value hasn't been
      scanned and can go on its way; any other update must be incorporated
      into the collected data.  It is critical for the scanning thread to hold
      exclusive access on the inode until after marking the inode visited.
      
      This new code is a separate patch from the patchsets adding callers for
      the sake of enabling the author to move patches around his tree with
      ease.  The intended usage model for this code is roughly:
      
      	xchk_iscan_start(iscan, 0, 0);
      	while ((error = xchk_iscan_iter(sc, iscan, &ip)) == 1) {
      		xfs_ilock(ip, ...);
      		/* capture inode metadata */
      		xchk_iscan_mark_visited(iscan, ip);
      		xfs_iunlock(ip, ...);
      
      		xfs_irele(ip);
      	}
      	xchk_iscan_stop(iscan);
      	if (error)
      		return error;
      
      Hook functions for live updates can then do:
      
      	if (xchk_iscan_want_live_update(...))
      		/* update the captured inode metadata */
      Signed-off-by: Darrick J. Wong <djwong@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
    • xfs: speed up xfs_iwalk_adjust_start a little bit · ae05eb11
      Darrick J. Wong authored
      Replace the open-coded loop that recomputes freecount with a single call
      to a bit weight function.
      Signed-off-by: Darrick J. Wong <djwong@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
  2. 21 Feb, 2024 23 commits
  3. 20 Feb, 2024 2 commits
  4. 19 Feb, 2024 4 commits
  5. 17 Feb, 2024 3 commits
    • xfs: ensure submit buffers on LSN boundaries in error handlers · e4c3b72a
      Long Li authored
      While performing the IO fault injection test, I caught the following data
      corruption report:
      
       XFS (dm-0): Internal error ltbno + ltlen > bno at line 1957 of file fs/xfs/libxfs/xfs_alloc.c.  Caller xfs_free_ag_extent+0x79c/0x1130
       CPU: 3 PID: 33 Comm: kworker/3:0 Not tainted 6.5.0-rc7-next-20230825-00001-g7f8666926889 #214
       Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS ?-20190727_073836-buildvm-ppc64le-16.ppc.fedoraproject.org-3.fc31 04/01/2014
       Workqueue: xfs-inodegc/dm-0 xfs_inodegc_worker
       Call Trace:
        <TASK>
        dump_stack_lvl+0x50/0x70
        xfs_corruption_error+0x134/0x150
        xfs_free_ag_extent+0x7d3/0x1130
        __xfs_free_extent+0x201/0x3c0
        xfs_trans_free_extent+0x29b/0xa10
        xfs_extent_free_finish_item+0x2a/0xb0
        xfs_defer_finish_noroll+0x8d1/0x1b40
        xfs_defer_finish+0x21/0x200
        xfs_itruncate_extents_flags+0x1cb/0x650
        xfs_free_eofblocks+0x18f/0x250
        xfs_inactive+0x485/0x570
        xfs_inodegc_worker+0x207/0x530
        process_scheduled_works+0x24a/0xe10
        worker_thread+0x5ac/0xc60
        kthread+0x2cd/0x3c0
        ret_from_fork+0x4a/0x80
        ret_from_fork_asm+0x11/0x20
        </TASK>
       XFS (dm-0): Corruption detected. Unmount and run xfs_repair
      
      After analyzing the disk image, it was found that the corruption was
      triggered by an extent being recorded in both the inode data fork
      and the AGF btree blocks. After a long time of reproduction and
      analysis, we found that the free space btree corruption was caused
      by the AGF btree not being recovered correctly.
      
      Consider the following situation: Checkpoint A and Checkpoint B are
      in the same record and share the same start LSN1, and buf items for
      the same object (an AGF btree block) are included in both Checkpoint
      A and Checkpoint B. If the buf item in Checkpoint A has been
      recovered and permanently updates the metadata LSN, then the buf
      item in Checkpoint B cannot be recovered, because log recovery skips
      items with a metadata LSN >= the current LSN of the recovery item.
      If there is still an inode item in Checkpoint B that records Extent
      X, then Extent X will be recorded in both the inode data fork and
      the AGF btree block after Checkpoint B is recovered. Such a
      transaction can be seen when allocating an extent for an inode bmap:
      it records both the addition of the extent to the inode extent list
      and the removal of the extent from the AGF.
      
        |------------Record (LSN1)------------------|---Record (LSN2)---|
        |-------Checkpoint A----------|----------Checkpoint B-----------|
        |     Buf Item(Extent X)      | Buf Item / Inode item(Extent X) |
        |     Extent X is freed       |     Extent X is allocated       |
      
      After commit 12818d24 ("xfs: rework log recovery to submit buffers
      on LSN boundaries") was introduced, we submit buffers on lsn boundaries
      during log recovery. The above problem can be avoided under normal
      paths, but it's not guaranteed under abnormal paths. Consider the
      following process: if an error is encountered after recovering the
      buf item in Checkpoint A and before recovering the buf item in
      Checkpoint B, buffers that have already been added to the
      buffer_list will still be submitted, which violates the rule of
      submitting on LSN boundaries. The buf item in Checkpoint B then
      cannot be recovered on the next mount, because the transaction's
      current LSN equals the metadata LSN on disk. The detailed process
      of the problem is as follows.
      
      First Mount:
      
        xlog_do_recovery_pass
          error = xlog_recover_process
            xlog_recover_process_data
              xlog_recover_process_ophdr
                xlog_recovery_process_trans
                  ...
                    /* recover buf item in Checkpoint A */
                    xlog_recover_buf_commit_pass2
                      xlog_recover_do_reg_buffer
                      /* add buffer of agf btree block to buffer_list */
                      xfs_buf_delwri_queue(bp, buffer_list)
                  ...
                  ==> Encounter read IO error and return
          /* submit buffers regardless of error */
          if (!list_empty(&buffer_list))
            xfs_buf_delwri_submit(&buffer_list);
      
          <buf items of agf btree block in Checkpoint A recovered successfully>
      
      Second Mount:
      
        xlog_do_recovery_pass
          error = xlog_recover_process
            xlog_recover_process_data
              xlog_recover_process_ophdr
                xlog_recovery_process_trans
                  ...
                    /* recover buf item in Checkpoint B */
                    xlog_recover_buf_commit_pass2
                      /* buffer of agf btree block wouldn't be added to
                         buffer_list due to lsn equal to current_lsn */
                      if (XFS_LSN_CMP(lsn, current_lsn) >= 0)
                        goto out_release
      
          <buf items of agf btree block in Checkpoint B wouldn't be recovered>
      
      In order to make sure that buffers are submitted on LSN boundaries
      in the abnormal paths, we need to check the error status before
      submitting buffers that have been added from the last record
      processed. If an error occurred, buffers in the buffer_list should
      not be written to disk.
      
      Canceling the buffers in the buffer_list directly isn't correct.
      Unlike every other place where a write list is canceled, these
      buffers have been initialized by xfs_buf_item_init() during recovery
      and are held by buf items, and those buf items will not be released
      in xfs_buf_delwri_cancel(), so this is not easy to solve.
      
      If the filesystem has been shut down, then delwri list submission will
      error out all buffers on the list via IO submission/completion and do
      all the correct cleanup automatically. So shutting down the
      filesystem prevents buffers in the buffer_list from being written
      to disk.
      
      Fixes: 50d5c8d8 ("xfs: check LSN ordering for v5 superblocks during recovery")
      Signed-off-by: Long Li <leo.lilong@huawei.com>
      Reviewed-by: "Darrick J. Wong" <djwong@kernel.org>
      Signed-off-by: Chandan Babu R <chandanbabu@kernel.org>
    • xfs: remove duplicate ifdefs · 0164defd
      Shrikanth Hegde authored
      When an ifdef is used in the manner below, the second one can be
      considered a duplicate.
      
      #ifdef DEFINE_A
      ...code block...
      #ifdef DEFINE_A
      ...code block...
      #endif
      ...code block...
      #endif
      
      Two such patterns were seen in the XFS code, hence these duplicate
      ifdefs are removed. No functional change is intended here; this only
      aims to improve code readability.
      Reviewed-by: "Darrick J. Wong" <djwong@kernel.org>
      Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com>
      Signed-off-by: Chandan Babu R <chandanbabu@kernel.org>
    • xfs: disable sparse inode chunk alignment check when there is no alignment · 1149314a
      Darrick J. Wong authored
      While testing a 64k-blocksize filesystem, I noticed that xfs/709 fails
      to rebuild the inode btree with a bunch of "Corruption remains"
      messages.  It turns out that when the inode chunk size is smaller than a
      single filesystem block, no block alignment constraints are necessary
      for inode chunk allocations, and sb_spino_align is zero.  Hence we can
      skip the check.
      
      Fixes: dbfbf3bd ("xfs: repair inode btrees")
      Signed-off-by: "Darrick J. Wong" <djwong@kernel.org>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Chandan Babu R <chandanbabu@kernel.org>
  6. 13 Feb, 2024 1 commit