1. 27 Sep, 2017 14 commits
2. 20 Sep, 2017 26 commits
    • Linux 4.9.51 · 089d7720
      Greg Kroah-Hartman authored
    • ipv6: Fix may be used uninitialized warning in rt6_check · 78296840
      Steffen Klassert authored
      commit 36143645 upstream.
      
      rt_cookie might be used uninitialized; fix this by
      initializing it.
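
      A minimal sketch of the change, assuming the rt6_check() and
      rt6_get_cookie_safe() shape in this tree (illustrative only, not the
      exact diff):

          static struct dst_entry *rt6_check(struct rt6_info *rt, u32 cookie)
          {
                  u32 rt_cookie = 0;  /* was uninitialized; the helper may not set it */

                  if (!rt6_get_cookie_safe(rt, &rt_cookie) || rt_cookie != cookie)
                          return NULL;

                  /* ... remainder unchanged ... */
                  return &rt->dst;
          }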
      
      Fixes: c5cff856 ("ipv6: add rcu grace period before freeing fib6_node")
      Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • xfs: fix compiler warnings · ae04a8c4
      Darrick J. Wong authored
      commit 7bf7a193 upstream.
      
      Fix up all the compiler warnings that have crept in.
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • md/raid5: release/flush io in raid5_do_work() · 7b5fcb7f
      Song Liu authored
      commit 9c72a18e upstream.
      
      In raid5, there are scenarios where some I/Os are deferred to a later
      time, and some I/O needs a flush to complete. To make sure we make
      progress with these I/Os, we need to call the following functions:
      
          flush_deferred_bios(conf);
          r5l_flush_stripe_to_raid(conf->log);
      
      Both of these functions are called in raid5d(), but are missing from
      raid5_do_work(). As a result, these functions are not called
      when multi-threading (group_thread_cnt > 0) is enabled. This patch
      adds calls to these functions in raid5_do_work().
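
      A rough sketch of where the calls land, mirroring the tail of raid5d();
      the surrounding code is paraphrased and may differ slightly in the 4.9
      tree, and only the r5l_*() call applies to this branch (see the stable
      note below):

          static void raid5_do_work(struct work_struct *work)
          {
                  /* ... stripe handling loop ... */

                  spin_unlock_irq(&conf->device_lock);

                  flush_deferred_bios(conf);              /* 4.11+ only */
                  r5l_flush_stripe_to_raid(conf->log);    /* 4.4+, incl. 4.9 */

                  async_tx_issue_pending_all();
                  blk_finish_plug(&plug);
          }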
      
      Note for stable branches:
      
        r5l_flush_stripe_to_raid(conf->log) is needed for 4.4+
        flush_deferred_bios(conf) is only needed for 4.11+
      Signed-off-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • xfs: use kmem_free to free return value of kmem_zalloc · 81cb6f1a
      Pan Bian authored
      commit 6c370590 upstream.
      
      In function xfs_test_remount_options(), kfree() is used to free memory
      allocated by kmem_zalloc(). But it is better to use kmem_free().
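
      The change is a one-liner; roughly (function body abridged, local names
      as in the upstream tree to the best of my knowledge):

          tmp_mp = kmem_zalloc(sizeof(*tmp_mp), KM_MAYFAIL);
          if (!tmp_mp)
                  return -ENOMEM;

          tmp_mp->m_super = sb;
          error = xfs_parseargs(tmp_mp, options);
          xfs_free_fsname(tmp_mp);
          kmem_free(tmp_mp);      /* was kfree(); pairs with kmem_zalloc() above */

          return error;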
      Signed-off-by: Pan Bian <bianpan2016@163.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • xfs: open code end_buffer_async_write in xfs_finish_page_writeback · 772003c6
      Christoph Hellwig authored
      commit 8353a814 upstream.
      
      Our loop in xfs_finish_page_writeback, which iterates over all buffer
      heads in a page and then calls end_buffer_async_write, which also
      iterates over all buffers in the page to check if any I/O is in flight,
      is not only inefficient but also potentially dangerous, as
      end_buffer_async_write can cause the page and all buffers to be freed.
      
      Replace it with a single loop that does the work of end_buffer_async_write
      on a per-page basis.
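
      A simplified sketch of the replacement loop; the buffer-head state-bit
      locking and error plumbing are omitted here, and the exact bookkeeping
      in the real patch may differ:

          struct buffer_head *head = page_buffers(bvec->bv_page), *bh = head;
          unsigned int off = 0;
          bool busy = false;

          do {
                  if (off >= bvec->bv_offset &&
                      off < bvec->bv_offset + bvec->bv_len) {
                          /* buffer covered by the just-completed I/O */
                          if (error)
                                  clear_buffer_uptodate(bh);
                          else
                                  set_buffer_uptodate(bh);
                          clear_buffer_async_write(bh);
                          unlock_buffer(bh);
                  } else if (buffer_async_write(bh)) {
                          busy = true;    /* another write still in flight */
                  }
                  off += bh->b_size;
          } while ((bh = bh->b_this_page) != head);

          if (!busy)
                  end_page_writeback(bvec->bv_page);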
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • xfs: don't set v3 xflags for v2 inodes · bb69e8a2
      Christoph Hellwig authored
      commit dd60687e upstream.
      
      Reject attempts to set XFLAGS that correspond to di_flags2 inode flags
      if the inode isn't a v3 inode, because di_flags2 only exists on v3.
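
      The check amounts to something like the following in the xflags ioctl
      path (a sketch; the helper name is an assumption about the upstream
      code):

          /* di_flags2-only flags (e.g. DAX, COWEXTSIZE) need a v3 inode */
          di_flags2 = xfs_flags2diflags2(ip, fa->fsx_xflags);
          if (di_flags2 && ip->i_d.di_version < 3)
                  return -EINVAL;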
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • xfs: fix incorrect log_flushed on fsync · f46a61f6
      Amir Goldstein authored
      commit 47c7d0b1 upstream.
      
      When calling into _xfs_log_force{,_lsn}() with a pointer
      to the log_flushed variable, log_flushed will be set to 1 if:
      1. xlog_sync() is called to flush the active log buffer
      AND/OR
      2. xlog_wait() is called to wait on a syncing log buffer
      
      xfs_file_fsync() checks the value of log_flushed after
      _xfs_log_force_lsn() call to optimize away an explicit
      PREFLUSH request to the data block device after writing
      out all the file's pages to disk.
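
      For reference, the optimization in xfs_file_fsync() looks roughly like
      this (a sketch of the pre-fix logic, not an exact quote):

          /*
           * If the log was flushed on our behalf, skip the explicit cache
           * flush to the data device -- the step the race below makes unsafe.
           */
          if (!log_flushed && !XFS_IS_REALTIME_INODE(ip) &&
              mp->m_logdev_targp == mp->m_ddev_targp)
                  xfs_blkdev_issue_flush(mp->m_ddev_targp);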
      
      This optimization is incorrect in the following sequence of events:
      
       Task A                    Task B
       -------------------------------------------------------
       xfs_file_fsync()
         _xfs_log_force_lsn()
           xlog_sync()
              [submit PREFLUSH]
                                 xfs_file_fsync()
                                   file_write_and_wait_range()
                                     [submit WRITE X]
                                     [endio  WRITE X]
                                   _xfs_log_force_lsn()
                                     xlog_wait()
              [endio  PREFLUSH]
      
      Write X is not guaranteed to be on persistent storage
      when the PREFLUSH request is completed, because write X was submitted
      after the PREFLUSH request, but xfs_file_fsync() of task A will
      be notified of log_flushed=1 and will skip the explicit flush.
      
      If the system crashes after fsync of task A, write X may not be
      present on disk after reboot.
      
      This bug was discovered and demonstrated using Josef Bacik's
      dm-log-writes target, which can be used to record block io operations
      and then replay a subset of these operations onto the target device.
      The test goes something like this:
      - Use fsx to execute ops on a file and record the ops on the log device
      - Every now and then fsync the file, store the md5 of the file and mark
        the location in the log
      - Then replay the log onto the device up to each mark, mount the fs and
        compare the md5 of the file to the stored value
      
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Josef Bacik <jbacik@fb.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Amir Goldstein <amir73il@gmail.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • xfs: disable per-inode DAX flag · 0e8d7e36
      Christoph Hellwig authored
      commit 742d8429 upstream.
      
      Currently flag switching can be used to easily crash the kernel.  Disable
      the per-inode DAX flag until that is sorted out.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • xfs: relog dirty buffers during swapext bmbt owner change · a46cf592
      Brian Foster authored
      commit 2dd3d709 upstream.
      
      The owner change bmbt scan that occurs during extent swap operations
      does not handle ordered buffer failures. Buffers that cannot be
      marked ordered must be physically logged so previously dirty ranges
      of the buffer can be relogged in the transaction.
      
      Since the bmbt scan may need to process and potentially log a large
      number of blocks, we can't expect to complete this operation in a
      single transaction. Update extent swap to use a permanent
      transaction with enough log reservation to physically log a buffer.
      Update the bmbt scan to physically log any buffers that cannot be
      ordered and to terminate the scan with -EAGAIN. On -EAGAIN, the
      caller rolls the transaction and restarts the scan. Finally, update
      the bmbt scan helper function to skip bmbt blocks that already match
      the expected owner so they are not reprocessed after scan restarts.
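
      The resulting caller pattern is essentially a roll-and-retry loop; a
      sketch under the assumption that the scan is driven through
      xfs_bmbt_change_owner() and the two-argument 4.9 xfs_trans_roll():

          /* change bmbt block owners, rolling the transaction as needed */
          do {
                  error = xfs_bmbt_change_owner(tp, ip, XFS_DATA_FORK,
                                                ip->i_ino, NULL);
                  if (error != -EAGAIN)
                          break;          /* finished, or a real error */

                  /* scan hit a dirty, unorderable buffer: roll and retry */
                  error = xfs_trans_roll(&tp, NULL);
          } while (!error);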
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      [darrick: fix the xfs_trans_roll call]
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • xfs: disallow marking previously dirty buffers as ordered · e2bb9263
      Brian Foster authored
      commit a5814bce upstream.
      
      Ordered buffers are used in situations where the buffer is not
      physically logged but must pass through the transaction/logging
      pipeline for a particular transaction. As a result, ordered buffers
      are not unpinned and written back until the transaction commits to
      the log. Ordered buffers have a strict requirement that the target
      buffer must not be currently dirty and resident in the log pipeline
      at the time it is marked ordered. If a dirty+ordered buffer is
      committed, the buffer is reinserted to the AIL but not physically
      relogged at the LSN of the associated checkpoint. The buffer log
      item is assigned the LSN of the latest checkpoint and the AIL
      effectively releases the previously logged buffer content from the
      active log before the buffer has been written back. If the tail
      pushes forward and a filesystem crash occurs while in this state, an
      inconsistent filesystem could result.
      
      It is currently the caller's responsibility to ensure an ordered
      buffer is not already dirty from a previous modification. This is
      unclear and error prone when ordered buffers are used outside of
      situations where it is guaranteed the buffer has not been previously
      modified (such as new metadata allocations).
      
      To facilitate general purpose use of ordered buffers, update
      xfs_trans_ordered_buf() to conditionally order the buffer based on
      state of the log item and return the status of the result. If the
      bli is dirty, do not order the buffer and return false. The caller
      must either physically log the buffer (having acquired the
      appropriate log reservation) or push it from the AIL to clean it
      before it can be marked ordered in the current transaction.
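
      In sketch form, the updated helper behaves roughly like this (field and
      helper names follow my reading of the patch and may not match the 4.9
      backport exactly):

          bool
          xfs_trans_ordered_buf(
                  struct xfs_trans        *tp,
                  struct xfs_buf          *bp)
          {
                  struct xfs_buf_log_item *bip = bp->b_fspriv;

                  /* refuse to order a buffer with logged (dirty) segments */
                  if (xfs_buf_item_dirty_format(bip))
                          return false;

                  bip->bli_flags |= XFS_BLI_ORDERED;

                  /* still dirty the transaction/bli, just with no range */
                  xfs_trans_dirty_buf(tp, bp);
                  return true;
          }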
      
      Note that ordered buffers are currently only used in two situations:
      1.) inode chunk allocation where previously logged buffers are not
      possible and 2.) extent swap which will be updated to handle ordered
      buffer failures in a separate patch.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • xfs: move bmbt owner change to last step of extent swap · a51e3e2c
      Brian Foster authored
      commit 6fb10d6d upstream.
      
      The extent swap operation currently resets bmbt block owners before
      the inode forks are swapped. The bmbt buffers are marked as ordered
      so they do not have to be physically logged in the transaction.
      
      This use of ordered buffers is not safe as bmbt buffers may have
      been previously physically logged. The bmbt owner change algorithm
      needs to be updated to physically log buffers that are already dirty
      when/if they are encountered. This means that an extent swap will
      eventually require multiple rolling transactions to handle large
      btrees. In addition, all inode related changes must be logged before
      the bmbt owner change scan begins and can roll the transaction for
      the first time to preserve fs consistency via log recovery.
      
      In preparation for such fixes to the bmbt owner change algorithm,
      refactor the bmbt scan out of the extent fork swap code to the last
      operation before the transaction is committed. Update
      xfs_swap_extent_forks() to only set the inode log flags when an
      owner change scan is necessary. Update xfs_swap_extents() to trigger
      the owner change based on the inode log flags. Note that since the
      owner change now occurs after the extent fork swap, the inode btrees
      must be fixed up with the inode number of the current inode (similar
      to log recovery).
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • xfs: skip bmbt block ino validation during owner change · f9e583ed
      Brian Foster authored
      commit 99c794c6 upstream.
      
      Extent swap uses xfs_btree_visit_blocks() to fix up bmbt block
      owners on v5 (!rmapbt) filesystems. The bmbt scan uses
      xfs_btree_lookup_get_block() to read bmbt blocks which verifies the
      current owner of the block against the parent inode of the bmbt.
      This works during extent swap because the bmbt owners are updated to
      the opposite inode number before the inode extent forks are swapped.
      
      The modified bmbt blocks are marked as ordered buffers which allows
      everything to commit in a single transaction. If the transaction
      commits to the log and the system crashes such that recovery of the
      extent swap is required, log recovery restarts the bmbt scan to fix
      up any bmbt blocks that may have not been written back before the
      crash. The log recovery bmbt scan occurs after the inode forks have
      been swapped, however. This causes the bmbt block owner verification
      to fail, leading to log recovery failure and requiring xfs_repair to
      zap the log to recover.
      
      Define a new invalid inode owner flag to inform the btree block
      lookup mechanism that the current inode may be invalid with respect
      to the current owner of the bmbt block. Set this flag on the cursor
      used for change owner scans to allow this operation to work at
      runtime and during log recovery.
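
      Conceptually the change-owner cursor gains a private flag and the
      lookup-time owner check honours it; a rough sketch (the flag name and
      exact check are assumptions about the upstream patch):

          /* when initialising the owner-change scan cursor: */
          cur->bc_private.b.flags |= XFS_BTCUR_BPRV_INVALID_OWNER;

          /* in the block lookup path, skip the owner check if set: */
          if (!(cur->bc_private.b.flags & XFS_BTCUR_BPRV_INVALID_OWNER) &&
              be64_to_cpu(block->bb_u.l.bb_owner) !=
              cur->bc_private.b.ip->i_ino)
                  return -EFSCORRUPTED;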
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Fixes: bb3be7e7 ("xfs: check for bogus values in btree block headers")
      Cc: stable@vger.kernel.org
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • xfs: don't log dirty ranges for ordered buffers · fe211e17
      Brian Foster authored
      commit 8dc518df upstream.
      
      Ordered buffers are attached to transactions and pushed through the
      logging infrastructure just like normal buffers with the exception
      that they are not actually written to the log. Therefore, we don't
      need to log dirty ranges of ordered buffers. xfs_trans_log_buf() is
      called on ordered buffers to set up all of the dirty state on the
      transaction, buffer and log item and prepare the buffer for I/O.
      
      Now that xfs_trans_dirty_buf() is available, call it from
      xfs_trans_ordered_buf() so the latter is now mutually exclusive with
      xfs_trans_log_buf(). This reflects the implementation of ordered
      buffers and helps eliminate confusion over the need to log ranges of
      ordered buffers just to set up internal log state.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Allison Henderson <allison.henderson@oracle.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • xfs: refactor buffer logging into buffer dirtying helper · 19a87a94
      Brian Foster authored
      commit 9684010d upstream.
      
      xfs_trans_log_buf() is responsible for logging the dirty segments of
      a buffer along with setting all of the necessary state on the
      transaction, buffer, bli, etc., to ensure that the associated items
      are marked as dirty and prepared for I/O. We have a couple of use cases
      that need to dirty a buffer in a transaction without actually
      logging dirty ranges of the buffer. One existing use case is
      ordered buffers, which are currently logged with arbitrary ranges to
      accomplish this even though the content of ordered buffers is never
      written to the log. Another pending use case is to relog an already
      dirty buffer across rolled transactions within the deferred
      operations infrastructure. This is required to prevent a held
      (XFS_BLI_HOLD) buffer from pinning the tail of the log.
      
      Refactor xfs_trans_log_buf() into a new function that contains all
      of the logic responsible to dirty the transaction, lidp, buffer and
      bli. This new function can be used in the future for the use cases
      outlined above. This patch does not introduce functional changes.
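
      After the refactor, xfs_trans_log_buf() reduces to "dirty, then log the
      range"; a sketch, assuming the new helper is the xfs_trans_dirty_buf()
      referenced elsewhere in this series:

          void
          xfs_trans_log_buf(
                  struct xfs_trans        *tp,
                  struct xfs_buf          *bp,
                  uint                    first,
                  uint                    last)
          {
                  struct xfs_buf_log_item *bip = bp->b_fspriv;

                  /* new helper: mark transaction, lidp and bli dirty */
                  xfs_trans_dirty_buf(tp, bp);

                  /* range logging stays here */
                  xfs_buf_item_log(bip, first, last);
          }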
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Allison Henderson <allison.henderson@oracle.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • xfs: ordered buffer log items are never formatted · 93b64516
      Brian Foster authored
      commit e9385cc6 upstream.
      
      Ordered buffers pass through the logging infrastructure without ever
      being written to the log. The way this works is that the ordered
      buffer status is transferred to the log vector at commit time via
      the ->iop_size() callback. In xlog_cil_insert_format_items(),
      ordered log vectors bypass ->iop_format() processing altogether.
      
      Therefore it is unnecessary for xfs_buf_item_format() to handle
      ordered buffers. Remove the unnecessary logic and assert that an
      ordered buffer never reaches this point.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • xfs: remove unnecessary dirty bli format check for ordered bufs · ba986b3c
      Brian Foster authored
      commit 6453c65d upstream.
      
      xfs_buf_item_unlock() historically checked the dirty state of the
      buffer by manually checking the buffer log formats for dirty
      segments. The introduction of ordered buffers invalidated this check
      because ordered buffers have dirty bli's but no dirty (logged)
      segments. The check was updated to accommodate ordered buffers by
      looking at the bli state first and considering the blf only if the
      bli is clean.
      
      This logic is safe but unnecessary. There is no valid case where the
      bli is clean yet the blf has dirty segments. The bli is set dirty
      whenever the blf is logged (via xfs_trans_log_buf()) and the blf is
      cleared in the only place BLI_DIRTY is cleared (xfs_trans_binval()).
      
      Remove the conditional blf dirty checks and replace with an assert
      that should catch any discrepancies between bli and blf dirty
      states. Refactor the old blf dirty check into a helper function to
      be used by the assert.
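
      The replacement boils down to asserting the invariant; roughly (the
      helper name is a guess at what the refactored blf check is called):

          bool    dirty = bip->bli_flags & XFS_BLI_DIRTY;

          /* a clean bli must never carry dirty (logged) format segments */
          ASSERT(dirty || !xfs_buf_item_dirty_format(bip));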
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • xfs: open-code xfs_buf_item_dirty() · 0f5af7ea
      Brian Foster authored
      commit a4f6cf6b upstream.
      
      It checks a single flag and has one caller. It probably isn't worth
      its own function.
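
      That is, the single call site now tests the flag directly, along the
      lines of:

          /* formerly: if (xfs_buf_item_dirty(bip)) */
          if (bip->bli_flags & XFS_BLI_DIRTY) {
                  /* ... */
          }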
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • xfs: check for race with xfs_reclaim_inode() in xfs_ifree_cluster() · 81286ade
      Omar Sandoval authored
      commit f2e9ad21 upstream.
      
      After xfs_ifree_cluster() finds an inode in the radix tree and verifies
      that the inode number is what it expected, xfs_reclaim_inode() can swoop
      in and free it. xfs_ifree_cluster() will then happily continue working
      on the freed inode. Most importantly, it will mark the inode stale,
      which will probably be overwritten when the inode slab object is
      reallocated, but if it has already been reallocated then we can end up
      with an inode spuriously marked stale.
      
      In 8a17d7dd ("xfs: mark reclaimed inodes invalid earlier") we added
      a second check to xfs_iflush_cluster() to detect this race, but the
      similar RCU lookup in xfs_ifree_cluster() needs the same treatment.
      Signed-off-by: Omar Sandoval <osandov@fb.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • xfs: evict all inodes involved with log redo item · 63d184d2
      Darrick J. Wong authored
      commit 799ea9e9 upstream.
      
      When we introduced the bmap redo log items, we set MS_ACTIVE on the
      mountpoint and XFS_IRECOVERY on the inode to prevent unlinked inodes
      from being truncated prematurely during log recovery.  This also had the
      effect of putting linked inodes on the lru instead of evicting them.
      
      Unfortunately, we neglected to find all those unreferenced lru inodes
      and evict them after finishing log recovery, which means that we leak
      them if anything goes wrong in the rest of xfs_mountfs, because the lru
      is only cleaned out on unmount.
      
      Therefore, evict unreferenced inodes in the lru list immediately
      after clearing MS_ACTIVE.
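
      In effect, at the end of log recovery (sketch; the exact call site in
      the 4.9 backport may differ):

          /* drop the recovery-time MS_ACTIVE hold on the superblock ... */
          mp->m_super->s_flags &= ~MS_ACTIVE;
          /* ... and shake out unreferenced inodes recovery left on the LRU */
          evict_inodes(mp->m_super);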
      
      Fixes: 17c12bcd ("xfs: when replaying bmap operations, don't let unlinked inodes get reaped")
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Cc: viro@ZenIV.linux.org.uk
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • xfs: stop searching for free slots in an inode chunk when there are none · 536932f3
      Carlos Maiolino authored
      commit 2d32311c upstream.
      
      In a filesystem without finobt, the space manager selects an AG to
      allocate a new inode, and xfs_dialloc_ag_inobt() then searches that AG
      for a chunk with a free slot.

      When the new inode is in the same AG as its parent, the btree will be
      searched starting at the parent's record, and then retried from the top
      if no slot is available beyond the parent's record.

      To exit this loop, though, xfs_dialloc_ag_inobt() relies on the btree
      having a free slot available, since its callers relied on
      agi->freecount when deciding how/where to allocate this new inode.

      If agi->freecount is corrupted and shows available inodes in an AG
      where in fact there are none, this becomes an infinite loop.

      Add a way to stop the loop when a free slot is not found in the btree,
      making the function fall back to the whole-AG scan, which will then be
      able to detect the corruption and shut the filesystem down.

      As pointed out by Brian, this might impact performance, given that we
      no longer reset the search distance when we reach the end of the tree,
      giving it fewer tries before falling back to the whole-AG search; but
      it will only affect searches that start within 10 records of the end
      of the tree.
      Signed-off-by: Carlos Maiolino <cmaiolino@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • xfs: add log recovery tracepoint for head/tail · 6b6505d9
      Brian Foster authored
      commit e67d3d42 upstream.
      
      Torn write detection and tail overwrite detection can shift the log
      head and tail respectively in the event of CRC mismatch or
      corruption errors. Add a high-level log recovery tracepoint to dump
      the final log head/tail and make those values easily attainable in
      debug/diagnostic situations.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • xfs: handle -EFSCORRUPTED during head/tail verification · 7549e7c0
      Brian Foster authored
      commit a4c9b34d upstream.
      
      Torn write and tail overwrite detection both trigger only on
      -EFSBADCRC errors. While this is the most likely failure scenario
      for each condition, -EFSCORRUPTED is still possible in certain cases
      depending on what ends up on disk when a torn write or partial tail
      overwrite occurs. For example, an invalid log record h_len can lead
      to an -EFSCORRUPTED error when running the log recovery CRC pass.
      
      Therefore, update log head and tail verification to trigger the
      associated head/tail fixups in the event of -EFSCORRUPTED errors
      along with -EFSBADCRC. Also, -EFSCORRUPTED can currently be returned
      from xlog_do_recovery_pass() before rhead_blk is initialized if the
      first record encountered happens to be corrupted. This leads to an
      incorrect 'first_bad' return value. Initialize rhead_blk earlier in
      the function to address that problem as well.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • xfs: fix log recovery corruption error due to tail overwrite · 47db1fc6
      Brian Foster authored
      commit 4a4f66ea upstream.
      
      If we consider the case where the tail (T) of the log is pinned long
      enough for the head (H) to push and block behind the tail, we can
      end up blocked in the following state without enough free space (f)
      in the log to satisfy a transaction reservation:
      
      	0	phys. log	N
      	[-------HffT---H'--T'---]
      
      The last good record in the log (before H) refers to T. The tail
      eventually pushes forward (T') leaving more free space in the log
      for writes to H. At this point, suppose space frees up in the log
      for the maximum of 8 in-core log buffers to start flushing out to
      the log. If this pushes the head from H to H', these next writes
      overwrite the previous tail T. This is safe because the items logged
      from T to T' have been written back and removed from the AIL.
      
      If the next log writes (H -> H') happen to fail and result in
      partial records in the log, the filesystem shuts down having
      overwritten T with invalid data. Log recovery correctly locates H on
      the subsequent mount, but H still refers to the now corrupted tail
      T. This results in log corruption errors and recovery failure.
      
      Since the tail overwrite results from otherwise correct runtime
      behavior, it is up to log recovery to try and deal with this
      situation. Update log recovery tail verification to run a CRC pass
      from the first record past the tail to the head. This facilitates
      error detection at T and moves the recovery tail to the first good
      record past H' (similar to truncating the head on torn write
      detection). If corruption is detected beyond the range possibly
      affected by the max number of iclogs, the log is legitimately
      corrupted and log recovery failure is expected.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • xfs: always verify the log tail during recovery · e34b72a2
      Brian Foster authored
      commit 5297ac1f upstream.
      
      Log tail verification currently only occurs when torn writes are
      detected at the head of the log. This was introduced because a
      change in the head block due to torn writes can lead to a change in
      the tail block (each log record header references the current tail)
      and the tail block should be verified before log recovery proceeds.
      
      Tail corruption is possible outside of torn write scenarios,
      however. For example, partial log writes can be detected and cleared
      during the initial head/tail block discovery process. If the partial
      write coincides with a tail overwrite, the log tail is corrupted and
      recovery fails.
      
      To facilitate correct handling of log tail overwrites, update log
      recovery to always perform tail verification. This is necessary to
      detect potential tail overwrite conditions when torn writes may not
      have occurred. This changes normal (i.e., no torn writes) recovery
      behavior slightly to detect and return CRC related errors near the
      tail before actual recovery starts.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • xfs: fix recovery failure when log record header wraps log end · 35093926
      Brian Foster authored
      commit 284f1c2c upstream.
      
      The high-level log recovery algorithm consists of two loops that
      walk the physical log and process log records from the tail to the
      head. The first loop handles the case where the tail is beyond the
      head and processes records up to the end of the physical log. The
      subsequent loop processes records from the beginning of the physical
      log to the head.
      
      Because log records can wrap around the end of the physical log, the
      first loop mentioned above must handle this case appropriately.
      Records are processed from in-core buffers, which means that this
      algorithm must split the reads of such records into two partial
      I/Os: 1.) from the beginning of the record to the end of the log and
      2.) from the beginning of the log to the end of the record. This is
      further complicated by the fact that the log record header and log
      record data are read into independent buffers.
      
      The current handling of each buffer correctly splits the reads when
      either the header or data starts before the end of the log and wraps
      around the end. The data read does not correctly handle the case
      where the prior header read wrapped or ends on the physical log end
      boundary. blk_no is incremented to or beyond the log end after the
      header read to point to the record data, but the split data read
      logic triggers, attempts to read from an invalid log block and
      ultimately causes log recovery to fail. This can be reproduced
      fairly reliably via xfstests tests generic/047 and generic/388 with
      large iclog sizes (256k) and small (10M) logs.
      
      If the record header read has pushed beyond the end of the physical
      log, the subsequent data read is actually contiguous. Update the
      data read logic to detect the case where blk_no has wrapped, mod it
      against the log size to read from the correct address and issue one
      contiguous read for the log data buffer. The log record is processed
      as normal from the buffer(s), the loop exits after the current
      iteration and the subsequent loop picks up with the first new record
      after the start of the log.
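
      The core of the fix is a wrap check before the data read; a rough
      sketch (variable names approximate, error paths trimmed):

          /* the header read may have pushed blk_no to or past the log end */
          if (blk_no + bblks <= log->l_logBBsize ||
              blk_no >= log->l_logBBsize) {
                  /* data is contiguous: mod back into the physical log */
                  rblk_no = do_mod(blk_no, log->l_logBBsize);
                  error = xlog_bread(log, rblk_no, bblks, dbp, &offset);
                  if (error)
                          goto bread_err2;
          } else {
                  /* record data truly straddles the log end: keep the
                   * existing two-part read */
          }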
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>