26 Sep, 2022 (40 commits)
    • btrfs: make tree_search_for_insert return extent_state · e349fd3b
      Josef Bacik authored
      We use this to search for an extent state, or return the nodes we need
      to insert a new extent state.  This means we have the following pattern
      
      node = tree_search_for_insert();
      if (!node) {
      	/* alloc and insert. */
      	goto again;
      }
      state = rb_entry(node, struct extent_state, rb_node);
      
      we don't use the node for anything else.  Making
      tree_search_for_insert() return the extent_state means we can drop the
      rb_node and clean this up by eliminating the rb_entry.
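
      After the change the call site reduces to something of this shape (a
      sketch, not the exact hunk):

      state = tree_search_for_insert();
      if (!state) {
      	/* alloc and insert. */
      	goto again;
      }
      /* Use state directly, no rb_entry() needed. */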
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: make tree_search return struct extent_state · aa852dab
      Josef Bacik authored
      We have a consistent pattern of
      
      n = tree_search();
      if (!n)
      	goto out;
      state = rb_entry(n, struct extent_state, rb_node);
      while (state) {
      	/* do something. */
      }
      
      which is a bit redundant.  If we make tree_search return the state we
      can simply have
      
      state = tree_search();
      while (state) {
      	/* do something. */
      }
      
      which cleans up the code quite a bit.
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: use next_state instead of rb_next where we can · ccaeff92
      Josef Bacik authored
      We can simplify a lot of these functions where we have to cycle through
      extent_state's by simply using next_state() instead of rb_next().  In
      many spots this allows us to do things like
      
      while (state) {
      	/* whatever */
      	state = next_state(state);
      }
      
      instead of
      
      while (1) {
      	state = rb_entry(n, struct extent_state, rb_node);
      	n = rb_next(n);
      	if (!n)
      		break;
      }
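
      The helper itself is presumably just a thin wrapper over rb_next(),
      along these lines (a sketch, not quoted from the patch):

      static inline struct extent_state *next_state(struct extent_state *state)
      {
      	struct rb_node *next = rb_next(&state->rb_node);

      	return next ? rb_entry(next, struct extent_state, rb_node) : NULL;
      }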
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: remove struct tree_entry in extent-io-tree.c · 071d19f5
      Josef Bacik authored
      This existed when we overloaded the tree manipulation functions for both
      the extent_io_tree and the extent buffer tree.  However the extent
      buffers are now stored in a radix tree, so we no longer need this
      abstraction.  Remove struct tree_entry and use extent_state directly
      instead.
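
      For reference, the removed wrapper was a minimal rb tree node of
      roughly this shape (reconstructed, not quoted from the patch):

      struct tree_entry {
      	u64 start;
      	u64 end;
      	struct rb_node rb_node;
      };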
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: unexport all the temporary exports for extent-io-tree.c · a4055213
      Josef Bacik authored
      Now that we've moved everything we can unexport all the temporary
      exports, move the random helpers, and mark everything as static again.
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: unexport btrfs_debug_check_extent_io_range · d8038a1f
      Josef Bacik authored
      We no longer need to export this as all users are in extent-io-tree.c,
      remove it from the header and put it into extent-io-tree.c.
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: move core extent_io_tree functions to extent-io-tree.c · e3974c66
      Josef Bacik authored
      This is still huge, but unfortunately I cannot make it smaller without
      renaming tree_search() and changing all the callers to use the new name,
      then moving those chunks and then changing the name back.  This feels
      like too much churn for code movement, so I've limited this to only
      things that called tree_search().  With this patch all of the
      extent_io_tree code is now in extent-io-tree.c.
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: move a few exported extent_io_tree helpers to extent-io-tree.c · 38830018
      Josef Bacik authored
      These are the last few helpers that do not rely on tree_search() and
      whose other helpers are exported and in extent-io-tree.c already.  Move
      these across now in order to make the core move smaller.
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: temporarily export and then move extent state helpers · 04eba893
      Josef Bacik authored
      In order to avoid moving all of the related code at once temporarily
      export all of the extent state related helpers.  Then move these helpers
      into extent-io-tree.c.  We will clean up the exports and make them
      static in followup patches.
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: temporarily export and move core extent_io_tree tree functions · 91af24e4
      Josef Bacik authored
      A lot of the various internals of extent_io_tree call these two
      functions to insert into or search the rb tree for entries, so
      temporarily export them and then move them to extent-io-tree.c.  We
      can't move tree_search() without renaming it, and I don't want to
      introduce a bunch of churn just to do that, so move these functions
      first and then we can move a few big functions and then the remaining
      users of tree_search().
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: move btrfs_debug_check_extent_io_range into extent-io-tree.c · 6962541e
      Josef Bacik authored
      This helper is used by a lot of the core extent_io_tree helpers, so
      temporarily export it and move it into extent-io-tree.c in order to make
      it straightforward to migrate the helpers in batches.
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: export wait_extent_bit · ec39e39b
      Josef Bacik authored
      This is used by the subpage code in addition to lock_extent_bits, so
      export it so we can move it out of extent_io.c.
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: move simple extent bit helpers out of extent_io.c · a6631887
      Josef Bacik authored
      These are just variants of and wrappers around the actual workhorses of
      the extent state.  Extract these out of extent_io.c.
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: convert BUG_ON(EXTENT_BIT_LOCKED) checks to ASSERT's · ad795329
      Josef Bacik authored
      We only call these functions from the qgroup code, which doesn't call
      them with EXTENT_BIT_LOCKED.  These are BUG_ON()'s that exist to keep us
      developers from using these functions with EXTENT_BIT_LOCKED, so convert
      them to ASSERT()'s.
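
      The conversion is of this shape (illustrative, not the exact hunks):

      /* Before: */
      BUG_ON(bits & EXTENT_BIT_LOCKED);
      /* After: */
      ASSERT(!(bits & EXTENT_BIT_LOCKED));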
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: move extent state init and alloc functions to their own file · 83cf709a
      Josef Bacik authored
      Start cleaning up extent_io.c by moving the extent state code out of it.
      This patch starts with the extent state allocation code and the
      extent_io_tree init code.
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: temporarily export alloc_extent_state helpers · c45379a2
      Josef Bacik authored
      We're going to move this code in stages, but while we're doing that we
      need to export these helpers so we can more easily move the code into
      the new file.
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: separate out the eb and extent state leak helpers · a40246e8
      Josef Bacik authored
      Currently we have the add/del functions generic so that we can use them
      for both extent buffers and extent states.  We want to separate this
      code however, so separate these helpers into per-object helpers in
      anticipation of the split.
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: separate out the extent state and extent buffer init code · a62a3bd9
      Josef Bacik authored
      In order to help separate the extent buffer from the extent io tree code
      we need to break up the init functions.
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: use find_first_extent_bit in btrfs_clean_io_failure · cdca85b0
      Josef Bacik authored
      Currently we're using find_first_extent_bit_state to check if our state
      contains the given failrec range, however this is more of an internal
      extent_io_tree helper and is technically unsafe to use because we're
      accessing the state outside of the extent_io_tree lock.

      Instead use the normal helper find_first_extent_bit, which returns the
      range of the extent state that find_first_extent_bit_state finds, and
      use that range to do our sanity checking.
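
      The new check is of roughly this shape (the bits and field names here
      are assumptions, not the exact hunk):

      u64 range_start, range_end;

      /* Returns non-zero when no matching extent state exists. */
      if (find_first_extent_bit(failure_tree, failrec->start,
      			  &range_start, &range_end,
      			  EXTENT_LOCKED | EXTENT_DIRTY, NULL))
      	goto out;
      /* Sanity check: the returned range must cover the failrec. */
      if (range_start > failrec->start ||
          range_end < failrec->start + failrec->len - 1)
      	goto out;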
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: convert the io_failure_tree to a plain rb_tree · 87c11705
      Josef Bacik authored
      We still have this oddity of stashing the io_failure_record in the
      extent state for the io_failure_tree, which is leftover from when we
      used to stuff private pointers in extent_io_trees.
      
      However this doesn't make a lot of sense for the io failure records; we
      can simply use a normal rb_tree for this.  This will allow us to further
      simplify the extent_io_tree code by removing the io_failure_rec pointer
      from the extent state.
      
      Convert the io_failure_tree to an rb tree + spinlock in the inode, and
      then use our rb tree simple helpers to insert and find failed records.
      This greatly cleans up this code and makes it easier to separate out the
      extent_io_tree code.
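
      A lookup then becomes a sketch like the following (the member names are
      assumptions):

      struct rb_node *node;

      spin_lock(&inode->io_failure_lock);
      node = rb_simple_search(&inode->io_failure_tree, start);
      failrec = node ? rb_entry(node, struct io_failure_record, rb_node) : NULL;
      spin_unlock(&inode->io_failure_lock);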
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: unexport internal failrec functions · a2061748
      Josef Bacik authored
      These are internally used functions and are not used outside of
      extent_io.c.
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: rename clean_io_failure and remove extraneous args · 0d0a762c
      Josef Bacik authored
      This is exported, so rename it to btrfs_clean_io_failure.  Additionally
      we are passing in the io trees and such from the inode, so instead of
      doing all that simply pass in the inode itself and get all the
      components we need directly inside of btrfs_clean_io_failure.
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: add KCSAN annotations for unlocked access to block_rsv->full · 748f553c
      David Sterba authored
      KCSAN reports that there's unlocked access mixed with locked access,
      which is technically correct but is not a bug.  To avoid false alerts,
      at least from KCSAN, add an annotation and use a wrapper whenever ->full
      is accessed for read outside of the lock.

      It is used as a fast check and is only advisory.  In the worst case the
      block reserve is found !full and becomes full in the meantime, but this
      is properly handled.

      Depending on the value of ->full, btrfs_block_rsv_release decides
      where to return the reservation.  block_rsv_release_bytes handles a
      NULL pointer for block_rsv, and if it's not NULL then it double checks
      the full status under a lock.
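
      A sketch of the annotated reader (the helper name is an assumption):

      /* Unlocked, advisory read of ->full; data_race() silences KCSAN. */
      static inline bool btrfs_block_rsv_full(const struct btrfs_block_rsv *rsv)
      {
      	return data_race(rsv->full);
      }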
      
      Link: https://lore.kernel.org/linux-btrfs/CAAwBoOJDjei5Hnem155N_cJwiEkVwJYvgN-tQrwWbZQGhFU=cA@mail.gmail.com/
      Link: https://lore.kernel.org/linux-btrfs/YvHU/vsXd7uz5V6j@hungrycats.org
      Reported-by: Zygo Blaxell <ce3g8jdj@umail.furryterror.org>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: remove useless used space increment during space reservation · b0b47a38
      Filipe Manana authored
      At space-info.c:__reserve_bytes(), we increment the 'used' variable, but
      then we don't use the variable anymore, making the increment pointless.
      The increment became useless with commit 2e294c60 ("btrfs: simplify
      the logic in need_preemptive_flushing"), so just remove it.
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: zoned: refactor device checks in btrfs_check_zoned_mode · 650c8a9c
      Christoph Hellwig authored
      btrfs_check_zoned_mode is really hard to follow, mostly due to the
      fact that a lot of the checks use duplicate conditions after support
      for zone emulation for conventional devices on file systems with the
      ZONED flag was added.  Fix this by factoring out the check for host
      managed devices for !ZONED file systems into a separate helper and
      then simplifying the rest of the code.
      Reviewed-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: qgroup: fix a typo in a comment · 03ad2531
      Christophe JAILLET authored
      Add a missing 'r': s/qgoup/qgroup/.  Codespell does not catch that for
      some reason.
      Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: remove btrfs_bit_radix_cachep declaration · 6ea1a526
      Gaosheng Cui authored
      The btrfs_bit_radix_cachep slab was removed in
      commit 45c06543 ("Btrfs: remove unused btrfs_bit_radix slab"),
      but the declaration remained, so remove it.
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Gaosheng Cui <cuigaosheng1@huawei.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: skip subtree scan if it's too high to avoid low stall in btrfs_commit_transaction() · 011b46c3
      Qu Wenruo authored
      Btrfs qgroup has a long history of bringing a performance penalty in
      btrfs_commit_transaction().

      Although we tried our best to mitigate such impact, there is still an
      unsolved call site, btrfs_drop_snapshot().

      This function will find the highest shared tree block and modify its
      extent ownership to do a subvolume/snapshot dropping.

      Such a change will affect the whole subtree, and cause tons of qgroup
      dirty extents and stall btrfs_commit_transaction().

      To avoid such problems, here we introduce a new sysfs interface,
      /sys/fs/btrfs/<uuid>/qgroups/drop_subtree_threshold, to determine
      whether, and at which level, we should skip qgroup accounting for
      subtree dropping.
      
      The default value is BTRFS_MAX_LEVEL, thus every subtree drop will go
      through qgroup accounting, to ensure qgroup numbers are kept as
      consistent as possible.
      
      For performance sensitive cases, this adds a way to lower the value,
      for example to 3, so that dropping any subtree at or above level 3
      marks qgroup inconsistent and skips the accounting.

      The cost is obvious: the qgroup numbers are no longer consistent, but
      at least performance is more reasonable, and users have the control.
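
      The check in the subtree accounting path is of roughly this shape
      (field and helper names are assumptions):

      if (root_level >= fs_info->qgroup_drop_subtree_thres) {
      	/* Mark qgroup inconsistent and skip the expensive subtree scan. */
      	qgroup_mark_inconsistent(fs_info);
      	return 0;
      }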
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: introduce BTRFS_QGROUP_RUNTIME_FLAG_NO_ACCOUNTING to skip qgroup accounting · e15e9f43
      Qu Wenruo authored
      The new flag will make btrfs qgroup skip all of its time consuming
      qgroup accounting.

      Its lifespan is the same as BTRFS_QGROUP_RUNTIME_FLAG_CANCEL_RESCAN:
      it only gets cleared after a new rescan.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: introduce BTRFS_QGROUP_RUNTIME_FLAG_CANCEL_RESCAN · e562a8bd
      Qu Wenruo authored
      Introduce a new runtime flag, BTRFS_QGROUP_RUNTIME_FLAG_CANCEL_RESCAN,
      which will inform qgroup rescan to cancel its work asynchronously.
      
      This is to address the window when an operation makes qgroup numbers
      inconsistent (like qgroup inheriting) while a qgroup rescan is running.
      
      In that case, the qgroup inconsistent flag would be cleared when the
      qgroup rescan finishes.
      But we changed the ownership of some extents, which means the rescan is
      already meaningless, and the qgroup inconsistent flag should not be
      cleared.

      With the new flag, each time we set the INCONSISTENT flag, we also set
      this new flag to inform any running qgroup rescan to exit immediately,
      leaving the INCONSISTENT flag in place.
      
      The new runtime flag can only be cleared when a new rescan is started.
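
      In the rescan worker this amounts to a check of roughly this shape
      (names are assumptions):

      if (fs_info->qgroup_flags & BTRFS_QGROUP_RUNTIME_FLAG_CANCEL_RESCAN) {
      	/* Stop early and leave the INCONSISTENT flag set. */
      	stopped = true;
      	break;
      }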
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: introduce BTRFS_QGROUP_STATUS_FLAGS_MASK for later expansion · e71564c0
      Qu Wenruo authored
      Currently we only have 3 qgroup flags:
      
      - BTRFS_QGROUP_STATUS_FLAG_ON
      - BTRFS_QGROUP_STATUS_FLAG_RESCAN
      - BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT
      
      These flags match the on-disk flags used in btrfs_qgroup_status.
      
      But we're going to introduce extra runtime flags which will not reach
      disks.
      
      So here we introduce a new mask, BTRFS_QGROUP_STATUS_FLAGS_MASK, to
      make sure only those flags can reach disks.
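
      The mask is simply the OR of the three on-disk flags, so that
      runtime-only bits can be filtered out before the status item is
      written (a sketch of the definition):

      #define BTRFS_QGROUP_STATUS_FLAGS_MASK	(BTRFS_QGROUP_STATUS_FLAG_ON |		\
      					 BTRFS_QGROUP_STATUS_FLAG_RESCAN |	\
      					 BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT)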
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: sysfs: introduce global qgroup attribute group · ed2e35d8
      Qu Wenruo authored
      Although we already have an info kobject for each qgroup, we don't have
      global qgroup attributes to show things like the enabled or
      inconsistent status flags.

      Add this qgroups attribute group, whose members are read-only
      attributes that show the qgroup status flags in human readable form.
      
      The path is:
        /sys/fs/btrfs/<uuid>/qgroups/enabled
        /sys/fs/btrfs/<uuid>/qgroups/inconsistent
      
      The output is simple, just 1 or 0.
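
      For example (values illustrative):

      	$ cat /sys/fs/btrfs/<uuid>/qgroups/enabled
      	1
      	$ cat /sys/fs/btrfs/<uuid>/qgroups/inconsistent
      	0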
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: make fiemap more efficient and accurate reporting extent sharedness · ac3c0d36
      Filipe Manana authored
      The current fiemap implementation does not scale very well with the number
      of extents a file has. This is both because the main algorithm to find out
      the extents has a high algorithmic complexity and because for each extent
      we have to check if it's shared. This second part, checking if an extent
      is shared, is significantly improved by the two previous patches in this
      patchset, while the first part is improved by this specific patch. Every
      now and then we get reports from users mentioning fiemap is too slow or
      even unusable for files with a very large number of extents, such as the
      two recent reports referred to by the Link tags at the bottom of this
      change log.
      
      To understand why the part of finding which extents a file has is very
      inefficient, consider the example of doing a full ranged fiemap against
      a file that has over 100K extents (normal for example for a file with
      more than 10G of data and using compression, which limits the extent size
      to 128K). When we enter fiemap at extent_fiemap(), the following happens:
      
      1) Before entering the main loop, we call get_extent_skip_holes() to get
         the first extent map. This leads us to btrfs_get_extent_fiemap(), which
         in turn calls btrfs_get_extent(), to find the first extent map that
         covers the file range [0, LLONG_MAX).
      
         btrfs_get_extent() will first search the inode's extent map tree, to
         see if we have an extent map there that covers the range. If it does
         not find one, then it will search the inode's subvolume b+tree for a
         fitting file extent item. After finding the file extent item, it will
         allocate an extent map, fill it in with information extracted from the
         file extent item, and add it to the inode's extent map tree (which
         requires a search for insertion in the tree).
      
      2) Then we enter the main loop at extent_fiemap(), emit the details of
         the extent, and call again get_extent_skip_holes(), with a start
         offset matching the end of the extent map we previously processed.
      
         We end up at btrfs_get_extent() again, which will search the extent
         map tree and then search the subvolume b+tree for a file extent item
         if we could not find an extent map in the extent map tree. We
         allocate an extent map, fill it in with the details in the file
         extent item, and then insert it into the extent map tree (yet
         another search in this tree).
      
      3) The second step is repeated over and over, until we have processed the
         whole file range. Each iteration ends at btrfs_get_extent(), which
         does a red black tree search on the extent map tree, then searches the
         subvolume b+tree, allocates an extent map and then does another search
         in the extent map tree in order to insert the extent map.
      
         In the best scenario we have all the extent maps already in the
         extent map tree, and so for each extent we do a single search on a
         red black tree, so we have a complexity of O(n log n).
      
         In the worst scenario we don't have any extent map already loaded in
         the extent map tree, or have very few already there. In this case the
         complexity is much higher since we do:
      
         - A red black tree search on the extent map tree, which has O(log n)
           complexity, initially very fast since the tree is empty or very
           small, but as we end up allocating extent maps and adding them to
           the tree when we don't find them there, each subsequent search on
           the tree gets slower, since it's getting bigger and bigger after
           each iteration.
      
         - A search on the subvolume b+tree, also O(log n) complexity, but it
           has items for all inodes in the subvolume, not just items for our
           inode. Plus on a filesystem with concurrent operations on other
           inodes, we can block doing the search due to lock contention on
           b+tree nodes/leaves.
      
         - Allocate an extent map - this can block, and can also fail if we
           are under serious memory pressure.
      
         - Do another search on the extent maps red black tree, with the goal
           of inserting the extent map we just allocated. Again, after every
           iteration this tree is getting bigger by 1 element, so after many
           iterations the searches are slower and slower.
      
         - We will not need the allocated extent map anymore, so it's pointless
           to add it to the extent map tree. It's just wasting time and memory.
      
         In short we end up searching the extent map tree multiple times, on a
         tree that is growing bigger and bigger after each iteration. And
         besides that we visit the same leaf of the subvolume b+tree many times,
         since a leaf with the default size of 16K can easily have more than 200
         file extent items.
      
      This is very inefficient overall. This patch changes the algorithm to
      instead iterate over the subvolume b+tree, visiting each leaf only once,
      and only searching in the extent map tree for file ranges that have holes
      or prealloc extents, in order to figure out if we have delalloc there.
      It will never allocate an extent map and add it to the extent map tree.
      This is very similar to what was previously done for lseek's hole and
      data seeking features.
      
      Also, the current implementation relying on extent maps for figuring out
      which extents we have is not correct. This is because extent maps can be
      merged even if they represent different extents - we do this to minimize
      memory utilization and keep extent map trees smaller. For example if we
      have two extents that are contiguous on disk, once we load the two extent
      maps, they get merged into a single one - however if only one of the
      extents is shared, we end up reporting both as shared or both as not
      shared, which is incorrect.
      
      This reproducer triggers that bug:
      
          $ cat fiemap-bug.sh
          #!/bin/bash
      
          DEV=/dev/sdj
          MNT=/mnt/sdj
      
          mkfs.btrfs -f $DEV
          mount $DEV $MNT
      
          # Create a file with two 256K extents.
          # Since there is no other write activity, they will be contiguous,
          # and their extent maps merged, despite having two distinct extents.
          xfs_io -f -c "pwrite -S 0xab 0 256K" \
                    -c "fsync" \
                    -c "pwrite -S 0xcd 256K 256K" \
                    -c "fsync" \
                    $MNT/foo
      
          # Now clone only the second extent into another file.
          xfs_io -f -c "reflink $MNT/foo 256K 0 256K" $MNT/bar
      
          # Filefrag will report a single 512K extent, and say it's not shared.
          echo
          filefrag -v $MNT/foo
      
          umount $MNT
      
      Running the reproducer:
      
          $ ./fiemap-bug.sh
          wrote 262144/262144 bytes at offset 0
          256 KiB, 64 ops; 0.0038 sec (65.479 MiB/sec and 16762.7030 ops/sec)
          wrote 262144/262144 bytes at offset 262144
          256 KiB, 64 ops; 0.0040 sec (61.125 MiB/sec and 15647.9218 ops/sec)
          linked 262144/262144 bytes at offset 0
          256 KiB, 1 ops; 0.0002 sec (1.034 GiB/sec and 4237.2881 ops/sec)
      
          Filesystem type is: 9123683e
          File size of /mnt/sdj/foo is 524288 (128 blocks of 4096 bytes)
           ext:     logical_offset:        physical_offset: length:   expected: flags:
             0:        0..     127:       3328..      3455:    128:             last,eof
          /mnt/sdj/foo: 1 extent found
      
      We end up reporting that we have a single 512K extent that is not
      shared, however we have two 256K extents, and the second one is shared.
      Changing the reproducer to clone instead the first extent into file
      'bar' makes us report a single 512K extent that is shared, which is
      also incorrect since we have two 256K extents and only the first one is
      shared.
      
      This patch is part of a larger patchset that is comprised of the following
      patches:
      
          btrfs: allow hole and data seeking to be interruptible
          btrfs: make hole and data seeking a lot more efficient
          btrfs: remove check for impossible block start for an extent map at fiemap
          btrfs: remove zero length check when entering fiemap
          btrfs: properly flush delalloc when entering fiemap
          btrfs: allow fiemap to be interruptible
          btrfs: rename btrfs_check_shared() to a more descriptive name
          btrfs: speedup checking for extent sharedness during fiemap
          btrfs: skip unnecessary extent buffer sharedness checks during fiemap
          btrfs: make fiemap more efficient and accurate reporting extent sharedness
      
      The patchset was tested on a machine running a non-debug kernel
      (Debian's default config); the tests below compare a branch without the
      patchset against the same branch with the whole patchset applied.

      The following test is for a large compressed file without holes:
      
          $ cat fiemap-perf-test.sh
          #!/bin/bash
      
          DEV=/dev/sdi
          MNT=/mnt/sdi
      
          mkfs.btrfs -f $DEV
          mount -o compress=lzo $DEV $MNT
      
          # 40G gives 327680 128K file extents (due to compression).
          xfs_io -f -c "pwrite -S 0xab -b 1M 0 40G" $MNT/foobar
      
          umount $MNT
          mount -o compress=lzo $DEV $MNT
      
          start=$(date +%s%N)
          filefrag $MNT/foobar
          end=$(date +%s%N)
          dur=$(( (end - start) / 1000000 ))
          echo "fiemap took $dur milliseconds (metadata not cached)"
      
          start=$(date +%s%N)
          filefrag $MNT/foobar
          end=$(date +%s%N)
          dur=$(( (end - start) / 1000000 ))
          echo "fiemap took $dur milliseconds (metadata cached)"
      
          umount $MNT
      
      Before patchset:
      
          $ ./fiemap-perf-test.sh
          (...)
          /mnt/sdi/foobar: 327680 extents found
          fiemap took 3597 milliseconds (metadata not cached)
          /mnt/sdi/foobar: 327680 extents found
          fiemap took 2107 milliseconds (metadata cached)
      
      After patchset:
      
          $ ./fiemap-perf-test.sh
          (...)
          /mnt/sdi/foobar: 327680 extents found
          fiemap took 1214 milliseconds (metadata not cached)
          /mnt/sdi/foobar: 327680 extents found
          fiemap took 684 milliseconds (metadata cached)
      
      That's a speedup of about 3x for both cases (no metadata cached and all
      metadata cached).
      
      The test provided by Pavel (first Link tag at the bottom), which uses
      files with a large number of holes, was also used to measure the gains.
      It consists of a small C program and a shell script to invoke it.
      The C program is the following:
      
          $ cat pavels-test.c
          #include <stdio.h>
          #include <unistd.h>
          #include <stdlib.h>
          #include <fcntl.h>
      
          #include <sys/stat.h>
          #include <sys/time.h>
          #include <sys/ioctl.h>
      
          #include <linux/fs.h>
          #include <linux/fiemap.h>
      
          #define FILE_INTERVAL (1<<13) /* 8 KiB */
      
          long long interval(struct timeval t1, struct timeval t2)
          {
              long long val = 0;
              val += (t2.tv_usec - t1.tv_usec);
              val += (t2.tv_sec - t1.tv_sec) * 1000 * 1000;
              return val;
          }
      
          int main(int argc, char **argv)
          {
              struct fiemap fiemap = {};
              struct timeval t1, t2;
              char data = 'a';
              struct stat st;
              int fd, off, file_size = FILE_INTERVAL;
      
              if (argc != 3 && argc != 2) {
                      printf("usage: %s <path> [size]\n", argv[0]);
                      return 1;
              }
      
              if (argc == 3)
                      file_size = atoi(argv[2]);
              if (file_size < FILE_INTERVAL)
                      file_size = FILE_INTERVAL;
              file_size -= file_size % FILE_INTERVAL;
      
              fd = open(argv[1], O_RDWR | O_CREAT | O_TRUNC, 0644);
              if (fd < 0) {
                  perror("open");
                  return 1;
              }
      
              for (off = 0; off < file_size; off += FILE_INTERVAL) {
                  if (pwrite(fd, &data, 1, off) != 1) {
                      perror("pwrite");
                      close(fd);
                      return 1;
                  }
              }
      
              if (ftruncate(fd, file_size)) {
                  perror("ftruncate");
                  close(fd);
                  return 1;
              }
      
              if (fstat(fd, &st) < 0) {
                  perror("fstat");
                  close(fd);
                  return 1;
              }
      
              printf("size: %ld\n", st.st_size);
              printf("actual size: %ld\n", st.st_blocks * 512);
      
              fiemap.fm_length = FIEMAP_MAX_OFFSET;
              gettimeofday(&t1, NULL);
              if (ioctl(fd, FS_IOC_FIEMAP, &fiemap) < 0) {
                  perror("fiemap");
                  close(fd);
                  return 1;
              }
              gettimeofday(&t2, NULL);
      
              printf("fiemap: fm_mapped_extents = %d\n",
                     fiemap.fm_mapped_extents);
              printf("time = %lld us\n", interval(t1, t2));
      
              close(fd);
              return 0;
          }
      
          $ gcc -o pavels-test pavels-test.c
      
      And the wrapper shell script:
      
          $ cat fiemap-pavels-test.sh
      
          #!/bin/bash
      
          DEV=/dev/sdi
          MNT=/mnt/sdi
      
          mkfs.btrfs -f -O no-holes $DEV
          mount $DEV $MNT
      
          echo
          echo "*********** 256M ***********"
          echo
      
          ./pavels-test $MNT/testfile $((1 << 28))
          echo
          ./pavels-test $MNT/testfile $((1 << 28))
      
          echo
          echo "*********** 512M ***********"
          echo
      
          ./pavels-test $MNT/testfile $((1 << 29))
          echo
          ./pavels-test $MNT/testfile $((1 << 29))
      
          echo
          echo "*********** 1G ***********"
          echo
      
          ./pavels-test $MNT/testfile $((1 << 30))
          echo
          ./pavels-test $MNT/testfile $((1 << 30))
      
          umount $MNT
      
      Running his reproducer before applying the patchset:
      
          *********** 256M ***********
      
          size: 268435456
          actual size: 134217728
          fiemap: fm_mapped_extents = 32768
          time = 4003133 us
      
          size: 268435456
          actual size: 134217728
          fiemap: fm_mapped_extents = 32768
          time = 4895330 us
      
          *********** 512M ***********
      
          size: 536870912
          actual size: 268435456
          fiemap: fm_mapped_extents = 65536
          time = 30123675 us
      
          size: 536870912
          actual size: 268435456
          fiemap: fm_mapped_extents = 65536
          time = 33450934 us
      
          *********** 1G ***********
      
          size: 1073741824
          actual size: 536870912
          fiemap: fm_mapped_extents = 131072
          time = 224924074 us
      
          size: 1073741824
          actual size: 536870912
          fiemap: fm_mapped_extents = 131072
          time = 217239242 us
      
      Running it after applying the patchset:
      
          *********** 256M ***********
      
          size: 268435456
          actual size: 134217728
          fiemap: fm_mapped_extents = 32768
          time = 29475 us
      
          size: 268435456
          actual size: 134217728
          fiemap: fm_mapped_extents = 32768
          time = 29307 us
      
          *********** 512M ***********
      
          size: 536870912
          actual size: 268435456
          fiemap: fm_mapped_extents = 65536
          time = 58996 us
      
          size: 536870912
          actual size: 268435456
          fiemap: fm_mapped_extents = 65536
          time = 59115 us
      
          *********** 1G ***********
      
          size: 1073741824
          actual size: 536870912
          fiemap: fm_mapped_extents = 116251
          time = 124141 us
      
          size: 1073741824
          actual size: 536870912
          fiemap: fm_mapped_extents = 131072
          time = 119387 us
      
      The speedup is massive, both on the first fiemap call and on the second
      one as well, as his test creates files with many holes and small extents
      (every extent follows a hole and precedes another hole).
      
      For the 256M file we go from 4 seconds down to 29 milliseconds in the
      first run, and then from 4.9 seconds down to 29 milliseconds again in the
      second run, a speedup of 138x and 169x, respectively.
      
      For the 512M file we go from 30.1 seconds down to 59 milliseconds in the
      first run, and then from 33.5 seconds down to 59 milliseconds again in the
      second run, a speedup of 510x and 568x, respectively.
      
      For the 1G file, we go from 225 seconds down to 124 milliseconds in the
      first run, and then from 217 seconds down to 119 milliseconds in the
      second run, a speedup of 1815x and 1824x, respectively.
      Reported-by: Pavel Tikhomirov <ptikhomirov@virtuozzo.com>
      Link: https://lore.kernel.org/linux-btrfs/21dd32c6-f1f9-f44a-466a-e18fdc6788a7@virtuozzo.com/
      Reported-by: Dominique MARTINET <dominique.martinet@atmark-techno.com>
      Link: https://lore.kernel.org/linux-btrfs/Ysace25wh5BbLd5f@atmark-techno.com/
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: skip unnecessary extent buffer sharedness checks during fiemap · b8f164e3
      Filipe Manana authored
      During fiemap, for each file extent we find, we must check if it's shared
      or not. The sharedness check starts by verifying if the extent is directly
      shared (its refcount in the extent tree is > 1), and if it is not directly
      shared, then we will check if every node in the subvolume b+tree leading
      from the root to the leaf that has the file extent item (in reverse order),
      is shared (through snapshots).
      
      However this second step is not needed if our extent was created in a
      transaction more recent than the last transaction where a snapshot of the
      inode's root happened, because it can't be shared indirectly (through
      shared subtrees) without a snapshot created in a more recent transaction.
      
      So grab the generation of the extent from the extent map and pass it to
      btrfs_is_data_extent_shared(), which will skip this second phase when the
      generation is more recent than the root's last snapshot value. Note that
      we skip this optimization if the extent map is the result of merging 2
      or more extent maps, because in this case its generation is the maximum
      of the generations of all merged extent maps.
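
      Inside btrfs_is_data_extent_shared() the early exit, in the loop that
      walks up the path, is of roughly this shape (a sketch):

          /* Extent newer than the last snapshot: cannot be indirectly shared. */
          if (extent_gen > btrfs_root_last_snapshot(&root->root_item))
              break;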
      
      The fact that we use extent maps, and that they can be merged despite
      the underlying extents being distinct (different file extent items in
      the subvolume b+tree and different extent items in the extent b+tree),
      can result in some bugs when reporting shared extents. But this is a
      problem of the current implementation of fiemap relying on extent maps.
      One example where we get incorrect results is:
      
          $ cat fiemap-bug.sh
          #!/bin/bash
      
          DEV=/dev/sdj
          MNT=/mnt/sdj
      
          mkfs.btrfs -f $DEV
          mount $DEV $MNT
      
          # Create a file with two 256K extents.
          # Since there is no other write activity, they will be contiguous,
          # and their extent maps merged, despite having two distinct extents.
          xfs_io -f -c "pwrite -S 0xab 0 256K" \
                    -c "fsync" \
                    -c "pwrite -S 0xcd 256K 256K" \
                    -c "fsync" \
                    $MNT/foo
      
          # Now clone only the second extent into another file.
          xfs_io -f -c "reflink $MNT/foo 256K 0 256K" $MNT/bar
      
          # Filefrag will report a single 512K extent, and say it's not shared.
          echo
          filefrag -v $MNT/foo
      
          umount $MNT
      
      Running the reproducer:
      
          $ ./fiemap-bug.sh
          wrote 262144/262144 bytes at offset 0
          256 KiB, 64 ops; 0.0038 sec (65.479 MiB/sec and 16762.7030 ops/sec)
          wrote 262144/262144 bytes at offset 262144
          256 KiB, 64 ops; 0.0040 sec (61.125 MiB/sec and 15647.9218 ops/sec)
          linked 262144/262144 bytes at offset 0
          256 KiB, 1 ops; 0.0002 sec (1.034 GiB/sec and 4237.2881 ops/sec)
      
          Filesystem type is: 9123683e
          File size of /mnt/sdj/foo is 524288 (128 blocks of 4096 bytes)
           ext:     logical_offset:        physical_offset: length:   expected: flags:
             0:        0..     127:       3328..      3455:    128:             last,eof
          /mnt/sdj/foo: 1 extent found
      
      We end up reporting that we have a single 512K extent that is not
      shared, however we have two 256K extents, and the second one is shared.
      Changing the reproducer to clone instead the first extent into file
      'bar' makes us report a single 512K extent that is shared, which is
      also incorrect since we have two 256K extents and only the first one is
      shared.

      This is a problem that existed before this change, and remains after
      this change, as it can't be easily fixed. The next patch in the series
      reworks
      fiemap to primarily use file extent items instead of extent maps (except
      for checking for delalloc ranges), with the goal of improving its
      scalability and performance, but it also ends up fixing this particular
      bug caused by extent map merging.
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: speedup checking for extent sharedness during fiemap · 12a824dc
      Filipe Manana authored
      One of the most expensive tasks performed during fiemap is to check if
      an extent is shared. This task has two major steps:
      
      1) Check if the data extent is shared. This implies checking the extent
         item in the extent tree, checking delayed references, etc. If we
         find the data extent is directly shared, we terminate immediately;
      
      2) If the data extent is not directly shared (its extent item has a
         refcount of 1), then it may be shared if we have snapshots that share
         subtrees of the inode's subvolume b+tree. So we check if the leaf
         containing the file extent item is shared, then its parent node, then
         the parent node of the parent node, etc, until we reach the root node
         or we find one of them is shared - in which case we stop immediately.
      
      During fiemap we process the extents of a file from left to right, from
      file offset 0 to EOF. This means that we iterate b+tree leaves from left
      to right, which has the implication that we keep repeating that second
      step above several times for the same b+tree path of the inode's
      subvolume b+tree.
      
      For example, if we have two file extent items in leaf X, and the path to
      leaf X is A -> B -> C -> X, then when we try to determine if the data
      extent referenced by the first extent item is shared, we check if the data
      extent is shared - if it's not, then we check if leaf X is shared, if not,
      then we check if node C is shared, if not, then we check if node B is
      shared, if not, then we check if node A is shared. When we move to the
      next file extent item, after determining the data extent is not shared,
      we repeat the checks for X, C, B and A - doing all the expensive
      searches in the extent tree, delayed refs, etc. If we have thousands of
      file extents, then we keep repeating the sharedness checks for the same
      paths over and over.
      
      On a file that has no shared extents or only a small portion, it's easy
      to see that this scales terribly with the number of extents in the file
      and the sizes of the extent and subvolume b+trees.
      
      This change eliminates the repeated sharedness check on extent buffers
      by caching the results of the last path used. The results can be used as
      long as no snapshots were created since they were cached (for not shared
      extent buffers) or no roots were dropped since they were cached (for
      shared extent buffers). This greatly reduces the time spent by fiemap for
      files with thousands of extents and/or large extent and subvolume b+trees.
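
      The cache is small, one slot per possible level of the path (a sketch;
      struct and field names are assumptions):

          struct btrfs_backref_shared_cache_entry {
              u64 bytenr;
              u64 gen;         /* generation when the result was cached */
              bool is_shared;
          };

          struct btrfs_backref_shared_cache {
              /* One entry per level of the subvolume b+tree path. */
              struct btrfs_backref_shared_cache_entry entries[BTRFS_MAX_LEVEL];
          };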
      
      Example performance test:
      
          $ cat fiemap-perf-test.sh
          #!/bin/bash
      
          DEV=/dev/sdi
          MNT=/mnt/sdi
      
          mkfs.btrfs -f $DEV
          mount -o compress=lzo $DEV $MNT
      
          # 40G gives 327680 128K file extents (due to compression).
          xfs_io -f -c "pwrite -S 0xab -b 1M 0 40G" $MNT/foobar
      
          umount $MNT
          mount -o compress=lzo $DEV $MNT
      
          start=$(date +%s%N)
          filefrag $MNT/foobar
          end=$(date +%s%N)
          dur=$(( (end - start) / 1000000 ))
          echo "fiemap took $dur milliseconds (metadata not cached)"
      
          start=$(date +%s%N)
          filefrag $MNT/foobar
          end=$(date +%s%N)
          dur=$(( (end - start) / 1000000 ))
          echo "fiemap took $dur milliseconds (metadata cached)"
      
          umount $MNT
      
      Before this patch:
      
          $ ./fiemap-perf-test.sh
          (...)
          /mnt/sdi/foobar: 327680 extents found
          fiemap took 3597 milliseconds (metadata not cached)
          /mnt/sdi/foobar: 327680 extents found
          fiemap took 2107 milliseconds (metadata cached)
      
      After this patch:
      
          $ ./fiemap-perf-test.sh
          (...)
          /mnt/sdi/foobar: 327680 extents found
          fiemap took 1646 milliseconds (metadata not cached)
          /mnt/sdi/foobar: 327680 extents found
          fiemap took 698 milliseconds (metadata cached)
      
      That's about 2.2x faster when no metadata is cached, and about 3x faster
      when all metadata is cached. On a real filesystem with many other files,
      data, directories, etc, the b+trees will be 2 or 3 levels higher,
      therefore this optimization will have a higher impact.
      
      Several reports of a slow fiemap show up often, the two Link tags below
      refer to two recent reports of such slowness. This patch, together with
      the next ones in the series, is meant to address that.
      
      Link: https://lore.kernel.org/linux-btrfs/21dd32c6-f1f9-f44a-466a-e18fdc6788a7@virtuozzo.com/
      Link: https://lore.kernel.org/linux-btrfs/Ysace25wh5BbLd5f@atmark-techno.com/
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: rename btrfs_check_shared() to a more descriptive name · 8eedadda
      Filipe Manana authored
      The function btrfs_check_shared() is supposed to be used to check if a
      data extent is shared, but its name is too generic and may easily cause
      confusion in the sense that it may be used for metadata extents.
      
      So rename it to btrfs_is_data_extent_shared(), which will also make it
      less confusing after the next change that adds a backref lookup cache for
      the b+tree nodes that lead to the leaf that contains the file extent item
      that points to the target data extent.
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: allow fiemap to be interruptible · 09fbc1c8
      Filipe Manana authored
      Doing fiemap on a file with a very large number of extents can take a very
      long time, and we have reports of it being too slow (two recent examples
      in the Link tags below), so make it interruptible.
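
      The change amounts to a check of this shape in the main fiemap loop (a
      sketch):

          if (fatal_signal_pending(current)) {
              ret = -EINTR;
              break;
          }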
      
      Link: https://lore.kernel.org/linux-btrfs/21dd32c6-f1f9-f44a-466a-e18fdc6788a7@virtuozzo.com/
      Link: https://lore.kernel.org/linux-btrfs/Ysace25wh5BbLd5f@atmark-techno.com/
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: properly flush delalloc when entering fiemap · 33a86cfa
      Filipe Manana authored
      If the flag FIEMAP_FLAG_SYNC is passed to fiemap, it means all delalloc
      should be flushed and writeback complete. We call the generic helper
      fiemap_prep(), which does a filemap_write_and_wait() in case that flag
      is given; however, that is not enough if we have compression, because a
      single filemap_fdatawrite_range() only starts compression (in an async
      thread) and therefore returns before the compression is done and
      writeback is started.
      
      So make btrfs_fiemap() actually wait for all writeback to start and
      complete if FIEMAP_FLAG_SYNC is set. We start and wait for writeback
      on the whole possible file range, from 0 to LLONG_MAX, because that is
      what the generic code at fiemap_prep() does.
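
      The shape of the fix in btrfs_fiemap(), after fiemap_prep() (assumed
      form):

          if (fieinfo->fi_flags & FIEMAP_FLAG_SYNC) {
              /* Starts writeback (and compression) and waits for it all. */
              ret = btrfs_wait_ordered_range(inode, 0, LLONG_MAX);
              if (ret)
                  return ret;
          }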
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: remove zero length check when entering fiemap · 9a42bbae
      Filipe Manana authored
      There's no point in checking for a 0 length at extent_fiemap(), as
      before calling it we called fiemap_prep() at btrfs_fiemap(), which
      already checks for a zero length and returns the same -EINVAL error. So
      remove the pointless check.
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: remove check for impossible block start for an extent map at fiemap · f12eec9a
      Filipe Manana authored
      During fiemap we are testing if an extent map has a block start with a
      value of EXTENT_MAP_LAST_BYTE, but that is never set on an extent map,
      and never was according to git history. So remove that useless check.
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>