1. 27 Sep, 2022 2 commits
      ext4: minor defrag code improvements · d412df53
      Eric Whitney authored
      Modify the error returns for two file types that can't be defragged
      so the restriction is communicated clearly to the caller: return
      -ETXTBSY when the defrag code is applied to a swap file, and
      -EOPNOTSUPP when it is applied to a quota file (a sketch of these
      checks follows this entry).  Also move an extent tree search whose
      results are only occasionally required to the one call site that
      always needs them, improving efficiency, and fix a few typos.
      Signed-off-by: Eric Whitney <enwlinux@gmail.com>
      Link: https://lore.kernel.org/r/20220722163910.268564-1-enwlinux@gmail.com
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      d412df53
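      A minimal sketch of the kind of checks this commit describes, using
      the kernel's IS_SWAPFILE() and IS_NOQUOTA() predicates; the helper
      name and placement are illustrative, not the actual fs/ext4 defrag
      code:

          #include <linux/fs.h>

          /* Hypothetical helper: refuse to defrag file types that can't
           * be moved, with an error that tells the caller why. */
          static int defrag_check_inode(struct inode *inode)
          {
                  if (IS_SWAPFILE(inode))
                          return -ETXTBSY;     /* swap file: in use, not movable */
                  if (IS_NOQUOTA(inode))
                          return -EOPNOTSUPP;  /* quota file: not supported */
                  return 0;
          }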
      ext4: continue to expand file system when the target size isn't reached · df3cb754
      Jerry Lee 李修賢 authored
      When expanding a file system from (16TiB-2MiB) to 18TiB, the
      operation exits early, leaving resize2fs and the ext4 kernel driver
      reporting inconsistent results, as the logs below show (a sketch of
      the corrected retry loop follows this entry).
      
      === before ===
      ○ → resize2fs /dev/mapper/thin
      resize2fs 1.45.5 (07-Jan-2020)
      Filesystem at /dev/mapper/thin is mounted on /mnt/test; on-line resizing required
      old_desc_blocks = 2048, new_desc_blocks = 2304
      The filesystem on /dev/mapper/thin is now 4831837696 (4k) blocks long.
      
      [  865.186308] EXT4-fs (dm-5): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
      [  912.091502] dm-4: detected capacity change from 34359738368 to 38654705664
      [  970.030550] dm-5: detected capacity change from 34359734272 to 38654701568
      [ 1000.012751] EXT4-fs (dm-5): resizing filesystem from 4294966784 to 4831837696 blocks
      [ 1000.012878] EXT4-fs (dm-5): resized filesystem to 4294967296
      
      === after ===
      [  129.104898] EXT4-fs (dm-5): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
      [  143.773630] dm-4: detected capacity change from 34359738368 to 38654705664
      [  198.203246] dm-5: detected capacity change from 34359734272 to 38654701568
      [  207.918603] EXT4-fs (dm-5): resizing filesystem from 4294966784 to 4831837696 blocks
      [  207.918754] EXT4-fs (dm-5): resizing filesystem from 4294967296 to 4831837696 blocks
      [  207.918758] EXT4-fs (dm-5): Converting file system to meta_bg
      [  207.918790] EXT4-fs (dm-5): resizing filesystem from 4294967296 to 4831837696 blocks
      [  221.454050] EXT4-fs (dm-5): resized to 4658298880 blocks
      [  227.634613] EXT4-fs (dm-5): resized filesystem to 4831837696
      Signed-off-by: Jerry Lee <jerrylee@qnap.com>
      Link: https://lore.kernel.org/r/PU1PR04MB22635E739BD21150DC182AC6A18C9@PU1PR04MB2263.apcprd04.prod.outlook.com
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      df3cb754
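      A minimal sketch of the shape of the fix, with illustrative names
      (the upstream change is in ext4_resize_fs() in fs/ext4/resize.c):
      instead of returning after a pass that stops early, keep resizing
      until the block count reaches the target.

          /* Illustrative retry loop, not the actual resize.c code; es,
           * sb, and n_blocks_count follow resize.c naming but the helper
           * grow_one_pass() is hypothetical. */
          int err = 0;

          while (!err && ext4_blocks_count(es) < n_blocks_count)
                  /* One pass may stop short, e.g. where the meta_bg
                   * conversion becomes necessary; go around again. */
                  err = grow_one_pass(sb, n_blocks_count);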
  2. 22 Sep, 2022 1 commit
      ext4: limit the number of retries after discarding preallocation blocks · 80fa46d6
      Theodore Ts'o authored
      This patch avoids threads live-locking for hours when a large number
      of threads compete over the last few free extents as blocks are
      added to and removed from the preallocation pools.  Sketches of the
      bounded-retry pattern and of the misbehaving writer described below
      follow this entry.  From our bug reporter:
      
         A reliable way of triggering this is to have multiple writers
         continuously write() to files while the filesystem is full, as
         small amounts of space are freed (e.g. by truncating a large file
         -1MiB at a time). On a local filesystem, this can be done by
         simply not checking the return code of write (0) and/or the error
         (ENOSPC) that is set. Over NFS with an async mount, even clients
         with proper error checking will behave this way, since the Linux
         NFS client implementation will not propagate the server errors
         [the write syscalls immediately return success] until the file
         handle is closed. This leads to a situation where NFS clients
         send a continuous stream of WRITE RPCs which result in ENOSPC --
         but since the client isn't seeing this, the stream of writes
         continues at maximum network speed.
      
         When some space does appear, multiple writers will all attempt to
         claim it for their current write. For NFS, we may see dozens to
         hundreds of threads that do this.
      
         The real-world scenario for this is database backup tooling (in
         particular, github.com/mdkent/percona-xtrabackup) which may write
         large files (>1TiB) to NFS for safekeeping. Some temporary files
         are written, rewound, and read back -- all before closing the file
         handle (the temp file is actually unlinked, to trigger automatic
         deletion on close/crash.) An application like this operating on an
         async NFS mount will not see an error code until TiBs of data have
         been written/read.
      
         The lockup was observed when running this database backup on large
         filesystems (64 TiB in this case) with a high number of block
         groups and no free space. Fragmentation is generally not a factor
         in this filesystem (~thousands of large files, mostly contiguous
         except for the parts written while the filesystem is at capacity.)
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Cc: stable@kernel.org
      80fa46d6
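      A minimal sketch of the bounded-retry pattern the commit title
      describes; the cap and the helper names are illustrative, not the
      actual fs/ext4/mballoc.c code:

          #define MAX_DISCARD_RETRIES 3       /* illustrative cap */

          int retries = 0, ret;

          while ((ret = alloc_blocks(ac)) == -ENOSPC) {
                  /* Discarding preallocations may free a few blocks, but
                   * bound the retries so competing threads can't spin on
                   * the same scraps of space for hours. */
                  if (++retries > MAX_DISCARD_RETRIES)
                          break;              /* give up, report ENOSPC */
                  discard_preallocations(sb); /* hypothetical helper */
          }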
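      And a small userspace sketch of the misbehaving writer from the
      report (illustrative, not the reporter's actual workload; the path
      is hypothetical): write() errors are ignored, so the stream of
      writes never stops even on a full filesystem.

          #include <fcntl.h>
          #include <unistd.h>

          int main(void)
          {
                  char buf[4096] = { 0 };
                  int fd = open("/mnt/full/fill", O_WRONLY | O_CREAT, 0644);

                  if (fd < 0)
                          return 1;
                  for (;;)
                          (void)write(fd, buf, sizeof buf);  /* 0, -1, and
                                                                ENOSPC all
                                                                ignored */
          }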