- 06 Nov, 2017 13 commits
-
-
Christoph Hellwig authored
Match the iteration order for extent deletion in the truncate and reflink I/O completion path. This also happens to make implementing the new incore extent list a lot easier. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Christoph Hellwig authored
Add a new xfs_iext_cursor structure to hide the direct extent map index manipulations. In addition to the existing lookup/get/insert/remove and update routines, new primitives to get the first and last extent cursor, as well as moving up and down by one extent, are provided. Also new are convenience helpers to increment/decrement the cursor and retrieve the new extent, helpers to peek into the previous/next extent without updating the cursor, and last but not least a macro to iterate over all extents in a fork. [darrick: rename for_each_iext to for_each_xfs_iext] Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
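A minimal sketch of the new cursor-based iteration, assuming the helper and macro names mentioned above (exact prototypes may differ from the final kernel code):

    struct xfs_iext_cursor  icur;
    struct xfs_bmbt_irec    got;
    xfs_filblks_t           blocks = 0;

    /* Walk every extent in the fork via the new cursor abstraction. */
    for_each_xfs_iext(ifp, &icur, &got)
            blocks += got.br_blockcount;

    /* Position the cursor on the last extent, then step back by one. */
    xfs_iext_last(ifp, &icur);
    if (xfs_iext_get_extent(ifp, &icur, &got))
            xfs_iext_prev(ifp, &icur);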
-
Christoph Hellwig authored
This actually makes the function very slightly less efficient for now, as we detour through the expanded irec format between the in-core extent format and the on-disk one instead of just endian swapping them. But with the incore extent btree the in-core one will use a different format and the representation will be entirely hidden. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Christoph Hellwig authored
This actually makes the function very slightly less efficient for now, as we detour through the expanded irec format between the in-core extent format and the on-disk one instead of just endian swapping them. But with the incore extent btree the in-core one will use a different format and the representation will be entirely hidden. It also happens to make the function a whole lot more readable. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Christoph Hellwig authored
This prepares for getting rid of the current in-memory extent format. At the end of the series we will change the calling convention again to pass the xfs_bmbt_irec structure once it is available everywhere. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Christoph Hellwig authored
Stop poking before and after the index and just increment or decrement it while doing our operations on it to prepare for a new extent list implementation. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
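For illustration, a rough before/after of the pattern change (not the actual diff; the index-based xfs_iext_get_extent() calls are an assumption based on the pre-cursor API):

    /* Old pattern: peek at the slots on either side of *idx directly. */
    xfs_iext_get_extent(ifp, *idx - 1, &left);
    xfs_iext_get_extent(ifp, *idx + 1, &right);

    /* New pattern: move the index itself, which maps naturally onto a
     * cursor that can only step forward or backward by one extent. */
    --*idx;
    xfs_iext_get_extent(ifp, *idx, &left);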
-
Christoph Hellwig authored
Stop poking before and after the index and just increment or decrement it while doing our operations on it to prepare for a new extent list implementation. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Christoph Hellwig authored
Stop poking before and after the index and just increment or decrement it while doing our operations on it to prepare for a new extent list implementation. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Christoph Hellwig authored
Stop poking before and after the index and just increment or decrement it while doing our operations on it to prepare for a new extent list implementation. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Christoph Hellwig authored
Stop poking before and after the index and just increment or decrement it while doing our operations on it to prepare for a new extent list implementation. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Christoph Hellwig authored
Stop poking before and after the index and just increment or decrement it while doing our operations on it to prepare for a new extent list implementation. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Christoph Hellwig authored
Reported-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Christoph Hellwig authored
Two cases in xfs_bmap_add_extent_delay_real currently insert a new extent before updating the existing one that is being split. While this works fine with a simple extent list, a more complex tree can't easily cope with overlapping extents. Reshuffle the code a bit to update the slot of the existing delalloc extent to the new real extent before inserting the shortened delalloc extent before or after it. This avoids the overlapping extents while still allowing us to update the br_startblock field of the delalloc extent with the updated indirect block reservation. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
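Roughly, the reordering looks like the sketch below (heavily simplified; the helper calls and the da_new reservation variable are illustrative, not the verbatim patch):

    /* Rewrite the existing delalloc slot to the new real extent first... */
    xfs_iext_update_extent(ip, state, idx, &new);

    /* ...then insert the shortened delalloc extent next to it, carrying the
     * updated indirect block reservation in br_startblock. */
    PREV.br_blockcount -= new.br_blockcount;
    PREV.br_startblock = nullstartblock(da_new);
    xfs_iext_insert(ip, idx + 1, 1, &PREV, state);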
-
- 03 Nov, 2017 2 commits
-
-
Darrick J. Wong authored
The newly added xfs_scrub_da_btree_block() function has one code path that returns the 'error' variable without initializing it first, as shown by this compiler warning: fs/xfs/scrub/dabtree.c: In function 'xfs_scrub_da_btree_block': fs/xfs/scrub/dabtree.c:462:9: error: 'error' may be used uninitialized in this function [-Werror=maybe-uninitialized] Return zero since the caller will exit the scrub code if we don't produce a buffer pointer. Fixes: 7c4a07a4 ("xfs: scrub directory/attribute btrees") Reported-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Brian Foster <bfoster@redhat.com>
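A stand-alone illustration of the bug class (not the scrub code itself): an error variable that is only assigned on some paths but returned on the early-exit path.

    #include <stdbool.h>

    static int lookup_block(bool have_buffer)
    {
            int error = 0;  /* the fix: initialise, so the early return is 0 */

            if (!have_buffer)
                    return error;   /* previously returned stack garbage */

            /* ... paths that assign 'error' before returning it ... */
            return error;
    }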
-
Eryu Guan authored
On truncate down, if the new size is not block size aligned, we zero the rest of the block to avoid exposing stale data to the user, and iomap_truncate_page() skips zeroing if the range is already in unwritten state or a hole. Then we write back from the on-disk i_size to the new size if this range hasn't been written to disk yet, and truncate the page cache beyond the new EOF and set the in-core i_size. The problem is that we could write data between di_size and newsize before removing the page cache beyond newsize, as the extents may still be in unwritten state right after a buffer write. As such, the page of data that newsize lies in has not been zeroed by page cache invalidation before it is written, and xfs_do_writepage() hasn't triggered its "zero data beyond EOF" case because we haven't updated the in-core i_size yet. Then a subsequent mmap read could see non-zeros past EOF. I occasionally see this in fsx runs in fstests generic/112; a simplified fsx operation sequence looks like (assuming 4k block size xfs):
  fallocate 0x0 0x1000 0x0 keep_size
  write 0x0 0x1000 0x0
  truncate 0x0 0x800 0x1000
  punch_hole 0x0 0x800 0x800
  mapread 0x0 0x800 0x800
where fallocate allocates an unwritten extent but doesn't update i_size, the buffer write populates the page cache while the extent is still unwritten, truncate skips zeroing the page past the new EOF and writes the page to disk, punch_hole invalidates the page cache, and finally mapread reads the block back and sees non-zero data beyond EOF. Fix it by moving truncate_setsize() to before writeback so the page cache invalidation zeros the partial page at the new EOF. This also triggers "zero data beyond EOF" in xfs_do_writepage() at writeback time, because newsize has been set and the page straddles newsize. Also fix the wrong 'end' param of the filemap_write_and_wait_range() call while we're at it; 'end' is inclusive and should be 'newsize - 1'. Suggested-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Eryu Guan <eguan@redhat.com> Acked-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
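Sketched below is the reordering the patch describes (simplified, error handling omitted, surrounding xfs_setattr_size() context assumed). Note that filemap_write_and_wait_range() takes an inclusive end, hence the '- 1':

    /* Set the new size and truncate/zero the page cache beyond it first,
     * so the partial page at the new EOF is zeroed before writeback runs. */
    truncate_setsize(inode, newsize);

    /* Then write back from the old on-disk size up to the new size. */
    error = filemap_write_and_wait_range(inode->i_mapping,
                                         ip->i_d.di_size, newsize - 1);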
-
- 01 Nov, 2017 4 commits
-
-
Dave Chinner authored
Some were missed in the pass that converted the function return values from int to bool. Update the remaining ones for consistency. Signed-Off-By: Dave Chinner <dchinner@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Darrick J. Wong authored
As we walk the attribute btree, explicitly check the structure of the attribute leaves to make sure the pointers make sense and the freemap is sensible. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Dave Chinner <dchinner@redhat.com>
-
Darrick J. Wong authored
Move the error injection tag names into a libxfs header so that we can share it between kernel and userspace. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Dave Chinner <dchinner@redhat.com>
-
Darrick J. Wong authored
Remove xfs_inode_log_format_t now that xfs_inode_log_format is explicitly padded and therefore is a real on-disk structure. This enables xfs/122 to check the size of the structure. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Dave Chinner <dchinner@redhat.com>
-
- 31 Oct, 2017 1 commit
-
-
Colin Ian King authored
Variable bit is being assigned a value that is never read, hence the assignment is redundant and can be removed. Cleans up clang warning: fs/xfs/libxfs/xfs_rtbitmap.c:675:3: warning: Value stored to 'bit' is never read Signed-off-by: Colin Ian King <colin.king@canonical.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
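A stand-alone illustration of the warning (not the xfs_rtbitmap.c code itself): an assignment whose value is never read again can simply be deleted.

    static int first_set_bit(unsigned int word)
    {
            int bit = 0;

            while (word && !(word & 1)) {
                    word >>= 1;
                    bit++;
            }
            /* bit = 0;  <- a dead store like this is what the patch removes */
            return word ? bit : -1;
    }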
-
- 27 Oct, 2017 4 commits
-
-
Brian Foster authored
Fix an unused variable warning on non-DEBUG builds introduced by commit 7561d27e ("xfs: buffer lru reference count error injection tag"). Signed-off-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Darrick J. Wong authored
When we're done checking all the records/keys in a btree block, compute the low and high key of the block and compare them to the associated key in the parent btree block. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Brian Foster <bfoster@redhat.com>
-
Darrick J. Wong authored
Abort a dir/attr btree operation if the attr btree has obvious problems like loops back to the root or pointers that don't point down the tree. Found by fuzzing btree[0].before to zero in xfs/402, which livelocks on the cycle in the attr btree. Apply the same checks to xfs_da3_node_lookup_int. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Brian Foster <bfoster@redhat.com>
-
Darrick J. Wong authored
When we're iterating the attribute list and we can't find our previous location based off the attribute cursor, we'll instead walk down the attribute btree from the root trying to find where we left off. Move this code into a separate function for later cleanups. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Brian Foster <bfoster@redhat.com>
-
- 26 Oct, 2017 16 commits
-
-
Darrick J. Wong authored
Make sure the log stripe unit is sane before proceeding with mounting. AFAICT this means that logsunit has to be 0, 1, or a multiple of the fs block size. Found this by setting the LSB of logsunit in xfs/350 and watching the system crash as soon as we try to write to the log. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Brian Foster <bfoster@redhat.com>
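A sketch of the mount-time check described above (placement and message text are illustrative only):

    if (mp->m_sb.sb_logsunit > 1 &&
        mp->m_sb.sb_logsunit % mp->m_sb.sb_blocksize != 0) {
            xfs_warn(mp,
                     "log stripe unit %u is not a multiple of the block size %u",
                     mp->m_sb.sb_logsunit, mp->m_sb.sb_blocksize);
            return -EINVAL;
    }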
-
Brian Foster authored
Log recovery of v4 filesystems does not use buffer verifiers because log recovery historically can result in transient buffer corruption when target buffers might be ahead of the log after a crash. v5 filesystems work around this problem with metadata LSN ordering. While this log recovery verifier behavior is necessary on v4 supers, it can result in leaving buffers around in the LRU without verifiers attached for a significant amount of time. This leads to use of unverified buffers while the filesystem is in active use, long after recovery has completed. To address this problem, drain all buffers from the LRU as a final step of the log mount sequence. Note that this is done unconditionally to provide a consistently clean cache footprint, regardless of superblock version or log state. As a side effect, this ensures that all cache resident, unverified buffers are reclaimed after log recovery and therefore must be recreated with verifiers on subsequent use. Reported-by: Darrick Wong <darrick.wong@oracle.com> Signed-off-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Brian Foster authored
It is possible for mkfs to format very small filesystems with too small of an internal log with respect to the various minimum size and block count requirements. If this occurs when the log happens to be smaller than the scan window used for cycle verification and the scan wraps the end of the log, the start_blk calculation in xlog_find_head() underflows and leads to an attempt to scan an invalid range of log blocks. This results in log recovery failure and a failed mount. Since there may be filesystems out in the wild with this kind of geometry, we cannot simply refuse to mount. Instead, cap the scan window for cycle verification to the size of the physical log. This ensures that the cycle verification proceeds as expected when the scan wraps the end of the log. Reported-by: Zorro Lang <zlang@redhat.com> Signed-off-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
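The core of the fix can be sketched as capping the scan window at the physical log size (assuming the historical scan-window size used by xlog_find_head()):

    /* Never scan more blocks than the physical log actually contains. */
    num_scan_bblks = min_t(int, log_bbnum, XLOG_TOTAL_REC_SHIFT(log));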
-
Brian Foster authored
mkfs has a historical problem where it can format very small filesystems with too small of a physical log. Under certain conditions, log recovery of an associated filesystem can end up passing garbage parameter values to some of the cycle and log record verification functions due to bugs in log recovery not dealing with such filesystems properly. This results in attempts to read from bogus/underflowed log block addresses. Since the buffer read may ultimately succeed, log recovery can proceed with bogus data and otherwise go off the rails and crash. One example of this is a negative last_blk being passed to xlog_find_verify_log_record() causing us to skip the loop, pass a NULL head pointer to xlog_header_check_mount() and crash. Improve the xlog buffer verification to address this problem. We already verify xlog buffer length, so update this mechanism to also sanity check for a valid log relative block address and otherwise return an error. Pass a fixed, valid log block address from xlog_get_bp() since the target address will be validated when the buffer is read. This ensures that any bogus log block address/length calculations lead to graceful mount failure rather than risking a crash or worse if recovery proceeds with bogus data. Reported-by: Zorro Lang <zlang@redhat.com> Signed-off-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Christoph Hellwig authored
This helper looks up the last extent that covers space before the passed-in block number. This is useful for truncate and similar operations that operate backwards over the extent list. For xfs_bunmapi it also is a slight optimization, as we can return early if there are no extents at or below the end of the range to be truncated. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
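A rough usage sketch for the new helper (the prototype is approximated from the description above; the real signature may differ):

    /* Find the last extent covering space before 'end'; if there is none,
     * there is nothing at or below the range to be unmapped. */
    if (!xfs_iext_lookup_extent_before(ip, ifp, &end, &lastx, &got))
            return 0;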
-
Christoph Hellwig authored
xfs_iread_extents is just a trivial wrapper; there is no good reason to keep the two separate. [darrick: minor fixups having left xfs_bmbt_validate_extent intact] Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Christoph Hellwig authored
Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Christoph Hellwig authored
Look at the return value of xfs_iext_get_extent instead of figuring out the extent count first and looping up to it. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
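The change amounts to letting the lookup terminate the loop, roughly (index-based pre-cursor API assumed):

    /* Before: count extents up front and loop over the count. */
    nextents = xfs_iext_count(ifp);
    for (idx = 0; idx < nextents; idx++)
            xfs_iext_get_extent(ifp, idx, &got);

    /* After: stop as soon as the lookup says there is no further extent. */
    for (idx = 0; xfs_iext_get_extent(ifp, idx, &got); idx++)
            ;       /* process 'got' here */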
-
Christoph Hellwig authored
Rewrite xfs_bmap_insert_extents so that we don't rely on extent indices except for iterating over them. Not being able to iterate to the previous extent, or finding the extent that stop_fsb is in, are sufficient exit conditions, and we don't need to do any extent count games given that:
  a) we already flushed all delalloc extents past our start offset before doing the operation
  b) xfs_iext_count() includes delalloc extents anyway
Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Christoph Hellwig authored
Rewrite xfs_bmap_collapse_extents so that we don't rely on extent indices except for iterating over them. Not being able to iterate to the next extent is a sufficient exit condition, and we don't need to do any extent count games given that:
  a) we already flushed all delalloc extents past our start offset before doing the operation
  b) xfs_iext_count() includes delalloc extents anyway
Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Christoph Hellwig authored
This way the caller gets the proper updated extent returned in got. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Christoph Hellwig authored
Instead do the actual left and right shift work in the callers, and just keep a helper to update the bmap and rmap btrees as well as the in-core extent list. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Christoph Hellwig authored
Have a separate helper for insert vs collapse, as this prepares us for simplifying the code in the next patches. Also change the done output argument to a bool instead of an int for both new functions. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Christoph Hellwig authored
The define was always set to 1, which means looping until we reach it was dead code from the start. Also remove an initialization of next_fsb for the done case that doesn't fit the new code flow - it was never checked by the caller in the done case to start with. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Christoph Hellwig authored
The code is sufficiently different for the insert vs collapse cases both in xfs_shift_file_space itself and the callers that untangling them will make life a lot easier down the road. We still keep a common helper for flushing all data and COW state to get the inode into the right shape for shifting the extents around. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Christoph Hellwig authored
We can simply use the i_rdev field in the Linux inode and just convert to and from the XFS dev_t when reading or logging/writing the inode. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-