- 17 Apr, 2023 40 commits
-
Christoph Hellwig authored
Guard all the code to punt bios to a per-cgroup submission helper by a new CONFIG_BLK_CGROUP_PUNT_BIO symbol that is selected by btrfs. This way non-btrfs kernel builds don't need to have this code. Reviewed-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
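As a rough sketch of the pattern (the stub placed next to the declaration, as is common for such config symbols):

  #ifdef CONFIG_BLK_CGROUP_PUNT_BIO
  bool blkcg_punt_bio_submit(struct bio *bio);
  #else
  static inline bool blkcg_punt_bio_submit(struct bio *bio)
  {
          /* Config disabled: never punt, the caller submits directly. */
          return false;
  }
  #endif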
-
Christoph Hellwig authored
async_bio_lock is only taken from bio submission and workqueue context, neither of which runs in bottom halves. Reviewed-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: David Sterba <dsterba@suse.com>
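A minimal before/after sketch of the locking change (list and lock field names as in blk-cgroup, context abridged):

  /* Before: the _bh variants disabled bottom halves around the lock. */
  spin_lock_bh(&blkcg->async_bio_lock);
  bio_list_add(&blkcg->async_bios, bio);
  spin_unlock_bh(&blkcg->async_bio_lock);

  /* After: no call site runs in BH context, so a plain spinlock
   * suffices. */
  spin_lock(&blkcg->async_bio_lock);
  bio_list_add(&blkcg->async_bios, bio);
  spin_unlock(&blkcg->async_bio_lock);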
-
Christoph Hellwig authored
REQ_CGROUP_PUNT is a bit annoying as it is hard to follow and adds a branch to the bio submission hot path. To fix this, export blkcg_punt_bio_submit and let btrfs call it directly. Add a new REQ_FS_PRIVATE flag for btrfs to indicate to its own low-level bio submission code that a punt to the cgroup submission helper is required. Reviewed-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
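A hedged sketch of the resulting submission path; blkcg_punt_bio_submit and REQ_FS_PRIVATE are from this series, but the surrounding function is a simplified stand-in, not the real btrfs submission code:

  static void submit_fs_bio(struct bio *bio)
  {
          /* btrfs-private flag: the block layer no longer checks a
           * REQ_CGROUP_PUNT bit on its hot path. */
          if ((bio->bi_opf & REQ_FS_PRIVATE) && blkcg_punt_bio_submit(bio))
                  return; /* queued for the per-cgroup submission worker */

          submit_bio(bio);
  }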
-
Christoph Hellwig authored
punt_to_cgroup is only used by extent_write_locked_range, but that function also directly controls the bio flags for the actual submission. Remove the punt_to_cgroup field, and just set REQ_CGROUP_PUNT directly in extent_write_locked_range. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: David Sterba <dsterba@suse.com>
-
Christoph Hellwig authored
submit_one_async_extent needs to use kthread_associate_blkcg no matter if the range it handles ends up being compressed or not, as the deadlock risk due to cgroup throttling is the same. Call kthread_associate_blkcg earlier to cover the submit_uncompressed_range case as well. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
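A sketch of the resulting shape; signatures are heavily simplified and submit_compressed_range is an illustrative name for the compressed path:

  static void submit_one_async_extent(struct async_chunk *async_chunk,
                                      struct async_extent *async_extent)
  {
          /* Associate before deciding compressed vs. uncompressed, so
           * both paths are throttled against the submitter's cgroup. */
          if (async_chunk->blkcg_css)
                  kthread_associate_blkcg(async_chunk->blkcg_css);

          if (async_extent->pages)
                  submit_compressed_range(async_chunk, async_extent);
          else
                  submit_uncompressed_range(async_chunk, async_extent);

          if (async_chunk->blkcg_css)
                  kthread_associate_blkcg(NULL); /* drop the association */
  }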
-
Christoph Hellwig authored
Let submit_one_async_extent, which is the only caller of submit_uncompressed_range handle freeing of the async_extent in one central place. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Christoph Hellwig authored
btrfs_submit_compressed_write should not have to care if it is called from a helper thread or not. Move the kthread_associate_blkcg handling into submit_one_async_extent, as that is the one caller that needs it. Also move the assignment of REQ_CGROUP_PUNT into cow_file_range_async, as that is the routine that sets up the helper thread offload. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Filipe Manana authored
When starting a transaction, we are assuming the number of bytes used for each delayed ref update matches the number of bytes used for each item update, that is the return value of:

  btrfs_calc_insert_metadata_size(fs_info, num_items)

However that is not correct when we are using the free space tree, as we need to multiply that value by 2, since delayed ref updates need to modify the free space tree besides the extent tree. So fix this by using btrfs_calc_delayed_ref_bytes() to get the correct number of bytes used for delayed ref updates. Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Filipe Manana authored
When starting a transaction we are comparing the result of a call to btrfs_block_rsv_full() with 0, but the function returns a boolean. While in practice it is not incorrect, as 0 is equivalent to false, it makes it a bit odd and less readable. So update the check to not compare against 0 and instead use the logical not (!) operator. Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
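The change amounts to something like the following sketch; the rsv variable and the flush assignment are illustrative context, not the exact surrounding code:

  /* Before: comparing a boolean return value against 0. */
  if (btrfs_block_rsv_full(rsv) == 0)
          flush = BTRFS_RESERVE_FLUSH_ALL;

  /* After: the logical not reads naturally for a bool. */
  if (!btrfs_block_rsv_full(rsv))
          flush = BTRFS_RESERVE_FLUSH_ALL;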
-
Boris Burkov authored
If an application is doing direct io to a btrfs file and experiences a page fault reading from the write buffer, iomap will issue a partial bio, and allow the fs to keep going. However, there was a subtle bug in this code path in the btrfs dio iomap implementation that led to the partial write ending up as a gap in the file's extents and being read back as zeros. The sequence of events in a partial write, lightly summarized and trimmed down for brevity, is as follows:

==== WRITING TASK ====
btrfs_direct_write
__iomap_dio_write
iomap_iter
  btrfs_dio_iomap_begin # create full ordered extent
iomap_dio_bio_iter
  bio_iov_iter_get_pages # page fault; partial read
  submit_bio # partial bio
iomap_iter
  btrfs_dio_iomap_end
    btrfs_mark_ordered_io_finished # sets BTRFS_ORDERED_IOERR;
                                   # submit to finish_ordered_fn wq
fault_in_iov_iter_readable # btrfs_direct_write detects partial write
__iomap_dio_write
iomap_iter
  btrfs_dio_iomap_begin # create second partial ordered extent
iomap_dio_bio_iter
  bio_iov_iter_get_pages # read all of remainder
  submit_bio # partial bio with all of remainder
iomap_iter
  btrfs_dio_iomap_end # nothing exciting to do with ordered io

==== DIO ENDIO ====
== FIRST PARTIAL BIO ==
btrfs_dio_end_io
  btrfs_mark_ordered_io_finished # bytes_left > 0
                                 # don't submit to finish_ordered_fn wq
== SECOND PARTIAL BIO ==
btrfs_dio_end_io
  btrfs_mark_ordered_io_finished # bytes_left == 0
                                 # submit to finish_ordered_fn wq

==== BTRFS FINISH ORDERED WQ ====
== FIRST PARTIAL BIO ==
btrfs_finish_ordered_io # called by dio_iomap_end_io, sees
                        # BTRFS_ORDERED_IOERR, just drops the
                        # ordered_extent
== SECOND PARTIAL BIO ==
btrfs_finish_ordered_io # called by btrfs_dio_end_io, writes out file
                        # extents, csums, etc...

The essence of the problem is that while btrfs_direct_write and iomap properly interact to submit all the correct bios, there is insufficient logic in the btrfs dio functions (btrfs_dio_iomap_begin, btrfs_dio_submit_io, btrfs_dio_end_io, and btrfs_dio_iomap_end) to ensure that every bio is at least a part of a completed ordered_extent. And it is completing an ordered_extent that results in crucial functionality like writing out a file extent for the range. More specifically, btrfs_dio_end_io treats the ordered extent as unfinished but btrfs_dio_iomap_end sets BTRFS_ORDERED_IOERR on it. Thus, the finish io work doesn't result in file extents, csums, etc. In the aftermath, such a file behaves as though it has a hole in it, instead of the purportedly written data.

We considered a few options for fixing the bug:

1. treat the partial bio as if we had truncated the file, which would result in properly finishing it.
2. split the ordered extent when submitting a partial bio.
3. cache the ordered extent across calls to __iomap_dio_rw in iter->private, so that we could reuse it and correctly apply several bios to it.

I had trouble with 1, and it felt the most like a hack, so I tried 2 and 3. Since 3 has the benefit of also not creating an extra file extent, and avoids an ordered extent lookup during bio submission, it felt like the best option. However, that turned out to re-introduce the deadlock that discarding the ordered_extent between faults was meant to fix in the first place. (Link to an explanation of the deadlock below.) Therefore, go with fix 2, which requires a bit more setup work but fixes the corruption without introducing the deadlock, which is fundamentally caused by the ordered extent existing when we attempt to fault in a range that overlaps with it.
Put succinctly, what this patch does is: when we submit a dio bio, check if it is partial against the ordered extent stored in dio_data, and if it is, extract the ordered_extent that matches the bio exactly out of the larger ordered_extent. Keep the remaining ordered_extent around in dio_data for cancellation in iomap_end. Thanks to Josef, Christoph, and Filipe for their help figuring out the bug and the fix. Fixes: 51bd9563 ("btrfs: fix deadlock due to page faults during direct IO reads and writes") Link: https://bugzilla.redhat.com/show_bug.cgi?id=2169947 Link: https://lore.kernel.org/linux-btrfs/aa1fb69e-b613-47aa-a99e-a0a2c9ed273f@app.fastmail.com/ Link: https://pastebin.com/3SDaH8C6 Link: https://lore.kernel.org/linux-btrfs/20230315195231.GW10580@twin.jikos.cz/T/#t Reviewed-by: Josef Bacik <josef@toxicpanda.com> Tested-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Boris Burkov <boris@bur.io> [ hch: refactored the ordered_extent extraction ] Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: David Sterba <dsterba@suse.com>
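A hedged sketch of the core of the fix, simplified from the submit_io path; the real code also handles errors and zoned-device specifics, and dio_data->ordered comes from the preparatory patch below that stashes the ordered extent:

  /* If a page fault shrank this bio, split a matching front piece
   * off the ordered extent stashed in dio_data, so that each bio
   * completes an ordered extent of exactly its own size.  The tail
   * stays in dio_data for the retried write, or for cancellation
   * in btrfs_dio_iomap_end(). */
  if (bio->bi_iter.bi_size < dio_data->ordered->num_bytes)
          btrfs_extract_ordered_extent(btrfs_bio(bio), dio_data->ordered);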
-
Boris Burkov authored
NOCOW writes just overwrite an existing extent map, which thus should not be split in btrfs_extract_ordered_extent. The NOCOW case can't currently happen as btrfs_extract_ordered_extent is only used on zoned devices that do not support NOCOW writes, but this will change soon. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Tested-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Boris Burkov <boris@bur.io> [ hch: split from a larger patch, wrote a commit log ] Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: David Sterba <dsterba@suse.com>
-
Christoph Hellwig authored
To prepare for a new caller that already has the ordered_extent available, change btrfs_extract_ordered_extent to take an argument for it. Add a wrapper for the bio case that still has to do the lookup (for now). Reviewed-by: Josef Bacik <josef@toxicpanda.com> Tested-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Christoph Hellwig authored
split_zoned_em is only ever asked to split out the beginning of an extent map. Change it to only take a len to split out instead of a pre and post region. Also rename the function to split_extent_map as there is nothing zoned device specific about it. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Tested-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: David Sterba <dsterba@suse.com>
-
Christoph Hellwig authored
The function btrfs_clone_ordered_extent is very specific to the usage in btrfs_split_ordered_extent. Now that only a single call to btrfs_clone_ordered_extent is left, just fold it into btrfs_split_ordered_extent to make the operation more clear. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Tested-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: David Sterba <dsterba@suse.com>
-
Christoph Hellwig authored
btrfs_split_ordered_extent is only ever asked to split out the beginning of an ordered_extent (i.e. post == 0). Change it to only take a len to split out, and switch it to allocate the new extent for the beginning, as that helps with callers that want to keep a pointer to the ordered_extent they are stealing from. Reviewed-by: Naohiro Aota <naohiro.aota@wdc.com> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Tested-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: David Sterba <dsterba@suse.com>
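The reworked helper ends up with roughly this shape (a sketch of the prototype, not the exact declaration):

  /*
   * Split the first 'len' bytes off 'ordered' into a freshly
   * allocated ordered extent; 'ordered' itself keeps the tail, so
   * callers can hold on to their existing pointer.
   */
  int btrfs_split_ordered_extent(struct btrfs_ordered_extent *ordered, u64 len);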
-
Christoph Hellwig authored
btrfs_extract_ordered_extent is always used to split an ordered_extent and extent_map into two parts, so it doesn't need to deal with a three way split. Simplify it by only allowing for a single split point, and always split out the beginning of the extent, as that is what we'll need later to be able to hold on to a reference to the original ordered_extent that the first part is split off from for submission. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Tested-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: David Sterba <dsterba@suse.com>
-
Christoph Hellwig authored
Move the three checks that are about ordered extent internal sanity checking into btrfs_split_ordered_extent instead of doing them in the higher level btrfs_extract_ordered_extent routine. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Tested-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Boris Burkov authored
While it is not feasible for an ordered extent to survive across the calls btrfs_direct_write makes into __iomap_dio_rw, it is still helpful to stash it on the dio_data in between creating it in iomap_begin and finishing it in either end_io or iomap_end. The specific use I have in mind is that we can check if a particular bio is partial in submit_io without unconditionally looking up the ordered extent. This is a preparatory patch for a later patch which does just that. Reviewed-by: Naohiro Aota <naohiro.aota@wdc.com> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Tested-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Boris Burkov <boris@bur.io> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: David Sterba <dsterba@suse.com>
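Roughly, the stashed state looks like the following; the surrounding fields are abridged and the exact layout is assumed:

  struct btrfs_dio_data {
          ssize_t submitted;
          struct extent_changeset *data_reserved;
          /* Created in btrfs_dio_iomap_begin(), finished per bio in
           * end_io or cancelled in btrfs_dio_iomap_end(). */
          struct btrfs_ordered_extent *ordered;
          bool data_space_reserved;
          bool nocow_done;
  };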
-
Boris Burkov authored
The ordered_extent flags are declared as unsigned long, so pass them as such to btrfs_add_ordered_extent. Reviewed-by: Naohiro Aota <naohiro.aota@wdc.com> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Tested-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Boris Burkov <boris@bur.io> [ hch: split from a larger patch ] Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Boris Burkov authored
Currently, btrfs_add_ordered_extent allocates a new ordered extent, adds it to the rb_tree, but doesn't return a referenced pointer to the caller. There are cases where it is useful for the creator of a new ordered_extent to hang on to such a pointer, so add a new function btrfs_alloc_ordered_extent which is the same as btrfs_add_ordered_extent, except it takes an additional reference count and returns a pointer to the ordered_extent. Implement btrfs_add_ordered_extent as btrfs_alloc_ordered_extent followed by dropping the new reference and handling the IS_ERR case. The type of flags in btrfs_alloc_ordered_extent and btrfs_add_ordered_extent is changed from unsigned int to unsigned long so it's unified with the other ordered extent functions. Reviewed-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Naohiro Aota <naohiro.aota@wdc.com> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Tested-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Boris Burkov <boris@bur.io> Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
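The relationship between the two functions, sketched with a deliberately shortened parameter list (the real functions take the full set of extent fields):

  int btrfs_add_ordered_extent(struct btrfs_inode *inode, u64 file_offset,
                               u64 num_bytes, unsigned long flags)
  {
          struct btrfs_ordered_extent *ordered;

          ordered = btrfs_alloc_ordered_extent(inode, file_offset,
                                               num_bytes, flags);
          if (IS_ERR(ordered))
                  return PTR_ERR(ordered);
          /* Callers of the add variant do not want the extra
           * reference the alloc variant returns. */
          btrfs_put_ordered_extent(ordered);
          return 0;
  }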
-
Johannes Thumshirn authored
The btrfs raid56 sector submission code uses bio_add_page() to add a page to a newly created bio. bio_add_page() can fail, but the return value is never checked. Use __bio_add_page() as adding a single page to a newly created bio is guaranteed to succeed. This brings us a step closer to marking bio_add_page() as __must_check. Reviewed-by: Damien Le Moal <damien.lemoal@opensource.wdc.com> Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
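A minimal sketch of the pattern; the allocation context is illustrative and 'page' is assumed to come from the caller:

  struct bio *bio = bio_alloc(NULL, 1, REQ_OP_WRITE, GFP_NOFS);

  /* A freshly allocated bio with room for one vec can always take
   * one page, so the unchecked variant is appropriate here. */
  __bio_add_page(bio, page, PAGE_SIZE, 0);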
-
Johannes Thumshirn authored
The btrfs repair bio submission code uses bio_add_page() to add a page to a newly created bio. bio_add_page() can fail, but the return value is never checked. Use __bio_add_page() as adding a single page to a newly created bio is guaranteed to succeed. This brings us a step closer to marking bio_add_page() as __must_check. Reviewed-by: Damien Le Moal <damien.lemoal@opensource.wdc.com> Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Anand Jain authored
The function wait_dev_flush() tests for the BTRFS_DEV_STATE_FLUSH_SENT bit and then clears it separately. Instead, use test_and_clear_bit(). Though the atomic test-and-clear is not strictly needed here, it follows a common pattern. Signed-off-by: Anand Jain <anand.jain@oracle.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
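The change, sketched against the real bit name with the surrounding function simplified:

  /* Before: separate test and clear. */
  if (!test_bit(BTRFS_DEV_STATE_FLUSH_SENT, &device->dev_state))
          return;
  clear_bit(BTRFS_DEV_STATE_FLUSH_SENT, &device->dev_state);

  /* After: a single test_and_clear_bit(), following the common
   * pattern even though atomicity is not required here. */
  if (!test_and_clear_bit(BTRFS_DEV_STATE_FLUSH_SENT, &device->dev_state))
          return;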
-
Anand Jain authored
The flush error code is maintained in btrfs_device::last_flush_error, so there is no point in returning it in wait_dev_flush() when it is not being used. Instead, we can return a boolean value. Note that even though btrfs_device::last_flush_error may not be used, we will keep it for now. Signed-off-by: Anand Jain <anand.jain@oracle.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Anand Jain authored
check_barrier_error() is almost a single-line function that just calls btrfs_check_rw_degradable(), so open code it. Signed-off-by: Anand Jain <anand.jain@oracle.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Anand Jain authored
We parallelize the flush command across devices using our own code: write_dev_flush() sends the flush command to each device, and wait_dev_flush() waits for the flush to complete on all devices. Errors from each device are recorded at device->last_flush_error: it is reset to BLK_STS_OK in write_dev_flush() and set to the error, if any, in wait_dev_flush(). These functions are called from barrier_all_devices(). This patch consolidates the use of device->last_flush_error in write_dev_flush() and wait_dev_flush() to remove it from barrier_all_devices(). Signed-off-by: Anand Jain <anand.jain@oracle.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Filipe Manana authored
Instead of using two labels at btrfs_evict_inode() for exiting depending on whether we need to delete the inode items and orphan or some error happened, we can use a single exit label if we initialize the block reserve to NULL, since btrfs_free_block_rsv() ignores a NULL block reserve pointer. So just do that. It will also make an upcoming change simpler by avoiding one extra error label. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
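The resulting shape as a hedged sketch; should_evict_items() is a hypothetical stand-in for the real checks, and the body is heavily condensed:

  static void evict_inode_sketch(struct inode *inode)
  {
          struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
          struct btrfs_block_rsv *rsv = NULL;

          if (!should_evict_items(inode)) /* hypothetical check */
                  goto out;

          rsv = btrfs_alloc_block_rsv(fs_info, BTRFS_BLOCK_RSV_TEMP);
          if (!rsv)
                  goto out;

          /* delete the inode items and the orphan item here */
  out:
          /* btrfs_free_block_rsv() ignores a NULL rsv, so one exit
           * label covers every path. */
          btrfs_free_block_rsv(fs_info, rsv);
  }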
-
Filipe Manana authored
When updating the global block reserve, we account for the 6 items needed by an unlink operation and the 6 delayed references for each one of those items. However the calculation for the delayed references is not correct in case we have the free space tree enabled, as in that case we need to touch the free space tree as well and therefore need twice the number of bytes. So use the btrfs_calc_delayed_ref_bytes() helper to calculate the number of bytes needed for the delayed references at btrfs_update_global_block_rsv(). Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Filipe Manana authored
Instead of hard coding the number of metadata units for an unlink operation in a couple places, define a macro and use it instead. This eliminates the problem of one place getting out of sync with the other, such as recently fixed by the previous patch in the series ("btrfs: fix calculation of the global block reserve's size"). Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
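The constant looks along these lines; the breakdown in the comment is an assumption about which six items are counted:

  /*
   * Metadata units needed for an unlink: 1 for the possible orphan
   * item, 1 for the dir item, 1 for the dir index, 1 for the inode
   * ref, 1 for the inode, and 1 for the parent inode.
   */
  #define BTRFS_UNLINK_METADATA_UNITS 6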
-
Filipe Manana authored
At btrfs_update_global_block_rsv(), we are assuming an unlink operation uses 5 metadata units, but that's not true anymore: it uses 6 since commit bca4ad7c ("btrfs: reserve correct number of items for unlink and rmdir"). So update the code and comments to consider 6 units. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Filipe Manana authored
When evicting an inode, we are incorrectly calculating the amount of space required for a single delayed reference in case the free space tree is enabled. We have to multiply the result of btrfs_calc_insert_metadata_size() by 2, matching the delayed block reserve logic that updates the size at btrfs_update_delayed_refs_rsv() and releases space at btrfs_delayed_refs_rsv_release(). Fix this by using the btrfs_calc_delayed_ref_bytes() helper at evict_refill_and_join() instead of btrfs_calc_insert_metadata_size(). Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Filipe Manana authored
Instead of duplicating the logic for calculating how much space is required for a given number of delayed references, add an inline helper to encapsulate that logic and use it everywhere we are calculating the space required. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
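A hedged sketch of the helper; the exact free space tree check is the assumed detail:

  static inline u64 btrfs_calc_delayed_ref_bytes(const struct btrfs_fs_info *fs_info,
                                                 int num_delayed_refs)
  {
          u64 num_bytes;

          num_bytes = btrfs_calc_insert_metadata_size(fs_info, num_delayed_refs);

          /* Each delayed ref may also modify the free space tree,
           * doubling the metadata footprint. */
          if (btrfs_test_opt(fs_info, FREE_SPACE_TREE))
                  num_bytes *= 2;

          return num_bytes;
  }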
-
Filipe Manana authored
Now that btrfs_calc_insert_metadata_size() can take a const fs_info argument, make the fs_info argument of calc_reclaim_items_nr() and of calc_delayed_refs_nr() const as well. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Filipe Manana authored
The fs_info argument of the helpers btrfs_calc_insert_metadata_size() and btrfs_calc_metadata_size() is not modified so it can be const. This will also allow a new helper function in one of the next patches to have its fs_info argument as const. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Filipe Manana authored
When flushing a limited number of delayed references (FLUSH_DELAYED_REFS_NR state), we are assuming each delayed reference is holding a number of bytes matching the space needed for inserting a single metadata item (the result of btrfs_calc_insert_metadata_size()). That is not correct when using the free space tree, as in that case we have to multiply that value by 2 since we need to touch the free space tree as well. This is the same computation as we do at btrfs_update_delayed_refs_rsv() and at btrfs_delayed_refs_rsv_release(). So correct the computation for the amount of delayed references we need to flush in case we have the free space tree. This does not fix a functional issue; instead, it makes the flush code flush less delayed references, only the minimum necessary to satisfy a ticket. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Filipe Manana authored
When refilling the delayed block reserve we are incorrectly computing the amount of bytes for a single delayed reference if the free space tree is being used. In that case we should double the calculated amount. Everywhere else we compute the correct amount, like when updating the delayed block reserve, at btrfs_update_delayed_refs_rsv(), or when releasing space from the delayed block reserve, at btrfs_delayed_refs_rsv_release(). So fix btrfs_delayed_refs_rsv_refill() to multiply the amount of bytes for a single delayed reference by two in case the free space tree is used. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Filipe Manana authored
During inode eviction, if we are truncating a deleted inode, we don't add delayed items for our inode, so there's no need to throttle on delayed items on each iteration of the loop that truncates inode items from its subvolume tree. But we dirty extent buffers from its subvolume tree, so we only need to throttle on btree inode dirty pages. So use btrfs_btree_balance_dirty_nodelay() in the loop that truncates inode items. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Filipe Manana authored
We have this logic encapsulated in btrfs_should_throttle_delayed_refs() where we try to estimate if running the current amount of delayed references we have will take more than half a second, and if so, the caller of btrfs_should_throttle_delayed_refs() should do something to prevent more and more delayed refs from being accumulated. This logic was added in commit 0a2b2a84 ("Btrfs: throttle delayed refs better") and then further refined in commit a79b7d4b ("Btrfs: async delayed refs"). The idea back then was that the caller of btrfs_should_throttle_delayed_refs() would release its transaction handle (by calling btrfs_end_transaction()) when that function returned true, then btrfs_end_transaction() would trigger an async job to run delayed references in a workqueue, and later start/join a transaction again and do more work. However we don't run delayed references asynchronously anymore, that was removed in commit db2462a6 ("btrfs: don't run delayed refs in the end transaction logic"). That makes the logic that tries to estimate how long we will take to run our current delayed references, at btrfs_should_throttle_delayed_refs(), pointless as we don't take any action to run delayed references anymore. We do have other type of throttling, which consists of checking the size and reserved space of the delayed and global block reserves, as well as if flushing delayed references for the current transaction was already started, etc - this is all done by btrfs_should_end_transaction(), and the only user of btrfs_should_throttle_delayed_refs() does periodically call btrfs_should_end_transaction(). So remove btrfs_should_throttle_delayed_refs() and the infrastructure that keeps track of the average time used for running delayed references, as well as adapting btrfs_truncate_inode_items() to call btrfs_check_space_for_delayed_refs() instead. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Filipe Manana authored
At btrfs_block_rsv_refill(), there's no point in initializing the 'num_bytes' variable to 0 and then, after taking the block reserve's spinlock, initializing it to the value of the 'min_reserved' parameter. So just get rid of the 'num_bytes' local variable and rename the 'min_reserved' parameter to 'num_bytes'. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Filipe Manana authored
At btrfs_truncate_inode_items(), in the while loop when we decide that we are going to delete an item, it's pointless to check that 'pending_del_nr' is non-zero in an else clause because the corresponding if statement is checking if 'pending_del_nr' has a value of zero. So just remove that condition from the else clause. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
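The redundant check, sketched; the surrounding loop body is abridged:

  if (!pending_del_nr) {
          /* No pending deletions yet; start a new batch. */
          pending_del_slot = path->slots[0];
          pending_del_nr = 1;
  } else if (path->slots[0] + 1 == pending_del_slot) {
          /* The old 'pending_del_nr &&' condition here was always
           * true, since the if above handles the zero case. */
          pending_del_nr++;
          pending_del_slot = path->slots[0];
  }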
-