- 10 Mar, 2014 40 commits
-
-
Miao Xie authored
The reason is: the per-cpu counter has its own lock to protect itself, and we don't need an exact value here. Signed-off-by: Miao Xie <miaox@cn.fujitsu.com> Signed-off-by: Josef Bacik <jbacik@fb.com>
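A minimal sketch of the idea in generic kernel terms (dirty_bytes is a hypothetical counter, not the actual btrfs field): the percpu counter API already offers a lockless approximate read next to the exact, lock-taking sum, so a caller that tolerates a rough value can skip the counter's internal lock.

    #include <linux/percpu_counter.h>

    /* Cheap check: the approximate read may lag the true value by up to
     * nr_cpus * percpu_counter_batch, which is acceptable here. */
    static bool roughly_over_limit(struct percpu_counter *dirty_bytes, s64 limit)
    {
        return percpu_counter_read_positive(dirty_bytes) > limit;
    }

    /* Exact value: percpu_counter_sum() folds in every CPU's delta under the
     * counter's own spinlock, so it is the expensive path. */
    static s64 exact_dirty_bytes(struct percpu_counter *dirty_bytes)
    {
        return percpu_counter_sum(dirty_bytes);
    }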
-
Miao Xie authored
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com> Signed-off-by: Josef Bacik <jbacik@fb.com>
-
Miao Xie authored
As the comment in btrfs_direct_IO() says, only compressed pages need to be flushed again to make sure they are on disk; ordinary pages don't, so we add an if statement that checks whether the inode has compressed pages and skips the flush if it doesn't. In order to prevent the write ranges from intersecting, we need to wait for the running ordered extents. But the current code waits for them twice: once before the direct IO starts (in btrfs_wait_ordered_range()), and again before we get the blocks. The first wait is unnecessary: since we can do the direct IO without holding i_mutex, intersecting ordered extents may appear during the direct IO anyway, so the first wait cannot avoid the problem. So we use filemap_fdatawrite_range() instead of btrfs_wait_ordered_range() to remove the first wait. Signed-off-by: Miao Xie <miaox@cn.fujitsu.com> Signed-off-by: Josef Bacik <jbacik@fb.com>
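A simplified sketch of the flush logic described above, not the exact btrfs_direct_IO() hunk; the surrounding direct-IO plumbing is omitted, and using the BTRFS_INODE_HAS_ASYNC_EXTENT runtime flag as the "inode has compressed pages" test is an assumption of this sketch.

    #include <linux/fs.h>
    #include "btrfs_inode.h"        /* BTRFS_I(), runtime_flags */

    static int flush_range_for_dio(struct inode *inode, loff_t offset, loff_t count)
    {
        int ret;

        /* Kick off writeback instead of waiting for ordered extents up front. */
        ret = filemap_fdatawrite_range(inode->i_mapping, offset,
                                       offset + count - 1);
        if (ret)
            return ret;

        /*
         * Compressed extents are written in two steps (compress, then write
         * the compressed data), so only inodes that may have async extents
         * need a second flush; ordinary dirty pages are already on their way.
         */
        if (test_bit(BTRFS_INODE_HAS_ASYNC_EXTENT,
                     &BTRFS_I(inode)->runtime_flags))
            ret = filemap_fdatawrite_range(inode->i_mapping, offset,
                                           offset + count - 1);
        return ret;
    }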
-
Miao Xie authored
The tasks that wait for the IO_DONE flag only care about the IO of the dirty pages, so it is better to wake them up immediately after all the pages are written, rather than after the whole IO completion process finishes. Signed-off-by: Miao Xie <miaox@cn.fujitsu.com> Signed-off-by: Josef Bacik <jbacik@fb.com>
-
Miao Xie authored
btrfs_wait_ordered_roots() moves all the list entries to a new list and then deals with them one by one. But if another task invokes this function at that time, it gets an empty list, which makes the ENOSPC error happen earlier than it should. Fix it. Signed-off-by: Miao Xie <miaox@cn.fujitsu.com> Signed-off-by: Josef Bacik <jbacik@fb.com>
-
Miao Xie authored
If a snapshot is created after a nocow write but before the dirty data is flushed, we fail to flush the dirty data because there is no space. So we must keep track of when those nocow write operations start and end; while there are nocow writers, the snapshot creators must wait. In order to implement this, introduce btrfs_{start, end}_nocow_write(), which are similar to mnt_{want,drop}_write(). These two functions are only used for nocow file write operations. Signed-off-by: Miao Xie <miaox@cn.fujitsu.com> Signed-off-by: Josef Bacik <jbacik@fb.com>
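A rough sketch of the start/end pattern the description compares to mnt_{want,drop}_write(); all names here (subvol_writers, blocked, and the helpers) are illustrative, not the actual btrfs fields.

    #include <linux/percpu_counter.h>
    #include <linux/wait.h>
    #include <linux/atomic.h>

    struct subvol_writers {
        struct percpu_counter counter;   /* nocow writes in flight */
        wait_queue_head_t     wait;      /* snapshot creator sleeps here */
        atomic_t              blocked;   /* nonzero while a snapshot is being taken */
    };

    /* Returns 1 if the caller may proceed with a nocow write, 0 if it must
     * fall back to the normal COW path. */
    static int start_nocow_write(struct subvol_writers *w)
    {
        if (atomic_read(&w->blocked))
            return 0;
        percpu_counter_inc(&w->counter);
        smp_mb();                        /* order the bump against the re-check */
        if (atomic_read(&w->blocked)) {
            percpu_counter_dec(&w->counter);
            wake_up(&w->wait);
            return 0;
        }
        return 1;
    }

    static void end_nocow_write(struct subvol_writers *w)
    {
        percpu_counter_dec(&w->counter);
        wake_up(&w->wait);
    }

    /* Snapshot side: block new nocow writers, then wait out the old ones. */
    static void wait_nocow_writers(struct subvol_writers *w)
    {
        atomic_inc(&w->blocked);
        wait_event(w->wait, percpu_counter_sum(&w->counter) == 0);
    }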
-
Qu Wenruo authored
Add ftrace events for btrfs_workqueue for further workqueue tuning. This patch needs to be applied after the workqueue replacement patchset. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Signed-off-by: Josef Bacik <jbacik@fb.com>
-
Qu Wenruo authored
The new btrfs_workqueue still uses open-coded function definitions; this patch changes them to the btrfs_func_t type, which is much the same as the kernel workqueue's. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Signed-off-by: Josef Bacik <jbacik@fb.com>
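For reference, a hedged sketch of what such a unified callback type looks like; the member layout follows the description above but is approximate, not copied from the tree.

    struct btrfs_work;
    typedef void (*btrfs_func_t)(struct btrfs_work *arg);

    struct btrfs_work {
        btrfs_func_t func;             /* main work, may run concurrently */
        btrfs_func_t ordered_func;     /* runs in queue order after func */
        btrfs_func_t ordered_free;     /* final release, also in queue order */
        /* internal work_struct / list members omitted */
    };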
-
Liu Bo authored
Btrfs send reads data from disk and then writes it to a stream via a pipe or to a file via flush. Currently we read one page at a time, so every page results in a disk read, which is not friendly to disks, especially HDDs. Given that, performance can be gained by adding readahead for those pages.

Here is a quick test:

    $ btrfs subvolume create send
    $ xfs_io -f -c "pwrite 0 1G" send/foobar
    $ btrfs subvolume snap -r send ro
    $ time "btrfs send ro -f /dev/null"

             w/o          w
    real   1m37.527s   0m9.097s
    user   0m0.122s    0m0.086s
    sys    0m53.191s   0m12.857s

Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
Reviewed-by: David Sterba <dsterba@suse.cz>
Signed-off-by: Josef Bacik <jbacik@fb.com>
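A sketch of how readahead can be wired into a per-page read loop; prepare_send_page(), the ra state and the index bookkeeping are assumptions standing in for the send_ctx internals, while page_cache_sync_readahead()/page_cache_async_readahead() are the stock pagecache helpers.

    #include <linux/fs.h>
    #include <linux/mm.h>
    #include <linux/pagemap.h>

    static struct page *prepare_send_page(struct inode *inode, struct file *filp,
                                          struct file_ra_state *ra,
                                          pgoff_t index, pgoff_t last_index)
    {
        struct address_space *mapping = inode->i_mapping;
        struct page *page;

        page = find_get_page(mapping, index);
        if (!page) {
            /* Not cached yet: read a whole window ahead, not just this page. */
            page_cache_sync_readahead(mapping, ra, filp, index,
                                      last_index + 1 - index);
            page = read_mapping_page(mapping, index, filp);
        } else if (PageReadahead(page)) {
            /* Hit a readahead marker: keep the pipeline of reads going. */
            page_cache_async_readahead(mapping, ra, filp, page, index,
                                       last_index + 1 - index);
        }
        /* Caller still has to check for ERR_PTR, lock the page and test
         * PageUptodate() before copying data out of it. */
        return page;
    }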
-
Liu Bo authored
This has no functional change; it only factors out the part the two functions have in common and makes it shared. Signed-off-by: Liu Bo <bo.li.liu@oracle.com> Signed-off-by: Josef Bacik <jbacik@fb.com>
-
Filipe Manana authored
When we're finishing processing of an inode, if we're dealing with a directory inode that has a pending move/rename operation, we don't need to send a utimes update instruction to the send stream, as we'll do it later after doing the move/rename operation. Therefore we save some time here building paths and doing btree lookups. Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com> Signed-off-by: Josef Bacik <jbacik@fb.com>
-
Filipe Manana authored
When using prealloc extents, a file defragment operation may actually fragment the file and increase the amount of data space used by the file. This change fixes that behaviour.

Example:

    $ mkfs.btrfs -f /dev/sdb3
    $ mount /dev/sdb3 /mnt
    $ cd /mnt
    $ xfs_io -f -c 'falloc 0 1048576' foobar && sync
    $ xfs_io -c 'pwrite -S 0xff -b 100000 5000 100000' foobar
    $ xfs_io -c 'pwrite -S 0xac -b 100000 200000 100000' foobar
    $ xfs_io -c 'pwrite -S 0xe1 -b 100000 900000 100000' foobar && sync

Before defragmenting the file:

    $ btrfs filesystem df /mnt
    Data, single: total=8.00MiB, used=1.25MiB
    System, DUP: total=8.00MiB, used=16.00KiB
    System, single: total=4.00MiB, used=0.00
    Metadata, DUP: total=1.00GiB, used=112.00KiB
    Metadata, single: total=8.00MiB, used=0.00

    $ btrfs-debug-tree /dev/sdb3
    (...)
    item 6 key (257 EXTENT_DATA 0) itemoff 15810 itemsize 53
        prealloc data disk byte 12845056 nr 1048576
        prealloc data offset 0 nr 4096
    item 7 key (257 EXTENT_DATA 4096) itemoff 15757 itemsize 53
        extent data disk byte 12845056 nr 1048576
        extent data offset 4096 nr 102400 ram 1048576
        extent compression 0
    item 8 key (257 EXTENT_DATA 106496) itemoff 15704 itemsize 53
        prealloc data disk byte 12845056 nr 1048576
        prealloc data offset 106496 nr 90112
    item 9 key (257 EXTENT_DATA 196608) itemoff 15651 itemsize 53
        extent data disk byte 12845056 nr 1048576
        extent data offset 196608 nr 106496 ram 1048576
        extent compression 0
    item 10 key (257 EXTENT_DATA 303104) itemoff 15598 itemsize 53
        prealloc data disk byte 12845056 nr 1048576
        prealloc data offset 303104 nr 593920
    item 11 key (257 EXTENT_DATA 897024) itemoff 15545 itemsize 53
        extent data disk byte 12845056 nr 1048576
        extent data offset 897024 nr 106496 ram 1048576
        extent compression 0
    item 12 key (257 EXTENT_DATA 1003520) itemoff 15492 itemsize 53
        prealloc data disk byte 12845056 nr 1048576
        prealloc data offset 1003520 nr 45056
    (...)

Now defragmenting the file results in more data space used than before:

    $ btrfs filesystem defragment -f foobar && sync
    $ btrfs filesystem df /mnt
    Data, single: total=8.00MiB, used=1.55MiB
    System, DUP: total=8.00MiB, used=16.00KiB
    System, single: total=4.00MiB, used=0.00
    Metadata, DUP: total=1.00GiB, used=112.00KiB
    Metadata, single: total=8.00MiB, used=0.00

And the corresponding file extent items are no longer perfectly sequential as before, and we're now needlessly using more space from data block groups:

    $ btrfs-debug-tree /dev/sdb3
    (...)
    item 6 key (257 EXTENT_DATA 0) itemoff 15810 itemsize 53
        extent data disk byte 12845056 nr 1048576
        extent data offset 0 nr 4096 ram 1048576
        extent compression 0
    item 7 key (257 EXTENT_DATA 4096) itemoff 15757 itemsize 53
        extent data disk byte 13893632 nr 102400
        extent data offset 0 nr 102400 ram 102400
        extent compression 0
    item 8 key (257 EXTENT_DATA 106496) itemoff 15704 itemsize 53
        extent data disk byte 12845056 nr 1048576
        extent data offset 106496 nr 90112 ram 1048576
        extent compression 0
    item 9 key (257 EXTENT_DATA 196608) itemoff 15651 itemsize 53
        extent data disk byte 13996032 nr 106496
        extent data offset 0 nr 106496 ram 106496
        extent compression 0
    item 10 key (257 EXTENT_DATA 303104) itemoff 15598 itemsize 53
        prealloc data disk byte 12845056 nr 1048576
        prealloc data offset 303104 nr 593920
    item 11 key (257 EXTENT_DATA 897024) itemoff 15545 itemsize 53
        extent data disk byte 14102528 nr 106496
        extent data offset 0 nr 106496 ram 106496
        extent compression 0
    item 12 key (257 EXTENT_DATA 1003520) itemoff 15492 itemsize 53
        extent data disk byte 12845056 nr 1048576
        extent data offset 1003520 nr 45056 ram 1048576
        extent compression 0
    (...)

With this change, the above example no longer causes allocation of new data space nor changes the sequentiality of the file extents; that is, defragment has no effect, leaving all extent items pointing to the extent starting at disk byte 12845056.

On a 20Gb filesystem I had, mounted with the autodefrag option and containing 20 files of 400Mb each (initially consisting of a single 400Mb prealloc extent), random writes happening at a low rate led to a total of over ~17Gb of data space used, not far from eventually reaching an ENOSPC state.

Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
Signed-off-by: Josef Bacik <jbacik@fb.com>
-
Filipe Manana authored
When the defrag flag BTRFS_DEFRAG_RANGE_START_IO is set and compression is enabled, we weren't flushing completely, as writing compressed extents is a two-step process: one step to compress the data and another to write the compressed data to disk. Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com> Signed-off-by: Josef Bacik <jbacik@fb.com>
-
Qu Wenruo authored
Since the "_struct" suffix is mainly used for distinguish the differnt btrfs_work between the original and the newly created one, there is no need using the suffix since all btrfs_workers are changed into btrfs_workqueue. Also this patch fixed some codes whose code style is changed due to the too long "_struct" suffix. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Tested-by: David Sterba <dsterba@suse.cz> Signed-off-by: Josef Bacik <jbacik@fb.com>
-
Qu Wenruo authored
Since all the btrfs_workers have been replaced with the newly created btrfs_workqueue, the old code can easily be removed. Signed-off-by: Quwenruo <quwenruo@cn.fujitsu.com> Tested-by: David Sterba <dsterba@suse.cz> Signed-off-by: Josef Bacik <jbacik@fb.com>
-
Qu Wenruo authored
Replace the fs_info->scrub_* with the newly created btrfs_workqueue. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Tested-by: David Sterba <dsterba@suse.cz> Signed-off-by: Josef Bacik <jbacik@fb.com>
-
Qu Wenruo authored
Replace the fs_info->qgroup_rescan_worker with the newly created btrfs_workqueue. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Tested-by: David Sterba <dsterba@suse.cz> Signed-off-by: Josef Bacik <jbacik@fb.com>
-
Qu Wenruo authored
Replace the fs_info->delayed_workers with the newly created btrfs_workqueue. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Tested-by: David Sterba <dsterba@suse.cz> Signed-off-by: Josef Bacik <jbacik@fb.com>
-
Qu Wenruo authored
Replace the fs_info->fixup_workers with the newly created btrfs_workqueue. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Tested-by: David Sterba <dsterba@suse.cz> Signed-off-by: Josef Bacik <jbacik@fb.com>
-
Qu Wenruo authored
Replace the fs_info->readahead_workers with the newly created btrfs_workqueue. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Tested-by: David Sterba <dsterba@suse.cz> Signed-off-by: Josef Bacik <jbacik@fb.com>
-
Qu Wenruo authored
Replace the fs_info->cache_workers with the newly created btrfs_workqueue. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Tested-by: David Sterba <dsterba@suse.cz> Signed-off-by: Josef Bacik <jbacik@fb.com>
-
Qu Wenruo authored
Replace the fs_info->rmw_workers with the newly created btrfs_workqueue. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Tested-by: David Sterba <dsterba@suse.cz> Signed-off-by: Josef Bacik <jbacik@fb.com>
-
Qu Wenruo authored
Replace the fs_info->endio_* workqueues with the newly created btrfs_workqueue. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Tested-by: David Sterba <dsterba@suse.cz> Signed-off-by: Josef Bacik <jbacik@fb.com>
-
Qu Wenruo authored
Replace the fs_info->submit_workers with the newly created btrfs_workqueue. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Tested-by: David Sterba <dsterba@suse.cz> Signed-off-by: Josef Bacik <jbacik@fb.com>
-
Qu Wenruo authored
Much like fs_info->workers, replace fs_info->submit_workers with the same kind of btrfs_workqueue. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Tested-by: David Sterba <dsterba@suse.cz> Signed-off-by: Josef Bacik <jbacik@fb.com>
-
Qu Wenruo authored
Much like fs_info->workers, replace fs_info->delalloc_workers with the same kind of btrfs_workqueue. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Tested-by: David Sterba <dsterba@suse.cz> Signed-off-by: Josef Bacik <jbacik@fb.com>
-
Qu Wenruo authored
Use the newly created btrfs_workqueue_struct to replace the original fs_info->workers. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Tested-by: David Sterba <dsterba@suse.cz> Signed-off-by: Josef Bacik <jbacik@fb.com>
-
Qu Wenruo authored
The original btrfs_workers has thresholding functions to dynamically create or destroy kthreads. Although the kernel workqueue has no such function, since its workers are not created manually, we can still use workqueue_set_max_active() to simulate the behaviour, mainly to achieve better HDD performance by setting a high threshold on submit_workers. (Sadly, no resources can be saved.) So in this patch, extra pending-work counters are introduced to dynamically change the max_active of each btrfs_workqueue_struct, hoping to restore the behaviour of the original thresholding function. Also, workqueue_set_max_active() uses a mutex to protect the workqueue_struct and is not meant to be called too frequently, so a new interval mechanism is applied that only calls workqueue_set_max_active() after a certain number of works have been queued, hoping to balance both random and sequential performance on HDDs. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Tested-by: David Sterba <dsterba@suse.cz> Signed-off-by: Josef Bacik <jbacik@fb.com>
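A hypothetical sketch of the counting-plus-interval scheme; all names and the ADJUST_INTERVAL value are assumptions, and only workqueue_set_max_active() is the real kernel API.

    #include <linux/workqueue.h>
    #include <linux/spinlock.h>

    #define ADJUST_INTERVAL 64          /* assumed; the real interval may differ */

    struct thresh_wq {
        struct workqueue_struct *wq;
        spinlock_t lock;
        int pending;                    /* works queued but not yet finished */
        int thresh;                     /* pending level that asks for more workers */
        int cur_active;                 /* what we last told the workqueue */
        int limit_active;               /* upper bound chosen at creation */
        int count;                      /* queued works since the last adjustment */
    };

    /* Called whenever a work is queued. */
    static void thresh_queue_hook(struct thresh_wq *t)
    {
        int want = 0;

        spin_lock(&t->lock);
        t->pending++;
        /* workqueue_set_max_active() takes a mutex internally, so only
         * reconsider max_active every ADJUST_INTERVAL submissions. */
        if (++t->count >= ADJUST_INTERVAL) {
            t->count = 0;
            want = t->pending > t->thresh ? t->limit_active : 1;
            if (want == t->cur_active)
                want = 0;
            else
                t->cur_active = want;
        }
        spin_unlock(&t->lock);

        if (want)
            workqueue_set_max_active(t->wq, want);  /* may sleep, so outside the lock */
    }

    /* Called when a work finishes. */
    static void thresh_exec_hook(struct thresh_wq *t)
    {
        spin_lock(&t->lock);
        t->pending--;
        spin_unlock(&t->lock);
    }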
-
Qu Wenruo authored
Add a high priority function to btrfs_workqueue. This is implemented by embedding one btrfs_workqueue inside another and using helper functions to distinguish the normal priority wq from the high priority wq, so the high priority wq is completely independent of the normal workqueue. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Tested-by: David Sterba <dsterba@suse.cz> Signed-off-by: Josef Bacik <jbacik@fb.com>
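A structural sketch of the embedding; the struct and helper names are approximate, and __queue_on() stands in for whatever internal function actually submits to one sub-queue.

    struct btrfs_work;                      /* as introduced by the earlier patches */

    struct __btrfs_workqueue {
        struct workqueue_struct *normal_wq;
        /* ordered list, lock, thresholding state, ... */
    };

    struct btrfs_workqueue {
        struct __btrfs_workqueue *normal;
        struct __btrfs_workqueue *high;     /* NULL unless high priority was requested */
    };

    static void __queue_on(struct __btrfs_workqueue *wq, struct btrfs_work *work);

    /* Helper that routes a work to the right sub-queue. */
    static void btrfs_queue_work_prio(struct btrfs_workqueue *wq,
                                      struct btrfs_work *work, bool high)
    {
        if (high && wq->high)
            __queue_on(wq->high, work);
        else
            __queue_on(wq->normal, work);
    }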
-
Qu Wenruo authored
Use the kernel workqueue to implement a new btrfs_workqueue_struct that has the ordered execution feature of btrfs_worker: func is executed concurrently, while ordered_func/ordered_free are executed in the sequence they were queued, after the corresponding func is done. The new btrfs_workqueue works much like the original one: one workqueue for normal work and a list for ordered work. When a work is queued, its ordered work is added to the list and a helper function is queued into the workqueue. The helper function executes a normal work and then checks and executes as many ordered works as possible, in the sequence they were queued. High priority work queues and thresholding are not added in this patch; they will be added in the following patches. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com> Tested-by: David Sterba <dsterba@suse.cz> Signed-off-by: Josef Bacik <jbacik@fb.com>
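A condensed sketch of the queue/helper mechanism described above, with simplified locking and without thresholding or the high priority queue; the struct and function names are illustrative rather than the real btrfs ones.

    #include <linux/workqueue.h>
    #include <linux/list.h>
    #include <linux/spinlock.h>
    #include <linux/bitops.h>
    #include <linux/kernel.h>

    #define WORK_DONE_BIT   0   /* func has finished */
    #define WORK_ORDER_BIT  1   /* ordered_func has been claimed */

    struct wq_sketch {
        struct workqueue_struct *wq;
        struct list_head ordered_list;
        spinlock_t list_lock;
    };

    struct work_sketch {
        void (*func)(struct work_sketch *);
        void (*ordered_func)(struct work_sketch *);
        void (*ordered_free)(struct work_sketch *);
        struct work_struct normal_work;
        struct list_head ordered_entry;
        unsigned long flags;
        struct wq_sketch *owner;
    };

    static void run_ordered_work(struct wq_sketch *q)
    {
        struct work_sketch *work;

        spin_lock(&q->list_lock);
        /* Walk from the head and stop at the first entry whose func has not
         * finished, so ordered_func always runs in queue order. */
        while (!list_empty(&q->ordered_list)) {
            work = list_first_entry(&q->ordered_list,
                                    struct work_sketch, ordered_entry);
            if (!test_bit(WORK_DONE_BIT, &work->flags))
                break;
            if (test_and_set_bit(WORK_ORDER_BIT, &work->flags))
                break;                       /* someone else already claimed it */
            list_del_init(&work->ordered_entry);
            spin_unlock(&q->list_lock);
            work->ordered_func(work);
            if (work->ordered_free)
                work->ordered_free(work);    /* do not touch 'work' after this */
            spin_lock(&q->list_lock);
        }
        spin_unlock(&q->list_lock);
    }

    static void normal_work_helper(struct work_struct *arg)
    {
        struct work_sketch *work =
            container_of(arg, struct work_sketch, normal_work);
        struct wq_sketch *q = work->owner;

        work->func(work);                    /* concurrent part */
        set_bit(WORK_DONE_BIT, &work->flags);
        run_ordered_work(q);                 /* drain whatever is ready, in order */
    }

    static void queue_sketch_work(struct wq_sketch *q, struct work_sketch *work)
    {
        work->owner = q;
        work->flags = 0;
        INIT_WORK(&work->normal_work, normal_work_helper);
        spin_lock(&q->list_lock);
        list_add_tail(&work->ordered_entry, &q->ordered_list);
        spin_unlock(&q->list_lock);
        queue_work(q->wq, &work->normal_work);
    }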
-
Qu Wenruo authored
The struct async_sched is not used by any code and can be removed. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Reviewed-by: Josef Bacik <jbacik@fusionio.com> Tested-by: David Sterba <dsterba@suse.cz> Signed-off-by: Josef Bacik <jbacik@fb.com>
-
Liu Bo authored
It is really unnecessary to search the tree again for @gen, @mode and @rdev when creating REG inodes, as we already have the btrfs_inode_item in sctx, from which @gen, @mode and @rdev can easily be fetched. Signed-off-by: Liu Bo <bo.li.liu@oracle.com> Signed-off-by: Josef Bacik <jbacik@fb.com>
-
Miao Xie authored
We cannot release the reserved metadata space for the first write if we find that the write position is pre-allocated, because the kernel might write the data to disk before we do the second write but after the can-nocow check; if we released the space for the first write, we might fail to update the metadata for lack of space. Fix this problem by ending the nocow write if there is dirty data in the range whose space is pre-allocated. Signed-off-by: Miao Xie <miaox@cn.fujitsu.com> Signed-off-by: Josef Bacik <jbacik@fb.com>
-
Miao Xie authored
The write range may not be sector-aligned. For example:

        |--------|--------|          <- write range, sector-unaligned, size: 2 blocks
    |--------|--------|--------|     <- correct lock range, size: 3 blocks

But the old code used the size of the write range to calculate the lock range directly, without considering the offset, so we got a wrong lock range:

        |--------|--------|          <- write range, sector-unaligned, size: 2 blocks
    |--------|--------|              <- wrong lock range, size: 2 blocks

Besides that, the old code had the same problem when calculating the real write size. Correct them.

Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fb.com>
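A minimal sketch of the corrected computation using the kernel's rounding helpers; the function name and parameters are illustrative, and sectorsize is assumed to be a power of two (as it is in btrfs), which round_down()/round_up() require.

    #include <linux/kernel.h>       /* round_down(), round_up() */
    #include <linux/types.h>

    static void nocow_lock_range(loff_t pos, size_t count, u32 sectorsize,
                                 u64 *lockstart, u64 *lockend)
    {
        /* Align the start down and the end up to block boundaries, so an
         * unaligned 2-block write locks the full 3 blocks it touches. */
        *lockstart = round_down(pos, sectorsize);
        *lockend   = round_up(pos + count, sectorsize) - 1;
    }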
-
David Sterba authored
Signed-off-by: David Sterba <dsterba@suse.cz> Signed-off-by: Josef Bacik <jbacik@fb.com>
-
David Sterba authored
In "btrfs: send: lower memory requirements in common case" the code to save the old_buf_len was incorrectly moved to a wrong place and broke the original logic. Reported-by: Filipe David Manana <fdmanana@gmail.com> Signed-off-by: David Sterba <dsterba@suse.cz> Reviewed-by: Filipe David Manana <fdmanana@gmail.com> Signed-off-by: Josef Bacik <jbacik@fb.com>
-
Filipe Manana authored
While dropping extent map structures from the extent cache that cover our target range, we would remove each extent map structure from the red-black tree and then add either 1 or 2 new extent map structures if the former extent map covered sections outside our target range. This change simply attempts to replace the existing extent map structure with a new one that covers the subsection we're not interested in, instead of doing a red-black tree remove operation followed by an insertion operation.

The number of elements in an inode's extent map tree can get very high for large files under random writes. For example, while running the following test:

    sysbench --test=fileio --file-num=1 --file-total-size=10G \
        --file-test-mode=rndrw --num-threads=32 --file-block-size=32768 \
        --max-requests=500000 --file-rw-ratio=2 [prepare|run]

I captured the following histogram of the number of extent_map items in the red-black tree while the test was running:

    Count: 122462
    Range: 1.000 - 172231.000; Mean: 96415.831; Median: 101855.000; Stddev: 49700.981
    Percentiles: 90th: 160120.000; 95th: 166335.000; 99th: 171070.000
          1.000 -      5.231:   452 |
          5.231 -    187.392:    87 |
        187.392 -    585.911:   206 |
        585.911 -   1827.438:   623 |
       1827.438 -   5695.245:  1962 #
       5695.245 -  17744.861:  6204 ####
      17744.861 -  55283.764: 21115 ############
      55283.764 - 172231.000: 91813 #####################################################

Benchmark:

    sysbench --test=fileio --file-num=1 --file-total-size=10G --file-test-mode=rndwr \
        --num-threads=64 --file-block-size=32768 --max-requests=0 --max-time=60 \
        --file-io-mode=sync --file-fsync-freq=0 [prepare|run]

Before this change: 122.1Mb/sec
After this change:  125.07Mb/sec
(averages of 5 test runs)

Test machine: quad core intel i5-3570K, 32Gb of ram, SSD

Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
Signed-off-by: Josef Bacik <jbacik@fb.com>
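A small sketch of the replace operation; rb_replace_node() is the stock rbtree helper and the function name here is illustrative. The shortcut is only valid because the trimmed extent map occupies the same ordering position as the one it replaces, so the tree needs no rebalancing.

    #include <linux/rbtree.h>

    static void replace_extent_mapping_sketch(struct rb_root *tree,
                                              struct rb_node *victim,
                                              struct rb_node *replacement)
    {
        /* Splice 'replacement' into 'victim's spot without erase + insert. */
        rb_replace_node(victim, replacement, tree);
        RB_CLEAR_NODE(victim);      /* the old node is no longer in the tree */
    }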
-
Filipe Manana authored
When we split an extent state there's no need to start the rbtree search from the root node - we can start it from the original extent state node, since we would end up in its subtree if we do the search starting at the root node anyway. Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com> Signed-off-by: Josef Bacik <jbacik@fb.com>
-
Filipe Manana authored
We don't need to have an unsigned int field in the extent_map struct to tell us whether the extent map is in the inode's extent_map tree or not. We can use the rb_node struct field and the RB_CLEAR_NODE and RB_EMPTY_NODE macros to achieve the same task. This reduces sizeof(struct extent_map) from 152 bytes to 144 bytes (on a 64 bits system). Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com> Reviewed-by: David Sterba <dsterba@suse.cz> Signed-off-by: Josef Bacik <jbacik@fb.com>
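A brief sketch of the membership bookkeeping this change relies on; the struct and helper names are illustrative, while the macros are the standard rbtree ones.

    #include <linux/rbtree.h>
    #include <linux/types.h>

    struct em_sketch {
        struct rb_node rb_node;
        /* start, len, flags, ... */
    };

    static void em_init(struct em_sketch *em)
    {
        RB_CLEAR_NODE(&em->rb_node);         /* marks "not in any tree" */
    }

    static bool em_in_tree(struct em_sketch *em)
    {
        return !RB_EMPTY_NODE(&em->rb_node); /* true once rb_link_node() ran */
    }

    static void em_remove(struct rb_root *tree, struct em_sketch *em)
    {
        rb_erase(&em->rb_node, tree);
        RB_CLEAR_NODE(&em->rb_node);         /* back to the "not in tree" state */
    }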
-
Wang Shilong authored
We won't change the commit root, so skip the locking dance with the commit root when walking backrefs. This can speed up btrfs send operations. Signed-off-by: Wang Shilong <wangsl.fnst@cn.fujitsu.com> Signed-off-by: Josef Bacik <jbacik@fb.com>
-