- 25 Sep, 2008 40 commits
-
Balaji Rao authored
Date: Mon, 21 Jul 2008 02:01:04 +0530
This patch introduces a btrfs_iget helper to be used in NFS support. Signed-off-by: Balaji Rao <balajirrao@gmail.com> Signed-off-by: David Woodhouse <David.Woodhouse@intel.com> Signed-off-by: Chris Mason <chris.mason@oracle.com>
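For NFS export, a file handle must be mapped back to a struct inode from an objectid. A minimal sketch of what such a helper can look like, built on the stock iget_locked()/unlock_new_inode() VFS pattern; the exact signature and the btrfs_read_locked_inode() usage here are assumptions, not the patch itself:

    struct inode *btrfs_iget(struct super_block *sb, struct btrfs_key *location,
                             struct btrfs_root *root)
    {
            struct inode *inode;

            /* Find or create the in-core inode for this objectid. */
            inode = iget_locked(sb, location->objectid);
            if (!inode)
                    return ERR_PTR(-ENOMEM);

            if (inode->i_state & I_NEW) {
                    /* Freshly allocated: record where it lives, read it in. */
                    BTRFS_I(inode)->root = root;
                    memcpy(&BTRFS_I(inode)->location, location, sizeof(*location));
                    btrfs_read_locked_inode(inode);
                    unlock_new_inode(inode);
            }
            return inode;
    }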
-
Chris Mason authored
Before, the btrfs bdi congestion function was used to test for too many async bios. This keeps that check to throttle pdflush, but also adds a check while queuing bios. Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
This optimization had been removed because I thought it was triggering csum errors. The real cause of the errors was elsewhere, and so this optimization is back. Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
add_extent_mapping was allowing the insertion of overlapping extents. This never used to happen because it only inserted the extents from disk and those were never overlapping. But, with the data=ordered code, the disk and memory representations of the file are not the same. add_extent_mapping needs to ensure a new extent does not overlap before it inserts. Signed-off-by: Chris Mason <chris.mason@oracle.com>
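The core of such a fix is a range-overlap test before the rb-tree insert; a hedged sketch using the existing lookup_extent_mapping() interface (the real patch may reject or clip differently):

    /* Refuse to insert a mapping that intersects anything already
     * present in the extent map tree. */
    static int check_no_overlap(struct extent_map_tree *tree,
                                struct extent_map *em)
    {
            struct extent_map *existing;

            existing = lookup_extent_mapping(tree, em->start, em->len);
            if (existing) {
                    free_extent_map(existing);   /* drop the lookup ref */
                    return -EEXIST;              /* caller must not insert */
            }
            return 0;
    }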
-
David Woodhouse authored
These ended up freeing objects while they were still using them. Under guidance from Chris, just rip out the 'clever' bits and do things the simple way. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com> Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
This takes the csum mutex deeper in the call chain and releases it more often. Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
Before this change, btrfs would use a bdi congestion function to make sure there weren't too many pending async checksum work items. This change makes the process creating async work items wait instead, leading to fewer congestion returns from the bdi. This improves pdflush background_writeout scanning. Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
After writing out all the remaining btree blocks in the transaction, the commit code would use filemap_fdatawait to make sure it was all on disk. This means it would wait for blocks written by other procs as well. The new code walks the list of blocks for this transaction again and waits only for those required by this transaction. Signed-off-by: Chris Mason <chris.mason@oracle.com>
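Roughly, the commit keeps track of the btree pages this transaction dirtied and waits on writeback for just those, instead of filemap_fdatawait() on the whole btree inode. A sketch under that assumption (the list node type is hypothetical; the real code tracks dirty ranges):

    /* Wait only for writeback of the pages this transaction dirtied. */
    static void wait_transaction_pages(struct list_head *dirty_pages)
    {
            struct dirty_page_entry *entry;      /* hypothetical node */

            list_for_each_entry(entry, dirty_pages, list)
                    wait_on_page_writeback(entry->page);
    }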
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
The writeback_index field is used by write_cache_pages to pick up where writeback on a given inode left off. But, it is never set to a sane value, so writeback can often start at a random offset in the file. Kernels 2.6.28 and higher will have this fixed, but for everyone else, we also fill in the value in btrfs. Signed-off-by: Chris Mason <chris.mason@oracle.com>
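The fix boils down to one assignment when the inode's mapping is set up, so write_cache_pages has a sane starting point; a sketch (where exactly btrfs does this may differ):

    /* Start writeback scans at file offset zero instead of whatever
     * uninitialized value writeback_index happened to hold. */
    inode->i_mapping->writeback_index = 0;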
-
David Woodhouse authored
Add backwards compatibility in compat.h Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
---
 compat.h    | 3 +++
 extent_io.c | 3 ++-
 2 files changed, 5 insertions(+), 1 deletions(-)
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Eric Sandeen authored
Newer RHEL5 kernels define both ClearPageFSMisc and ClearPageChecked, so test for both before redefining. Signed-off-by: Eric Sandeen <sandeen@redhat.com> Signed-off-by: Chris Mason <chris.mason@oracle.com>
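The compat.h pattern this implies: alias the Checked accessors onto FSMisc only when the kernel has FSMisc but not Checked. A sketch of that guard, not the verbatim hunk:

    /* Some kernels only provide the FSMisc page flag; map the Checked
     * accessors onto it, but never clobber existing definitions. */
    #if defined(ClearPageFSMisc) && !defined(ClearPageChecked)
    #define ClearPageChecked ClearPageFSMisc
    #define SetPageChecked   SetPageFSMisc
    #define PageChecked      PageFSMisc
    #endif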
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Yan Zheng authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
rename and link don't always have a lock on the source inode, and our use of a per-inode index variable was racy. This changes things to store the index in a local variable instead. Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
The multi-bio code is responsible for duplicating blocks in raid1 and single spindle duplication. It has counters to make sure all of the locations for a given extent are properly written before io completion is returned to the higher layers. But, it didn't always complete the same bio it was given; sometimes a clone was completed instead. This led to problems with the async work queues because they saved a pointer to the bio in a struct off bi_private. The fix is to remember the original bio and only complete that one. Signed-off-by: Chris Mason <chris.mason@oracle.com>
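The shape of the fix: stash the bio the caller handed in, and when the last mirror write finishes, complete that one, putting any clones. A hedged sketch with illustrative struct and field names (the real accounting differs):

    struct multi_bio {
            atomic_t stripes_pending;    /* mirror writes still in flight */
            struct bio *orig_bio;        /* the bio the caller submitted */
            int error;
    };

    static void multi_end_io(struct bio *bio, int err)
    {
            struct multi_bio *multi = bio->bi_private;

            if (err)
                    multi->error = err;
            if (bio != multi->orig_bio)
                    bio_put(bio);        /* just a clone, drop it */
            if (atomic_dec_and_test(&multi->stripes_pending)) {
                    /* Complete the original, so whoever stored state off
                     * bi_private sees the bio they actually queued. */
                    bio_endio(multi->orig_bio, multi->error);
                    kfree(multi);
            }
    }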
-
Yan Zheng authored
This patch updates the file clone ioctl for the tree locking and new data ordered code. Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Yan Zheng authored
This trivial patch contains two locking fixes and an off-by-one fix. Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
Far from the perfect fix, but these structs are small; this is a TODO for the next release. The block group cache structs are referenced in many different places, and it isn't safe to just free them while resizing. A real fix will be a larger change to the allocator so that it doesn't have to care about the block group cache structs to find good places to search for free blocks. Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Sage Weil authored
Commit 597:466b27332893 (btrfs_start_transaction: wait for commits in progress) breaks the transaction start/stop ioctls by making btrfs_start_transaction conditionally wait for the next transaction to start. If an application is artificially holding a transaction open, things deadlock. This workaround maintains a count of open ioctl-initiated transactions in fs_info, and avoids wait_current_trans() if any are currently open (in start_transaction() and btrfs_throttle()). The start transaction ioctl uses a new btrfs_start_ioctl_transaction() that _does_ call wait_current_trans(), effectively pushing the join/wait decision to the outer ioctl-initiated transaction. This more or less neuters btrfs_throttle() when ioctl-initiated transactions are in use, but that seems like a pretty fundamental consequence of wrapping lots of write()'s in a transaction: btrfs has no way to tell whether the application considers a given operation part of its transaction. Obviously, if the transaction start/stop ioctls aren't being used, there is no effect on current behavior. Signed-off-by: Sage Weil <sage@newdream.net>
---
 ctree.h       |  1 +
 ioctl.c       | 12 +++++++++++-
 transaction.c | 18 +++++++++++++-----
 transaction.h |  2 ++
 4 files changed, 27 insertions(+), 6 deletions(-)
Signed-off-by: Chris Mason <chris.mason@oracle.com>
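A sketch of the workaround's moving parts, assuming an open_ioctl_trans counter in fs_info (the actual field names, locking, and start_transaction() signature are simplified here):

    /* Inner callers: never wait while userland holds a transaction open,
     * or we deadlock against the application. */
    static void maybe_wait_current_trans(struct btrfs_root *root)
    {
            if (root->fs_info->open_ioctl_trans == 0)
                    wait_current_trans(root);
    }

    /* The ioctl entry point _does_ wait, pushing the join/wait decision
     * out to the outermost, application-controlled transaction. */
    struct btrfs_trans_handle *btrfs_start_ioctl_transaction(struct btrfs_root *root)
    {
            mutex_lock(&root->fs_info->trans_mutex);
            wait_current_trans(root);
            root->fs_info->open_ioctl_trans++;
            mutex_unlock(&root->fs_info->trans_mutex);
            return start_transaction(root, 1);    /* simplified signature */
    }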
-
Chris Mason authored
Intel doesn't yet ship hardware to the public with this enabled, but when they do, btrfs will be ready. Original code from Austin Zhang <austin_zhang@linux.intel.com>. It is currently disabled; edit crc32c.h to turn it on. Signed-off-by: Chris Mason <chris.mason@oracle.com>
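For reference, the SSE4.2 crc32 instruction computes CRC32C directly; a minimal, self-contained byte-at-a-time sketch via inline asm (the actual patch plugs into the kernel crypto layer and falls back to the table-driven code when the CPU lacks the instruction):

    #include <stddef.h>
    #include <stdint.h>

    /* CRC32C using the SSE4.2 crc32 instruction, one byte at a time for
     * clarity; production code processes 8-byte chunks. Requires a CPU
     * with SSE4.2, which should be verified via cpuid first. */
    static uint32_t crc32c_hw(uint32_t crc, const void *data, size_t len)
    {
            const uint8_t *p = data;

            while (len--)
                    __asm__("crc32b %1, %0" : "+r" (crc) : "rm" (*p++));
            return crc;
    }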
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
* Make walk_down_tree wake up throttled tasks more often
* Make walk_down_tree call cond_resched during long loops (see the sketch below)
* As the size of the ref cache grows, wait longer in throttle
* Get rid of the reada code in walk_down_tree; the leaves don't get read anymore, thanks to the ref cache.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
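The second item is the standard long-loop courtesy pattern, roughly (the loop helpers here are stand-ins, not real btrfs functions):

    while (walk_has_more_blocks(path)) {     /* stand-in condition */
            process_one_block(trans, root, path);
            cond_resched();                  /* let throttled tasks run */
    }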
-
Chris Mason authored
A btree block cow has two parts: the first is to allocate a destination block, and the second is to copy the old block over. The first part needs locks in the extent allocation tree, and may need to do IO. This changeset splits that into a separate function that can be called without any tree locks held. btrfs_search_slot is changed to drop its path and start over if it has to COW a contended block. This often means that many writers will pre-alloc a new destination for the same contended block, but they cache their prealloc for later use on lower levels in the tree. Signed-off-by: Chris Mason <chris.mason@oracle.com>
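In outline (helper names here are illustrative, not the real functions): allocate the destination with no tree locks held, then, under the lock, bail out if the block is contended, keeping the preallocation around for the retry:

    /* Sketch: one COW attempt; in btrfs proper, the retry re-runs the
     * whole search from the root rather than spinning here. */
    static struct extent_buffer *cow_one_block(struct btrfs_trans_handle *trans,
                                               struct btrfs_root *root,
                                               struct btrfs_path *path,
                                               struct extent_buffer *buf)
    {
            struct extent_buffer *cow = NULL;
    again:
            if (!cow)
                    cow = alloc_cow_dest(trans, root);    /* lock-free, may do IO */

            if (!btrfs_try_tree_lock(buf)) {              /* contended? */
                    btrfs_release_path(root, path);       /* drop the path... */
                    goto again;                           /* ...retry; cow is kept */
            }
            copy_extent_buffer(cow, buf, 0, 0, buf->len); /* the actual copy */
            btrfs_tree_unlock(buf);
            return cow;
    }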
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
While dropping snapshots, walk_down_tree does most of the work of checking reference counts and limiting tree traversal to just the blocks that we are freeing. It dropped and held the allocation mutex in strange and confusing ways; this commit changes it to hold the mutex only while actually freeing a block. The rest of the checks around reference counts should be safe without the lock because we only allow one process in btrfs_drop_snapshot at a time. Other processes dropping reference counts should not drop it to 1 because their tree roots already have an extra ref on the block. Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
Large streaming reads make for large bios, which means each entry on the async work queue lists represents a large amount of data. IO congestion throttling on the device was kicking in before the async worker threads decided a single thread was busy and needed some help. The end result was that a streaming read would leave a single CPU running at 100% instead of balancing the work off to other CPUs. This patch also changes the pre-IO checksum lookup done by reads to work on a per-bio basis instead of per-page. The per-page lookups meant many extra btree searches on large streaming reads; doing the checksum lookup right before bio submit allows us to reuse searches while processing adjacent offsets. Signed-off-by: Chris Mason <chris.mason@oracle.com>
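The batching idea, sketched against the 2.6.26-era bio layout (lookup_one_csum() is a stand-in for the real csum search): one btrfs_path is allocated per bio, and each page-sized chunk reuses it, so adjacent offsets usually land in the same leaf:

    static int lookup_bio_sums_sketch(struct btrfs_root *root,
                                      struct inode *inode, struct bio *bio)
    {
            struct btrfs_path *path = btrfs_alloc_path();
            struct bio_vec *bvec = bio->bi_io_vec;
            u64 offset;
            int i;

            if (!path)
                    return -ENOMEM;
            for (i = 0; i < bio->bi_vcnt; i++, bvec++) {
                    offset = page_offset(bvec->bv_page) + bvec->bv_offset;
                    /* One search per chunk; the shared path keeps the
                     * btree position warm between adjacent offsets. */
                    lookup_one_csum(root, inode, path, offset);
            }
            btrfs_free_path(path);
            return 0;
    }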
-
Chris Mason authored
This avoids waiting for transactions while pages are locked, by breaking the code that waits for the current transaction to close out into a separate function called by btrfs_throttle. It also lowers the limits at which we start throttling. Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Sven Wegener authored
Add a couple of #if's to follow API changes. Signed-off-by: Sven Wegener <sven.wegener@stealer.net> Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Yan authored
The memory reclaiming issue happens when snapshots exist. In that case, some cache entries may not be used during old snapshot dropping, so they will remain in the cache until umount. The patch adds a field to struct btrfs_leaf_ref to record create time. It also links all dead roots of a given snapshot together in order of create time. After an old snapshot is completely dropped, we check the dead root list and remove all cache entries created before the oldest dead root in the list. Signed-off-by: Chris Mason <chris.mason@oracle.com>
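A sketch of that bookkeeping, with illustrative field and helper names: each btrfs_leaf_ref records the generation at which it was created, and after a snapshot drop everything older than the oldest remaining dead root is reclaimed:

    /* Illustrative: drop every cache entry created before the oldest
     * dead root still on the (create-time ordered) list. */
    static void reclaim_old_leaf_refs(struct btrfs_fs_info *info,
                                      u64 oldest_dead_gen)
    {
            struct btrfs_leaf_ref *ref, *next;

            list_for_each_entry_safe(ref, next, &info->leaf_ref_list, list) {
                    if (ref->root_gen < oldest_dead_gen)
                            free_leaf_ref(ref);    /* stand-in free helper */
            }
    }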
-
Chris Mason authored
It was incorrectly clearing the up to date flag on the buffer even when the buffer passed verification. Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Yan Zheng authored
To check whether a given file extent is referenced by multiple snapshots, the checker walks down the fs tree through dead roots and checks all tree blocks in the path. We can easily detect whether a given tree block is directly referenced by another snapshot. We can also detect any indirect reference from another snapshot by checking the reference's generation. The checker can always detect multiple references, but can't reliably detect cases of a single reference, so btrfs may COW file data even when there is only one reference. Signed-off-by: Chris Mason <chris.mason@oracle.com>
-