Commit e7651b81 authored by Linus Torvalds

Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs

Pull btrfs updates from Chris Mason:
 "This is a pretty big pull, and most of these changes have been
  floating in btrfs-next for a long time.  Filipe's properties work is a
  cool building block for inheriting attributes like compression down on
  a per inode basis.

  Jeff Mahoney kicked in code to export filesystem info into sysfs.

  Otherwise, lots of performance improvements, cleanups and bug fixes.

  Looks like there are still a few other small pending incrementals, but
  I wanted to get the bulk of this in first"

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs: (149 commits)
  Btrfs: fix spin_unlock in check_ref_cleanup
  Btrfs: setup inode location during btrfs_init_inode_locked
  Btrfs: don't use ram_bytes for uncompressed inline items
  Btrfs: fix btrfs_search_slot_for_read backwards iteration
  Btrfs: do not export ulist functions
  Btrfs: rework ulist with list+rb_tree
  Btrfs: fix memory leaks on walking backrefs failure
  Btrfs: fix send file hole detection leading to data corruption
  Btrfs: add a reschedule point in btrfs_find_all_roots()
  Btrfs: make send's file extent item search more efficient
  Btrfs: fix to catch all errors when resolving indirect ref
  Btrfs: fix protection between walking backrefs and root deletion
  btrfs: fix warning while merging two adjacent extents
  Btrfs: fix infinite path build loops in incremental send
  btrfs: undo sysfs when open_ctree() fails
  Btrfs: fix snprintf usage by send's gen_unique_name
  btrfs: fix defrag 32-bit integer overflow
  btrfs: sysfs: list the NO_HOLES feature
  btrfs: sysfs: don't show reserved incompat feature
  btrfs: call permission checks earlier in ioctls and return EPERM
  ...
parents 060e8e3b cf93da7b
@@ -38,7 +38,7 @@ Mount Options
 =============
 
 When mounting a btrfs filesystem, the following option are accepted.
-Unless otherwise specified, all options default to off.
+Options with (*) are default options and will not show in the mount options.
 
   alloc_start=<bytes>
	Debugging option to force all block allocations above a certain
@@ -46,10 +46,12 @@ Unless otherwise specified, all options default to off.
	bytes, optionally with a K, M, or G suffix, case insensitive.
	Default is 1MB.
 
+  noautodefrag(*)
   autodefrag
-	Detect small random writes into files and queue them up for the
-	defrag process. Works best for small files; Not well suited for
-	large database workloads.
+	Disable/enable auto defragmentation.
+	Auto defragmentation detects small random writes into files and queue
+	them up for the defrag process. Works best for small files;
+	Not well suited for large database workloads.
 
   check_int
   check_int_data
@@ -96,21 +98,26 @@ Unless otherwise specified, all options default to off.
	can be avoided. Especially useful when trying to mount a multi-device
	setup as root. May be specified multiple times for multiple devices.
 
+  nodiscard(*)
   discard
-	Issue frequent commands to let the block device reclaim space freed by
-	the filesystem. This is useful for SSD devices, thinly provisioned
+	Disable/enable discard mount option.
+	Discard issues frequent commands to let the block device reclaim space
+	freed by the filesystem.
+	This is useful for SSD devices, thinly provisioned
	LUNs and virtual machine images, but may have a significant
	performance impact. (The fstrim command is also available to
	initiate batch trims from userspace).
 
+  noenospc_debug(*)
   enospc_debug
-	Debugging option to be more verbose in some ENOSPC conditions.
+	Disable/enable debugging option to be more verbose in some ENOSPC conditions.
 
   fatal_errors=<action>
	Action to take when encountering a fatal error:
	  "bug" - BUG() on a fatal error. This is the default.
	  "panic" - panic() on a fatal error.
 
+  noflushoncommit(*)
   flushoncommit
	The 'flushoncommit' mount option forces any data dirtied by a write in a
	prior transaction to commit as part of the current commit. This makes
@@ -134,26 +141,32 @@ Unless otherwise specified, all options default to off.
	Specify that 1 metadata chunk should be allocated after every <value>
	data chunks. Off by default.
 
+  acl(*)
   noacl
-	Disable support for Posix Access Control Lists (ACLs). See the
+	Enable/disable support for Posix Access Control Lists (ACLs). See the
	acl(5) manual page for more information about ACLs.
 
+  barrier(*)
   nobarrier
-	Disables the use of block layer write barriers. Write barriers ensure
-	that certain IOs make it through the device cache and are on persistent
-	storage. If used on a device with a volatile (non-battery-backed)
-	write-back cache, this option will lead to filesystem corruption on a
-	system crash or power loss.
+	Enable/disable the use of block layer write barriers. Write barriers
+	ensure that certain IOs make it through the device cache and are on
+	persistent storage. If disabled on a device with a volatile
+	(non-battery-backed) write-back cache, nobarrier option will lead to
+	filesystem corruption on a system crash or power loss.
 
+  datacow(*)
   nodatacow
-	Disable data copy-on-write for newly created files. Implies nodatasum,
-	and disables all compression.
+	Enable/disable data copy-on-write for newly created files.
+	Nodatacow implies nodatasum, and disables all compression.
 
+  datasum(*)
   nodatasum
-	Disable data checksumming for newly created files.
+	Enable/disable data checksumming for newly created files.
+	Datasum implies datacow.
 
+  treelog(*)
   notreelog
-	Disable the tree logging used for fsync and O_SYNC writes.
+	Enable/disable the tree logging used for fsync and O_SYNC writes.
 
   recovery
	Enable autorecovery attempts if a bad tree root is found at mount time.
......
 config BTRFS_FS
	tristate "Btrfs filesystem support"
-	select LIBCRC32C
+	select CRYPTO
+	select CRYPTO_CRC32C
	select ZLIB_INFLATE
	select ZLIB_DEFLATE
	select LZO_COMPRESS
......
@@ -9,7 +9,7 @@ btrfs-y += super.o ctree.o extent-tree.o print-tree.o root-tree.o dir-item.o \
	   export.o tree-log.o free-space-cache.o zlib.o lzo.o \
	   compression.o delayed-ref.o relocation.o delayed-inode.o scrub.o \
	   reada.o backref.o ulist.o qgroup.o send.o dev-replace.o raid56.o \
-	   uuid-tree.o
+	   uuid-tree.o props.o hash.o
 
 btrfs-$(CONFIG_BTRFS_FS_POSIX_ACL) += acl.o
 btrfs-$(CONFIG_BTRFS_FS_CHECK_INTEGRITY) += check-integrity.o
......
@@ -43,6 +43,7 @@
 #define BTRFS_INODE_COPY_EVERYTHING 8
 #define BTRFS_INODE_IN_DELALLOC_LIST 9
 #define BTRFS_INODE_READDIO_NEED_LOCK 10
+#define BTRFS_INODE_HAS_PROPS 11
 
 /* in memory btrfs inode */
 struct btrfs_inode {
@@ -135,6 +136,9 @@ struct btrfs_inode {
	 */
	u64 index_cnt;
 
+	/* Cache the directory index number to speed the dir/file remove */
+	u64 dir_index;
+
	/* the fsync log has some corner cases that mean we have to check
	 * directories to see if any unlinks have been done before
	 * the directory was logged. See tree-log.c for all the
......
@@ -1456,10 +1456,14 @@ static int btrfsic_handle_extent_data(
		btrfsic_read_from_block_data(block_ctx, &file_extent_item,
					     file_extent_item_offset,
					     sizeof(struct btrfs_file_extent_item));
-		next_bytenr = btrfs_stack_file_extent_disk_bytenr(&file_extent_item) +
-			      btrfs_stack_file_extent_offset(&file_extent_item);
-		generation = btrfs_stack_file_extent_generation(&file_extent_item);
+		next_bytenr = btrfs_stack_file_extent_disk_bytenr(&file_extent_item);
+		if (btrfs_stack_file_extent_compression(&file_extent_item) ==
+		    BTRFS_COMPRESS_NONE) {
+			next_bytenr += btrfs_stack_file_extent_offset(&file_extent_item);
		num_bytes = btrfs_stack_file_extent_num_bytes(&file_extent_item);
+		} else {
+			num_bytes = btrfs_stack_file_extent_disk_num_bytes(&file_extent_item);
+		}
		generation = btrfs_stack_file_extent_generation(&file_extent_item);
 
		if (state->print_mask & BTRFSIC_PRINT_MASK_VERY_VERBOSE)
......
@@ -128,9 +128,8 @@ static int check_compressed_csum(struct inode *inode,
		kunmap_atomic(kaddr);
 
		if (csum != *cb_sum) {
-			printk(KERN_INFO "btrfs csum failed ino %llu "
-			       "extent %llu csum %u "
-			       "wanted %u mirror %d\n",
+			btrfs_info(BTRFS_I(inode)->root->fs_info,
+			   "csum failed ino %llu extent %llu csum %u wanted %u mirror %d",
			       btrfs_ino(inode), disk_start, csum, *cb_sum,
			       cb->mirror_num);
			ret = -EIO;
@@ -411,7 +410,8 @@ int btrfs_submit_compressed_write(struct inode *inode, u64 start,
		bio_add_page(bio, page, PAGE_CACHE_SIZE, 0);
	}
	if (bytes_left < PAGE_CACHE_SIZE) {
-		printk("bytes left %lu compress len %lu nr %lu\n",
+		btrfs_info(BTRFS_I(inode)->root->fs_info,
+			   "bytes left %lu compress len %lu nr %lu",
		       bytes_left, cb->compressed_len, cb->nr_pages);
	}
	bytes_left -= PAGE_CACHE_SIZE;
......
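The two hunks above are one instance of a message-helper conversion that repeats across this pull (see the dev-replace.c, dir-item.c, file-item.c and free-space-cache.c hunks further down): a bare printk() with a hand-written "btrfs: " prefix becomes one of the fs_info-aware helpers (btrfs_info, btrfs_warn, btrfs_err, btrfs_crit), which name the affected filesystem in the prefix and add the trailing newline themselves. A hedged sketch of the pattern, using a made-up message rather than one taken from the patch:

	/* before: caller spells out the subsystem prefix and the newline */
	printk(KERN_INFO "btrfs: example message for inode %llu\n", ino);

	/* after: the helper receives the fs_info, so the log line identifies
	 * which mounted filesystem it refers to; prefix and newline are
	 * appended by the helper */
	btrfs_info(fs_info, "example message for inode %llu", ino);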
@@ -48,6 +48,10 @@ struct btrfs_delayed_root {
	wait_queue_head_t wait;
 };
 
+#define BTRFS_DELAYED_NODE_IN_LIST 0
+#define BTRFS_DELAYED_NODE_INODE_DIRTY 1
+#define BTRFS_DELAYED_NODE_DEL_IREF 2
+
 struct btrfs_delayed_node {
	u64 inode_id;
	u64 bytes_reserved;
@@ -65,8 +69,7 @@ struct btrfs_delayed_node {
	struct btrfs_inode_item inode_item;
	atomic_t refs;
	u64 index_cnt;
-	bool in_list;
-	bool inode_dirty;
+	unsigned long flags;
	int count;
 };
@@ -125,6 +128,7 @@ int btrfs_commit_inode_delayed_inode(struct inode *inode);
 int btrfs_delayed_update_inode(struct btrfs_trans_handle *trans,
			       struct btrfs_root *root, struct inode *inode);
 int btrfs_fill_inode(struct inode *inode, u32 *rdev);
+int btrfs_delayed_delete_inode_ref(struct inode *inode);
 
 /* Used for drop dead root */
 void btrfs_kill_all_delayed_nodes(struct btrfs_root *root);
......
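In the btrfs_delayed_node hunk above, the two bool fields in_list and inode_dirty are folded into one unsigned long flags word indexed by the new BTRFS_DELAYED_NODE_* bit numbers. A minimal sketch of how such a flags word is usually driven with the kernel's atomic bitops; the helper names below are illustrative only, and the real call sites live in delayed-inode.c, which is not shown in this view:

	/* illustrative only: replaces "node->inode_dirty = true" style updates */
	static void mark_delayed_node_dirty(struct btrfs_delayed_node *node)
	{
		set_bit(BTRFS_DELAYED_NODE_INODE_DIRTY, &node->flags);
	}

	/* illustrative only: replaces "if (node->in_list)" style tests */
	static bool delayed_node_queued(struct btrfs_delayed_node *node)
	{
		return test_bit(BTRFS_DELAYED_NODE_IN_LIST, &node->flags);
	}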
@@ -81,7 +81,10 @@ struct btrfs_delayed_ref_head {
	 */
	struct mutex mutex;
 
-	struct list_head cluster;
+	spinlock_t lock;
+	struct rb_root ref_root;
+
+	struct rb_node href_node;
 
	struct btrfs_delayed_extent_op *extent_op;
	/*
@@ -98,6 +101,7 @@ struct btrfs_delayed_ref_head {
	 */
	unsigned int must_insert_reserved:1;
	unsigned int is_data:1;
+	unsigned int processing:1;
 };
 
 struct btrfs_delayed_tree_ref {
@@ -116,7 +120,8 @@ struct btrfs_delayed_data_ref {
 };
 
 struct btrfs_delayed_ref_root {
-	struct rb_root root;
+	/* head ref rbtree */
+	struct rb_root href_root;
 
	/* this spin lock protects the rbtree and the entries inside */
	spinlock_t lock;
@@ -124,7 +129,7 @@ struct btrfs_delayed_ref_root {
	/* how many delayed ref updates we've queued, used by the
	 * throttling code
	 */
-	unsigned long num_entries;
+	atomic_t num_entries;
 
	/* total number of head nodes in tree */
	unsigned long num_heads;
@@ -132,15 +137,6 @@ struct btrfs_delayed_ref_root {
	/* total number of head nodes ready for processing */
	unsigned long num_heads_ready;
 
-	/*
-	 * bumped when someone is making progress on the delayed
-	 * refs, so that other procs know they are just adding to
-	 * contention intead of helping
-	 */
-	atomic_t procs_running_refs;
-	atomic_t ref_seq;
-	wait_queue_head_t wait;
-
	/*
	 * set when the tree is flushing before a transaction commit,
	 * used by the throttling code to decide if new updates need
@@ -226,9 +222,9 @@ static inline void btrfs_delayed_ref_unlock(struct btrfs_delayed_ref_head *head)
	mutex_unlock(&head->mutex);
 }
 
-int btrfs_find_ref_cluster(struct btrfs_trans_handle *trans,
-			   struct list_head *cluster, u64 search_start);
-void btrfs_release_ref_cluster(struct list_head *cluster);
+struct btrfs_delayed_ref_head *
+btrfs_select_ref_head(struct btrfs_trans_handle *trans);
 
 int btrfs_check_delayed_seq(struct btrfs_fs_info *fs_info,
			    struct btrfs_delayed_ref_root *delayed_refs,
......
@@ -102,7 +102,8 @@ int btrfs_init_dev_replace(struct btrfs_fs_info *fs_info)
	ptr = btrfs_item_ptr(eb, slot, struct btrfs_dev_replace_item);
 
	if (item_size != sizeof(struct btrfs_dev_replace_item)) {
-		pr_warn("btrfs: dev_replace entry found has unexpected size, ignore entry\n");
+		btrfs_warn(fs_info,
+			"dev_replace entry found has unexpected size, ignore entry");
		goto no_valid_dev_replace_entry_found;
	}
 
@@ -145,13 +146,19 @@ int btrfs_init_dev_replace(struct btrfs_fs_info *fs_info)
		if (!dev_replace->srcdev &&
		    !btrfs_test_opt(dev_root, DEGRADED)) {
			ret = -EIO;
-			pr_warn("btrfs: cannot mount because device replace operation is ongoing and\n" "srcdev (devid %llu) is missing, need to run 'btrfs dev scan'?\n",
+			btrfs_warn(fs_info,
+			   "cannot mount because device replace operation is ongoing and");
+			btrfs_warn(fs_info,
+			   "srcdev (devid %llu) is missing, need to run 'btrfs dev scan'?",
				src_devid);
		}
		if (!dev_replace->tgtdev &&
		    !btrfs_test_opt(dev_root, DEGRADED)) {
			ret = -EIO;
-			pr_warn("btrfs: cannot mount because device replace operation is ongoing and\n" "tgtdev (devid %llu) is missing, need to run btrfs dev scan?\n",
+			btrfs_warn(fs_info,
+			   "cannot mount because device replace operation is ongoing and");
+			btrfs_warn(fs_info,
+			   "tgtdev (devid %llu) is missing, need to run 'btrfs dev scan'?",
				BTRFS_DEV_REPLACE_DEVID);
		}
		if (dev_replace->tgtdev) {
@@ -210,7 +217,7 @@ int btrfs_run_dev_replace(struct btrfs_trans_handle *trans,
	}
	ret = btrfs_search_slot(trans, dev_root, &key, path, -1, 1);
	if (ret < 0) {
-		pr_warn("btrfs: error %d while searching for dev_replace item!\n",
+		btrfs_warn(fs_info, "error %d while searching for dev_replace item!",
			ret);
		goto out;
	}
@@ -230,7 +237,7 @@ int btrfs_run_dev_replace(struct btrfs_trans_handle *trans,
		 */
		ret = btrfs_del_item(trans, dev_root, path);
		if (ret != 0) {
-			pr_warn("btrfs: delete too small dev_replace item failed %d!\n",
+			btrfs_warn(fs_info, "delete too small dev_replace item failed %d!",
				ret);
			goto out;
		}
@@ -243,7 +250,7 @@ int btrfs_run_dev_replace(struct btrfs_trans_handle *trans,
		ret = btrfs_insert_empty_item(trans, dev_root, path,
					      &key, sizeof(*ptr));
		if (ret < 0) {
-			pr_warn("btrfs: insert dev_replace item failed %d!\n",
+			btrfs_warn(fs_info, "insert dev_replace item failed %d!",
				ret);
			goto out;
		}
@@ -305,7 +312,7 @@ int btrfs_dev_replace_start(struct btrfs_root *root,
	struct btrfs_device *src_device = NULL;
 
	if (btrfs_fs_incompat(fs_info, RAID56)) {
-		pr_warn("btrfs: dev_replace cannot yet handle RAID5/RAID6\n");
+		btrfs_warn(fs_info, "dev_replace cannot yet handle RAID5/RAID6");
		return -EINVAL;
	}
 
@@ -325,7 +332,7 @@ int btrfs_dev_replace_start(struct btrfs_root *root,
	ret = btrfs_init_dev_replace_tgtdev(root, args->start.tgtdev_name,
					    &tgt_device);
	if (ret) {
-		pr_err("btrfs: target device %s is invalid!\n",
+		btrfs_err(fs_info, "target device %s is invalid!",
		       args->start.tgtdev_name);
		mutex_unlock(&fs_info->volume_mutex);
		return -EINVAL;
@@ -341,7 +348,7 @@ int btrfs_dev_replace_start(struct btrfs_root *root,
	}
 
	if (tgt_device->total_bytes < src_device->total_bytes) {
-		pr_err("btrfs: target device is smaller than source device!\n");
+		btrfs_err(fs_info, "target device is smaller than source device!");
		ret = -EINVAL;
		goto leave_no_lock;
	}
@@ -366,7 +373,7 @@ int btrfs_dev_replace_start(struct btrfs_root *root,
	dev_replace->tgtdev = tgt_device;
 
	printk_in_rcu(KERN_INFO
-		      "btrfs: dev_replace from %s (devid %llu) to %s started\n",
+		      "BTRFS: dev_replace from %s (devid %llu) to %s started\n",
		      src_device->missing ? "<missing disk>" :
			rcu_str_deref(src_device->name),
		      src_device->devid,
@@ -489,7 +496,7 @@ static int btrfs_dev_replace_finishing(struct btrfs_fs_info *fs_info,
	if (scrub_ret) {
		printk_in_rcu(KERN_ERR
-			      "btrfs: btrfs_scrub_dev(%s, %llu, %s) failed %d\n",
+			      "BTRFS: btrfs_scrub_dev(%s, %llu, %s) failed %d\n",
			      src_device->missing ? "<missing disk>" :
				rcu_str_deref(src_device->name),
			      src_device->devid,
@@ -504,7 +511,7 @@ static int btrfs_dev_replace_finishing(struct btrfs_fs_info *fs_info,
	}
 
	printk_in_rcu(KERN_INFO
-		      "btrfs: dev_replace from %s (devid %llu) to %s) finished\n",
+		      "BTRFS: dev_replace from %s (devid %llu) to %s) finished\n",
		      src_device->missing ? "<missing disk>" :
			rcu_str_deref(src_device->name),
		      src_device->devid,
@@ -699,7 +706,7 @@ void btrfs_dev_replace_suspend_for_unmount(struct btrfs_fs_info *fs_info)
			BTRFS_IOCTL_DEV_REPLACE_STATE_SUSPENDED;
		dev_replace->time_stopped = get_seconds();
		dev_replace->item_needs_writeback = 1;
-		pr_info("btrfs: suspending dev_replace for unmount\n");
+		btrfs_info(fs_info, "suspending dev_replace for unmount");
		break;
	}
 
@@ -728,8 +735,9 @@ int btrfs_resume_dev_replace_async(struct btrfs_fs_info *fs_info)
		break;
	}
	if (!dev_replace->tgtdev || !dev_replace->tgtdev->bdev) {
-		pr_info("btrfs: cannot continue dev_replace, tgtdev is missing\n"
-			"btrfs: you may cancel the operation after 'mount -o degraded'\n");
+		btrfs_info(fs_info, "cannot continue dev_replace, tgtdev is missing");
+		btrfs_info(fs_info,
			"you may cancel the operation after 'mount -o degraded'");
		btrfs_dev_replace_unlock(dev_replace);
		return 0;
	}
@@ -755,7 +763,7 @@ static int btrfs_dev_replace_kthread(void *data)
	kfree(status_args);
	do_div(progress, 10);
	printk_in_rcu(KERN_INFO
-		      "btrfs: continuing dev_replace from %s (devid %llu) to %s @%u%%\n",
+		      "BTRFS: continuing dev_replace from %s (devid %llu) to %s @%u%%\n",
		      dev_replace->srcdev->missing ? "<missing disk>" :
			rcu_str_deref(dev_replace->srcdev->name),
		      dev_replace->srcdev->devid,
......
@@ -261,7 +261,7 @@ int btrfs_check_dir_item_collision(struct btrfs_root *root, u64 dir,
	 * see if there is room in the item to insert this
	 * name
	 */
-	data_size = sizeof(*di) + name_len + sizeof(struct btrfs_item);
+	data_size = sizeof(*di) + name_len;
	leaf = path->nodes[0];
	slot = path->slots[0];
	if (data_size + btrfs_item_size_nr(leaf, slot) +
@@ -459,7 +459,7 @@ int verify_dir_item(struct btrfs_root *root,
	u8 type = btrfs_dir_type(leaf, dir_item);
 
	if (type >= BTRFS_FT_MAX) {
-		printk(KERN_CRIT "btrfs: invalid dir item type: %d\n",
+		btrfs_crit(root->fs_info, "invalid dir item type: %d",
		       (int)type);
		return 1;
	}
@@ -468,7 +468,7 @@ int verify_dir_item(struct btrfs_root *root,
		namelen = XATTR_NAME_MAX;
 
	if (btrfs_dir_name_len(leaf, dir_item) > namelen) {
-		printk(KERN_CRIT "btrfs: invalid dir item name len: %u\n",
+		btrfs_crit(root->fs_info, "invalid dir item name len: %u",
		       (unsigned)btrfs_dir_data_len(leaf, dir_item));
		return 1;
	}
@@ -476,7 +476,7 @@ int verify_dir_item(struct btrfs_root *root,
	/* BTRFS_MAX_XATTR_SIZE is the same for all dir items */
	if ((btrfs_dir_data_len(leaf, dir_item) +
	     btrfs_dir_name_len(leaf, dir_item)) > BTRFS_MAX_XATTR_SIZE(root)) {
-		printk(KERN_CRIT "btrfs: invalid dir item name + data len: %u + %u\n",
+		btrfs_crit(root->fs_info, "invalid dir item name + data len: %u + %u",
		       (unsigned)btrfs_dir_name_len(leaf, dir_item),
		       (unsigned)btrfs_dir_data_len(leaf, dir_item));
		return 1;
......
@@ -43,6 +43,7 @@
 #define EXTENT_BUFFER_WRITEBACK 7
 #define EXTENT_BUFFER_IOERR 8
 #define EXTENT_BUFFER_DUMMY 9
+#define EXTENT_BUFFER_IN_TREE 10
 
 /* these are flags for extent_clear_unlock_delalloc */
 #define PAGE_UNLOCK (1 << 0)
@@ -94,12 +95,10 @@ struct extent_io_ops {
 struct extent_io_tree {
	struct rb_root state;
-	struct radix_tree_root buffer;
	struct address_space *mapping;
	u64 dirty_bytes;
	int track_uptodate;
	spinlock_t lock;
-	spinlock_t buffer_lock;
	struct extent_io_ops *ops;
 };
@@ -130,7 +129,7 @@ struct extent_buffer {
	unsigned long map_start;
	unsigned long map_len;
	unsigned long bflags;
-	struct extent_io_tree *tree;
+	struct btrfs_fs_info *fs_info;
	spinlock_t refs_lock;
	atomic_t refs;
	atomic_t io_pages;
@@ -266,11 +265,11 @@ int extent_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
 int get_state_private(struct extent_io_tree *tree, u64 start, u64 *private);
 void set_page_extent_mapped(struct page *page);
 
-struct extent_buffer *alloc_extent_buffer(struct extent_io_tree *tree,
+struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info,
					  u64 start, unsigned long len);
 struct extent_buffer *alloc_dummy_extent_buffer(u64 start, unsigned long len);
 struct extent_buffer *btrfs_clone_extent_buffer(struct extent_buffer *src);
-struct extent_buffer *find_extent_buffer(struct extent_io_tree *tree,
+struct extent_buffer *find_extent_buffer(struct btrfs_fs_info *fs_info,
					 u64 start);
 void free_extent_buffer(struct extent_buffer *eb);
 void free_extent_buffer_stale(struct extent_buffer *eb);
......
@@ -79,12 +79,21 @@ void free_extent_map(struct extent_map *em)
	}
 }
 
-static struct rb_node *tree_insert(struct rb_root *root, u64 offset,
-				   struct rb_node *node)
+/* simple helper to do math around the end of an extent, handling wrap */
+static u64 range_end(u64 start, u64 len)
+{
+	if (start + len < start)
+		return (u64)-1;
+	return start + len;
+}
+
+static int tree_insert(struct rb_root *root, struct extent_map *em)
 {
	struct rb_node **p = &root->rb_node;
	struct rb_node *parent = NULL;
-	struct extent_map *entry;
+	struct extent_map *entry = NULL;
+	struct rb_node *orig_parent = NULL;
+	u64 end = range_end(em->start, em->len);
 
	while (*p) {
		parent = *p;
@@ -92,19 +101,37 @@ static struct rb_node *tree_insert(struct rb_root *root, u64 offset,
		WARN_ON(!entry->in_tree);
 
-		if (offset < entry->start)
+		if (em->start < entry->start)
			p = &(*p)->rb_left;
-		else if (offset >= extent_map_end(entry))
+		else if (em->start >= extent_map_end(entry))
			p = &(*p)->rb_right;
		else
-			return parent;
+			return -EEXIST;
	}
 
-	entry = rb_entry(node, struct extent_map, rb_node);
-	entry->in_tree = 1;
-	rb_link_node(node, parent, p);
-	rb_insert_color(node, root);
-	return NULL;
+	orig_parent = parent;
+	while (parent && em->start >= extent_map_end(entry)) {
+		parent = rb_next(parent);
+		entry = rb_entry(parent, struct extent_map, rb_node);
+	}
+	if (parent)
+		if (end > entry->start && em->start < extent_map_end(entry))
+			return -EEXIST;
+
+	parent = orig_parent;
+	entry = rb_entry(parent, struct extent_map, rb_node);
+	while (parent && em->start < entry->start) {
+		parent = rb_prev(parent);
+		entry = rb_entry(parent, struct extent_map, rb_node);
+	}
+	if (parent)
+		if (end > entry->start && em->start < extent_map_end(entry))
+			return -EEXIST;
+
+	em->in_tree = 1;
+	rb_link_node(&em->rb_node, orig_parent, p);
+	rb_insert_color(&em->rb_node, root);
+	return 0;
 }
 
 /*
@@ -228,7 +255,7 @@ static void try_merge_map(struct extent_map_tree *tree, struct extent_map *em)
		merge = rb_entry(rb, struct extent_map, rb_node);
		if (rb && mergable_maps(em, merge)) {
			em->len += merge->len;
-			em->block_len += merge->len;
+			em->block_len += merge->block_len;
			rb_erase(&merge->rb_node, &tree->map);
			merge->in_tree = 0;
			em->mod_len = (merge->mod_start + merge->mod_len) - em->mod_start;
@@ -310,20 +337,11 @@ int add_extent_mapping(struct extent_map_tree *tree,
		       struct extent_map *em, int modified)
 {
	int ret = 0;
-	struct rb_node *rb;
-	struct extent_map *exist;
 
-	exist = lookup_extent_mapping(tree, em->start, em->len);
-	if (exist) {
-		free_extent_map(exist);
-		ret = -EEXIST;
+	ret = tree_insert(&tree->map, em);
+	if (ret)
		goto out;
-	}
-	rb = tree_insert(&tree->map, em->start, &em->rb_node);
-	if (rb) {
-		ret = -EEXIST;
-		goto out;
-	}
 
	atomic_inc(&em->refs);
	em->mod_start = em->start;
@@ -337,14 +355,6 @@ int add_extent_mapping(struct extent_map_tree *tree,
	return ret;
 }
 
-/* simple helper to do math around the end of an extent, handling wrap */
-static u64 range_end(u64 start, u64 len)
-{
-	if (start + len < start)
-		return (u64)-1;
-	return start + len;
-}
-
 static struct extent_map *
 __lookup_extent_mapping(struct extent_map_tree *tree,
			u64 start, u64 len, int strict)
......
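For readers skimming the extent_map.c hunks above: the reworked tree_insert() now rejects any extent map whose byte range overlaps an existing entry (returning -EEXIST), instead of only catching an identical start offset, which lets add_extent_mapping() drop its separate lookup-before-insert. The heart of that check is a half-open interval overlap test built on the wrap-safe range_end() helper moved to the top of the file; the ranges_overlap() wrapper below is ours, purely illustrative:

	/* same wrap-safe end calculation as the range_end() in the hunk above */
	static u64 range_end(u64 start, u64 len)
	{
		if (start + len < start)
			return (u64)-1;
		return start + len;
	}

	/* illustrative only: [a_start, a_start + a_len) and [b_start, b_start + b_len)
	 * overlap exactly when each range starts before the other one ends */
	static int ranges_overlap(u64 a_start, u64 a_len, u64 b_start, u64 b_len)
	{
		return a_start < range_end(b_start, b_len) &&
		       b_start < range_end(a_start, a_len);
	}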
@@ -246,8 +246,8 @@ static int __btrfs_lookup_bio_sums(struct btrfs_root *root,
						 offset + bvec->bv_len - 1,
						 EXTENT_NODATASUM, GFP_NOFS);
			} else {
-				printk(KERN_INFO "btrfs no csum found "
-				       "for inode %llu start %llu\n",
+				btrfs_info(BTRFS_I(inode)->root->fs_info,
+					   "no csum found for inode %llu start %llu",
				       btrfs_ino(inode), offset);
			}
			item = NULL;
......
@@ -347,8 +347,8 @@ static int io_ctl_prepare_pages(struct io_ctl *io_ctl, struct inode *inode,
			btrfs_readpage(NULL, page);
			lock_page(page);
			if (!PageUptodate(page)) {
-				printk(KERN_ERR "btrfs: error reading free "
-				       "space cache\n");
+				btrfs_err(BTRFS_I(inode)->root->fs_info,
+					  "error reading free space cache");
				io_ctl_drop_pages(io_ctl);
				return -EIO;
			}
@@ -405,7 +405,7 @@ static int io_ctl_check_generation(struct io_ctl *io_ctl, u64 generation)
	gen = io_ctl->cur;
	if (le64_to_cpu(*gen) != generation) {
-		printk_ratelimited(KERN_ERR "btrfs: space cache generation "
+		printk_ratelimited(KERN_ERR "BTRFS: space cache generation "
				   "(%Lu) does not match inode (%Lu)\n", *gen,
				   generation);
		io_ctl_unmap_page(io_ctl);
@@ -463,7 +463,7 @@ static int io_ctl_check_crc(struct io_ctl *io_ctl, int index)
			      PAGE_CACHE_SIZE - offset);
	btrfs_csum_final(crc, (char *)&crc);
	if (val != crc) {
-		printk_ratelimited(KERN_ERR "btrfs: csum mismatch on free "
+		printk_ratelimited(KERN_ERR "BTRFS: csum mismatch on free "
				   "space cache\n");
		io_ctl_unmap_page(io_ctl);
		return -EIO;
@@ -1902,7 +1902,7 @@ int __btrfs_add_free_space(struct btrfs_free_space_ctl *ctl,
	spin_unlock(&ctl->tree_lock);
 
	if (ret) {
-		printk(KERN_CRIT "btrfs: unable to add free space :%d\n", ret);
+		printk(KERN_CRIT "BTRFS: unable to add free space :%d\n", ret);
		ASSERT(ret != -EEXIST);
	}
 
@@ -2011,14 +2011,15 @@ void btrfs_dump_free_space(struct btrfs_block_group_cache *block_group,
		info = rb_entry(n, struct btrfs_free_space, offset_index);
		if (info->bytes >= bytes && !block_group->ro)
			count++;
-		printk(KERN_CRIT "entry offset %llu, bytes %llu, bitmap %s\n",
+		btrfs_crit(block_group->fs_info,
+			   "entry offset %llu, bytes %llu, bitmap %s",
		       info->offset, info->bytes,
		       (info->bitmap) ? "yes" : "no");
	}
-	printk(KERN_INFO "block group has cluster?: %s\n",
+	btrfs_info(block_group->fs_info, "block group has cluster?: %s",
	       list_empty(&block_group->cluster_list) ? "no" : "yes");
-	printk(KERN_INFO "%d blocks of free space at or bigger than bytes is"
-	       "\n", count);
+	btrfs_info(block_group->fs_info,
		   "%d blocks of free space at or bigger than bytes is", count);
 }
 
 void btrfs_init_free_space_ctl(struct btrfs_block_group_cache *block_group)
@@ -2421,7 +2422,6 @@ setup_cluster_no_bitmap(struct btrfs_block_group_cache *block_group,
	struct btrfs_free_space *entry = NULL;
	struct btrfs_free_space *last;
	struct rb_node *node;
-	u64 window_start;
	u64 window_free;
	u64 max_extent;
	u64 total_size = 0;
@@ -2443,7 +2443,6 @@ setup_cluster_no_bitmap(struct btrfs_block_group_cache *block_group,
		entry = rb_entry(node, struct btrfs_free_space, offset_index);
	}
 
-	window_start = entry->offset;
	window_free = entry->bytes;
	max_extent = entry->bytes;
	first = entry;
......
/*
 * Copyright (C) 2014 Filipe David Borba Manana <fdmanana@gmail.com>
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public
 * License v2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * General Public License for more details.
 */

#include <crypto/hash.h>
#include <linux/err.h>
#include "hash.h"

static struct crypto_shash *tfm;

int __init btrfs_hash_init(void)
{
	tfm = crypto_alloc_shash("crc32c", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	return 0;
}

void btrfs_hash_exit(void)
{
	crypto_free_shash(tfm);
}

u32 btrfs_crc32c(u32 crc, const void *address, unsigned int length)
{
	struct {
		struct shash_desc shash;
		char ctx[crypto_shash_descsize(tfm)];
	} desc;
	int err;

	desc.shash.tfm = tfm;
	desc.shash.flags = 0;
	*(u32 *)desc.ctx = crc;

	err = crypto_shash_update(&desc.shash, address, length);
	BUG_ON(err);

	return *(u32 *)desc.ctx;
}
@@ -19,10 +19,15 @@
 #ifndef __HASH__
 #define __HASH__
 
-#include <linux/crc32c.h>
+int __init btrfs_hash_init(void);
+
+void btrfs_hash_exit(void);
+
+u32 btrfs_crc32c(u32 crc, const void *address, unsigned int length);
 
 static inline u64 btrfs_name_hash(const char *name, int len)
 {
-	return crc32c((u32)~1, name, len);
+	return btrfs_crc32c((u32)~1, name, len);
 }
 
 /*
@@ -31,7 +36,7 @@ static inline u64 btrfs_name_hash(const char *name, int len)
 static inline u64 btrfs_extref_hash(u64 parent_objectid, const char *name,
				    int len)
 {
-	return (u64) crc32c(parent_objectid, name, len);
+	return (u64) btrfs_crc32c(parent_objectid, name, len);
 }
 
 #endif
@@ -91,32 +91,6 @@ int btrfs_find_name_in_ext_backref(struct btrfs_path *path, u64 ref_objectid,
	return 0;
 }
 
-static struct btrfs_inode_ref *
-btrfs_lookup_inode_ref(struct btrfs_trans_handle *trans,
-		       struct btrfs_root *root,
-		       struct btrfs_path *path,
-		       const char *name, int name_len,
-		       u64 inode_objectid, u64 ref_objectid, int ins_len,
-		       int cow)
-{
-	int ret;
-	struct btrfs_key key;
-	struct btrfs_inode_ref *ref;
-
-	key.objectid = inode_objectid;
-	key.type = BTRFS_INODE_REF_KEY;
-	key.offset = ref_objectid;
-
-	ret = btrfs_search_slot(trans, root, &key, path, ins_len, cow);
-	if (ret < 0)
-		return ERR_PTR(ret);
-	if (ret > 0)
-		return NULL;
-	if (!find_name_in_backref(path, name, name_len, &ref))
-		return NULL;
-	return ref;
-}
-
 /* Returns NULL if no extref found */
 struct btrfs_inode_extref *
 btrfs_lookup_inode_extref(struct btrfs_trans_handle *trans,
@@ -144,45 +118,6 @@ btrfs_lookup_inode_extref(struct btrfs_trans_handle *trans,
	return extref;
 }
 
-int btrfs_get_inode_ref_index(struct btrfs_trans_handle *trans,
-			      struct btrfs_root *root,
-			      struct btrfs_path *path,
-			      const char *name, int name_len,
-			      u64 inode_objectid, u64 ref_objectid, int mod,
-			      u64 *ret_index)
-{
-	struct btrfs_inode_ref *ref;
-	struct btrfs_inode_extref *extref;
-	int ins_len = mod < 0 ? -1 : 0;
-	int cow = mod != 0;
-
-	ref = btrfs_lookup_inode_ref(trans, root, path, name, name_len,
-				     inode_objectid, ref_objectid, ins_len,
-				     cow);
-	if (IS_ERR(ref))
-		return PTR_ERR(ref);
-
-	if (ref != NULL) {
-		*ret_index = btrfs_inode_ref_index(path->nodes[0], ref);
-		return 0;
-	}
-
-	btrfs_release_path(path);
-
-	extref = btrfs_lookup_inode_extref(trans, root, path, name,
-					   name_len, inode_objectid,
-					   ref_objectid, ins_len, cow);
-	if (IS_ERR(extref))
-		return PTR_ERR(extref);
-
-	if (extref) {
-		*ret_index = btrfs_inode_extref_index(path->nodes[0], extref);
-		return 0;
-	}
-
-	return -ENOENT;
-}
-
 static int btrfs_del_inode_extref(struct btrfs_trans_handle *trans,
				  struct btrfs_root *root,
				  const char *name, int name_len,
@@ -141,7 +141,7 @@ static int lzo_compress_pages(struct list_head *ws,
		ret = lzo1x_1_compress(data_in, in_len, workspace->cbuf,
				       &out_len, workspace->mem);
		if (ret != LZO_E_OK) {
-			printk(KERN_DEBUG "btrfs deflate in loop returned %d\n",
+			printk(KERN_DEBUG "BTRFS: deflate in loop returned %d\n",
			       ret);
			ret = -1;
			goto out;
@@ -357,7 +357,7 @@ static int lzo_decompress_biovec(struct list_head *ws,
		if (need_unmap)
			kunmap(pages_in[page_in_index - 1]);
		if (ret != LZO_E_OK) {
-			printk(KERN_WARNING "btrfs decompress failed\n");
+			printk(KERN_WARNING "BTRFS: decompress failed\n");
			ret = -1;
			break;
		}
@@ -401,7 +401,7 @@ static int lzo_decompress(struct list_head *ws, unsigned char *data_in,
	out_len = PAGE_CACHE_SIZE;
	ret = lzo1x_decompress_safe(data_in, in_len, workspace->buf, &out_len);
	if (ret != LZO_E_OK) {
-		printk(KERN_WARNING "btrfs decompress failed!\n");
+		printk(KERN_WARNING "BTRFS: decompress failed!\n");
		ret = -1;
		goto out;
	}
......
@@ -69,23 +69,3 @@ int btrfs_del_orphan_item(struct btrfs_trans_handle *trans,
	btrfs_free_path(path);
	return ret;
 }
-
-int btrfs_find_orphan_item(struct btrfs_root *root, u64 offset)
-{
-	struct btrfs_path *path;
-	struct btrfs_key key;
-	int ret;
-
-	key.objectid = BTRFS_ORPHAN_OBJECTID;
-	key.type = BTRFS_ORPHAN_ITEM_KEY;
-	key.offset = offset;
-
-	path = btrfs_alloc_path();
-	if (!path)
-		return -ENOMEM;
-
-	ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
-
-	btrfs_free_path(path);
-	return ret;
-}