1. 20 Feb, 2013 21 commits
  2. 15 Feb, 2013 2 commits
    • btrfs: access superblock via pagecache in scan_one_device · 6f60cbd3
      David Sterba authored
      btrfs_scan_one_device is calling set_blocksize() which can race
      with a concurrent process making dirty page cache pages.  It can end up
      dropping dirty page cache pages on the floor, which isn't very nice when
      someone is just running btrfs dev scan to find filesystems on the
      box.
      
      Now that udev is registering btrfs devices as it discovers them, we can
      actually end up racing with our own mkfs program too.  When this
      happens, we drop some of the important blocks written by mkfs.
      
      This commit changes scan_one_device to read the super out of the page
      cache instead of trying to use bread.  This way we don't have to care
      about the blocksize of the device.
      
      This also drops the invalidate_bdev() call.  It wasn't very polite to
      invalidate during the scan either.  mkfs is putting the super into the
      page cache, so there's no reason to invalidate at this point.
      Signed-off-by: David Sterba <dsterba@suse.cz>
      Signed-off-by: Chris Mason <chris.mason@fusionio.com>
      6f60cbd3
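
      Below is a minimal user-space sketch of the approach: probe for the
      superblock at its fixed offset with a plain read, so the device blocksize
      never matters.  It is an illustration only, not the kernel patch (which
      reads through the block device's page cache); the offset and magic
      constants are the usual btrfs on-disk values.

        /* sketch: scan a device for a btrfs super without set_blocksize()/bread() */
        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        #define SUPER_OFFSET  65536ULL      /* primary btrfs super sits at 64 KiB */
        #define SUPER_SIZE    4096
        #define MAGIC         "_BHRfS_M"    /* 8-byte magic, 64 bytes into the super */
        #define MAGIC_OFFSET  64

        int main(int argc, char **argv)
        {
            unsigned char super[SUPER_SIZE];
            int fd;

            if (argc != 2) {
                fprintf(stderr, "usage: %s <device>\n", argv[0]);
                return 1;
            }
            fd = open(argv[1], O_RDONLY);
            if (fd < 0) {
                perror("open");
                return 1;
            }
            /* pread() is oblivious to whatever blocksize the device is set to */
            if (pread(fd, super, SUPER_SIZE, SUPER_OFFSET) != SUPER_SIZE) {
                perror("pread");
                close(fd);
                return 1;
            }
            close(fd);
            if (memcmp(super + MAGIC_OFFSET, MAGIC, 8) == 0)
                printf("btrfs superblock found on %s\n", argv[1]);
            else
                printf("no btrfs superblock on %s\n", argv[1]);
            return 0;
        }
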
    • Btrfs: fix crash in log replay with qgroups enabled · 2a745b14
      Arne Jansen authored
      When replaying a log tree with qgroups enabled, tree_mod_log_rewind does a
      sanity-check of the number of items against the maximum possible number.
      It calculates that number with the nodesize of fs_root. Unfortunately
      fs_root is not yet set at this stage. So instead use the nodesize from
      tree_root, which is already initialized.
      Signed-off-by: Arne Jansen <sensille@gmx.net>
      Signed-off-by: Chris Mason <chris.mason@fusionio.com>
      2a745b14
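
      For context, the bound being sanity-checked is the number of key pointers
      that fit into one tree node, which is derived from the nodesize (hence it
      must come from a root that is already initialized, such as tree_root
      here).  A rough sketch of that arithmetic; the 101-byte header and
      33-byte key-pointer sizes are assumptions matching the usual btrfs
      on-disk format:

        /* sketch: maximum number of pointers a btrfs node of a given size can hold */
        #include <stdio.h>

        #define HEADER_SIZE  101    /* assumed sizeof(struct btrfs_header) */
        #define KEY_PTR_SIZE  33    /* assumed sizeof(struct btrfs_key_ptr) */

        static unsigned int nodeptrs_per_block(unsigned int nodesize)
        {
            return (nodesize - HEADER_SIZE) / KEY_PTR_SIZE;
        }

        int main(void)
        {
            /* tree_root's nodesize is already valid here; fs_root's is not yet */
            unsigned int sizes[] = { 4096, 16384, 65536 };
            for (int i = 0; i < 3; i++)
                printf("nodesize %5u -> at most %u pointers\n",
                       sizes[i], nodeptrs_per_block(sizes[i]));
            return 0;
        }
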
  3. 06 Feb, 2013 3 commits
  4. 05 Feb, 2013 6 commits
    • Btrfs: fix possible stale data exposure · 59fe4f41
      Josef Bacik authored
      We specifically do not update the disk i_size if there are ordered extents
      outstanding for any area between the current disk_i_size and our ordered
      extent so that we do not expose stale data.  The problem is that the
      existing check only tests whether the ordered extent starts at or after
      the current disk_i_size; it does not account for an ordered extent that
      starts before the current disk_i_size and ends past it.  Fix this by
      checking whether the extent ends past the disk_i_size.  Thanks,
      Signed-off-by: Josef Bacik <jbacik@fusionio.com>
      59fe4f41
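
      A tiny standalone sketch of the corrected test, with made-up names (the
      real logic lives in the ordered-extent i_size update path): an extent
      straddling disk_i_size must also hold the update back.

        /* sketch: should this ordered extent hold back the disk_i_size update? */
        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        struct fake_ordered_extent {         /* hypothetical stand-in */
            uint64_t file_offset;
            uint64_t len;
        };

        /* old check: only where the extent starts */
        static bool blocks_update_old(const struct fake_ordered_extent *oe,
                                      uint64_t disk_i_size)
        {
            return oe->file_offset >= disk_i_size;
        }

        /* fixed check: an extent that ends past disk_i_size must block it too */
        static bool blocks_update_new(const struct fake_ordered_extent *oe,
                                      uint64_t disk_i_size)
        {
            return oe->file_offset + oe->len > disk_i_size;
        }

        int main(void)
        {
            /* starts before disk_i_size, ends after it: the missed case */
            struct fake_ordered_extent oe = { .file_offset = 4096, .len = 8192 };

            printf("old: %d  new: %d\n",
                   blocks_update_old(&oe, 8192), blocks_update_new(&oe, 8192));
            return 0;
        }
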
    • Btrfs: fix missing i_size update · 5d1f4020
      Josef Bacik authored
      If there is an ordered extent before the one we are currently completing,
      and that extent is past the current disk_i_size, we put our i_size update
      into that ordered extent so that we do not expose stale data.  The
      problem is that if the disk i_size is updated past the previous ordered
      extent, we won't apply the pending i_size update.  So check the pending
      i_size update, and if it is above the current disk i_size, go ahead and
      try to update.  Thanks,
      Signed-off-by: Josef Bacik <jbacik@fusionio.com>
      5d1f4020
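
      A minimal sketch of the added rule, with hypothetical names: when
      finishing an ordered extent, also honour an i_size update that an earlier
      ordered extent had stashed, if it is beyond what is currently recorded on
      disk.

        /* sketch: do not lose a stashed (pending) i_size update */
        #include <stdint.h>
        #include <stdio.h>

        static uint64_t pick_disk_i_size(uint64_t disk_i_size, uint64_t pending)
        {
            /* the fix: a pending update above the current disk i_size still
             * has to be applied, even though the ordered extent that carried
             * it has already been passed by */
            return pending > disk_i_size ? pending : disk_i_size;
        }

        int main(void)
        {
            printf("new disk_i_size = %llu\n",
                   (unsigned long long)pick_disk_i_size(16384, 20480));
            return 0;
        }
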
    • Btrfs: fix race between snapshot deletion and getting inode · 6f1c3605
      Liu Bo authored
      While running a snapshot test script created by Mitch and David, the
      race between autodefrag and snapshot deletion can corrupt the dead_roots
      list, leading to a crash in btrfs_clean_old_snapshots().
      
      Besides autodefrag, scrub does the same thing, i.e. reads the root first
      and then gets the inode.
      
      Here is the story (taking autodefrag as an example):
      (1) when we delete a snapshot or subvolume, it sets its root's refs to
      zero and does an iput() on its own inode, and if this inode happens to
      be the only active in-memory one in the root's inode rbtree, it adds
      itself to the global dead_roots list for later cleanup.
      
      (2) after (1), the autodefrag thread may read another inode for defrag,
      and that inode is in the deleted snapshot/subvolume, but none of this
      checks whether the root is still valid (refs > 0).  The end result is
      that the deleted snapshot/subvolume's root is added to the global
      dead_roots list AGAIN.
      
      Fortunately, we already have an srcu lock to avoid the race, namely
      subvol_srcu.
      
      So all we need to do is take that lock to protect 'read root and get
      inode', since we synchronize to wait for the RCU grace period before
      adding anything to the global dead_roots list.
      Reported-by: Mitch Harder <mitch.harder@sabayonlinux.org>
      Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
      Signed-off-by: Josef Bacik <jbacik@fusionio.com>
      6f1c3605
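
      A user-space analogy of the rule described above, with pthread_rwlock_t
      standing in for subvol_srcu and a made-up root structure; this is a
      sketch of the idea, not the kernel code.  The reader holds the read side
      while it looks up the root, checks refs and grabs the inode; the deleter
      kills the root only while no reader is inside, before queueing it on the
      dead list.

        /* sketch: 'read root and get inode' vs. snapshot deletion
         * (build with -lpthread; pthread_rwlock stands in for subvol_srcu) */
        #include <pthread.h>
        #include <stdbool.h>
        #include <stdio.h>

        struct fake_root {                  /* hypothetical stand-in */
            int refs;                       /* 0 once the subvolume is deleted */
            bool on_dead_list;
        };

        static pthread_rwlock_t subvol_lock = PTHREAD_RWLOCK_INITIALIZER;

        /* reader side (autodefrag/scrub): use the root only while it is alive */
        static bool read_root_and_get_inode(struct fake_root *root)
        {
            bool ok = false;

            pthread_rwlock_rdlock(&subvol_lock);
            if (root->refs > 0) {
                /* ... iget() the inode and queue the defrag work here ... */
                ok = true;
            }
            pthread_rwlock_unlock(&subvol_lock);
            return ok;
        }

        /* deleter side: drop the root while no reader is inside, then queue it */
        static void delete_snapshot(struct fake_root *root)
        {
            pthread_rwlock_wrlock(&subvol_lock);  /* analogue of synchronize_srcu() */
            root->refs = 0;
            pthread_rwlock_unlock(&subvol_lock);
            root->on_dead_list = true;            /* added to dead_roots exactly once */
        }

        int main(void)
        {
            struct fake_root root = { .refs = 1, .on_dead_list = false };

            printf("before deletion: usable = %d\n", read_root_and_get_inode(&root));
            delete_snapshot(&root);
            printf("after deletion:  usable = %d\n", read_root_and_get_inode(&root));
            return 0;
        }
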
    • Btrfs: fix missing release of the space/qgroup reservation in start_transaction() · 843fcf35
      Miao Xie authored
      When we fail to start a transaction, we need to release the reserved
      free space and qgroup space.  Fix it.
      Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
      Reviewed-by: Jan Schmidt <list.btrfs@jan-o-sch.net>
      Signed-off-by: Josef Bacik <jbacik@fusionio.com>
      843fcf35
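
      The pattern behind the fix is ordinary error-path unwinding; a generic
      sketch with made-up names (not the btrfs code): whatever was reserved
      before the failure point is handed back before returning the error.

        /* sketch: release every reservation taken before a failure */
        #include <errno.h>
        #include <stdio.h>

        static long reserved_space;      /* stand-ins for space/qgroup reservations */
        static long reserved_qgroup;

        static int start_transaction_ex(long bytes, long qgroup_bytes, int fail)
        {
            reserved_space  += bytes;          /* reserve free space */
            reserved_qgroup += qgroup_bytes;   /* reserve qgroup space */

            if (fail)                          /* e.g. joining the transaction fails */
                goto out_release;
            return 0;

        out_release:
            /* the fix: give everything back before returning the error */
            reserved_space  -= bytes;
            reserved_qgroup -= qgroup_bytes;
            return -ENOSPC;
        }

        int main(void)
        {
            start_transaction_ex(4096, 4096, 1);
            printf("leaked space: %ld, leaked qgroup: %ld\n",
                   reserved_space, reserved_qgroup);
            return 0;
        }
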
    • Btrfs: fix wrong sync_writers decrement in btrfs_file_aio_write() · 0a3404dc
      Miao Xie authored
      If the checks at the beginning of btrfs_file_aio_write() fail, we must
      not decrement ->sync_writers, because we never incremented it.  Fix it.
      Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
      Signed-off-by: Josef Bacik <jbacik@fusionio.com>
      0a3404dc
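
      The bug is an unbalanced counter on an early-return path; a tiny generic
      sketch with hypothetical names:

        /* sketch: only decrement what was actually incremented */
        #include <stdio.h>

        static int sync_writers;            /* stand-in for the per-inode counter */

        static int file_write(int checks_ok)
        {
            if (!checks_ok)
                return -1;                  /* bail out before the increment:
                                               nothing to undo here */
            sync_writers++;                 /* increment only after the checks pass */
            /* ... do the write ... */
            sync_writers--;                 /* balanced with the increment above */
            return 0;
        }

        int main(void)
        {
            file_write(0);                  /* failing checks must leave it at 0 */
            file_write(1);
            printf("sync_writers = %d\n", sync_writers);
            return 0;
        }
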
    • Btrfs: do not merge logged extents if we've removed them from the tree · 222c81dc
      Josef Bacik authored
      If somebody is fsyncing and writing out the existing extents, we may
      have already removed the extent map from the em tree, but it is still
      valid for the current fsync, so we go ahead and write it.  The problem
      is that we unconditionally try to merge it back into the em tree; if it
      has been removed from the tree, that leads to use-after-free problems.
      Fix this to only merge if the extent map is still part of the tree.
      Thanks,
      Signed-off-by: Josef Bacik <jbacik@fusionio.com>
      222c81dc
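
      A sketch of the guard described above, with a made-up extent-map
      structure (the real code operates on the btrfs em tree): merging is
      skipped once the map is no longer linked into the tree.

        /* sketch: never merge an extent map already dropped from its tree */
        #include <stdbool.h>
        #include <stdio.h>

        struct fake_extent_map {     /* hypothetical stand-in */
            bool in_tree;            /* still linked into the em tree? */
        };

        static void try_merge_map(struct fake_extent_map *em)
        {
            /* the fix: a map removed from the tree may be freed by whoever
             * removed it; touching its tree neighbours here would be a
             * use-after-free */
            if (!em->in_tree)
                return;
            /* ... merge with adjacent extent maps in the tree ... */
        }

        int main(void)
        {
            struct fake_extent_map em = { .in_tree = false };

            try_merge_map(&em);      /* safely does nothing for a removed map */
            printf("merge skipped for a removed extent map\n");
            return 0;
        }
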
  5. 01 Feb, 2013 1 commit
  6. 24 Jan, 2013 7 commits