1. 24 Jun, 2010 9 commits
    • md/raid5: More careful check for "has array failed". · 674806d6
      NeilBrown authored
      When we are reshaping an array, the device failure combinations
      that cause us to decide that the array has failed are more subtle.
      
      In particular, any 'spare' will be fully in-sync in the section
      of the array that has already been reshaped, thus failures that
      affect only that section are less critical.
      
      So encode this subtlety in a new function and call it as appropriate.
      
      The case that showed this problem was a 4 drive RAID5 to 8 drive RAID6
      conversion where the last two devices failed.
      This resulted in:
      
        good good good good incomplete good good failed failed
      
      while converting the intermediate 5-drive RAID6 to the 8-drive RAID6.
      The incomplete device made the whole array look bad, but as it was
      actually good for the section that had already been converted to
      8 drives, all the data was actually safe.
      Reported-by: Terry Morris <tbmorris@tbmorris.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
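
      A simplified sketch of the check this commit describes (illustrative C;
      the conf type and the helper names are assumptions, not the actual
      raid5.c code):

        /*
         * During a reshape the array has two geometries: the part before
         * reshape_position still uses the old device count, the part beyond
         * it already uses the new one.  Judge "failed" against each section
         * separately; a spare that the reshape is rebuilding is fully
         * in-sync for the already-reshaped section, so it only counts as
         * missing in the section that still needs it.
         */
        static int array_has_failed(struct r5conf_like *conf)     /* assumed type */
        {
                int degraded, i;

                /* Section still laid out with the old device count. */
                degraded = 0;
                for (i = 0; i < conf->previous_raid_disks; i++)
                        if (dev_missing_for_old_section(conf, i))  /* assumed helper */
                                degraded++;
                if (degraded > conf->max_degraded)
                        return 1;

                /* Section already converted to the new device count. */
                degraded = 0;
                for (i = 0; i < conf->raid_disks; i++)
                        if (dev_missing_for_new_section(conf, i))  /* assumed helper */
                                degraded++;
                return degraded > conf->max_degraded;
        }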
    • md: Don't update ->recovery_offset when reshaping an array to fewer devices. · 70fffd0b
      NeilBrown authored
      When an array is reshaped to have fewer devices, the reshape proceeds
      from the end of the devices to the beginning.
      
      If a device happens to be non-In_sync (which is possible but rare)
      we would normally update the ->recovery_offset as the reshape
      progresses. However that would be wrong, as the recovery_offset records
      that the early part of the device is in_sync, while in fact it would
      only be the later part that is in_sync, and in any case the offset
      would be measured from the wrong end of the device.
      
      Relatedly, if after a reshape a spare is discovered not to have been
      recovered all the way to the end, do not allow spare_active
      to incorporate it in the array.
      
      This becomes relevant in the following sample scenario:
      
      A 4 drive RAID5 is converted to a 6 drive RAID6 in a combined
      operation.
      The RAID5->RAID6 conversion will cause a 5th drive to be included as a
      spare, then the 5-drive -> 6-drive reshape will effectively rebuild that
      spare as it progresses.  The 6th drive is treated as in_sync the whole
      time, as there is never a case where we might consider reading from
      it but must not because it holds no valid data yet.
      
      If we interrupt this reshape part-way through and reverse it to return
      to a 5-drive RAID6 (or even a 4-drive RAID5), we don't want to update
      the recovery_offset - as that would be wrong - and we don't want to
      include that spare as active in the 5-drive RAID6 when the reversed
      reshape completes, as it will still be mostly out-of-sync.
      Signed-off-by: NeilBrown <neilb@suse.de>
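
      A sketch of the two guards described above (loosely following the
      description; simplified, not the verbatim patch):

        /* (1) When advancing recovery_offset as a resync/reshape makes
         *     progress, skip it while reshaping to fewer devices, since
         *     that reshape runs from the end towards the start. */
        list_for_each_entry(rdev, &mddev->disks, same_set)
                if (rdev->raid_disk >= 0 &&
                    mddev->delta_disks >= 0 &&          /* not shrinking */
                    !test_bit(Faulty, &rdev->flags) &&
                    !test_bit(In_sync, &rdev->flags) &&
                    rdev->recovery_offset < mddev->curr_resync)
                        rdev->recovery_offset = mddev->curr_resync;

        /* (2) In the personality's spare_active(), only promote a device
         *     whose recovery really reached the end of the device. */
        if (rdev && rdev->recovery_offset == MaxSector &&
            !test_bit(Faulty, &rdev->flags) &&
            !test_and_set_bit(In_sync, &rdev->flags))
                count++;        /* one more newly active device */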
    • md/raid5: avoid oops when number of devices is reduced then increased. · e4e11e38
      NeilBrown authored
      The entries in the stripe_cache maintained by raid5 are enlarged
      when we increase the number of devices in the array, but not
      shrunk when we reduce the number of devices.
      So if entries are added after reducing the number of devices, we
      must ensure that we initialise the whole entry, not just the part that
      is currently relevant.  Otherwise, if we enlarge the array again,
      we will reference uninitialised values.
      
      As grow_buffers/shrink_buffers now want to use a count that is stored
      explicitly in the raid_conf, they should get it from there rather than
      being passed it as a parameter.
      Signed-off-by: NeilBrown <neilb@suse.de>
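
      A simplified sketch of the resulting interface (the conf field name is
      an assumption):

        /* grow_buffers()/shrink_buffers() now read the device count from the
         * raid5 conf instead of taking it as a parameter, so every slot of
         * the (maximally sized) stripe entry gets initialised. */
        static int grow_buffers(struct stripe_head *sh)
        {
                int i;
                int num = sh->raid_conf->pool_size;     /* assumed field name */

                for (i = 0; i < num; i++) {
                        struct page *page = alloc_page(GFP_KERNEL);

                        if (!page)
                                return 1;
                        sh->dev[i].page = page;
                }
                return 0;
        }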
    • md: enable raid4->raid0 takeover · 049d6c1e
      Maciej Trela authored
      Only level 5 with layout=PARITY_N can be taken over to raid0 now.
      Let's allow level 4 as well.
      Signed-off-by: Maciej Trela <maciej.trela@intel.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
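
      A sketch of the relaxed check, inside raid0's takeover entry point
      (the helper name is illustrative):

        /* Both raid4 and raid5 with ALGORITHM_PARITY_N keep all parity on
         * one dedicated device, so both can be converted to raid0. */
        if (mddev->level == 4 ||
            (mddev->level == 5 && mddev->layout == ALGORITHM_PARITY_N))
                return raid0_takeover_raid45(mddev);    /* illustrative helper */
        return ERR_PTR(-EINVAL);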
    • md: clear layout after ->raid0 takeover · 001048a3
      Maciej Trela authored
      After takeover from raid5/10 -> raid0, mddev->layout is not cleared.
      Signed-off-by: Maciej Trela <maciej.trela@intel.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
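
      A minimal sketch of the fix (surrounding takeover code omitted):

        /* raid0 has no layout, so drop the stale raid5/raid10 value once
         * the takeover has completed. */
        mddev->layout = 0;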
    • md: fix raid10 takeover: use new_layout for setup_conf · f73ea873
      Maciej Trela authored
      Use mddev->new_layout in setup_conf.
      Also use new_chunk, and don't set ->degraded in takeover().  That
      gets set in run().
      Signed-off-by: Maciej Trela <maciej.trela@intel.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
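
      A sketch of the intent (the conf field names and the copies_from_layout
      helper are illustrative, not the actual raid10 code):

        static conf_t *setup_conf(mddev_t *mddev)
        {
                conf_t *conf = kzalloc(sizeof(*conf), GFP_KERNEL);

                if (!conf)
                        return ERR_PTR(-ENOMEM);
                /* During a takeover the target geometry lives in the new_*
                 * fields, so read those rather than the current ones. */
                conf->copies        = copies_from_layout(mddev->new_layout);
                conf->chunk_sectors = mddev->new_chunk_sectors;
                /* ->degraded is deliberately not touched here; run()
                 * computes it once the array is assembled. */
                /* ... rest of the setup ... */
                return conf;
        }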
    • md: fix handling of array level takeover that re-arranges devices. · e93f68a1
      NeilBrown authored
      Most array level changes leave the list of devices largely unchanged,
      possibly causing one at the end to become redundant.
      However conversions between RAID0 and RAID10 need to renumber
      all devices (except 0).
      
      This renumbering is currently being done in the ->run method when the
      new personality takes over.  However this is too late, as the common
      code in md.c might already have invalidated some of the devices if
      they had a ->raid_disk number that appeared too high.
      
      Moving it into the ->takeover method is too early as the array is
      still active at that time and wrong ->raid_disk numbers could cause
      confusion.
      
      So add a ->new_raid_disk field to mdk_rdev_s and use it to communicate
      the new raid_disk number.
      Now the common code knows exactly which devices need to be renumbered
      and which can be invalidated, and can do it all at a convenient time
      when the array is suspended.
      It can also update some symlinks in sysfs which previously were not
      being updated correctly.
      Reported-by: Maciej Trela <maciej.trela@intel.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
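
      A sketch of the flow described above (the sysfs link helpers are
      assumptions standing in for the actual create/remove-link calls):

        /* In the personality's ->takeover: only record the desired slot. */
        rdev->new_raid_disk = target_slot;      /* field added by this commit */

        /* Later, in the md core, once the array is suspended: */
        list_for_each_entry(rdev, &mddev->disks, same_set) {
                if (rdev->raid_disk < 0 ||
                    rdev->new_raid_disk == rdev->raid_disk)
                        continue;
                remove_rdev_symlink(mddev, rdev);       /* assumed helper */
                rdev->raid_disk = rdev->new_raid_disk;
                if (rdev->raid_disk >= 0)
                        add_rdev_symlink(mddev, rdev);  /* assumed helper */
        }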
    • md: raid10: Fix null pointer dereference in fix_read_error() · 0544a21d
      Prasanna S. Panchamukhi authored
      Such a NULL pointer dereference can occur when the driver is fixing
      read errors/bad blocks and the disk is physically removed,
      causing a system crash. This patch checks that
      rcu_dereference() returns a valid rdev before accessing it in fix_read_error().
      
      Cc: stable@kernel.org
      Signed-off-by: Prasanna S. Panchamukhi <prasanna.panchamukhi@riverbed.com>
      Signed-off-by: Rob Becker <rbecker@riverbed.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
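
      A simplified sketch of the guard in the raid10 read-error path ("d" is
      the mirror slot being repaired; not the verbatim patch):

        rcu_read_lock();
        rdev = rcu_dereference(conf->mirrors[d].rdev);
        if (rdev &&                     /* disk may have been hot-removed */
            test_bit(In_sync, &rdev->flags)) {
                atomic_inc(&rdev->nr_pending);
                rcu_read_unlock();
                /* ... issue the corrective read/write against rdev ... */
                rdev_dec_pending(rdev, conf->mddev);
        } else
                rcu_read_unlock();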
    • Restore partition detection of newly created md arrays. · f3b99be1
      NeilBrown authored
      Commit  b821eaa5 broke partition
      detection for md arrays.
      
      The logic was almost right.  However, if revalidate_disk is called
      when the device is not yet open, bdev->bd_disk won't be set, so the
      flush_disk() call will not set bd_invalidated.
      
      So when md_open is called we still need to ensure that
      ->bd_invalidated gets set.  This is easily done with a call to
      check_disk_size_change in the place where the offending commit removed
      check_disk_change.  At the important times, the size will have changed
      from 0 to non-zero, so check_disk_size_change will set bd_invalidated.
      Tested-by: Duncan <1i5t5.duncan@cox.net>
      Reported-by: Duncan <1i5t5.duncan@cox.net>
      Signed-off-by: NeilBrown <neilb@suse.de>
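
      A sketch of the resulting open path (simplified; locking, refcounting
      and error handling omitted):

        static int md_open(struct block_device *bdev, fmode_t mode)
        {
                mddev_t *mddev = bdev->bd_disk->private_data;
                int err = 0;

                /* ... existing mddev locking and refcounting ... */

                /* If the array has just gained a non-zero size this sets
                 * bdev->bd_invalidated, so the partition table is rescanned
                 * on this open. */
                check_disk_size_change(mddev->gendisk, bdev);

                return err;
        }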
  2. 12 Jun, 2010 1 commit
  3. 11 Jun, 2010 30 commits