1. 14 Dec, 2009 8 commits
    • md: add honouring of suspend_{lo,hi} to raid1. · 6eef4b21
      This will allow us to stop writeout to portions of the array
      while they are resynced by someone else - e.g. another node in
      a cluster.
      Signed-off-by: NeilBrown <neilb@suse.de>
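      The shape of the check in raid1's make_request is roughly the
      following sketch (the sector arithmetic and the conf->wait_barrier
      waitqueue follow raid1 conventions of that era; treat the exact
      placement as illustrative):

        /* Block a write that overlaps the suspended window
         * [suspend_lo, suspend_hi) until the window moves away.
         * The wait is interruptible because userspace controls
         * the range. */
        if (bio_data_dir(bio) == WRITE &&
            bio->bi_sector < mddev->suspend_hi &&
            bio->bi_sector + (bio->bi_size >> 9) > mddev->suspend_lo) {
                DEFINE_WAIT(w);
                for (;;) {
                        flush_signals(current);
                        prepare_to_wait(&conf->wait_barrier, &w,
                                        TASK_INTERRUPTIBLE);
                        if (bio->bi_sector + (bio->bi_size >> 9) <=
                                    mddev->suspend_lo ||
                            bio->bi_sector >= mddev->suspend_hi)
                                break;
                        schedule();
                }
                finish_wait(&conf->wait_barrier, &w);
        }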
    • md/raid5: don't complete make_request on barrier until writes are scheduled · 729a1866
      The post-barrier-flush is sent by md as soon as make_request on the
      barrier write completes.  For raid5, the data might not be in the
      per-device queues yet.  So for barrier requests, wait for any
      pre-reading to be done so that the request will be in the per-device
      queues.
      
      We use the 'preread_active' count to check that nothing is still in
      the preread phase, and delay the decrement of this count until after
      write requests have been submitted to the underlying devices.
      Signed-off-by: NeilBrown <neilb@suse.de>
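      In raid5 terms this is roughly two changes (sketched below;
      preread_active_stripes is the counter the message calls
      'preread_active', and conf->wait_for_stripe is raid5's existing
      waitqueue):

        /* 1. In make_request(), before completing a barrier write,
         *    wait until nothing is still in the preread phase, so
         *    every write has reached the per-device queues. */
        if (unlikely(bio_rw_flagged(bi, BIO_RW_BARRIER)))
                wait_event(conf->wait_for_stripe,
                           atomic_read(&conf->preread_active_stripes) == 0);

        /* 2. In handle_stripe(), decrement preread_active_stripes
         *    only after ops_run_io() has submitted the writes to the
         *    member devices, so the wait above really proves that the
         *    requests have been queued. */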
    • md: support barrier requests on all personalities. · a2826aa9
      Previously barriers were only supported on RAID1.  This is because
      other levels require synchronisation across all devices and so need
      a different approach.
      Here is that approach.
      
      When a barrier arrives, we send a zero-length barrier to every active
      device.  When that completes - and if the original request was not
      empty - we submit the barrier request itself (with the barrier flag
      cleared) and then submit a fresh load of zero-length barriers.
      
      The barrier request itself is asynchronous, but any subsequent
      request will block until the barrier completes.
      
      The reason for clearing the barrier flag is that a barrier request is
      allowed to fail.  If we pass a non-empty barrier through a striping
      raid level it is conceivable that part of it could succeed and part
      could fail.  That would be way too hard to deal with.
      So if the first run of zero-length barriers succeeds, we assume all is
      sufficiently well that we send the request and ignore errors in the
      second run of barriers.
      
      RAID5 needs extra care as write requests may not have been submitted
      to the underlying devices yet.  So we flush the stripe cache before
      proceeding with the barrier.
      
      Note that the second set of zero-length barriers are submitted
      immediately after the original request is submitted.  Thus when
      a personality finds mddev->barrier to be set during make_request,
      it should not return from make_request until the corresponding
      per-device request(s) have been queued.
      
      That will be done in later patches.
      Signed-off-by: NeilBrown <neilb@suse.de>
      Reviewed-by: Andre Noll <maan@systemlinux.org>
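      A condensed, synchronous sketch of that flow follows; the real
      md.c implementation drives these phases from bi_end_io callbacks
      and a workqueue, and the helper names here are illustrative:

        static void md_barrier_flow(mddev_t *mddev, struct bio *bio)
        {
                mddev->barrier = bio;  /* later requests block on this */

                /* Phase 1: a zero-length barrier to every active device. */
                send_zero_length_barriers(mddev);
                wait_for_device_barriers(mddev);

                /* Phase 2: the original request, barrier flag cleared,
                 * if it actually carried data. */
                if (bio->bi_size) {
                        bio->bi_rw &= ~(1 << BIO_RW_BARRIER);
                        mddev->pers->make_request(mddev->queue, bio);
                }

                /* Phase 3: a second round of zero-length barriers;
                 * errors here are ignored. */
                send_zero_length_barriers(mddev);

                mddev->barrier = NULL;
                wake_up(&mddev->sb_wait);  /* unblock waiting requests */
        }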
    • md: don't reset curr_resync_completed after an interrupted resync · efa59339
      If a resync/recovery/check/repair is interrupted for some reason, it
      can be useful to know exactly where it got up to.
      So in that case, do not clear curr_resync_completed.
      Initialise it when starting a resync/recovery/... instead.
      Signed-off-by: NeilBrown <neilb@suse.de>
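      The change amounts to moving one assignment (a sketch; the exact
      placement in md_do_sync() is illustrative):

        /* At the start of a new resync/recovery/check/repair pass: */
        mddev->curr_resync_completed = 0;

        /* The old reset of curr_resync_completed in the interrupted
         * path is dropped, so the sysfs 'sync_completed' file keeps
         * reporting how far the aborted pass got. */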
    • md: adjust resync_min usefully when resync aborts. · c07b70ad
      When a 'check' or 'repair' finished we should clear resync_min
      so that a future check/repair will cover the whole array (by default).
      However if it is interrupted, we should update resync_min to
      where we got up to, so that when the check/repair continues it
      just does the remainder of the array.
      Signed-off-by: NeilBrown <neilb@suse.de>
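      A sketch of that logic, as it would sit in md_do_sync()'s exit
      path for a check/repair pass (placement illustrative):

        if (test_bit(MD_RECOVERY_INTR, &mddev->recovery))
                /* interrupted: resume point for the next check/repair */
                mddev->resync_min = mddev->curr_resync_completed;
        else
                /* completed: the next check/repair covers the whole
                 * array again */
                mddev->resync_min = 0;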
    • 7820f9e1
    • md/raid5: remove some sparse warnings. · 8553fe7e
      qd_idx is previously declared and given exactly the same value!
      Signed-off-by: NeilBrown <neilb@suse.de>
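      For illustration, the warning comes from a pattern like this
      hypothetical reduction, where an inner declaration shadows the
      outer one with an identical value:

        static void example(struct stripe_head *sh)
        {
                int qd_idx = sh->qd_idx;

                if (sh->reconstruct_state) {
                        /* sparse: symbol 'qd_idx' shadows an earlier one;
                         * the fix is simply to delete this declaration
                         * and reuse the outer variable. */
                        int qd_idx = sh->qd_idx;
                }
        }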
    • md/bitmap: protect against bitmap removal while being updated. · aa5cbd10
      A write intent bitmap can be removed from an array while the
      array is active.
      When this happens, all IO is suspended and flushed before the
      bitmap is removed.
      However it is possible that bitmap_daemon_work is still running to
      clear old bits from the bitmap.  If it is, it can dereference the
      bitmap after it has been freed.
      
      So introduce a new mutex to protect bitmap_daemon_work and get it
      before destroying a bitmap.
      
      This is suitable for any current -stable kernel.
      Signed-off-by: NeilBrown <neilb@suse.de>
      Cc: stable@kernel.org
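      A sketch of the locking, assuming the new mutex sits next to the
      bitmap pointer in the mddev (field and helper names here are
      illustrative):

        void bitmap_daemon_work(mddev_t *mddev)
        {
                /* Hold the mutex across the whole scan so the bitmap
                 * cannot be freed while old bits are being cleared. */
                mutex_lock(&mddev->bitmap_mutex);
                if (mddev->bitmap)
                        __bitmap_daemon_work(mddev->bitmap);
                mutex_unlock(&mddev->bitmap_mutex);
        }

        void bitmap_destroy(mddev_t *mddev)
        {
                struct bitmap *bitmap = mddev->bitmap;

                /* Detach under the same mutex: a concurrent
                 * bitmap_daemon_work() either finishes first or sees
                 * mddev->bitmap == NULL. */
                mutex_lock(&mddev->bitmap_mutex);
                mddev->bitmap = NULL;
                mutex_unlock(&mddev->bitmap_mutex);

                bitmap_free(bitmap);
        }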
  2. 12 Dec, 2009 32 commits