  1. 20 Jun, 2017 1 commit
    • md/raid5: limit request size according to implementation limits · 523d7696
      Konstantin Khlebnikov authored
      commit e8d7c332 upstream.
      
      The current implementation employs a 16-bit counter of active stripes
      in the lower bits of bio->bi_phys_segments. If a request is big enough
      to overflow this counter, the bio will be completed and freed too early.

      Fortunately this does not happen in the default configuration, because
      several other limits prevent it: stripe_cache_size * nr_disks
      effectively limits the count of active stripes, and the small
      max_sectors_kb of the lower disks prevents it during normal read/write
      operations.

      The overflow easily happens during discard if it is enabled by the
      module parameter "devices_handle_discard_safely" and stripe_cache_size
      is set big enough.

      This patch limits the request size to 256MiB - 8KiB to prevent
      overflows.
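      
      A hedged sketch of the resulting cap (assuming the driver's 4KiB
      stripes, i.e. STRIPE_SECTORS = 8, with one counter value reserved for
      initialization; 0xfffe stripes of 4KiB is exactly 256MiB - 8KiB):
      
      	/* Cap requests so the 16-bit active-stripe counter in
      	 * bio->bi_phys_segments cannot overflow:
      	 * 0xfffe stripes * 4KiB = 256MiB - 8KiB. */
      	blk_queue_max_hw_sectors(mddev->queue, 0xfffe * STRIPE_SECTORS);
      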
      Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
      Cc: Shaohua Li <shli@kernel.org>
      Cc: Neil Brown <neilb@suse.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
      Signed-off-by: Willy Tarreau <w@1wt.eu>
  2. 07 Jun, 2016 1 commit
  3. 06 Jun, 2015 1 commit
  4. 06 Mar, 2015 1 commit
  5. 30 Jan, 2015 1 commit
    • md/raid5: fetch_block must fetch all the blocks handle_stripe_dirtying wants. · a4f2a8dd
      NeilBrown authored
      commit 108cef3a upstream.
      
      It is critical that fetch_block() and handle_stripe_dirtying()
      are consistent in their analysis of what needs to be loaded.
      Otherwise raid5 can wait forever for a block that won't be loaded.
      
      Currently when writing to a RAID5 that is resyncing, to a location
      beyond the resync offset, handle_stripe_dirtying chooses a
      reconstruct-write cycle, but fetch_block() assumes a
      read-modify-write, and a lockup can happen.
      
      So treat that case just like RAID6, just as we do in
      handle_stripe_dirtying.  RAID6 always does reconstruct-write.
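      
      One plausible rendering of the fix, as a hedged sketch (the exact
      condition in fetch_block() varies across kernel versions; recovery_cp
      is the resync offset):
      
      	/* Hedged sketch: beyond the resync offset, analyse the write
      	 * like RAID6 and load all blocks for reconstruct-write,
      	 * matching handle_stripe_dirtying. */
      	if (s->failed && fdev[0]->towrite &&
      	    (sh->raid_conf->level <= 5 ||
      	     sh->sector >= sh->raid_conf->mddev->recovery_cp))
      		return 1;	/* fetch this block */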
      
      This bug was introduced when the behaviour of handle_stripe_dirtying
      was changed in 3.7, so the patch is suitable for any kernel since,
      though it will need careful merging for some versions.
      
      Cc: stable@vger.kernel.org (v3.7+)
      Fixes: a7854487
      Reported-by: Henry Cai <henryplusplus@gmail.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  6. 09 Oct, 2014 1 commit
    • md/raid5: disable 'DISCARD' by default due to safety concerns. · 06905ff8
      NeilBrown authored
      commit 8e0e99ba upstream.
      
      It has come to my attention (thanks Martin) that 'discard_zeroes_data'
      is only a hint.  Some devices in some cases don't do what it
      says on the label.
      
      The use of DISCARD in RAID5 depends on reads from discarded regions
      being predictably zero.  If a write to a previously discarded region
      performs a read-modify-write cycle it assumes that the parity block
      was consistent with the data blocks.  If all were zero, this would
      be the case.  If some are and some aren't this would not be the case.
      This could lead to data corruption after a device failure when
      data needs to be reconstructed from the parity.
      
      As we cannot trust 'discard_zeroes_data', ignore it by default
      and so disallow DISCARD on all raid4/5/6 arrays.
      
      As many devices are trustworthy, and as there are benefits to using
      DISCARD, add a module parameter to override this caution and cause
      DISCARD to work if discard_zeroes_data is set.

      If a site wants to enable DISCARD on some arrays but not on others,
      it should select DISCARD support at the filesystem level, and set
      the raid456 module parameter:
          raid456.devices_handle_discard_safely=Y
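      
      The parameter declaration plausibly looks like this (a hedged sketch,
      following the names in the text above):
      
      	static bool devices_handle_discard_safely = false;
      	module_param(devices_handle_discard_safely, bool, 0644);
      	MODULE_PARM_DESC(devices_handle_discard_safely,
      		"Set to Y if all devices in each array reliably return zeroes on reads from discarded regions");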
      
      As this is a data-safety issue, I believe this patch is suitable for
      -stable.
      DISCARD support for RAID456 was added in 3.7
      
      Cc: Shaohua Li <shli@kernel.org>
      Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
      Cc: Mike Snitzer <snitzer@redhat.com>
      Cc: Heinz Mauelshagen <heinzm@redhat.com>
      Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
      Acked-by: Mike Snitzer <snitzer@redhat.com>
      Fixes: 620125f2
      Signed-off-by: NeilBrown <neilb@suse.de>
      [bwh: Backported to 3.10: adjust context]
      Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  7. 17 Sep, 2014 1 commit
  8. 22 Feb, 2014 1 commit
    • md/raid5: Fix CPU hotplug callback registration · 4d4ef86d
      Oleg Nesterov authored
      commit 789b5e03 upstream.
      
      Subsystems that want to register CPU hotplug callbacks, as well as perform
      initialization for the CPUs that are already online, often do it as shown
      below:
      
      	get_online_cpus();
      
      	for_each_online_cpu(cpu)
      		init_cpu(cpu);
      
      	register_cpu_notifier(&foobar_cpu_notifier);
      
      	put_online_cpus();
      
      This is wrong, since it is prone to ABBA deadlocks involving the
      cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently
      with CPU hotplug operations).
      
      Interestingly, the raid5 code can actually prevent double initialization and
      hence can use the following simplified form of callback registration:
      
      	register_cpu_notifier(&foobar_cpu_notifier);
      
      	get_online_cpus();
      
      	for_each_online_cpu(cpu)
      		init_cpu(cpu);
      
      	put_online_cpus();
      
      A hotplug operation that occurs between registering the notifier and
      calling get_online_cpus() won't disrupt anything, because the code
      takes care to perform the memory allocations only once.
      
      So reorganize the code in raid5 this way to fix the deadlock with callback
      registration.
      
      Cc: linux-raid@vger.kernel.org
      Fixes: 36d1c647
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      [Srivatsa: Fixed the unregister_cpu_notifier() deadlock, added the
      free_scratch_buffer() helper to condense code further and wrote the changelog.]
      Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  9. 06 Feb, 2014 1 commit
    • md/raid5: fix long-standing problem with bitmap handling on write failure. · 6ba854e9
      NeilBrown authored
      commit 9f97e4b1 upstream.
      
      Before a write starts we set a bit in the write-intent bitmap.
      When the write completes we clear that bit if the write was successful
      to all devices.  However if the write wasn't fully successful we
      should not clear the bit.  If the faulty drive is subsequently
      re-added, the fact that the bit is still set ensures that we will
      re-write the data that is missing.
      
      This logic is mediated by the STRIPE_DEGRADED flag - we only clear the
      bitmap bit when this flag is not set.
      Currently we correctly set the flag if a write starts when some
      devices are failed or missing.  But we do *not* set the flag if some
      device failed during the write attempt.
      This is wrong and can result in clearing the bit inappropriately.
      
      So: set the flag when a write fails.
      
      This bug has been present since bitmaps were introduced, so the fix
      is suitable for any -stable kernel.
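      
      A hedged sketch of the shape of the fix, in the write-completion
      error path:
      
      	/* Hedged sketch: a failed write marks the stripe degraded so
      	 * the bitmap bit is not cleared when the stripe completes. */
      	if (!uptodate) {
      		set_bit(STRIPE_DEGRADED, &sh->state);
      		set_bit(R5_WriteError, &sh->dev[i].flags);
      	}
      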
      Reported-by: Ethan Wilson <ethan.wilson@shiftmail.org>
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  10. 25 Jan, 2014 1 commit
  11. 13 Nov, 2013 2 commits
  12. 04 Aug, 2013 1 commit
    • md/raid5: fix interaction of 'replace' and 'recovery'. · c1dadcc1
      NeilBrown authored
      commit f94c0b66 upstream.
      
      If a device in a RAID4/5/6 is being replaced while another is being
      recovered, then the writes to the replacement device currently don't
      happen, resulting in corruption when the replacement completes and the
      new drive takes over.
      
      This is because the replacement writes are only triggered when
      's.replacing' is set and not when the similar 's.sync' is set (which
      is the case during resync and recovery - it means all devices need to
      be read).
      
      So schedule those writes when s.replacing is set as well.
      
      In this case we cannot use "STRIPE_INSYNC" to record that the
      replacement has happened as that is needed for recording that any
      parity calculation is complete.  So introduce STRIPE_REPLACED to
      record if the replacement has happened.
      
      For safety we should also check that STRIPE_COMPUTE_RUN is not set.
      This has a similar effect to the "s.locked == 0" test.  The latter
      ensures that no IO has been flagged but not started.  The former
      checks whether any parity calculation has been flagged but not
      started.  We must wait for both of these to complete before
      triggering the 'replace'.
      
      Add a similar test to the subsequent check for "are we finished
      yet".  This possibly isn't needed (it is subsumed in the
      STRIPE_INSYNC test), but it makes it more obvious that the REPLACE
      will happen before we think we are finished.
      
      Finally if a NeedReplace device is not UPTODATE then that is an
      error.  We really must trigger a warning.
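      
      A hedged sketch of the resulting trigger logic in handle_stripe()
      (names per the text above):
      
      	if (s.replacing && s.locked == 0 &&
      	    !test_bit(STRIPE_COMPUTE_RUN, &sh->state) &&
      	    !test_bit(STRIPE_REPLACED, &sh->state)) {
      		/* Write out to replacement devices where possible */
      		for (i = 0; i < conf->raid_disks; i++)
      			if (test_bit(R5_NeedReplace, &sh->dev[i].flags)) {
      				WARN_ON(!test_bit(R5_UPTODATE, &sh->dev[i].flags));
      				set_bit(R5_WantReplace, &sh->dev[i].flags);
      				set_bit(R5_LOCKED, &sh->dev[i].flags);
      				s.locked++;
      			}
      		set_bit(STRIPE_REPLACED, &sh->state);
      	}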
      
      This bug was introduced in commit 9a3e1101
      (md/raid5:  detect and handle replacements during recovery.)
      which introduced replacement for raid5.
      That was in 3.3-rc3, so any stable kernel since then would benefit
      from this fix.
      Reported-by: qindehua <13691222965@163.com>
      Tested-by: qindehua <qindehua@163.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  13. 13 Jun, 2013 1 commit
    • md/raid1,5,10: Disable WRITE SAME until a recovery strategy is in place · 5026d7a9
      H. Peter Anvin authored
      There are cases where the kernel will believe that the WRITE SAME
      command is supported by a block device which does not, in fact,
      support WRITE SAME.  This currently happens for SATA drivers behind a
      SAS controller, but there are probably a hundred other ways that can
      happen, including drive firmware bugs.
      
      After receiving an error for WRITE SAME the block layer will retry the
      request as a plain write of zeroes, but mdraid will consider the
      failure as fatal and consider the drive failed.  This has the effect
      that all the mirrors containing a specific set of data are each
      offlined in very rapid succession resulting in data loss.
      
      However, just bouncing the request back up to the block layer isn't
      ideal either, because the whole initial request-retry sequence should
      be inside the write bitmap fence, which probably means that md needs
      to do its own conversion of WRITE SAME to write zero.
      
      Until the failure scenario has been sorted out, disable WRITE SAME for
      raid1, raid5, and raid10.
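      
      The mitigation plausibly amounts to one line per personality (a
      hedged sketch):
      
      	/* Advertise zero WRITE SAME capacity so the block layer never
      	 * sends the command down to md at all. */
      	blk_queue_max_write_same_sectors(mddev->queue, 0);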
      
      [neilb: added raid5]
      
      This patch is appropriate for any -stable since 3.7 when write_same
      support was added.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
  14. 30 May, 2013 1 commit
  15. 24 Apr, 2013 2 commits
  16. 18 Apr, 2013 1 commit
  17. 23 Mar, 2013 3 commits
    • raid5: use bio_reset() · 2f6db2a7
      Kent Overstreet authored
      Had to shuffle the code around a bit (where bi_rw and bi_end_io were
      set), but there shouldn't really be anything tricky here.
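      
      The shuffle follows from bio_reset() wiping most of the bio, so those
      fields must be (re)assigned afterwards; a hedged sketch:
      
      	bio_reset(bi);			/* clears flags, bi_rw, bi_end_io, ... */
      	bi->bi_bdev = rdev->bdev;	/* ...so reinitialise after the reset */
      	bi->bi_rw = rw;
      	bi->bi_end_io = (rw & WRITE)
      		? raid5_end_write_request
      		: raid5_end_read_request;
      	bi->bi_private = sh;
      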
      Signed-off-by: Kent Overstreet <koverstreet@google.com>
      CC: Jens Axboe <axboe@kernel.dk>
      CC: NeilBrown <neilb@suse.de>
    • block: Use bio_sectors() more consistently · aa8b57aa
      Kent Overstreet authored
      A bunch of places in the code weren't using it where they could be;
      this will reduce the size of the patch that puts
      bi_sector/bi_size/bi_idx into a struct bvec_iter.
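      
      For reference, the accessor at the time was simply (hedged; this is
      the pre-bvec_iter form):
      
      	#define bio_sectors(bio)	((bio)->bi_size >> 9)	/* bytes -> 512B sectors */
      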
      Signed-off-by: Kent Overstreet <koverstreet@google.com>
      CC: Jens Axboe <axboe@kernel.dk>
      CC: "Ed L. Cashin" <ecashin@coraid.com>
      CC: Nick Piggin <npiggin@kernel.dk>
      CC: Jiri Kosina <jkosina@suse.cz>
      CC: Jim Paris <jim@jtan.com>
      CC: Geoff Levand <geoff@infradead.org>
      CC: Alasdair Kergon <agk@redhat.com>
      CC: dm-devel@redhat.com
      CC: Neil Brown <neilb@suse.de>
      CC: Steven Rostedt <rostedt@goodmis.org>
      Acked-by: Ed Cashin <ecashin@coraid.com>
    • block: Add bio_end_sector() · f73a1c7d
      Kent Overstreet authored
      Just a little convenience macro; the main reason to add it now is to
      prepare for immutable bio vecs, as it will reduce the size of the
      patch that puts bi_sector/bi_size/bi_idx into a struct bvec_iter.
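      
      The macro itself, in its pre-bvec_iter form (hedged):
      
      	#define bio_end_sector(bio)	((bio)->bi_sector + bio_sectors(bio))
      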
      Signed-off-by: Kent Overstreet <koverstreet@google.com>
      CC: Jens Axboe <axboe@kernel.dk>
      CC: Lars Ellenberg <drbd-dev@lists.linbit.com>
      CC: Jiri Kosina <jkosina@suse.cz>
      CC: Alasdair Kergon <agk@redhat.com>
      CC: dm-devel@redhat.com
      CC: Neil Brown <neilb@suse.de>
      CC: Martin Schwidefsky <schwidefsky@de.ibm.com>
      CC: Heiko Carstens <heiko.carstens@de.ibm.com>
      CC: linux-s390@vger.kernel.org
      CC: Chris Mason <chris.mason@fusionio.com>
      CC: Steven Whitehouse <swhiteho@redhat.com>
      Acked-by: Steven Whitehouse <swhiteho@redhat.com>
  18. 20 Mar, 2013 3 commits
    • md/raid5: ensure sync and DISCARD don't happen at the same time. · f8dfcffd
      NeilBrown authored
      A number of problems can occur due to races between
      resync/recovery and discard.
      
      - if sync_request calls handle_stripe() while a discard is
        happening on the stripe, it might call handle_stripe_clean_event
        before all of the individual discard requests have completed
        (so some devices are still locked, but not all).
        Since commit ca64cae9
           md/raid5: Make sure we clear R5_Discard when discard is finished.
        this will cause R5_Discard to be cleared for the parity device,
        so handle_stripe_clean_event() will not be called when the other
        devices do become unlocked, so their ->written will not be cleared.
        This ultimately leads to a WARN_ON in init_stripe and a lock-up.
      
      - If handle_stripe_clean_event() does clear R5_UPTODATE at an awkward
        time for resync, it can lead to s->uptodate being less than disks
        in handle_parity_checks5(), which triggers a BUG (because it is
        one).
      
      So:
       - keep R5_Discard on the parity device until all other devices have
         completed their discard request
       - make sure we don't try to have a 'discard' and a 'sync' action at
         the same time.
         This involves a new stripe flag so we know when a 'discard' is
         happening, and the use of R5_Overlap on the parity disk so that,
         when a discard is wanted while a sync is active, we know to wake
         up the discard at the appropriate time (see the sketch below).
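      
      A hedged sketch of the discard entry path once these rules are in
      place:
      
      	/* A discard arriving while a sync is active parks itself via
      	 * R5_Overlap on the parity device and retries once woken. */
      	if (test_bit(STRIPE_SYNCING, &sh->state)) {
      		set_bit(R5_Overlap, &sh->dev[sh->pd_idx].flags);
      		release_stripe(sh);
      		schedule();	/* woken when the sync completes */
      		goto again;
      	}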
      
      Discard support for RAID5 was added in 3.7, so this is suitable for
      any -stable kernel since 3.7.
      
      Cc: stable@vger.kernel.org (v3.7+)
      Reported-by: Jes Sorensen <Jes.Sorensen@redhat.com>
      Tested-by: Jes Sorensen <Jes.Sorensen@redhat.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • MD RAID5: Avoid accessing gendisk or queue structs when not available · e3620a3a
      Jonathan Brassow authored
      MD RAID5:  Fix kernel oops when RAID4/5/6 is used via device-mapper
      
      Commit a9add5d9 (v3.8-rc1) added blktrace calls to the RAID4/5/6 driver.
      However, when device-mapper is used to create RAID4/5/6 arrays, the
      mddev->gendisk and mddev->queue fields are not set up.  Therefore,
      calling things like trace_block_bio_remap will cause a kernel oops.
      This patch makes those calls conditional on the proper fields
      existing.  (Device-mapper will call trace_block_bio_remap on its own.)
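      
      The conditionalization is a straightforward guard (a hedged sketch):
      
      	/* Only emit blktrace events when md owns the disk; under
      	 * device-mapper, mddev->gendisk and mddev->queue are NULL. */
      	if (conf->mddev->gendisk)
      		trace_block_bio_remap(bdev_get_queue(bi->bi_bdev), bi,
      				      disk_devt(conf->mddev->gendisk),
      				      sh->dev[i].sector);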
      
      This patch is suitable for the 3.8.y stable kernel.
      
      Cc: stable@vger.kernel.org (v3.8+)
      Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid5: schedule_construction should abort if nothing to do. · ce7d363a
      NeilBrown authored
      Since commit 1ed850f3 ("md/raid5: make sure to_read and to_write
      never go negative") it has been possible for handle_stripe_dirtying
      to be called when there isn't actually any work to do.
      It then calls schedule_reconstruction() which will set R5_LOCKED
      on the parity block(s) even when nothing else is happening.
      This then causes problems in do_release_stripe().
      
      So add checks to schedule_reconstruction() so that if it doesn't
      find anything to do, it just aborts.
      
      This bug was introduced in v3.7, so the patch is suitable
      for -stable kernels since then.
      
      Cc: stable@vger.kernel.org (v3.7+)
      Reported-by: majianpeng <majianpeng@gmail.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
  19. 28 Feb, 2013 1 commit
    • hlist: drop the node parameter from iterators · b67bfe0d
      Sasha Levin authored
      I'm not sure why, but the hlist for-each-entry iterators were
      conceived differently from the list ones.  While the list ones are
      nice and elegant:
      
              list_for_each_entry(pos, head, member)
      
      The hlist ones were greedy and wanted an extra parameter:
      
              hlist_for_each_entry(tpos, pos, head, member)
      
      Why did they need an extra pos parameter? I'm not quite sure.  Not
      only do they not really need it, it also prevents the iterator from
      looking exactly like the list iterator, which is unfortunate.
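      
      Concretely, the conversion looks like this (a hedged sketch with a
      made-up struct foo):
      
      	struct foo { int val; struct hlist_node node; };
      	struct foo *f;
      	struct hlist_node *pos;		/* the now-redundant cursor */
      
      	/* before: */
      	hlist_for_each_entry(f, pos, head, node)
      		use(f);
      
      	/* after: the iterator mirrors list_for_each_entry() */
      	hlist_for_each_entry(f, head, node)
      		use(f);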
      
      Besides the semantic patch, there was some manual work required:
      
       - Fix up the actual hlist iterators in linux/list.h
       - Fix up the declaration of other iterators based on the hlist ones.
       - A very small number of places were using the 'node' parameter;
       these were modified to use 'obj->member' instead.
       - Coccinelle didn't handle the hlist_for_each_entry_safe iterator
       properly, so those had to be fixed up manually.
      
      The semantic patch which is mostly the work of Peter Senna Tschudin is here:
      
      @@
      iterator name hlist_for_each_entry, hlist_for_each_entry_continue, hlist_for_each_entry_from, hlist_for_each_entry_rcu, hlist_for_each_entry_rcu_bh, hlist_for_each_entry_continue_rcu_bh, for_each_busy_worker, ax25_uid_for_each, ax25_for_each, inet_bind_bucket_for_each, sctp_for_each_hentry, sk_for_each, sk_for_each_rcu, sk_for_each_from, sk_for_each_safe, sk_for_each_bound, hlist_for_each_entry_safe, hlist_for_each_entry_continue_rcu, nr_neigh_for_each, nr_neigh_for_each_safe, nr_node_for_each, nr_node_for_each_safe, for_each_gfn_indirect_valid_sp, for_each_gfn_sp, for_each_host;
      
      type T;
      expression a,c,d,e;
      identifier b;
      statement S;
      @@
      
      -T b;
          <+... when != b
      (
      hlist_for_each_entry(a,
      - b,
      c, d) S
      |
      hlist_for_each_entry_continue(a,
      - b,
      c) S
      |
      hlist_for_each_entry_from(a,
      - b,
      c) S
      |
      hlist_for_each_entry_rcu(a,
      - b,
      c, d) S
      |
      hlist_for_each_entry_rcu_bh(a,
      - b,
      c, d) S
      |
      hlist_for_each_entry_continue_rcu_bh(a,
      - b,
      c) S
      |
      for_each_busy_worker(a, c,
      - b,
      d) S
      |
      ax25_uid_for_each(a,
      - b,
      c) S
      |
      ax25_for_each(a,
      - b,
      c) S
      |
      inet_bind_bucket_for_each(a,
      - b,
      c) S
      |
      sctp_for_each_hentry(a,
      - b,
      c) S
      |
      sk_for_each(a,
      - b,
      c) S
      |
      sk_for_each_rcu(a,
      - b,
      c) S
      |
      sk_for_each_from
      -(a, b)
      +(a)
      S
      + sk_for_each_from(a) S
      |
      sk_for_each_safe(a,
      - b,
      c, d) S
      |
      sk_for_each_bound(a,
      - b,
      c) S
      |
      hlist_for_each_entry_safe(a,
      - b,
      c, d, e) S
      |
      hlist_for_each_entry_continue_rcu(a,
      - b,
      c) S
      |
      nr_neigh_for_each(a,
      - b,
      c) S
      |
      nr_neigh_for_each_safe(a,
      - b,
      c, d) S
      |
      nr_node_for_each(a,
      - b,
      c) S
      |
      nr_node_for_each_safe(a,
      - b,
      c, d) S
      |
      - for_each_gfn_sp(a, c, d, b) S
      + for_each_gfn_sp(a, c, d) S
      |
      - for_each_gfn_indirect_valid_sp(a, c, d, b) S
      + for_each_gfn_indirect_valid_sp(a, c, d) S
      |
      for_each_host(a,
      - b,
      c) S
      |
      for_each_host_safe(a,
      - b,
      c, d) S
      |
      for_each_mesh_entry(a,
      - b,
      c, d) S
      )
          ...+>
      
      [akpm@linux-foundation.org: drop bogus change from net/ipv4/raw.c]
      [akpm@linux-foundation.org: drop bogus hunk from net/ipv6/raw.c]
      [akpm@linux-foundation.org: checkpatch fixes]
      [akpm@linux-foundation.org: fix warnings]
      [akpm@linux-foundation.org: redo intrusive kvm changes]
      Tested-by: Peter Senna Tschudin <peter.senna@gmail.com>
      Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Gleb Natapov <gleb@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  20. 27 Feb, 2013 1 commit
    • md: remove CONFIG_MULTICORE_RAID456 · 51acbcec
      NeilBrown authored
      This doesn't seem to actually help and we have an alternate
      multi-threading approach waiting in the wings, so just get
      rid of this config option and associated code.
      
      As a bonus, we remove one use of CONFIG_EXPERIMENTAL.
      
      Cc: Dan Williams <djbw@fb.com>
      Cc: Kees Cook <keescook@chromium.org>
      Signed-off-by: NeilBrown <neilb@suse.de>
  21. 14 Jan, 2013 1 commit
    • block: add missing block_bio_complete() tracepoint · 3a366e61
      Tejun Heo authored
      bio completion didn't kick block_bio_complete TP.  Only dm was
      explicitly triggering the TP on IO completion.  This makes
      block_bio_complete TP useless for tracers which want to know about
      bios, and all other bio based drivers skip generating blktrace
      completion events.
      
      This patch makes all bio completions via bio_endio() generate
      block_bio_complete TP.
      
      * Explicit trace_block_bio_complete() invocation removed from dm and
        the trace point is unexported.
      
      * @rq dropped from trace_block_bio_complete().  bios may fly around
        w/o a queue associated.  Verifying and accessing the associated
        queue belongs to TP probes.
      
      * blktrace now gets both request and bio completions.  Make it ignore
        bio completions if request completion path is happening.
      
      This makes all bio based drivers generate blktrace completion events
      properly and makes the block_bio_complete TP actually useful.
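      
      A hedged sketch of the core change in bio_endio():
      
      	void bio_endio(struct bio *bio, int error)
      	{
      		if (error)
      			clear_bit(BIO_UPTODATE, &bio->bi_flags);
      		else if (!test_bit(BIO_UPTODATE, &bio->bi_flags))
      			error = -EIO;
      
      		trace_block_bio_complete(bio, error);	/* now fired for every bio */
      
      		if (bio->bi_end_io)
      			bio->bi_end_io(bio, error);
      	}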
      
      v2: With this change, block_bio_complete TP could be invoked on sg
          commands which have bio's with %NULL bi_bdev.  Update TP
          assignment code to check whether bio->bi_bdev is %NULL before
          dereferencing.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Original-patch-by: Namhyung Kim <namhyung@gmail.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Alasdair Kergon <agk@redhat.com>
      Cc: dm-devel@redhat.com
      Cc: Neil Brown <neilb@suse.de>
      Signed-off-by: default avatarJens Axboe <axboe@kernel.dk>
  22. 17 Dec, 2012 1 commit
  23. 13 Dec, 2012 1 commit
  24. 30 Nov, 2012 1 commit
    • wait: add wait_event_lock_irq() interface · eed8c02e
      Lukas Czerner authored
      New wait_event{_interruptible}_lock_irq{_cmd} macros added.  This
      commit moves the private wait_event_lock_irq() macro from MD to the
      regular wait includes, introduces the new macro
      wait_event_lock_irq_cmd() instead of the old method of omitting the
      cmd parameter (which is ugly), and makes use of the new macros in
      MD.  It also introduces the _interruptible_ variant.

      The new interface is for use when one has a special lock to protect
      data structures used in the condition, or when one also needs to
      invoke "cmd" before putting the task to sleep.

      All new macros are expected to be called with the lock taken.  The
      lock is released before sleep and reacquired afterwards.  We leave
      the macro with the lock held.
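      
      A hedged usage sketch (hypothetical names):
      
      	spin_lock_irq(&conf->device_lock);
      	wait_event_lock_irq(conf->wait_for_stripe,
      			    !list_empty(&conf->inactive_list),
      			    conf->device_lock);
      	/* condition is true and device_lock is held again here */
      	spin_unlock_irq(&conf->device_lock);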
      
      Note to DM: IMO this should also fix a theoretical race on the
      waitqueue while simultaneously using wait_event_lock_irq() and
      wait_event(), because of the lack of locking around the current
      state setting and wait queue removal.
      Signed-off-by: Lukas Czerner <lczerner@redhat.com>
      Cc: Neil Brown <neilb@suse.de>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  25. 21 Nov, 2012 2 commits
    • md/raid5: Make sure we clear R5_Discard when discard is finished. · ca64cae9
      NeilBrown authored
      Commit 9e444768 ("MD: raid5 avoid unnecessary zero page for trim")
      changed raid5 to clear R5_Discard when the complete request is
      handled rather than when submitting the per-device discard request.
      However it did not clear R5_Discard for the parity device.
      
      This means that if the stripe_head was reused before it expired from
      the cache, the setting would be wrong and a hang would result.
      
      Also if the R5_Uptodate bit happens to be set, R5_Discard again
      won't be cleared.  But R5_Uptodate really should be clear at this point.
      
      So make sure R5_Discard is cleared in all cases, and clear
      R5_Uptodate when a 'discard' completes.
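      
      A hedged sketch of the parity-device handling in
      handle_stripe_clean_event():
      
      	/* The parity device needs the same treatment as the data
      	 * devices, and R5_UPTODATE must not survive a completed
      	 * discard. */
      	if (test_bit(R5_Discard, &sh->dev[sh->pd_idx].flags)) {
      		clear_bit(R5_Discard, &sh->dev[sh->pd_idx].flags);
      		clear_bit(R5_UPTODATE, &sh->dev[sh->pd_idx].flags);
      	}
      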
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid5: move resolving of reconstruct_state earlier in stripe_handle. · ef5b7c69
      NeilBrown authored
      
      The chunk of code in stripe_handle which responds to a
      *_result value in reconstruct_state is really the completion
      of some processing that happened outside of handle_stripe
      (possibly asynchronously) and so should be one of the first
      things done in handle_stripe().
      
      After the next patch it will be important that it happens before
      handle_stripe_clean_event(), as that will clear some dev->flags
      bit that this code tests.
      Signed-off-by: NeilBrown <neilb@suse.de>
  26. 20 Nov, 2012 1 commit
  27. 29 Oct, 2012 1 commit
  28. 11 Oct, 2012 6 commits