  1. 23 Oct, 2007 2 commits
  2. 22 Oct, 2007 1 commit
  3. 19 Oct, 2007 2 commits
  4. 17 Oct, 2007 3 commits
  5. 16 Oct, 2007 8 commits
  6. 12 Oct, 2007 1 commit
  7. 10 Oct, 2007 10 commits
  8. 14 Sep, 2007 1 commit
  9. 13 Sep, 2007 1 commit
    • Fix race with shared tag queue maps · f3da54ba
      Jens Axboe authored
      There's a race condition in blk_queue_end_tag() for shared tag maps,
      users include stex (promise supertrak thingy) and qla2xxx.  The former
      at least has reported bugs in this area, not sure why we haven't seen
      any for the latter.  It could be because the window is narrow and that
      other conditions in the qla2xxx code hide this.  It's a real bug,
      though, as the stex smp users can attest.
      
      We need to ensure two things - the tag bit clearing needs to happen
      AFTER we cleared the tag pointer, as the tag bit clearing/setting is
      what protects this map.  Secondly, we need to ensure that the
      visibility of the tag pointer clear and the tag bit clear is ordered
      properly.
      
      [ I removed the SMP barriers - "test_and_clear_bit()" already implies
        all the required barriers.  -- Linus ]
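      
      A minimal sketch of the resulting ordering in blk_queue_end_tag(),
      with bqt standing for the queue's struct blk_queue_tag and the
      tag_index/tag_map field names assumed from the shared tag map code;
      the only point is that the pointer is cleared before the bit, and
      that test_and_clear_bit() supplies the barriers:
      
        /* Clear the tag pointer first, while we still own the tag bit. */
        bqt->tag_index[tag] = NULL;
      
        /*
         * Only then release the tag.  test_and_clear_bit() is atomic and
         * implies full memory barriers, so any CPU that later wins this
         * bit and reuses the tag is guaranteed to see the NULL store above.
         */
        if (unlikely(!test_and_clear_bit(tag, bqt->tag_map)))
                printk(KERN_ERR "attempt to clear non-busy tag (%d)\n", tag);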
      
      Also see http://bugzilla.kernel.org/show_bug.cgi?id=7842
      
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  10. 11 Aug, 2007 1 commit
  11. 24 Jul, 2007 1 commit
  12. 20 Jul, 2007 1 commit
    • mm: Remove slab destructors from kmem_cache_create(). · 20c2df83
      Paul Mundt authored
      Slab destructors were no longer supported after Christoph's c59def9f
      change.  They've been BUGs for both slab and slub, and slob never
      supported them either.
      
      This rips out support for the dtor pointer from kmem_cache_create()
      completely and fixes up every single callsite in the kernel (there were
      about 224, not including the slab allocator definitions themselves,
      or the documentation references).
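      
      For illustration, the shape of the change at a typical callsite; the
      cache name, struct and constructor below are made up, and the
      constructor/destructor prototypes of that era are assumed:
      
        /* Before: a destructor argument, in practice almost always NULL. */
        cachep = kmem_cache_create("foo_cache", sizeof(struct foo), 0,
                                   SLAB_HWCACHE_ALIGN, foo_ctor, NULL);
      
        /* After: the dtor parameter is gone from kmem_cache_create(). */
        cachep = kmem_cache_create("foo_cache", sizeof(struct foo), 0,
                                   SLAB_HWCACHE_ALIGN, foo_ctor);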
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
  13. 17 Jul, 2007 1 commit
  14. 16 Jul, 2007 4 commits
  15. 10 Jul, 2007 2 commits
    • [BLOCK] drop unnecessary bvec rewinding from flush_dry_bio_endio · f4b09303
      Tejun Heo authored
      Barrier bios are completed twice - once after the barrier write itself
      is done and again after the whole sequence is complete.
      flush_dry_bio_endio() is for the first completion.  It doesn't really
      complete the bio.  It rewinds bvec and resets bio so that it can be
      completed again when the whole barrier sequence is complete.
      
      The bvec rewinding code has the following problems.
      
      1. The rewinding code is wrong because filesystems may pass bvecs
         with a non-zero bv_offset.
      
      2. The block layer doesn't guarantee anything about the state of the
         bvec array on request completion.  bv_offset and bv_len are updated
         iff __end_that_request_first() completes the bvec partially.
      
      Because of #2, #1 doesn't really matter (nobody cares whether the bvec
      is rewound correctly or not), but by not rewinding at all we always
      give the caller back the same bvec, since full bvec completion doesn't
      alter bvecs and the final completion is always a full completion.
      
      Drop unnecessary rewinding code.
      
      This was spotted by Neil Brown.
      Signed-off-by: Tejun Heo <htejun@gmail.com>
      Cc: Neil Brown <neilb@suse.de>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
    • blk_hw_contig_segment(): bad segment size checks · 32eef964
      Jens Axboe authored
      Two bugs in there:
      
      - The virt oversize check should use the current bio hardware back
        size and the next bio front size, not the same bio. Spotted by
        Neil Brown.
      
      - The segment size check should add hw front sizes, not total bio
        sizes. Spotted by James Bottomley.
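      
      A pared-down sketch of the corrected checks, with the 2007-era field
      and macro names (bi_hw_back_size, bi_hw_front_size,
      BIOVEC_VIRT_OVERSIZE) taken on good faith; the essential point is
      which bio each size is read from:
      
        /* May bio and the following bio nxt share one hardware segment? */
        static int hw_segments_mergeable(struct request_queue *q,
                                         struct bio *bio, struct bio *nxt)
        {
                /*
                 * Bug 1: combine the back of this bio with the front of
                 * the next bio, not two sizes of the same bio.
                 */
                if (BIOVEC_VIRT_OVERSIZE(bio->bi_hw_back_size +
                                         nxt->bi_hw_front_size))
                        return 0;
      
                /*
                 * Bug 2: the max_segment_size limit applies to the
                 * combined hw back/front sizes, not to total bio sizes.
                 */
                if (bio->bi_hw_back_size + nxt->bi_hw_front_size >
                    q->max_segment_size)
                        return 0;
      
                return 1;
        }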
      Acked-by: James Bottomley <James.Bottomley@SteelEye.com>
      Acked-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
  16. 15 Jun, 2007 1 commit
    • block: always requeue !fs requests at the front · bc90ba09
      Tejun Heo authored
      SCSI marks internal commands with REQ_PREEMPT and pushes them to the
      front of the request queue using blk_execute_rq().  When entering the
      suspended or frozen state, SCSI devices are quiesced using
      scsi_device_quiesce().  In quiesced state, only REQ_PREEMPT requests
      are processed.  This is how SCSI blocks other requests out while
      suspending and resuming.  As all internal commands are pushed at the
      front of the queue, this usually works.
      
      Unfortunately, this interacts badly with ordered requeueing.  To
      preserve request order on requeueing (due to busy device, active EH or
      other failures), requests are sorted according to their ordered
      sequence on requeue if an IO barrier is in progress.
      
      The following sequence deadlocks.
      
      1. An IO barrier sequence is issued.
      
      2. Suspend requested.  Queue is quiesced with part or all of IO
         barrier sequence at the front.
      
      3. During suspending or resuming, SCSI issues an internal command
         which gets deferred and requeued for some reason.  As the command
         is issued after the IO barrier in #1, the ordered requeueing code
         puts the request after the IO barrier sequence.
      
      4. The device is ready to process requests again but is still in the
         quiesced state, and the first request in the queue isn't
         REQ_PREEMPT, so command processing is deadlocked -
         suspending/resuming waits for the issued request to complete while
         the request can't be processed until the device is put back into
         the running state by resuming.
      
      This can be fixed by always putting !fs requests at the front when
      requeueing.
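      
      A minimal sketch of the shape of the fix, assuming the requeue path
      sorts requests by blk_ordered_req_seq() while a barrier is in
      progress, with the ordered-sequence constants and queue fields of
      this era's block layer taken on good faith:
      
        /* Smaller sequence values sort closer to the head on requeue. */
        unsigned blk_ordered_req_seq(struct request *rq)
        {
                struct request_queue *q = rq->q;
      
                if (rq == &q->pre_flush_rq)
                        return QUEUE_ORDSEQ_PREFLUSH;
                if (rq == &q->bar_rq)
                        return QUEUE_ORDSEQ_BAR;
                if (rq == &q->post_flush_rq)
                        return QUEUE_ORDSEQ_POSTFLUSH;
      
                /*
                 * The fix: !fs requests (such as SCSI's REQ_PREEMPT
                 * internal commands) don't take part in barrier ordering,
                 * so give them the lowest sequence and they always end up
                 * at the front of the queue, ahead of a pending barrier.
                 */
                if (!blk_fs_request(rq))
                        return QUEUE_ORDSEQ_DRAIN;
      
                if ((rq->cmd_flags & REQ_ORDERED_COLOR) ==
                    (q->orig_bar_rq->cmd_flags & REQ_ORDERED_COLOR))
                        return QUEUE_ORDSEQ_DRAIN;
      
                return QUEUE_ORDSEQ_DONE;
        }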
      
      The following thread reports this deadlock.
      
        http://thread.gmane.org/gmane.linux.kernel/537473
      
      Signed-off-by: Tejun Heo <htejun@gmail.com>
      Acked-by: David Greaves <david@dgreaves.com>
      Acked-by: Jeff Garzik <jeff@garzik.org>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>