1. 10 Sep, 2010 37 commits
    • ext4: do not send discards as barriers · 61002f7d
      Christoph Hellwig authored
      ext4 already uses synchronous discards, no need to add I/O barriers.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • jbd2: replace barriers with explicit flush / FUA usage · 9c35575b
      Christoph Hellwig authored
      Switch to the WRITE_FLUSH_FUA flag for journal commits and remove the
      EOPNOTSUPP detection for barriers.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
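      As a rough illustration of what this conversion looks like at the
      submission site (a sketch, not the literal diff; journal and bh stand
      for jbd2's journal struct and the commit-record buffer):

          /* Commit record goes out as an explicit preflush + FUA write
           * instead of a hard barrier. */
          if (journal->j_flags & JBD2_BARRIER)
                  ret = submit_bh(WRITE_FLUSH_FUA, bh);
          else
                  ret = submit_bh(WRITE_SYNC, bh);

          /* No -EOPNOTSUPP fallback: the block layer now degrades
           * FLUSH/FUA gracefully on devices that don't need them instead
           * of failing the request. */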
    • jbd2: Modify ASYNC_COMMIT code to not rely on queue draining on barrier · f73bee49
      Jan Kara authored
      Currently JBD2 relies on blkdev_issue_flush() draining the queue when the
      ASYNC_COMMIT feature is set. This property is going away, so make JBD2 wait
      for the buffers it needs on its own before submitting the cache flush.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
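      In outline, the commit path now does its own waiting (hedged sketch;
      the helper name follows jbd2's commit code and the era-specific
      blkdev_issue_flush() signature is an assumption):

          /* Wait for the commit record ourselves; queue draining no
           * longer does it for us. */
          err = journal_wait_on_commit_record(journal, cbh);

          /* Only then ask the device to flush its write cache. */
          if (journal->j_flags & JBD2_BARRIER)
                  blkdev_issue_flush(journal->j_dev, GFP_KERNEL, NULL,
                                     BLKDEV_IFL_WAIT);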
    • jbd: replace barriers with explicit flush / FUA usage · 4524451e
      Christoph Hellwig authored
      Switch to the WRITE_FLUSH_FUA flag for journal commits and remove the
      EOPNOTSUPP detection for barriers.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • nilfs2: replace barriers with explicit flush / FUA usage · f8c131f5
      Christoph Hellwig authored
      Switch to the WRITE_FLUSH_FUA flag for log writes, remove the EOPNOTSUPP
      detection for barriers and stop setting the barrier flag for discards.

      tj: nilfs is now fixed to wait for discard completion.  Updated this
          patch accordingly and dropped the warning about it.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • reiserfs: replace barriers with explicit flush / FUA usage · 7cd33ad2
      Christoph Hellwig authored
      Switch to the WRITE_FLUSH_FUA flag for log writes and remove the EOPNOTSUPP
      detection for barriers.  Note that reiserfs previously had a fairly
      different code path for barriers, as it was the only filesystem actually
      making use of them.  The new code always uses the old non-barrier codepath
      and just sets WRITE_FLUSH_FUA explicitly for the journal commits.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Jan Kara <jack@suse.cz>
      Acked-by: Chris Mason <chris.mason@oracle.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • gfs2: replace barriers with explicit flush / FUA usage · f1e4d518
      Christoph Hellwig authored
      Switch to the WRITE_FLUSH_FUA flag for log writes, remove the EOPNOTSUPP
      detection for barriers and stop setting the barrier flag for discards.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Steven Whitehouse <swhiteho@redhat.com>
      Acked-by: Bob Peterson <rpeterso@redhat.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • btrfs: replace barriers with explicit flush / FUA usage · c3b9a62c
      Christoph Hellwig authored
      Switch to the WRITE_FLUSH_FUA flag for log writes, remove the EOPNOTSUPP
      detection for barriers and stop setting the barrier flag for discards.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Chris Mason <chris.mason@oracle.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • xfs: replace barriers with explicit flush / FUA usage · 80f6c29d
      Christoph Hellwig authored
      Switch to the WRITE_FLUSH_FUA flag for log writes and remove the EOPNOTSUPP
      detection for barriers.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • block: pass gfp_mask and flags to sb_issue_discard · 2cf6d26a
      Christoph Hellwig authored
      We'll need to get rid of the BLKDEV_IFL_BARRIER flag; to facilitate that,
      and to make the interface less confusing, pass all flags explicitly.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
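      After the change, sb_issue_discard() is roughly a thin wrapper that
      converts filesystem blocks to 512-byte sectors and passes the
      caller's gfp_mask and flags straight through (sketch based on the
      description above; the exact prototype may differ):

          static inline int sb_issue_discard(struct super_block *sb,
                          sector_t block, sector_t nr_blocks,
                          gfp_t gfp_mask, unsigned long flags)
          {
                  return blkdev_issue_discard(sb->s_bdev,
                                  block << (sb->s_blocksize_bits - 9),
                                  nr_blocks << (sb->s_blocksize_bits - 9),
                                  gfp_mask, flags);
          }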
    • dm: convey that all flushes are processed as empty · b372d360
      Mike Snitzer authored
      Rename __clone_and_map_flush to __clone_and_map_empty_flush for added
      clarity.

      Simplify the logic associated with REQ_FLUSH conditionals.

      Introduce a BUG_ON() and add a few more helpful comments to the code
      so that it is clear that all flushes are empty.

      Clean up __split_and_process_bio() so that an empty flush isn't processed
      by a 'sector_count'-focused while loop.
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • dm: fix locking context in queue_io() · 05447420
      Kiyoshi Ueda authored
      queue_io() is now called from dec_pending(), which may run with
      interrupts disabled, so queue_io() must not enable interrupts
      unconditionally and must save/restore the current interrupt status.
      Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
      Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
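      The fix is the classic save/restore locking pattern (a sketch; the
      mapped_device members are assumptions based on the dm code of the
      time):

          static void queue_io(struct mapped_device *md, struct bio *bio)
          {
                  unsigned long flags;

                  /* The caller, dec_pending(), may already have interrupts
                   * disabled, so save and restore the interrupt state
                   * instead of enabling interrupts unconditionally. */
                  spin_lock_irqsave(&md->deferred_lock, flags);
                  bio_list_add(&md->deferred, bio);
                  spin_unlock_irqrestore(&md->deferred_lock, flags);

                  queue_work(md->wq, &md->work);
          }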
    • dm: relax ordering of bio-based flush implementation · 6a8736d1
      Tejun Heo authored
      Unlike REQ_HARDBARRIER, REQ_FLUSH/FUA doesn't mandate any ordering
      against other bio's.  This patch relaxes ordering around flushes.

      * A flush bio is no longer deferred directly to the workqueue.  It's
        processed like other bio's, but __split_and_process_bio() uses
        md->flush_bio as the clone source.  md->flush_bio is initialized to
        an empty flush during md initialization and shared for all flushes.

      * As a flush bio now travels through the same execution path as other
        bio's, there's no need for a dedicated error handling path either.
        It can use the common error handling in dec_pending().  The dedicated
        error handling is removed along with md->flush_error.

      * When dec_pending() detects that a flush has completed, it checks
        whether the original bio has data.  If so, the bio is queued to the
        deferred list w/ REQ_FLUSH cleared; otherwise, it's completed.

      * As flush sequencing is handled in the usual issue/completion path,
        dm_wq_work() no longer needs to handle flushes differently.  Now its
        only responsibility is re-issuing deferred bio's the same way
        _dm_request() would.  The REQ_FLUSH handling logic, including
        process_flush(), is dropped.

      * There's no reason for queue_io() and dm_wq_work() to write-lock
        dm->io_lock.  queue_io() now only uses md->deferred_lock and
        dm_wq_work() read-locks dm->io_lock.

      * bio's no longer need to be queued on the deferred list while a flush
        is in progress, making DMF_QUEUE_IO_TO_THREAD unnecessary.  Drop it.

      This avoids stalling the device during flushes and simplifies the
      implementation.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
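      The dec_pending() behavior described above amounts to roughly the
      following (sketch, not the exact patch):

          /* On completion of the preflush, a flush bio that carries data
           * is reissued without REQ_FLUSH; an empty flush is simply done. */
          if ((bio->bi_rw & REQ_FLUSH) && bio->bi_size) {
                  bio->bi_rw &= ~REQ_FLUSH;
                  queue_io(md, bio);            /* reissue the data part */
          } else {
                  bio_endio(bio, io_error);     /* normal IO or empty flush */
          }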
    • dm: implement REQ_FLUSH/FUA support for request-based dm · 29e4013d
      Tejun Heo authored
      This patch converts request-based dm to support the new REQ_FLUSH/FUA.

      The original request-based flush implementation depended on the
      request_queue blocking other requests while a barrier sequence is in
      progress, which is no longer true for the new REQ_FLUSH/FUA.

      In general, request-based dm doesn't have infrastructure for cloning
      one source request to multiple targets, but the original flush
      implementation had a special, mostly independent path which could issue
      flushes to multiple targets and sequence them.  However, the
      capability isn't currently in use and adds a lot of complexity.
      Moreover, it's unlikely to be useful in its current form as it
      doesn't make sense to be able to send out flushes to multiple targets
      when write requests can't be.

      This patch rips out the special flush code path and handles
      REQ_FLUSH/FUA requests the same way as other requests.  The only
      special treatment is that REQ_FLUSH requests use block address 0
      when finding the target, which is enough for now.

      * added BUG_ON(!dm_target_is_valid(ti)) in dm_request_fn() as
        suggested by Mike Snitzer
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Mike Snitzer <snitzer@redhat.com>
      Tested-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
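      The special-case target lookup reduces to a few lines in
      dm_request_fn() (sketch using dm's table lookup helpers):

          sector_t pos = 0;

          /* Flush requests carry no meaningful sector; map them to
           * whatever target owns block address 0. */
          if (!(rq->cmd_flags & REQ_FLUSH))
                  pos = blk_rq_pos(rq);

          ti = dm_table_find_target(map, pos);
          BUG_ON(!dm_target_is_valid(ti));      /* suggested by Mike Snitzer */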
    • dm: implement REQ_FLUSH/FUA support for bio-based dm · d87f4c14
      Tejun Heo authored
      This patch converts bio-based dm to support REQ_FLUSH/FUA instead of
      the now deprecated REQ_HARDBARRIER.

      * -EOPNOTSUPP handling logic dropped.

      * Preflush is handled as before but postflush is dropped and replaced
        with passing down REQ_FUA to member request_queues.  This replaces
        one array-wide cache flush w/ member-specific FUA writes.

      * __split_and_process_bio() now calls __clone_and_map_flush() directly
        for flushes and guarantees all FLUSH bio's going to targets are zero
        length.

      * It's now guaranteed that all FLUSH bio's which are passed onto dm
        targets are zero length.  bio_empty_barrier() tests are replaced
        with REQ_FLUSH tests.

      * Empty WRITE_BARRIERs are replaced with WRITE_FLUSHes.

      * Dropped unlikely() around REQ_FLUSH tests.  Flushes are not unlikely
        enough to be marked with unlikely().

      * Block layer now filters out REQ_FLUSH/FUA bio's if the request_queue
        doesn't support cache flushing.  Advertise REQ_FLUSH | REQ_FUA
        capability.

      * Request-based dm isn't converted yet.  dm_init_request_based_queue()
        resets flush support to 0 for now.  To avoid disturbing request-based
        dm code, dm->flush_error is added for bio-based dm while request-based
        dm continues to use dm->barrier_error.

      Lightly tested linear, stripe, raid1, snap and crypt targets.  Please
      proceed with caution as I'm not familiar with the code base.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: dm-devel@redhat.com
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • block: make __blk_rq_prep_clone() copy most command flags · 3a2edd0d
      Tejun Heo authored
      Currently __blk_rq_prep_clone() copies only REQ_WRITE and REQ_DISCARD.
      There's no reason to omit other command flags, and REQ_FUA needs to be
      copied to implement FUA support in request-based dm.

      REQ_COMMON_MASK, which specifies flags to be copied from bio to request,
      already identifies all the command flags.  Define REQ_CLONE_MASK to be
      the same as REQ_COMMON_MASK for clarity and make __blk_rq_prep_clone()
      copy all flags in the mask.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
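      In effect (sketch):

          /* Copy every command flag a bio can contribute, REQ_FUA
           * included, instead of hand-picking REQ_WRITE and REQ_DISCARD. */
          #define REQ_CLONE_MASK          REQ_COMMON_MASK

          dst->cmd_flags = src->cmd_flags & REQ_CLONE_MASK;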
    • md: implement REQ_FLUSH/FUA support · e9c7469b
      Tejun Heo authored
      This patch converts md to support REQ_FLUSH/FUA instead of the now
      deprecated REQ_HARDBARRIER.  In the core part (md.c), the following
      changes are notable.

      * Unlike REQ_HARDBARRIER, REQ_FLUSH/FUA don't interfere with
        processing of other requests, and thus there is no reason to mark the
        queue congested while FLUSH/FUA is in progress.

      * REQ_FLUSH/FUA failures are final and their users don't need retry
        logic.  Retry logic is removed.

      * Preflush needs to be issued to all member devices, but FUA writes can
        be handled the same way as other writes - their processing can be
        deferred to the request_queue of the member devices.
        md_barrier_request() is renamed to md_flush_request() and simplified
        accordingly.

      For linear, raid0 and multipath, the core changes are enough.  raid1,
      5 and 10 need the following conversions.

      * raid1: Handling of FLUSH/FUA bio's can simply be deferred to the
        request_queues of member devices.  Barrier-related logic removed.

      * raid5: Queue draining logic dropped.  The FUA bit is propagated through
        biodrain and stripe reconstruction such that all the updated parts
        of the stripe are written out with FUA writes if any of the dirtying
        writes was FUA.  preread_active_stripes handling in make_request()
        is updated as suggested by Neil Brown.

      * raid10: The FUA bit needs to be propagated to write clones.

      linear, raid0, 1, 5 and 10 tested.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
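      Conceptually, md_flush_request() only has to fan the preflush out to
      the member devices; FUA writes need no special handling.  An
      illustrative, deliberately synchronous sketch (md actually completes
      the member flushes asynchronously; members[] is a hypothetical
      stand-in for md's rdev list, and the era-specific
      blkdev_issue_flush() signature is an assumption):

          int i;

          /* Flush every member's write cache first ... */
          for (i = 0; i < nr_members; i++)
                  blkdev_issue_flush(members[i], GFP_NOIO, NULL,
                                     BLKDEV_IFL_WAIT);

          /* ... then resubmit the original bio without REQ_FLUSH.
           * Any FUA bit simply propagates to the member queues. */
          bio->bi_rw &= ~REQ_FLUSH;
          generic_make_request(bio);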
    • lguest: replace VIRTIO_F_BARRIER support with VIRTIO_F_FLUSH support · 7bc9fdda
      Tejun Heo authored
      VIRTIO_F_BARRIER is deprecated.  Replace it with VIRTIO_F_FLUSH
      support.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • virtio_blk: drop REQ_HARDBARRIER support · 02c42b7a
      Tejun Heo authored
      Remove the now unused REQ_HARDBARRIER support.  virtio_blk already
      supports REQ_FLUSH, and the usefulness of REQ_FUA for virtio_blk is
      questionable at this point, so there's nothing else to do to support
      the new REQ_FLUSH/FUA interface.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • block/loop: implement REQ_FLUSH/FUA support · 6259f284
      Tejun Heo authored
      Deprecate REQ_HARDBARRIER and implement REQ_FLUSH/FUA instead.  Also,
      instead of checking file->f_op->fsync() directly, look at the return
      value of vfs_fsync() and ignore -EINVAL.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
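      The fsync handling described above, as a sketch:

          /* Let the VFS decide whether fsync is supported instead of
           * peeking at file->f_op->fsync; -EINVAL just means the backing
           * file can't sync and is not treated as an error. */
          ret = vfs_fsync(file, 0);
          if (unlikely(ret && ret != -EINVAL))
                  ret = -EIO;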
    • block: use REQ_FLUSH in blkdev_issue_flush() · d391a2dd
      Tejun Heo authored
      Update blkdev_issue_flush() to use the new REQ_FLUSH interface.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
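      At its core the change is one flag at the submission site (sketch;
      the completion plumbing is unchanged and bio_end_flush is a stand-in
      name for the completion handler):

          DECLARE_COMPLETION_ONSTACK(wait);
          struct bio *bio = bio_alloc(gfp_mask, 0);       /* empty bio */

          bio->bi_end_io = bio_end_flush;
          bio->bi_bdev = bdev;
          bio->bi_private = &wait;

          submit_bio(WRITE_FLUSH, bio);   /* was: submit_bio(WRITE_BARRIER, bio) */
          wait_for_completion(&wait);
          bio_put(bio);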
    • Christoph Hellwig authored · 04ccc65c
    • block: make sure FSEQ_DATA request has the same rq_disk as the original · 09d60c70
      Tejun Heo authored
      rq->rq_disk and bio->bi_bdev->bd_disk may differ if a request has
      passed through remapping drivers.  The FSEQ_DATA request incorrectly
      followed bio->bi_bdev->bd_disk, ending up being issued w/ a mismatching
      rq_disk.  Make it follow orig_rq->rq_disk.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
      Tested-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • block: kick queue after sequencing REQ_FLUSH/FUA · 47f70d5a
      Tejun Heo authored
      While completing a request from a REQ_FLUSH/FUA sequence, another
      request can be pushed to the request queue.  If a driver tests
      elv_queue_empty() before completing a request and runs the queue again
      only if the queue wasn't empty, this may lead to a hang.  Please note
      that most drivers either kick the queue unconditionally or test queue
      emptiness after completing the current request and don't have this
      problem.

      This patch removes this possibility by making the REQ_FLUSH/FUA sequence
      code kick the queue if the queue was empty before completing a request
      from a REQ_FLUSH/FUA sequence.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • block: initialize flush request with WRITE_FLUSH instead of REQ_FLUSH · 337238be
      Tejun Heo authored
      init_flush_request() only set REQ_FLUSH when initializing flush
      requests, making them READ requests.  Use WRITE_FLUSH instead.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • block: simplify queue_next_fseq · cde4c406
      Christoph Hellwig authored
      We need to call blk_rq_init and elv_insert for all cases in queue_next_fseq,
      so move these calls into common code.  Also move the end_io initialization
      from queue_flush into queue_next_fseq and rename queue_flush to
      init_flush_request now that its old name no longer applies.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • block: filter flush bio's in __generic_make_request() · 1e87901e
      Tejun Heo authored
      There are a number of make_request-based drivers which don't support
      cache flushes.  Filter out flush bio's in __generic_make_request() so
      that such drivers don't have to worry about them.  All FLUSH/FUA
      requests with data are converted to regular IO requests and empty ones
      are completed immediately.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
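      The filter itself is small (sketch, assuming q->flush_flags holds the
      FLUSH/FUA support bits set up via blk_queue_flush()):

          /* Strip FLUSH/FUA on queues that don't support cache flushing;
           * an empty flush then has nothing left to do. */
          if ((bio->bi_rw & (REQ_FLUSH | REQ_FUA)) && !q->flush_flags) {
                  bio->bi_rw &= ~(REQ_FLUSH | REQ_FUA);
                  if (!nr_sectors) {
                          err = 0;
                          goto end_io;
                  }
          }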
    • block: implement REQ_FLUSH/FUA based interface for FLUSH/FUA requests · 4fed947c
      Tejun Heo authored
      Now that the backend conversion is complete, export sequenced
      FLUSH/FUA capability through the REQ_FLUSH/FUA flags.  REQ_FLUSH means
      the device cache should be flushed before executing the request.
      REQ_FUA means that the data in the request should be on non-volatile
      media on completion.

      The block layer will choose the correct way of implementing the
      semantics and execute it.  The request may be passed to the device
      directly if the device can handle it; otherwise, it will be sequenced
      using one or more proxy requests.  Devices will never see REQ_FLUSH
      and/or REQ_FUA flags they don't support.

      Also, unlike the original REQ_HARDBARRIER, REQ_FLUSH/FUA requests are
      never failed with -EOPNOTSUPP.  If the underlying device doesn't
      support FLUSH/FUA, the block layer simply makes them no-ops.  IOW, it
      no longer distinguishes between a writeback cache which doesn't support
      cache flush and writethrough/no cache.  Devices which have a WB cache
      w/o flush are very difficult to come by these days and there's nothing
      much we can do anyway, so it doesn't make sense to require everyone to
      implement -EOPNOTSUPP handling.  This will simplify filesystems and
      block drivers as they can drop -EOPNOTSUPP retry logic for barriers.

      * QUEUE_ORDERED_* are removed and QUEUE_FSEQ_* are moved into
        blk-flush.c.

      * REQ_FLUSH w/o data can also be directly passed to drivers without
        sequencing, but some drivers assume that zero length requests don't
        have rq->bio, which isn't true for these requests, requiring the use
        of proxy requests.

      * REQ_COMMON_MASK now includes REQ_FLUSH | REQ_FUA so that they are
        copied from bio to request.

      * WRITE_BARRIER is marked deprecated and WRITE_FLUSH, WRITE_FUA and
        WRITE_FLUSH_FUA are added.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
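      The new submission helpers combine the two flags with an ordinary
      write (simplified sketch; the era's additional sync/unplug bits are
      omitted here):

          #define WRITE_FLUSH     (WRITE | REQ_FLUSH)            /* preflush, then write */
          #define WRITE_FUA       (WRITE | REQ_FUA)              /* data on stable media */
          #define WRITE_FLUSH_FUA (WRITE | REQ_FLUSH | REQ_FUA)  /* both semantics       */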
    • block: rename barrier/ordered to flush · dd4c133f
      Tejun Heo authored
      With the ordering requirements dropped, barrier and ordered are
      misnomers.  Now all the block layer does is sequence FLUSH and FUA.
      Rename them to flush.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • block: rename blk-barrier.c to blk-flush.c · 8839a0e0
      Tejun Heo authored
      Without ordering requirements, barrier and ordering are misnomers.
      Rename block/blk-barrier.c to block/blk-flush.c.  Renaming of symbols
      will follow.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • block: drop barrier ordering by queue draining · 28e7d184
      Tejun Heo authored
      Filesystems will take all the responsibility for ordering requests
      around commit writes and will only indicate how the commit writes
      themselves should be handled by block layers.  This patch drops
      barrier ordering by queue draining from the block layer.  The
      ordering-by-draining implementation was somewhat invasive to request
      handling.  A list of notable changes follows.

      * Each queue had a one-bit color which was flipped on each barrier
        issue.  This was used to track whether a given request was issued
        before the current barrier or not.  The REQ_ORDERED_COLOR flag and
        the coloring implementation in __elv_add_request() are removed.

      * Requests which shouldn't be processed yet for draining were stalled
        by returning -EAGAIN from blk_do_ordered() according to the test
        result between blk_ordered_req_seq() and blk_ordered_cur_seq().
        This logic is removed.

      * Draining completion logic in elv_completed_request() removed.

      * All barrier sequence requests were queued to the request queue and
        then trickled to the lower layer according to progress, and thus
        maintaining request order during requeue was necessary.  This is
        replaced by queueing the next request in the barrier sequence only
        after the current one completes, from blk_ordered_complete_seq(),
        which removes the need for multiple proxy requests in struct
        request_queue and the request sorting logic in the
        ELEVATOR_INSERT_REQUEUE path of elv_insert().

      * As barriers no longer have ordering constraints, there's no need to
        dump the whole elevator onto the dispatch queue on each barrier.
        Insert barriers at the front instead.

      * If other barrier requests come to the front of the dispatch queue
        while one is already in progress, they are stored in
        q->pending_barriers and restored to the dispatch queue one-by-one
        after each barrier completion from blk_ordered_complete_seq().
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • block: misc cleanups in barrier code · dd831006
      Tejun Heo authored
      Make the following cleanups in preparation of the barrier/flush update.

      * blk_do_ordered() declaration is moved from include/linux/blkdev.h to
        block/blk.h.

      * blk_do_ordered() now returns a pointer to struct request, with %NULL
        meaning "try the next request" and ERR_PTR(-EAGAIN) "try again
        later".  The third case will be dropped with further changes.

      * In the initialization of the proxy barrier request, the data
        direction is already set by init_request_from_bio().  Drop the
        unnecessary explicit REQ_WRITE setting and move
        init_request_from_bio() above the REQ_FUA flag setting.

      * add_request() is collapsed into __make_request().

      These changes don't make any functional difference.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • block: remove spurious uses of REQ_HARDBARRIER · 9cbbdca4
      Tejun Heo authored
      REQ_HARDBARRIER is deprecated.  Remove spurious uses in the following
      users.  Please note that other than osdblk, all other uses were
      already spurious before deprecation.

      * osdblk: osdblk_rq_fn() won't receive any request with
        REQ_HARDBARRIER set.  Remove the test for it.

      * pktcdvd: use of REQ_HARDBARRIER in pkt_generic_packet() doesn't mean
        anything.  Removed.

      * aic7xxx_old: Setting MSG_ORDERED_Q_TAG on REQ_HARDBARRIER is
        spurious.  Removed.

      * sas_scsi_host: Setting TASK_ATTR_ORDERED on REQ_HARDBARRIER is
        spurious.  Removed.

      * scsi_tcq: The ordered tag path wasn't being used anyway.  Removed.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Boaz Harrosh <bharrosh@panasas.com>
      Cc: James Bottomley <James.Bottomley@suse.de>
      Cc: Peter Osterlund <petero2@telia.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • block: deprecate barrier and replace blk_queue_ordered() with blk_queue_flush() · 4913efe4
      Tejun Heo authored
      Barrier is deemed too heavy and will soon be replaced by FLUSH/FUA
      requests.  Deprecate barrier.  All REQ_HARDBARRIERs are failed with
      -EOPNOTSUPP and blk_queue_ordered() is replaced with the simpler
      blk_queue_flush().

      blk_queue_flush() takes combinations of REQ_FLUSH and REQ_FUA.  If a
      device has a write cache and can flush it, it should set REQ_FLUSH.
      If the device can handle FUA writes, it should also set REQ_FUA.

      All blk_queue_ordered() users are converted.

      * ORDERED_DRAIN is mapped to 0, which is the default value.
      * ORDERED_DRAIN_FLUSH is mapped to REQ_FLUSH.
      * ORDERED_DRAIN_FLUSH_FUA is mapped to REQ_FLUSH | REQ_FUA.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Boaz Harrosh <bharrosh@panasas.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: Chris Wright <chrisw@sous-sol.org>
      Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Cc: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Alasdair G Kergon <agk@redhat.com>
      Cc: Pierre Ossman <drzeus@drzeus.cx>
      Cc: Stefan Weinhuber <wein@de.ibm.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
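      The replacement API is deliberately small: a driver advertises what
      its device can do and the block layer works out the rest.  A sketch
      mirroring the description above (REQ_FUA without REQ_FLUSH makes no
      sense, so it is rejected):

          void blk_queue_flush(struct request_queue *q, unsigned int flush)
          {
                  WARN_ON_ONCE(flush & ~(REQ_FLUSH | REQ_FUA));

                  /* FUA without flush support is meaningless; drop it. */
                  if (WARN_ON_ONCE(!(flush & REQ_FLUSH) && (flush & REQ_FUA)))
                          flush &= ~REQ_FUA;

                  q->flush_flags = flush & (REQ_FLUSH | REQ_FUA);
          }

          /* Typical caller in a driver's setup path: */
          blk_queue_flush(q, REQ_FLUSH | REQ_FUA);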
    • block: kill QUEUE_ORDERED_BY_TAG · 6958f145
      Tejun Heo authored
      Nobody is making meaningful use of ORDERED_BY_TAG now, and queue
      draining for barrier requests will be removed soon, which will render
      the advantage of tag ordering moot.  Kill ORDERED_BY_TAG.  The
      following users are affected.

      * brd: converted to ORDERED_DRAIN.
      * virtio_blk: ORDERED_TAG path was already marked deprecated.  Removed.
      * xen-blkfront: ORDERED_TAG case dropped.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: Chris Wright <chrisw@sous-sol.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • block/loop: queue ordered mode should be DRAIN_FLUSH · 589d7ed0
      Tejun Heo authored
      loop implements FLUSH using fsync but was incorrectly setting its
      ordered mode to DRAIN.  Change it to DRAIN_FLUSH.  In practice, this
      doesn't change anything as loop doesn't make use of the block layer
      ordered implementation.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • ide: remove unnecessary blk_queue_flushing() test in do_ide_request() · 0da2f509
      Tejun Heo authored
      Unplugging from a request function doesn't really help much (it's
      already in the request_fn), and soon the block layer will be updated
      to mix the barrier sequence with other commands, so there's no need to
      treat queue flushing any differently.

      ide was the only user of blk_queue_flushing().  Remove it.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Acked-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
  2. 23 Aug, 2010 1 commit
  3. 22 Aug, 2010 2 commits
    • Merge branch 'kvm-updates/2.6.36' of git://git.kernel.org/pub/scm/virt/kvm/kvm · 3dc8d7f0
      Linus Torvalds authored
      * 'kvm-updates/2.6.36' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
        KVM: PIT: free irq source id in handling error path
        KVM: destroy workqueue on kvm_create_pit() failures
        KVM: fix poison overwritten caused by using wrong xstate size
    • Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/anholt/drm-intel · 4238a417
      Linus Torvalds authored
      * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/anholt/drm-intel: (58 commits)
        drm/i915,intel_agp: Add support for Sandybridge D0
        drm/i915: fix render pipe control notify on sandybridge
        agp/intel: set 40-bit dma mask on Sandybridge
        drm/i915: Remove the conflicting BUG_ON()
        drm/i915/suspend: s/IS_IRONLAKE/HAS_PCH_SPLIT/
        drm/i915/suspend: Flush register writes before busy-waiting.
        i915: disable DAC on Ironlake also when doing CRT load detection.
        drm/i915: wait for actual vblank, not just 20ms
        drm/i915: make sure eDP PLL is enabled at the right time
        drm/i915: fix VGA plane disable for Ironlake+
        drm/i915: eDP mode set sequence corrections
        drm/i915: add panel reset workaround
        drm/i915: Enable RC6 on Ironlake.
        drm/i915/sdvo: Only set is_lvds if we have a valid fixed mode.
        drm/i915: Set up a render context on Ironlake
        drm/i915 invalidate indirect state pointers at end of ring exec
        drm/i915: Wake-up wait_request() from elapsed hang-check (v2)
        drm/i915: Apply i830 errata for cursor alignment
        drm/i915: Only update i845/i865 CURBASE when disabled (v2)
        drm/i915: FBC is updated within set_base() so remove second call in mode_set()
        ...