1. 23 Jul, 2011 1 commit
  2. 13 Jul, 2011 1 commit
  3. 12 Jul, 2011 4 commits
    • CFQ: add think time check for group · 7700fc4f
      Shaohua Li authored
      Currently, when the last queue of a group has no request, we don't expire
      the queue, in the hope that a request from the group arrives soon, so the
      group doesn't miss its share. But if the think time is big, that assumption
      doesn't hold and we just waste bandwidth. In such a case, don't idle.
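
      A minimal sketch of the added check (struct and field names here are only
      illustrative, not the exact cfq-iosched code):

      /*
       * Sketch: track a per-group mean "think time" and skip idling when it
       * exceeds the group idle window, since waiting would likely just waste
       * bandwidth.
       */
      struct ttime_stats {
              unsigned long ttime_samples;    /* number of samples collected */
              unsigned long ttime_mean;       /* mean think time, in jiffies */
      };

      static int group_should_idle(const struct ttime_stats *group_ttime,
                                   unsigned long group_idle)
      {
              if (group_ttime->ttime_samples &&
                  group_ttime->ttime_mean > group_idle)
                      return 0;       /* don't idle; the group thinks too long */
              return 1;               /* idle and wait for the next request */
      }

      The fio job file used for testing: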
      
      [global]
      runtime=30
      direct=1
      
      [test1]
      cgroup=test1
      cgroup_weight=1000
      rw=randread
      ioengine=libaio
      size=500m
      runtime=30
      directory=/mnt
      filename=file1
      thinktime=9000
      
      [test2]
      cgroup=test2
      cgroup_weight=1000
      rw=randread
      ioengine=libaio
      size=500m
      runtime=30
      directory=/mnt
      filename=file2
      
      	patched		base
      test1	64k		39k
      test2	548k		540k
      total	604k		578k
      
      The test1 group gets much better throughput because it waits less time.
      
      To check whether the patch changes the behavior of queues without think time,
      I also tried giving test1 a 2ms think time or no think time. The test result
      is stable: the throughput doesn't change with or without the patch.
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      Acked-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • CFQ: add think time check for service tree · f5f2b6ce
      Shaohua Li authored
      Currently, when the last queue of a service tree has no request, we don't
      expire the queue, in the hope that a request from the service tree arrives
      soon, so the service tree doesn't miss its share. But if the think time is
      big, that assumption doesn't hold and we just waste bandwidth. In such a
      case, don't idle.
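
      The check mirrors the group-level one, roughly as follows (sketch only,
      reusing the illustrative ttime_stats struct from the previous sketch and
      comparing against the slice idle window):

      static int service_tree_should_idle(const struct ttime_stats *st_ttime,
                                          unsigned long slice_idle)
      {
              if (st_ttime->ttime_samples && st_ttime->ttime_mean > slice_idle)
                      return 0;       /* think time too big, don't idle */
              return 1;
      }

      The fio job file used for testing: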
      
      [global]
      runtime=10
      direct=1
      
      [test1]
      rw=randread
      ioengine=libaio
      size=500m
      directory=/mnt
      filename=file1
      thinktime=9000
      
      [test2]
      rw=read
      ioengine=libaio
      size=1G
      directory=/mnt
      filename=file2
      
      	patched		base
      test1	41k/s		33k/s
      test2	15868k/s	15789k/s
      total	15902k/s	15817k/s
      
      Overall throughput is slightly better.
      
      To check whether the patch changes the behavior of queues without think time,
      I also tried giving test1 a 2ms think time or no think time. The test shows
      some variation even without the patch, but the average throughput doesn't
      change with or without the patch.
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      Acked-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • CFQ: move think time check variables to a separate struct · 383cd721
      Shaohua Li authored
      Move the variables used for the think time check into a separate struct. This
      prepares for adding think time checks for the service tree and the group. No
      functional change.
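
      The grouping is roughly of this shape (a sketch; the actual field names in
      cfq-iosched.c may differ):

      /*
       * Per-entity think-time bookkeeping pulled into one struct so it can be
       * embedded in the queue, and later in the service tree and the group.
       */
      struct cfq_ttime {
              unsigned long last_end_request; /* when the last request completed */
              unsigned long ttime_total;      /* sum of observed think times */
              unsigned long ttime_samples;    /* number of samples */
              unsigned long ttime_mean;       /* running mean think time */
      };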
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      Acked-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • fixlet: Remove fs_excl from struct task. · 4aede84b
      Justin TerAvest authored
      fs_excl is a poor man's priority inheritance for filesystems to hint to
      the block layer that an operation is important. It was never clearly
      specified, not widely adopted, and will not prevent starvation in many
      cases (like across cgroups).
      
      fs_excl was introduced with the time sliced CFQ IO scheduler, to
      indicate when a process held FS exclusive resources and thus needed
      a boost.
      
      It doesn't cover all file systems, and it was never fully complete. Let's
      kill it.
      Signed-off-by: Justin TerAvest <teravest@google.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
  4. 10 Jul, 2011 1 commit
  5. 08 Jul, 2011 2 commits
    • block: document blk_plug list access · 316cc67d
      Shaohua Li authored
      I'm often confused about why preemption isn't disabled when the blk_plug list
      is changed. It would be better to add comments here in case others have
      similar concerns.
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • block: avoid building too big plug list · 55c022bb
      Shaohua Li authored
      When I test an fio script with a big I/O depth, I found that the total
      throughput drops compared to a relatively small I/O depth. The reason is that
      the thread accumulates a lot of requests in its plug list, which causes some
      delays (surely this depends on CPU speed).
      We'd better have a threshold on the number of plugged requests. Once the
      threshold is reached, requests are no longer being merged, and queue lock
      contention isn't severe when pushing the per-task requests to the queue, so
      the main advantages of the blk plug no longer apply. We can force a plug list
      flush in this case.
      With this, my test throughput actually increases and almost equals the small
      I/O depth case. Another side effect is that the irq-off time in
      blk_flush_plug_list() decreases for big I/O depths.
      BLK_MAX_REQUEST_COUNT is chosen arbitrarily, but 16 is enough to reduce lock
      contention for me. I'm open here, though; 32 is ok in my test too.
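
      The mechanism is roughly the following (a sketch with simplified,
      illustrative types; the real block layer uses its own plug and request
      structures):

      #define BLK_MAX_REQUEST_COUNT 16        /* chosen threshold; 32 also worked */

      struct request_sketch;                  /* stand-in for a queued request */

      struct blk_plug_sketch {
              struct request_sketch *pending[BLK_MAX_REQUEST_COUNT];
              unsigned int count;
      };

      /* Submits all pending requests to the device queue and resets count. */
      void plug_flush_sketch(struct blk_plug_sketch *plug);

      /*
       * Keep accumulating requests on the per-task plug, but force a flush once
       * the list reaches the threshold, since further plugging no longer buys
       * merging or reduced queue lock contention.
       */
      static void plug_add_request(struct blk_plug_sketch *plug,
                                   struct request_sketch *rq)
      {
              plug->pending[plug->count++] = rq;
              if (plug->count >= BLK_MAX_REQUEST_COUNT)
                      plug_flush_sketch(plug);
      }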
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
  6. 07 Jul, 2011 1 commit
  7. 06 Jul, 2011 1 commit
    • block: eliminate potential for infinite loop in blkdev_issue_discard · 0f799603
      Mike Snitzer authored
      Due to the recently identified overflow in read_capacity_16() it was
      possible for max_discard_sectors to be zero but still have discards
      enabled on the associated device's queue.
      
      Eliminate the possibility for blkdev_issue_discard to infinitely loop.
      
      Interestingly this issue wasn't identified until a device, whose
      discard_granularity was 0 due to read_capacity_16 overflow, was consumed
      by blk_stack_limits() to construct limits for a higher-level DM
      multipath device.  The multipath device's resulting limits never had the
      discard limits stacked because blk_stack_limits() will only do so if
      the bottom device's discard_granularity != 0.  This resulted in the
      multipath device's limits.max_discard_sectors being 0.
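
      The fix boils down to a guard of roughly this shape (a sketch with
      simplified names; the real blkdev_issue_discard() builds and submits a bio
      per chunk):

      #include <errno.h>

      static int issue_discard_sketch(unsigned int max_discard_sectors,
                                      unsigned long long nr_sects)
      {
              /* A zero limit would make the chunking loop below spin forever. */
              if (max_discard_sectors == 0)
                      return -EOPNOTSUPP;

              while (nr_sects) {
                      unsigned int chunk = nr_sects > max_discard_sectors ?
                                      max_discard_sectors : (unsigned int)nr_sects;
                      /* ... build and submit a discard bio of 'chunk' sectors ... */
                      nr_sects -= chunk;
              }
              return 0;
      }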
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
  8. 01 Jul, 2011 3 commits
    • compat_ioctl: fix warning caused by qemu · 390192b3
      Johannes Stezenbach authored
      On a Linux x86_64 host with 32-bit userspace, running qemu, or even just
      "qemu-img create -f qcow2 some.img 1G", causes a kernel warning:
      
      ioctl32(qemu-img:5296): Unknown cmd fd(3) cmd(00005326){t:'S';sz:0} arg(7fffffff) on some.img
      ioctl32(qemu-img:5296): Unknown cmd fd(3) cmd(801c0204){t:02;sz:28} arg(fff77350) on some.img
      
      ioctl 00005326 is CDROM_DRIVE_STATUS,
      ioctl 801c0204 is FDGETPRM.
      
      The warning appears because the Linux compat-ioctl handler for these
      ioctls only applies to block devices, while qemu also uses the ioctls on
      plain files.
      Signed-off-by: Johannes Stezenbach <js@sig21.net>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • block: flush MEDIA_CHANGE from drivers on close(2) · 85ef06d1
      Tejun Heo authored
      Currently, only open(2) is defined as the 'clearing' point.  It has
      two roles - first, it's an acknowledgement from userland indicating
      that the event has been received and kernel can clear pending states
      and proceed to generate more events.  Secondly, it's passed on to
      device drivers as a hint indicating that a synchronization point has
      been reached and it might want to take a deeper look at the device.
      
      The latter currently is only used by sr which uses two different
      mechanisms - GET_EVENT_MEDIA_STATUS_NOTIFICATION and TEST_UNIT_READY
      to discover events, where the former is lighter weight and safe to be
      used repeatedly but may not provide full coverage.  Among other
      things, GET_EVENT can't detect media removal while TUR can.
      
      This patch makes close(2) - blkdev_put() - indicate clearing hint for
      MEDIA_CHANGE to drivers.  disk_check_events() is renamed to
      disk_flush_events() and updated to take @mask for events to flush
      which is or'd to ev->clearing and will be passed to the driver on the
      next ->check_events() invocation.
      
      This change makes sr generate MEDIA_CHANGE when media is ejected from
      userland - e.g. with eject(1).
      
      Note: Given the current usage, it seems @clearing hint is needlessly
      complex.  disk_clear_events() can simply clear all events and the hint
      can be boolean @flush.
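
      In rough terms, the close path now behaves like the sketch below (greatly
      simplified; the real blkdev_put() also handles open counts, partitions and
      holder management, and the condition shown is only illustrative):

      /*
       * Sketch: on close of a block device that was opened for reading, pass
       * MEDIA_CHANGE as a clearing hint so the driver takes a deeper look at
       * the media on its next ->check_events() pass.
       */
      static void blkdev_close_sketch(struct gendisk *disk, bool opened_for_read)
      {
              if (opened_for_read)
                      disk_flush_events(disk, DISK_EVENT_MEDIA_CHANGE);
              /* ... drop references and release the device ... */
      }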
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Kay Sievers <kay.sievers@vrfy.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • Merge branch 'for-linus' into for-3.1/core · 04bf7869
      Jens Axboe authored
      Conflicts:
      	block/blk-throttle.c
      	block/cfq-iosched.c
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
  9. 30 Jun, 2011 8 commits
  10. 27 Jun, 2011 4 commits
  11. 25 Jun, 2011 5 commits
  12. 24 Jun, 2011 9 commits