1. 20 Jul, 2012 3 commits
  2. 17 Jul, 2012 3 commits
  3. 16 Jul, 2012 3 commits
  4. 27 Jun, 2012 1 commit
  5. 14 Jun, 2012 1 commit
    • s390/smp: make absolute lowcore / cpu restart parameter accesses more robust · fbe76568
      Heiko Carstens authored
      Setting the cpu restart parameters is done in three different fashions:
      - directly setting the four parameters individually
      - copying the four parameters with memcpy (using 4 * sizeof(long))
      - copying the four parameters using a private structure
      
      In addition code in entry*.S relies on a certain order of the restart
      members of struct _lowcore.
      
      Make all of this more robust to future changes by adding a
      mem_absolute_assign(dest, val) define, which assigns val to dest
      using absolute addressing mode. Also the load multiple instructions
      in entry*.S have been split into separate load instructions so the
      order of the struct _lowcore members doesn't matter anymore.
      
      In addition move the prototypes of memcpy_real/absolute from uaccess.h
      to processor.h. These memcpy* variants are not related to uaccess at all.
      string.h doesn't seem to be a good match either, so let's use processor.h.
      
      Also replace the eight-byte array in struct _lowcore which represents a
      misaligned u64 with a u64. The compiler will always create code that
      handles the misaligned u64 correctly.
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
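      For illustration, a minimal sketch of how such a mem_absolute_assign(dest, val)
      define could be built on top of the memcpy_absolute() helper mentioned above; the
      in-tree version may instead emit the absolute store with inline assembly, so the
      prototype and usage here are assumptions, not the actual implementation:

          #include <linux/types.h>

          /* assumed prototype, as moved to processor.h by this patch */
          void *memcpy_absolute(void *dest, void *src, size_t count);

          /* assign val to dest through the absolute-addressing copy helper,
           * so every restart parameter store goes through one primitive */
          #define mem_absolute_assign(dest, val) do {              \
                  __typeof__(dest) __tmp = (val);                  \
                  memcpy_absolute(&(dest), &__tmp, sizeof(__tmp)); \
          } while (0)

          /* illustrative use for the restart parameters, e.g.:
           *   mem_absolute_assign(S390_lowcore.restart_stack, stack);
           *   mem_absolute_assign(S390_lowcore.restart_fn, (unsigned long) func);
           */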
  6. 05 Jun, 2012 9 commits
  7. 04 Jun, 2012 12 commits
  8. 03 Jun, 2012 3 commits
    • vfs: move inode stat information closer together · 2f9d3df8
      Linus Torvalds authored
      The comment above it says "Stat data, not accessed from path walking",
      but in fact some of the inode fields we use for the common stat data were way
      down at the end of the inode, causing unnecessary cache misses for the
      common stat operations.
      
      The inode structure is pretty big, and this can change padding depending
      on field width, but at least on the common 64-bit configurations this
      doesn't change the size.  Some of our inode layout has historically been
      to try to avoid unnecessary padding fields, but cache locality is at
      least as important for layout, if not more.
      
      Noticed by looking at kernel profiles, where the "i_blkbits" access
      stood out like a sore thumb.
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
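      The effect is easiest to see on a toy structure; the sketch below is purely
      illustrative (the field names echo the common stat fields, it is not the
      kernel's struct inode):

          #include <sys/types.h>
          #include <time.h>

          struct example_inode {
                  /* hot stat data, not needed for path walking, kept together
                   * so a common stat() touches one or two cache lines */
                  mode_t             i_mode;
                  uid_t              i_uid;
                  gid_t              i_gid;
                  unsigned int       i_blkbits;   /* the access that stood out in profiles */
                  off_t              i_size;
                  struct timespec    i_atime;
                  struct timespec    i_mtime;
                  struct timespec    i_ctime;
                  unsigned long long i_blocks;

                  /* colder, rarely read state lives further down */
                  unsigned long      i_state;
                  void              *i_private;
                  /* ... */
          };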
    • Linux 3.5-rc1 · f8f5701b
      Linus Torvalds authored
    • Merge tag 'dm-3.5-changes-1' of git://git.kernel.org/pub/scm/linux/kernel/git/agk/linux-dm · 912afc36
      Linus Torvalds authored
      Pull device-mapper updates from Alasdair G Kergon:
       "Improve multipath's retrying mechanism in some defined circumstances
        and provide a simple reserve/release mechanism for userspace tools to
        access thin provisioning metadata while the pool is in use."
      
      * tag 'dm-3.5-changes-1' of git://git.kernel.org/pub/scm/linux/kernel/git/agk/linux-dm:
        dm thin: provide userspace access to pool metadata
        dm thin: use slab mempools
        dm mpath: allow ioctls to trigger pg init
        dm mpath: delay retry of bypassed pg
        dm mpath: reduce size of struct multipath
  9. 02 Jun, 2012 5 commits
    • dm thin: provide userspace access to pool metadata · cc8394d8
      Joe Thornber authored
      This patch implements two new messages that can be sent to the thin
      pool target, allowing it to take a snapshot of the _metadata_.  This
      read-only snapshot can be accessed by userland, concurrently with the
      live target.
      
      Only one metadata snapshot can be held at a time.  The pool's status
      line will give the block location for the current msnap.
      
      Since version 0.1.5 of the userland thin provisioning tools, the
      thin_dump program displays the msnap as follows:
      
          thin_dump -m <msnap root> <metadata dev>
      
      Available here: https://github.com/jthornber/thin-provisioning-tools
      
      Now that userland can access the metadata we can do various things
      that have traditionally been kernel-side tasks:
      
           i) Incremental backups.
      
           By using metadata snapshots we can work out what blocks have
           changed over time.  Combined with data snapshots we can ensure
           the data doesn't change while we back it up.
      
           A short proof of concept script can be found here:
      
           https://github.com/jthornber/thinp-test-suite/blob/master/incremental_backup_example.rb
      
           ii) Migration of thin devices from one pool to another.
      
           iii) Merging snapshots back into an external origin.
      
           iv) Asynchronous replication.
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
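      For illustration, a userland tool could send the pool message through
      libdevmapper roughly as sketched below; this assumes the message is named
      reserve_metadata_snap (check the dm-thin documentation for your kernel) and is
      the programmatic equivalent of "dmsetup message <pool> 0 reserve_metadata_snap":

          #include <stdio.h>
          #include <libdevmapper.h>

          /* send a target message to sector 0 of the named pool device */
          static int send_pool_message(const char *pool_name, const char *msg)
          {
                  struct dm_task *dmt;
                  int r = 0;

                  dmt = dm_task_create(DM_DEVICE_TARGET_MSG);
                  if (!dmt)
                          return 0;

                  if (dm_task_set_name(dmt, pool_name) &&
                      dm_task_set_sector(dmt, 0) &&
                      dm_task_set_message(dmt, msg))
                          r = dm_task_run(dmt);

                  dm_task_destroy(dmt);
                  return r;
          }

          int main(void)
          {
                  /* "pool" is a placeholder for the thin-pool device name */
                  if (!send_pool_message("pool", "reserve_metadata_snap"))
                          fprintf(stderr, "reserve_metadata_snap failed\n");
                  return 0;
          }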
    • dm thin: use slab mempools · a24c2569
      Mike Snitzer authored
      Use dedicated caches prefixed with a "dm_" name rather than relying on
      kmalloc mempools backed by generic slab caches so the memory usage of
      thin provisioning (and any leaks) can be accounted for independently.
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
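      A rough sketch of the pattern (the structure and cache names are illustrative,
      not the actual dm-thin identifiers): a dedicated, "dm_"-prefixed slab cache
      backing the mempool shows up on its own in /proc/slabinfo instead of being
      folded into the generic kmalloc caches.

          #include <linux/slab.h>
          #include <linux/mempool.h>
          #include <linux/list.h>

          struct dm_example_mapping {             /* hypothetical object type */
                  struct list_head list;
                  sector_t virt_block;
                  sector_t data_block;
          };

          static struct kmem_cache *_mapping_cache;
          static mempool_t *_mapping_pool;

          static int example_pools_init(void)
          {
                  /* named cache: this memory usage (and any leaks) is now
                   * accounted for separately from the kmalloc-<size> caches */
                  _mapping_cache = kmem_cache_create("dm_example_mapping",
                                          sizeof(struct dm_example_mapping),
                                          0, 0, NULL);
                  if (!_mapping_cache)
                          return -ENOMEM;

                  /* before: mempool_create_kmalloc_pool(16, sizeof(...));
                   * after:  a pool carved out of the dedicated slab cache */
                  _mapping_pool = mempool_create_slab_pool(16, _mapping_cache);
                  if (!_mapping_pool) {
                          kmem_cache_destroy(_mapping_cache);
                          return -ENOMEM;
                  }
                  return 0;
          }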
    • dm mpath: allow ioctls to trigger pg init · 35991652
      Mikulas Patocka authored
      After the failure of a group of paths, any alternative paths that
      need initialising do not become available until further I/O is sent to
      the device.  Until this has happened, ioctls return -EAGAIN.
      
      With this patch, new paths are made available in response to an ioctl
      too.  The processing of the ioctl gets delayed until this has happened.
      
      Instead of returning an error, we submit a work item to kmultipathd
      (that will potentially activate the new path) and retry in ten
      milliseconds.
      
      Note that the patch doesn't retry an ioctl if the ioctl itself fails due
      to a path failure.  Such retries should be handled intelligently by the
      code that generated the ioctl in the first place, noting that some SCSI
      commands should not be retried because they are not idempotent (XOR write
      commands).  For commands that could be retried, there is a danger that
      if the device rejected the SCSI command, the path could be erroneously
      marked as failed, and the request would be retried on another path which
      might fail too.  It can be determined if the failure happens on the
      device or on the SCSI controller, but there is no guarantee that all
      SCSI drivers set these flags correctly.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
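      A rough sketch of the retry pattern (helper and member names are assumed from
      context; this is not the literal dm-mpath diff): rather than failing the ioctl
      with -EAGAIN when no path is usable, kick kmultipathd so it can initialise an
      alternative path group, wait about ten milliseconds, and try again.

          #include <linux/blkdev.h>
          #include <linux/delay.h>
          #include <linux/workqueue.h>
          #include <linux/sched.h>

          static int example_mpath_ioctl(struct multipath *m, unsigned cmd,
                                         unsigned long arg)
          {
                  for (;;) {
                          /* hypothetical helper returning the bdev of the
                           * currently usable path, or NULL if there is none */
                          struct block_device *bdev = example_current_path_bdev(m);

                          if (bdev)
                                  return __blkdev_driver_ioctl(bdev, 0, cmd, arg);

                          if (fatal_signal_pending(current))
                                  return -EINTR;

                          /* no usable path: let the worker activate one ... */
                          queue_work(kmultipathd, &m->process_queued_ios);
                          /* ... and retry shortly instead of returning -EAGAIN */
                          msleep(10);
                  }
          }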
    • dm mpath: delay retry of bypassed pg · f220fd4e
      Mike Christie authored
      If I/O needs retrying and only bypassed priority groups are available,
      set the pg_init_delay_retry flag to wait before retrying.
      
      If, for example, the reason for the bypass is that the controller is
      getting reset or there is a firmware upgrade happening, retrying right
      away would cause a flood of log messages and retries for what could be a
      few seconds or even several minutes.
      Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
      Acked-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
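      An illustrative sketch of the idea (member names assumed from context, not the
      literal dm-mpath code): path selection makes two passes, and only when it had to
      fall back to a bypassed priority group is pg_init marked for delayed retry.

          static void example_choose_pgpath(struct multipath *m, size_t nr_bytes)
          {
                  struct priority_group *pg;
                  unsigned bypassed = 1;

                  /* first pass tries non-bypassed PGs only; the second pass
                   * falls back to the bypassed ones */
                  do {
                          list_for_each_entry(pg, &m->priority_groups, list) {
                                  if (pg->bypassed == bypassed)
                                          continue;
                                  if (!__choose_path_in_pg(m, pg, nr_bytes)) {
                                          /* a bypassed PG was all we had, e.g.
                                           * during a controller reset: delay the
                                           * next pg_init retry rather than flood
                                           * the logs with immediate retries */
                                          if (!bypassed)
                                                  m->pg_init_delay_retry = 1;
                                          return;
                                  }
                          }
                  } while (bypassed--);
          }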
    • dm mpath: reduce size of struct multipath · 1fbdd2b3
      Mike Snitzer authored
      Move multipath structure's 'lock' and 'queue_size' members to eliminate
      two 4-byte holes.  Also use a bit within a single unsigned int for each
      existing flag (saves 8 bytes).  This allows future flags to be added
      without each consuming an unsigned int.
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Acked-by: Hannes Reinecke <hare@suse.de>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
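      An illustrative before/after sketch (member names are approximations, not the
      full struct multipath): reordering members removes alignment holes on 64-bit
      builds, and bitfields let all the boolean flags share one unsigned int.

          #include <linux/list.h>
          #include <linux/spinlock.h>
          #include <linux/device-mapper.h>

          struct example_before {
                  struct list_head list;
                  unsigned queue_io;              /* a full unsigned int per flag */
                  /* 4-byte hole here so the pointer below is 8-byte aligned */
                  struct dm_target *ti;
                  unsigned queue_if_no_path;
                  spinlock_t lock;
                  unsigned queue_size;
          };

          struct example_after {
                  struct list_head list;
                  struct dm_target *ti;

                  spinlock_t lock;                /* kept next to the other small members */
                  unsigned queue_size;

                  unsigned queue_io:1;            /* one bit per flag; adding more flags */
                  unsigned queue_if_no_path:1;    /* no longer grows the structure */
                  unsigned saved_queue_if_no_path:1;
          };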