1. 01 Jun, 2013 1 commit
  2. 31 May, 2013 14 commits
    • Merge branch 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs · 1d822d60
      Linus Torvalds authored
      Pull reiserfs fixes from Jan Kara:
       "Three reiserfs fixes.  They fix real problems spotted by users so I
        hope they are ok even at this stage."
      
      * 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs:
        reiserfs: fix deadlock with nfs racing on create/lookup
        reiserfs: fix problems with chowning setuid file w/ xattrs
        reiserfs: fix spurious multiple-fill in reiserfs_readdir_dentry
      1d822d60
    • Merge tag 'for-linus-v3.10-rc4-crc-xattr-fixes' of git://oss.sgi.com/xfs/xfs · 7cfb9532
      Linus Torvalds authored
      Pull xfs extended attribute fixes for CRCs from Ben Myers:
       "Here are several fixes that are relevant on CRC enabled XFS
        filesystems.  They are followed by a rework of the remote attribute
        code so that each block of the attribute contains a header with a CRC.
      
        Previously there was a CRC header per extent in the remote attribute
        code, but this was untenable because it was not possible to know how
        many extents would be allocated for the attribute until after the
        allocation has completed, due to the fragmentation of free space.
        This became complicated because the size of the headers needs to be
        added to the length of the payload to get the overall length required
        for the allocation.  With a header per block, things are less
        complicated at the cost of a little space.
      
        I would have preferred to defer this and the rest of the CRC queue to
        3.11 to mitigate risk for existing non-crc users in 3.10.  Doing so
        would require setting a feature bit for the on-disk changes, and so I
        have been pressured into sending this pull request by Eric Sandeen and
        David Chinner from Red Hat.  I'll send another pull request or two
        with the rest of the CRC queue next week.
      
         - Remove assert on count of remote attribute CRC headers
         - Fix the number of blocks read in for remote attributes
         - Zero remote attribute tails properly
         - Fix mapping of remote attribute buffers to have correct length
         - initialize temp leaf properly in xfs_attr3_leaf_unbalance, and
           xfs_attr3_leaf_compact
         - Rework remote attributes to have a header per block"
      
      * tag 'for-linus-v3.10-rc4-crc-xattr-fixes' of git://oss.sgi.com/xfs/xfs:
        xfs: rework remote attr CRCs
        xfs: fully initialise temp leaf in xfs_attr3_leaf_compact
        xfs: fully initialise temp leaf in xfs_attr3_leaf_unbalance
        xfs: correctly map remote attr buffers during removal
        xfs: remote attribute tail zeroing does too much
        xfs: remote attribute read too short
        xfs: remote attribute allocation may be contiguous
      7cfb9532
    • Merge tag 'for-linus-v3.10-rc4' of git://oss.sgi.com/xfs/xfs · e8d256ac
      Linus Torvalds authored
      Pull xfs fixes from Ben Myers:
       - Fix nested transactions in xfs_qm_scall_setqlim
       - Clear suid/sgid bits when we truncate with size update
       - Fix recovery for split buffers
       - Fix block count on remote symlinks
       - Add fsgeom flag for v5 superblock support
       - Disable XFS_IOC_SWAPEXT for CRC enabled filesystems
       - Fix dirv3 freespace block corruption
      
      * tag 'for-linus-v3.10-rc4' of git://oss.sgi.com/xfs/xfs:
        xfs: fix dir3 freespace block corruption
        xfs: disable swap extents ioctl on CRC enabled filesystems
        xfs: add fsgeom flag for v5 superblock support.
        xfs: fix incorrect remote symlink block count
        xfs: fix split buffer vector log recovery support
        xfs: kill suid/sgid through the truncate path.
        xfs: avoid nesting transactions in xfs_qm_scall_setqlim()
      e8d256ac
    • Merge tag 'please-pull-aertracefix' of git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras · 977b55cf
      Linus Torvalds authored
      Pull aer error logging fix from Tony Luck:
       "Can't call pci_get_domain_bus_and_slot() from interrupt context"
      
      * tag 'please-pull-aertracefix' of git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras:
        aerdrv: Move cper_print_aer() call out of interrupt context
      977b55cf
    • Merge tag 'arm64-stable' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64 · fe696b47
      Linus Torvalds authored
      Pull arm64 fixes from Catalin Marinas:
       - Module compilation issues (symbol not exported).
       - Plug a hole where user space can bring the kernel down.
      
      * tag 'arm64-stable' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64:
        arm64: don't kill the kernel on a bad esr from el0
        arm64: treat unhandled compat el0 traps as undef
        arm64: Do not report user faults for handled signals
        arm64: kernel: compiling issue, need 'EXPORT_SYMBOL(clear_page)'
      fe696b47
    • reiserfs: fix deadlock with nfs racing on create/lookup · a1457c0c
      Jeff Mahoney authored
      Reiserfs is currently able to be deadlocked by having two NFS clients
      where one has removed and recreated a file and another is accessing the
      file with an open file handle.
      
      If one client deletes and recreates a file with timing such that the
      recreated file obtains the same [dirid, objectid] pair as the original
      file while another client accesses the file via file handle, the create
      and lookup can race and deadlock if the lookup manages to create the
      in-memory inode first.
      
      The create thread, in insert_inode_locked4, will hold the write lock
      while waiting on the other inode to be unlocked. The lookup thread,
      anywhere in the iget path, will release and reacquire the write lock while
      it schedules. If it needs to reacquire the lock while the create thread
      has it, it will never be able to make forward progress because it needs
      to reacquire the lock before ultimately unlocking the inode.
      
      This patch drops the write lock across the insert_inode_locked4 call so
      that the ordering of inode_wait -> write lock is retained. Since this
      would have been the case before the BKL push-down, this is safe.
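
      As an illustration of the shape of that change (a sketch only, assuming
      the usual reiserfs_write_lock()/reiserfs_write_unlock() helpers and the
      find-actor arguments used by reiserfs's iget path, not the literal diff):

        /* drop the tree write lock around the insert so we never wait
         * on the inode while holding the write lock */
        reiserfs_write_unlock(inode->i_sb);
        err = insert_inode_locked4(inode, args.objectid,
                                   reiserfs_find_actor, &args);
        reiserfs_write_lock(inode->i_sb);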
      Signed-off-by: Jeff Mahoney <jeffm@suse.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      a1457c0c
    • reiserfs: fix problems with chowning setuid file w/ xattrs · 4a857011
      Jeff Mahoney authored
      reiserfs_chown_xattrs() takes the iattr struct passed into ->setattr
      and uses it to iterate over all the attrs associated with a file to change
      ownership of xattrs (and transfer quota associated with the xattr files).
      
      When the setuid bit is cleared during chown, ATTR_MODE and iattr->ia_mode
      are passed to all the xattrs as well. This means that the xattr directory
      will have S_IFREG added to its mode bits.
      
      This has been prevented in practice by a missing IS_PRIVATE check
      in reiserfs_acl_chmod, which caused a double-lock to occur while holding
      the write lock. Since the file system was completely locked up, the
      writeout of the corrupted mode never happened.
      
      This patch temporarily clears everything but ATTR_UID|ATTR_GID for the
      calls to reiserfs_setattr and adds the missing IS_PRIVATE check.
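
      Schematically, the two changes amount to something like the following
      (hypothetical variable names; a sketch of the idea, not the literal patch):

        /* while chowning the xattr files, pass only ownership changes down */
        unsigned int ia_valid = attrs->ia_valid;
        attrs->ia_valid &= (ATTR_UID | ATTR_GID);
        err = reiserfs_setattr(dentry, attrs);
        attrs->ia_valid = ia_valid;

        /* and in reiserfs_acl_chmod(): leave private (xattr) inodes alone */
        if (IS_PRIVATE(inode))
                return 0;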
      Signed-off-by: Jeff Mahoney <jeffm@suse.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      4a857011
    • reiserfs: fix spurious multiple-fill in reiserfs_readdir_dentry · 0bdc7acb
      Jeff Mahoney authored
      After sleeping for filldir(), we check to see if the file system has
      changed and re-search. The next_pos pointer is updated but its value
      isn't pushed into the key used for the search itself. As a result,
      the search returns the same item that the last cycle of the loop did
      and filldir() is called multiple times with the same data.
      
      The end result is that the buffer can contain the same name multiple
      times. This can be returned to userspace or used internally in the
      xattr code where it can manifest with the following warning:
      
      jdm-20004 reiserfs_delete_xattrs: Couldn't delete all xattrs (-2)
      
      reiserfs_for_each_xattr uses reiserfs_readdir_dentry to iterate over
      the xattr names and ends up trying to unlink the same name twice. The
      second attempt fails with -ENOENT and the error is returned. At some
      point I'll need to add support into reiserfsck to remove the orphaned
      directories left behind when this occurs.
      
      The fix is to push the value into the key before re-searching.
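
      A sketch of that one-liner, assuming the pos_key/next_pos names used in
      reiserfs_readdir_dentry():

        /* carry the resume position into the search key before
         * repeating the search after filldir() slept */
        set_cpu_key_k_offset(&pos_key, next_pos);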
      Signed-off-by: Jeff Mahoney <jeffm@suse.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      0bdc7acb
    • arm64: don't kill the kernel on a bad esr from el0 · 9955ac47
      Mark Rutland authored
      Rather than completely killing the kernel if we receive an esr value we
      can't deal with in the el0 handlers, send the process a SIGILL and log
      the esr value in the hope that we can debug it. If we receive a bad esr
      from el1, we'll die() as before.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: stable@vger.kernel.org
      9955ac47
    • arm64: treat unhandled compat el0 traps as undef · 381cc2b9
      Mark Rutland authored
      Currently, if a compat process reads or writes from/to a disabled
      cp15/cp14 register, the trap is not handled by the el0_sync_compat
      handler, and the kernel will head to bad_mode, where it will die(), and
      oops(). For 64 bit processes, disabled system register accesses are
      currently treated as unhandled instructions.
      
      This patch modifies entry.S to treat these unhandled traps as undefined
      instructions, sending a SIGILL to userspace. This gives processes a
      chance to handle this and stop using inaccessible registers, and
      prevents further issues in the kernel as a result of the die().
      Reported-by: Johannes Jensen <Johannes.Jensen@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      381cc2b9
    • Merge branch 'drm-fixes' of git://people.freedesktop.org/~airlied/linux · a93cb29a
      Linus Torvalds authored
      Pull drm fixes from Dave Airlie:
       "One qxl 32-bit warning fix, the rest is a bunch of radeon fixes from
        Alex for some issues we've been seeing."
      
      * 'drm-fixes' of git://people.freedesktop.org/~airlied/linux:
        drm/qxl: fix build warnings on 32-bit
        radeon: use max_bus_speed to activate gen2 speeds
        drm/radeon: narrow scope of Apple re-POST hack
        drm/radeon: don't check crtcs in card_posted() on cards without DCE
        drm/radeon: fix card_posted check for newer asics
        drm/radeon: fix typo in cu_per_sh on verde
        drm/radeon: UVD block on SUMO2 is the same as on SUMO
      a93cb29a
    • drm/qxl: fix build warnings on 32-bit · 970fa986
      Dave Airlie authored
      Just the usual printk related warnings.
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      970fa986
    • Merge branch 'for-3.10' of git://linux-nfs.org/~bfields/linux · 4203afc3
      Linus Torvalds authored
      Pull nfsd fixes from Bruce Fields:
       "A couple minor fixes for the (new to 3.10) gss-proxy code.
      
        And one regression from user-namespace changes.  (XBMC clients were
        doing something admittedly weird--sending -1 gid's--but something that
        we used to allow.)"
      
      * 'for-3.10' of git://linux-nfs.org/~bfields/linux:
        svcrpc: fix failures to handle -1 uid's and gid's
        svcrpc: implement O_NONBLOCK behavior for use-gss-proxy
        svcauth_gss: fix error code in use_gss_proxy()
      4203afc3
    • Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip · 484b002e
      Linus Torvalds authored
      Pull x86 fixes from Peter Anvin:
      
       - Three EFI-related fixes
      
       - Two early memory initialization fixes
      
       - build fix for older binutils
      
       - fix for an eager FPU performance regression -- currently we don't
         allow the use of the FPU at interrupt time *at all* in eager mode,
         which is clearly wrong.
      
      * 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
        x86: Allow FPU to be used at interrupt time even with eagerfpu
        x86, crc32-pclmul: Fix build with older binutils
        x86-64, init: Fix a possible wraparound bug in switchover in head_64.S
        x86, range: fix missing merge during add range
        x86, efi: initial the local variable of DataSize to zero
        efivar: fix oops in efivar_update_sysfs_entries() caused by memory reuse
        efivarfs: Never return ENOENT from firmware again
      484b002e
  3. 30 May, 2013 25 commits
    • x86: Allow FPU to be used at interrupt time even with eagerfpu · 5187b28f
      Pekka Riikonen authored
      With the addition of eagerfpu the irq_fpu_usable() now returns false
      negatives especially in the case of ksoftirqd and interrupted idle task,
      two common cases for FPU use for example in networking/crypto.  With
      eagerfpu=off FPU use is possible in those contexts.  This is because of
      the eagerfpu check in interrupted_kernel_fpu_idle():
      
      ...
        * For now, with eagerfpu we will return interrupted kernel FPU
        * state as not-idle. TBD: Ideally we can change the return value
        * to something like __thread_has_fpu(current). But we need to
        * be careful of doing __thread_clear_has_fpu() before saving
        * the FPU etc for supporting nested uses etc. For now, take
        * the simple route!
      ...
       	if (use_eager_fpu())
       		return 0;
      
      As eagerfpu is automatically "on" on those CPUs that also have the
      features like AES-NI this patch changes the eagerfpu check to return 1 in
      case the kernel_fpu_begin() has not been called yet.  Once it has been,
      __thread_has_fpu() will start returning 0.
      
      Notice that with eagerfpu the __thread_has_fpu is always true initially.
      FPU use is thus always possible no matter what task is under us, unless
      the state has already been saved with kernel_fpu_begin().
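
      In other words, the eagerfpu branch quoted above stops returning a hard
      zero and instead reports the FPU as usable until kernel_fpu_begin() has
      claimed it; a sketch of the new check (not necessarily the exact
      upstream hunk):

        if (use_eager_fpu())
                return __thread_has_fpu(current);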
      
      [ hpa: this is a performance regression, not a correctness regression,
        but since it can be quite serious on CPUs which need encryption at
        interrupt time I am marking this for urgent/stable. ]
      Signed-off-by: Pekka Riikonen <priikone@iki.fi>
      Link: http://lkml.kernel.org/r/alpine.GSO.2.00.1305131356320.18@git.silcnet.org
      Cc: <stable@vger.kernel.org> v3.7+
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      5187b28f
    • x86, crc32-pclmul: Fix build with older binutils · 2baad612
      Jan Beulich authored
      binutils prior to 2.18 (e.g. the ones found on SLE10) don't support
      assembling PEXTRD, so a macro based approach like the one for PCLMULQDQ
      in the same file should be used.
      
      This requires making the helper macros capable of recognizing 32-bit
      general purpose register operands.
      
      [ hpa: tagging for stable as it is a low risk build fix ]
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Link: http://lkml.kernel.org/r/51A6142A02000078000D99D8@nat28.tlf.novell.com
      Cc: Alexander Boyko <alexander_boyko@xyratex.com>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: Huang Ying <ying.huang@intel.com>
      Cc: <stable@vger.kernel.org> v3.9
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      2baad612
    • xfs: rework remote attr CRCs · 7bc0dc27
      Dave Chinner authored
      Note: this changes the on-disk remote attribute format. I assert
      that this is OK to do as CRCs are marked experimental and the first
      kernel it is included in has not yet reached release. Further,
      the userspace utilities are still evolving and so anyone using this
      stuff right now is a developer or tester using volatile filesystems
      for testing this feature. Hence changing the format right now to
      save longer term pain is the right thing to do.
      
      The fundamental change is to move from a header per extent in the
      attribute to a header per filesystem block in the attribute. This
      means there are more header blocks and the parsing of the attribute
      data is slightly more complex, but it has the advantage that we
      always know the size of the attribute on disk based on the length of
      the data it contains.
      
      This is where the header-per-extent method has problems. We don't
      know the size of the attribute on disk without first knowing how
      many extents are used to hold it. And we can't tell from a
      mapping lookup, either, because remote attributes can be allocated
      contiguously with other attribute blocks and so there is no obvious
      way of determining the actual size of the attribute on disk short of
      walking and mapping buffers.
      
      The problem with this approach is that if we map a buffer
      incorrectly (e.g. we make the last buffer for the attribute data too
      long), we then get buffer cache lookup failure when we map it
      correctly. i.e. we get a size mismatch on lookup. This is not
      necessarily fatal, but it's a cache coherency problem that can lead
      to returning the wrong data to userspace or writing the wrong data
      to disk. And debug kernels will assert fail if this occurs.
      
      I found lots of niggly little problems trying to fix this issue on a
      4k block size filesystem, finally getting it to pass with lots of
      fixes. The thing is, 1024 byte filesystems still failed, and it was
      getting really complex handling all the corner cases that were
      showing up. And there were clearly more that I hadn't found yet.
      
      It is complex, fragile code, and if we don't fix it now, it will be
      complex, fragile code forever more.
      
      Hence the simple fix is to add a header to each filesystem block.
      This gives us the same relationship between the attribute data
      length and the number of blocks on disk as we have without CRCs -
      it's a linear mapping and doesn't require us to guess anything. It
      is simple to implement, too - the remote block count calculated at
      lookup time can be used by the remote attribute set/get/remove code
      without modification for both CRC and non-CRC filesystems. The world
      becomes sane again.
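
      The linear mapping means the number of remote blocks can be computed
      directly from the value length; schematically (the header struct and
      mount fields are from the xfs remote attribute code, "attrlen" is just
      an illustrative name):

        /* usable bytes per block are fixed once each block carries its
         * own CRC header, so the length maps linearly to blocks */
        unsigned int per_block = mp->m_sb.sb_blocksize -
                                 sizeof(struct xfs_attr3_rmt_hdr);
        unsigned int nblocks = (attrlen + per_block - 1) / per_block;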
      
      Because the copy-in and copy-out now need to iterate over each
      filesystem block, I moved them into helper functions so we separate
      the block mapping and buffer manipulations from the attribute data
      and CRC header manipulations. The code becomes much clearer as a
      result, and it is a lot easier to understand and debug. It also
      appears to be much more robust - once it worked on 4k block size
      filesystems, it has worked without failure on 1k block size
      filesystems, too.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Ben Myers <bpm@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      
      (cherry picked from commit ad1858d7)
      7bc0dc27
    • xfs: fully initialise temp leaf in xfs_attr3_leaf_compact · 634fd532
      Dave Chinner authored
      xfs_attr3_leaf_compact() uses a temporary buffer for compacting the
      entries in a leaf. It copies the original buffer into the
      temporary buffer, then zeros the original buffer completely. It then
      copies the entries back into the original buffer.  However, the
      original buffer has not been correctly initialised, and so the
      movement of the entries goes horribly wrong.
      
      Make sure the zeroed destination buffer is fully initialised, and
      once we've set up the destination incore header appropriately, write
      it back to the buffer before starting to move entries around.
      
      While debugging this, the _d/_s prefixes weren't sufficient to
      remind me which buffer was which, so rename them all _src/_dst.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Ben Myers <bpm@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      
      (cherry picked from commit d4c712bc)
      634fd532
    • xfs: fully initialise temp leaf in xfs_attr3_leaf_unbalance · 9e80c762
      Dave Chinner authored
      xfs_attr3_leaf_unbalance() uses a temporary buffer for recombining
      the entries in two leaves when the destination leaf requires
      compaction. The temporary buffer ends up being copied back over the
      original destination buffer, so the header in the temporary buffer
      needs to contain all the information that is in the destination
      buffer.
      
      To make sure the temporary buffer is fully initialised, once we've
      set up the temporary incore header appropriately, write it back to
      the temporary buffer before starting to move entries around.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Ben Myers <bpm@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      
      (cherry picked from commit 8517de2a)
      9e80c762
    • xfs: correctly map remote attr buffers during removal · 58a72281
      Dave Chinner authored
      If we don't map the buffers correctly (same as for get/set
      operations) then the incore buffer lookup will fail. If a block
      number matches but a length is wrong, then debug kernels will ASSERT
      fail in _xfs_buf_find() due to the length mismatch. Ensure that we
      map the buffers correctly by basing the length of the buffer on the
      attribute data length rather than the remote block count.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Ben Myers <bpm@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      
      (cherry picked from commit 6863ef84)
      58a72281
    • xfs: remote attribute tail zeroing does too much · 26f71445
      Dave Chinner authored
      When the attribute data does not fill the entire remote block, we
      zero the remaining part of the buffer. This, however, needs to take
      into account that the buffer has a header, and so the offset where
      zeroing starts and the length of zeroing need to take this into
      account. Otherwise we end up with zeros over the end of the
      attribute value when CRCs are enabled.
      
      While there, make sure we only ask to map an extent that covers the
      remaining range of the attribute, rather than asking every time for
      the full length of remote data. If the remote attribute blocks are
      contiguous with other parts of the attribute tree, it will map those
      blocks as well and we can potentially zero them incorrectly. We can
      also get buffer size mismatches when trying to read or remove the
      remote attribute, and this can lead to not finding the correct
      buffer when looking it up in cache.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Ben Myers <bpm@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      
      (cherry picked from commit 4af3644c)
      26f71445
    • xfs: remote attribute read too short · 551b382f
      Dave Chinner authored
      Reading a maximally sized remote attribute fails when CRCs are
      enabled with this verification error:
      
      XFS (vdb): remote attribute header does not match required off/len/owner)
      
      There are two reasons for this, the first being that the
      length of the buffer being read is determined from the
      args->rmtblkcnt which doesn't take into account CRC headers. Hence
      the mapped length ends up being too short and so we need to
      calculate it directly from the value length.
      
      The second is that the byte count of valid data within a buffer is
      capped by the length of the data and so doesn't take into account
      that the buffer might be longer due to headers. Hence we need to
      calculate the data space in the buffer first before calculating the
      actual byte count of data.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Ben Myers <bpm@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      
      (cherry picked from commit 913e96bc)
      551b382f
    • xfs: remote attribute allocation may be contiguous · 9531e2de
      Dave Chinner authored
      When CRCs are enabled, there may be multiple allocations made if the
      headers cause a length overflow. This, however, does not mean that
      the number of headers required increases, as the second and
      subsequent extents may be contiguous with the previous extent. Hence
      when we map the extents to write the attribute data, we may end up
      with less extents than allocations made. Hence the assertion that we
      consume the number of headers we calculated in the allocation loop
      is incorrect and needs to be removed.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Ben Myers <bpm@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      
      (cherry picked from commit 90253cf1)
      9531e2de
    • xfs: fix dir3 freespace block corruption · e400d27d
      Dave Chinner authored
      When the directory freespace index grows to a second block (2017
      4k data blocks in the directory), the initialisation of the second
      new block header goes wrong. The write verifier fires a corruption
      error indicating that the block number in the header is zero. This
      was being tripped by xfs/110.
      
      The problem is that the initialisation of the new block is done just
      fine in xfs_dir3_free_get_buf(), but the caller then uses a dir v2
      structure to zero on-disk header fields that xfs_dir3_free_get_buf()
      has already zeroed. These lined up with the block number in the dir
      v3 header format.
      
      While looking at this, I noticed that the struct xfs_dir3_free_hdr()
      had 4 bytes of padding in it that wasn't defined as padding or being
      zeroed by the initialisation. Add a pad field declaration and fully
      zero the on disk and in-core headers in xfs_dir3_free_get_buf() so
      that this is never an issue in the future. Note that this doesn't
      change the on-disk layout, just makes the 32 bits of padding in the
      layout explicit.
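
      For reference, the on-disk header with the padding made explicit looks
      roughly like this (field names as in the xfs dir3 format definitions,
      shown for illustration only):

      struct xfs_dir3_free_hdr {
              struct xfs_dir3_blk_hdr hdr;
              __be32                  firstdb;
              __be32                  nvalid;
              __be32                  nused;
              __be32                  pad;    /* 64 bit alignment */
      };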
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Ben Myers <bpm@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      
      (cherry picked from commit 5ae6e6a4)
      e400d27d
    • xfs: disable swap extents ioctl on CRC enabled filesystems · 7c9950fd
      Dave Chinner authored
      Currently, swapping extents from one inode to another is a simple
      act of switching data and attribute forks from one inode to another.
      This, unfortunately, is no longer so simple with CRC enabled
      filesystems as there is owner information embedded into the BMBT
      blocks that are swapped between inodes. Hence swapping the forks
      between inodes results in the inodes having mapping blocks that
      point to the wrong owner and hence are considered corrupt.
      
      To fix this we need an extent tree block or record based swap
      algorithm so that the BMBT block owner information can be updated
      atomically in the swap transaction. This is a significant piece of
      new work, so for the moment simply don't allow swap extent
      operations to succeed on CRC enabled filesystems.
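
      The guard itself is tiny; a sketch of the rejection (error handling and
      unlock paths elided):

        /* refuse XFS_IOC_SWAPEXT on CRC (v5 superblock) filesystems */
        if (xfs_sb_version_hascrc(&mp->m_sb))
                return XFS_ERROR(EINVAL);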
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Ben Myers <bpm@sgi.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      
      (cherry picked from commit 02f75405)
      7c9950fd
    • xfs: add fsgeom flag for v5 superblock support. · e7927e87
      Dave Chinner authored
      Currently userspace has no way of determining that a filesystem is
      CRC enabled. Add a flag to the XFS_IOC_FSGEOMETRY ioctl output to
      indicate that the filesystem has v5 superblock support enabled.
      This will allow xfs_info to correctly report the state of the
      filesystem.
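
      Schematically, the geometry fill-out gains a line along these lines
      (flag name as introduced by this patch, shown only as a sketch):

        /* advertise v5 superblock (CRC) support to userspace */
        if (xfs_sb_version_hascrc(&mp->m_sb))
                geo->flags |= XFS_FSOP_GEOM_FLAGS_V5SB;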
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Eric Sandeen <sandeen@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      
      (cherry picked from commit 74137fff)
      e7927e87
    • xfs: fix incorrect remote symlink block count · 1de09d1a
      Dave Chinner authored
      When CRCs are enabled, the number of blocks needed to hold a remote
      symlink on a 1k block size filesystem may be 2 instead of 1. The
      transaction reservation for the allocated blocks was not taking this
      into account and only allocating one block. Hence when trying to
      read or invalidate such symlinks, we are mapping a hole where there
      should be a block and things go bad at that point.
      
      Fix the reservation to use the correct block count, clean up the
      block count calculation similar to the remote attribute calculation,
      and add a debug guard to detect when we don't write the entire
      symlink to disk.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Ben Myers <bpm@sgi.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      
      (cherry picked from commit 321a9583)
      1de09d1a
    • xfs: fix split buffer vector log recovery support · 7d2ffe80
      Dave Chinner authored
      A long time ago in a galaxy far away....
      
      .. there was a commit made to fix some ilinux specific "fragmented
      buffer" log recovery problem:
      
      http://oss.sgi.com/cgi-bin/gitweb.cgi?p=archive/xfs-import.git;a=commitdiff;h=b29c0bece51da72fb3ff3b61391a391ea54e1603
      
      That problem occurred when a contiguous dirty region of a buffer was
      split across two pages of an unmapped buffer. It's been a
      long time since that has been done in XFS, and the changes to log
      the entire inode buffers for CRC enabled filesystems have
      re-introduced that corner case.
      
      And, of course, it turns out that the above commit didn't actually
      fix anything - it just ensured that log recovery is guaranteed to
      fail when this situation occurs. And now for the gory details.
      
      xfstest xfs/085 is failing with this assert:
      
      XFS (vdb): bad number of regions (0) in inode log format
      XFS: Assertion failed: 0, file: fs/xfs/xfs_log_recover.c, line: 1583
      
      Largely undocumented factoid #1: Log recovery depends on all log
      buffer format items starting with this format:
      
      struct foo_log_format {
      	__uint16_t	type;
      	__uint16_t	size;
      	....
      
      Recovery uses the size field and assumptions about 32 bit
      alignment in decoding format items.  So don't pay much attention to
      the fact that log recovery thinks it is decoding an inode log format
      item - it just uses them to determine what the size of the item is.
      
      But why would it see a log format item with a zero size? Well,
      luckily enough xfs_logprint uses the same code and gives the same
      error, so with a bit of gdb magic, it turns out that it isn't a log
      format that is being decoded. What logprint tells us is this:
      
      Oper (130): tid: a0375e1a  len: 28  clientid: TRANS  flags: none
      BUF:  #regs: 2   start blkno: 144 (0x90)  len: 16  bmap size: 2  flags: 0x4000
      Oper (131): tid: a0375e1a  len: 4096  clientid: TRANS  flags: none
      BUF DATA
      ----------------------------------------------------------------------------
      Oper (132): tid: a0375e1a  len: 4096  clientid: TRANS  flags: none
      xfs_logprint: unknown log operation type (4e49)
      **********************************************************************
      * ERROR: data block=2                                                 *
      **********************************************************************
      
      That we've got a buffer format item (oper 130) that has two regions;
      the format item itself and one dirty region. The subsequent region
      after the buffer format item and its data is then what we are
      tripping over, and the first bytes of it are an inode magic number.
      Not a log opheader like there is supposed to be.
      
      That means there's a problem with the buffer format item. Its dirty
      data region is 4096 bytes, and it contains - you guessed it -
      initialised inodes. But inode buffers are 8k, not 4k, and we log
      them in their entirety. So something is wrong here. The buffer
      format item contains:
      
      (gdb) p /x *(struct xfs_buf_log_format *)in_f
      $22 = {blf_type = 0x123c, blf_size = 0x2, blf_flags = 0x4000,
             blf_len = 0x10, blf_blkno = 0x90, blf_map_size = 0x2,
             blf_data_map = {0xffffffff, 0xffffffff, .... }}
      
      Two regions, and a single dirty contiguous region of 64 bits.  64 *
      128 = 8k, so this should be followed by a single 8k region of data.
      And the blf_flags tell us that the type of buffer is a
      XFS_BLFT_DINO_BUF. It contains inodes. And because it doesn't have
      the XFS_BLF_INODE_BUF flag set, that means it's an inode allocation
      buffer. So, it should be followed by 8k of inode data.
      
      But we know that the next region has a header of:
      
      (gdb) p /x *ohead
      $25 = {oh_tid = 0x1a5e37a0, oh_len = 0x100000, oh_clientid = 0x69,
             oh_flags = 0x0, oh_res2 = 0x0}
      
      and so be32_to_cpu(oh_len) = 0x1000 = 4096 bytes. It's simply not
      long enough to hold all the logged data. There must be another
      region. There is - there's a following opheader for another 4k of
      data that contains the other half of the inode cluster data - the
      one we assert fail on because it's not a log format header.
      
      So why is the second part of the data not being accounted to the
      correct buffer log format structure? It took a little more work with
      gdb to work out that the buffer log format structure was both
      expecting it to be there but hadn't accounted for it. It was at that
      point I went to the kernel code, as clearly this wasn't a bug in
      xfs_logprint and the kernel was writing bad stuff to the log.
      
      First port of call was the buffer item formatting code, and the
      discontiguous memory/contiguous dirty region handling code
      immediately stood out. I've wondered for a long time why the code
      had this comment in it:
      
                              vecp->i_addr = xfs_buf_offset(bp, buffer_offset);
                              vecp->i_len = nbits * XFS_BLF_CHUNK;
                              vecp->i_type = XLOG_REG_TYPE_BCHUNK;
      /*
       * You would think we need to bump the nvecs here too, but we do not
       * this number is used by recovery, and it gets confused by the boundary
       * split here
       *                      nvecs++;
       */
                              vecp++;
      
      And it didn't account for the extra vector pointer. The case being
      handled here is that a contiguous dirty region lies across a
      boundary that cannot be memcpy()d across, and so has to be split
      into two separate operations for xlog_write() to perform.
      
      What this code assumes is that what is written to the log is two
      consecutive blocks of data that are accounted in the buf log format
      item as the same contiguous dirty region and so will get decoded as
      such by the log recovery code.
      
      The thing is, xlog_write() knows nothing about this, and so just
      does its normal thing of adding an opheader for each vector. That
      means the 8k region gets written to the log as two separate regions
      of 4k each, but because nvecs has not been incremented, the buf log
      format item accounts for only one of them.
      
      Hence when we come to log recovery, we process the first 4k region
      and then expect to come across a new item that starts with a log
      format structure of some kind that tells us what the next data is
      going to be. Instead, we hit raw buffer data and things go bad real
      quick.
      
      So, the commit from 2002 that commented out nvecs++ is just plain
      wrong. It breaks log recovery completely, and it would seem the only
      reason this hasn't been noticed since then is that we don't log large
      contiguous regions of multi-page unmapped buffers very often. Never
      would be a closer estimate, at least until the CRC code came along....
      
      So, let's fix that by restoring the nvecs accounting for the extra
      region when we hit this case.....
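
      With the comment above in mind, the fix is essentially to reinstate the
      increment in the code quoted earlier (sketch of the resulting hunk):

                              vecp->i_addr = xfs_buf_offset(bp, buffer_offset);
                              vecp->i_len = nbits * XFS_BLF_CHUNK;
                              vecp->i_type = XLOG_REG_TYPE_BCHUNK;
                              nvecs++;        /* the split really does add a region */
                              vecp++;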
      
      .... and there's the problem in log recovery it is apparently working
      around:
      
      XFS: Assertion failed: i == item->ri_total, file: fs/xfs/xfs_log_recover.c, line: 2135
      
      Yup, xlog_recover_do_reg_buffer() doesn't handle contiguous dirty
      regions being broken up into multiple regions by the log formatting
      code. That's an easy fix, though - if the number of contiguous dirty
      bits exceeds the length of the region being copied out of the log,
      only account for the number of dirty bits that region covers, and
      then loop again and copy more from the next region. It's a 2 line
      fix.
      
      Now xfstests xfs/085 passes, we have one less piece of mystery
      code, and one more important piece of knowledge about how to
      structure new log format items..
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Mark Tinguely <tinguely@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      
      (cherry picked from commit 709da6a6)
      7d2ffe80
    • xfs: kill suid/sgid through the truncate path. · 2962f5a5
      Dave Chinner authored
      XFS has failed to kill suid/sgid bits correctly when truncating
      files of non-zero size since commit c4ed4243 ("xfs: split
      xfs_setattr") introduced in the 3.1 kernel. Fix it.
      
      cc: stable kernel <stable@vger.kernel.org>
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      
      (cherry picked from commit 56c19e89)
      2962f5a5
    • xfs: avoid nesting transactions in xfs_qm_scall_setqlim() · 08fb3905
      Dave Chinner authored
      Lockdep reports:
      
      =============================================
      [ INFO: possible recursive locking detected ]
      3.9.0+ #3 Not tainted
      ---------------------------------------------
      setquota/28368 is trying to acquire lock:
       (sb_internal){++++.?}, at: [<c11e8846>] xfs_trans_alloc+0x26/0x50
      
      but task is already holding lock:
       (sb_internal){++++.?}, at: [<c11e8846>] xfs_trans_alloc+0x26/0x50
      
      from xfs_qm_scall_setqlim()->xfs_dqread() when a dquot needs to be
      allocated.
      
      xfs_qm_scall_setqlim() is starting a transaction and then not
      passing it into xfs_qm_dqget() and so it starts its own transaction
      when allocating the dquot.  Splat!
      
      Fix this by not allocating the dquot in xfs_qm_scall_setqlim()
      inside the setqlim transaction. This requires getting the dquot
      first (and allocating it if necessary) then dropping and relocking
      the dquot before joining it to the setqlim transaction.
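
      In outline, the reordering looks like this (argument lists and error
      handling abbreviated; a sketch of the sequence rather than the diff):

        /* get, and if necessary allocate, the dquot before the setqlim
         * transaction exists */
        error = xfs_qm_dqget(mp, NULL, id, type, XFS_QMOPT_DQALLOC, &dqp);
        if (error)
                return error;
        xfs_dqunlock(dqp);

        tp = xfs_trans_alloc(mp, XFS_TRANS_QM_SETQLIM);
        /* ... xfs_trans_reserve() ... */

        /* now it is safe to take the dquot back and join it */
        xfs_dqlock(dqp);
        xfs_trans_dqjoin(tp, dqp);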
      Reported-by: Michael L. Semon <mlsemon35@gmail.com>
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Ben Myers <bpm@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      (cherry picked from commit f648167f)
      08fb3905
    • Merge tag 'stable/for-linus-3.10-rc3-tag' of... · 3655b22d
      Linus Torvalds authored
      Merge tag 'stable/for-linus-3.10-rc3-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen
      
      Pull Xen fixes from Konrad Rzeszutek Wilk:
       - Use proper error paths
       - Clean up APIC IPI usage (incorrect arguments)
       - Delay XenBus frontend resume if backend (xenstored) is not running
       - Fix build error with various combinations of CONFIG_
      
      * tag 'stable/for-linus-3.10-rc3-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen:
        xenbus_client.c: correct exit path for xenbus_map_ring_valloc_hvm
        xen-pciback: more uses of cached MSI-X capability offset
        xen: Clean up apic ipi interface
        xenbus: save xenstore local status for later use
        xenbus: delay xenbus frontend resume if xenstored is not running
        xmem/tmem: fix 'undefined variable' build error.
      3655b22d
    • MAINTAINERS: Framebuffer Layer maintainers update · 5489e948
      Jean-Christophe PLAGNIOL-VILLARD authored
      Tomi and I will now take care of the Framebuffer Layer
      
      The git tree is now on kernel.org
      Signed-off-by: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com>
      Cc: Tomi Valkeinen <tomi.valkeinen@ti.com>
      Cc: Olof Johansson <olof@lixom.net>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Florian Tobias Schandinat <FlorianSchandinat@gmx.de>
      Cc: linux-fbdev@vger.kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5489e948
    • Merge tag 'sound-3.10' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound · 5c1dfc82
      Linus Torvalds authored
      Pull sound updates from Takashi Iwai:
       "Again very calm updates at this time.
      
        All small fixes for individual drivers, mostly ASoC codecs, in
        addition to a soc-compress fix for capture streams which is safe to
        apply as there are no in-tree users yet."
      
      * tag 'sound-3.10' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound:
        ASoC: cs42l52: fix default value for MASTERA_VOL.
        ASoC: wm8994: check for array index returned
        ASoC: wm8994: Fix reporting of accessory removal on WM8958
        ASoC: wm8994: use the correct pointer to get the control value
        ASoC: wm5110: Correct DSP4R Mixer control name
        ALSA: usb-6fire: Modify firmware version check
        ASoC: cs42l52: fix master playback mute mask.
        ASoC: cs42l52: fix bogus shifts in "Speaker Volume" and "PCM Mixer Volume" controls.
        ASoC: cs42l52: microphone bias is controlled by IFACE_CTL2 register.
        ASoC: davinci: fix sample rotation
        ASoC: wm5110: Add missing speaker initialisation
        ASoC: soc-compress: Send correct stream event for capture start
        ASoC: max98090: request IRQF_ONESHOT interrupt
      5c1dfc82
    • NFS: Fix security flavor negotiation with legacy binary mounts · eb54d437
      Chuck Lever authored
      Darrick J. Wong <darrick.wong@oracle.com> reports:
      > I have a kvm-based testing setup that netboots VMs over NFS, the
      > client end of which seems to have broken somehow in 3.10-rc1.  The
      > server's exports file looks like this:
      >
      > /storage/mtr/x64	192.168.122.0/24(ro,sync,no_root_squash,no_subtree_check)
      >
      > On the client end (inside the VM), the initrd runs the following
      > command to try to mount the rootfs over NFS:
      >
      > # mount -o nolock -o ro -o retrans=10 192.168.122.1:/storage/mtr/x64/ /root
      >
      > (Note: This is the busybox mount command.)
      >
      > The mount fails with -EINVAL.
      
      Commit 4580a92d "NFS: Use server-recommended security flavor by
      default (NFSv3)" introduced a behavior regression for NFS mounts
      done via a legacy binary mount(2) call.
      
      Ensure that a default security flavor is specified for legacy binary
      mount requests, since they do not invoke nfs_select_flavor() in the
      kernel.
      
      Busybox uses klibc's nfsmount command, which performs NFS mounts
      using the legacy binary mount data format.  /sbin/mount.nfs is not
      affected by this regression.
      Reported-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Tested-by: Darrick J. Wong <darrick.wong@oracle.com>
      Acked-by: Weston Andros Adamson <dros@netapp.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      eb54d437
    • aerdrv: Move cper_print_aer() call out of interrupt context · 37448adf
      Lance Ortiz authored
      The following warning was seen on 3.9 when a corrected PCIe error was being
      handled by the AER subsystem.
      
      WARNING: at .../drivers/pci/search.c:214 pci_get_dev_by_id+0x8a/0x90()
      
      This occurred because a call to pci_get_domain_bus_and_slot() was added to
      cper_print_pcie() to setup for the call to cper_print_aer().  The warning
      showed up because cper_print_pcie() is called in an interrupt context and
      pci_get* functions are not supposed to be called in that context.
      
      The solution is to move the cper_print_aer() call out of the interrupt
      context and into aer_recover_work_func() to avoid any warnings when calling
      pci_get* functions.
      Signed-off-by: Lance Ortiz <lance.ortiz@hp.com>
      Acked-by: Borislav Petkov <bp@suse.de>
      Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      37448adf
    • Merge branch 'mn10300' (mn10300 fixes from David Howells) · dcdbe33a
      Linus Torvalds authored
      Merge mn10300 fixes from David Howells.
      
      * emailed patches from David Howells <dhowells@redhat.com>:
        MN10300: Need pci_iomap() and __pci_ioport_map() defining
        MN10300: ASB2305's PCI code needs the definition of XIRQ1
        MN10300: Enable IRQs more in system call exit work path
        MN10300: Fix ret_from_kernel_thread
      dcdbe33a
    • MN10300: Need pci_iomap() and __pci_ioport_map() defining · 1aeeac7a
      David Howells authored
      Include the generic definitions of pci_iomap() and __pci_ioport_map()
      otherwise we can get errors like:
      
        lib/pci_iomap.c: In function 'pci_iomap':
        lib/pci_iomap.c:37: error: implicit declaration of function '__pci_ioport_map'
        lib/pci_iomap.c:37: warning: return makes pointer from integer without a cast
      
      and:
      
        drivers/pci/quirks.c: In function 'disable_igfx_irq':
        drivers/pci/quirks.c:2893: error: implicit declaration of function 'pci_iomap'
        drivers/pci/quirks.c:2893: warning: initialization makes pointer from integer without a cast
        drivers/pci/quirks.c: In function 'reset_ivb_igd':
        drivers/pci/quirks.c:3133: warning: assignment makes pointer from integer without a cast
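
      The generic declarations live in asm-generic/pci_iomap.h, so the fix
      boils down to pulling that header into the MN10300 PCI/io definitions,
      roughly (exact location per the patch; shown here only as a sketch):

        #include <asm-generic/pci_iomap.h>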
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Ken Cox <jkc@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1aeeac7a
    • MN10300: ASB2305's PCI code needs the definition of XIRQ1 · b8bc9b02
      David Howells authored
      The code for PCI in the ASB2305 needs the definition of XIRQ1 from proc/irq.h
      otherwise the following error appears:
      
        arch/mn10300/unit-asb2305/pci.c: In function 'unit_pci_init':
        arch/mn10300/unit-asb2305/pci.c:481: error: 'XIRQ1' undeclared (first use in this function)
        arch/mn10300/unit-asb2305/pci.c:481: error: (Each undeclared identifier is reported only once
        arch/mn10300/unit-asb2305/pci.c:481: error: for each function it appears in.)
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Ken Cox <jkc@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b8bc9b02
    • MN10300: Enable IRQs more in system call exit work path · d17fc238
      David Howells authored
      Enable IRQs when calling schedule() for TIF_NEED_RESCHED and
      do_notify_resume().  If interrupts are disabled during do_notify_resume(), a
      warning can be seen (see lower down).
      
      Whilst we're at it, resume_userspace can be made local to entry.S as it is not
      called outside of there and it can be merged with the part of work_resched that
      occurs after schedule() is called.
      
        WARNING: at kernel/softirq.c:160 local_bh_enable+0x42/0xa0()
        Call Trace:
          local_bh_enable+0x42/0xa0
          unix_release_sock+0x86/0x23c
          unix_release+0x20/0x28
          sock_release+0x17/0x88
          sock_close+0x20/0x28
          __fput+0xc9/0x1fc
          ____fput+0xb/0x10
          task_work_run+0x64/0x78
          do_notify_resume+0x53d/0x544
          work_notifysig+0xa/0xc
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Ken Cox <jkc@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d17fc238