1. 30 Sep, 2014 13 commits
  2. 23 Sep, 2014 1 commit
  3. 22 Sep, 2014 9 commits
    • libceph: do not hard code max auth ticket len · ae72a158
      Ilya Dryomov authored
      commit c27a3e4d upstream.
      
      We hard code cephx auth ticket buffer size to 256 bytes.  This isn't
      enough for any moderate setups and, in case tickets themselves are not
      encrypted, leads to buffer overflows (ceph_x_decrypt() errors out, but
      ceph_decode_copy() doesn't - it's just a memcpy() wrapper).  Since the
      buffer is allocated dynamically anyway, allocate it a bit later, at
      the point where we know how much is going to be needed.
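
      A hedged sketch of the idea (the decode helpers are from libceph's
      include/linux/ceph/decode.h; the surrounding control flow is simplified):

      	/* sketch: size the ticket buffer from the decoded length instead
      	 * of a fixed 256 bytes; dlen comes off the wire */
      	dlen = ceph_decode_32(&p);
      	ticket_buf = kmalloc(dlen, GFP_NOFS);
      	if (!ticket_buf)
      		return -ENOMEM;
      	ceph_decode_need(&p, end, dlen, bad);	/* bounds check before copying */
      	ceph_decode_copy(&p, ticket_buf, dlen);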
      
      Fixes: http://tracker.ceph.com/issues/8979
      Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
      Reviewed-by: Sage Weil <sage@redhat.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • libceph: add process_one_ticket() helper · 9cc937e4
      Ilya Dryomov authored
      commit 597cda35 upstream.
      
      Add a helper for processing individual cephx auth tickets.  Needed for
      the next commit, which deals with allocating ticket buffers.  (Most of
      the diff here is whitespace - view with git diff -b).
      Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
      Reviewed-by: Sage Weil <sage@redhat.com>
      [ kamal: 3.13 stable prereq for
        c27a3e4d "libceph: do not hard code max auth ticket len" ]
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • HID: picolcd: sanity check report size in raw_event() callback · 4294cbe1
      Jiri Kosina authored
      commit 844817e4 upstream.
      
      The report passed to us from the transport driver could potentially be
      arbitrarily large, therefore we better sanity-check it so that the
      raw_data we hold in the picolcd_pending structure is always kept within
      proper bounds.
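
      A hedged sketch of the check (field names follow the commit message;
      the exact handling of the report-id byte is omitted):

      	/* sketch: only record the response if it fits the fixed-size
      	 * raw_data buffer in struct picolcd_pending */
      	if (size <= sizeof(pending->raw_data)) {
      		memcpy(pending->raw_data, raw_data, size);
      		pending->raw_size = size;
      	}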
      Reported-by: Steven Vittitoe <scvitti@google.com>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • USB: whiteheat: Added bounds checking for bulk command response · 820644e9
      James Forshaw authored
      commit 6817ae22 upstream.
      
      This patch fixes a potential security issue in the whiteheat USB driver
      which might allow a local attacker to cause kernel memory corruption. This
      is due to an unchecked memcpy into a fixed size buffer (of 64 bytes). On
      EHCI and XHCI busses it's possible to craft responses greater than 64
      bytes, leading to a buffer overflow.
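
      A hedged sketch of the bounds check (result_buffer is the driver's
      64-byte buffer; the surrounding code is simplified):

      	/* sketch: ignore responses that would overflow the fixed-size
      	 * result_buffer instead of memcpy'ing them blindly */
      	if (urb->actual_length > sizeof(command_info->result_buffer))
      		return;		/* drop the oversized response */
      	memcpy(command_info->result_buffer, data, urb->actual_length);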
      Signed-off-by: James Forshaw <forshaw@google.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • HID: fix a couple of off-by-ones · b3e736dc
      Jiri Kosina authored
      commit 4ab25786 upstream.
      
      There are a few very theoretical off-by-one bugs in report descriptor size
      checking when performing a pre-parsing fixup. Fix those.
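
      The pattern, sketched with a hypothetical descriptor index (the real
      patch touches several HID drivers' report_fixup callbacks):

      	/* sketch: reading rdesc[30] requires at least 31 bytes, so the
      	 * guard must be *rsize > 30, not *rsize >= 30 */
      	if (*rsize > 30 && rdesc[30] == 0x06)
      		rdesc[30] = 0x0c;	/* illustrative fixup */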
      Reported-by: Ben Hawkes <hawkes@google.com>
      Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • HID: magicmouse: sanity check report size in raw_event() callback · e5cb67ba
      Jiri Kosina authored
      commit c54def7b upstream.
      
      The report passed to us from the transport driver could potentially be
      arbitrarily large, therefore we better sanity-check it so that
      magicmouse_emit_touch() gets only valid values of raw_id.
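
      A hedged sketch (the threshold here is illustrative, not the exact
      upstream constant):

      	/* sketch: drop reports too short to carry the fields that
      	 * magicmouse_emit_touch() dereferences */
      	if (size < 4)
      		return 0;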
      Reported-by: Steven Vittitoe <scvitti@google.com>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • udf: Avoid infinite loop when processing indirect ICBs · cf56fc67
      Jan Kara authored
      commit c03aa9f6 upstream.
      
      We did not implement any bound on the number of indirect ICBs we follow
      when loading an inode. Thus a corrupted medium could cause the kernel to
      go into an infinite loop, possibly causing a stack overflow.
      
      Fix the possible stack overflow by removing recursion from
      __udf_read_inode() and limit number of indirect ICBs we follow to avoid
      infinite loops.
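
      A hedged sketch of the shape of the fix (the limit name follows the
      upstream patch; the surrounding code is heavily simplified and the
      indirect-entry condition is illustrative):

      	#define UDF_MAX_ICB_NESTING 1024

      	unsigned int indirections = 0;
      reread:
      	/* ... load the ICB; if it is an indirect entry, follow it
      	 * iteratively rather than recursing ... */
      	if (icb_is_indirect) {
      		if (++indirections > UDF_MAX_ICB_NESTING) {
      			udf_err(inode->i_sb,
      				"too many ICBs in ICB hierarchy (max %d supported)\n",
      				UDF_MAX_ICB_NESTING);
      			return;
      		}
      		goto reread;
      	}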
      Signed-off-by: Jan Kara <jack@suse.cz>
      Reference: CVE-2014-6410
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • udf: Fold udf_fill_inode() into __udf_read_inode() · fd549856
      Jan Kara authored
      commit bb7720a0 upstream.
      
      There's no good reason to separate these since udf_fill_inode() is
      called only from __udf_read_inode() and both do part of the same thing.
      Signed-off-by: Jan Kara <jack@suse.cz>
      [ kamal: 3.13 stable prereq for
        c03aa9f6 "udf: Avoid infinite loop when processing indirect ICBs" ]
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • KEYS: Fix termination condition in assoc array garbage collection · dd248e95
      David Howells authored
      commit 95389b08 upstream.
      
      This fixes CVE-2014-3631.
      
      It is possible for an associative array to end up with a shortcut node at the
      root of the tree if there are more than fan-out leaves in the tree, but they
      all crowd into the same slot in the lowest level (ie. they all have the same
      first nibble of their index keys).
      
      When assoc_array_gc() returns back up the tree after scanning some leaves, it
      can fall off of the root and crash because it assumes that the back pointer
      from a shortcut (after label ascend_old_tree) must point to a normal node -
      which isn't true of a shortcut node at the root.
      
      Should we find we're ascending rootwards over a shortcut, we should check to
      see if the backpointer is zero - and if it is, we have completed the scan.
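
      In code, the termination check looks roughly like this (a sketch; the
      completion label name is assumed from the garbage collector's exit
      path):

      	/* sketch: when ascending from a shortcut, a NULL back pointer
      	 * means the shortcut was the root and the scan is complete */
      	cursor = shortcut->back_pointer;
      	slot = shortcut->parent_slot;
      	if (!cursor)
      		goto gc_complete;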
      
      This particular bug cannot occur if the root node is not a shortcut - ie. if
      you have fewer than 17 keys in a keyring or if you have at least two keys that
      sit into separate slots (eg. a keyring and a non keyring).
      
      This can be reproduced by:
      
      	ring=`keyctl newring bar @s`
      	for ((i=1; i<=18; i++)); do last_key=`keyctl newring foo$i $ring`; done
      	keyctl timeout $last_key 2
      
      Doing this:
      
      	echo 3 >/proc/sys/kernel/keys/gc_delay
      
      first will speed things up.
      
      If we do fall off of the top of the tree, we get the following oops:
      
      BUG: unable to handle kernel NULL pointer dereference at 0000000000000018
      IP: [<ffffffff8136cea7>] assoc_array_gc+0x2f7/0x540
      PGD dae15067 PUD cfc24067 PMD 0
      Oops: 0000 [#1] SMP
      Modules linked in: xt_nat xt_mark nf_conntrack_netbios_ns nf_conntrack_broadcast ip6t_rpfilter ip6t_REJECT xt_conntrack ebtable_nat ebtable_broute bridge stp llc ebtable_filter ebtables ip6table_ni
      CPU: 0 PID: 26011 Comm: kworker/0:1 Not tainted 3.14.9-200.fc20.x86_64 #1
      Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
      Workqueue: events key_garbage_collector
      task: ffff8800918bd580 ti: ffff8800aac14000 task.ti: ffff8800aac14000
      RIP: 0010:[<ffffffff8136cea7>] [<ffffffff8136cea7>] assoc_array_gc+0x2f7/0x540
      RSP: 0018:ffff8800aac15d40  EFLAGS: 00010206
      RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffff8800aaecacc0
      RDX: ffff8800daecf440 RSI: 0000000000000001 RDI: ffff8800aadc2bc0
      RBP: ffff8800aac15da8 R08: 0000000000000001 R09: 0000000000000003
      R10: ffffffff8136ccc7 R11: 0000000000000000 R12: 0000000000000000
      R13: 0000000000000000 R14: 0000000000000070 R15: 0000000000000001
      FS:  0000000000000000(0000) GS:ffff88011fc00000(0000) knlGS:0000000000000000
      CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
      CR2: 0000000000000018 CR3: 00000000db10d000 CR4: 00000000000006f0
      Stack:
       ffff8800aac15d50 0000000000000011 ffff8800aac15db8 ffffffff812e2a70
       ffff880091a00600 0000000000000000 ffff8800aadc2bc3 00000000cd42c987
       ffff88003702df20 ffff88003702dfa0 0000000053b65c09 ffff8800aac15fd8
      Call Trace:
       [<ffffffff812e2a70>] ? keyring_detect_cycle_iterator+0x30/0x30
       [<ffffffff812e3e75>] keyring_gc+0x75/0x80
       [<ffffffff812e1424>] key_garbage_collector+0x154/0x3c0
       [<ffffffff810a67b6>] process_one_work+0x176/0x430
       [<ffffffff810a744b>] worker_thread+0x11b/0x3a0
       [<ffffffff810a7330>] ? rescuer_thread+0x3b0/0x3b0
       [<ffffffff810ae1a8>] kthread+0xd8/0xf0
       [<ffffffff810ae0d0>] ? insert_kthread_work+0x40/0x40
       [<ffffffff816ffb7c>] ret_from_fork+0x7c/0xb0
       [<ffffffff810ae0d0>] ? insert_kthread_work+0x40/0x40
      Code: 08 4c 8b 22 0f 84 bf 00 00 00 41 83 c7 01 49 83 e4 fc 41 83 ff 0f 4c 89 65 c0 0f 8f 5a fe ff ff 48 8b 45 c0 4d 63 cf 49 83 c1 02 <4e> 8b 34 c8 4d 85 f6 0f 84 be 00 00 00 41 f6 c6 01 0f 84 92
      RIP  [<ffffffff8136cea7>] assoc_array_gc+0x2f7/0x540
       RSP <ffff8800aac15d40>
      CR2: 0000000000000018
      ---[ end trace 1129028a088c0cbd ]---
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Don Zickus <dzickus@redhat.com>
      Signed-off-by: James Morris <james.l.morris@oracle.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
  4. 18 Sep, 2014 17 commits
    • Linux 3.13.11.7 · 58f8a09a
      Kamal Mostafa authored
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • x86/espfix/xen: Fix allocation of pages for paravirt page tables · 3c4c29b3
      Boris Ostrovsky authored
      commit 8762e509 upstream.
      
      init_espfix_ap() is currently off by one level when informing the
      hypervisor that allocated pages will be used for the ministacks' page
      tables.
      
      The most immediate effect of this on a PV guest is that if
      'stack_page = __get_free_page()' returns a non-zeroed-out page, the
      hypervisor will refuse to use it for a page table (which it shouldn't be
      anyway). This will result in warnings from both Xen and Linux.
      
      More importantly, a subsequent write to that page (again, by a PV guest)
      is likely to result in a fatal page fault.
      Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: http://lkml.kernel.org/r/1404926298-5565-1-git-send-email-boris.ostrovsky@oracle.com
      Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • ext4: fix BUG_ON in mb_free_blocks() · ef7e0764
      Theodore Ts'o authored
      commit c99d1e6e upstream.
      
      If we suffer a block allocation failure (for example due to a memory
      allocation failure), it's possible that we will call
      ext4_discard_allocated_blocks() before we've actually allocated any
      blocks.  In that case, fe_len and fe_start in ac->ac_f_ex will still
      be zero, and this will result in mb_free_blocks(inode, e4b, 0, 0)
      triggering the BUG_ON in mb_free_blocks():
      
      	BUG_ON(last >= (sb->s_blocksize << 3));
      
      Fix this by bailing out of ext4_discard_allocated_blocks() if fe_len
      is zero.
      
      Also fix a missing ext4_mb_unload_buddy() call in
      ext4_discard_allocated_blocks().
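
      A hedged sketch of the guard (the field names come from the commit
      message):

      	/* sketch: nothing was allocated, so there is nothing to discard;
      	 * returning early avoids mb_free_blocks(inode, e4b, 0, 0) */
      	if (ac->ac_f_ex.fe_len == 0)
      		return;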
      
      Google-Bug-Id: 16844242
      
      Fixes: 86f0afd4
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • Btrfs: fix csum tree corruption, duplicate and outdated checksums · 157de422
      Filipe Manana authored
      commit 27b9a812 upstream.
      
      Under rare circumstances we can end up leaving 2 versions of a checksum
      for the same file extent range.
      
      The reason for this is that after calling btrfs_next_leaf we process
      slot 0 of the leaf it returns, instead of processing the slot set in
      path->slots[0]. Most of the time (by far) path->slots[0] is 0, but after
      btrfs_next_leaf() releases the path and before it searches for the next
      leaf, another task might cause a split of the next leaf, which migrates
      some of its keys to the leaf we were processing before calling
      btrfs_next_leaf(). In this case btrfs_next_leaf() returns again the
      same leaf but with path->slots[0] having a slot number corresponding
      to the first new key it got, that is, a slot number that didn't exist
      before calling btrfs_next_leaf(), as the leaf now has more keys than
      it had before. So we must really process the returned leaf starting at
      path->slots[0] always, as it isn't always 0, and the key at slot 0 can
      have an offset much lower than our search offset/bytenr.
      
      For example, consider the following scenario, where we have:
      
      sums->bytenr: 40157184, sums->len: 16384, sums end: 40173568
      four 4kb file data blocks with offsets 40157184, 40161280, 40165376, 40169472
      
        Leaf N:
      
          slot = 0                           slot = btrfs_header_nritems() - 1
        |-------------------------------------------------------------------|
        | [(CSUM CSUM 39239680), size 8] ... [(CSUM CSUM 40116224), size 4] |
        |-------------------------------------------------------------------|
      
        Leaf N + 1:
      
            slot = 0                          slot = btrfs_header_nritems() - 1
        |--------------------------------------------------------------------|
        | [(CSUM CSUM 40161280), size 32] ... [(CSUM CSUM 40615936), size 8] |
        |--------------------------------------------------------------------|
      
      Because we are at the last slot of leaf N, we call btrfs_next_leaf() to
      find the next highest key, which releases the current path and then searches
      for that next key. However after releasing the path and before finding that
      next key, the item at slot 0 of leaf N + 1 gets moved to leaf N, due to a call
      to ctree.c:push_leaf_left() (via ctree.c:split_leaf()), and therefore
      btrfs_next_leaf() will return us a path again with leaf N but with the slot
      pointing to its new last key (CSUM CSUM 40161280). This new version of leaf N
      is then:
      
          slot = 0                        slot = btrfs_header_nritems() - 2  slot = btrfs_header_nritems() - 1
        |----------------------------------------------------------------------------------------------------|
        | [(CSUM CSUM 39239680), size 8] ... [(CSUM CSUM 40116224), size 4]  [(CSUM CSUM 40161280), size 32] |
        |----------------------------------------------------------------------------------------------------|
      
      And incorrectly using slot 0 makes us set next_offset to 39239680 and we jump
      into the "insert:" label, which will set tmp to:
      
          tmp = min((sums->len - total_bytes) >> blocksize_bits,
              (next_offset - file_key.offset) >> blocksize_bits) =
          min((16384 - 0) >> 12, (39239680 - 40157184) >> 12) =
          min(4, (u64)-917504 = 18446744073708634112 >> 12) = 4
      
      and
      
         ins_size = csum_size * tmp = 4 * 4 = 16 bytes.
      
      In other words, we insert a new csum item in the tree with key
      (CSUM_OBJECTID CSUM_KEY 40157184 = sums->bytenr) that contains the checksums
      for all the data (4 blocks of 4096 bytes each = sums->len). Which is wrong,
      because the item with key (CSUM CSUM 40161280) (the one that was moved from
      leaf N + 1 to the end of leaf N) contains the old checksums of the last 12288
      bytes of our data and won't get those old checksums removed.
      
      So this leaves us 2 different checksums for 3 4kb blocks of data in the tree,
      and breaks the logical rule:
      
         Key_N+1.offset >= Key_N.offset + length_of_data_its_checksums_cover
      
      An obvious bad effect of this is that a subsequent csum tree lookup to get
      the checksum of any of the blocks with logical offset of 40161280, 40165376
      or 40169472 (the last 3 4kb blocks of file data), will get the old checksums.
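
      The fix is essentially to read the key at the slot btrfs_next_leaf()
      handed back rather than hardcoding slot 0 (a sketch; variable names
      follow btrfs conventions):

      	/* sketch: after btrfs_next_leaf(), path->slots[0] is not
      	 * necessarily 0, so use it when reading the next key */
      	leaf = path->nodes[0];
      	btrfs_item_key_to_cpu(leaf, &found_key, path->slots[0]);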
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • Btrfs: Fix memory corruption by ulist_add_merge() on 32bit arch · a4312389
      Takashi Iwai authored
      commit 4eb1f66d upstream.
      
      We've got bug reports that btrfs crashes when quota is enabled on
      32bit kernel, typically with an Oops like below:
       BUG: unable to handle kernel NULL pointer dereference at 00000004
       IP: [<f9234590>] find_parent_nodes+0x360/0x1380 [btrfs]
       *pde = 00000000
       Oops: 0000 [#1] SMP
       CPU: 0 PID: 151 Comm: kworker/u8:2 Tainted: G S      W 3.15.2-1.gd43d97e-default #1
       Workqueue: btrfs-qgroup-rescan normal_work_helper [btrfs]
       task: f1478130 ti: f147c000 task.ti: f147c000
       EIP: 0060:[<f9234590>] EFLAGS: 00010213 CPU: 0
       EIP is at find_parent_nodes+0x360/0x1380 [btrfs]
       EAX: f147dda8 EBX: f147ddb0 ECX: 00000011 EDX: 00000000
       ESI: 00000000 EDI: f147dda4 EBP: f147ddf8 ESP: f147dd38
        DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
       CR0: 8005003b CR2: 00000004 CR3: 00bf3000 CR4: 00000690
       Stack:
        00000000 00000000 f147dda4 00000050 00000001 00000000 00000001 00000050
        00000001 00000000 d3059000 00000001 00000022 000000a8 00000000 00000000
        00000000 000000a1 00000000 00000000 00000001 00000000 00000000 11800000
       Call Trace:
        [<f923564d>] __btrfs_find_all_roots+0x9d/0xf0 [btrfs]
        [<f9237bb1>] btrfs_qgroup_rescan_worker+0x401/0x760 [btrfs]
        [<f9206148>] normal_work_helper+0xc8/0x270 [btrfs]
        [<c025e38b>] process_one_work+0x11b/0x390
        [<c025eea1>] worker_thread+0x101/0x340
        [<c026432b>] kthread+0x9b/0xb0
        [<c0712a71>] ret_from_kernel_thread+0x21/0x30
        [<c0264290>] kthread_create_on_node+0x110/0x110
      
      This indicates a NULL corruption in the prefs_delayed list.  Further
      investigation and bisection showed that the call of ulist_add_merge()
      results in the corruption.
      
      ulist_add_merge() takes u64 as aux and writes a 64bit value into
      old_aux.  The callers of this function in backref.c, however, pass a
      pointer of a pointer to old_aux.  That is, the function overwrites
      64bit value on 32bit pointer.  This caused a NULL in the adjacent
      variable, in this case, prefs_delayed.
      
      Here is a quick attempt to band-aid over this: a new function,
      ulist_add_merge_ptr(), is introduced to pass/store a pointer value
      properly instead of a u64.  There are still ugly void ** casts remaining
      in the callers because void ** cannot be taken implicitly.  But it's
      safer than an explicit cast to u64, anyway.
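
      The helper looks roughly like this (a sketch reconstructed from the
      commit description):

      	static inline int ulist_add_merge_ptr(struct ulist *ulist, u64 val,
      					      void *aux, void **old_aux,
      					      gfp_t gfp_mask)
      	{
      	#if BITS_PER_LONG == 32
      		/* on 32bit, marshal through a u64 so ulist_add_merge()
      		 * never writes 64 bits over a 32bit pointer */
      		u64 old64 = (uintptr_t)*old_aux;
      		int ret = ulist_add_merge(ulist, val, (uintptr_t)aux,
      					  &old64, gfp_mask);
      		*old_aux = (void *)(uintptr_t)old64;
      		return ret;
      	#else
      		return ulist_add_merge(ulist, val, (u64)aux,
      				       (u64 *)old_aux, gfp_mask);
      	#endif
      	}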
      
      Bugzilla: https://bugzilla.novell.com/show_bug.cgi?id=887046
      Signed-off-by: Takashi Iwai <tiwai@suse.de>
      Signed-off-by: Chris Mason <clm@fb.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • powerpc/mm: Use read barrier when creating real_pte · ff72db9c
      Aneesh Kumar K.V authored
      commit 85c1fafd upstream.
      
      On ppc64 we support 4K hash ptes with a 64K page size. That requires us
      to track the hash pte slot information on a per-4K basis. We do that by
      storing the slot details in the second half of the pte page. The pte bit
      _PAGE_COMBO is used to indicate whether the second half needs to be
      looked at while building the real_pte. We need to use a read memory
      barrier there so that the load of hidx is not reordered w.r.t. the
      _PAGE_COMBO check. On the store side we already do an lwsync in
      __hash_page_4K.
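
      A hedged sketch of __real_pte() with the barrier (simplified from the
      64K-page variant):

      	rpte.pte = pte;
      	rpte.hidx = 0;
      	if (pte_val(pte) & _PAGE_COMBO) {
      		/* order the hidx load against the _PAGE_COMBO check;
      		 * the store side is ordered by lwsync in __hash_page_4K */
      		smp_rmb();
      		rpte.hidx = pte_val(*(ptep + PTRS_PER_PTE));
      	}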
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • powerpc/thp: Use ACCESS_ONCE when loading pmdp · 216e985c
      Aneesh Kumar K.V authored
      commit 7e467245 upstream.
      
      We would get wrong results if the compiler recomputed old_pmd by
      reloading *pmdp. Avoid that by using ACCESS_ONCE.
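
      A one-line sketch:

      	/* force a single load of the pmd so the compiler cannot
      	 * re-read *pmdp and recompute old_pmd behind our back */
      	pmd_t old_pmd = ACCESS_ONCE(*pmdp);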
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • powerpc/thp: Invalidate with vpn in loop · ee1fd754
      Aneesh Kumar K.V authored
      commit 969b7b20 upstream.
      
      As per the ISA, for a 4K base page size we compare bits 14..65 of the VA
      specified with the entry_VA in the TLB. That implies we need to make sure
      we do a tlbie with all the possible 4K VAs used to access the 16MB
      hugepage. With a 64K base page size we compare bits 14..57 of the VA.
      Hence we cannot ignore the lower 24 bits of the VA when doing a tlbie. We
      also cannot invalidate a 16MB entry with just one tlbie instruction
      because we don't track which VA was used to instantiate the TLB entry.
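
      A hedged sketch of the loop (the step and count names are
      illustrative; __tlbie is the low-level invalidate in the hash MMU
      code):

      	/* sketch: issue a tlbie for every 4K VPN that could have
      	 * instantiated a TLB entry for the 16MB page */
      	for (i = 0; i < num_4k_subpages; i++) {
      		__tlbie(vpn, psize, actual_psize, ssize);
      		vpn += vpn_step_4k;	/* advance by one 4K VPN */
      	}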
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • powerpc/thp: Handle combo pages in invalidate · f837b291
      Aneesh Kumar K.V authored
      commit fc047955 upstream.
      
      If we changed the base page size of the segment, either via
      sub_page_protect or via remap_4k_pfn, we do a demote_segment which
      doesn't flush the hash table entries. We do a lazy hash page table flush
      for all mapped pages in the demoted segment. This happens when we handle
      a hash page fault for these pages.
      
      We use _PAGE_COMBO bit along with _PAGE_HASHPTE to indicate whether a
      pte is backed by 4K hash pte. If we find _PAGE_COMBO not set on the pte,
      that implies that we could possibly have older 64K hash pte entries in
      the hash page table and we need to invalidate those entries.
      
      Use _PAGE_COMBO to determine the page size with which we should
      invalidate the hash table entries on unmap.
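
      A hedged sketch of the page-size selection:

      	/* sketch: a combo pmd is backed by 4K hash ptes; otherwise the
      	 * old entries were inserted with a 64K page size */
      	if (pmd_val(pmd) & _PAGE_COMBO)
      		psize = MMU_PAGE_4K;
      	else
      		psize = MMU_PAGE_64K;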
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • powerpc/thp: Invalidate old 64K based hash page mapping before insert of 4k pte · 0d03f310
      Aneesh Kumar K.V authored
      commit 629149fa upstream.
      
      If we changed the base page size of the segment, either via
      sub_page_protect or via remap_4k_pfn, we do a demote_segment which
      doesn't flush the hash table entries. We do a lazy hash page table flush
      for all mapped pages in the demoted segment. This happens when we handle
      a hash page fault for these pages.
      
      We use _PAGE_COMBO bit along with _PAGE_HASHPTE to indicate whether a
      pte is backed by 4K hash pte. If we find _PAGE_COMBO not set on the pte,
      that implies that we could possibly have older 64K hash pte entries in
      the hash page table and we need to invalidate those entries.
      
      Handle this correctly for 16M pages.
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      [ kamal: backport to 3.13-stable: context ]
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • powerpc/thp: Don't recompute vsid and ssize in loop on invalidate · 31ea77a4
      Aneesh Kumar K.V authored
      commit fa1f8ae8 upstream.
      
      The segment identifier and segment size remain the same throughout the
      loop, so we can compute them outside it. We also change the
      hugepage_invalidate interface so that we can use it in a later patch.
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • powerpc/thp: Add write barrier after updating the valid bit · 5eb4b79d
      Aneesh Kumar K.V authored
      commit b0aa44a3 upstream.
      
      With hugepages, we store the hpte valid information in the pte page
      whose address is stored in the second half of the PMD. Use a write
      barrier to make sure that clearing the pmd busy bit and updating the
      hpte valid info are ordered properly.
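
      A hedged sketch of the ordering (helper and variable names follow the
      hash-MMU code but are simplified here):

      	mark_hpte_slot_valid(hpte_slot_array, index, slot);
      	/* publish the hpte valid info before the busy-bit clear below
      	 * becomes visible to other cpus */
      	smp_wmb();
      	*pmdp = __pmd(new_pmd & ~_PAGE_BUSY);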
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • powerpc/pseries: Failure on removing device node · f756490a
      Gavin Shan authored
      commit f1b3929c upstream.
      
      While running the command "drmgr -c phb -r -s 'PHB 528'", the following
      backtrace jumped out because the target device node wasn't marked with
      OF_DETACHED by of_detach_node(). That is caused by an error returned
      from the memory-hotplug reconfig notifier when CONFIG_MEMORY_HOTREMOVE
      is disabled. The patch fixes it.
      
      ERROR: Bad of_node_put() on /pci@800000020000210/ethernet@0
      CPU: 14 PID: 2252 Comm: drmgr Tainted: G        W     3.16.0+ #427
      Call Trace:
      [c000000012a776a0] [c000000000013d9c] .show_stack+0x88/0x148 (unreliable)
      [c000000012a77750] [c00000000083cd34] .dump_stack+0x7c/0x9c
      [c000000012a777d0] [c0000000006807c4] .of_node_release+0x58/0xe0
      [c000000012a77860] [c00000000038a7d0] .kobject_release+0x174/0x1b8
      [c000000012a77900] [c00000000038a884] .kobject_put+0x70/0x78
      [c000000012a77980] [c000000000681680] .of_node_put+0x28/0x34
      [c000000012a77a00] [c000000000681ea8] .__of_get_next_child+0x64/0x70
      [c000000012a77a90] [c000000000682138] .of_find_node_by_path+0x1b8/0x20c
      [c000000012a77b40] [c000000000051840] .ofdt_write+0x308/0x688
      [c000000012a77c20] [c000000000238430] .proc_reg_write+0xb8/0xd4
      [c000000012a77cd0] [c0000000001cbeac] .vfs_write+0xec/0x1f8
      [c000000012a77d70] [c0000000001cc3b0] .SyS_write+0x58/0xa0
      [c000000012a77e30] [c00000000000a064] syscall_exit+0x0/0x98
      Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • carl9170: fix sending URBs with wrong type when using full-speed · 0aef7cc1
      Ronald Wahl authored
      commit 671796dd upstream.
      
      The driver assumes that endpoint 4 is always an interrupt endpoint.
      Unfortunately the type differs between the high-speed and full-speed
      configurations: in the former case it is indeed an interrupt endpoint,
      but in the latter it is a bulk endpoint. When sending URBs with the
      wrong type the kernel will generate a warning message including a
      backtrace. In this specific case there will be a huge number of
      warnings, which can bring the system to a freeze.
      
      To fix this we are now sending URBs to endpoint 4 using the type
      found in the endpoint descriptor.
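
      A hedged sketch of the dispatch (the flag name is assumed; the fill
      helpers are the standard USB ones and cmd_len is illustrative):

      	if (ar->usb_ep_cmd_is_bulk)		/* assumed flag */
      		usb_fill_bulk_urb(urb, ar->udev,
      				  usb_sndbulkpipe(ar->udev, AR9170_USB_EP_CMD),
      				  cmd, cmd_len, carl9170_usb_cmd_complete, ar);
      	else
      		usb_fill_int_urb(urb, ar->udev,
      				 usb_sndintpipe(ar->udev, AR9170_USB_EP_CMD),
      				 cmd, cmd_len, carl9170_usb_cmd_complete, ar, 1);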
      
      A side note: The carl9170 firmware currently specifies endpoint 4 as
      interrupt endpoint even in the full-speed configuration but this has
      no relevance because before this firmware is loaded the endpoint type
      is as described above and after the firmware is running the stick is not
      reenumerated and so the old descriptor is used.
      Signed-off-by: Ronald Wahl <ronald.wahl@raritan.com>
      Signed-off-by: John W. Linville <linville@tuxdriver.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • x86/xen: resume timer irqs early · bf8e7c35
      David Vrabel authored
      commit 8d5999df upstream.
      
      If the timer irqs are resumed during device resume it is possible in
      certain circumstances for the resume to hang early on, before device
      interrupts are resumed.  For an Ubuntu 14.04 PVHVM guest this would
      occur in ~0.5% of resume attempts.
      
      It is not entirely clear what is occurring at the point of the hang, but I
      think a task necessary for the resume calls schedule_timeout(),
      waiting for a timer interrupt (which never arrives).  This failure may
      require specific tasks to be running on the other VCPUs to trigger
      (processes are not frozen during a suspend/resume if PREEMPT is
      disabled).
      
      Add IRQF_EARLY_RESUME to the timer interrupts so they are resumed in
      syscore_resume().
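
      A hedged sketch of the change (handler and virq names follow the Xen
      timer code; the devname and dev_id arguments here are illustrative):

      	irq = bind_virq_to_irqhandler(VIRQ_TIMER, cpu, xen_timer_interrupt,
      				      IRQF_PERCPU | IRQF_TIMER |
      				      IRQF_FORCE_RESUME | IRQF_EARLY_RESUME,
      				      name, NULL);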
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • ALSA: hda/ca0132 - Don't try loading firmware at resume when already failed · b74103cb
      Takashi Iwai authored
      commit e24aa0a4 upstream.
      
      The CA0132 driver tries to reload the firmware at resume.  Usually this
      works since the firmware loader core caches the firmware contents by
      itself.  However, if the driver failed to load the firmware files
      (e.g. because they are missing), reloading the firmware at resume goes
      through the actual file loading code path and triggers a kernel WARNING
      like:
      
       WARNING: CPU: 10 PID:11371 at drivers/base/firmware_class.c:1105 _request_firmware+0x9ab/0x9d0()
      
      To avoid this situation, this patch makes the CA0132 driver skip the
      firmware loading at resume when it failed at probe time.
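
      A hedged sketch of the guard (the flag name is assumed, not
      necessarily the upstream one):

      	/* probe path: remember that request_firmware() failed */
      	spec->dsp_reload_failed = true;

      	/* resume path: don't go through the file loading code again */
      	if (spec->dsp_reload_failed)
      		return;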
      Reported-and-tested-by: Janek Kozicki <cosurgi@gmail.com>
      Signed-off-by: Takashi Iwai <tiwai@suse.de>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • ALSA: usb-audio: fix BOSS ME-25 MIDI regression · 08007455
      Clemens Ladisch authored
      commit 53da5ebf upstream.
      
      The BOSS ME-25 turns out not to have any useful descriptors in its MIDI
      interface, so it needs a quirk entry after all.
      Reported-and-tested-by: Kees van Veen <kees.vanveen@gmail.com>
      Fixes: 8e5ced83 ("ALSA: usb-audio: remove superfluous Roland quirks")
      Signed-off-by: Clemens Ladisch <clemens@ladisch.de>
      Signed-off-by: Takashi Iwai <tiwai@suse.de>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>