- 30 Sep, 2014 13 commits
-
Arjun Sreedharan authored
commit 4dc7c76c upstream. scc_bus_softreset() should not unconditionally return zero; propagate the error code instead.
Signed-off-by: Arjun Sreedharan <arjun024@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Joerg Roedel authored
commit 9b29d3c6 upstream. When multiple devices are detached in __detach_device, they are also removed from the domain's dev_list. This makes it unsafe to use list_for_each_entry_safe, as the saved next pointer might no longer be in the list after __detach_device returns. So just repeatedly remove the first element of the list until it is empty.
Tested-by: Marti Raudsepp <marti@juffo.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
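A minimal userspace sketch of the safe draining pattern (hypothetical list and names, not the AMD IOMMU code): because the per-entry teardown may unlink more than one entry, the loop always re-reads the list head instead of caching a "next" pointer that teardown might already have freed.

#include <stdio.h>
#include <stdlib.h>

struct dev {
    struct dev *next;
    int id;
};

static struct dev *dev_list;

static void unlink_dev(struct dev *d)
{
    struct dev **pp = &dev_list;

    while (*pp && *pp != d)
        pp = &(*pp)->next;
    if (*pp)
        *pp = d->next;
    free(d);
}

/* Like __detach_device(), this teardown may remove more than one entry:
 * here it also drops the (hypothetical) entry that follows d. */
static void detach_device(struct dev *d)
{
    if (d->next)
        unlink_dev(d->next);
    unlink_dev(d);
}

int main(void)
{
    for (int i = 0; i < 5; i++) {
        struct dev *d = malloc(sizeof(*d));

        d->id = i;
        d->next = dev_list;
        dev_list = d;
    }

    /* Safe pattern: always operate on the current first element and
     * re-check the head on every iteration. */
    while (dev_list) {
        printf("detaching dev %d\n", dev_list->id);
        detach_device(dev_list);
    }
    return 0;
}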
-
Lothar Waßmann authored
commit fa97d2f7 upstream. The VPU on i.MX53 has two distinct clocks for register access and internal function.
Signed-off-by: Lothar Waßmann <LW@KARO-electronics.de>
Fixes: fbf970f6 ("ARM: dts: mx53qsb: Enable VPU support")
Signed-off-by: Shawn Guo <shawn.guo@freescale.com>
[ kamal: backport to 3.13-stable: no imx5-clock.h so fixed hardcoded value ]
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
NeilBrown authored
commit 9c4bdf69 upstream. During recovery of a double-degraded RAID6 it is possible for some blocks not to be recovered properly, leading to corruption. If a write happens to one block in a stripe that would be written to a missing device, and at the same time that stripe is recovering data to the other missing device, then that recovered data may not be written. This patch skips, in the double-degraded case, an optimisation that is only safe for single-degraded arrays. The bug was introduced in 2.6.32 and the fix is suitable for any kernel since then. In an older kernel with separate handle_stripe5() and handle_stripe6() functions the patch must change handle_stripe6().
Fixes: 6c0069c0
Cc: Yuri Tikhonov <yur@emcraft.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Reported-by: "Manibalan P" <pmanibalan@amiindia.co.in>
Tested-by: "Manibalan P" <pmanibalan@amiindia.co.in>
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1090423
Signed-off-by: NeilBrown <neilb@suse.de>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Pavel Shilovsky authored
commit b46799a8 upstream. When we request a rename we also need to update the attributes of both the source and target parent directories. Not doing so causes the generic/309 xfstest to fail on SMB2 mounts. Fix this by marking these directories for forced revalidation.
Signed-off-by: Pavel Shilovsky <pshilovsky@samba.org>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Pavel Shilovsky authored
commit 52755808 upstream. SMB2 servers indicate the end of a directory search with the STATUS_NO_MORE_FILE error code, which is not currently processed. This causes the generic/257 xfstest to fail. Fix this by treating this error code as the end of the search in SMB2_query_directory. Also, when negotiating the CIFS protocol we tell the server to close the search automatically at the end, so there is no need to close it ourselves. In the case of the SMB2 protocol, we need to close it explicitly - separate the close-directory checks for the different protocols.
Signed-off-by: Pavel Shilovsky <pshilovsky@samba.org>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Steve French authored
commit 18f39e7b upstream. As Raphael Geissert pointed out, tcon_error_exit can dereference tcon and there is one path in which tcon can be null.
Signed-off-by: Steve French <smfrench@gmail.com>
Reported-by: Raphael Geissert <geissert@debian.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Florian Fainelli authored
commit a9ecdc0f upstream. In case the Device Tree blob passed by the boot agent supplies both an 'interrupts-extended' and an 'interrupts' property in order to allow for older kernels to be usable, prefer the new-style 'interrupts-extended' property which conveys a lot more information. This allows us to have bootloaders willingly maintaining backwards compatibility with older kernels without entirely deprecating the 'interrupts' property. Update the bindings documentation to describe a situation where both the 'interrupts-extended' and the 'interrupts' property are present, and which one takes precedence over the other.
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Brian Norris <computersforpeace@gmail.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Takashi Iwai authored
commit f3ee07d8 upstream. ALC269 & co have many vendor-specific setups with COEF verbs. However, some verbs seem specific to some codec versions and they result in the codec stalling. Typically, such a case can be avoided by checking the return value from reading a COEF. If the return value is -1, it implies that the COEF is invalid, thus it shouldn't be written. This patch adds the invalid COEF checks in appropriate places accessing ALC269 and its variants. The patch actually fixes the resume problem on Acer AO725 laptop.
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=52181
Tested-by: Francesco Muzio <muziofg@gmail.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
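A tiny userspace sketch of that check (stand-in helpers, not the actual HDA codec accessors): a vendor COEF is only rewritten if reading it back yields something other than the -1 "invalid" sentinel.

#include <stdio.h>

/* Stand-ins for the codec register accessors (hypothetical). */
static int read_coef(int idx)
{
    return idx == 0x20 ? 0x30 : -1;    /* -1 means "no such COEF here" */
}

static void write_coef(int idx, int val)
{
    printf("coef 0x%02x <= 0x%02x\n", idx, val);
}

/* Only update a COEF when reading it back gives a valid value; writing a
 * non-existent COEF is what stalled some codec variants. */
static void update_coef(int idx, int mask, int val)
{
    int cur = read_coef(idx);

    if (cur == -1)
        return;
    write_coef(idx, (cur & ~mask) | val);
}

int main(void)
{
    update_coef(0x20, 0xff, 0x40);    /* present on this variant: written */
    update_coef(0x99, 0xff, 0x40);    /* absent: silently skipped */
    return 0;
}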
-
Pali Rohár authored
commit b07a657e upstream. This fixes a bug in commit 4f2f2039: https://bugzilla.kernel.org/show_bug.cgi?id=76321
Signed-off-by: Pali Rohár <pali.rohar@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Oleg Chernovskiy authored
commit 6bce8d97 upstream. Properly set the thermal min and max temp on CI. Otherwise, we end up setting the thermal ranges to 0 on resume and end up in the lowest power state.
Signed-off-by: Oleg Chernovskiy <algonkvel@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Alex Deucher authored
commit 6e909f74 upstream. Add a module parameter to enable bapm on APUs. It's disabled by default on certain APUs due to stability issues. This option makes it easier to test and to enable it on systems that are stable. bug: https://bugzilla.kernel.org/show_bug.cgi?id=81021
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
[ kamal: backport to 3.13-stable ]
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Daniel Mack authored
commit 9301503a upstream. This mode is unsupported, as the DMA controller can't do zero-padding of samples.
Signed-off-by: Daniel Mack <zonque@gmail.com>
Reported-by: Johannes Stezenbach <js@sig21.net>
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
- 23 Sep, 2014 1 commit
-
Dmitry Torokhov authored
commit 5715fc76 upstream. ForcePads are found on HP EliteBook 1040 laptops. They lack any kind of physical buttons; instead they generate a primary button click when the user presses somewhat hard on the surface of the touchpad. Unfortunately they also report a primary button click whenever there are 2 or more contacts on the pad, messing up all multi-finger gestures (2-finger scrolling, multi-finger tapping, etc). To cope with this behavior we introduce a delay (currently 50 msecs) in reporting the primary press in case more contacts appear.
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
- 22 Sep, 2014 9 commits
-
Ilya Dryomov authored
commit c27a3e4d upstream. We hard-code the cephx auth ticket buffer size to 256 bytes. This isn't enough for any moderate setup and, in case the tickets themselves are not encrypted, leads to buffer overflows (ceph_x_decrypt() errors out, but ceph_decode_copy() doesn't - it's just a memcpy() wrapper). Since the buffer is allocated dynamically anyway, allocate it a bit later, at the point where we know how much is going to be needed.
Fixes: http://tracker.ceph.com/issues/8979
Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
Reviewed-by: Sage Weil <sage@redhat.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
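A hedged userspace sketch of the idea (hypothetical helper, not the libceph API): decode the length first, validate it against the bytes actually available, and only then allocate and copy, so an oversized ticket can never overrun a fixed 256-byte buffer.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Decode a length-prefixed blob from a received buffer, sizing the copy
 * from the decoded length instead of a fixed-size destination. */
static void *decode_blob(const uint8_t *p, size_t avail, size_t *blob_len)
{
    uint32_t len;
    void *buf;

    if (avail < sizeof(len))
        return NULL;
    memcpy(&len, p, sizeof(len));        /* length travels first */
    if (len > avail - sizeof(len))
        return NULL;                     /* truncated or bogus input */

    buf = malloc(len);                   /* allocate once len is known */
    if (!buf)
        return NULL;
    memcpy(buf, p + sizeof(len), len);
    *blob_len = len;
    return buf;
}

int main(void)
{
    uint8_t wire[4 + 3];
    uint32_t len = 3;
    size_t n = 0;
    void *blob;

    memcpy(wire, &len, sizeof(len));     /* host-endian length prefix */
    memcpy(wire + 4, "abc", 3);
    blob = decode_blob(wire, sizeof(wire), &n);
    free(blob);
    return (blob && n == 3) ? 0 : 1;
}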
-
Ilya Dryomov authored
commit 597cda35 upstream. Add a helper for processing individual cephx auth tickets. Needed for the next commit, which deals with allocating ticket buffers. (Most of the diff here is whitespace - view with git diff -b).
Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
Reviewed-by: Sage Weil <sage@redhat.com>
[ kamal: 3.13 stable prereq for c27a3e4d "libceph: do not hard code max auth ticket len" ]
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Jiri Kosina authored
commit 844817e4 upstream. The report passed to us from the transport driver could potentially be arbitrarily large, therefore we'd better sanity-check it so that the raw_data we hold in the picolcd_pending structure is always kept within proper bounds.
Reported-by: Steven Vittitoe <scvitti@google.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
James Forshaw authored
commit 6817ae22 upstream. This patch fixes a potential security issue in the whiteheat USB driver which might allow a local attacker to cause kernel memory corruption. This is due to an unchecked memcpy into a fixed-size buffer (of 64 bytes). On EHCI and XHCI busses it's possible to craft responses greater than 64 bytes, leading to a buffer overflow.
Signed-off-by: James Forshaw <forshaw@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
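A minimal sketch of the defensive pattern (the struct and names are illustrative, not the whiteheat driver's): clamp the device-supplied length to the destination buffer before the memcpy instead of trusting it.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define RESP_BUF_SIZE 64

struct command_port {                    /* illustrative only */
    uint8_t result_buffer[RESP_BUF_SIZE];
    size_t  result_len;
};

/* Store a device response, clamping it to the destination instead of
 * trusting the length reported by a potentially hostile device. */
static void store_response(struct command_port *port,
                           const uint8_t *data, size_t len)
{
    if (len > sizeof(port->result_buffer))
        len = sizeof(port->result_buffer);    /* or reject the packet */
    memcpy(port->result_buffer, data, len);
    port->result_len = len;
}

int main(void)
{
    struct command_port port;
    uint8_t evil[200] = { 0 };               /* oversized "response" */

    store_response(&port, evil, sizeof(evil));
    printf("stored %zu bytes\n", port.result_len);    /* 64, not 200 */
    return 0;
}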
-
Jiri Kosina authored
commit 4ab25786 upstream. There are a few very theoretical off-by-one bugs in report descriptor size checking when performing a pre-parsing fixup. Fix those.
Reported-by: Ben Hawkes <hawkes@google.com>
Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
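An illustrative sketch of the bug class (hypothetical offsets, not the actual HID fixups): a fixup that touches bytes i and i+1 must require a size of at least i+2; checking only for ">= i + 1" is off by one and reads past the end of the descriptor.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Patch a report descriptor in place, with a correctly bounded size check. */
static void fixup_desc(uint8_t *desc, size_t size)
{
    size_t i = 30;                       /* hypothetical patch offset */

    /* Bytes i and i+1 are accessed, so require size >= i + 2;
     * "size >= i + 1" would read one byte past the end. */
    if (size >= i + 2 && desc[i] == 0x81 && desc[i + 1] == 0x02)
        desc[i + 1] = 0x06;              /* example rewrite */
}

int main(void)
{
    uint8_t short_desc[31] = { 0 };      /* 31 bytes: bounds check rejects it */

    fixup_desc(short_desc, sizeof(short_desc));
    printf("fixup skipped, no out-of-bounds access\n");
    return 0;
}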
-
Jiri Kosina authored
commit c54def7b upstream. The report passed to us from the transport driver could potentially be arbitrarily large, therefore we'd better sanity-check it so that magicmouse_emit_touch() gets only valid values of raw_id.
Reported-by: Steven Vittitoe <scvitti@google.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Jan Kara authored
commit c03aa9f6 upstream. We did not implement any bound on the number of indirect ICBs we follow when loading an inode. Thus a corrupted medium could cause the kernel to go into an infinite loop, possibly causing a stack overflow. Fix the possible stack overflow by removing the recursion from __udf_read_inode() and limit the number of indirect ICBs we follow to avoid infinite loops.
Signed-off-by: Jan Kara <jack@suse.cz>
Reference: CVE-2014-6410
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
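A toy sketch of the shape of the fix (hypothetical structures and limit, not the UDF code): walk the chain of indirect blocks with a loop and an explicit hop limit, so a corrupted or cyclic chain can neither recurse off the stack nor spin forever.

#include <stdio.h>

#define MAX_INDIRECT 64                  /* hypothetical cap on chained ICBs */

/* Stub "disk": block i points at chain[i]; -1 terminates the chain. */
static const long chain[] = { 1, 2, -1 };

static int read_next(long block, long *next)
{
    if (block < 0 || block >= (long)(sizeof(chain) / sizeof(chain[0])))
        return -1;                       /* I/O error / bad block number */
    *next = chain[block];
    return 0;
}

/* Follow indirect blocks iteratively with an upper bound instead of
 * recursing: bounded stack use and a guaranteed exit. */
static int load_inode(long block)
{
    int hops;

    for (hops = 0; hops < MAX_INDIRECT; hops++) {
        long next;

        if (read_next(block, &next))
            return -1;
        if (next < 0)
            return 0;                    /* end of chain: inode fully read */
        block = next;
    }
    return -1;                           /* too many indirections: corrupt medium */
}

int main(void)
{
    printf("load_inode: %d\n", load_inode(0));
    return 0;
}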
-
Jan Kara authored
commit bb7720a0 upstream. There's no good reason to separate these since udf_fill_inode() is called only from __udf_read_inode() and both do part of the same thing.
Signed-off-by: Jan Kara <jack@suse.cz>
[ kamal: 3.13 stable prereq for c03aa9f6 "udf: Avoid infinite loop when processing indirect ICBs" ]
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
David Howells authored
commit 95389b08 upstream. This fixes CVE-2014-3631. It is possible for an associative array to end up with a shortcut node at the root of the tree if there are more than fan-out leaves in the tree, but they all crowd into the same slot in the lowest level (ie. they all have the same first nibble of their index keys). When assoc_array_gc() returns back up the tree after scanning some leaves, it can fall off of the root and crash because it assumes that the back pointer from a shortcut (after label ascend_old_tree) must point to a normal node - which isn't true of a shortcut node at the root. Should we find we're ascending rootwards over a shortcut, we should check to see if the backpointer is zero - and if it is, we have completed the scan. This particular bug cannot occur if the root node is not a shortcut - ie. if you have fewer than 17 keys in a keyring or if you have at least two keys that sit into separate slots (eg. a keyring and a non keyring). This can be reproduced by: ring=`keyctl newring bar @s` for ((i=1; i<=18; i++)); do last_key=`keyctl newring foo$i $ring`; done keyctl timeout $last_key 2 Doing this: echo 3 >/proc/sys/kernel/keys/gc_delay first will speed things up. If we do fall off of the top of the tree, we get the following oops: BUG: unable to handle kernel NULL pointer dereference at 0000000000000018 IP: [<ffffffff8136cea7>] assoc_array_gc+0x2f7/0x540 PGD dae15067 PUD cfc24067 PMD 0 Oops: 0000 [#1] SMP Modules linked in: xt_nat xt_mark nf_conntrack_netbios_ns nf_conntrack_broadcast ip6t_rpfilter ip6t_REJECT xt_conntrack ebtable_nat ebtable_broute bridge stp llc ebtable_filter ebtables ip6table_ni CPU: 0 PID: 26011 Comm: kworker/0:1 Not tainted 3.14.9-200.fc20.x86_64 #1 Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011 Workqueue: events key_garbage_collector task: ffff8800918bd580 ti: ffff8800aac14000 task.ti: ffff8800aac14000 RIP: 0010:[<ffffffff8136cea7>] [<ffffffff8136cea7>] assoc_array_gc+0x2f7/0x540 RSP: 0018:ffff8800aac15d40 EFLAGS: 00010206 RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffff8800aaecacc0 RDX: ffff8800daecf440 RSI: 0000000000000001 RDI: ffff8800aadc2bc0 RBP: ffff8800aac15da8 R08: 0000000000000001 R09: 0000000000000003 R10: ffffffff8136ccc7 R11: 0000000000000000 R12: 0000000000000000 R13: 0000000000000000 R14: 0000000000000070 R15: 0000000000000001 FS: 0000000000000000(0000) GS:ffff88011fc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b CR2: 0000000000000018 CR3: 00000000db10d000 CR4: 00000000000006f0 Stack: ffff8800aac15d50 0000000000000011 ffff8800aac15db8 ffffffff812e2a70 ffff880091a00600 0000000000000000 ffff8800aadc2bc3 00000000cd42c987 ffff88003702df20 ffff88003702dfa0 0000000053b65c09 ffff8800aac15fd8 Call Trace: [<ffffffff812e2a70>] ? keyring_detect_cycle_iterator+0x30/0x30 [<ffffffff812e3e75>] keyring_gc+0x75/0x80 [<ffffffff812e1424>] key_garbage_collector+0x154/0x3c0 [<ffffffff810a67b6>] process_one_work+0x176/0x430 [<ffffffff810a744b>] worker_thread+0x11b/0x3a0 [<ffffffff810a7330>] ? rescuer_thread+0x3b0/0x3b0 [<ffffffff810ae1a8>] kthread+0xd8/0xf0 [<ffffffff810ae0d0>] ? insert_kthread_work+0x40/0x40 [<ffffffff816ffb7c>] ret_from_fork+0x7c/0xb0 [<ffffffff810ae0d0>] ? 
insert_kthread_work+0x40/0x40 Code: 08 4c 8b 22 0f 84 bf 00 00 00 41 83 c7 01 49 83 e4 fc 41 83 ff 0f 4c 89 65 c0 0f 8f 5a fe ff ff 48 8b 45 c0 4d 63 cf 49 83 c1 02 <4e> 8b 34 c8 4d 85 f6 0f 84 be 00 00 00 41 f6 c6 01 0f 84 92 RIP [<ffffffff8136cea7>] assoc_array_gc+0x2f7/0x540 RSP <ffff8800aac15d40> CR2: 0000000000000018 ---[ end trace 1129028a088c0cbd ]---
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Don Zickus <dzickus@redhat.com>
Signed-off-by: James Morris <james.l.morris@oracle.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
- 18 Sep, 2014 17 commits
-
Kamal Mostafa authored
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Boris Ostrovsky authored
commit 8762e509 upstream. init_espfix_ap() is currently off by one level when informing the hypervisor that allocated pages will be used for ministacks' page tables. The most immediate effect of this on a PV guest is that if 'stack_page = __get_free_page()' returns a non-zeroed-out page, the hypervisor will refuse to use it for a page table (which it shouldn't be anyway). This will result in warnings by both Xen and Linux. More importantly, a subsequent write to that page (again, by a PV guest) is likely to result in a fatal page fault.
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Link: http://lkml.kernel.org/r/1404926298-5565-1-git-send-email-boris.ostrovsky@oracle.com
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Theodore Ts'o authored
commit c99d1e6e upstream. If we suffer a block allocation failure (for example due to a memory allocation failure), it's possible that we will call ext4_discard_allocated_blocks() before we've actually allocated any blocks. In that case, fe_len and fe_start in ac->ac_f_ex will still be zero, and this will result in mb_free_blocks(inode, e4b, 0, 0) triggering the BUG_ON in mb_free_blocks(): BUG_ON(last >= (sb->s_blocksize << 3)); Fix this by bailing out of ext4_discard_allocated_blocks() if fe_len is zero. Also fix a missing ext4_mb_unload_buddy() call in ext4_discard_allocated_blocks().
Google-Bug-Id: 16844242
Fixes: 86f0afd4
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Filipe Manana authored
commit 27b9a812 upstream. Under rare circumstances we can end up leaving 2 versions of a checksum for the same file extent range. The reason for this is that after calling btrfs_next_leaf we process slot 0 of the leaf it returns, instead of processing the slot set in path->slots[0]. Most of the time (by far) path->slots[0] is 0, but after btrfs_next_leaf() releases the path and before it searches for the next leaf, another task might cause a split of the next leaf, which migrates some of its keys to the leaf we were processing before calling btrfs_next_leaf(). In this case btrfs_next_leaf() returns again the same leaf but with path->slots[0] having a slot number corresponding to the first new key it got, that is, a slot number that didn't exist before calling btrfs_next_leaf(), as the leaf now has more keys than it had before. So we must really process the returned leaf starting at path->slots[0] always, as it isn't always 0, and the key at slot 0 can have an offset much lower than our search offset/bytenr. For example, consider the following scenario, where we have: sums->bytenr: 40157184, sums->len: 16384, sums end: 40173568 four 4kb file data blocks with offsets 40157184, 40161280, 40165376, 40169472 Leaf N: slot = 0 slot = btrfs_header_nritems() - 1 |-------------------------------------------------------------------| | [(CSUM CSUM 39239680), size 8] ... [(CSUM CSUM 40116224), size 4] | |-------------------------------------------------------------------| Leaf N + 1: slot = 0 slot = btrfs_header_nritems() - 1 |--------------------------------------------------------------------| | [(CSUM CSUM 40161280), size 32] ... [((CSUM CSUM 40615936), size 8 | |--------------------------------------------------------------------| Because we are at the last slot of leaf N, we call btrfs_next_leaf() to find the next highest key, which releases the current path and then searches for that next key. However after releasing the path and before finding that next key, the item at slot 0 of leaf N + 1 gets moved to leaf N, due to a call to ctree.c:push_leaf_left() (via ctree.c:split_leaf()), and therefore btrfs_next_leaf() will returns us a path again with leaf N but with the slot pointing to its new last key (CSUM CSUM 40161280). This new version of leaf N is then: slot = 0 slot = btrfs_header_nritems() - 2 slot = btrfs_header_nritems() - 1 |----------------------------------------------------------------------------------------------------| | [(CSUM CSUM 39239680), size 8] ... [(CSUM CSUM 40116224), size 4] [(CSUM CSUM 40161280), size 32] | |----------------------------------------------------------------------------------------------------| And incorrecly using slot 0, makes us set next_offset to 39239680 and we jump into the "insert:" label, which will set tmp to: tmp = min((sums->len - total_bytes) >> blocksize_bits, (next_offset - file_key.offset) >> blocksize_bits) = min((16384 - 0) >> 12, (39239680 - 40157184) >> 12) = min(4, (u64)-917504 = 18446744073708634112 >> 12) = 4 and ins_size = csum_size * tmp = 4 * 4 = 16 bytes. In other words, we insert a new csum item in the tree with key (CSUM_OBJECTID CSUM_KEY 40157184 = sums->bytenr) that contains the checksums for all the data (4 blocks of 4096 bytes each = sums->len). Which is wrong, because the item with key (CSUM CSUM 40161280) (the one that was moved from leaf N + 1 to the end of leaf N) contains the old checksums of the last 12288 bytes of our data and won't get those old checksums removed. 
So this leaves us 2 different checksums for 3 4kb blocks of data in the tree, and breaks the logical rule: Key_N+1.offset >= Key_N.offset + length_of_data_its_checksums_cover. An obvious bad effect of this is that a subsequent csum tree lookup to get the checksum of any of the blocks with a logical offset of 40161280, 40165376 or 40169472 (the last 3 4kb blocks of file data) will get the old checksums.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Takashi Iwai authored
commit 4eb1f66d upstream. We've got bug reports that btrfs crashes when quota is enabled on 32bit kernel, typically with the Oops like below: BUG: unable to handle kernel NULL pointer dereference at 00000004 IP: [<f9234590>] find_parent_nodes+0x360/0x1380 [btrfs] *pde = 00000000 Oops: 0000 [#1] SMP CPU: 0 PID: 151 Comm: kworker/u8:2 Tainted: G S W 3.15.2-1.gd43d97e-default #1 Workqueue: btrfs-qgroup-rescan normal_work_helper [btrfs] task: f1478130 ti: f147c000 task.ti: f147c000 EIP: 0060:[<f9234590>] EFLAGS: 00010213 CPU: 0 EIP is at find_parent_nodes+0x360/0x1380 [btrfs] EAX: f147dda8 EBX: f147ddb0 ECX: 00000011 EDX: 00000000 ESI: 00000000 EDI: f147dda4 EBP: f147ddf8 ESP: f147dd38 DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068 CR0: 8005003b CR2: 00000004 CR3: 00bf3000 CR4: 00000690 Stack: 00000000 00000000 f147dda4 00000050 00000001 00000000 00000001 00000050 00000001 00000000 d3059000 00000001 00000022 000000a8 00000000 00000000 00000000 000000a1 00000000 00000000 00000001 00000000 00000000 11800000 Call Trace: [<f923564d>] __btrfs_find_all_roots+0x9d/0xf0 [btrfs] [<f9237bb1>] btrfs_qgroup_rescan_worker+0x401/0x760 [btrfs] [<f9206148>] normal_work_helper+0xc8/0x270 [btrfs] [<c025e38b>] process_one_work+0x11b/0x390 [<c025eea1>] worker_thread+0x101/0x340 [<c026432b>] kthread+0x9b/0xb0 [<c0712a71>] ret_from_kernel_thread+0x21/0x30 [<c0264290>] kthread_create_on_node+0x110/0x110 This indicates a NULL corruption in prefs_delayed list. The further investigation and bisection pointed that the call of ulist_add_merge() results in the corruption. ulist_add_merge() takes u64 as aux and writes a 64bit value into old_aux. The callers of this function in backref.c, however, pass a pointer of a pointer to old_aux. That is, the function overwrites 64bit value on 32bit pointer. This caused a NULL in the adjacent variable, in this case, prefs_delayed. Here is a quick attempt to band-aid over this: a new function, ulist_add_merge_ptr() is introduced to pass/store properly a pointer value instead of u64. There are still ugly void ** cast remaining in the callers because void ** cannot be taken implicitly. But, it's safer than explicit cast to u64, anyway. Bugzilla: https://bugzilla.novell.com/show_bug.cgi?id=887046Signed-off-by:
Takashi Iwai <tiwai@suse.de>
Signed-off-by: Chris Mason <clm@fb.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
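An illustrative userspace sketch of the underlying mistake (hypothetical names, not the btrfs/ulist code): storing a u64 through a pointer that really refers to a pointer-sized slot. On a 32-bit build the store is 8 bytes wide but the slot is only 4, so the neighbouring field gets clobbered - the same mechanism that corrupted prefs_delayed.

#include <stdint.h>
#include <stdio.h>

struct ctx {
    void     *aux;          /* pointer-sized: 4 bytes on 32-bit, 8 on 64-bit */
    uint32_t  neighbour;    /* sits right after aux in memory */
};

/* Buggy pattern: the caller passes &ctx->aux cast to u64 *, so this writes
 * 8 bytes into a slot that may only be 4 bytes wide. */
static void store_u64(uint64_t *slot, uint64_t val)
{
    *slot = val;
}

/* Safe pattern: a dedicated helper that stores a pointer into a pointer slot. */
static void store_ptr(void **slot, void *val)
{
    *slot = val;
}

int main(void)
{
    struct ctx c = { .aux = NULL, .neighbour = 0x12345678 };

    (void)store_u64;          /* referenced only to keep the sketch warning-free */
    printf("sizeof(void *) = %zu, sizeof(uint64_t) = %zu\n",
           sizeof(void *), sizeof(uint64_t));

    store_ptr(&c.aux, &c);    /* correct on any word size */
    /* store_u64((uint64_t *)&c.aux, 42); -- on a 32-bit build this would
     * also overwrite c.neighbour; left commented out on purpose. */
    printf("neighbour = 0x%x\n", c.neighbour);
    return 0;
}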
-
Aneesh Kumar K.V authored
commit 85c1fafd upstream. On ppc64 we support 4K hash ptes with a 64K page size. That requires us to track the hash pte slot information on a per-4K basis. We do that by storing the slot details in the second half of the pte page. The pte bit _PAGE_COMBO is used to indicate whether the second half needs to be looked at while building the real_pte. We need to use a read memory barrier while doing that so that the load of hidx is not reordered w.r.t. the _PAGE_COMBO check. On the store side we already do a lwsync in __hash_page_4K.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Aneesh Kumar K.V authored
commit 7e467245 upstream. We could get wrong results if the compiler recomputed old_pmd. Avoid that by using ACCESS_ONCE().
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Aneesh Kumar K.V authored
commit 969b7b20 upstream. As per the ISA, for a 4k base page size we compare bits 14..65 of the VA specified with the entry_VA in the TLB. That implies we need to make sure we do a tlbie with all the possible 4k VAs we used to access the 16MB hugepage. With a 64k base page size we compare bits 14..57 of the VA. Hence we cannot ignore the lower 24 bits of the VA when doing tlbie. We also cannot TLB-invalidate a 16MB entry with just one tlbie instruction because we don't track which VA was used to instantiate the TLB entry.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Aneesh Kumar K.V authored
commit fc047955 upstream. If we changed the base page size of the segment, either via sub_page_protect or via remap_4k_pfn, we do a demote_segment, which doesn't flush the hash table entries. We do a lazy hash page table flush for all mapped pages in the demoted segment. This happens when we handle a hash page fault for these pages. We use the _PAGE_COMBO bit along with _PAGE_HASHPTE to indicate whether a pte is backed by a 4K hash pte. If we find _PAGE_COMBO not set on the pte, that implies that we could possibly have older 64K hash pte entries in the hash page table and we need to invalidate those entries. Use _PAGE_COMBO to determine the page size with which we should invalidate the hash table entries on unmap.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Aneesh Kumar K.V authored
commit 629149fa upstream. If we changed the base page size of the segment, either via sub_page_protect or via remap_4k_pfn, we do a demote_segment, which doesn't flush the hash table entries. We do a lazy hash page table flush for all mapped pages in the demoted segment. This happens when we handle a hash page fault for these pages. We use the _PAGE_COMBO bit along with _PAGE_HASHPTE to indicate whether a pte is backed by a 4K hash pte. If we find _PAGE_COMBO not set on the pte, that implies that we could possibly have older 64K hash pte entries in the hash page table and we need to invalidate those entries. Handle this correctly for 16M pages.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[ kamal: backport to 3.13-stable: context ]
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Aneesh Kumar K.V authored
commit fa1f8ae8 upstream. The segment identifier and segment size will remain the same in the loop, so we can compute them outside it. We also change the hugepage_invalidate interface so that we can use it in the later patch.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Aneesh Kumar K.V authored
commit b0aa44a3 upstream. With hugepages, we store the hpte valid information in the pte page whose address is stored in the second half of the PMD. Use a write barrier to make sure that clearing the pmd busy bit and updating the hpte valid info are ordered properly.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Gavin Shan authored
commit f1b3929c upstream. While running the command "drmgr -c phb -r -s 'PHB 528'", the following backtrace jumped out because the target device node isn't marked with OF_DETACHED by of_detach_node(), which is caused by an error returned from the memory-hotplug-related reconfig notifier when CONFIG_MEMORY_HOTREMOVE is disabled. The patch fixes it. ERROR: Bad of_node_put() on /pci@800000020000210/ethernet@0 CPU: 14 PID: 2252 Comm: drmgr Tainted: G W 3.16.0+ #427 Call Trace: [c000000012a776a0] [c000000000013d9c] .show_stack+0x88/0x148 (unreliable) [c000000012a77750] [c00000000083cd34] .dump_stack+0x7c/0x9c [c000000012a777d0] [c0000000006807c4] .of_node_release+0x58/0xe0 [c000000012a77860] [c00000000038a7d0] .kobject_release+0x174/0x1b8 [c000000012a77900] [c00000000038a884] .kobject_put+0x70/0x78 [c000000012a77980] [c000000000681680] .of_node_put+0x28/0x34 [c000000012a77a00] [c000000000681ea8] .__of_get_next_child+0x64/0x70 [c000000012a77a90] [c000000000682138] .of_find_node_by_path+0x1b8/0x20c [c000000012a77b40] [c000000000051840] .ofdt_write+0x308/0x688 [c000000012a77c20] [c000000000238430] .proc_reg_write+0xb8/0xd4 [c000000012a77cd0] [c0000000001cbeac] .vfs_write+0xec/0x1f8 [c000000012a77d70] [c0000000001cc3b0] .SyS_write+0x58/0xa0 [c000000012a77e30] [c00000000000a064] syscall_exit+0x0/0x98
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Ronald Wahl authored
commit 671796dd upstream. The driver assumes that endpoint 4 is always an interrupt endpoint. Unfortunately the type differs between the high-speed and full-speed configurations: while in the former case it is indeed an interrupt endpoint, this is not true for the latter case - here it is a bulk endpoint. When sending URBs with the wrong type the kernel will generate a warning message including a backtrace. In this specific case there will be a huge number of warnings, which can bring the system to a freeze. To fix this we now send URBs to endpoint 4 using the type found in the endpoint descriptor. A side note: the carl9170 firmware currently specifies endpoint 4 as an interrupt endpoint even in the full-speed configuration, but this has no relevance because before this firmware is loaded the endpoint type is as described above, and after the firmware is running the stick is not re-enumerated, so the old descriptor is used.
Signed-off-by: Ronald Wahl <ronald.wahl@raritan.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
David Vrabel authored
commit 8d5999df upstream. If the timer irqs are resumed during device resume it is possible in certain circumstances for the resume to hang early on, before device interrupts are resumed. For an Ubuntu 14.04 PVHVM guest this would occur in ~0.5% of resume attempts. It is not entirely clear what is occurring at the point of the hang, but I think a task necessary for the resume calls schedule_timeout(), waiting for a timer interrupt (which never arrives). This failure may require specific tasks to be running on the other VCPUs to trigger (processes are not frozen during a suspend/resume if PREEMPT is disabled). Add IRQF_EARLY_RESUME to the timer interrupts so they are resumed in syscore_resume().
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Takashi Iwai authored
commit e24aa0a4 upstream. The CA0132 driver tries to reload the firmware at resume. Usually this works since the firmware loader core caches the firmware contents by itself. However, if the driver failed to load the firmware files (e.g. they are missing), reloading the firmware at resume goes through the actual file-loading code path and triggers a kernel WARNING like: WARNING: CPU: 10 PID:11371 at drivers/base/firmware_class.c:1105 _request_firmware+0x9ab/0x9d0() To avoid this situation, this patch makes the CA0132 driver skip the firmware loading at resume when it failed at probe time.
Reported-and-tested-by: Janek Kozicki <cosurgi@gmail.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Clemens Ladisch authored
commit 53da5ebf upstream. The BOSS ME-25 turns out not to have any useful descriptors in its MIDI interface, so it needs a quirk entry after all.
Reported-and-tested-by: Kees van Veen <kees.vanveen@gmail.com>
Fixes: 8e5ced83 ("ALSA: usb-audio: remove superfluous Roland quirks")
Signed-off-by: Clemens Ladisch <clemens@ladisch.de>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-