- 23 Apr, 2015 9 commits
-
-
Dan Carpenter authored
[ Upstream commit bc3b5b47 ] I don't have this hardware but it looks like we weren't adding bridge devices as intended. Maybe the bridge is always the last device? Fixes: 05b12500 ("PCI: cpcihp: Iterate over all devices in slot, not functions 0-7") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Yijing Wang <wangyijing@huawei.com> CC: stable@vger.kernel.org # v3.9+ Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Rasmus Villemoes authored
[ Upstream commit a1b7f2f6 ] Commit fab4c256 ("PCI/AER: Add a TLP header print helper") introduced the helper function __print_tlp_header(), but contrary to the intention, the behaviour did change: Since we're taking the address of the parameter t, the first 4 or 8 bytes printed will be the value of the pointer t itself, and the remaining 12 or 8 bytes will be who-knows-what (something from the stack). We want to show the values of the four members of the struct aer_header_log_regs; that can be done without ugly and error-prone casts. On little-endian this should produce the same output as originally intended, and since no-one has complained about getting garbage output so far, I think big-endian should be ok too. Fixes: fab4c256 ("PCI/AER: Add a TLP header print helper") Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Borislav Petkov <bp@suse.de> CC: stable@vger.kernel.org # v3.14+ Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
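A minimal sketch of the class of bug described above (simplified, not the exact kernel code; the struct mirrors the dw0..dw3 layout of aer_header_log_regs):

    struct aer_header_log_regs { unsigned int dw0, dw1, dw2, dw3; };

    /* Buggy pattern: &t is the address of the local pointer variable, so the
     * dump starts with the pointer value itself and then reads stack bytes. */
    static void print_tlp_header_buggy(struct aer_header_log_regs *t)
    {
            unsigned char *tlp = (unsigned char *)&t;

            printk("TLP Header: %02x%02x%02x%02x ...\n",
                   tlp[0], tlp[1], tlp[2], tlp[3]);
    }

    /* Fixed pattern: print the four register words directly, no casts needed. */
    static void print_tlp_header_fixed(struct aer_header_log_regs *t)
    {
            printk("TLP Header: %08x %08x %08x %08x\n",
                   t->dw0, t->dw1, t->dw2, t->dw3);
    }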
-
Takashi Iwai authored
[ Upstream commit cc7016ab ] Some BIOS version of Fujitsu Lifebook T731 seems to set up the headphone pin (0x21) without the assoc number 0x0f while it's set only to the output on the docking port (0x1a). With the recent commit [03ad6a8c: ALSA: hda - Fix "PCM" name being used on one DAC when there are two DACs], this resulted in the weird mixer element mapping where the headphone on the laptop is assigned as a shared volume with the speaker and the docking port is assigned as an individual headphone. This patch improves the situation by correcting the headphone pin config to the more appropriate value. Reported-and-tested-by: Taylor Smock <smocktaylor@gmail.com> Cc: <stable@vger.kernel.org> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Kailang Yang authored
[ Upstream commit a59d7199 ] Pin sense becomes active only once the power pin has woken up, and the power pin does not wake up immediately during resume. Add some delay to wait for the power pin to become active. Signed-off-by: Kailang Yang <kailang@realtek.com> Cc: <stable@vger.kernel.org> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Takashi Sakamoto authored
[ Upstream commit a053fc31 ] Some M-Audio devices require a boot-up command to be received just after powering on, but the code in the BeBoB driver doesn't work properly on big-endian machines because the command must be laid out in little-endian byte order. This commit fixes this bug. This fix should go to stable kernels. Cc: Takayuki Shiroma <t.shiroma.oki@gmail.com> Signed-off-by: Takashi Sakamoto <o-takashi@sakamocchi.jp> Cc: <stable@vger.kernel.org> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Dmitry M. Fedin authored
[ Upstream commit 3dc8523f ] Adds an entry for the Creative USB X-Fi to the rc_config array in mixer_quirks.c to allow use of the volume knob on the device. Adds support for the newer X-Fi Pro card, known as "Model No. SB1095", with USB ID "041e:3237". Signed-off-by: Dmitry M. Fedin <dmitry.fedin@gmail.com> Cc: <stable@vger.kernel.org> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Hui Wang authored
[ Upstream commit af95b414 ] We have an HP machine which uses codec node 0x17 to connect the internal speaker, and from the node capability we saw the EAPD bit. If we don't set EAPD on for this node, the internal speaker can't output any sound. Cc: <stable@vger.kernel.org> BugLink: https://bugs.launchpad.net/bugs/1436745 Signed-off-by: Hui Wang <hui.wang@canonical.com> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Sebastian Wicki authored
[ Upstream commit 80b311d3 ] This model uses the same dock port as the previous generation. Signed-off-by: Sebastian Wicki <gandro@gmx.net> Cc: <stable@vger.kernel.org> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Peter Hurley authored
[ Upstream commit fb5ef9e7 ] BugLink: http://bugs.launchpad.net/bugs/1381005 In canon mode, the read buffer head will advance over the buffer tail if the input exceeds 4095 bytes without receiving a line termination char. Discard additional input until a line termination is received. Before evaluating for overflow, the 'room' value is normalized for I_PARMRK and 1 byte is reserved for line termination (even in !icanon mode, in case the mode is switched). The following table shows the transform:

  actual buffer   |   'room' value before overflow calc
  space avail     |   !I_PARMRK    |    I_PARMRK
  --------------------------------------------------
       0          |      -1        |      -1
       1          |       0        |       0
       2          |       1        |       0
       3          |       2        |       0
       4+         |       3        |       1

When !icanon, or when icanon and the read buffer contains newlines, normalized 'room' values of -1 and 0 are clamped to 0, and 'overflow' is 0, so read_head is not adjusted and the input i/o loop exits (setting no_room if called from flush_to_ldisc()). No input is discarded since the reader does have input available to read, which ensures forward progress. When icanon and the read buffer does not contain newlines and the normalized 'room' value is 0, then overflow and room are reset to 1, so that the i/o loop will process the next input char normally (except for parity errors, which are ignored). Thus, erasures, signalling chars, 7-bit mode, etc. will continue to be handled properly. If the input char processed was not a line termination char, then the canon_head index will not have advanced, so the normalized 'room' value will now be -1 and 'overflow' will be set, which indicates the read_head can safely be reset, effectively erasing the last char processed. If the input char processed was a line termination, then the canon_head index will have advanced, so 'overflow' is cleared to 0, the read_head is not reset, and 'room' is cleared to 0, which exits the i/o loop (because the reader now has input available to read, which ensures forward progress). Note that it is possible for a line termination to be received, and for the reader to copy the line to the user buffer, before the input i/o loop is ready to process the next input char. This is why the i/o loop recomputes the room/overflow state with every input char while handling overflow. Finally, if the input data was processed without receiving a line termination (so that overflow is still set), the pty driver must receive a write wakeup. A pty writer may be waiting to write more data in n_tty_write(), but without unthrottling here that wakeup will not arrive, and forward progress will halt. (Normally, the pty writer is woken when the reader reads data out of the buffer and more space becomes available.) Signed-off-by: Peter Hurley <peter@hurleysoftware.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> (backported from commit fb5ef9e7) Signed-off-by: Joseph Salisbury <joseph.salisbury@canonical.com>
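A small sketch consistent with the table above (simplified; the real line-discipline code derives 'space' from its head/tail indices):

    /* 'space' is the actual free space in the read buffer */
    ssize_t room = space;

    if (I_PARMRK(tty))
            room = (room + 2) / 3;  /* with parity marking, one input byte can
                                       expand to 3 bytes (0xff 0x00 prefix) */
    room--;                         /* reserve 1 byte for line termination */

    /* e.g. space = 4 gives room = 3 (!I_PARMRK) or 1 (I_PARMRK), as in the table */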
-
- 20 Apr, 2015 1 commit
-
-
Sasha Levin authored
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
- 17 Apr, 2015 30 commits
-
-
Ameya Palande authored
[ Upstream commit c8648508 ] On success, the callback function returns 0, so invert the if-condition check so that we can break out of the loop. Cc: stable@vger.kernel.org Signed-off-by: Ameya Palande <2ameya@gmail.com> Signed-off-by: Lee Jones <lee.jones@linaro.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
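A minimal sketch of the inverted check described above (the loop and callback names here are hypothetical stand-ins, not the driver's actual identifiers):

    /* the callback returns 0 on success, non-zero on failure */
    for (i = 0; i < num_entries; i++) {
            if (!try_entry(i))      /* was: if (try_entry(i)) */
                    break;          /* success -- stop iterating */
    }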
-
Markos Chandras authored
[ Upstream commit 87f966d9 ] On a MIPS Malta board, tons of FIFO underflow errors have been observed when using u-boot as the bootloader instead of YAMON. The reason is that YAMON used to set the pcnet device to SRAM mode but u-boot does not. As a result, the default Tx threshold (64 bytes) is now too small to keep the FIFO reasonably filled and this can lead to Tx FIFO underflow errors. Given that, it's best to set up the SRAM on supported controllers so we can always use the NOUFLO bit. Cc: <netdev@vger.kernel.org> Cc: <stable@vger.kernel.org> Cc: <linux-kernel@vger.kernel.org> Cc: Don Fry <pcnet32@frontier.com> Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Scott Wood authored
[ Upstream commit bb344ca5 ] Commit 746c9e9f "of/base: Fix PowerPC address parsing hack" limited the applicability of the workaround whereby a missing ranges is treated as an empty ranges. This workaround was hiding a bug in the etsec2 device tree nodes, which have children with reg, but did not have ranges. Signed-off-by: Scott Wood <scottwood@freescale.com> Reported-by: Alexander Graf <agraf@suse.de> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Tyrel Datwyler authored
[ Upstream commit f6ff0414 ] We currently use the device tree update code in the kernel after resuming from a suspend operation to re-sync the kernel's view of the device tree with that of the hypervisor. The code as it stands is not endian safe, as it relies on parsing buffers returned by RTAS calls that thus contain data in big-endian format. This patch annotates variables and structure members with __be types and performs the necessary byte swaps to CPU endianness for data that needs to be parsed. Signed-off-by: Tyrel Datwyler <tyreld@linux.vnet.ibm.com> Cc: Nathan Fontenot <nfont@linux.vnet.ibm.com> Cc: Cyril Bur <cyrilbur@gmail.com> Cc: stable@vger.kernel.org Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Catalin Marinas authored
[ Upstream commit e53f21bc ] The idle_task_exit() function may call switch_mm() with next == &init_mm. On arm64, init_mm.pgd cannot be used for user mappings, so this patch simply sets the reserved TTBR0. Cc: <stable@vger.kernel.org> Reported-by: Jon Medhurst (Tixy) <tixy@linaro.org> Tested-by: Jon Medhurst (Tixy) <tixy@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Keerthy authored
[ Upstream commit e03826d5 ] The register offset for REGEN2_CTRL is different on the TPS659038 chip compared with other Palmas family PMICs. In the case of TPS659038 the wrong offset pointed to PLLEN_CTRL and was causing a hang. Correct it. Signed-off-by: Keerthy <j-keerthy@ti.com> Signed-off-by: Mark Brown <broonie@kernel.org> Cc: stable@vger.kernel.org Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Mahesh Salgaonkar authored
[ Upstream commit 44d5f6f5 ] Commit 2ba9f0d8 changed CONFIG_KVM_BOOK3S_64_HV to a tristate to allow the HV/PR bits to be built as modules. But the MCE code still depends on CONFIG_KVM_BOOK3S_64_HV, which is wrong: when the user selects CONFIG_KVM_BOOK3S_64_HV=m to build the HV/PR bits as separate modules, the relevant MCE code gets excluded. This patch fixes the MCE code to use CONFIG_KVM_BOOK3S_64_HANDLER, which makes sure the relevant MCE code is included when the HV/PR bits are built as separate modules. Fixes: 2ba9f0d8 ("kvm: powerpc: book3s: Support building HV and PR KVM as module") Cc: stable@vger.kernel.org # v3.14+ Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Tony Luck authored
[ Upstream commit fec53af5 ] The code will always think there are 16 banks because of a typo. Reported-by: Misha Signed-off-by: Tony Luck <tony.luck@intel.com> Acked-by: Aristeu Rozanski <aris@redhat.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Tony Luck authored
[ Upstream commit f7cf2a22 ] Haswell moved the TOLM/TOHM registers to a different device and offset. The sb_edac driver accounted for the change of device, but not for the new offset. There was also a typo in the constant to fill in the low 26 bits (was 0x1ffffff, should be 0x3ffffff). This resulted in a bogus value for the top of low memory:

  EDAC DEBUG: get_memory_layout: TOLM: 0.032 GB (0x0000000001ffffff)

which would result in EDAC refusing to translate addresses for errors above the bogus value and below 4GB:

  sbridge MC3: HANDLING MCE MEMORY ERROR
  sbridge MC3: CPU 0: Machine Check Event: 0 Bank 7: 8c00004000010090
  sbridge MC3: TSC 0
  sbridge MC3: ADDR 2000000
  sbridge MC3: MISC 523eac86
  sbridge MC3: PROCESSOR 0:306f3 TIME 1414600951 SOCKET 0 APIC 0
  MC3: 1 CE Error at TOLM area, on addr 0x02000000 on any memory ( page:0x0 offset:0x0 grain:32 syndrome:0x0)

With the fix we see the correct TOLM value:

  DEBUG: get_memory_layout: TOLM: 2.048 GB (0x000000007fffffff)

and we decode address 2000000 correctly:

  sbridge MC3: HANDLING MCE MEMORY ERROR
  sbridge MC3: CPU 0: Machine Check Event: 0 Bank 7: 8c00004000010090
  sbridge MC3: TSC 0
  sbridge MC3: ADDR 2000000
  sbridge MC3: MISC 523e1086
  sbridge MC3: PROCESSOR 0:306f3 TIME 1414601319 SOCKET 0 APIC 0
  DEBUG: get_memory_error_data: SAD interleave package: 0 = CPU socket 0, HA 0, shiftup: 0
  DEBUG: get_memory_error_data: TAD#0: address 0x0000000002000000 < 0x000000007fffffff, socket interleave 1, channel interleave 4 (offset 0x00000000), index 0, base ch: 0, ch mask: 0x01
  DEBUG: get_memory_error_data: RIR#0, limit: 4.095 GB (0x00000000ffffffff), way: 1
  DEBUG: get_memory_error_data: RIR#0: channel address 0x00200000 < 0xffffffff, RIR interleave 0, index 0
  DEBUG: sbridge_mce_output_error: area:DRAM err_code:0001:0090 socket:0 channel_mask:1 rank:0
  MC3: 1 CE memory read error on CPU_SrcID#0_Channel#0_DIMM#0 (channel:0 slot:0 page:0x2000 offset:0x0 grain:32 syndrome:0x0 - area:DRAM err_code:0001:0090 socket:0 channel_mask:1 rank:0)

Signed-off-by: Tony Luck <tony.luck@intel.com> Acked-by: Aristeu Rozanski <aris@redhat.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
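A quick arithmetic check of the two constants mentioned above (plain C, not driver code):

    #include <assert.h>

    int main(void)
    {
            assert(((1u << 26) - 1) == 0x3ffffff);  /* correct: fills the low 26 bits */
            assert(((1u << 25) - 1) == 0x1ffffff);  /* the typo only covers 25 bits */
            return 0;
    }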
-
Ilya Dryomov authored
[ Upstream commit 6d7fdb0a ] This reverts commit 89baaa57. Dirty page throttling should be sufficient for us in the general case so there is no need to use __GFP_MEMALLOC - it would be needed only in the swap-over-rbd case, which we currently don't support. (It would probably take approximately the commit that is being reverted to add that support, but we would also need the "swap" option to distinguish from the general case and make sure swap ceph_client-s aren't shared with anything else.) See ceph-devel threads [1] and [2] for the details of why enabling pfmemalloc reserves for all cases is a bad thing. On top of potential system lockups related to drained emergency reserves, this turned out to cause ceph lockups in case peers are on the same host and communicating via loopback due to sk_filter() dropping pfmemalloc skbs on the receiving side because the receiving loopback socket is not tagged with SOCK_MEMALLOC. [1] "SOCK_MEMALLOC vs loopback" http://www.spinics.net/lists/ceph-devel/msg22998.html [2] "[PATCH] libceph: don't set memalloc flags in loopback case" http://www.spinics.net/lists/ceph-devel/msg23392.html Conflicts: net/ceph/messenger.c [ context: tcp_nodelay option ] Cc: Mike Christie <michaelc@cs.wisc.edu> Cc: Mel Gorman <mgorman@suse.de> Cc: Sage Weil <sage@redhat.com> Cc: stable@vger.kernel.org # 3.18+, needs backporting Signed-off-by: Ilya Dryomov <idryomov@gmail.com> Acked-by: Mike Christie <michaelc@cs.wisc.edu> Acked-by: Mel Gorman <mgorman@suse.de> [idryomov@gmail.com: backport to 3.18, 3.19: context] Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Sergei Antonov authored
[ Upstream commit 98cf21c6 ] Fix B-tree corruption when a new record is inserted at position 0 in the node in hfs_brec_insert(). In this case hfs_brec_update_parent() is called to update the parent index node (if it exists), and it is passed a hfs_find_data with a search_key containing the newly inserted key instead of the key to be updated. This results in an inconsistent index node. The bug reproduces on my machine after an extents overflow record for the catalog file (CNID=4) is inserted into the extents overflow B-tree. Because of a low (reserved) value of CNID=4, it has to become the first record in the first leaf node. The resulting first leaf node is correct:

  ----------------------------------------------------
  | key0.CNID=4 | key1.CNID=123 | key2.CNID=456, ... |
  ----------------------------------------------------

But the parent index key0 still contains the previous key CNID=123:

  -----------------------
  | key0.CNID=123 | ... |
  -----------------------

A change in hfs_brec_insert() makes hfs_brec_update_parent() work correctly by preventing it from getting the fd->record=-1 value from __hfs_brec_find(). Along the way, I removed duplicate code by unifying the if condition. The resulting code is equivalent to the original code because node is never 0. Also hfs_brec_update_parent() will now return an error after getting a negative fd->record value. However, the return value of hfs_brec_update_parent() is not checked anywhere in the file and I'm leaving it unchanged by this patch. brec.c lacks error checking after some other calls too, but this issue is of less importance than the one being fixed by this patch. Signed-off-by: Sergei Antonov <saproj@gmail.com> Cc: Joe Perches <joe@perches.com> Reviewed-by: Vyacheslav Dubeyko <slava@dubeyko.com> Acked-by: Hin-Tak Leung <htl10@users.sourceforge.net> Cc: Anton Altaparmakov <aia21@cam.ac.uk> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Christoph Hellwig <hch@infradead.org> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Uwe Kleine-König authored
[ Upstream commit 391949b6 ] With spidev the mesg->complete callback points to spidev_complete. Calling this unblocks spidev_sync, and so spidev_sync_write finishes. As the struct spi_message just read is a local variable in spidev_sync_write, and recording the trace event accesses this message, the recording is better done first. The same can happen for spidev_sync_read. This fixes an oops observed on a 3.14-rt system with spidev activity after echo 1 > /sys/kernel/debug/tracing/events/spi/enable . Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Signed-off-by: Mark Brown <broonie@kernel.org> Cc: stable@vger.kernel.org Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
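A hedged sketch of the reordering in the message-finalization path (simplified; only the ordering matters here):

    trace_spi_message_done(mesg);           /* record while 'mesg' is still valid */
    mesg->state = NULL;
    if (mesg->complete)
            mesg->complete(mesg->context);  /* may unblock spidev_sync(), after which
                                               the on-stack message in
                                               spidev_sync_write() goes out of scope */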
-
Ivan T. Ivanov authored
[ Upstream commit 12cb89e3 ] num-cs is a 32-bit property; don't read just the upper 16 bits. Fixes: 4a8573ab (spi: qup: Remove chip select function) Signed-off-by: Ivan T. Ivanov <iivanov@mm-sol.com> Signed-off-by: Mark Brown <broonie@kernel.org> Cc: stable@vger.kernel.org Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
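A hedged sketch of reading the property with the 32-bit helper (the fallback value below is a hypothetical placeholder, not necessarily the driver's default):

    u32 num_cs;

    /* 'num-cs' is a 32-bit cell; reading it as u16 picks up only half of it */
    if (of_property_read_u32(dev->of_node, "num-cs", &num_cs))
            num_cs = 1;     /* hypothetical fallback when the property is absent */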
-
Mikulas Patocka authored
[ Upstream commit 09ee96b2 ] The "dm snapshot: suspend origin when doing exception handover" commit fixed a exception store handover bug associated with pending exceptions to the "snapshot-origin" target. However, a similar problem exists in snapshot merging. When snapshot merging is in progress, we use the target "snapshot-merge" instead of "snapshot-origin". Consequently, during exception store handover, we must find the snapshot-merge target and suspend its associated mapped_device. To avoid lockdep warnings, the target must be suspended and resumed without holding _origins_lock. Introduce a dm_hold() function that grabs a reference on a mapped_device, but unlike dm_get(), it doesn't crash if the device has the DMF_FREEING flag set, it returns an error in this case. In snapshot_resume() we grab the reference to the origin device using dm_hold() while holding _origins_lock (_origins_lock guarantees that the device won't disappear). Then we release _origins_lock, suspend the device and grab _origins_lock again. NOTE to stable@ people: When backporting to kernels 3.18 and older, use dm_internal_suspend and dm_internal_resume instead of dm_internal_suspend_fast and dm_internal_resume_fast. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
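A sketch consistent with the dm_hold() semantics described above (paraphrased, not a verbatim copy of drivers/md/dm.c):

    int dm_hold(struct mapped_device *md)
    {
            spin_lock(&_minor_lock);
            if (test_bit(DMF_FREEING, &md->flags)) {
                    /* device is being torn down: return an error instead of crashing */
                    spin_unlock(&_minor_lock);
                    return -EBUSY;
            }
            dm_get(md);     /* take the reference while the lock excludes freeing */
            spin_unlock(&_minor_lock);
            return 0;
    }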
-
Mikulas Patocka authored
[ Upstream commit b735fede ] In the function snapshot_resume we perform exception store handover. If there is another active snapshot target, the exception store is moved from this target to the target that is being resumed. The problem is that if there is some pending exception, it will point to an incorrect exception store after that handover, causing a crash due to dm-snap-persistent.c:get_exception()'s BUG_ON. This bug can be triggered by repeatedly changing snapshot permissions with "lvchange -p r" and "lvchange -p rw" while there are writes on the associated origin device. To fix this bug, we must suspend the origin device when doing the exception store handover to make sure that there are no pending exceptions: - introduce _origin_hash that keeps track of dm_origin structures. - introduce functions __lookup_dm_origin, __insert_dm_origin and __remove_dm_origin that manipulate the origin hash. - modify snapshot_resume so that it calls dm_internal_suspend_fast() and dm_internal_resume_fast() on the origin device. NOTE to stable@ people: When backporting to kernels 3.12-3.18, use dm_internal_suspend and dm_internal_resume instead of dm_internal_suspend_fast and dm_internal_resume_fast. When backporting to kernels older than 3.12, you need to pick functions dm_internal_suspend and dm_internal_resume from the commit fd2ed4d2. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Sasha Levin authored
[ Upstream commit 5f027a3b ] It was always intended that a read to an unprovisioned block will return zeroes regardless of whether the pool is in read-only or read-write mode. thin_bio_map() was inconsistent in its handling of such reads when the pool is in read-only mode; it now properly zero-fills the bios it returns in response to unprovisioned block reads. Eliminate thin_bio_map()'s special read-only mode handling of -ENODATA and just allow the IO to be deferred to the worker, which will result in pool->process_bio() handling the IO (which already properly zero-fills reads to unprovisioned blocks). Reported-by: Eric Sandeen <sandeen@redhat.com> Signed-off-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Darrick J. Wong authored
[ Upstream commit e5db2980 ] Since it's possible for the discard and write same queue limits to change while the upper level command is being sliced and diced, fix up both of them (a) to reject IO if the special command is unsupported at the start of the function and (b) read the limits once and let the commands error out on their own if the status happens to change. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Mikulas Patocka authored
[ Upstream commit ab7c7bb6 ] __dm_destroy() must take the suspend_lock so that its presuspend and postsuspend calls do not race with an internal suspend. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Andy Shevchenko authored
[ Upstream commit a104a45b ] Commit 9cade1a4 (dma: dw: split driver to library part and platform code) introduced a separate platform driver but missed adding a MODULE_ALIAS("platform:dw_dmac"); to that module. This patch adds it so the driver is loaded automatically when the platform device is registered. Reported-by: "Blin, Jerome" <jerome.blin@intel.com> Fixes: 9cade1a4 (dma: dw: split driver to library part and platform code) Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Malcolm Priestley authored
[ Upstream commit 40c8790b ] When the driver sets this rate, a power value of zero is set, causing data flow to stop until another rate is tried. Signed-off-by: Malcolm Priestley <tvboxspy@gmail.com> Cc: <stable@vger.kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Malcolm Priestley authored
[ Upstream commit 163fe301 ] When the driver sets this rate, a power value of zero is set, causing data flow to stop until another rate is tried. Signed-off-by: Malcolm Priestley <tvboxspy@gmail.com> Cc: <stable@vger.kernel.org> # v3.17+ Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Peter Zijlstra authored
[ Upstream commit d525211f ] Vince reported a watchdog lockup like:

  [<ffffffff8115e114>] perf_tp_event+0xc4/0x210
  [<ffffffff810b4f8a>] perf_trace_lock+0x12a/0x160
  [<ffffffff810b7f10>] lock_release+0x130/0x260
  [<ffffffff816c7474>] _raw_spin_unlock_irqrestore+0x24/0x40
  [<ffffffff8107bb4d>] do_send_sig_info+0x5d/0x80
  [<ffffffff811f69df>] send_sigio_to_task+0x12f/0x1a0
  [<ffffffff811f71ce>] send_sigio+0xae/0x100
  [<ffffffff811f72b7>] kill_fasync+0x97/0xf0
  [<ffffffff8115d0b4>] perf_event_wakeup+0xd4/0xf0
  [<ffffffff8115d103>] perf_pending_event+0x33/0x60
  [<ffffffff8114e3fc>] irq_work_run_list+0x4c/0x80
  [<ffffffff8114e448>] irq_work_run+0x18/0x40
  [<ffffffff810196af>] smp_trace_irq_work_interrupt+0x3f/0xc0
  [<ffffffff816c99bd>] trace_irq_work_interrupt+0x6d/0x80

This is caused by an irq_work generating new irq_work and therefore not allowing forward progress: processing the perf irq_work triggers another perf event (tracepoint stuff), which in turn generates an irq_work, ad infinitum. Avoid this by raising the recursion counter in the irq_work -- which effectively disables all software events (including tracepoints) from actually triggering again. Reported-by: Vince Weaver <vincent.weaver@maine.edu> Tested-by: Vince Weaver <vincent.weaver@maine.edu> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: <stable@vger.kernel.org> Link: http://lkml.kernel.org/r/20150219170311.GH21418@twins.programming.kicks-ass.net Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
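A hedged sketch of the approach described above: bump the software-event recursion counter while the perf irq_work handler runs, so tracepoints fired from within the wakeup path cannot queue yet another irq_work (the surrounding function names come from the trace above; the body is paraphrased, not the exact upstream diff):

    static void perf_pending_event(struct irq_work *entry)
    {
            struct perf_event *event =
                    container_of(entry, struct perf_event, pending);
            int rctx;

            rctx = perf_swevent_get_recursion_context();
            /* if this fails, recursion is already disabled, which is also fine */

            /* ... original wakeup/disable handling unchanged ... */

            if (rctx >= 0)
                    perf_swevent_put_recursion_context(rctx);
    }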
-
Laurent Pinchart authored
[ Upstream commit d7c14605 ] The error code paths that require cleanup use a goto to jump to the cleanup code and return an error code. However, the error code variable res, which is initialized to -EINVAL when declared, is then overwritten with the return value of of_parse_phandle_with_args(), and reused as the return code from of_irq_parse_one(). This leads to an undetermined error being returned instead of the expected -EINVAL value. Fix it. Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com> Cc: stable@vger.kernel.org # 3.13+ Signed-off-by: Rob Herring <robh@kernel.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
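A self-contained sketch of the error-path pattern described above (the helper names are hypothetical stand-ins for the phandle parsing, not the kernel functions):

    #include <errno.h>

    static int lookup_phandle(int *out) { *out = 42; return 0; }  /* stub: succeeds */
    static int range_is_valid(int v)    { return v < 10; }        /* stub: 42 is invalid */

    static int parse_one(int *out)
    {
            int res = -EINVAL;

            res = lookup_phandle(out);      /* overwrites res, here with 0 (success) */
            if (res)
                    goto err;

            if (!range_is_valid(*out)) {
                    res = -EINVAL;          /* restore the intended error code; without
                                               this, the stale 0 from lookup_phandle()
                                               would leak out as the return value */
                    goto err;
            }
            return 0;
    err:
            return res;
    }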
-
Gregory CLEMENT authored
[ Upstream commit 43b68879 ] As stated in kernel/cpu_pm.c, "Platform is responsible for ensuring that cpu_pm_enter is not called twice on the same CPU before cpu_pm_exit is called.". In the current code in case of failure when calling mvebu_v7_cpu_suspend, the function cpu_pm_exit() is never called whereas cpu_pm_enter() was called just before. This patch moves the cpu_pm_exit() in order to balance the cpu_pm_enter() calls. Cc: stable@vger.kernel.org Reported-by: Fulvio Benini <fbf@libero.it> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Larry Finger authored
[ Upstream commit c8f03455 ] Routine rtl_is_special_data() is supposed to identify packets that need to use a low bit rate so that the probability of successful transmission is high. The current version has a bug that causes all IPv6 packets to be labelled as special, with a corresponding low rate of transmission. A complete fix will be quite intrusive, but until that is available, all IPv6 packets are identified as regular. This patch also removes a magic number. Reported-and-tested-by: Alan Fisher <acf@unixcube.org> Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net> Cc: Stable <stable@vger.kernel.org> [3.18+] Cc: Alan Fisher <acf@unixcube.org> Signed-off-by: Kalle Valo <kvalo@codeaurora.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Thierry Reding authored
[ Upstream commit 2f1bce48 ] devm_phy_create() stores the pointer to the new PHY at the address returned by devres_alloc(). The res parameter passed to devm_phy_match() is therefore the location where the pointer to the PHY is stored, hence it needs to be dereferenced before comparing to the match data in order to find the correct match. Cc: <stable@vger.kernel.org> # v3.13+ Signed-off-by: Thierry Reding <treding@nvidia.com> Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
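A sketch consistent with the fix described above: 'res' is the devres-allocated slot in which the PHY pointer was stored, so it must be dereferenced before comparing with the match data:

    static int devm_phy_match(struct device *dev, void *res, void *match_data)
    {
            struct phy **phy = res;    /* res holds the location of the pointer */

            return *phy == match_data; /* compare the stored PHY, not its slot */
    }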
-
Peter Chen authored
[ Upstream commit a886bd92 ] We should signal connect (pull up DP) only after we are already in peripheral mode; otherwise DP may be toggled because we reset the controller or perform a disconnect during the initialization for peripheral mode. The host may then get confused during enumeration, e.g. it finds the reset can't succeed but the device is still there, see the error messages below.

  hub 1-0:1.0: USB hub found
  hub 1-0:1.0: 1 port detected
  hub 1-0:1.0: cannot reset port 1 (err = -32)
  hub 1-0:1.0: cannot reset port 1 (err = -32)
  hub 1-0:1.0: cannot reset port 1 (err = -32)
  hub 1-0:1.0: cannot reset port 1 (err = -32)
  hub 1-0:1.0: cannot reset port 1 (err = -32)
  hub 1-0:1.0: Cannot enable port 1. Maybe the USB cable is bad?
  hub 1-0:1.0: cannot reset port 1 (err = -32)
  hub 1-0:1.0: cannot reset port 1 (err = -32)
  hub 1-0:1.0: cannot reset port 1 (err = -32)
  hub 1-0:1.0: cannot reset port 1 (err = -32)
  hub 1-0:1.0: cannot reset port 1 (err = -32)
  hub 1-0:1.0: Cannot enable port 1. Maybe the USB cable is bad?
  hub 1-0:1.0: cannot reset port 1 (err = -32)
  hub 1-0:1.0: cannot reset port 1 (err = -32)
  hub 1-0:1.0: cannot reset port 1 (err = -32)
  hub 1-0:1.0: cannot reset port 1 (err = -32)
  hub 1-0:1.0: cannot reset port 1 (err = -32)
  hub 1-0:1.0: Cannot enable port 1. Maybe the USB cable is bad?
  hub 1-0:1.0: cannot reset port 1 (err = -32)
  hub 1-0:1.0: cannot reset port 1 (err = -32)
  hub 1-0:1.0: cannot reset port 1 (err = -32)
  hub 1-0:1.0: cannot reset port 1 (err = -32)
  hub 1-0:1.0: cannot reset port 1 (err = -32)
  hub 1-0:1.0: Cannot enable port 1. Maybe the USB cable is bad?
  hub 1-0:1.0: unable to enumerate USB device on port 1

Fixes: the issue existed when the otg fsm code was added. Cc: <stable@vger.kernel.org> # v3.16+ Signed-off-by: Peter Chen <peter.chen@freescale.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Li Jun authored
[ Upstream commit d20f7807 ] This patch adds a response to the a_alt_hnp_support set-feature request from a legacy A-device; that is, the B-device can provide a message to the user indicating that the user needs to connect the B-device to an alternate port on the A-device. An A-device setting this feature indicates to the B-device that it is connected to an A-device port that is not capable of HNP, but that the A-device does have an alternate port that is capable of HNP. [Peter] Without this patch, the OTG B-device can't be enumerated on a non-HNP port of the A-device, see the log below:

  [ 2.287464] usb 1-1: Dual-Role OTG device on non-HNP port
  [ 2.293105] usb 1-1: can't set HNP mode: -32
  [ 2.417422] usb 1-1: new high-speed USB device number 4 using ci_hdrc
  [ 2.460635] usb 1-1: Dual-Role OTG device on non-HNP port
  [ 2.466424] usb 1-1: can't set HNP mode: -32
  [ 2.587464] usb 1-1: new high-speed USB device number 5 using ci_hdrc
  [ 2.630649] usb 1-1: Dual-Role OTG device on non-HNP port
  [ 2.636436] usb 1-1: can't set HNP mode: -32
  [ 2.641003] usb usb1-port1: unable to enumerate USB device

Cc: stable <stable@vger.kernel.org> Acked-by: Peter Chen <peter.chen@freescale.com> Signed-off-by: Li Jun <b47624@freescale.com> Signed-off-by: Peter Chen <peter.chen@freescale.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
David Dueck authored
[ Upstream commit d0f347d6 ] This fixes a potential null pointer dereference. Cc: <stable@vger.kernel.org> # v3.16+ Fixes: d4332013 ("driver core: dev_get_drvdata: Don't check for NULL dev") Acked-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: David Dueck <davidcdueck@googlemail.com> Signed-off-by: Felipe Balbi <balbi@ti.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Hans de Goede authored
[ Upstream commit bda13e35 ] A new uas compatible controller has shown up in some people's devices from the manufacturer Initio Corporation, this controller needs the US_FL_NO_ATA_1X quirk to work properly with uas, so add it to the uas quirks table. Reported-and-tested-by: Benjamin Tissoires <benjamin.tissoires@redhat.com> Cc: Benjamin Tissoires <benjamin.tissoires@redhat.com> Cc: stable@vger.kernel.org # 3.16 Signed-off-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-