- 17 Sep, 2014 40 commits
-
Alex Smith authored
commit 65768a1a upstream. task_user_regset_view() should test for TIF_32BIT_REGS in the flags of the specified task, not of the current task. Signed-off-by: Alex Smith <alex@alex-smith.me.uk> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7450/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
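A minimal sketch of the distinction the fix relies on; the regset view name and surrounding logic are assumptions, only the two flag-test helpers are the point:

    /* Buggy pattern: tests TIF_32BIT_REGS of whoever is making the ptrace call. */
    if (test_thread_flag(TIF_32BIT_REGS))
            return &user_mips_view;             /* hypothetical view name */

    /* Fixed pattern: test the flag of the task whose registers are being viewed. */
    if (test_tsk_thread_flag(task, TIF_32BIT_REGS))
            return &user_mips_view;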
-
Alex Smith authored
commit e90e6fdd upstream. On 32-bit/O32, pt_regs has a padding area at the beginning into which the syscall arguments passed via the user stack are copied. 4 arguments totalling 16 bytes are copied to offset 16 bytes into this area; however, the area is only 24 bytes long. This means the last 2 arguments overwrite pt_regs->regs[{0,1}]. If a syscall function returns an error, handle_sys stores the original syscall number in pt_regs->regs[0] for syscall restart. signal.c checks whether regs[0] is non-zero; if it is, it checks whether the syscall return value is one of the ERESTART* codes to see if it must be restarted. Should a syscall be made that results in a non-zero value being copied off the user stack into regs[0], and then returns a positive (non-error) value that matches one of the ERESTART* error codes, this can be mistaken for requiring a syscall restart. While the possibility for this to occur has always existed, it is made much more likely to occur by commit 46e12c07 ("MIPS: O32 / 32-bit: Always copy 4 stack arguments."), since now every syscall will copy 4 arguments and overwrite regs[0], rather than just those with 7 or 8 arguments. Since that commit, booting Debian under a 32-bit MIPS kernel almost always results in a hang early in boot, due to a wait4 syscall returning a PID that matches one of the ERESTART* codes, which then causes an incorrect restart of the syscall. The problem is fixed by increasing the size of the padding area so that arguments copied off the stack will not overwrite pt_regs->regs[{0,1}]. Signed-off-by: Alex Smith <alex.smith@imgtec.com> Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Tested-by: Aurelien Jarno <aurelien@aurel32.net> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7454/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
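A worked picture of the overflow described above; the layout and the exact size of the enlarged pad are assumptions, the arithmetic follows the text:

    /*
     * 32-bit/O32 pt_regs before the fix (layout illustrative only):
     *
     *   bytes  0..23   pad area for stack-passed syscall arguments (24 bytes)
     *   bytes 24..27   regs[0]
     *   bytes 28..31   regs[1]
     *   ...
     *
     * handle_sys copies 4 arguments (4 x 4 = 16 bytes) to offset 16, i.e.
     * bytes 16..31.  Bytes 24..31 -- the last two arguments -- land in
     * regs[0] and regs[1].  Growing the pad area to 32 bytes keeps the
     * copy entirely inside the padding.
     */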
-
Jeffrey Deans authored
commit ffc8415a upstream. A GIC interrupt which is declared as having a GIC_MAP_TO_NMI_MSK mapping causes the cpu parameter to gic_setup_intr() to be increased to 32, causing memory corruption when pcpu_masks[] is written to again later in the function. Signed-off-by: Jeffrey Deans <jeffrey.deans@imgtec.com> Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7375/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Tony Battersby authored
commit 2ba136da upstream. blk_rq_set_block_pc() memsets rq->cmd to 0, so it should come immediately after blk_get_request() to avoid overwriting the user-supplied CDB. Also check for failure to allocate rq. Fixes: f27b087b ("block: add blk_rq_set_block_pc()") Signed-off-by: Tony Battersby <tonyb@cybernetics.com> Signed-off-by: Jens Axboe <axboe@fb.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
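The ordering and error handling described above, sketched; variable names are illustrative, and in this kernel generation blk_get_request() returns NULL on failure:

    rq = blk_get_request(q, write ? WRITE : READ, GFP_KERNEL);
    if (!rq)
            return -ENOMEM;            /* allocation can fail; don't dereference NULL  */

    blk_rq_set_block_pc(rq);           /* zeroes rq->cmd, so it must run first ...     */
    memcpy(rq->cmd, cdb, cdb_len);     /* ... before the user-supplied CDB is placed   */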
-
Juergen Gross authored
commit f6105c08 upstream. If a scsi host driver specifies .cmd_len in its scsi_host_template, a driver's private command pool is needed. scsi_find_host_cmd_pool() will locate it, but scsi_alloc_host_cmd_pool() isn't saving the pool address in the host template. This will result in an access error when the host is removed. Avoid the problem by saving the address of a newly allocated command pool where it is expected. Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Christoph Hellwig <hch@lst.de> Fixes: 89d9a567 Signed-off-by: James Bottomley <JBottomley@Parallels.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Bart Van Assche authored
commit cd53eb68 upstream. If scsi_remove_host() is called while an rport is in the blocked state then scsi_remove_host() will only finish if the rport is unblocked from inside a timer function. Make sure that an rport only enters the blocked state if a timer will be started that will unblock it. This avoids a deadlock when unloading the ib_srp kernel module after the initiator has been disconnected from the target system with both the fast_io_fail_tmo and dev_loss_tmo parameters set to "off". Signed-off-by: Bart Van Assche <bvanassche@acm.org> Reviewed-by: Sagi Grimberg <sagig@mellanox.com> Reviewed-by: David Dillow <dave@thedillows.org> Signed-off-by: Roland Dreier <roland@purestorage.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Janusz Dziemidowicz authored
commit 0213436a upstream. Some devices don't like REPORT SUPPORTED OPERATION CODES and will simply timeout causing sd_mod init to take a very very long time. Introduce BLIST_NO_RSOC scsi scan flag, that stops RSOC from being issued. Add it to Promise Vtrak E610f entry in scsi scan blacklist. Fixes bug #79901 reported at https://bugzilla.kernel.org/show_bug.cgi?id=79901 Fixes: 98dcc294 ("SCSI: sd: Update WRITE SAME heuristics") Signed-off-by: Janusz Dziemidowicz <rraptorr@nails.eu.org> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
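For illustration, this is the general shape of a scsi_devinfo blacklist entry carrying such a flag; the exact flag combination used for the real Promise VTrak E610f entry is an assumption here:

    /* vendor, model, revision (NULL = any), flags */
    {"Promise", "VTrak E610f", NULL, BLIST_SPARSELUN | BLIST_NO_RSOC},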
-
Martin K. Petersen authored
commit c1d40a52 upstream. Despite supporting modern SCSI features some storage devices continue to claim conformance to an older version of the SPC spec. This is done for compatibility with legacy operating systems. Linux by default will not attempt to read VPD pages on devices that claim SPC-2 or older. Introduce a blacklist flag that can be used to trigger VPD page inquiries on devices that are known to support them. Reported-by: KY Srinivasan <kys@microsoft.com> Tested-by: KY Srinivasan <kys@microsoft.com> Reviewed-by: KY Srinivasan <kys@microsoft.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Hannes Reinecke authored
commit 22ffeb48 upstream. Sequential scan for more than 256 LUNs is very fragile as LUNs might not be numbered sequentially after that point. SAM revisions later than SCSI-3 impose a structure on LUNs larger than 256, making LUN numbers between 256 and 16384 illegal. SCSI-3, however, allows for plain 64-bit numbers with no internal structure. So restrict sequential LUN scan to 256 LUNs and add a new blacklist flag 'BLIST_SCSI3LUN' to scan up to max_lun devices. Signed-off-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Ewan Milne <emilne@redhat.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
K. Y. Srinivasan authored
commit 3533f860 upstream. On some Windows hosts on FC SANs, TEST_UNIT_READY can return SRB_STATUS_ERROR. Correctly handle this. Note that there is sufficient sense information to support scsi error handling even in this case. Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
K. Y. Srinivasan authored
commit f885fb73 upstream. Correctly set SRB flags for all valid I/O directions. Some IHV drivers on the Windows host require this. The host validates the command and SRB flags prior to passing the command down to native driver stack. Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
K. Y. Srinivasan authored
commit adb6f9e1 upstream. Based on the negotiated VMBUS protocol version, we adjust the size of the storage protocol messages. The two sizes we currently handle are pre-win8 and post-win8. In WS2012 R2, we negotiate a higher VMBUS protocol version than the win8 version. Make adjustments to correctly handle this. Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
K. Y. Srinivasan authored
commit 52f9614d upstream. Set cmd_per_lun to reflect the value supported by the Host. Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
K. Y. Srinivasan authored
commit 4cd83ecd upstream. Hyper-V hosts can support multiple targets, multiple channels, and a larger number of LUNs per target. Update the code to reflect this. With this patch we can correctly enumerate all the paths in a multi-path storage environment. Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
K. Y. Srinivasan authored
commit 8caf92d8 upstream. Going forward it is possible that some of the commands that are not currently implemented will be implemented on future Windows hosts. Even if they are not implemented, we are told the host will correctly handle unsupported commands (by returning an appropriate return code and sense information). Make command filtering depend on the host version. Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
K. Y. Srinivasan authored
commit 56b26e69 upstream. On Azure, we have seen instances of unbounded I/O latencies. To deal with this issue, implement a handler that can reset the timeout. Note that the host guarantees that it will respond to each command that has been issued. Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Reviewed-by: Hannes Reinecke <hare@suse.de> [hch: added a better comment explaining the issue] Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
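A sketch of such a timeout handler; the function name and its hookup follow common SCSI low-level-driver conventions and are assumptions here:

    static enum blk_eh_timer_return storvsc_eh_timed_out(struct scsi_cmnd *scmnd)
    {
            /*
             * The host guarantees it will eventually respond to every command
             * it accepted, so rather than escalating to error handling, keep
             * extending the block-layer timeout and wait the I/O out.
             */
            return BLK_EH_RESET_TIMER;
    }

The handler would then be wired up via the .eh_timed_out member of the driver's scsi_host_template.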
-
James Bottomley authored
commit 884ffee0 upstream. hostt->name might contain spaces, so use the ->proc_name short name instead when creating per-driver command slabs. Signed-off-by: James Bottomley <JBottomley@Parallels.com> Reported-by: poma <pomidorabelisima@gmail.com> Tested-by: poma <pomidorabelisima@gmail.com> Reviewed-by: Vladimir Davydov <vdavydov@parallels.com> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Aneesh Kumar K.V authored
commit 7e467245 upstream. We would get wrong results if the compiler recomputed old_pmd. Avoid that by using ACCESS_ONCE. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
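The pattern at issue, sketched with illustrative names:

    pmd_t old_pmd = *pmdp;              /* compiler may legally re-read *pmdp later,
                                           so two uses of old_pmd can disagree       */

    pmd_t old_pmd = ACCESS_ONCE(*pmdp); /* forces exactly one read of the pmd        */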
-
Aneesh Kumar K.V authored
commit 969b7b20 upstream. As per the ISA, for a 4k base page size we compare bits 14..65 of the VA specified with the entry_VA in the TLB. That implies we need to make sure we do a tlbie with all the possible 4k VAs we used to access the 16MB hugepage. With a 64k base page size we compare bits 14..57 of the VA. Hence we cannot ignore the lower 24 bits of the VA when doing a tlbie. We also cannot tlb invalidate a 16MB entry with just one tlbie instruction because we don't track which VA was used to instantiate the tlb entry. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Aneesh Kumar K.V authored
commit fc047955 upstream. If we changed base page size of the segment, either via sub_page_protect or via remap_4k_pfn, we do a demote_segment which doesn't flush the hash table entries. We do a lazy hash page table flush for all mapped pages in the demoted segment. This happens when we handle hash page fault for these pages. We use _PAGE_COMBO bit along with _PAGE_HASHPTE to indicate whether a pte is backed by 4K hash pte. If we find _PAGE_COMBO not set on the pte, that implies that we could possibly have older 64K hash pte entries in the hash page table and we need to invalidate those entries. Use _PAGE_COMBO to determine the page size with which we should invalidate the hash table entries on unmap. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Aneesh Kumar K.V authored
commit 629149fa upstream. If we changed base page size of the segment, either via sub_page_protect or via remap_4k_pfn, we do a demote_segment which doesn't flush the hash table entries. We do a lazy hash page table flush for all mapped pages in the demoted segment. This happens when we handle hash page fault for these pages. We use _PAGE_COMBO bit along with _PAGE_HASHPTE to indicate whether a pte is backed by 4K hash pte. If we find _PAGE_COMBO not set on the pte, that implies that we could possibly have older 64K hash pte entries in the hash page table and we need to invalidate those entries. Handle this correctly for 16M pages. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Aneesh Kumar K.V authored
commit fa1f8ae8 upstream. The segment identifier and segment size will remain the same in the loop, so we can compute them outside. We also change the hugepage_invalidate interface so that we can use it in a later patch. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Aneesh Kumar K.V authored
commit b0aa44a3 upstream. With hugepages, we store the hpte valid information in the pte page whose address is stored in the second half of the PMD. Use a write barrier to make sure clearing the pmd busy bit and updating the hpte valid info are ordered properly. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
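The ordering the barrier enforces, as a heavily hedged sketch; the slot-array write and the busy-bit clear are shown with hypothetical names:

    hpte_slot_array[index] = new_valid_info;  /* 1. publish the hpte valid info          */
    smp_wmb();                                /* 2. make step 1 visible first ...        */
    clear_pmd_busy(pmdp);                     /* 3. ... before letting other CPUs use
                                                 the pmd (hypothetical helper)           */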
-
Gavin Shan authored
commit 5efbabe0 upstream. Function remove_ddw() could be called in of_reconfig_notifier and we potentially remove the dynamic DMA window property, which invokes of_reconfig_notifier again. Eventually, it leads to the deadlock as the following backtrace shows. The patch fixes the issue by deferring the release of the dynamic DMA window property until the device node itself is released.
 =============================================
 [ INFO: possible recursive locking detected ]
 3.16.0+ #428 Tainted: G W
 ---------------------------------------------
 drmgr/2273 is trying to acquire lock:
  ((of_reconfig_chain).rwsem){.+.+..}, at: [<c000000000091890>] .__blocking_notifier_call_chain+0x40/0x78
 but task is already holding lock:
  ((of_reconfig_chain).rwsem){.+.+..}, at: [<c000000000091890>] .__blocking_notifier_call_chain+0x40/0x78
 other info that might help us debug this:
  Possible unsafe locking scenario:
        CPU0
        ----
   lock((of_reconfig_chain).rwsem);
   lock((of_reconfig_chain).rwsem);
  *** DEADLOCK ***
  May be due to missing lock nesting notation
 2 locks held by drmgr/2273:
  #0: (sb_writers#4){.+.+.+}, at: [<c0000000001cbe70>] .vfs_write+0xb0/0x1f8
  #1: ((of_reconfig_chain).rwsem){.+.+..}, at: [<c000000000091890>] .__blocking_notifier_call_chain+0x40/0x78
 stack backtrace:
 CPU: 17 PID: 2273 Comm: drmgr Tainted: G W 3.16.0+ #428
 Call Trace:
 [c0000000137e7000] [c000000000013d9c] .show_stack+0x88/0x148 (unreliable)
 [c0000000137e70b0] [c00000000083cd34] .dump_stack+0x7c/0x9c
 [c0000000137e7130] [c0000000000b8afc] .__lock_acquire+0x128c/0x1c68
 [c0000000137e7280] [c0000000000b9a4c] .lock_acquire+0xe8/0x104
 [c0000000137e7350] [c00000000083588c] .down_read+0x4c/0x90
 [c0000000137e73e0] [c000000000091890] .__blocking_notifier_call_chain+0x40/0x78
 [c0000000137e7490] [c000000000091900] .blocking_notifier_call_chain+0x38/0x48
 [c0000000137e7520] [c000000000682a28] .of_reconfig_notify+0x34/0x5c
 [c0000000137e75b0] [c000000000682a9c] .of_property_notify+0x4c/0x54
 [c0000000137e7650] [c000000000682bf0] .of_remove_property+0x30/0xd4
 [c0000000137e76f0] [c000000000052a44] .remove_ddw+0x144/0x168
 [c0000000137e7790] [c000000000053204] .iommu_reconfig_notifier+0x30/0xe0
 [c0000000137e7820] [c00000000009137c] .notifier_call_chain+0x6c/0xb4
 [c0000000137e78c0] [c0000000000918ac] .__blocking_notifier_call_chain+0x5c/0x78
 [c0000000137e7970] [c000000000091900] .blocking_notifier_call_chain+0x38/0x48
 [c0000000137e7a00] [c000000000682a28] .of_reconfig_notify+0x34/0x5c
 [c0000000137e7a90] [c000000000682e14] .of_detach_node+0x44/0x1fc
 [c0000000137e7b40] [c0000000000518e4] .ofdt_write+0x3ac/0x688
 [c0000000137e7c20] [c000000000238430] .proc_reg_write+0xb8/0xd4
 [c0000000137e7cd0] [c0000000001cbeac] .vfs_write+0xec/0x1f8
 [c0000000137e7d70] [c0000000001cc3b0] .SyS_write+0x58/0xa0
 [c0000000137e7e30] [c00000000000a064] syscall_exit+0x0/0x98
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Gavin Shan authored
commit f1b3929c upstream. While running the command "drmgr -c phb -r -s 'PHB 528'", the following backtrace jumped out because the target device node isn't marked with OF_DETACHED by of_detach_node(), which is caused by an error returned from the memory-hotplug-related reconfig notifier when CONFIG_MEMORY_HOTREMOVE is disabled. The patch fixes it.
 ERROR: Bad of_node_put() on /pci@800000020000210/ethernet@0
 CPU: 14 PID: 2252 Comm: drmgr Tainted: G W 3.16.0+ #427
 Call Trace:
 [c000000012a776a0] [c000000000013d9c] .show_stack+0x88/0x148 (unreliable)
 [c000000012a77750] [c00000000083cd34] .dump_stack+0x7c/0x9c
 [c000000012a777d0] [c0000000006807c4] .of_node_release+0x58/0xe0
 [c000000012a77860] [c00000000038a7d0] .kobject_release+0x174/0x1b8
 [c000000012a77900] [c00000000038a884] .kobject_put+0x70/0x78
 [c000000012a77980] [c000000000681680] .of_node_put+0x28/0x34
 [c000000012a77a00] [c000000000681ea8] .__of_get_next_child+0x64/0x70
 [c000000012a77a90] [c000000000682138] .of_find_node_by_path+0x1b8/0x20c
 [c000000012a77b40] [c000000000051840] .ofdt_write+0x308/0x688
 [c000000012a77c20] [c000000000238430] .proc_reg_write+0xb8/0xd4
 [c000000012a77cd0] [c0000000001cbeac] .vfs_write+0xec/0x1f8
 [c000000012a77d70] [c0000000001cc3b0] .SyS_write+0x58/0xa0
 [c000000012a77e30] [c00000000000a064] syscall_exit+0x0/0x98
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Aneesh Kumar K.V authored
commit 85c1fafd upstream. On ppc64 we support 4K hash ptes with a 64K page size. That requires us to track the hash pte slot information on a per-4k basis. We do that by storing the slot details in the second half of the pte page. The pte bit _PAGE_COMBO is used to indicate whether the second half needs to be looked at while building real_pte. We need to use a read memory barrier while doing that so that the load of hidx is not reordered w.r.t. the _PAGE_COMBO check. On the store side we already do a lwsync in __hash_page_4K. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Andrey Utkin authored
commit b00fc6ec upstream. Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=81631 Reported-by: David Binderman <dcb314@hotmail.com> Signed-off-by: Andrey Utkin <andrey.krieger.utkin@gmail.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Vaidyanathan Srinivasan authored
commit 95707d85 upstream. Flags from the device tree need to be parsed with accessors so that the correct value is interpreted on little-endian. Signed-off-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com> Reviewed-by: Preeti U. Murthy <preeti@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
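Device-tree cell values are big-endian, so a raw dereference on a little-endian host yields a byte-swapped value; the accessor-based pattern looks roughly like this (the property name "example,flags" is purely illustrative):

    u32 flags;

    /* Wrong on little-endian hosts: raw dereference of the big-endian cell. */
    const __be32 *p = of_get_property(np, "example,flags", NULL);
    flags = *(u32 *)p;

    /* Right: the accessor performs the endian conversion. */
    if (of_property_read_u32(np, "example,flags", &flags))
            return -ENODEV;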
-
Felipe Balbi authored
commit 42ab0f39 upstream. The second range of this particular regulator starts at 1.60V, not at 1.55V as originally implied by the code. Signed-off-by: Felipe Balbi <balbi@ti.com> Signed-off-by: Mark Brown <broonie@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Nikesh Oswal authored
commit 5b919f3e upstream. WM5110/8280 devices do not support bypass mode for LDO1 so remove the bypass callbacks registered with regulator core. Signed-off-by: Nikesh Oswal <nikesh@opensource.wolfsonmicro.com> Signed-off-by: Mark Brown <broonie@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Tony Lindgren authored
commit daebabd5 upstream. Commit 43fef47f (mfd: twl4030-power: Add a configuration to turn off oscillator during off-idle) added support for configuring the PMIC to cut off resources during deeper idle states to save power. This, however, caused a regression for n900 display power, which required the PMIC configuration to be disabled with commit d937678a (ARM: dts: Revert enabling of twl configuration for n900). It turns out the root cause of the problem is that we must use TWL4030_RESCONFIG_UNDEF instead of DEV_GRP_NULL to avoid disabling regulators that may have been enabled before the init function for twl4030-power.c runs. With TWL4030_RESCONFIG_UNDEF we let the regulator framework control the regulators like it should. Here we only need to configure the sys_clken and sys_off_mode triggers for the regulators, which cannot be done by the regulator framework as it is not running at that point. This allows us to enable the PMIC configuration for n900. Fixes: 43fef47f (mfd: twl4030-power: Add a configuration to turn off oscillator during off-idle) Signed-off-by: Tony Lindgren <tony@atomide.com> Tested-by: Aaro Koskinen <aaro.koskinen@iki.fi> Signed-off-by: Lee Jones <lee.jones@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jeff Mahoney authored
commit 18139089 upstream. The rtsx_usb driver contains the table for the devices it supports but doesn't export it. As a result, no alias is generated and it doesn't get loaded automatically. Via https://bugzilla.novell.com/show_bug.cgi?id=890096 Signed-off-by: Jeff Mahoney <jeffm@suse.com> Reported-by: Marcel Witte <wittemar@googlemail.com> Cc: Roger Tseng <rogerable@realtek.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
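The general shape of such a fix, sketched; the table name and IDs are illustrative, only the MODULE_DEVICE_TABLE() export is the point, since that is what generates the modalias the module loader keys on:

    static const struct usb_device_id rtsx_usb_ids[] = {
            { USB_DEVICE(0x0bda, 0x0129) },     /* vendor/product IDs illustrative */
            { }                                 /* terminating entry               */
    };
    MODULE_DEVICE_TABLE(usb, rtsx_usb_ids);     /* no export => no alias => no autoload */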
-
Michael Welling authored
commit 46de8ff8 upstream. single-ulpi-bypass is a flag used for older OMAP3 silicon. The flag, when set, can trigger code that improperly uses the OMAP_UHH_HOSTCONFIG_ULPI_BYPASS define to clear the corresponding bit. Instead it clears all of the other bits, disabling all of the ports in the process. Signed-off-by: Michael Welling <mwelling@emacinc.com> Signed-off-by: Lee Jones <lee.jones@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
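One way such a mask mix-up typically looks, as a schematic reconstruction of the description above rather than the driver's actual code:

    /* Buggy: keeps only the bypass bit, wiping every other bit in the register. */
    reg &= OMAP_UHH_HOSTCONFIG_ULPI_BYPASS;

    /* Intended: clear just the bypass bit and leave the port enables alone. */
    reg &= ~OMAP_UHH_HOSTCONFIG_ULPI_BYPASS;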
-
Sasha Levin authored
commit 618fde87 upstream. The rarely-executed memory-allocation-failed callback path generates a WARN_ON_ONCE() when smp_call_function_single() succeeds. Presumably it's supposed to warn on failures. Signed-off-by: Sasha Levin <sasha.levin@oracle.com> Cc: Christoph Lameter <cl@gentwo.org> Cc: Gilad Ben-Yossef <gilad@benyossef.com> Cc: David Rientjes <rientjes@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Tejun Heo <htejun@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
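The inverted check, sketched; smp_call_function_single() returns 0 on success, so the warning should fire only on a non-zero return:

    /* Buggy: warns when the call succeeded (returned 0). */
    WARN_ON_ONCE(!smp_call_function_single(cpu, func, info, wait));

    /* Intended: warn only when the call actually failed. */
    WARN_ON_ONCE(smp_call_function_single(cpu, func, info, wait));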
-
Li Zhong authored
commit d0177639 upstream. It is possible for some platforms, such as powerpc, to set HPAGE_SHIFT to 0 to indicate that huge pages are not supported. When this is the case, hugetlbfs can be disabled at boot time: "hugetlbfs: disabling because there are no supported hugepage sizes". Then in dissolve_free_huge_pages(), order is kept at its maximum (64 on 64-bit), and the for loop below won't end: for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << order) As suggested by Naoya, the fix below checks hugepages_supported() before calling dissolve_free_huge_pages(). [rientjes@google.com: no legitimate reason to call dissolve_free_huge_pages() when !hugepages_supported()] Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com> Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Acked-by: David Rientjes <rientjes@google.com> Signed-off-by: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
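A sketch of the failure and the guard described above; the call site shown is illustrative, the guard itself follows the description:

    /*
     * With HPAGE_SHIFT == 0 the huge page order stays at its maximum, so
     * "pfn += 1 << order" adds 0 (or is undefined) and the loop never ends:
     *
     *     for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << order)
     *             ...
     *
     * Guard the call site instead:
     */
    if (hugepages_supported())
            dissolve_free_huge_pages(start_pfn, end_pfn);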
-
Pranith Kumar authored
commit e04aca4a upstream. Fix a build error reported by Geert Uytterhoeven here: http://kisskb.ellerman.id.au/kisskb/buildresult/11607865/ The error happens when CONFIG_HAS_IOPORT_MAP=n, in which case the definitions of ioport_map()/ioport_unmap() are missing. Fix this build error by adding these prototypes. Signed-off-by: Pranith Kumar <bobby.prani@gmail.com> Reported-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ben Hutchings authored
commit 2b462638 upstream. If we failed to copy from the structure, writing back the flags leaks 31 bits of kernel memory (the rest of the ir_flags field). In any case, if we cannot copy from/to the structure, why should we expect putting just the flags to work? Also make sure ocfs2_info_handle_freeinode() returns the right error code if the copy_to_user() fails. Fixes: ddee5cdb ('Ocfs2: Add new OCFS2_IOC_INFO ioctl for ocfs2 v8.') Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Cc: Joel Becker <jlbec@evilplan.org> Acked-by: Mark Fasheh <mfasheh@suse.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
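The error-handling principle at stake, sketched generically; the structure and names below are illustrative, not the actual ocfs2 code:

    struct example_info_request req;

    if (copy_from_user(&req, argp, sizeof(req)))
            return -EFAULT;     /* req is not fully initialized here, so writing any
                                   part of it back would leak kernel stack contents  */

    /* ... fill in req ... */

    if (copy_to_user(argp, &req, sizeof(req)))
            return -EFAULT;     /* propagate the failure instead of returning success */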
-
Jan Kara authored
commit 5838d444 upstream. Commit 85816794 ("fanotify: Fix use after free for permission events") introduced a double free issue for permission events which are pending in the group's notification queue while the group is being destroyed. These events are freed from fanotify_handle_event() but they are not removed from the group's notification queue and thus they get freed again from fsnotify_flush_notify(). Fix the problem by removing permission events from the notification queue before freeing them if we skip processing the access response. Also expand comments in fanotify_release() to explain group shutdown in detail. Fixes: 85816794 Signed-off-by: Jan Kara <jack@suse.cz> Reported-by: Douglas Leeder <douglas.leeder@sophos.com> Tested-by: Douglas Leeder <douglas.leeder@sophos.com> Reported-by: Heinrich Schuchard <xypron.glpk@gmx.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Eric Paris authored
commit 7d8b6c63 upstream. This is effectively a revert of 7b9a7ec5 plus fixing it a different way... We found, when trying to run an application from an application which had dropped privs, that the kernel does security checks on undefined capability bits. This was ESPECIALLY difficult to debug as those undefined bits are hidden from /proc/$PID/status. Consider a root application which drops all capabilities from ALL 4 capability sets. We assume, since the application is going to set eff/perm/inh from an array, that it will clear not only the defined caps less than CAP_LAST_CAP, but also the higher 28ish bits which are undefined future capabilities. The BSET gets cleared differently. Instead it is cleared one bit at a time. The problem here is that in security/commoncap.c::cap_task_prctl() we actually check the validity of a capability being read. So any task which attempts to 'read all things set in bset' followed by 'unset all things set in bset' will not even attempt to unset the undefined bits higher than CAP_LAST_CAP. So the 'parent' will look something like:
 CapInh: 0000000000000000
 CapPrm: 0000000000000000
 CapEff: 0000000000000000
 CapBnd: ffffffc000000000
All of this 'should' be fine. Given that these are undefined bits that aren't supposed to have anything to do with permissions. But they do... So let's now consider a task which cleared the eff/perm/inh completely and cleared all of the valid caps in the bset (but not the invalid caps it couldn't read out of the kernel). We know that this is exactly what the libcap-ng library does and what the go capabilities library does. They both leave you in that above situation if you try to clear all of your capabilities from all 4 sets. If that root task calls execve() the child task will pick up all caps not blocked by the bset. The bset, however, does not block bits higher than CAP_LAST_CAP. So now the child task has bits in eff which are not in the parent. These are 'meaningless' undefined bits, but still bits which the parent doesn't have. The problem is now in cred_cap_issubset() (or any operation which does a subset test) as the child, while a subset for valid cap bits, is not a subset for invalid cap bits! So now we decide, during commit creds, that the child is not dumpable, given it is 'more priv' than its parent. It also means the parent cannot ptrace the child and other stupidity. The solution here:
 1) stop hiding capability bits in status. This makes debugging easier!
 2) stop giving any task undefined capability bits. It's simple: if you don't put those invalid bits in CAP_FULL_SET you won't get them in init and you won't get them in any other task either. This fixes the cap_issubset() tests and resulting fallout (which made the init task in a docker container untraceable among other things).
 3) mask out undefined bits when sys_capset() is called as it might use ~0, ~0 to denote 'all capabilities' for backward/forward compatibility. This lets 'capsh --caps="all=eip" -- -c /bin/bash' run.
 4) mask out undefined bits when we read a file capability off of disk as again likely all bits are set in the xattr for forward/backward compatibility. This lets 'setcap all+pe /bin/bash; /bin/bash' run.
Signed-off-by: Eric Paris <eparis@redhat.com> Reviewed-by: Kees Cook <keescook@chromium.org> Cc: Andrew Vagin <avagin@openvz.org> Cc: Andrew G. Morgan <morgan@kernel.org> Cc: Serge E. Hallyn <serge.hallyn@canonical.com> Cc: Kees Cook <keescook@chromium.org> Cc: Steve Grubb <sgrubb@redhat.com> Cc: Dan Walsh <dwalsh@redhat.com> Signed-off-by: James Morris <james.l.morris@oracle.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Stefan Berger authored
commit b49e1043 upstream. Properly clean the sysfs entries in the error path Reported-by: Dmitry Kasatkin <dmitry.kasatkin@gmail.com> Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com> Reviewed-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com> Signed-off-by: Peter Huewe <peterhuewe@gmx.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-