- 12 Feb, 2019 24 commits
-
-
Eric Biggers authored
[ Upstream commit 0a6a40c2 ] In the "aes-fixed-time" AES implementation, disable interrupts while accessing the S-box, in order to make cache-timing attacks more difficult. Previously it was possible for the CPU to be interrupted while the S-box was loaded into L1 cache, potentially evicting the cachelines and causing later table lookups to be time-variant. In tests I did on x86 and ARM, this doesn't affect performance significantly. Responsiveness is potentially a concern, but interrupts are only disabled for a single AES block. Note that even after this change, the implementation still isn't necessarily guaranteed to be constant-time; see https://cr.yp.to/antiforgery/cachetiming-20050414.pdf for a discussion of the many difficulties involved in writing truly constant-time AES software. But it's valuable to make such attacks more difficult. Reviewed-by:
Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by:
Sasha Levin <sashal@kernel.org>
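As an illustration only, the pattern described above looks roughly like this in kernel C (a sketch, not the actual crypto/aes_ti.c change; struct aes_ctx and encrypt_one_block() are hypothetical placeholders):

    static void aes_encrypt_fixed_time(const struct aes_ctx *ctx,
                                       u8 *out, const u8 *in)
    {
        unsigned long flags;

        /*
         * Interrupts stay off only for this single 16-byte block, so the
         * S-box cachelines loaded by the table lookups cannot be evicted
         * by an interrupt handler in the middle of the computation.
         */
        local_irq_save(flags);
        encrypt_one_block(ctx, out, in);   /* hypothetical table-based helper */
        local_irq_restore(flags);
    }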
-
Frank Rowand authored
[ Upstream commit 5b3f5c40 ] The previous commit, "of: overlay: add missing of_node_get() in __of_attach_node_sysfs" added a missing of_node_get() to __of_attach_node_sysfs(). This results in a refcount imbalance for nodes attached with dlpar_attach_node(). The calling sequence from dlpar_attach_node() to __of_attach_node_sysfs() is: dlpar_attach_node() of_attach_node() __of_attach_node_sysfs() For more detailed description of the node refcount, see commit 68baf692 ("powerpc/pseries: Fix of_node_put() underflow during DLPAR remove"). Tested-by:
Alan Tull <atull@kernel.org> Acked-by:
Michael Ellerman <mpe@ellerman.id.au> Signed-off-by:
Frank Rowand <frank.rowand@sony.com> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Colin Ian King authored
[ Upstream commit 53bb565f ] In the expression "word1 << 16", word1 starts as u16, but is promoted to a signed int, then sign-extended to resource_size_t, which is probably not what was intended. Cast to resource_size_t to avoid the sign extension. This fixes an identical issue as fixed by commit 0b2d7076 ("x86/PCI: Fix Broadcom CNB20LE unintended sign extension") back in 2014. Detected by CoverityScan, CID#138749, 138750 ("Unintended sign extension") Fixes: 3f6ea84a ("PCI: read memory ranges out of Broadcom CNB20LE host bridge") Signed-off-by:
Colin Ian King <colin.king@canonical.com> Signed-off-by:
Bjorn Helgaas <helgaas@kernel.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
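For illustration, the promotion problem and the cast that fixes it can be shown in a small standalone C sketch (the value of word1 here is made up; the real value comes from the CNB20LE host bridge registers):

    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t resource_size_t;

    int main(void)
    {
        uint16_t word1 = 0x9000;   /* any value with bit 15 set triggers the bug */

        /*
         * Buggy form: "word1 << 16" promotes word1 to signed int, so the
         * result is negative and gets sign-extended when widened to
         * resource_size_t, yielding 0xffffffff90000000 instead of 0x90000000.
         */
        resource_size_t fixed = (resource_size_t)word1 << 16;

        printf("fixed = 0x%llx\n", (unsigned long long)fixed);
        return 0;
    }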
-
Bob Peterson authored
[ Upstream commit 216f0efd ] Before this patch, recovery would cause all callbacks to be delayed, put on a queue, and afterward they were all queued to the callback work queue. This patch does the same thing, but occasionally takes a break after 25 of them so it won't swamp the CPU at the expense of other RT processes like corosync. Signed-off-by:
Bob Peterson <rpeterso@redhat.com> Signed-off-by:
David Teigland <teigland@redhat.com> Signed-off-by:
Sasha Levin <sashal@kernel.org>
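The throttling amounts to the pattern below (a kernel-C sketch with assumed names such as delayed_cbs and callback_wq, not the literal dlm change):

    struct delayed_cb *cb, *safe;   /* hypothetical callback entry type */
    int count = 0;

    /* Re-queue the delayed callbacks, yielding the CPU every 25 entries so
     * other runnable tasks (e.g. corosync) are not starved. */
    list_for_each_entry_safe(cb, safe, &delayed_cbs, list) {
        list_del(&cb->list);
        queue_work(callback_wq, &cb->work);
        if (++count == 25) {
            count = 0;
            cond_resched();
        }
    }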
-
Yi Wang authored
[ Upstream commit 46fda5b5 ] Smatch reports warnings: drivers/clk/imgtec/clk-boston.c:76 clk_boston_setup() warn: possible memory leak of 'onecell' drivers/clk/imgtec/clk-boston.c:83 clk_boston_setup() warn: possible memory leak of 'onecell' drivers/clk/imgtec/clk-boston.c:90 clk_boston_setup() warn: possible memory leak of 'onecell' 'onecell' is allocated in clk_boston_setup(), but is not freed before leaving the error handling cases. Signed-off-by:
Yi Wang <wang.yi59@zte.com.cn> Signed-off-by:
Stephen Boyd <sboyd@kernel.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
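The missing frees look roughly like this (a sketch; error handling simplified and names approximated from the Smatch warnings above, not the verbatim clk-boston diff):

    onecell = kzalloc(sizeof(*onecell) + 3 * sizeof(struct clk_hw *), GFP_KERNEL);
    if (!onecell)
        return;

    hw = clk_hw_register_fixed_rate(NULL, "input", NULL, 0, freq);
    if (IS_ERR(hw)) {
        pr_err("failed to register input clock: %ld\n", PTR_ERR(hw));
        kfree(onecell);          /* previously missing: 'onecell' leaked here */
        return;
    }
    onecell->hws[0] = hw;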
-
Yufen Wang authored
[ Upstream commit 82c08c3e ] panic() may be called at the same time on different CPUs. For example: CPU 0: panic() __crash_kexec machine_crash_shutdown crash_smp_send_stop machine_kexec BUG_ON(num_online_cpus() > 1); CPU 1: panic() local_irq_disable panic_smp_self_stop If CPU 1 calls panic_smp_self_stop() before crash_smp_send_stop(), kdump fails: CPU 1 can't receive the IPI IRQ, so it stays online forever. To fix this problem, this patch splits out panic_smp_self_stop() and adds set_cpu_online(smp_processor_id(), false). Signed-off-by:
Yufen Wang <wangyufen@huawei.com> Signed-off-by:
Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by:
Sasha Levin <sashal@kernel.org>
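A sketch of the resulting arch-specific stop routine (illustrative; the exact body in arch/arm/kernel/smp.c may differ):

    void panic_smp_self_stop(void)
    {
        /*
         * Mark this CPU offline before parking it, so a concurrent
         * panic()/kdump on another CPU does not wait forever for this
         * CPU to answer a crash-stop IPI it can no longer receive.
         */
        set_cpu_online(smp_processor_id(), false);

        while (1)
            cpu_relax();
    }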
-
James Smart authored
[ Upstream commit 30e196ca ] After a LOGO in response to an ABTS timeout, a PLOGI wasn't issued to re-establish the login. An nlp_type check in the LOGO completion handler failed to restart discovery for NVME targets. Revised the nlp_type check to cover NVME as well as SCSI targets. While reviewing the LOGO handling a few other issues were seen and were addressed: - Better lock synchronization around ndlp data types - When the ABTS times out, unregister the RPI before sending the LOGO so that all local exchange contexts are cleared and nothing received while awaiting LOGO/PLOGI handling will be accepted. - LOGO handling optimized to: Wait only R_A_TOV for a response. It doesn't need to be retried on timeout. If there wasn't a response, a PLOGI will be sent, thus an implicit logout applies as well when the other port sees it. If there is a response, any kind of response is considered "good" and the XRI is quarantined for an exchange qualifier window. - PLOGI is issued as soon as the LOGO state is resolved. Signed-off-by:
Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by:
James Smart <jsmart2021@gmail.com> Reviewed-by:
Hannes Reinecke <hare@suse.com> Signed-off-by:
Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Suganath Prabu authored
[ Upstream commit dc730212 ] Call sas_remove_host() before removing the target devices in the driver's .remove() callback function (i.e. during driver unload), so that the driver can allow SYNC CACHE, START STOP UNIT commands etc. (which are issued from the SML) to reach the target drives during driver unload. Once sas_remove_host() is called before removing the target drives, the driver only needs to clean up the resources allocated for the target devices; there is no need to call the sas_port_delete_phy() and sas_port_delete() APIs, as these are called internally from sas_remove_host(). Signed-off-by:
Suganath Prabu <suganath-prabu.subramani@broadcom.com> Reviewed-by:
Bjorn Helgaas <bhelgaas@google.com> Reviewed-by:
Andy Shevchenko <andy.shevchenko@gmail.com> Signed-off-by:
Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
James Smart authored
[ Upstream commit b114d900 ] When LCBs are rejected while beaconing is already in progress, the Reason Code Explanation was not being set; it should have been set to "command in progress". Signed-off-by:
Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by:
James Smart <jsmart2021@gmail.com> Reviewed-by:
Hannes Reinecke <hare@suse.com> Signed-off-by:
Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Lorenzo Bianconi authored
[ Upstream commit 3831a2a0 ] In order to properly support dynack in ad-hoc mode running wpa_supplicant, take into account authentication frames for 'late ack' detection. This patch has been tested on devices mounted on offshore high-voltage stations connected through a ~24 km link. Reported-by:
Koen Vandeputte <koen.vandeputte@ncentric.com> Tested-by:
Koen Vandeputte <koen.vandeputte@ncentric.com> Signed-off-by:
Lorenzo Bianconi <lorenzo.bianconi@redhat.com> Signed-off-by:
Kalle Valo <kvalo@codeaurora.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Brian Norris authored
[ Upstream commit 2bd345cd ] Commit 2ea9f12c ("ath10k: add new cipher suite support") added a new n_cipher_suites HW param with a fallback value and a warning log. Commit 03a72288 ("ath10k: wmi: add hw params entry for wcn3990") later added WCN3990 HW entries, but it missed the n_cipher_suites entry. Rather than seeing the warning "ath10k_snoc 18800000.wifi: invalid hw_params.n_cipher_suites 0" on every boot, let's provide the appropriate value. Cc: Rakesh Pillai <pillair@qti.qualcomm.com> Cc: Govind Singh <govinds@qti.qualcomm.com> Signed-off-by:
Brian Norris <briannorris@chromium.org> Signed-off-by:
Kalle Valo <kvalo@codeaurora.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Lior David authored
[ Upstream commit 66449740 ] A successful call to wil_tx_ring takes skb reference so it will only be freed in wil_tx_complete. Consume the skb in wil_find_tx_bcast_2 to prevent memory leak. Signed-off-by:
Lior David <liord@codeaurora.org> Signed-off-by:
Maya Erez <merez@codeaurora.org> Signed-off-by:
Kalle Valo <kvalo@codeaurora.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Alexei Avshalom Lazar authored
[ Upstream commit d083b2e2 ] With the current reset flow, Talyn sometimes gets stuck, causing PCIe enumeration to fail. Fix this by removing some reset flow operations that are not relevant for Talyn. Setting bit 15 in RGF_HP_CTRL is WBE specific and is not in use for all wil6210 devices. For Sparrow, BIT_HPAL_PERST_FROM_PAD and BIT_CAR_PERST_RST were set as a WA for an HW issue. Signed-off-by:
Alexei Avshalom Lazar <ailizaro@codeaurora.org> Signed-off-by:
Maya Erez <merez@codeaurora.org> Signed-off-by:
Kalle Valo <kvalo@codeaurora.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Nickhu authored
[ Upstream commit 4c3d6174 ] When the ftrace and frame pointer kernel config options are both selected, the kernel's compiler options become incompatible. Error message: nds32le-linux-gcc: error: -pg and -fomit-frame-pointer are incompatible Signed-off-by:
Nickhu <nickhu@andestech.com> Signed-off-by:
Zong Li <zong@andestech.com> Acked-by:
Greentime Hu <greentime@andestech.com> Signed-off-by:
Greentime Hu <greentime@andestech.com> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Steve Longerbeam authored
[ Upstream commit 819bec35 ] Prevent possible race by parallel threads between ipu_image_convert_run() and ipu_image_convert_unprepare(). This involves setting ctx->aborting to true unconditionally so that no new job runs can be queued during unprepare, and holding the ctx->aborting flag until the context is freed. Note that the "normal" ipu_image_convert_abort() case (e.g. not during context unprepare) should clear the ctx->aborting flag after aborting any active run and clearing the context's pending queue. This is because it should be possible to continue to use the conversion context and queue more runs after an abort. Signed-off-by:
Steve Longerbeam <slongerbeam@gmail.com> Tested-by:
Philipp Zabel <p.zabel@pengutronix.de> Signed-off-by:
Philipp Zabel <p.zabel@pengutronix.de> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Long Li authored
[ Upstream commit b8259219 ] If the number of NUMA nodes exceeds the number of MSI/MSI-X interrupts which are allocated for a device, the interrupt affinity spreading code fails to spread them across all nodes. The reason is, that the spreading code starts from node 0 and continues up to the number of interrupts requested for allocation. This leaves the nodes past the last interrupt unused. This results in interrupt concentration on the first nodes which violates the assumption of the block layer that all nodes are covered evenly. As a consequence the NUMA nodes above the number of interrupts are all assigned to hardware queue 0 and therefore NUMA node 0, which results in bad performance and has CPU hotplug implications, because queue 0 gets shut down when the last CPU of node 0 is offlined. Go over all NUMA nodes and assign them round-robin to all requested interrupts to solve this. [ tglx: Massaged changelog ] Signed-off-by:
Long Li <longli@microsoft.com> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Reviewed-by:
Ming Lei <ming.lei@redhat.com> Cc: Michael Kelley <mikelley@microsoft.com> Link: https://lkml.kernel.org/r/20181102180248.13583-1-longli@linuxonhyperv.com Signed-off-by:
Sasha Levin <sashal@kernel.org>
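The round-robin idea reduces to the loop below, shown as a standalone C sketch rather than the actual irq affinity-spreading code:

    #include <stdio.h>

    int main(void)
    {
        int nr_numa_nodes = 8;   /* example: more NUMA nodes... */
        int nvecs = 3;           /* ...than allocated interrupt vectors */

        /* Every node gets a vector; nodes beyond nvecs wrap around instead
         * of all piling onto vector 0 (and therefore hardware queue 0). */
        for (int node = 0; node < nr_numa_nodes; node++)
            printf("NUMA node %d -> vector %d\n", node, node % nvecs);

        return 0;
    }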
-
Jernej Skrabec authored
[ Upstream commit c96d6221 ] It turns out that TCON TOP registers in H6 SoC have non-zero reset value. This may cause issues if bits are not changed during configuration. To prevent that, initialize registers to 0. Signed-off-by:
Jernej Skrabec <jernej.skrabec@siol.net> Signed-off-by:
Maxime Ripard <maxime.ripard@bootlin.com> Link: https://patchwork.freedesktop.org/patch/msgid/20181104182705.18047-24-jernej.skrabec@siol.net Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Muchun Song authored
[ Upstream commit 18534df4 ] gpiod_request_commit() copies the pointer to the label passed as an argument only to be used later. But there's a chance the caller could immediately free the passed string (e.g., a local variable). This could trigger a use-after-free when we later use the GPIO label (e.g., in gpiochip_unlock_as_irq(), gpiochip_is_requested()). To be on the safe side, duplicate the string with kstrdup_const() so that if an unaware user passes the address of a stack-allocated buffer, we won't end up with an arbitrary label. Also fix gpiod_set_consumer_name(). Signed-off-by:
Muchun Song <smuchun@gmail.com> Signed-off-by:
Linus Walleij <linus.walleij@linaro.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
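The fix pattern, sketched (not the literal gpiolib diff; the surrounding code is simplified):

    /* Before: desc->label = label; kept the caller's pointer, which may point
     * at a stack buffer that is gone by the time the label is read back. */
    const char *copy = kstrdup_const(label, GFP_KERNEL);
    if (!copy)
        return -ENOMEM;
    desc->label = copy;

    /* The matching release path then frees it with kfree_const(desc->label),
     * which is a no-op for compile-time constant strings. */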
-
Arnd Bergmann authored
[ Upstream commit 1539c7f2 ] Randconfig testing revealed a very old bug, with gcc-8: sound/soc/intel/atom/sst/sst_loader.c: In function 'sst_load_fw': sound/soc/intel/atom/sst/sst_loader.c:357:5: error: 'fw' may be used uninitialized in this function [-Werror=maybe-uninitialized] if (fw == NULL) { ^ sound/soc/intel/atom/sst/sst_loader.c:354:25: note: 'fw' was declared here const struct firmware *fw; We must check the return code of request_firmware() before we look at the pointer result that may be uninitialized when the function fails. Fixes: 9012c954 ("ASoC: Intel: mrfld - Add DSP load and management") Signed-off-by:
Arnd Bergmann <arnd@arndb.de> Acked-by:
Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> Signed-off-by:
Mark Brown <broonie@kernel.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
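The corrected calling pattern (a sketch, not the exact sst_load_fw() hunk; load_firmware_image() is a hypothetical consumer):

    const struct firmware *fw;
    int ret;

    ret = request_firmware(&fw, firmware_name, dev);
    if (ret) {
        dev_err(dev, "request_firmware failed: %d\n", ret);
        return ret;            /* 'fw' must not be inspected on failure */
    }

    /* Only now is 'fw' guaranteed to be initialized and non-NULL. */
    ret = load_firmware_image(fw);
    release_firmware(fw);
    return ret;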
-
Lukas Wunner authored
[ Upstream commit 3c7b30f7 ] The BCM2835 pinctrl driver acquires a spinlock in its ->irq_enable, ->irq_disable and ->irq_set_type callbacks. Spinlocks become sleeping locks with CONFIG_PREEMPT_RT_FULL=y, therefore invocation of one of the callbacks in atomic context may cause a hard lockup if at least two GPIO pins in the same bank are used as interrupts. The issue doesn't occur with just a single interrupt pin per bank because the lock is never contended. I'm experiencing such lockups with GPIO 8 and 28 used as level-triggered interrupts, i.e. with ->irq_disable being invoked on reception of every IRQ. The critical section protected by the spinlock is very small (one bitop and one RMW of an MMIO register), hence converting to a raw spinlock seems a better trade-off than converting the driver to threaded IRQ handling (which would increase latency to handle an interrupt). Cc: Mathias Duckeck <m.duckeck@kunbus.de> Signed-off-by:
Lukas Wunner <lukas@wunner.de> Acked-by:
Julia Cartwright <julia@ni.com> Signed-off-by:
Linus Walleij <linus.walleij@linaro.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
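The conversion itself is mechanical; sketched below with the driver structure simplified (the real pinctrl-bcm2835.c retrieves its state differently):

    /* pc->lock is now a raw_spinlock_t (initialized with raw_spin_lock_init()
     * at probe time), so it never sleeps, even with CONFIG_PREEMPT_RT_FULL=y. */
    static void bcm2835_gpio_irq_enable(struct irq_data *data)
    {
        struct bcm2835_pinctrl *pc = irq_data_get_irq_chip_data(data);
        unsigned long flags;

        raw_spin_lock_irqsave(&pc->lock, flags);
        /* one bitop plus one MMIO read-modify-write: a very short section */
        raw_spin_unlock_irqrestore(&pc->lock, flags);
    }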
-
Deepak Sharma authored
[ Upstream commit d5c04dff ] Modify vgem_init to take platform dev as parent in drm_dev_init. This will make drm device available at "/sys/devices/platform/vgem" in x86 chromebook. v2: rebase, address checkpatch typo and line over 80 characters Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by:
Deepak Sharma <deepak.sharma@amd.com> Reviewed-by:
Sean Paul <seanpaul@chromium.org> Signed-off-by:
Emil Velikov <emil.velikov@collabora.com> Reviewed-by:
Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20181023163550.15211-1-emil.l.velikov@gmail.com Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Slawomir Stepien authored
[ Upstream commit 0559ef7f ] Inside __ad7280_read32(), the spi_sync_transfer() can fail with negative error code. This change will ensure that this error is being passed up in the call stack, so it can be handled. Signed-off-by:
Slawomir Stepien <sst@poczta.fm> Signed-off-by:
Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by:
Sasha Levin <sashal@kernel.org>
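The propagation fix follows the usual shape (a sketch; the ad7280_state field names here are assumptions, not the driver's actual layout):

    static int __ad7280_read32(struct ad7280_state *st, unsigned int *val)
    {
        int ret;
        struct spi_transfer t = {
            .tx_buf = &st->tx,
            .rx_buf = &st->rx,
            .len    = 4,
        };

        ret = spi_sync_transfer(st->spi, &t, 1);
        if (ret)
            return ret;     /* previously a negative error here was ignored */

        *val = be32_to_cpu(st->rx);
        return 0;
    }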
-
Gustavo A. R. Silva authored
[ Upstream commit a3780509 ] idx can be indirectly controlled by user-space, hence leading to a potential exploitation of the Spectre variant 1 vulnerability. This issue was detected with the help of Smatch: drivers/gpu/drm/drm_bufs.c:1420 drm_legacy_freebufs() warn: potential spectre issue 'dma->buflist' [r] (local cap) Fix this by sanitizing idx before using it to index dma->buflist. Notice that, given that speculation windows are large, the policy is to kill the speculation on the first load and not worry whether it can be completed with a dependent load/store [1]. [1] https://marc.info/?l=linux-kernel&m=152449131114778&w=2 Signed-off-by:
Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by:
Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20181016095549.GA23586@embeddedor.com Signed-off-by:
Sasha Levin <sashal@kernel.org>
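The sanitization uses the kernel's standard helper; a sketch of the pattern (the surrounding drm_legacy_freebufs() code is simplified):

    #include <linux/nospec.h>

    if (idx >= dma->buf_count)
        return -EINVAL;

    /* Clamp the user-controlled index on the first load so speculative
     * execution cannot use an out-of-bounds value. */
    idx = array_index_nospec(idx, dma->buf_count);
    buf = dma->buflist[idx];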
-
Alexey Brodkin authored
commit a66d9724 upstream. Initially we bumped into a problem with 32-bit aligned atomic64_t on ARC, see [1]. And then during a quite lengthy discussion Peter Z. mentioned ARCH_KMALLOC_MINALIGN, which IMHO makes perfect sense. If the allocation is done by plain kmalloc(), the obtained buffer will be ARCH_KMALLOC_MINALIGN aligned, so why should a buffer obtained via devm_kmalloc() have any other alignment? This way we at least get the same behavior for both types of allocation. [1] http://lists.infradead.org/pipermail/linux-snps-arc/2018-July/004009.html [2] http://lists.infradead.org/pipermail/linux-snps-arc/2018-July/004036.html Signed-off-by:
Alexey Brodkin <abrodkin@synopsys.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: David Laight <David.Laight@ACULAB.COM> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Greg KH <greg@kroah.com> Cc: <stable@vger.kernel.org> # 4.8+ Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 06 Feb, 2019 16 commits
-
-
Greg Kroah-Hartman authored
-
Paulo Alcantara authored
commit 28eb24ff upstream. In case a hostname resolves to a different IP address (e.g. long running mounts), make sure to resolve it every time prior to calling generic_ip_connect() in reconnect. Suggested-by:
Steve French <stfrench@microsoft.com> Signed-off-by:
Paulo Alcantara <palcantara@suse.de> Signed-off-by:
Steve French <stfrench@microsoft.com> Signed-off-by:
Pavel Shilovsky <pshilov@microsoft.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alexei Naberezhnov authored
commit 483cbbed upstream. This fixes the case when md array assembly fails because raid cache recovery is unable to allocate a stripe, despite attempts to replay stripes and increase the cache size. This happens because stripes released by r5c_recovery_replay_stripes and raid5_set_cache_size don't become available for allocation immediately. Released stripes are first placed on the conf->released_stripes list and require the md thread to merge them onto conf->inactive_list before they can be allocated. The patch allows the final allocation attempt during cache recovery to wait for new stripes to become available for allocation. Cc: linux-raid@vger.kernel.org Cc: Shaohua Li <shli@kernel.org> Cc: linux-stable <stable@vger.kernel.org> # 4.10+ Fixes: b4c625c6 ("md/r5cache: r5cache recovery: part 1") Signed-off-by:
Alexei Naberezhnov <anaberezhnov@fb.com> Signed-off-by:
Song Liu <songliubraving@fb.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Frank Rowand authored
commit 8814dc46 upstream. When allocating a new node, add_changeset_node() was duplicating the properties from the respective node in the overlay instead of allocating a node with no properties. When this patch is applied the errors reported by the devicetree unittest from patch "of: overlay: add tests to validate kfrees from overlay removal" will no longer occur. These error messages are of the form: "OF: ERROR: ..." and the unittest results will change from: ### dt-test ### end of unittest - 203 passed, 7 failed to ### dt-test ### end of unittest - 210 passed, 0 failed Tested-by:
Alan Tull <atull@kernel.org> Signed-off-by:
Frank Rowand <frank.rowand@sony.com> Cc: Guenter Roeck <linux@roeck-us.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Frank Rowand authored
commit 6b4955ba upstream. The changeset entry 'update property' was used for new properties in an overlay instead of 'add property'. The decision of whether to use 'update property' was based on whether the property already exists in the subtree into which the node is being spliced. At the top level of creating a changeset describing the overlay, the target node is in the live devicetree, so checking whether the property exists in the target node returns the correct result. As soon as the changeset creation algorithm recurses into a new node, the target is no longer in the live devicetree, but is instead in the detached overlay tree, thus all properties are incorrectly found to already exist in the target. This fix will expose another devicetree bug that will be fixed in the following patch in the series. When this patch is applied the errors reported by the devicetree unittest will change, and the unittest results will change from: ### dt-test ### end of unittest - 210 passed, 0 failed to ### dt-test ### end of unittest - 203 passed, 7 failed Tested-by:
Alan Tull <atull@kernel.org> Signed-off-by:
Frank Rowand <frank.rowand@sony.com> Cc: Guenter Roeck <linux@roeck-us.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Frank Rowand authored
commit 5b2c2f5a upstream. There is a matching of_node_put() in __of_detach_node_sysfs(). Remove the misleading comment from the function header comment for of_detach_node(). This patch may result in memory leaks from code that calls the dynamic node add and delete functions directly instead of using changesets. On powerpc systems that dynamically allocate a node and later deallocate it, this commit introduces a memory leak when the node is deallocated. The next commit will fix the leak. Tested-by:
Alan Tull <atull@kernel.org> Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc) Signed-off-by:
Frank Rowand <frank.rowand@sony.com> Cc: Guenter Roeck <linux@roeck-us.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Frank Rowand authored
commit 144552c7 upstream. Add checks: - attempted kfree due to refcount reaching zero before overlay is removed - properties linked to an overlay node when the node is removed - node refcount > one during node removal in a changeset destroy, if the node was created by the changeset After applying this patch, several validation warnings will be reported from the devicetree unittest during boot due to pre-existing devicetree bugs. The warnings will be similar to: OF: ERROR: of_node_release(), unexpected properties in /testcase-data/overlay-node/test-bus/test-unittest11 OF: ERROR: memory leak, expected refcount 1 instead of 2, of_node_get()/of_node_put() unbalanced - destroy cset entry: attach overlay node /testcase-data-2/substation@100/ hvac-medium-2 Tested-by:
Alan Tull <atull@kernel.org> Signed-off-by:
Frank Rowand <frank.rowand@sony.com> Cc: Guenter Roeck <linux@roeck-us.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Rob Herring authored
commit a613b26a upstream. In preparation to remove the node name pointer from struct device_node, convert printf users to use the %pOFn format specifier. Reviewed-by:
Frank Rowand <frank.rowand@sony.com> Cc: Andrew Lunn <andrew@lunn.ch> Cc: Florian Fainelli <f.fainelli@gmail.com> Cc: netdev@vger.kernel.org Signed-off-by:
Rob Herring <robh@kernel.org> Cc: Guenter Roeck <linux@roeck-us.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
David Hildenbrand authored
commit e0a352fa upstream. We had a race in the old balloon compaction code before b1123ea6 ("mm: balloon: use general non-lru movable page feature") refactored it that became visible after backporting 195a8c43 ("virtio-balloon: deflate via a page list") without the refactoring. The bug existed from commit d6d86c0a ("mm/balloon_compaction: redesign ballooned pages management") till b1123ea6 ("mm: balloon: use general non-lru movable page feature"). d6d86c0a ("mm/balloon_compaction: redesign ballooned pages management") was backported to 3.12, so the broken kernels are stable kernels [3.12 - 4.7]. There was a subtle race between dropping the page lock of the newpage in __unmap_and_move() and checking for __is_movable_balloon_page(newpage). Just after dropping this page lock, virtio-balloon could go ahead and deflate the newpage, effectively dequeueing it and clearing PageBalloon, in turn making __is_movable_balloon_page(newpage) fail. This resulted in dropping the reference of the newpage via putback_lru_page(newpage) instead of put_page(newpage), leading to page->lru getting modified and a !LRU page ending up in the LRU lists. With 195a8c43 ("virtio-balloon: deflate via a page list") backported, one would suddenly get corrupted lists in release_pages_balloon(): - WARNING: CPU: 13 PID: 6586 at lib/list_debug.c:59 __list_del_entry+0xa1/0xd0 - list_del corruption. prev->next should be ffffe253961090a0, but was dead000000000100 Nowadays this race is no longer possible, but it is hidden behind very ugly handling of __ClearPageMovable() and __PageMovable(). __ClearPageMovable() will not make __PageMovable() fail, only PageMovable(). So the new check (__PageMovable(newpage)) will still hold even after newpage was dequeued by virtio-balloon. If anybody would ever change that special handling, the BUG would be introduced again. So instead, make it explicit and use the information of the original isolated page before migration. This patch can be backported fairly easy to stable kernels (in contrast to the refactoring). Link: http://lkml.kernel.org/r/20190129233217.10747-1-david@redhat.com Fixes: d6d86c0a ("mm/balloon_compaction: redesign ballooned pages management") Signed-off-by:
David Hildenbrand <david@redhat.com> Reported-by:
Vratislav Bendel <vbendel@redhat.com> Acked-by:
Michal Hocko <mhocko@suse.com> Acked-by:
Rafael Aquini <aquini@redhat.com> Cc: Mel Gorman <mgorman@techsingularity.net> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: Jan Kara <jack@suse.cz> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Dominik Brodowski <linux@dominikbrodowski.net> Cc: Matthew Wilcox <willy@infradead.org> Cc: Vratislav Bendel <vbendel@redhat.com> Cc: Rafael Aquini <aquini@redhat.com> Cc: Konstantin Khlebnikov <k.khlebnikov@samsung.com> Cc: Minchan Kim <minchan@kernel.org> Cc: <stable@vger.kernel.org> [3.12 - 4.7] Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Naoya Horiguchi authored
commit 6376360e upstream. Currently memory_failure() is racy against process exit, which results in a kernel crash by NULL pointer dereference. The root cause is that memory_failure() uses force_sig() to forcibly kill asynchronous (meaning not in the current context) processes. As discussed in thread https://lkml.org/lkml/2010/6/8/236 years ago for OOM fixes, this is not the right thing to do. OOM solves this issue by using do_send_sig_info() as done in commit d2d39309 ("signal: oom_kill_task: use SEND_SIG_FORCED instead of force_sig()"), so this patch is suggesting to do the same for hwpoison. do_send_sig_info() properly accesses siglock with lock_task_sighand(), so it is free from the reported race. I confirmed that the reported bug reproduces when inserting some delay in kill_procs(), and it never reproduces with this patch. Note that memory_failure() can send another type of signal using force_sig_mceerr(), and the reported race shouldn't happen with it because force_sig_mceerr() is called only for synchronous processes (i.e. BUS_MCEERR_AR happens only when some process accesses the corrupted memory). Link: http://lkml.kernel.org/r/20190116093046.GA29835@hori1.linux.bs1.fc.nec.co.jp Signed-off-by:
Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Reported-by:
Jane Chu <jane.chu@oracle.com> Reviewed-by:
Dan Williams <dan.j.williams@intel.com> Reviewed-by:
William Kucharski <william.kucharski@oracle.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: <stable@vger.kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Shakeel Butt authored
commit cefc7ef3 upstream. Syzbot instance running on upstream kernel found a use-after-free bug in oom_kill_process. On further inspection it seems like the process selected to be oom-killed has exited even before reaching read_lock(&tasklist_lock) in oom_kill_process(). More specifically the tsk->usage is 1 which is due to get_task_struct() in oom_evaluate_task() and the put_task_struct within for_each_thread() frees the tsk and for_each_thread() tries to access the tsk. The easiest fix is to do get/put across the for_each_thread() on the selected task. Now the next question is should we continue with the oom-kill as the previously selected task has exited? However before adding more complexity and heuristics, let's answer why we even look at the children of oom-kill selected task? The select_bad_process() has already selected the worst process in the system/memcg. Due to race, the selected process might not be the worst at the kill time but does that matter? The userspace can use the oom_score_adj interface to prefer children to be killed before the parent. I looked at the history but it seems like this is there before git history. Link: http://lkml.kernel.org/r/20190121215850.221745-1-shakeelb@google.com Reported-by: syzbot+7fbbfa368521945f0e3d@syzkaller.appspotmail.com Fixes: 6b0c81b3 ("mm, oom: reduce dependency on tasklist_lock") Signed-off-by:
Shakeel Butt <shakeelb@google.com> Reviewed-by:
Roman Gushchin <guro@fb.com> Acked-by:
Michal Hocko <mhocko@suse.com> Cc: David Rientjes <rientjes@google.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp> Cc: <stable@vger.kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
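The get/put fix follows this shape (a sketch of the pattern, not the literal oom_kill_process() hunk):

    struct task_struct *t;

    /* Pin the selected victim so the thread-list walk below cannot race
     * with the task exiting and being freed. */
    get_task_struct(p);

    read_lock(&tasklist_lock);
    for_each_thread(p, t) {
        /* look for a child that can be sacrificed instead of 'p' ... */
    }
    read_unlock(&tasklist_lock);

    put_task_struct(p);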
-
Oscar Salvador authored
commit eeb0efd0 upstream. This is the same sort of error we saw in commit 17e2e7d7 ("mm, page_alloc: fix has_unmovable_pages for HugePages"). Gigantic hugepages cross several memblocks, so it can be that the page we get in scan_movable_pages() is a page-tail belonging to a 1G-hugepage. If that happens, page_hstate()->size_to_hstate() will return NULL, and we will blow up in hugepage_migration_supported(). The splat is as follows: BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 #PF error: [normal kernel read fault] PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 1 PID: 1350 Comm: bash Tainted: G E 5.0.0-rc1-mm1-1-default+ #27 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.0.0-prebuilt.qemu-project.org 04/01/2014 RIP: 0010:__offline_pages+0x6ae/0x900 Call Trace: memory_subsys_offline+0x42/0x60 device_offline+0x80/0xa0 state_store+0xab/0xc0 kernfs_fop_write+0x102/0x180 __vfs_write+0x26/0x190 vfs_write+0xad/0x1b0 ksys_write+0x42/0x90 do_syscall_64+0x5b/0x180 entry_SYSCALL_64_after_hwframe+0x44/0xa9 Modules linked in: af_packet(E) xt_tcpudp(E) ipt_REJECT(E) xt_conntrack(E) nf_conntrack(E) nf_defrag_ipv4(E) ip_set(E) nfnetlink(E) ebtable_nat(E) ebtable_broute(E) bridge(E) stp(E) llc(E) iptable_mangle(E) iptable_raw(E) iptable_security(E) ebtable_filter(E) ebtables(E) iptable_filter(E) ip_tables(E) x_tables(E) kvm_intel(E) kvm(E) irqbypass(E) crct10dif_pclmul(E) crc32_pclmul(E) ghash_clmulni_intel(E) bochs_drm(E) ttm(E) aesni_intel(E) drm_kms_helper(E) aes_x86_64(E) crypto_simd(E) cryptd(E) glue_helper(E) drm(E) virtio_net(E) syscopyarea(E) sysfillrect(E) net_failover(E) sysimgblt(E) pcspkr(E) failover(E) i2c_piix4(E) fb_sys_fops(E) parport_pc(E) parport(E) button(E) btrfs(E) libcrc32c(E) xor(E) zstd_decompress(E) zstd_compress(E) xxhash(E) raid6_pq(E) sd_mod(E) ata_generic(E) ata_piix(E) ahci(E) libahci(E) libata(E) crc32c_intel(E) serio_raw(E) virtio_pci(E) virtio_ring(E) virtio(E) sg(E) scsi_mod(E) autofs4(E) [akpm@linux-foundation.org: fix brace layout, per David. Reduce indentation] Link: http://lkml.kernel.org/r/20190122154407.18417-1-osalvador@suse.de Signed-off-by:
Oscar Salvador <osalvador@suse.de> Reviewed-by:
Anthony Yznaga <anthony.yznaga@oracle.com> Acked-by:
Michal Hocko <mhocko@suse.com> Reviewed-by:
David Hildenbrand <david@redhat.com> Cc: <stable@vger.kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Tetsuo Handa authored
commit 9bcdeb51 upstream. Arkadiusz reported that enabling memcg's group oom killing causes strange memcg statistics where there is no task in a memcg despite the number of tasks in that memcg is not 0. It turned out that there is a bug in wake_oom_reaper() which allows enqueuing same task twice which makes impossible to decrease the number of tasks in that memcg due to a refcount leak. This bug existed since the OOM reaper became invokable from task_will_free_mem(current) path in out_of_memory() in Linux 4.7, T1@P1 |T2@P1 |T3@P1 |OOM reaper ----------+----------+----------+------------ # Processing an OOM victim in a different memcg domain. try_charge() mem_cgroup_out_of_memory() mutex_lock(&oom_lock) try_charge() mem_cgroup_out_of_memory() mutex_lock(&oom_lock) try_charge() mem_cgroup_out_of_memory() mutex_lock(&oom_lock) out_of_memory() oom_kill_process(P1) do_send_sig_info(SIGKILL, @P1) mark_oom_victim(T1@P1) wake_oom_reaper(T1@P1) # T1@P1 is enqueued. mutex_unlock(&oom_lock) out_of_memory() mark_oom_victim(T2@P1) wake_oom_reaper(T2@P1) # T2@P1 is enqueued. mutex_unlock(&oom_lock) out_of_memory() mark_oom_victim(T1@P1) wake_oom_reaper(T1@P1) # T1@P1 is enqueued again due to oom_reaper_list == T2@P1 && T1@P1->oom_reaper_list == NULL. mutex_unlock(&oom_lock) # Completed processing an OOM victim in a different memcg domain. spin_lock(&oom_reaper_lock) # T1P1 is dequeued. spin_unlock(&oom_reaper_lock) but memcg's group oom killing made it easier to trigger this bug by calling wake_oom_reaper() on the same task from one out_of_memory() request. Fix this bug using an approach used by commit 855b0183 ("oom, oom_reaper: disable oom_reaper for oom_kill_allocating_task"). As a side effect of this patch, this patch also avoids enqueuing multiple threads sharing memory via task_will_free_mem(current) path. Link: http://lkml.kernel.org/r/e865a044-2c10-9858-f4ef-254bc71d6cc2@i-love.sakura.ne.jp Link: http://lkml.kernel.org/r/5ee34fc6-1485-34f8-8790-903ddabaa809@i-love.sakura.ne.jp Fixes: af8e15cc ("oom, oom_reaper: do not enqueue task if it is on the oom_reaper_list head") Signed-off-by:
Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Reported-by:
Arkadiusz Miskiewicz <arekm@maven.pl> Tested-by:
Arkadiusz Miskiewicz <arekm@maven.pl> Acked-by:
Michal Hocko <mhocko@suse.com> Acked-by:
Roman Gushchin <guro@fb.com> Cc: Tejun Heo <tj@kernel.org> Cc: Aleksa Sarai <asarai@suse.de> Cc: Jay Kamat <jgkamat@fb.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: <stable@vger.kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Andrea Arcangeli authored
commit 1ac25013 upstream. hugetlb needs the same fix as faultin_nopage (which was applied in commit 96312e61 ("mm/gup.c: teach get_user_pages_unlocked to handle FOLL_NOWAIT")) or KVM hangs because it thinks the mmap_sem was already released by hugetlb_fault() if it returned VM_FAULT_RETRY, but it wasn't in the FOLL_NOWAIT case. Link: http://lkml.kernel.org/r/20190109020203.26669-2-aarcange@redhat.com Fixes: ce53053c ("kvm: switch get_user_page_nowait() to get_user_pages_unlocked()") Signed-off-by:
Andrea Arcangeli <aarcange@redhat.com> Tested-by:
"Dr. David Alan Gilbert" <dgilbert@redhat.com> Reported-by:
"Dr. David Alan Gilbert" <dgilbert@redhat.com> Reviewed-by:
Mike Kravetz <mike.kravetz@oracle.com> Reviewed-by:
Peter Xu <peterx@redhat.com> Cc: Mike Rapoport <rppt@linux.vnet.ibm.com> Cc: <stable@vger.kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Andrei Vagin authored
commit 8fb335e0 upstream. Currently, exit_ptrace() adds all ptraced tasks in a dead list, then zap_pid_ns_processes() waits on all tasks in a current pidns, and only then are tasks from the dead list released. zap_pid_ns_processes() can get stuck on waiting tasks from the dead list. In this case, we will have one unkillable process with one or more dead children. Thanks to Oleg for the advice to release tasks in find_child_reaper(). Link: http://lkml.kernel.org/r/20190110175200.12442-1-avagin@gmail.com Fixes: 7c8bd232 ("exit: ptrace: shift "reap dead" code from exit_ptrace() to forget_original_parent()") Signed-off-by:
Andrei Vagin <avagin@gmail.com> Signed-off-by:
Oleg Nesterov <oleg@redhat.com> Cc: "Eric W. Biederman" <ebiederm@xmission.com> Cc: <stable@vger.kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Eric W. Biederman authored
commit 532b618b upstream. The subvol_name is allocated in btrfs_parse_subvol_options and is consumed and freed in mount_subvol. Add a free to the error paths that don't call mount_subvol so that it is guaranteed that subvol_name is freed when an error happens. Fixes: 312c89fb ("btrfs: cleanup btrfs_mount() using btrfs_mount_root()") Cc: stable@vger.kernel.org # v4.19+ Reviewed-by:
Nikolay Borisov <nborisov@suse.com> Signed-off-by:
"Eric W. Biederman" <ebiederm@xmission.com> Reviewed-by:
David Sterba <dsterba@suse.com> Signed-off-by:
David Sterba <dsterba@suse.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
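The shape of the fix, sketched (error paths abbreviated; the mount_subvol() arguments are approximated, not the literal btrfs_mount() diff):

    /* subvol_name was allocated in btrfs_parse_subvol_options(); on success it
     * is handed to mount_subvol(), which frees it.  Error returns before that
     * point now free it explicitly instead of leaking it. */
    mnt_root = vfs_kern_mount(&btrfs_root_fs_type, flags, device_name, newargs);
    if (IS_ERR(mnt_root)) {
        root = ERR_CAST(mnt_root);
        kfree(subvol_name);                  /* added by this fix */
        goto out;
    }

    root = mount_subvol(subvol_name, subvol_objectid, device_name, mnt_root);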
-