- 26 Jul, 2019 10 commits
-
-
Gustavo A. R. Silva authored
[ Upstream commit bfabdd69 ] Notice that *rc* can evaluate to up to 5; see include/linux/netdevice.h: enum gro_result { GRO_MERGED, GRO_MERGED_FREE, GRO_HELD, GRO_NORMAL, GRO_DROP, GRO_CONSUMED, }; typedef enum gro_result gro_result_t; In case *rc* evaluates to 5, we end up having an out-of-bounds read at drivers/net/wireless/ath/wil6210/txrx.c:821: wil_dbg_txrx(wil, "Rx complete %d bytes => %s\n", len, gro_res_str[rc]); Fix this by adding element "GRO_CONSUMED" to array gro_res_str. Addresses-Coverity-ID: 1444666 ("Out-of-bounds read") Fixes: 194b482b ("wil6210: Debug print GRO Rx result") Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Reviewed-by: Maya Erez <merez@codeaurora.org> Signed-off-by: Kalle Valo <kvalo@codeaurora.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
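A minimal sketch of the resulting lookup table (only the enum comes from the commit message; the surrounding driver code is assumed): every gro_result value, including GRO_CONSUMED, now has a matching string, so gro_res_str[rc] stays in bounds for rc == 5.

```c
#include <linux/netdevice.h>	/* enum gro_result */

/* Sketch of the fixed table in drivers/net/wireless/ath/wil6210/txrx.c */
static const char * const gro_res_str[] = {
	[GRO_MERGED]		= "GRO_MERGED",
	[GRO_MERGED_FREE]	= "GRO_MERGED_FREE",
	[GRO_HELD]		= "GRO_HELD",
	[GRO_NORMAL]		= "GRO_NORMAL",
	[GRO_DROP]		= "GRO_DROP",
	[GRO_CONSUMED]		= "GRO_CONSUMED",	/* the missing element */
};
```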
-
Toke Høiland-Jørgensen authored
[ Upstream commit 389b72e5 ] As already noted in a comment in ath_tx_complete_aggr(), the hardware will occasionally send a TX status with the wrong tid number. If we trust the value, airtime usage will be reported to the wrong AC, which can cause the deficit on that AC to become very low, blocking subsequent attempts to transmit. To fix this, account airtime usage to the TID number from the original skb, instead of the one in the hardware TX status report. Reported-by: Miguel Catalan Cid <miguel.catalan@i2cat.net> Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: Kalle Valo <kvalo@codeaurora.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Pradeep kumar Chitrapu authored
[ Upstream commit 93ee3d10 ] An invalid rate code is sent to the firmware when a multicast rate value of 0 is sent to the driver to indicate the disabled case, causing a broken mesh path, so fix that. Tested on QCA9984 with firmware 10.4-3.6.1-00827. Sven tested on IPQ4019 with 10.4-3.5.3-00057 and QCA9888 with 10.4-3.5.3-00053 (ath10k-firmware) and 10.4-3.6-00140 (linux-firmware 2018-12-16-211de167). Fixes: cd93b83a ("ath10k: support for multicast rate control") Co-developed-by: Zhi Chen <zhichen@codeaurora.org> Signed-off-by: Zhi Chen <zhichen@codeaurora.org> Signed-off-by: Pradeep Kumar Chitrapu <pradeepc@codeaurora.org> Tested-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Kalle Valo <kvalo@codeaurora.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Alagu Sankar authored
[ Upstream commit e2a6b711 ] HTT High Latency (ATH10K_DEV_TYPE_HL) does not use txdone_fifo at all; we don't even initialise it, as ath10k_htt_tx_alloc_buf() is skipped in ath10k_htt_tx_start(). Because of this, with QCA6174 SDIO, ath10k_htt_rx_tx_compl_ind() will crash when it accesses the uninitialised txdone_fifo. So skip txdone_fifo when using High Latency mode. Tested with QCA6174 SDIO with firmware WLAN.RMH.4.4.1-00007-QCARMSWP-1. Co-developed-by: Wen Gong <wgong@codeaurora.org> Signed-off-by: Alagu Sankar <alagusankar@silex-india.com> Signed-off-by: Wen Gong <wgong@codeaurora.org> Signed-off-by: Kalle Valo <kvalo@codeaurora.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Yingying Tang authored
[ Upstream commit 9e7251fa ] tx_stats will be freed and set to NULL before the debugfs_sta node is removed in the station disconnection process. So if the debugfs_sta node is read at that point, there may be a NULL pointer error. Add a check for tx_stats before using it to resolve this issue. Signed-off-by: Yingying Tang <yintang@codeaurora.org> Signed-off-by: Kalle Valo <kvalo@codeaurora.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
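A hedged sketch of the added guard (function context, lock and field names are assumptions based on ath10k's debugfs_sta code): the read handler simply bails out when the stats buffer has already been torn down by the disconnect path.

```c
	/* Sketch: early-out in the debugfs read path if the buffer is gone. */
	mutex_lock(&ar->conf_mutex);

	if (!arsta->tx_stats) {			/* freed during station disconnection */
		mutex_unlock(&ar->conf_mutex);
		return -ENOENT;
	}

	/* ... safe to format arsta->tx_stats into the output buffer ... */
```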
-
Sven Van Asbroeck authored
[ Upstream commit 2b8066c3 ] If probe() fails anywhere beyond the point where sdma_get_firmware() is called, then a kernel oops may occur. Problematic sequence of events: 1. probe() calls sdma_get_firmware(), which schedules the firmware callback to run when firmware becomes available, using the sdma instance structure as the context 2. probe() encounters an error, which deallocates the sdma instance structure 3. firmware becomes available, firmware callback is called with deallocated sdma instance structure 4. use after free - kernel oops ! Solution: only attempt to load firmware when we're certain that probe() will succeed. This guarantees that the firmware callback's context will remain valid. Note that the remove() path is unaffected by this issue: the firmware loader will increment the driver module's use count, ensuring that the module cannot be unloaded while the firmware callback is pending or running. Signed-off-by: Sven Van Asbroeck <TheSven73@gmail.com> Reviewed-by: Robin Gong <yibin.gong@nxp.com> [vkoul: fixed braces for if condition] Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
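A rough sketch of the reordering (symbol names such as pdata->fw_name and sdma_get_firmware() are taken from the commit message or assumed): firmware loading becomes the last step of probe(), after every failure point, so the async callback can no longer run against a freed sdma instance.

```c
	/* ... all earlier probe() steps that can fail have already succeeded ... */

	/*
	 * Only now request the firmware: nothing after this point can fail,
	 * so the completion callback always sees a valid sdma instance.
	 */
	if (pdata && pdata->fw_name) {
		ret = sdma_get_firmware(sdma, pdata->fw_name);
		if (ret)
			dev_warn(&pdev->dev, "failed to get firmware from platform data\n");
	}

	return 0;
```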
-
Maurizio Lombardi authored
[ Upstream commit 5dd6c493 ] If the CHAP_A value is not supported, the chap_server_open() function should free the auth_protocol pointer and set it to NULL, or we will leave a dangling pointer around. [ 66.010905] Unsupported CHAP_A value [ 66.011660] Security negotiation failed. [ 66.012443] iSCSI Login negotiation failed. [ 68.413924] general protection fault: 0000 [#1] SMP PTI [ 68.414962] CPU: 0 PID: 1562 Comm: targetcli Kdump: loaded Not tainted 4.18.0-80.el8.x86_64 #1 [ 68.416589] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 [ 68.417677] RIP: 0010:__kmalloc_track_caller+0xc2/0x210 Signed-off-by: Maurizio Lombardi <mlombard@redhat.com> Reviewed-by: Chris Leech <cleech@redhat.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
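The shape of the fix, sketched with assumed variable names: on an unsupported CHAP_A value the negotiated auth structure is freed and the pointer cleared, so no dangling pointer is left behind for later teardown code to trip over.

```c
	switch (digest_type) {
	case CHAP_DIGEST_MD5:
		/* ... continue building the MD5 challenge ... */
		break;
	default:
		pr_err("Unsupported CHAP_A value\n");
		kfree(conn->auth_protocol);	/* would otherwise dangle */
		conn->auth_protocol = NULL;
		return NULL;
	}
```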
-
Nathan Chancellor authored
[ Upstream commit aa69fb62 ] After r363059 and r363928 in LLVM, a build using ld.lld as the linker with CONFIG_RANDOMIZE_BASE enabled fails like so: ld.lld: error: relocation R_AARCH64_ABS32 cannot be used against symbol __efistub_stext_offset; recompile with -fPIC Fangrui and Peter figured out that ld.lld is incorrectly considering __efistub_stext_offset as a relative symbol because of the order in which symbols are evaluated. _text is treated as an absolute symbol and stext is a relative symbol, making __efistub_stext_offset a relative symbol. Adding ABSOLUTE will force ld.lld to evaluate this expression in the right context and does not change ld.bfd's behavior. ld.lld will need to be fixed but the developers do not see a quick or simple fix without some research (see the linked issue for further explanation). Add this simple workaround so that ld.lld can continue to link kernels. Link: https://github.com/ClangBuiltLinux/linux/issues/561 Link: https://github.com/llvm/llvm-project/commit/025a815d75d2356f2944136269aa5874721ec236 Link: https://github.com/llvm/llvm-project/commit/249fde85832c33f8b06c6b4ac65d1c4b96d23b83 Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Debugged-by: Fangrui Song <maskray@google.com> Debugged-by: Peter Smith <peter.smith@linaro.org> Suggested-by: Fangrui Song <maskray@google.com> Signed-off-by: Nathan Chancellor <natechancellor@gmail.com> [will: add comment] Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Kevin Darbyshire-Bryant authored
[ Upstream commit 1196364f ] calc_vmlinuz_load_addr.c requires SZ_64K to be defined for alignment purposes. It included "../../../../include/linux/sizes.h" to define that size, however "sizes.h" tries to include <linux/const.h> which assumes linux system headers. These may not exist, e.g. the following error was encountered when building Linux for OpenWrt under macOS: In file included from arch/mips/boot/compressed/calc_vmlinuz_load_addr.c:16: arch/mips/boot/compressed/../../../../include/linux/sizes.h:11:10: fatal error: 'linux/const.h' file not found ^~~~~~~~~~ Change makefile to force building against local linux headers instead of system headers. Also change the eye-watering relative reference in the include file spec. Thanks to Jo-Philipp Wich & Petr Štetiar for assistance in tracking this down & fixing. Suggested-by: Jo-Philipp Wich <jo@mein.io> Signed-off-by: Petr Štetiar <ynezz@true.cz> Signed-off-by: Kevin Darbyshire-Bryant <ldir@darbyshire-bryant.me.uk> Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Stefan Hellermann authored
[ Upstream commit db13a5ba ] While trying to get the uart with parity working, I found that setting even parity enabled odd parity instead. Fix the register settings to match the datasheet of AR9331. A similar patch was created by 8devices, but not sent upstream. https://github.com/8devices/openwrt-8devices/commit/77c5586ade3bb72cda010afad3f209ed0c98ea7c Signed-off-by: Stefan Hellermann <stefan@the2masters.de> Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Signed-off-by: Sasha Levin <sashal@kernel.org>
-
- 21 Jul, 2019 30 commits
-
-
Greg Kroah-Hartman authored
-
Jiri Slaby authored
[ Upstream commit 1cbec37b ] common_spurious is currently ENDed erroneously. common_interrupt is used in its ENDPROC. So fix this mistake. Found by my asm macros rewrite patchset. Fixes: f8a8fe61 ("x86/irq: Seperate unused system vectors from spurious entry again") Signed-off-by: Jiri Slaby <jslaby@suse.cz> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20190709063402.19847-1-jslaby@suse.cz Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Haren Myneni authored
commit e52d484d upstream. System gets checkstop if RxFIFO overruns with more requests than the maximum possible number of CRBs in FIFO at the same time. The max number of requests per window is controlled by window credits. So find max CRBs from FIFO size and set it to receive window credits. Fixes: b0d6c9ba ("crypto/nx: Add P9 NX support for 842 compression engine") CC: stable@vger.kernel.org # v4.14+ Signed-off-by: Haren Myneni <haren@us.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Christophe Leroy authored
commit 58cdbc6d upstream. On SEC1, hash provides wrong result when performing hashing in several steps with input data SG list has more than one element. This was detected with CONFIG_CRYPTO_MANAGER_EXTRA_TESTS: [ 44.185947] alg: hash: md5-talitos test failed (wrong result) on test vector 6, cfg="random: may_sleep use_finup src_divs=[<reimport>25.88%@+8063, <flush>24.19%@+9588, 28.63%@+16333, <reimport>4.60%@+6756, 16.70%@+16281] dst_divs=[71.61%@alignmask+16361, 14.36%@+7756, 14.3%@+" [ 44.325122] alg: hash: sha1-talitos test failed (wrong result) on test vector 3, cfg="random: inplace use_final src_divs=[<flush,nosimd>16.56%@+16378, <reimport>52.0%@+16329, 21.42%@alignmask+16380, 10.2%@alignmask+16380] iv_offset=39" [ 44.493500] alg: hash: sha224-talitos test failed (wrong result) on test vector 4, cfg="random: use_final nosimd src_divs=[<reimport>52.27%@+7401, <reimport>17.34%@+16285, <flush>17.71%@+26, 12.68%@+10644] iv_offset=43" [ 44.673262] alg: hash: sha256-talitos test failed (wrong result) on test vector 4, cfg="random: may_sleep use_finup src_divs=[<reimport>60.6%@+12790, 17.86%@+1329, <reimport>12.64%@alignmask+16300, 8.29%@+15, 0.40%@+13506, <reimport>0.51%@+16322, <reimport>0.24%@+16339] dst_divs" This is due to two issues: - We have an overlap between the buffer used for copying the input data (SEC1 doesn't do scatter/gather) and the chained descriptor. - Data copy is wrong when the previous hash left less than one blocksize of data to hash, implying a complement of the previous block with a few bytes from the new request. Fix it by: - Moving the second descriptor after the buffer, as moving the buffer after the descriptor would make it more complex for other cipher operations (AEAD, ABLKCIPHER) - Skip the bytes taken from the new request to complete the previous one by moving the SG list forward. Fixes: 37b5e889 ("crypto: talitos - chain in buffered data for ahash on SEC1") Cc: stable@vger.kernel.org Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Christophe Leroy authored
commit d44769e4 upstream. Moves struct talitos_edesc into talitos.h so that it can be used from any place in talitos.c It will be required for next patch ("crypto: talitos - fix hash on SEC1") Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Cc: stable@vger.kernel.org Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Julian Wiedmann authored
commit ac6639cd upstream. Current code sets the dsci to 0x00000080. Which doesn't make any sense, as the indicator area is located in the _left-most_ byte. Worse: if the dsci is the _shared_ indicator, this potentially clears the indication of activity for a _different_ device. tiqdio_thinint_handler() will then have no reason to call that device's IRQ handler, and the device ends up stalling. Fixes: d0c9d4a8 ("[S390] qdio: set correct bit in dsci") Cc: <stable@vger.kernel.org> Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Julian Wiedmann authored
commit e54e4785 upstream. When tiqdio_remove_input_queues() removes a queue from the tiq_list as part of qdio_shutdown(), it doesn't re-initialize the queue's list entry and the prev/next pointers go stale. If a subsequent qdio_establish() fails while sending the ESTABLISH cmd, it calls qdio_shutdown() again in QDIO_IRQ_STATE_ERR state and tiqdio_remove_input_queues() will attempt to remove the queue entry a second time. This dereferences the stale pointers, and bad things ensue. Fix this by re-initializing the list entry after removing it from the list. For good practice also initialize the list entry when the queue is first allocated, and remove the quirky checks that papered over this omission. Note that prior to commit e5218134 ("s390/qdio: fix access to uninitialized qdio_q fields"), these checks were bogus anyway. setup_queues_misc() clears the whole queue struct, and thus needs to re-init the prev/next pointers as well. Fixes: 779e6e1c ("[S390] qdio: new qdio driver.") Cc: <stable@vger.kernel.org> Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
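A small illustration of the pattern the fix applies (the struct name is a stand-in): after list_del() the entry still points into the old list, so re-initialising it makes an accidental second removal harmless.

```c
#include <linux/list.h>

struct tiq_entry {			/* stand-in for the queue's tiq_list linkage */
	struct list_head entry;
};

static void remove_from_tiq_list(struct tiq_entry *q)
{
	list_del(&q->entry);
	INIT_LIST_HEAD(&q->entry);	/* prev/next no longer reference tiq_list */
}
```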
-
Heiko Carstens authored
commit 4f18d869 upstream. The stfle inline assembly returns the number of double words written (condition code 0) or the double words it would have written (condition code 3), if the memory array it got as parameter would have been large enough. The current stfle implementation assumes that the array is always large enough and clears those parts of the array that have not been written to with a subsequent memset call. If however the array is not large enough memset will get a negative length parameter, which means that memset clears memory until it gets an exception and the kernel crashes. To fix this simply limit the maximum length. Move also the inline assembly to an extra function to avoid clobbering of register 0, which might happen because of the added min_t invocation together with code instrumentation. The bug was introduced with commit 14375bc4 ("[S390] cleanup facility list handling") but was rather harmless, since it would only write to a rather large array. It became a potential problem with commit 3ab121ab ("[S390] kernel: Add z/VM LGR detection"). Since then it writes to an array with only four double words, while some machines already deliver three double words. As soon as machines have a facility bit within the fifth double word a crash on IPL would happen. Fixes: 14375bc4 ("[S390] cleanup facility list handling") Cc: <stable@vger.kernel.org> # v2.6.37+ Reviewed-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
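A simplified illustration of the arithmetic (array size and names invented for the example; __stfle_asm stands in for the extracted inline-assembly helper mentioned in the commit message): without clamping, `size - nr` goes negative and, converted to memset's size_t, clears memory until the kernel faults.

```c
	unsigned long facilities[4];			/* four double words available */
	int nr = __stfle_asm(facilities, ARRAY_SIZE(facilities));	/* hardware may report 5 */

	nr = min_t(int, nr, ARRAY_SIZE(facilities));	/* the fix: clamp to the array size */
	memset(facilities + nr, 0,
	       (ARRAY_SIZE(facilities) - nr) * sizeof(unsigned long));
```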
-
Arnd Bergmann authored
commit fd5de272 upstream. As kernelci.org reports, this function is not used in vdk_hs38_defconfig: arch/arc/kernel/unwind.c:188:14: warning: 'unw_hdr_alloc' defined but not used [-Wunused-function] Fixes: bc79c9a7 ("ARC: dw2 unwind: Reinstante unwinding out of modules") Link: https://kernelci.org/build/id/5d1cae3f59b514300340c132/logs/ Cc: stable@vger.kernel.org Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Vineet Gupta <vgupta@synopsys.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Thomas Gleixner authored
commit f8a8fe61 upstream. Quite some time ago the interrupt entry stubs for unused vectors in the system vector range got removed and directly mapped to the spurious interrupt vector entry point. Sounds reasonable, but it's subtly broken. The spurious interrupt vector entry point pushes vector number 0xFF on the stack which makes the whole logic in __smp_spurious_interrupt() pointless. As a consequence any spurious interrupt which comes from a vector != 0xFF is treated as a real spurious interrupt (vector 0xFF) and not acknowledged. That subsequently stalls all interrupt vectors of equal and lower priority, which brings the system to a grinding halt. This can happen because even on 64-bit the system vector space is not guaranteed to be fully populated. A full compile time handling of the unused vectors is not possible because quite some of them are conditionally populated at runtime. Bring the entry stubs back, which wastes 160 bytes if all stubs are unused, but gains the proper handling back. There is no point to selectively spare some of the stubs which are known at compile time as the required code in the IDT management would be way larger and convoluted. Do not route the spurious entries through common_interrupt and do_IRQ() as the original code did. Route it to smp_spurious_interrupt() which evaluates the vector number and acts accordingly now that the real vector numbers are handed in. Fixup the pr_warn so the actual spurious vector (0xff) is clearly distinguished from the other vectors and also note for the vectored case whether it was pending in the ISR or not. "Spurious APIC interrupt (vector 0xFF) on CPU#0, should never happen." "Spurious interrupt vector 0xed on CPU#1. Acked." "Spurious interrupt vector 0xee on CPU#1. Not pending!." Fixes: 2414e021 ("x86: Avoid building unused IRQ entry stubs") Reported-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Jan Beulich <jbeulich@suse.com> Link: https://lkml.kernel.org/r/20190628111440.550568228@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Thomas Gleixner authored
commit b7107a67 upstream. Since the rework of the vector management, warnings about spurious interrupts have been reported. Robert provided some more information and did an initial analysis. The following situation leads to these warnings: CPU 0 CPU 1 IO_APIC interrupt is raised sent to CPU1 Unable to handle immediately (interrupts off, deep idle delay) mask() ... free() shutdown() synchronize_irq() clear_vector() do_IRQ() -> vector is clear Before the rework the vector entries of legacy interrupts were statically assigned and occupied precious vector space while most of them were unused. Due to that the above situation was handled silently because the vector was handled and the core handler of the assigned interrupt descriptor noticed that it is shut down and returned. While this has been usually observed with legacy interrupts, this situation is not limited to them. Any other interrupt source, e.g. MSI, can cause the same issue. After adding proper synchronization for level triggered interrupts, this can only happen for edge triggered interrupts where the IO-APIC obviously cannot provide information about interrupts in flight. While the spurious warning is actually harmless in this case it worries users and driver developers. Handle it gracefully by marking the vector entry as VECTOR_SHUTDOWN instead of VECTOR_UNUSED when the vector is freed up. If that above late handling happens the spurious detector will not complain and switch the entry to VECTOR_UNUSED. Any subsequent spurious interrupt on that line will trigger the spurious warning as before. Fixes: 464d1230 ("x86/vector: Switch IOAPIC to global reservation mode") Reported-by: Robert Hodaszi <Robert.Hodaszi@digi.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Robert Hodaszi <Robert.Hodaszi@digi.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Link: https://lkml.kernel.org/r/20190628111440.459647741@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Thomas Gleixner authored
commit dfe0cf8b upstream. When an interrupt is shut down in free_irq() there might be an inflight interrupt pending in the IO-APIC remote IRR which is not yet serviced. That means the interrupt has been sent to the target CPU's local APIC, but the target CPU is in a state which delays the servicing. So free_irq() would proceed to free resources and to clear the vector because synchronize_hardirq() does not see an interrupt handler in progress. That can trigger a spurious interrupt warning, which is harmless and just confuses users, but it also can leave the remote IRR in a stale state because once the handler is invoked the interrupt resources might be freed already and therefore acknowledgement is not possible anymore. Implement the irq_get_irqchip_state() callback for the IO-APIC irq chip. The callback is invoked from free_irq() via __synchronize_hardirq(). Check the remote IRR bit of the interrupt and return 'in flight' if it is set and the interrupt is configured in level mode. For edge mode the remote IRR has no meaning. As this is only meaningful for level triggered interrupts this won't cure the potential spurious interrupt warning for edge triggered interrupts, but the edge trigger case does not result in stale hardware state. This has to be addressed at the vector/interrupt entry level separately. Fixes: 464d1230 ("x86/vector: Switch IOAPIC to global reservation mode") Reported-by: Robert Hodaszi <Robert.Hodaszi@digi.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Marc Zyngier <marc.zyngier@arm.com> Link: https://lkml.kernel.org/r/20190628111440.370295517@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Thomas Gleixner authored
commit 62e04686 upstream. free_irq() ensures that no hardware interrupt handler is executing on a different CPU before actually releasing resources and deactivating the interrupt completely in a domain hierarchy. But that does not catch the case where the interrupt is on flight at the hardware level but not yet serviced by the target CPU. That creates an interesting race condition: CPU 0 CPU 1 IRQ CHIP interrupt is raised sent to CPU1 Unable to handle immediately (interrupts off, deep idle delay) mask() ... free() shutdown() synchronize_irq() release_resources() do_IRQ() -> resources are not available That might be harmless and just trigger a spurious interrupt warning, but some interrupt chips might get into a wedged state. Utilize the existing irq_get_irqchip_state() callback for the synchronization in free_irq(). synchronize_hardirq() is not using this mechanism as it might actually deadlock under certain conditions, e.g. when called with interrupts disabled and the target CPU is the one on which the synchronization is invoked. synchronize_irq() uses it because that function cannot be called from non-preemptible contexts as it might sleep. No functional change intended and according to Marc the existing GIC implementations where the driver supports the callback should be able to cope with that core change. Famous last words. Fixes: 464d1230 ("x86/vector: Switch IOAPIC to global reservation mode") Reported-by: Robert Hodaszi <Robert.Hodaszi@digi.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Tested-by: Marc Zyngier <marc.zyngier@arm.com> Link: https://lkml.kernel.org/r/20190628111440.279463375@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Thomas Gleixner authored
commit 1d21f2af upstream. The function might sleep, so it cannot be called from interrupt context. Not even with care. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Marc Zyngier <marc.zyngier@arm.com> Link: https://lkml.kernel.org/r/20190628111440.189241552@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Thomas Gleixner authored
commit 4001d8e8 upstream. When interrupts are shutdown, they are immediately deactivated in the irqdomain hierarchy. While this looks obviously correct there is a subtle issue: There might be an interrupt in flight when free_irq() is invoking the shutdown. This is properly handled at the irq descriptor / primary handler level, but the deactivation might completely disable resources which are required to acknowledge the interrupt. Split the shutdown code and deactivate the interrupt after synchronization in free_irq(). Fixup all other usage sites where this is not an issue to invoke the combined shutdown_and_deactivate() function instead. This still might be an issue if the interrupt in flight servicing is delayed on a remote CPU beyond the invocation of synchronize_irq(), but that cannot be handled at that level and needs to be handled in the synchronize_irq() context. Fixes: f8264e34 ("irqdomain: Introduce new interfaces to support hierarchy irqdomains") Reported-by: Robert Hodaszi <Robert.Hodaszi@digi.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Link: https://lkml.kernel.org/r/20190628111440.098196390@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Vinod Koul authored
[ Upstream commit 8f9fab48 ] DIV_ROUND_UP_ULL adds the two arguments and then invokes DIV_ROUND_DOWN_ULL. But on a 32bit system the addition of two 32 bit values can overflow. DIV_ROUND_DOWN_ULL does it correctly and stashes the addition into an unsigned long long so cast the result to unsigned long long here to avoid the overflow condition. [akpm@linux-foundation.org: DIV_ROUND_UP_ULL must be an rval] Link: http://lkml.kernel.org/r/20190625100518.30753-1-vkoul@kernel.org Signed-off-by: Vinod Koul <vkoul@kernel.org> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Cc: Bjorn Andersson <bjorn.andersson@linaro.org> Cc: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
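For reference, a sketch of the macro pair after the change as described in the commit message (exact header placement not shown here): the addend is widened to unsigned long long before the addition, so a 32-bit wrap can no longer occur.

```c
#define DIV_ROUND_DOWN_ULL(ll, d) \
	({ unsigned long long _tmp = (ll); do_div(_tmp, d); _tmp; })

/* Cast before adding so (ll) + (d) - 1 is computed in 64 bits. */
#define DIV_ROUND_UP_ULL(ll, d) \
	DIV_ROUND_DOWN_ULL((unsigned long long)(ll) + (d) - 1, (d))
```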
-
Andrea Arcangeli authored
[ Upstream commit 1bf4580e ] Commit 5eed6f1d ("fork,memcg: fix crash in free_thread_stack on memcg charge fail") corrected two instances, but there was a third instance of this bug. Without setting tsk->stack, if memcg_charge_kernel_stack fails, it'll execute free_thread_stack() on a dangling pointer. Enterprise kernels are compiled with VMAP_STACK=y so this isn't critical, but custom VMAP_STACK=n builds should have some performance advantage, with the drawback of risking to fail fork because compaction didn't succeed. So as long as VMAP_STACK=n is a supported option it's worth fixing it upstream. Link: http://lkml.kernel.org/r/20190619011450.28048-1-aarcange@redhat.com Fixes: 9b6f7e16 ("mm: rework memcg kernel stack accounting") Signed-off-by: Andrea Arcangeli <aarcange@redhat.com> Reviewed-by: Rik van Riel <riel@surriel.com> Acked-by: Roman Gushchin <guro@fb.com> Acked-by: Michal Hocko <mhocko@suse.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
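A simplified sketch of the VMAP_STACK=n branch of alloc_thread_stack_node() with the missing assignment added (surrounding ifdefs omitted): tsk->stack is recorded before the caller can fail, so a later free_thread_stack() on the memcg-charge error path frees the real stack rather than a dangling pointer.

```c
static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
{
	struct page *page = alloc_pages_node(node, THREADINFO_GFP,
					     THREAD_SIZE_ORDER);

	if (likely(page)) {
		tsk->stack = page_address(page);	/* the third missing assignment */
		return tsk->stack;
	}
	return NULL;
}
```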
-
Yafang Shao authored
[ Upstream commit 432b1de0 ] In dump_oom_summary() oc->constraint is used to show oom_constraint_text, but it hasn't been set beforehand, so its value is always the default 0. We should initialize it first. Below is the output when memcg oom occurs, before this patch: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null), cpuset=/,mems_allowed=0,oom_memcg=/foo,task_memcg=/foo,task=bash,pid=7997,uid=0 after this patch: oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null), cpuset=/,mems_allowed=0,oom_memcg=/foo,task_memcg=/foo,task=bash,pid=13681,uid=0 Link: http://lkml.kernel.org/r/1560522038-15879-1-git-send-email-laoar.shao@gmail.com Fixes: ef8444ea ("mm, oom: reorganize the oom report in dump_header") Signed-off-by: Yafang Shao <laoar.shao@gmail.com> Acked-by: Michal Hocko <mhocko@suse.com> Cc: Wind Yu <yuzhoujian@didichuxing.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
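A sketch of the fix in out_of_memory() (simplified; surrounding code omitted): the computed constraint is stored in the oom_control itself, so dump_oom_summary() later prints CONSTRAINT_MEMCG and friends instead of the zero-initialised CONSTRAINT_NONE.

```c
	/* Record the constraint in the oom_control so the report sees it. */
	oc->constraint = constrained_alloc(oc);
	if (oc->constraint != CONSTRAINT_MEMORY_POLICY)
		oc->nodemask = NULL;
```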
-
Nicolas Boichat authored
[ Upstream commit 9d957a95 ] During suspend/resume, mtk_eint_mask may be called while wake_mask is active. For example, this happens if a wake-source with an active interrupt handler wakes the system: irq/pm.c:irq_pm_check_wakeup would disable the interrupt, so that it can be handled later on in the resume flow. However, this may happen before mtk_eint_do_resume is called: in this case, wake_mask is loaded, and cur_mask is restored from an older copy, re-enabling the interrupt, and causing an interrupt storm (especially for level interrupts). Step by step, for a line that has both wake and interrupt enabled: 1. cur_mask[irq] = 1; wake_mask[irq] = 1; EINT_EN[irq] = 1 (interrupt enabled at hardware level) 2. System suspends, resumes due to that line (at this stage EINT_EN == wake_mask) 3. irq_pm_check_wakeup is called, and disables the interrupt => EINT_EN[irq] = 0, but we still have cur_mask[irq] = 1 4. mtk_eint_do_resume is called, and restores EINT_EN = cur_mask, so it reenables EINT_EN[irq] = 1 => interrupt storm as the driver is not yet ready to handle the interrupt. This patch fixes the issue in step 3, by recording all mask/unmask changes in cur_mask. This also avoids the need to read the current mask in eint_do_suspend, and we can remove mtk_eint_chip_read_mask function. The interrupt will be re-enabled properly later on, sometimes after mtk_eint_do_resume, when the driver is ready to handle it. Fixes: 58a5e1b6 ("pinctrl: mediatek: Implement wake handler and suspend resume") Signed-off-by: Nicolas Boichat <drinkcat@chromium.org> Acked-by: Sean Wang <sean.wang@kernel.org> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Eiichi Tsukata authored
[ Upstream commit 33d4a5a7 ] Setting an invalid value to /sys/devices/system/cpu/cpuX/hotplug/fail can control the `struct cpuhp_step *sp` address, resulting in the following global-out-of-bounds read. Reproducer: # echo -2 > /sys/devices/system/cpu/cpu0/hotplug/fail KASAN report: BUG: KASAN: global-out-of-bounds in write_cpuhp_fail+0x2cd/0x2e0 Read of size 8 at addr ffffffff89734438 by task bash/1941 CPU: 0 PID: 1941 Comm: bash Not tainted 5.2.0-rc6+ #31 Call Trace: write_cpuhp_fail+0x2cd/0x2e0 dev_attr_store+0x58/0x80 sysfs_kf_write+0x13d/0x1a0 kernfs_fop_write+0x2bc/0x460 vfs_write+0x1e1/0x560 ksys_write+0x126/0x250 do_syscall_64+0xc1/0x390 entry_SYSCALL_64_after_hwframe+0x49/0xbe RIP: 0033:0x7f05e4f4c970 The buggy address belongs to the variable: cpu_hotplug_lock+0x98/0xa0 Memory state around the buggy address: ffffffff89734300: fa fa fa fa 00 00 00 00 00 00 00 00 00 00 00 00 ffffffff89734380: fa fa fa fa 00 00 00 00 00 00 00 00 00 00 00 00 >ffffffff89734400: 00 00 00 00 fa fa fa fa 00 00 00 00 fa fa fa fa ^ ffffffff89734480: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ffffffff89734500: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 Add a sanity check for the value written from user space. Fixes: 1db49484 ("smp/hotplug: Hotplug state fail injection") Signed-off-by: Eiichi Tsukata <devel@etsukata.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: peterz@infradead.org Link: https://lkml.kernel.org/r/20190627024732.31672-1-devel@etsukata.com Signed-off-by: Sasha Levin <sashal@kernel.org>
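A sketch of the added sanity check in write_cpuhp_fail() (variable name assumed; the state-range constants come from kernel/cpu.c): values outside the valid hotplug state range are rejected before they can be used to index the state table.

```c
	/* Reject targets outside the valid cpuhp state range before they
	 * are used to look up a cpuhp_step. */
	if (fail < CPUHP_OFFLINE || fail > CPUHP_ONLINE)
		return -EINVAL;
```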
-
Nicolas Boichat authored
[ Upstream commit 35594bc7 ] Before suspending, mtk-eint would set the interrupt mask to the one in wake_mask. However, some of these interrupts may not have a corresponding interrupt handler, or the interrupt may be disabled. On resume, the eint irq handler would trigger nevertheless, and irq/pm.c:irq_pm_check_wakeup would be called, which would try to call irq_disable. However, if the interrupt is not enabled (irqd_irq_disabled(&desc->irq_data) is true), the call does nothing, and the interrupt is left enabled in the eint driver. Especially for level-sensitive interrupts, this will lead to an interrupt storm on resume. If we detect that an interrupt is only in wake_mask, but not in cur_mask, we can just mask it out immediately (as mtk_eint_resume would do anyway at a later stage in the resume sequence, when restoring cur_mask). Fixes: bf22ff45 ("genirq: Avoid unnecessary low level irq function calls") Signed-off-by: Nicolas Boichat <drinkcat@chromium.org> Acked-by: Sean Wang <sean.wang@kernel.org> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Kai-Heng Feng authored
[ Upstream commit 0a95fc73 ] There's a new ALPS touchpad/pointstick combo device that requires MT_CLS_WIN_8_DUAL to make its pointstick work as a mouse. The device can be found on HP ZBook 17 G5. Signed-off-by: Kai-Heng Feng <kai.heng.feng@canonical.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz> Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Kyle Godbey authored
[ Upstream commit 315ffcc9 ] Add support for Huion HS64 drawing tablet to hid-uclogic Signed-off-by: Kyle Godbey <me@kyle.ee> Signed-off-by: Jiri Kosina <jkosina@suse.cz> Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Oleksandr Natalenko authored
[ Upstream commit dcf768b0 ] I've spotted another Chicony PixArt mouse in the wild, which requires the HID_QUIRK_ALWAYS_POLL quirk, otherwise it disconnects each minute. USB ID of this device is 0x04f2:0x0939. We've introduced quirks like this for other models before, so let's add this mouse too. Link: https://github.com/sriemer/fix-linux-mouse#usb-mouse-disconnectsreconnects-every-minute-on-linux Signed-off-by: Oleksandr Natalenko <oleksandr@redhat.com> Acked-by: Sebastian Parschauer <s.parschauer@gmx.de> Signed-off-by: Jiri Kosina <jkosina@suse.cz> Signed-off-by: Sasha Levin <sashal@kernel.org>
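Such fixes usually amount to one new entry in the ALWAYS_POLL quirk table; a hedged sketch follows (the device-ID macro name is an assumption, only the 0x04f2:0x0939 ID comes from the commit message).

```c
#define USB_DEVICE_ID_CHICONY_PIXART_USB_OPTICAL_MOUSE2 0x0939	/* name assumed */

static const struct hid_device_id hid_quirks_example[] = {
	{ HID_USB_DEVICE(USB_VENDOR_ID_CHICONY,
			 USB_DEVICE_ID_CHICONY_PIXART_USB_OPTICAL_MOUSE2),
	  HID_QUIRK_ALWAYS_POLL },
	{ }
};
```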
-
Kirill A. Shutemov authored
[ Upstream commit c1887159 ] __startup_64() uses fixup_pointer() to access global variables in a position-independent fashion. Access to next_early_pgt was wrapped into the helper, but one instance in the 5-level paging branch was missed. GCC generates a R_X86_64_PC32 PC-relative relocation for the access which doesn't trigger the issue, but Clang emits an R_X86_64_32S which leads to an invalid memory access and system reboot. Fixes: 187e91fe ("x86/boot/64/clang: Use fixup_pointer() to access 'next_early_pgt'") Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Borislav Petkov <bp@alien8.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Alexander Potapenko <glider@google.com> Link: https://lkml.kernel.org/r/20190620112422.29264-1-kirill.shutemov@linux.intel.com Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Kirill A. Shutemov authored
[ Upstream commit 81c7ed29 ] A kernel which boots in 5-level paging mode crashes in a small percentage of cases if KASLR is enabled. This issue was tracked down to the case when the kernel image unpacks in a way that it crosses a 1G boundary. The crash is caused by an overrun of the PMD page table in __startup_64() and corruption of P4D page table allocated next to it. This particular issue is not visible with 4-level paging as P4D page tables are not used. But the P4D and the PUD calculation have similar problems. The PMD index calculation is wrong due to operator precedence, which fails to confine the PMDs in the PMD array on wrap around. The P4D calculation for 5-level paging and the PUD calculation calculate the first index correctly, but then blindly increment it which causes the same issue when a kernel image is located across a 512G and for 5-level paging across a 46T boundary. This wrap around mishandling was introduced when these parts moved from assembly to C. Restore it to the correct behaviour. Fixes: c88d7150 ("x86/boot/64: Rewrite startup_64() in C") Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Borislav Petkov <bp@alien8.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lkml.kernel.org/r/20190620112345.28833-1-kirill.shutemov@linux.intel.com Signed-off-by: Sasha Levin <sashal@kernel.org>
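A sketch of the corrected PMD index math in __startup_64() (simplified to the loop in question): the physical offset and loop counter are summed first and only then reduced modulo PTRS_PER_PMD, so the index always stays inside the 512-entry PMD page even when the image crosses a 1G boundary.

```c
	for (i = 0; i < DIV_ROUND_UP(_end - _text, PMD_SIZE); i++) {
		int idx = i + (physaddr >> PMD_SHIFT);

		pmd[idx % PTRS_PER_PMD] = pmd_entry + i * PMD_SIZE;
	}
```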
-
Milan Broz authored
[ Upstream commit 2eba4e64 ] DM verity should also use DMERR_LIMIT to limit repeat data block corruption messages. Signed-off-by: Milan Broz <gmazyland@gmail.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Jerome Marchand authored
[ Upstream commit a0651926 ] For the first call to realloc_argv() in dm_split_args(), old_argv is NULL and size is zero. Then memcpy is called, with the NULL old_argv as the source argument and a zero size argument. AFAIK, this is undefined behavior and generates the following warning when compiled with UBSAN on ppc64le: In file included from ./arch/powerpc/include/asm/paca.h:19, from ./arch/powerpc/include/asm/current.h:16, from ./include/linux/sched.h:12, from ./include/linux/kthread.h:6, from drivers/md/dm-core.h:12, from drivers/md/dm-table.c:8: In function 'memcpy', inlined from 'realloc_argv' at drivers/md/dm-table.c:565:3, inlined from 'dm_split_args' at drivers/md/dm-table.c:588:9: ./include/linux/string.h:345:9: error: argument 2 null where non-null expected [-Werror=nonnull] return __builtin_memcpy(p, q, size); ^~~~~~~~~~~~~~~~~~~~~~~~~~~~ drivers/md/dm-table.c: In function 'dm_split_args': ./include/linux/string.h:345:9: note: in a call to built-in function '__builtin_memcpy' Signed-off-by: Jerome Marchand <jmarchan@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
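A hedged sketch of realloc_argv() with the guarded copy (growth policy and gfp handling simplified): memcpy is only reached when old_argv is non-NULL, so the very first grow (old_argv == NULL, *size == 0) never passes a NULL source to memcpy().

```c
static char **realloc_argv(unsigned int *size, char **old_argv)
{
	unsigned int new_size = *size ? *size * 2 : 8;	/* growth policy assumed */
	char **argv;

	argv = kmalloc_array(new_size, sizeof(*argv), GFP_KERNEL);
	if (argv && old_argv)			/* nothing to copy on the first call */
		memcpy(argv, old_argv, *size * sizeof(*argv));
	if (argv)
		*size = new_size;

	kfree(old_argv);
	return argv;
}
```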
-
Alexandre Belloni authored
[ Upstream commit 4b36082e ] The actual layout for OCELOT_GPIO_ALT[01] when there are more than 32 pins is interleaved, i.e. OCELOT_GPIO_ALT0[0], OCELOT_GPIO_ALT1[0], OCELOT_GPIO_ALT0[1], OCELOT_GPIO_ALT1[1]. Introduce a new REG_ALT macro to facilitate the register offset calculation and use it where necessary. Fixes: da801ab5 ("pinctrl: ocelot: add MSCC Jaguar2 support") Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Alexandre Belloni authored
[ Upstream commit f2818ba3 ] The third argument passed to REG is not the correct one and ocelot_gpio_set_direction is not working for pins after 31. Fix that by passing the pin number instead of the modulo 32 value. Fixes: da801ab5 ("pinctrl: ocelot: add MSCC Jaguar2 support") Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
-