- 06 Feb, 2017 40 commits
-
Al Viro authored
commit 8ae95ed4 upstream.

Acked-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Willy Tarreau <w@1wt.eu>
-
Al Viro authored
commit 2561d309 upstream.

It should clear the destination even when access_ok() fails.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Willy Tarreau <w@1wt.eu>
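The pattern these uaccess fixes converge on can be sketched roughly as follows; this is an illustrative wrapper, not the literal patch for this architecture, and the access_ok(VERIFY_READ, ...) form follows the kernel vintage of this stable series.

	/*
	 * Hedged sketch only: on an access_ok() failure the whole destination
	 * is zeroed, so callers that ignore the return value never observe
	 * stale kernel stack or heap contents.
	 */
	static inline unsigned long copy_from_user(void *to, const void __user *from,
						   unsigned long n)
	{
		if (likely(access_ok(VERIFY_READ, from, n)))
			return __copy_from_user(to, from, n);
		memset(to, 0, n);	/* nothing copied, report everything as left over */
		return n;
	}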
-
Al Viro authored
commit 2545e5da upstream.

... in all cases, including the failing access_ok().

Note that some architectures using asm-generic/uaccess.h have __copy_from_user() not zeroing the tail on a failure halfway through. This variant works either way.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
[wt: s/might_fault/might_sleep]
Signed-off-by: Willy Tarreau <w@1wt.eu>
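For the asm-generic case, a hedged sketch of the "works either way" variant looks roughly like this (names per the usual uaccess API; the might_sleep() reflects the [wt] backport note above):

	/*
	 * Hedged sketch, not the literal patch: whether __copy_from_user()
	 * zeroes the tail itself or not, any bytes it failed to copy are
	 * zeroed here before returning.
	 */
	static inline unsigned long copy_from_user(void *to, const void __user *from,
						   unsigned long n)
	{
		unsigned long res = n;

		might_sleep();
		if (likely(access_ok(VERIFY_READ, from, n)))
			res = __copy_from_user(to, from, n);
		if (unlikely(res))
			memset(to + (n - res), 0, res);	/* zero the uncopied tail */
		return res;
	}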
-
Al Viro authored
commit e69d7005 upstream.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Willy Tarreau <w@1wt.eu>
-
Al Viro authored
commit f35c1e06 upstream.

It's -EFAULT, not -1 (and contrary to the comment in there, __strnlen_user() can return 0 - on faults).

Acked-by: Richard Kuo <rkuo@codeaurora.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Willy Tarreau <w@1wt.eu>
-
Al Viro authored
commit 6e050503 upstream.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Willy Tarreau <w@1wt.eu>
-
Al Viro authored
commit b615e3c7 upstream.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Willy Tarreau <w@1wt.eu>
-
Al Viro authored
commit 8f035983 upstream.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Willy Tarreau <w@1wt.eu>
-
Al Viro authored
commit eb47e029 upstream.

* copy_from_user() on access_ok() failure ought to zero the destination
* none of those primitives should skip the access_ok() check in case of small constant size.

Acked-by: Jesper Nilsson <jesper.nilsson@axis.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Willy Tarreau <w@1wt.eu>
-
Al Viro authored
commit 3b8767a8 upstream.

It should check access_ok(). Otherwise a bunch of places turn into trivially exploitable rootholes.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Willy Tarreau <w@1wt.eu>
-
Al Viro authored
commit 9ad18b75 upstream.

... both for access_ok() failures and for faults halfway through.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Willy Tarreau <w@1wt.eu>
-
Vineet Gupta authored
commit 05d9d0b9 upstream.

Al reported a potential issue with ARC get_user(): it wasn't clearing out the destination pointer in case of a fault due to a bad address etc.

Verified using the following:

| {
|	u32 bogus1 = 0xdeadbeef;
|	u64 bogus2 = 0xdead;
|	int rc1, rc2;
|
|	pr_info("Orig values %x %llx\n", bogus1, bogus2);
|	rc1 = get_user(bogus1, (u32 __user *)0x40000000);
|	rc2 = get_user(bogus2, (u64 __user *)0x50000000);
|	pr_info("access %d %d, new values %x %llx\n",
|		rc1, rc2, bogus1, bogus2);
| }

| [ARCLinux]# insmod /mnt/kernel-module/qtn.ko
| Orig values deadbeef dead
| access -14 -14, new values 0 0

Reported-by: Al Viro <viro@ZenIV.linux.org.uk>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-snps-arc@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Willy Tarreau <w@1wt.eu>
-
Al Viro authored
commit fd2d2b19 upstream.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Willy Tarreau <w@1wt.eu>
-
Al Viro authored
commit c2f18fa4 upstream.

* should zero on any failure
* __get_user() should use __copy_from_user(), not copy_from_user()

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Willy Tarreau <w@1wt.eu>
-
Al Viro authored
commit c6852389 upstream.

It could be done in the exception-handling bits in __get_user_b() et al., but the surgery involved would take more knowledge of sh64 details than I have or _want_ to have.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Willy Tarreau <w@1wt.eu>
-
Al Viro authored
commit c90a3bc5 upstream.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Willy Tarreau <w@1wt.eu>
-
Al Viro authored
commit 43403eab upstream.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Willy Tarreau <w@1wt.eu>
-
Al Viro authored
commit d0cf3851 upstream.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
[wt: s/might_fault/might_sleep]
Signed-off-by: Willy Tarreau <w@1wt.eu>
-
Al Viro authored
commit e98b9e37 upstream.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Willy Tarreau <w@1wt.eu>
-
John David Anglin authored
commit 6ed51832 upstream.

We have one critical section in the syscall entry path in which we switch from the userspace stack to the kernel stack. In the event of an external interrupt, the interrupt code distinguishes between those two states by analyzing the value of sr7. If sr7 is zero, it uses the kernel stack. Therefore it's important that the value of sr7 is in sync with the currently enabled stack.

This patch now disables interrupts while executing the critical section. This prevents the interrupt handler from seeing an inconsistent state, which in the worst case can lead to crashes.

Interestingly, in the syscall exit path interrupts were already disabled in the critical section which switches back to the userspace stack.

Signed-off-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Willy Tarreau <w@1wt.eu>
-
Stefan Haberland authored
commit 9ba333dc upstream.

When a device is in a state where CIO has killed all I/O by itself, the interrupt for a clear request may not contain an irb to determine the clear function. Instead it contains an error pointer -EIO. This was ignored by the DASD int_handler, leading to a hanging device waiting for a clear interrupt.

Handle the -EIO error pointer correctly for requests that are clear pending and treat the clear as successful.

Signed-off-by: Stefan Haberland <sth@linux.vnet.ibm.com>
Reviewed-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
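The handling described above amounts to something like the following sketch early in the interrupt handler; the DASD_CQR_* states and the dasd_flush_wq wait queue follow the DASD driver but should be read as assumptions here, not the literal patch.

	/*
	 * Hedged sketch: an -EIO error pointer instead of an irb, for a
	 * request that is clear pending, is treated as a completed clear.
	 */
	if (IS_ERR(irb)) {
		if (PTR_ERR(irb) == -EIO && cqr &&
		    cqr->status == DASD_CQR_CLEAR_PENDING) {
			cqr->status = DASD_CQR_CLEARED;	/* treat the clear as done */
			wake_up(&dasd_flush_wq);	/* unblock the waiter */
		}
		return;
	}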
-
Dan Carpenter authored
commit 55f1cf83 upstream.

The pio_dev[] array has MAX_NR_PIO_DEVICES elements, so the > should be >=.

Fixes: 5f97f7f9 ('[PATCH] avr32 architecture')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
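This is the classic off-by-one bounds check; a minimal sketch of the shape of the fix (the index variable name is hypothetical):

	/*
	 * With '>' an index equal to MAX_NR_PIO_DEVICES passes the check and
	 * reads one element past the end of pio_dev[]; '>=' rejects it.
	 */
	if (index >= MAX_NR_PIO_DEVICES)	/* was: index > MAX_NR_PIO_DEVICES */
		return NULL;
	pio = &pio_dev[index];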
-
Guenter Roeck authored
commit 65c0044c upstream.

avr32 builds fail with:

arch/avr32/kernel/built-in.o: In function `arch_ptrace':
(.text+0x650): undefined reference to `___copy_from_user'
arch/avr32/kernel/built-in.o:(___ksymtab+___copy_from_user+0x0): undefined reference to `___copy_from_user'
kernel/built-in.o: In function `proc_doulongvec_ms_jiffies_minmax':
(.text+0x5dd8): undefined reference to `___copy_from_user'
kernel/built-in.o: In function `proc_dointvec_minmax_sysadmin':
sysctl.c:(.text+0x6174): undefined reference to `___copy_from_user'
kernel/built-in.o: In function `ptrace_has_cap':
ptrace.c:(.text+0x69c0): undefined reference to `___copy_from_user'
kernel/built-in.o:ptrace.c:(.text+0x6b90): more undefined references to `___copy_from_user' follow

Fixes: 8630c322 ("avr32: fix copy_from_user()")
Cc: Al Viro <viro@zeniv.linux.org.uk>
Acked-by: Havard Skinnemoen <hskinnemoen@gmail.com>
Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
-
Al Viro authored
commit 8630c322 upstream.

Really ugly, but apparently the avr32 compiler turns access_ok() into something so bad that they want it in assembler. Left that way; zeroing added in the inline wrapper.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Willy Tarreau <w@1wt.eu>
-
Pan Xinhui authored
commit 11b7e154 upstream.

When we merge two contiguous partitions whose signatures are marked NVRAM_SIG_FREE, we need to update prev's length and checksum, then write it to nvram, not cur's. So let's fix this mistake now.

Also use memset instead of strncpy to set the partition's name. It's more readable if we want to fill it up with duplicate chars.

Fixes: fa2b4e54 ("powerpc/nvram: Improve partition removal")
Signed-off-by: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Willy Tarreau <w@1wt.eu>
-
Paul Mackerras authored
commit 1a34439e upstream.

Debugging a data corruption issue with virtio-net/vhost-net led to the observation that __copy_tofrom_user was occasionally returning a value 16 larger than it should. Since the return value from __copy_tofrom_user is the number of bytes not copied, this means that __copy_tofrom_user can occasionally return a value larger than the number of bytes it was asked to copy. In turn this can cause higher-level copy functions such as copy_page_to_iter_iovec to corrupt memory by copying data into the wrong memory locations.

It turns out that the failing case involves a fault on the store at label 79, and at that point the first unmodified byte of the destination is at R3 + 16. Consequently the exception handler for that store needs to add 16 to R3 before using it to work out how many bytes were not copied, but in this one case it was not adding the offset to R3. To fix it, this moves the label 179 to the point where we add 16 to R3. I have checked manually all the exception handlers for the loads and stores in this code and the rest of them are correct (it would be excellent to have an automated test of all the exception cases).

This bug has been present since this code was initially committed in May 2002 to Linux version 2.5.20.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Willy Tarreau <w@1wt.eu>
-
Gavin Shan authored
commit 5adaf862 upstream.

This fixes the warnings reported from sparse:

pci.c:312:33: warning: restricted __be64 degrades to integer
pci.c:313:33: warning: restricted __be64 degrades to integer

Fixes: cee72d5b ("powerpc/powernv: Display diag data on p7ioc EEH errors")
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Willy Tarreau <w@1wt.eu>
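The warning comes from using a raw __be64 value as a plain integer; the usual fix pattern is to convert first. A hedged sketch with hypothetical field names:

	/* Hypothetical field names; the point is the be64_to_cpu() conversion. */
	if (be64_to_cpu(data->dma1ErrorLog0) || be64_to_cpu(data->dma1ErrorLog1))
		pr_info("P7IOC dma1ErrorLog0: %016llx\n",
			be64_to_cpu(data->dma1ErrorLog0));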
-
Anton Blanchard authored
commit 5045ea37 upstream.

__kernel_get_syscall_map() and __kernel_clock_getres() use cmpli to check if the passed in pointer is non zero. cmpli maps to a 32 bit compare on binutils, so we ignore the top 32 bits.

A simple test case can be created by passing in a bogus pointer with the bottom 32 bits clear. Using a clk_id that is handled by the VDSO, then one that is handled by the kernel shows the problem:

  printf("%d\n", clock_getres(CLOCK_REALTIME, (void *)0x100000000));
  printf("%d\n", clock_getres(CLOCK_BOOTTIME, (void *)0x100000000));

And we get:

  0
  -1

The bigger issue is if we pass a valid pointer with the bottom 32 bits clear: in this case we will return success but won't write any data to the pointer.

I stumbled across this issue because the LLVM integrated assembler doesn't accept cmpli with 3 arguments. Fix this by converting them to cmpldi.

Fixes: a7f290da ("[PATCH] powerpc: Merge vdso's and add vdso support to 32 bits kernel")
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Willy Tarreau <w@1wt.eu>
-
Paul Mackerras authored
commit f077aaf0 upstream.

In commit c60ac569 ("powerpc: Update kernel VSID range", 2013-03-13) we lost a check on the region number (the top four bits of the effective address) for addresses below PAGE_OFFSET. That commit replaced a check that the top 18 bits were all zero with a check that bits 46 - 59 were zero (performed for all addresses, not just user addresses).

This means that userspace can access an address like 0x1000_0xxx_xxxx_xxxx and we will insert a valid SLB entry for it. The VSID used will be the same as if the top 4 bits were 0, but the page size will be some random value obtained by indexing beyond the end of the mm_ctx_high_slices_psize array in the paca. If that page size is the same as would be used for region 0, then userspace just has an alias of the region 0 space. If the page size is different, then no HPTE will be found for the access, and the process will get a SIGSEGV (since hash_page_mm() will refuse to create a HPTE for the bogus address).

The access beyond the end of the mm_ctx_high_slices_psize can be at most 5.5MB past the array, and so will be in RAM somewhere. Since the access is a load performed in real mode, it won't fault or crash the kernel. At most this bug could perhaps leak a little bit of information about blocks of 32 bytes of memory located at offsets of i * 512kB past the paca->mm_ctx_high_slices_psize array, for 1 <= i <= 11.

Fixes: c60ac569 ("powerpc: Update kernel VSID range")
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Willy Tarreau <w@1wt.eu>
-
Marcin Nowakowski authored
commit 74f1077b upstream.

Currently regs_return_value always negates reg[2] if it determines the syscall has failed, but when called in kernel context this check is invalid and may result in returning a wrong value.

This fixes errors reported by CONFIG_KPROBES_SANITY_TEST.

Fixes: d7e7528b ("Audit: push audit success and retcode into arch ptrace.h")
Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14381/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
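A hedged sketch of the kind of guard being described, in the style of the MIPS ptrace.h helper; treat the exact condition as an assumption rather than the verbatim patch:

	/*
	 * Only apply the "negate on syscall failure" convention when the regs
	 * actually come from a user/syscall context; a kprobe hit in kernel
	 * context should see the register value unchanged.
	 */
	static inline long regs_return_value(struct pt_regs *regs)
	{
		if (is_syscall_success(regs) || !user_mode(regs))
			return regs->regs[2];
		else
			return -regs->regs[2];
	}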
-
Paul Burton authored
commit 305723ab upstream.

Malta boards used with CPU emulators feature a switch to disable use of an IOCU. Software has to check this switch & ignore any present IOCU if the switch is closed.

The read used to do this was unsafe for 64 bit kernels, as it simply cast the address 0xbf403000 to a pointer & dereferenced it. Whilst in a 32 bit kernel this would access kseg1, in a 64 bit kernel this attempts to access xuseg & results in an address error exception. Fix by accessing a correctly formed ckseg1 address generated using the CKSEG1ADDR macro.

Whilst modifying this code, define the name of the register and the bit we care about within it, which indicates whether PCI DMA is routed to the IOCU or straight to DRAM. The code previously checked that bit 0 was also set, but the least significant 7 bits of the CONFIG_GEN0 register contain the value of the MReqInfo signal provided to the IOCU OCP bus, so singling out bit 0 makes little sense & that part of the check is dropped.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Fixes: b6d92b4a ("MIPS: Add option to disable software I/O coherency.")
Cc: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14187/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
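A hedged sketch of the access pattern being described; the register name, the IOCU bit position and 0x1f403000 (the physical address behind 0xbf403000) are assumptions for illustration only:

	#define ROCIT_CONFIG_GEN0		0x1f403000	/* assumed register name/offset */
	#define ROCIT_CONFIG_GEN0_PCI_IOCU	BIT(7)		/* assumed bit position */

	static bool malta_iocu_enabled(void)
	{
		/* CKSEG1ADDR() forms a proper uncached address on both 32- and 64-bit kernels */
		u32 cfg = __raw_readl((void *)CKSEG1ADDR(ROCIT_CONFIG_GEN0));

		return cfg & ROCIT_CONFIG_GEN0_PCI_IOCU;
	}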
-
Will Deacon authored
commit 3a402a70 upstream.

When TIF_SINGLESTEP is set for a task, the single-step state machine is enabled and we must take care not to reset it to the active-not-pending state if it is already in the active-pending state.

Unfortunately, that's exactly what user_enable_single_step does, by unconditionally setting the SS bit in the SPSR for the current task. This causes failures in the GDB testsuite, where GDB ends up missing expected step traps if the instruction being stepped generates another trap, e.g. PTRACE_EVENT_FORK from an SVC instruction.

This patch fixes the problem by preserving the current state of the stepping state machine when TIF_SINGLESTEP is set on the current thread.

Cc: <stable@vger.kernel.org>
Reported-by: Yao Qi <yao.qi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
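A hedged sketch of the preserved-state behaviour; set_regs_spsr_ss() is assumed to be the arm64 helper that sets the SPSR SS bit, and the exact function body should be read as illustrative:

	/*
	 * Only touch the SPSR single-step bit when TIF_SINGLESTEP was not
	 * already set, so an active-pending step is not reset to
	 * active-not-pending.
	 */
	void user_enable_single_step(struct task_struct *task)
	{
		struct thread_info *ti = task_thread_info(task);

		if (!test_and_set_ti_thread_flag(ti, TIF_SINGLESTEP))
			set_regs_spsr_ss(task_pt_regs(task));
	}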
-
Will Deacon authored
commit 872c63fb upstream.

smp_mb__before_spinlock() is intended to upgrade a spin_lock() operation to a full barrier, such that prior stores are ordered with respect to loads and stores occurring inside the critical section.

Unfortunately, the core code defines the barrier as smp_wmb(), which is insufficient to provide the required ordering guarantees when used in conjunction with our load-acquire-based spinlock implementation.

This patch overrides the arm64 definition of smp_mb__before_spinlock() to map to a full smp_mb().

Cc: Peter Zijlstra <peterz@infradead.org>
Reported-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
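The override itself is a one-liner; a sketch of the arch definition described above (placed in arm64's barrier header):

	/* Upgrade the generic smp_wmb() definition to a full barrier on arm64. */
	#define smp_mb__before_spinlock()	smp_mb()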
-
James Hogan authored
commit 3146bc64 upstream.

AT_VECTOR_SIZE_ARCH should be defined with the maximum number of NEW_AUX_ENT entries that ARCH_DLINFO can contain, but it wasn't defined for arm64 at all even though ARCH_DLINFO will contain one NEW_AUX_ENT for the VDSO address.

This shouldn't be a problem as AT_VECTOR_SIZE_BASE includes space for AT_BASE_PLATFORM which arm64 doesn't use, but let's define it now and add the comment above ARCH_DLINFO as found in several other architectures to remind future modifiers of ARCH_DLINFO to keep AT_VECTOR_SIZE_ARCH up to date.

Fixes: f668cd16 ("arm64: ELF definitions")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
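A sketch of the definition described above: since ARCH_DLINFO contributes a single NEW_AUX_ENT (the VDSO address), exactly one extra auxv slot is reserved.

	/* ARCH_DLINFO adds one NEW_AUX_ENT for the VDSO, so reserve one slot. */
	#define AT_VECTOR_SIZE_ARCH 1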
-
Mark Rutland authored
commit 7d9e8f71 upstream.

Generally, taking an unexpected exception should be a fatal event, and bad_mode is intended to cater for this. However, it should be possible to contain unexpected synchronous exceptions from EL0 without bringing the kernel down, by sending a SIGILL to the task.

We tried to apply this approach in commit 9955ac47 ("arm64: don't kill the kernel on a bad esr from el0"), by sending a signal for any bad_mode call resulting from an EL0 exception. However, this also applies to other unexpected exceptions, such as SError and FIQ. The entry paths for these exceptions branch to bad_mode without configuring the link register, and have no kernel_exit. Thus, if we take one of these exceptions from EL0, bad_mode will eventually return to the original user link register value.

This patch fixes this by introducing a new bad_el0_sync handler to cater for the recoverable case, and restoring bad_mode to its original state, whereby it calls panic() and never returns. The recoverable case branches to bad_el0_sync with a bl, and returns to userspace via the usual ret_to_user mechanism.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Fixes: 9955ac47 ("arm64: don't kill the kernel on a bad esr from el0")
Reported-by: Mark Salter <msalter@redhat.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
-
Russell King authored
commit 06dfe5cc upstream.

SA1111 PCMCIA was broken when PCMCIA switched to using dev_pm_ops for the PCMCIA socket class. PCMCIA used to handle suspend/resume via the socket hosting device, which happened at normal device suspend/resume time.

However, the referenced commit changed this: much of the resume now happens much earlier, in the noirq resume handler of dev_pm_ops. However, on SA1111, the PCMCIA device is not accessible as the SA1111 has not been resumed at _noirq time. It's slightly worse than that, because the SA1111 has already been put to sleep at _noirq time, so suspend doesn't work properly.

Fix this by converting the core SA1111 code to use dev_pm_ops as well, and performing its own suspend/resume at noirq time.

This fixes these errors in the kernel log:

pcmcia_socket pcmcia_socket0: time out after reset
pcmcia_socket pcmcia_socket1: time out after reset

and the resulting lack of PCMCIA cards after a S2RAM cycle.

Fixes: d7646f76 ("pcmcia: use dev_pm_ops for class pcmcia_socket_class")
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Willy Tarreau <w@1wt.eu>
-
Russell King authored
commit da60626e upstream.

Clear the current reset status prior to rebooting the platform. This adds the bit missing from 04fef228 ("[ARM] pxa: introduce reset_status and clear_reset_status for driver's usage").

Fixes: 04fef228 ("[ARM] pxa: introduce reset_status and clear_reset_status for driver's usage")
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Willy Tarreau <w@1wt.eu>
-
Srinivas Ramana authored
commit 117e5e9c upstream.

If the bootloader uses the long descriptor format and jumps to kernel decompressor code, TTBCR may not be in a right state. Before enabling the MMU, it is required to clear the TTBCR.PD0 field to use TTBR0 for translation table walks.

The commit dbece458 ("ARM: 7501/1: decompressor: reset ttbcr for VMSA ARMv7 cores") does the reset of TTBCR.N, but doesn't consider all the bits for the size of TTBCR.N.

Clear the TTBCR.PD0 field and reset all three bits of TTBCR.N to indicate the use of TTBR0 and the correct base address width.

Fixes: dbece458 ("ARM: 7501/1: decompressor: reset ttbcr for VMSA ARMv7 cores")
Acked-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Srinivas Ramana <sramana@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Willy Tarreau <w@1wt.eu>
-
Robin Murphy authored
commit ba6dea4f upstream.

Whilst MPIDR values themselves are less than 32 bits, it is still perfectly valid for a DT to have #address-cells > 1 in the CPUs node, resulting in the "reg" property having leading zero cell(s). In that situation, the big-endian nature of the data conspires with the current behaviour of only reading the first cell to cause the kernel to think all CPUs have ID 0, and become resoundingly unhappy as a consequence.

Take the full property length into account when parsing CPUs so as to be correct under any circumstances.

Cc: Russell King <linux@armlinux.org.uk>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Willy Tarreau <w@1wt.eu>
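A hedged sketch of the parsing change described above, not the literal patch; the helper name is hypothetical, and of_read_number() is used to fold all "reg" cells into one value so a leading zero cell is harmless:

	/* Hypothetical helper: return the MPIDR from a CPU node's "reg" property. */
	static u64 cpu_node_hwid(const struct device_node *cpu)
	{
		const __be32 *cell;
		int len;

		cell = of_get_property(cpu, "reg", &len);
		if (!cell || len < (int)sizeof(*cell))
			return ~0ULL;	/* malformed node */
		/* the MPIDR sits in the low bits, whatever #address-cells is */
		return of_read_number(cell, len / sizeof(*cell));
	}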
-
Baoquan He authored
commit c3db901c upstream.

The current code misses freeing the domain id when freeing a domain of struct dma_ops_domain.

Signed-off-by: Baoquan He <bhe@redhat.com>
Fixes: ec487d1a ('x86, AMD IOMMU: add domain allocation and deallocation functions')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Willy Tarreau <w@1wt.eu>
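A hedged sketch of the leak being plugged; the function and field names follow the AMD IOMMU driver of that era but are assumptions here, and the surrounding teardown is abbreviated:

	static void dma_ops_domain_free(struct dma_ops_domain *dom)
	{
		if (!dom)
			return;

		free_pagetable(&dom->domain);		/* existing page-table teardown */

		if (dom->domain.id)
			domain_id_free(dom->domain.id);	/* the missing free */

		kfree(dom);
	}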
-