- 20 Jun, 2016 2 commits
-
-
Thomas Huth authored
[ Upstream commit 8dd75ccb ] We are already using the privileged versions of MMCR0, MMCR1 and MMCRA in the kernel, so for MMCR2, we should use the privileged version too, to be consistent. Fixes: 240686c1 ("powerpc: Initialise PMU related regs on Power8") Cc: stable@vger.kernel.org # v3.10+ Suggested-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Thomas Huth <thuth@redhat.com> Acked-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Thomas Huth authored
[ Upstream commit d23fac2b ] The SIAR and SDAR registers are available twice, one time as SPRs 780 / 781 (unprivileged, but read-only), and one time as the SPRs 796 / 797 (privileged, but read and write). The Linux kernel code currently uses the unprivileged SPRs - while this is OK for reading, writing to that register of course does not work. Since the KVM code tries to write to this register, too (see the mtspr in book3s_hv_rmhandlers.S), the contents of this register sometimes get lost for the guests, e.g. during migration of a VM. To fix this issue, simply switch to the privileged SPR numbers instead. Cc: stable@vger.kernel.org Signed-off-by: Thomas Huth <thuth@redhat.com> Acked-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
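A minimal sketch of the SPR numbering involved, using illustrative macro names (the numbers are those quoted in the commit message):

    #define SPRN_SIAR_UNPRIV  780   /* unprivileged alias, read-only */
    #define SPRN_SDAR_UNPRIV  781   /* unprivileged alias, read-only */
    #define SPRN_SIAR         796   /* privileged, readable and writable - what mtspr needs */
    #define SPRN_SDAR         797   /* privileged, readable and writable */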
-
- 18 Jun, 2016 4 commits
-
-
Russell Currey authored
[ Upstream commit 871e178e ] In the "ibm,configure-pe" and "ibm,configure-bridge" RTAS calls, the spec states that values of 9900-9905 can be returned, indicating that software should delay for 10^x (where x is the last digit, i.e. 990x) milliseconds and attempt the call again. Currently, the kernel doesn't know about this, and respecting it fixes some PCI failures when the hypervisor is busy. The delay is capped at 0.2 seconds. Cc: <stable@vger.kernel.org> # 3.10+ Signed-off-by: Russell Currey <ruscur@russell.cc> Acked-by: Gavin Shan <gwshan@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
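A rough sketch of the retry-delay rule described above; the function name and cap constant are assumptions for illustration, not the kernel's actual symbols:

    /* RTAS "extended delay" statuses 9900-9905 mean: retry after 10^x ms,
     * where x is the last digit of the status; the wait is capped at 200 ms. */
    static unsigned int rtas_busy_delay_ms(int status)
    {
            unsigned int ms = 1;
            int order = status - 9900;      /* x in 0..5 */

            while (order-- > 0)
                    ms *= 10;               /* 10^x milliseconds */

            return ms > 200 ? 200 : ms;     /* cap at 0.2 seconds */
    }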
-
Tom Lendacky authored
[ Upstream commit ab6a11a7 ] The ccp-crypto module for AES XTS support has a bug that can allow requests greater than 4096 bytes in size to be passed to the CCP hardware. The CCP hardware does not support request sizes larger than 4096, resulting in incorrect output. The request should actually be handled by the fallback mechanism instantiated by the ccp-crypto module. Add a check to ensure the request size is less than or equal to the maximum supported size and use the fallback mechanism if it is not. Cc: <stable@vger.kernel.org> # 3.14.x- Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
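A minimal sketch of that check; the 4096-byte limit comes from the commit text, while the constant and helper names are hypothetical:

    #define CCP_XTS_AES_MAX_LEN 4096        /* largest request the CCP engine accepts */

    static bool ccp_xts_needs_fallback(unsigned int nbytes)
    {
            /* larger requests must go to the software fallback cipher */
            return nbytes > CCP_XTS_AES_MAX_LEN;
    }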
-
James Bottomley authored
[ Upstream commit a621bac3 ] When SCSI was written, all commands coming from the filesystem (REQ_TYPE_FS commands) had data. This meant that our signal for needing to complete the command was the number of bytes completed being equal to the number of bytes in the request. Unfortunately, with the advent of flush barriers, we can now get zero length REQ_TYPE_FS commands, which confuse this logic because they satisfy the condition every time. This means they never get retried even for retryable conditions, like UNIT ATTENTION because we complete them early assuming they're done. Fix this by special casing the early completion condition to recognise zero length commands with errors and let them drop through to the retry code. Cc: stable@vger.kernel.org Reported-by: Sebastian Parschauer <s.parschauer@gmx.de> Signed-off-by: James E.J. Bottomley <jejb@linux.vnet.ibm.com> Tested-by: Jack Wang <jinpu.wang@profitbricks.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
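A sketch of the special case, written as a standalone helper rather than the actual scsi_lib.c logic:

    /* A zero-length REQ_TYPE_FS command that failed must not be considered
     * complete just because 0 bytes done equals 0 bytes requested; returning
     * false here lets it drop through to the retry code. */
    static bool cmd_finished_early(unsigned int bytes_requested,
                                   unsigned int bytes_done, int error)
    {
            if (bytes_requested == 0 && error)
                    return false;

            return bytes_done >= bytes_requested;
    }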
-
Arnd Bergmann authored
[ Upstream commit bad6a185 ] In some rare randconfig builds, we can end up with ASYMMETRIC_PUBLIC_KEY_SUBTYPE enabled but CRYPTO_AKCIPHER disabled, which fails to link because of the reference to crypto_alloc_akcipher: crypto/built-in.o: In function `public_key_verify_signature': :(.text+0x110e4): undefined reference to `crypto_alloc_akcipher' This adds a Kconfig 'select' statement to ensure the dependency is always there. Cc: <stable@vger.kernel.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
- 06 Jun, 2016 34 commits
-
-
Sasha Levin authored
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Ville Syrjälä authored
[ Upstream commit 3017cd63 ] With netconsole (at least) the pr_err("... disabling\n") call can recurse back into the dma-debug code, where it'll try to grab free_entries_lock again. Avoid the problem by doing the printk after dropping the lock. Link: http://lkml.kernel.org/r/1463678421-18683-1-git-send-email-ville.syrjala@linux.intel.com Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
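A sketch of the reordering, assuming dma-debug's internal names (free_entries_lock, free_entries, global_disable, __dma_entry_alloc); the real function differs in detail:

    static struct dma_debug_entry *dma_entry_alloc_sketch(void)
    {
            struct dma_debug_entry *entry = NULL;
            unsigned long flags;
            bool disabled = false;

            spin_lock_irqsave(&free_entries_lock, flags);
            if (list_empty(&free_entries)) {
                    global_disable = true;
                    disabled = true;                /* remember; report after unlocking */
            } else {
                    entry = __dma_entry_alloc();    /* still done under the lock */
            }
            spin_unlock_irqrestore(&free_entries_lock, flags);

            /* printing here cannot recurse into free_entries_lock via netconsole */
            if (disabled)
                    pr_err("DMA-API: debugging out of memory - disabling\n");

            return entry;
    }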
-
Richard Weinberger authored
[ Upstream commit 1900149c ] Ezequiel reported that he's facing UBI going into read-only mode after power cut. It turned out that this behavior happens only when updating a static volume is interrupted and Fastmap is used. A possible trace can look like: ubi0 warning: ubi_io_read_vid_hdr [ubi]: no VID header found at PEB 2323, only 0xFF bytes ubi0 warning: ubi_eba_read_leb [ubi]: switch to read-only mode CPU: 0 PID: 833 Comm: ubiupdatevol Not tainted 4.6.0-rc2-ARCH #4 Hardware name: SAMSUNG ELECTRONICS CO., LTD. 300E4C/300E5C/300E7C/NP300E5C-AD8AR, BIOS P04RAP 10/15/2012 0000000000000286 00000000eba949bd ffff8800c45a7b38 ffffffff8140d841 ffff8801964be000 ffff88018eaa4800 ffff8800c45a7bb8 ffffffffa003abf6 ffffffff850e2ac0 8000000000000163 ffff8801850e2ac0 ffff8801850e2ac0 Call Trace: [<ffffffff8140d841>] dump_stack+0x63/0x82 [<ffffffffa003abf6>] ubi_eba_read_leb+0x486/0x4a0 [ubi] [<ffffffffa00453b3>] ubi_check_volume+0x83/0xf0 [ubi] [<ffffffffa0039d97>] ubi_open_volume+0x177/0x350 [ubi] [<ffffffffa00375d8>] vol_cdev_open+0x58/0xb0 [ubi] [<ffffffff8124b08e>] chrdev_open+0xae/0x1d0 [<ffffffff81243bcf>] do_dentry_open+0x1ff/0x300 [<ffffffff8124afe0>] ? cdev_put+0x30/0x30 [<ffffffff81244d36>] vfs_open+0x56/0x60 [<ffffffff812545f4>] path_openat+0x4f4/0x1190 [<ffffffff81256621>] do_filp_open+0x91/0x100 [<ffffffff81263547>] ? __alloc_fd+0xc7/0x190 [<ffffffff812450df>] do_sys_open+0x13f/0x210 [<ffffffff812451ce>] SyS_open+0x1e/0x20 [<ffffffff81a99e32>] entry_SYSCALL_64_fastpath+0x1a/0xa4 UBI checks static volumes for data consistency and reads the whole volume upon first open. If the volume is found erroneous users of UBI cannot read from it, but another volume update is possible to fix it. The check is performed by running ubi_eba_read_leb() on every allocated LEB of the volume. For static volumes ubi_eba_read_leb() computes the checksum of all data stored in a LEB. To verify the computed checksum it has to read the LEB's volume header which stores the original checksum. If the volume header is not found UBI treats this as fatal internal error and switches to RO mode. If the UBI device was attached via a full scan the assumption is correct, the volume header has to be present as it had to be there while scanning to get known as mapped. If the attach operation happened via Fastmap the assumption is no longer correct. When attaching via Fastmap UBI learns the mapping table from Fastmap's snapshot of the system state and not via a full scan. It can happen that a LEB got unmapped after a Fastmap was written to the flash. Then UBI can learn the LEB still as mapped and accessing it returns only 0xFF bytes. As UBI is not a FTL it is allowed to have mappings to empty PEBs, it assumes that the layer above takes care of LEB accounting and referencing. UBIFS does so using the LEB property tree (LPT). For static volumes UBI blindly assumes that all LEBs are present and therefore special actions have to be taken. The described situation can happen when updating a static volume is interrupted, either by a user or a power cut. The volume update code first unmaps all LEBs of a volume and then writes LEB by LEB. If the sequence of operations is interrupted UBI detects this either by the absence of LEBs, no volume header present at scan time, or corrupted payload, detected via checksum. In the Fastmap case the former method won't trigger as no scan happened and UBI automatically thinks all LEBs are present. 
Only by reading data from a LEB does UBI detect that the volume header is missing, and it incorrectly treats this as a fatal error. To deal with the situation, ubi_eba_read_leb() from now on checks whether we attached via Fastmap and handles the absence of a volume header like a data corruption error. This way interrupted static volume updates are detected correctly even when Fastmap is used. Cc: <stable@vger.kernel.org> Reported-by: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar> Tested-by: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar> Signed-off-by: Richard Weinberger <richard@nod.at> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Brian Norris authored
[ Upstream commit b388e6a7 ] commit 0e707ae7 ("UBI: do propagate positive error codes up") seems to have produced an unintended change in the control flow here. Completely untested, but it looks obvious. Caught by Coverity, which didn't like the indentation. CID 1271184. Signed-off-by: Brian Norris <computersforpeace@gmail.com> Cc: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Richard Weinberger <richard@nod.at> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Artem Bityutskiy authored
[ Upstream commit 0e707ae7 ] UBI uses positive function return codes internally, and should not propagate them up, except in the place this patch fixes. Here is the original bug report from Dan Carpenter: The problem is really in ubi_eba_read_leb(). drivers/mtd/ubi/eba.c 412 err = ubi_io_read_vid_hdr(ubi, pnum, vid_hdr, 1); 413 if (err && err != UBI_IO_BITFLIPS) { 414 if (err > 0) { 415 /* 416 * The header is either absent or corrupted. 417 * The former case means there is a bug - 418 * switch to read-only mode just in case. 419 * The latter case means a real corruption - we 420 * may try to recover data. FIXME: but this is 421 * not implemented. 422 */ 423 if (err == UBI_IO_BAD_HDR_EBADMSG || 424 err == UBI_IO_BAD_HDR) { 425 ubi_warn("corrupted VID header at PEB %d, LEB %d:%d", 426 pnum, vol_id, lnum); 427 err = -EBADMSG; 428 } else 429 ubi_ro_mode(ubi); On this path we return UBI_IO_FF and UBI_IO_FF_BITFLIPS and it eventually gets passed to ERR_PTR(). We probably dereference the bad pointer and oops. At that point we've gone read only so it was already a bad situation... 430 } 431 goto out_free; 432 } else if (err == UBI_IO_BITFLIPS) 433 scrub = 1; 434 Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Richard Weinberger authored
[ Upstream commit 19371d73 ] If the WL pool runs out of PEBs we schedule a fastmap write to refill it as soon as possible. Ensure that only one at a time is scheduled, otherwise we might end up in a fastmap write storm because writing the fastmap can schedule another write if bitflips are detected. Signed-off-by: Richard Weinberger <richard@nod.at> Reviewed-by: Tanya Brokhman <tlinder@codeaurora.org> Reviewed-by: Guido Martínez <guido@vanguardiasur.com.ar> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
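A sketch of the guard, assuming a fm_work_scheduled flag protected by the wear-levelling lock (the details differ from the actual patch):

    static void schedule_fm_refill(struct ubi_device *ubi)
    {
            spin_lock(&ubi->wl_lock);
            if (!ubi->fm_work_scheduled) {
                    ubi->fm_work_scheduled = 1;     /* cleared again once the work has run */
                    schedule_work(&ubi->fm_work);
            }
            spin_unlock(&ubi->wl_lock);
    }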
-
Ross Lagerwall authored
[ Upstream commit f0f39387 ] Commit ff1e22e7 ("xen/events: Mask a moving irq") open-coded irq_move_irq() but left out checking if the IRQ is disabled. This broke resuming from suspend since it tries to move a (disabled) irq without holding the IRQ's desc->lock. Fix it by adding in a check for disabled IRQs. The resulting stacktrace was: kernel BUG at /build/linux-UbQGH5/linux-4.4.0/kernel/irq/migration.c:31! invalid opcode: 0000 [#1] SMP Modules linked in: xenfs xen_privcmd ... CPU: 0 PID: 9 Comm: migration/0 Not tainted 4.4.0-22-generic #39-Ubuntu Hardware name: Xen HVM domU, BIOS 4.6.1-xs125180 05/04/2016 task: ffff88003d75ee00 ti: ffff88003d7bc000 task.ti: ffff88003d7bc000 RIP: 0010:[<ffffffff810e26e2>] [<ffffffff810e26e2>] irq_move_masked_irq+0xd2/0xe0 RSP: 0018:ffff88003d7bfc50 EFLAGS: 00010046 RAX: 0000000000000000 RBX: ffff88003d40ba00 RCX: 0000000000000001 RDX: 0000000000000001 RSI: 0000000000000100 RDI: ffff88003d40bad8 RBP: ffff88003d7bfc68 R08: 0000000000000000 R09: ffff88003d000000 R10: 0000000000000000 R11: 000000000000023c R12: ffff88003d40bad0 R13: ffffffff81f3a4a0 R14: 0000000000000010 R15: 00000000ffffffff FS: 0000000000000000(0000) GS:ffff88003da00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00007fd4264de624 CR3: 0000000037922000 CR4: 00000000003406f0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Stack: ffff88003d40ba38 0000000000000024 0000000000000000 ffff88003d7bfca0 ffffffff814c8d92 00000010813ef89d 00000000805ea732 0000000000000009 0000000000000024 ffff88003cc39b80 ffff88003d7bfce0 ffffffff814c8f66 Call Trace: [<ffffffff814c8d92>] eoi_pirq+0xb2/0xf0 [<ffffffff814c8f66>] __startup_pirq+0xe6/0x150 [<ffffffff814ca659>] xen_irq_resume+0x319/0x360 [<ffffffff814c7e75>] xen_suspend+0xb5/0x180 [<ffffffff81120155>] multi_cpu_stop+0xb5/0xe0 [<ffffffff811200a0>] ? cpu_stop_queue_work+0x80/0x80 [<ffffffff811203d0>] cpu_stopper_thread+0xb0/0x140 [<ffffffff810a94e6>] ? finish_task_switch+0x76/0x220 [<ffffffff810ca731>] ? __raw_callee_save___pv_queued_spin_unlock+0x11/0x20 [<ffffffff810a3935>] smpboot_thread_fn+0x105/0x160 [<ffffffff810a3830>] ? sort_range+0x30/0x30 [<ffffffff810a0588>] kthread+0xd8/0xf0 [<ffffffff810a04b0>] ? kthread_create_on_node+0x1e0/0x1e0 [<ffffffff8182568f>] ret_from_fork+0x3f/0x70 [<ffffffff810a04b0>] ? kthread_create_on_node+0x1e0/0x1e0 Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Cc: <stable@vger.kernel.org> Signed-off-by: David Vrabel <david.vrabel@citrix.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Stefano Stabellini authored
[ Upstream commit 702f9260 ] b4ff8389 is incomplete: it relies on nr_legacy_irqs() to get the number of legacy interrupts, when actually nr_legacy_irqs() returns 0 after probe_8259A(). Use NR_IRQS_LEGACY instead. Signed-off-by: Stefano Stabellini <sstabellini@kernel.org> CC: stable@vger.kernel.org Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Jiang Liu authored
[ Upstream commit 8abb850a ] Xen overrides __acpi_register_gsi and leaves __acpi_unregister_gsi as is. That means, an IRQ allocated by acpi_register_gsi_xen_hvm() or acpi_register_gsi_xen() will be freed by acpi_unregister_gsi_ioapic(), which may cause undesired effects. So override __acpi_unregister_gsi to NULL for safety. Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com> Tested-by: Sander Eikelenboom <linux@eikelenboom.it> Cc: Tony Luck <tony.luck@intel.com> Cc: xen-devel@lists.xenproject.org Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: David Vrabel <david.vrabel@citrix.com> Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: Graeme Gregory <graeme.gregory@linaro.org> Cc: Lv Zheng <lv.zheng@intel.com> Link: http://lkml.kernel.org/r/1421720467-7709-4-git-send-email-jiang.liu@linux.intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
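A sketch of the override, with the call site assumed (the hook names come from the commit text):

    /* wherever Xen installs its GSI registration hook ... */
    __acpi_register_gsi = acpi_register_gsi_xen_hvm;        /* or acpi_register_gsi_xen */
    __acpi_unregister_gsi = NULL;   /* never fall back to acpi_unregister_gsi_ioapic() */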
-
Oleg Nesterov authored
[ Upstream commit bf959931 ] The following program (simplified version of generated by syzkaller) #include <pthread.h> #include <unistd.h> #include <sys/ptrace.h> #include <stdio.h> #include <signal.h> void *thread_func(void *arg) { ptrace(PTRACE_TRACEME, 0,0,0); return 0; } int main(void) { pthread_t thread; if (fork()) return 0; while (getppid() != 1) ; pthread_create(&thread, NULL, thread_func, NULL); pthread_join(thread, NULL); return 0; } creates an unreapable zombie if /sbin/init doesn't use __WALL. This is not a kernel bug, at least in a sense that everything works as expected: debugger should reap a traced sub-thread before it can reap the leader, but without __WALL/__WCLONE do_wait() ignores sub-threads. Unfortunately, it seems that /sbin/init in most (all?) distributions doesn't use it and we have to change the kernel to avoid the problem. Note also that most init's use sys_waitid() which doesn't allow __WALL, so the necessary user-space fix is not that trivial. This patch just adds the "ptrace" check into eligible_child(). To some degree this matches the "tsk->ptrace" in exit_notify(), ->exit_signal is mostly ignored when the tracee reports to debugger. Or WSTOPPED, the tracer doesn't need to set this flag to wait for the stopped tracee. This obviously means the user-visible change: __WCLONE and __WALL no longer have any meaning for debugger. And I can only hope that this won't break something, but at least strace/gdb won't suffer. We could make a more conservative change. Say, we can take __WCLONE into account, or !thread_group_leader(). But it would be nice to not complicate these historical/confusing checks. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Reported-by: Dmitry Vyukov <dvyukov@google.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: Jan Kratochvil <jan.kratochvil@redhat.com> Cc: "Michael Kerrisk (man-pages)" <mtk.manpages@gmail.com> Cc: Pedro Alves <palves@redhat.com> Cc: Roland McGrath <roland@hack.frob.com> Cc: <syzkaller@googlegroups.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Tomáš Trnka authored
[ Upstream commit c0cb8bf3 ] The length of the GSS MIC token need not be a multiple of four bytes. It is then padded by XDR to a multiple of 4 B, but unwrap_integ_data() would previously only trim mic.len + 4 B. The remaining up to three bytes would then trigger a check in nfs4svc_decode_compoundargs(), leading to a "garbage args" error and mount failure: nfs4svc_decode_compoundargs: compound not properly padded! nfsd: failed to decode arguments! This would prevent older clients using the pre-RFC 4121 MIC format (37-byte MIC including a 9-byte OID) from mounting exports from v3.9+ servers using krb5i. The trimming was introduced by commit 4c190e2f ("sunrpc: trim off trailing checksum before returning decrypted or integrity authenticated buffer"). Fixes: 4c190e2f ("sunrpc: trim off trailing checksum...") Signed-off-by: Tomáš Trnka <ttrnka@mail.muni.cz> Cc: stable@vger.kernel.org Acked-by: Jeff Layton <jlayton@poochiereds.net> Signed-off-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
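A sketch of the padding rule in question (a plain helper, not the sunrpc code):

    /* XDR pads opaque data to a 4-byte boundary, so the amount to trim is the
     * MIC length rounded up to a multiple of 4, not a fixed mic.len + 4. */
    static unsigned int xdr_padded_len(unsigned int len)
    {
            return (len + 3) & ~3u;
    }
    /* e.g. a 37-byte pre-RFC 4121 MIC occupies 40 bytes on the wire, not 41 */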
-
Adrian Hunter authored
[ Upstream commit 265984b3 ] The CMD19/CMD14 bus width test has been found to be unreliable in some cases. It is not essential, so simply remove it. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: stable@vger.kernel.org Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Adrian Hunter authored
[ Upstream commit 9d65cb88 ] Intel host controllers are capable of doing the bus width test and of waiting while busy, so add the capability flags. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Matt Gumbel authored
[ Upstream commit 32ecd320 ] 008GE0 Toshiba mmc in some Intel Baytrail tablets responds to MMC_SEND_EXT_CSD in 450-600ms. This patch will... () Increase the long read time quirk timeout from 300ms to 600ms. Original author of that quirk says 300ms was only a guess and that the number may need to be raised in the future. () Add this specific MMC to the quirk Signed-off-by: Matt Gumbel <matthew.k.gumbel@intel.com> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: stable@vger.kernel.org Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Ville Syrjälä authored
[ Upstream commit 7045c368 ] When we read out the watermark state from the hardware we're supposed to transfer that into the active watermarks, but currently we fail to clear any part of the active watermarks that isn't explicitly written. Let's clear it all upfront. Looks like this has been like this since the beginning, when I added the readout. No idea why I didn't clear it up. Cc: Matt Roper <matthew.d.roper@intel.com> Fixes: 243e6a44 ("drm/i915: Init HSW watermark tracking in intel_modeset_setup_hw_state()") Cc: stable@vger.kernel.org Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463151318-14719-2-git-send-email-ville.syrjala@linux.intel.com (cherry picked from commit 15606534) Signed-off-by: Jani Nikula <jani.nikula@intel.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Rafael J. Wysocki authored
[ Upstream commit 3a17fb32 ] Grygorii Strashko reports: The PM runtime will be left disabled for the device if its .suspend_late() callback fails and async suspend is not allowed for this device. In this case device will not be added in dpm_late_early_list and dpm_resume_early() will ignore this device, as result PM runtime will be disabled for it forever (side effect: after 8 subsequent failures for the same device the PM runtime will be reenabled due to disable_depth overflow). To fix this problem, add devices to dpm_late_early_list regardless of whether or not device_suspend_late() returns errors for them. That will ensure failures in there to be handled consistently for all devices regardless of their async suspend/resume status. Reported-by: Grygorii Strashko <grygorii.strashko@ti.com> Tested-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Cc: All applicable <stable@vger.kernel.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Ricky Liang authored
[ Upstream commit affa80bd ] When running a 32-bit userspace on a 64-bit kernel, the UI_SET_PHYS ioctl needs to be treated with special care, as it has the pointer size encoded in the command. Signed-off-by: Ricky Liang <jcliang@chromium.org> Cc: stable@vger.kernel.org Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
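A sketch of the compat handling this implies; the _COMPAT define, the ioctl number and the handler shape are assumptions for illustration, not verified driver source:

    #ifdef CONFIG_COMPAT
    #define UI_SET_PHYS_COMPAT _IOW(UINPUT_IOCTL_BASE, 108, compat_uptr_t)

    static long uinput_compat_ioctl(struct file *file,
                                    unsigned int cmd, unsigned long arg)
    {
            if (cmd == UI_SET_PHYS_COMPAT)
                    cmd = UI_SET_PHYS;      /* remap to the native command value */

            return uinput_ioctl_handler(file, cmd, arg, compat_ptr(arg));
    }
    #endif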
-
Sachin Prabhu authored
[ Upstream commit b74cb9a8 ] The session key is the default keyring set for request_key operations. This session key is revoked when the user owning the session logs out. Any long running daemon processes started by this session ends up with revoked session keyring which prevents these processes from using the request_key mechanism from obtaining the krb5 keys. The problem has been reported by a large number of autofs users. The problem is also seen with multiuser mounts where the share may be used by processes run by a user who has since logged out. A reproducer using automount is available on the Red Hat bz. The patch creates a new keyring which is used to cache cifs spnego upcalls. Red Hat bz: 1267754 Signed-off-by: Sachin Prabhu <sprabhu@redhat.com> Reported-by: Scott Mayhew <smayhew@redhat.com> Reviewed-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com> CC: Stable <stable@vger.kernel.org> Signed-off-by: Steve French <smfrench@gmail.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Mark Brown authored
[ Upstream commit d3030d11 ] The ak4642 driver is using a regmap cache sync to restore the configuration of the chip on resume but (as Peter observed) does not actually define a register cache which means that the resume is never going to work and we trigger asserts in regmap. Fix this by enabling caching. Reported-by: Geert Uytterhoeven <geert@linux-m68k.org> Reported-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Tested-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Mark Brown <broonie@kernel.org> Cc: stable@vger.kernel.org Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
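A sketch of the shape of the fix: give regmap a cache so regcache_sync() has something to restore on resume. Only cache_type is the point here; the other fields are illustrative:

    static const struct regmap_config ak4642_regmap = {
            .reg_bits     = 8,
            .val_bits     = 8,
            .max_register = 0x1f,                   /* last valid ak4642 register */
            .cache_type   = REGCACHE_RBTREE,        /* without a cache, the resume sync is a no-op */
    };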
-
Axel Lin authored
[ Upstream commit f8ea6ceb ] The max_register settings for ak4642, ak4643 and ak4648 are wrong; fix them. According to the datasheet: the maximum valid register for ak4642 is 0x1f, the maximum valid register for ak4643 is 0x24, and the maximum valid register for ak4648 is 0x27. The default settings for ak4642 and ak4643 are the same for the 0x0 ~ 0x1f registers, so it's fine to use the same reg_default table with different num_reg_defaults settings. Signed-off-by: Axel Lin <axel.lin@ingics.com> Tested-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Dave Chinner authored
[ Upstream commit 7d3aa7fe ] We don't write back stale inodes so we should skip them in xfs_iflush_cluster, too. cc: <stable@vger.kernel.org> # 3.10.x- Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dave Chinner <david@fromorbit.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Dave Chinner authored
[ Upstream commit 51b07f30 ] Some careless idiot(*) wrote crap code in commit 1a3e8f3d ("xfs: convert inode cache lookups to use RCU locking") back in late 2010, and so xfs_iflush_cluster checks the wrong inode for whether it is still valid under RCU protection. Fix it to lock and check the correct inode. (*) Careless-idiot: Dave Chinner <dchinner@redhat.com> cc: <stable@vger.kernel.org> # 3.10.x- Discovered-by: Brain Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dave Chinner <david@fromorbit.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Dave Chinner authored
[ Upstream commit b1438f47 ] When a failure due to an inode buffer occurs, the error handling fails to abort the inode writeback correctly. This can result in the inode being reclaimed whilst still in the AIL, leading to use-after-free situations as well as filesystems that cannot be unmounted as the inode log items left in the AIL never get removed. Fix this by ensuring fatal errors from xfs_imap_to_bp() result in the inode flush being aborted correctly. cc: <stable@vger.kernel.org> # 3.10.x- Reported-by: Shyam Kaushik <shyam@zadarastorage.com> Diagnosed-by: Shyam Kaushik <shyam@zadarastorage.com> Tested-by: Shyam Kaushik <shyam@zadarastorage.com> Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dave Chinner <david@fromorbit.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Daniel Lezcano authored
[ Upstream commit e7387da5 ] Commit 0b89e9aa (cpuidle: delay enabling interrupts until all coupled CPUs leave idle) rightfully fixed a regression by letting the coupled idle state framework handle local interrupt enabling when the CPU is exiting an idle state. The current code checks if the idle state is coupled and, if so, it will let the coupled code enable interrupts. This way, it can decrement the ready-count before handling the interrupt. This mechanism prevents the other CPUs from waiting for a CPU which is handling interrupts. But the check is done against the state index returned by the back end driver's ->enter functions which could be different from the initial index passed as parameter to the cpuidle_enter_state() function. entered_state = target_state->enter(dev, drv, index); [ ... ] if (!cpuidle_state_is_coupled(drv, entered_state)) local_irq_enable(); [ ... ] If the 'index' is referring to a coupled idle state but the 'entered_state' is *not* coupled, then the interrupts are enabled again. All CPUs blocked on the sync barrier may busy loop longer if the CPU has interrupts to handle before decrementing the ready-count. That consumes more energy than it saves. Fixes: 0b89e9aa (cpuidle: delay enabling interrupts until all coupled CPUs leave idle) Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: 3.15+ <stable@vger.kernel.org> # 3.15+ [ rjw: Subject & changelog ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
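A sketch of the corrected check, mirroring the snippet quoted above:

    entered_state = target_state->enter(dev, drv, index);
    /* ... */
    if (!cpuidle_state_is_coupled(drv, index))      /* key off the requested state, not entered_state */
            local_irq_enable();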
-
Steve French authored
[ Upstream commit 897fba11 ] The wrong return code was being returned on SMB3 rmdir of a non-empty directory. For SMB3 (unlike for cifs), we attempt to delete a directory by setting the delete-on-close flag on the open. Windows clients set this flag via a set info (SET_FILE_DISPOSITION to set this flag) which properly checks if the directory is empty. With this patch, on smb3 mounts we correctly return "DIRECTORY NOT EMPTY" on attempts to remove a non-empty directory. Signed-off-by: Steve French <steve.french@primarydata.com> CC: Stable <stable@vger.kernel.org> Acked-by: Sachin Prabhu <sprabhu@redhat.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Stefan Metzmacher authored
[ Upstream commit 1a967d6c ] Only servers which map unknown users to guest will allow access using a non-null NTLMv2_Response. For Samba it's the "map to guest = bad user" option. BUG: https://bugzilla.samba.org/show_bug.cgi?id=11913 Signed-off-by: Stefan Metzmacher <metze@samba.org> CC: Stable <stable@vger.kernel.org> Signed-off-by: Steve French <smfrench@gmail.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Stefan Metzmacher authored
[ Upstream commit 777f69b8 ] Only servers which map unknown users to guest will allow access using a non-null NTChallengeResponse. For Samba it's the "map to guest = bad user" option. BUG: https://bugzilla.samba.org/show_bug.cgi?id=11913 Signed-off-by: Stefan Metzmacher <metze@samba.org> CC: Stable <stable@vger.kernel.org> Signed-off-by: Steve French <smfrench@gmail.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Stefan Metzmacher authored
[ Upstream commit fa8f3a35 ] Only servers which map unknown users to guest will allow access using a non-null LMChallengeResponse. For Samba it's the "map to guest = bad user" option. BUG: https://bugzilla.samba.org/show_bug.cgi?id=11913 Signed-off-by: Stefan Metzmacher <metze@samba.org> CC: Stable <stable@vger.kernel.org> Signed-off-by: Steve French <smfrench@gmail.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Stefan Metzmacher authored
[ Upstream commit cfda35d9 ] See [MS-NLMP] 3.2.5.1.2 Server Receives an AUTHENTICATE_MESSAGE from the Client: ... Set NullSession to FALSE If (AUTHENTICATE_MESSAGE.UserNameLen == 0 AND AUTHENTICATE_MESSAGE.NtChallengeResponse.Length == 0 AND (AUTHENTICATE_MESSAGE.LmChallengeResponse == Z(1) OR AUTHENTICATE_MESSAGE.LmChallengeResponse.Length == 0)) -- Special case: client requested anonymous authentication Set NullSession to TRUE ... Only servers which map unknown users to guest will allow access using a non-null NTChallengeResponse. For Samba it's the "map to guest = bad user" option. BUG: https://bugzilla.samba.org/show_bug.cgi?id=11913 CC: Stable <stable@vger.kernel.org> Signed-off-by: Stefan Metzmacher <metze@samba.org> Signed-off-by: Steve French <smfrench@gmail.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
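A sketch of that anonymous-session test in C; the structure is illustrative, not the cifs wire format:

    struct auth_msg {
            unsigned int user_name_len;
            unsigned int nt_challenge_response_len;
            unsigned int lm_challenge_response_len;
            const unsigned char *lm_challenge_response;
    };

    static bool ntlm_is_null_session(const struct auth_msg *m)
    {
            return m->user_name_len == 0 &&
                   m->nt_challenge_response_len == 0 &&
                   (m->lm_challenge_response_len == 0 ||
                    (m->lm_challenge_response_len == 1 &&
                     m->lm_challenge_response[0] == 0));        /* Z(1): a single zero byte */
    }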
-
Lyude authored
[ Upstream commit 255f0e7c ] During boot, MST hotplugs are generally expected (even if no physical hotplugging occurs) and result in DRM's connector topology changing. This means that using num_connector from the current mode configuration can lead to the number of connectors changing under us. This can lead to some nasty scenarios in fbcon: - We allocate an array to the size of dev->mode_config.num_connectors. - MST hotplug occurs, dev->mode_config.num_connectors gets incremented. - We try to loop through each element in the array using the new value of dev->mode_config.num_connectors, and end up going out of bounds since dev->mode_config.num_connectors is now larger than the array we allocated. fb_helper->connector_count, however, will always remain consistent while we do a modeset in fb_helper. Note: This is just polish for 4.7, Dave Airlie's drm_connector refcounting fixed these bugs for real. But it's good enough duct-tape for stable kernel backporting, since backporting the refcounting changes is way too invasive. Cc: stable@vger.kernel.org Signed-off-by: Lyude <cpaul@redhat.com> [danvet: Clarify why we need this. Also remove the now unused "dev" local variable to appease gcc.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: http://patchwork.freedesktop.org/patch/msgid/1463065021-18280-3-git-send-email-cpaul@redhat.com Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Maciej W. Rozycki authored
[ Upstream commit e49d3848 ] Fix a build regression from commit c9017757 ("MIPS: init upper 64b of vector registers when MSA is first used"): arch/mips/built-in.o: In function `enable_restore_fp_context': traps.c:(.text+0xbb90): undefined reference to `_init_msa_upper' traps.c:(.text+0xbb90): relocation truncated to fit: R_MIPS_26 against `_init_msa_upper' traps.c:(.text+0xbef0): undefined reference to `_init_msa_upper' traps.c:(.text+0xbef0): relocation truncated to fit: R_MIPS_26 against `_init_msa_upper' to !CONFIG_CPU_HAS_MSA configurations with older GCC versions, which are unable to figure out that calls to `_init_msa_upper' are indeed dead. Of the many ways to tackle this failure choose the approach we have already taken in `thread_msa_context_live'. [ralf@linux-mips.org: Drop patch segment to junk file.] Signed-off-by: Maciej W. Rozycki <macro@imgtec.com> Cc: stable@vger.kernel.org # v3.16+ Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/13271/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Prarit Bhargava authored
[ Upstream commit ad67b437 ] b84106b4 ("PCI: Disable IO/MEM decoding for devices with non-compliant BARs") disabled BAR sizing for BARs 0-5 of devices that don't comply with the PCI spec. But it didn't do anything for expansion ROM BARs, so we still try to size them, resulting in warnings like this on Broadwell-EP: pci 0000:ff:12.0: BAR 6: failed to assign [mem size 0x00000001 pref] Move the non-compliant BAR check from __pci_read_base() up to pci_read_bases() so it applies to the expansion ROM BAR as well as to BARs 0-5. Note that direct callers of __pci_read_base(), like sriov_init(), will now bypass this check. We haven't had reports of devices with broken SR-IOV BARs yet. [bhelgaas: changelog] Fixes: b84106b4 ("PCI: Disable IO/MEM decoding for devices with non-compliant BARs") Signed-off-by: Prarit Bhargava <prarit@redhat.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> CC: stable@vger.kernel.org CC: Thomas Gleixner <tglx@linutronix.de> CC: Ingo Molnar <mingo@redhat.com> CC: "H. Peter Anvin" <hpa@zytor.com> CC: Andi Kleen <ak@linux.intel.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Adrian Hunter authored
[ Upstream commit 1c447116 ] Some eMMCs set the partition switch timeout too low. Now typically eMMCs are considered a critical component (e.g. because they store the root file system) and consequently are expected to be reliable. Thus we can neglect the use case where eMMCs can't switch reliably and we might want a lower timeout to facilitate speedy recovery. Although we could employ a quirk for the cards that are affected (if we could identify them all), as described above, there is little benefit to having a low timeout, so instead simply set a minimum timeout. The minimum is set to 300ms somewhat arbitrarily - the examples that have been seen had a timeout of 10ms but were sometimes taking 60-70ms. Cc: stable@vger.kernel.org Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
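A sketch of the floor being applied; the constant name is illustrative, the 300 ms value is from the commit text:

    #define MMC_MIN_PART_SWITCH_TIME_MS 300

    static unsigned int part_switch_time_ms(unsigned int reported_ms)
    {
            /* some eMMCs report 10 ms but have been seen to take 60-70 ms */
            return reported_ms < MMC_MIN_PART_SWITCH_TIME_MS ?
                   MMC_MIN_PART_SWITCH_TIME_MS : reported_ms;
    }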
-
Steven Rostedt (Red Hat) authored
[ Upstream commit 59643d15 ] If the size passed to ring_buffer_resize() is greater than MAX_LONG - BUF_PAGE_SIZE then the DIV_ROUND_UP() will return zero. Here are the details: # echo 18014398509481980 > /sys/kernel/debug/tracing/buffer_size_kb tracing_entries_write() processes this and converts kb to bytes. 18014398509481980 << 10 = 18446744073709547520 and this is passed to ring_buffer_resize() as unsigned long size. size = DIV_ROUND_UP(size, BUF_PAGE_SIZE); Where DIV_ROUND_UP(a, b) is (a + b - 1)/b BUF_PAGE_SIZE is 4080 and here 18446744073709547520 + 4080 - 1 = 18446744073709551599 where 18446744073709551599 is still smaller than 2^64 2^64 - 18446744073709551599 = 17 But now 18446744073709551599 / 4080 = 4521260802379792 and size = size * 4080 = 18446744073709551360 This is checked to make sure it's still greater than 2 * 4080, which it is. Then we convert to the number of buffer pages needed. nr_page = DIV_ROUND_UP(size, BUF_PAGE_SIZE) but this time size is 18446744073709551360 and 2^64 - (18446744073709551360 + 4080 - 1) = -3823 Thus it overflows and the resulting number is less than 4080, which makes 3823 / 4080 = 0 and nr_pages is set to this. As we already checked against the minimum that nr_pages may be, this causes the logic to fail as well, and we crash the kernel. There's no reason to have the two DIV_ROUND_UP() (that's just a result of historical code changes), clean up the code and fix this bug. Cc: stable@vger.kernel.org # 3.5+ Fixes: 83f40318 ("ring-buffer: Make removal of ring buffer pages atomic") Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
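A standalone demonstration of the wrap-around described above (userspace C using a 64-bit unsigned type; not kernel code):

    #include <stdio.h>

    #define BUF_PAGE_SIZE 4080ULL
    #define DIV_ROUND_UP(a, b) (((a) + (b) - 1) / (b))

    int main(void)
    {
            unsigned long long size = 18014398509481980ULL << 10;  /* bytes requested */
            unsigned long long nr_pages;

            size = DIV_ROUND_UP(size, BUF_PAGE_SIZE) * BUF_PAGE_SIZE;
            nr_pages = DIV_ROUND_UP(size, BUF_PAGE_SIZE);   /* the "+ 4079" wraps past 2^64 */

            printf("size=%llu nr_pages=%llu\n", size, nr_pages);   /* prints nr_pages=0 */
            return 0;
    }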
-