- 05 Dec, 2017 9 commits
-
-
Dan Williams authored
commit 2bb6d283 upstream.

Patch series "introduce get_user_pages_longterm()", v2.

Here is a new get_user_pages api for cases where a driver intends to keep an elevated page count indefinitely. This is distinct from usages like iov_iter_get_pages where the elevated page counts are transient. The iov_iter_get_pages cases immediately turn around and submit the pages to a device driver which will put_page when the i/o operation completes (under kernel control).

In the longterm case userspace is responsible for dropping the page reference at some undefined point in the future. This is untenable for the filesystem-dax case where the filesystem is in control of the lifetime of the block / page and needs reasonable limits on how long it can wait for pages in a mapping to become idle.

Fixing filesystems to actually wait for dax pages to be idle before blocks from a truncate/hole-punch operation are repurposed is saved for a later patch series. Also, allowing longterm registration of dax mappings is a future patch series that introduces a "map with lease" semantic where the kernel can revoke a lease and force userspace to drop its page references.

I have also tagged these for -stable to purposely break cases that might assume that longterm memory registrations for filesystem-dax mappings were supported by the kernel. The behavior regression this policy change implies is one of the reasons we maintain the "dax enabled. Warning: EXPERIMENTAL, use at your own risk" notification when mounting a filesystem in dax mode.

It is worth noting the device-dax interface does not suffer the same constraints since it does not support file space management operations like hole-punch.

This patch (of 4):

Until there is a solution to the dma-to-dax vs truncate problem it is not safe to allow long-standing memory registrations against filesystem-dax vmas. Device-dax vmas do not have this problem and are explicitly allowed.

This is temporary until a "memory registration with layout-lease" mechanism can be implemented for the affected sub-systems (RDMA and V4L2).

[akpm@linux-foundation.org: use kcalloc()]
Link: http://lkml.kernel.org/r/151068939435.7446.13560129395419350737.stgit@dwillia2-desk3.amr.corp.intel.com
Fixes: 3565fce3 ("mm, x86: get_user_pages() for dax mappings")
Signed-off-by:
Dan Williams <dan.j.williams@intel.com> Suggested-by:
Christoph Hellwig <hch@lst.de> Cc: Doug Ledford <dledford@redhat.com> Cc: Hal Rosenstock <hal.rosenstock@gmail.com> Cc: Inki Dae <inki.dae@samsung.com> Cc: Jan Kara <jack@suse.cz> Cc: Jason Gunthorpe <jgg@mellanox.com> Cc: Jeff Moyer <jmoyer@redhat.com> Cc: Joonyoung Shim <jy0922.shim@samsung.com> Cc: Kyungmin Park <kyungmin.park@samsung.com> Cc: Mauro Carvalho Chehab <mchehab@kernel.org> Cc: Mel Gorman <mgorman@suse.de> Cc: Ross Zwisler <ross.zwisler@linux.intel.com> Cc: Sean Hefty <sean.hefty@intel.com> Cc: Seung-Woo Kim <sw0312.kim@samsung.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
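For orientation, a minimal sketch of the check described above, not the exact upstream code: after the regular gup completes, the returned vmas are scanned and any filesystem-dax vma causes the pin to be refused, while device-dax vmas pass. The function name, the vma_is_fsdax() predicate and the signature are assumptions made for illustration.

    /* Sketch only -- names and signatures are assumptions. */
    static long get_user_pages_longterm_sketch(unsigned long start,
                    unsigned long nr_pages, unsigned int gup_flags,
                    struct page **pages, struct vm_area_struct **vmas)
    {
            long rc, i;

            rc = get_user_pages(start, nr_pages, gup_flags, pages, vmas);
            if (rc <= 0)
                    return rc;

            for (i = 0; i < rc; i++) {
                    /* device-dax is fine: no truncate/hole-punch to race with */
                    if (!vma_is_fsdax(vmas[i]))
                            continue;
                    /* filesystem-dax: refuse the long-lived pin */
                    while (rc-- > 0)
                            put_page(pages[rc]);
                    return -EOPNOTSUPP;
            }
            return rc;
    }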
-
Dan Williams authored
commit 9702cffd upstream.

Similar to how device-dax enforces that the 'address', 'offset', and 'len' parameters to mmap() be aligned to the device's fundamental alignment, the same constraints apply to munmap(). Implement ->split() to fail munmap calls that violate the alignment constraint. Otherwise, we later fail VM_BUG_ON checks in the unmap_page_range() path with crash signatures of the form:

 vma ffff8800b60c8a88 start 00007f88c0000000 end 00007f88c0e00000
 next (null) prev (null) mm ffff8800b61150c0
 prot 8000000000000027 anon_vma (null) vm_ops ffffffffa0091240
 pgoff 0 file ffff8800b638ef80 private_data (null)
 flags: 0x380000fb(read|write|shared|mayread|maywrite|mayexec|mayshare|softdirty|mixedmap|hugepage)
 ------------[ cut here ]------------
 kernel BUG at mm/huge_memory.c:2014!
 [..]
 RIP: 0010:__split_huge_pud+0x12a/0x180
 [..]
 Call Trace:
  unmap_page_range+0x245/0xa40
  ? __vma_adjust+0x301/0x990
  unmap_vmas+0x4c/0xa0
  unmap_region+0xae/0x120
  ? __vma_rb_erase+0x11a/0x230
  do_munmap+0x276/0x410
  vm_munmap+0x6a/0xa0
  SyS_munmap+0x1d/0x30

Link: http://lkml.kernel.org/r/151130418681.4029.7118245855057952010.stgit@dwillia2-desk3.amr.corp.intel.com
Fixes: dee41079 ("/dev/dax, core: file operations and dax-mmap")
Signed-off-by:
Dan Williams <dan.j.williams@intel.com> Reported-by:
Jeff Moyer <jmoyer@redhat.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
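A sketch of what such a ->split() handler can look like on the device-dax side (field and handler names are assumptions for illustration, not the literal upstream diff): the handler rejects any proposed vma boundary that is not aligned to the device's fundamental alignment, so do_munmap() returns -EINVAL instead of tripping the VM_BUG_ON later.

    /* Sketch; struct layout and names are illustrative. */
    static int dev_dax_split(struct vm_area_struct *vma, unsigned long addr)
    {
            struct dev_dax *dev_dax = vma->vm_file->private_data;

            if (!IS_ALIGNED(addr, dev_dax->region->align))
                    return -EINVAL;
            return 0;
    }

    static const struct vm_operations_struct dax_vm_ops = {
            .fault = dev_dax_fault,   /* existing handlers unchanged */
            .split = dev_dax_split,   /* new: veto unaligned splits */
    };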
-
Dan Williams authored
commit 31383c68 upstream. Patch series "device-dax: fix unaligned munmap handling" When device-dax is operating in huge-page mode we want it to behave like hugetlbfs and fail attempts to split vmas into unaligned ranges. It would be messy to teach the munmap path about device-dax alignment constraints in the same (hstate) way that hugetlbfs communicates this constraint. Instead, these patches introduce a new ->split() vm operation. This patch (of 2): The device-dax interface has similar constraints as hugetlbfs in that it requires the munmap path to unmap in huge page aligned units. Rather than add more custom vma handling code in __split_vma() introduce a new vm operation to perform this vma specific check. Link: http://lkml.kernel.org/r/151130418135.4029.6783191281930729710.stgit@dwillia2-desk3.amr.corp.intel.com Fixes: dee41079 ("/dev/dax, core: file operations and dax-mmap") Signed-off-by:
Dan Williams <dan.j.williams@intel.com> Cc: Jeff Moyer <jmoyer@redhat.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
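The generic hook itself is small; roughly, as a sketch following the description above rather than the exact diff, a ->split callback is added to vm_operations_struct and __split_vma() consults it before doing any work:

    /* include/linux/mm.h (sketch): new optional callback */
    struct vm_operations_struct {
            /* ...existing callbacks... */
            int (*split)(struct vm_area_struct *area, unsigned long addr);
    };

    /* mm/mmap.c, __split_vma() (sketch): let the driver veto the split */
    if (vma->vm_ops && vma->vm_ops->split) {
            int err = vma->vm_ops->split(vma, addr);

            if (err)
                    return err;
    }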
-
Dan Williams authored
commit 1501899a upstream.

Currently only get_user_pages_fast() can safely handle the writable gup case due to its use of pud_access_permitted() to check whether the pud entry is writable. In the gup slow path pud_write() is used instead of pud_access_permitted(), and to date it has been left unimplemented, simply calling BUG_ON():

 kernel BUG at ./include/linux/hugetlb.h:244!
 [..]
 RIP: 0010:follow_devmap_pud+0x482/0x490
 [..]
 Call Trace:
  follow_page_mask+0x28c/0x6e0
  __get_user_pages+0xe4/0x6c0
  get_user_pages_unlocked+0x130/0x1b0
  get_user_pages_fast+0x89/0xb0
  iov_iter_get_pages_alloc+0x114/0x4a0
  nfs_direct_read_schedule_iovec+0xd2/0x350
  ? nfs_start_io_direct+0x63/0x70
  nfs_file_direct_read+0x1e0/0x250
  nfs_file_read+0x90/0xc0

For now this just implements a simple check for the _PAGE_RW bit, similar to pmd_write. However, this implies that the gup-slow-path check is missing the extra checks that the gup-fast-path performs with pud_access_permitted. Later patches will align all checks to use the 'access_permitted' helper if the architecture provides it.

Note that the generic 'access_permitted' helper fallback is the simple _PAGE_RW check on architectures that do not define the 'access_permitted' helper(s).

[dan.j.williams@intel.com: fix powerpc compile error]
Link: http://lkml.kernel.org/r/151129126165.37405.16031785266675461397.stgit@dwillia2-desk3.amr.corp.intel.com
Link: http://lkml.kernel.org/r/151043109938.2842.14834662818213616199.stgit@dwillia2-desk3.amr.corp.intel.com
Fixes: a00cc7d9 ("mm, x86: add support for PUD-sized transparent hugepages")
Signed-off-by:
Dan Williams <dan.j.williams@intel.com> Reported-by:
Stephen Rothwell <sfr@canb.auug.org.au> Acked-by: Thomas Gleixner <tglx@linutronix.de> [x86] Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Will Deacon <will.deacon@arm.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Arnd Bergmann <arnd@arndb.de> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
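The x86 variant of that simple check looks roughly like this (a sketch of the idea, mirroring pmd_write() as the text says; the exact header placement is an assumption):

    /* sketch: simple _PAGE_RW based check, modeled on pmd_write() */
    #define pud_write pud_write
    static inline int pud_write(pud_t pud)
    {
            return pud_flags(pud) & _PAGE_RW;
    }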
-
Mike Kravetz authored
commit 63cd4489 upstream. If the call to __alloc_contig_migrate_range() in alloc_contig_range() returns -EBUSY, processing continues so that test_pages_isolated() is called, where there is a tracepoint to identify the busy pages. However, it is possible for busy pages to become available between the calls to these two routines. In this case, the range of pages may be allocated. Unfortunately, the original return code (ret == -EBUSY) is still set and returned to the caller. Therefore, the caller believes the pages were not allocated and they are leaked. Update the comment to indicate that allocation is still possible even if __alloc_contig_migrate_range returns -EBUSY. Also, clear the return code in this case so that it is not accidentally used or returned to the caller. Link: http://lkml.kernel.org/r/20171122185214.25285-1-mike.kravetz@oracle.com Fixes: 8ef5849f ("mm/cma: always check which page caused allocation failure") Signed-off-by:
Mike Kravetz <mike.kravetz@oracle.com> Acked-by:
Vlastimil Babka <vbabka@suse.cz> Acked-by:
Michal Hocko <mhocko@suse.com> Acked-by:
Johannes Weiner <hannes@cmpxchg.org> Acked-by:
Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Michal Nazarewicz <mina86@mina86.com> Cc: Laura Abbott <labbott@redhat.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Mel Gorman <mgorman@techsingularity.net> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
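In code terms the fix amounts to something like the following fragment inside alloc_contig_range() (sketch based on the description above; local variables as in the surrounding function):

    ret = __alloc_contig_migrate_range(&cc, start, end);
    if (ret && ret != -EBUSY)
            goto done;
    /*
     * -EBUSY is not fatal here: pages reported busy may have been freed
     * in the meantime, and test_pages_isolated() below is what actually
     * decides whether the range can be allocated.  Clear the stale
     * error so it cannot leak back to the caller.
     */
    ret = 0;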
-
Kirill A. Shutemov authored
commit a8f97366 upstream. Currently, we unconditionally make the page table dirty in touch_pmd(). This may result in a false-positive can_follow_write_pmd(). We can avoid that situation by only making the page table entry dirty if the caller asks for write access -- FOLL_WRITE. The patch also changes touch_pud() in the same way. Signed-off-by:
Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Hugh Dickins <hughd@google.com> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
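After the change, touch_pmd() looks roughly like this (touch_pud() is analogous); a sketch of the idea, not the verbatim diff:

    static void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
                          pmd_t *pmd, int flags)
    {
            pmd_t _pmd;

            _pmd = pmd_mkyoung(*pmd);
            /* only dirty the entry when the caller wants write access */
            if (flags & FOLL_WRITE)
                    _pmd = pmd_mkdirty(_pmd);
            if (pmdp_set_access_flags(vma, addr & HPAGE_PMD_MASK,
                                      pmd, _pmd, flags & FOLL_WRITE))
                    update_mmu_cache_pmd(vma, addr, pmd);
    }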
-
Wang Nan authored
commit 687cb088 upstream.

tlb_gather_mmu(&tlb, mm, 0, -1) means gathering the whole virtual memory space. In this case, tlb->fullmm is true. Some architectures, like arm64, don't flush the TLB when tlb->fullmm is true: commit 5a7862e8 ("arm64: tlbflush: avoid flushing when fullmm == 1"), which causes leaking of TLB entries.

Will clarifies his patch:

 "Basically, we tag each address space with an ASID (PCID on x86) which is resident in the TLB. This means we can elide TLB invalidation when pulling down a full mm because we won't ever assign that ASID to another mm without doing TLB invalidation elsewhere (which actually just nukes the whole TLB). I think that means that we could potentially not fault on a kernel uaccess, because we could hit in the TLB"

There could be a window between complete_signal() sending an IPI to other cores and all threads sharing this mm actually being kicked off the cores. In this window, the oom reaper may call tlb_flush_mmu_tlbonly() to flush the TLB and then free pages. However, due to the above problem, the TLB entries are not really flushed on arm64, so other threads can still access these pages through stale TLB entries. Moreover, a copy_to_user() can also write to these pages without generating a page fault, causing use-after-free bugs.

This patch gathers each vma instead of gathering the full vm space. In this case tlb->fullmm is not true. The behavior of the oom reaper becomes similar to munmapping before do_exit, which should be safe for all archs.

Link: http://lkml.kernel.org/r/20171107095453.179940-1-wangnan0@huawei.com
Fixes: aac45363 ("mm, oom: introduce oom reaper")
Signed-off-by:
Wang Nan <wangnan0@huawei.com> Acked-by:
Michal Hocko <mhocko@suse.com> Acked-by:
David Rientjes <rientjes@google.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Will Deacon <will.deacon@arm.com> Cc: Bob Liu <liubo95@huawei.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Roman Gushchin <guro@fb.com> Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru> Cc: Andrea Arcangeli <aarcange@redhat.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
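The reaper loop then gathers one vma at a time, roughly as below (a sketch; helper names such as can_madv_dontneed_vma() follow the 4.14-era code but are not guaranteed verbatim):

    for (vma = mm->mmap; vma; vma = vma->vm_next) {
            if (!can_madv_dontneed_vma(vma))
                    continue;
            if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED)) {
                    struct mmu_gather tlb;

                    /* per-vma range, so tlb->fullmm stays false and the
                     * TLB flush is not elided on arm64 */
                    tlb_gather_mmu(&tlb, mm, vma->vm_start, vma->vm_end);
                    unmap_page_range(&tlb, vma, vma->vm_start,
                                     vma->vm_end, NULL);
                    tlb_finish_mmu(&tlb, vma->vm_start, vma->vm_end);
            }
    }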
-
Michal Hocko authored
commit 4b81cb2f upstream.

drain_all_pages backs off when called from a kworker context since commit 0ccce3b9 ("mm, page_alloc: drain per-cpu pages from workqueue context"), because the original IPI-based pcp draining has been replaced by a WQ-based one and the check wanted to prevent recursion and inter-worker dependencies. This made some sense at the time because the system WQ was used, and one worker holding the lock could be blocked while waiting for new workers to emerge, which can be a problem under OOM conditions.

Since then commit ce612879 ("mm: move pcp and lru-pcp draining into single wq") has moved draining to a dedicated (mm_percpu_wq) WQ with a rescuer, so we shouldn't depend on any other WQ activity to make forward progress. Calling drain_all_pages from a worker context is therefore safe as long as this doesn't happen from mm_percpu_wq itself, which is not the case because all its workers are required to _not_ depend on any MM locks.

Why is this a problem in the first place? ACPI-driven memory hot-remove (acpi_device_hotplug) is executed from the worker context. We end up calling __offline_pages to free all the pages, and that requires both lru_add_drain_all_cpuslocked and drain_all_pages to do their job; otherwise we can have dangling pages on pcp lists and fail the offline operation (__test_page_isolated_in_pageblock would see a page with 0 ref count but without PageBuddy set).

Fix the issue by removing the worker check in drain_all_pages. lru_add_drain_all_cpuslocked doesn't have this restriction so it works as expected.

Link: http://lkml.kernel.org/r/20170828093341.26341-1-mhocko@kernel.org
Fixes: 0ccce3b9 ("mm, page_alloc: drain per-cpu pages from workqueue context")
Signed-off-by:
Michal Hocko <mhocko@suse.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Tejun Heo <tj@kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
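Concretely, the removed early bail-out in drain_all_pages() had roughly this shape (sketch, not the exact hunk):

    /* removed: with a dedicated mm_percpu_wq plus rescuer, draining from
     * a worker no longer risks recursion or blocked forward progress */
    if (current->flags & PF_WQ_WORKER)
            return;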
-
Stefan Brüns authored
commit 9968e12a upstream. Commit f9cf3b28 ("platform/x86: hp-wmi: Refactor dock and tablet state fetchers") consolidated the methods for docking and laptop mode detection, but omitted to apply the correct mask for the laptop mode (it always uses the constant for docking). Fixes: f9cf3b28 ("platform/x86: hp-wmi: Refactor dock and tablet state fetchers") Signed-off-by:
Stefan Brüns <stefan.bruens@rwth-aachen.de> Cc: Michel Dänzer <michel@daenzer.net> Signed-off-by:
Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
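The shared helper simply tests a caller-supplied bit, so each state fetcher must pass its own mask rather than reusing the dock constant; roughly (function and constant names are illustrative of the hp-wmi driver, not guaranteed verbatim):

    static int hp_wmi_hw_state(int mask)
    {
            int state = hp_wmi_read_int(HPWMI_HARDWARE_QUERY);

            if (state < 0)
                    return state;
            /* the bug: one caller passed HPWMI_DOCK_MASK for both states */
            return !!(state & mask);
    }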
-
- 30 Nov, 2017 31 commits
-
-
Greg Kroah-Hartman authored
-
Sasha Neftin authored
commit b10effb9 upstream. Intel® 100/200 Series Chipset platforms reduced the round-trip latency for the LAN Controller DMA accesses, causing, in some high-performance cases, a buffer overrun while the I219 LAN Connected Device is processing the DMA transactions. I219LM and I219V devices can fall into an unrecoverable Tx hang under very stressful UDP traffic and repeated reconnection of the Ethernet cable. This Tx hang of the LAN Controller is only recovered if the system is rebooted. Slightly slow down DMA access by reducing the number of outstanding requests. This workaround could have an impact on TCP traffic performance on the platform. Disabling TSO eliminates the performance loss for TCP traffic without a noticeable impact on CPU performance. Please refer to the I218/I219 specification update: https://www.intel.com/content/www/us/en/embedded/products/networking/ethernet-connection-i218-family-documentation.html Signed-off-by: Sasha Neftin <sasha.neftin@intel.com> Reviewed-by:
Dima Ruinskiy <dima.ruinskiy@intel.com> Reviewed-by:
Raanan Avargil <raanan.avargil@intel.com> Tested-by:
Aaron Brown <aaron.f.brown@intel.com> Signed-off-by:
Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by:
Amit Pundir <amit.pundir@linaro.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Benjamin Poirier authored
commit 4aea7a5c upstream. When e1000e_poll() is not fast enough to keep up with incoming traffic, the adapter (when operating in msix mode) raises the Other interrupt to signal Receiver Overrun. This is a double problem because 1) at the moment e1000_msix_other() assumes that it is only called in case of Link Status Change and 2) if the condition persists, the interrupt is repeatedly raised again in quick succession. Ideally we would configure the Other interrupt to not be raised in case of receiver overrun but this doesn't seem possible on this adapter. Instead, we handle the first part of the problem by reverting to the practice of reading ICR in the other interrupt handler, like before commit 16ecba59 ("e1000e: Do not read ICR in Other interrupt"). Thanks to commit 0a8047ac ("e1000e: Fix msi-x interrupt automask") which cleared IAME from CTRL_EXT, reading ICR doesn't interfere with RxQ0, TxQ0 interrupts anymore. We handle the second part of the problem by not re-enabling the Other interrupt right away when there is overrun. Instead, we wait until traffic subsides, napi polling mode is exited and interrupts are re-enabled. Reported-by:
Lennart Sorensen <lsorense@csclub.uwaterloo.ca> Fixes: 16ecba59 ("e1000e: Do not read ICR in Other interrupt") Signed-off-by:
Benjamin Poirier <bpoirier@suse.com> Tested-by:
Aaron Brown <aaron.f.brown@intel.com> Signed-off-by:
Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by:
Amit Pundir <amit.pundir@linaro.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
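A sketch of the reworked handler following that description (structure and ordering are illustrative; the er32/ew32 accessors and flag names follow the e1000e driver):

    static irqreturn_t e1000_msix_other(int irq, void *data)
    {
            struct net_device *netdev = data;
            struct e1000_adapter *adapter = netdev_priv(netdev);
            struct e1000_hw *hw = &adapter->hw;
            u32 icr = er32(ICR);            /* read ICR again, as before 16ecba59 */
            bool enable = true;

            if (icr & E1000_ICR_RXO) {      /* receiver overrun */
                    ew32(ICR, E1000_ICR_RXO);
                    enable = false;         /* re-armed once napi polling ends */
            }
            if (icr & E1000_ICR_LSC) {      /* link status change */
                    ew32(ICR, E1000_ICR_LSC);
                    hw->mac.get_link_status = true;
                    if (!test_bit(__E1000_DOWN, &adapter->state))
                            mod_timer(&adapter->watchdog_timer, jiffies + 1);
            }

            if (enable && !test_bit(__E1000_DOWN, &adapter->state))
                    ew32(IMS, E1000_IMS_OTHER);

            return IRQ_HANDLED;
    }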
-
Benjamin Poirier authored
commit 19110cfb upstream. Lennart reported the following race condition:

 \ e1000_watchdog_task
     \ e1000e_has_link
         \ hw->mac.ops.check_for_link() === e1000e_check_for_copper_link
             /* link is up */
             mac->get_link_status = false;

 /* interrupt */
 \ e1000_msix_other
     hw->mac.get_link_status = true;

         link_active = !hw->mac.get_link_status
         /* link_active is false, wrongly */

This problem arises because the single flag get_link_status is used to signal two different states: link status needs checking and link status is down. Avoid the problem by using the return value of .check_for_link to signal the link status to e1000e_has_link(). Reported-by:
Lennart Sorensen <lsorense@csclub.uwaterloo.ca> Signed-off-by:
Benjamin Poirier <bpoirier@suse.com> Tested-by:
Aaron Brown <aaron.f.brown@intel.com> Signed-off-by:
Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by:
Amit Pundir <amit.pundir@linaro.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
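The copper branch of e1000e_has_link() then keys off the return value instead of re-reading the shared flag; a sketch of the idea:

    if (hw->mac.get_link_status) {
            /* check_for_link() now returns 1 when the link has just come
             * up, so a concurrent e1000_msix_other() setting the flag
             * again can no longer make us report the link as down */
            ret_val = hw->mac.ops.check_for_link(hw);
            link_active = ret_val > 0;
    } else {
            link_active = true;
    }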
-
Benjamin Poirier authored
commit d3509f8b upstream. All the helpers return -E1000_ERR_PHY. Signed-off-by:
Benjamin Poirier <bpoirier@suse.com> Tested-by:
Aaron Brown <aaron.f.brown@intel.com> Signed-off-by:
Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by:
Amit Pundir <amit.pundir@linaro.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Benjamin Poirier authored
commit c4c40e51 upstream. In case of error from e1e_rphy(), the loop will exit early and "success" will be set to true erroneously. Signed-off-by:
Benjamin Poirier <bpoirier@suse.com> Tested-by:
Aaron Brown <aaron.f.brown@intel.com> Signed-off-by:
Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by:
Amit Pundir <amit.pundir@linaro.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Luca Coelho authored
commit dac4df1c upstream. Newer firmware versions (such as iwlwifi-8000C-34.ucode) have introduced an API change in the SCAN_REQ_UMAC command that is not backwards compatible. The driver needs to detect and use the new API format when the firmware reports it, otherwise the scan command will not work properly, causing a command timeout. Fix this by adding a TLV that tells the driver that the new API is in use and use the correct structures for it. This fixes https://bugzilla.kernel.org/show_bug.cgi?id=197591 Fixes: d7a5b3e9 ("iwlwifi: mvm: bump API to 34 for 8000 and up") Signed-off-by:
Luca Coelho <luciano.coelho@intel.com> Cc: Thomas Backlund <tmb@mageia.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Luca Coelho authored
commit dbc89253 upstream. A lot of PCI IDs were missing and there were some problems with the configuration and firmware selection for devices on the 9000 series. Fix the firmware selection by adding files for the B-steps; add configuration for some integrated devices; and add a bunch of PCI IDs (mostly for integrated devices) that were missing from the driver's list. Without this patch, a lot of devices will not be recognized or will try to load the wrong firmware file. Signed-off-by:
Luca Coelho <luciano.coelho@intel.com> Cc: Thomas Backlund <tmb@mageia.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ihab Zhaika authored
commit d669fc2d upstream. Add three new PCI IDs for the 8260 series. Signed-off-by:
Ihab Zhaika <ihab.zhaika@intel.com> Signed-off-by:
Luca Coelho <luciano.coelho@intel.com> Cc: Thomas Backlund <tmb@mageia.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ihab Zhaika authored
commit 7cddbef4 upstream. Add two new PCI IDs for the 8265 series. Signed-off-by:
Ihab Zhaika <ihab.zhaika@intel.com> Signed-off-by:
Luca Coelho <luciano.coelho@intel.com> Cc: Thomas Backlund <tmb@mageia.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ihab Zhaika authored
commit 57b36f7f upstream. Add four new PCI IDs for the a000 series. Signed-off-by:
Ihab Zhaika <ihab.zhaika@intel.com> Signed-off-by:
Luca Coelho <luciano.coelho@intel.com> Cc: Thomas Backlund <tmb@mageia.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Luca Coelho authored
commit 1105a337 upstream. It's hard to find values that are missing in the list, so sorting the values and comparing them makes it much easier. To simplify this task, sort the devices in the list. Signed-off-by:
Luca Coelho <luciano.coelho@intel.com> Cc: Thomas Backlund <tmb@mageia.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Oren Givon authored
commit d048b36b upstream. Add a new a000 device with PCI ID (0x2720, 0x0030). Signed-off-by:
Oren Givon <oren.givon@intel.com> Signed-off-by:
Luca Coelho <luciano.coelho@intel.com> Cc: Thomas Backlund <tmb@mageia.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Oren Givon authored
commit f7f5873b upstream. The PCI ID (0x2720, 0x0070) was set with the config struct iwla000_2ax_cfg_hr instead of iwla000_2ac_cfg_hr_cdb. Fixes: 175b87c6 ("iwlwifi: add the new a000_2ax series") Signed-off-by:
Oren Givon <oren.givon@intel.com> Signed-off-by:
Luca Coelho <luciano.coelho@intel.com> Cc: Thomas Backlund <tmb@mageia.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Neil Armstrong authored
commit 4ee8e51b upstream. This year, Amlogic updated the ARM Trusted Firmware reserved memory mapping for Meson GXL SoCs, and products sold since May 2017 use this alternate reserved memory mapping. But products had been sold using the previous mapping. This issue has been explained in [1] and a dynamic solution is yet to be found to avoid losing another 3 Mbytes of reservable memory. In the meantime, this patch adds this alternate memory zone only for the GXL and GXM SoCs, since new GXBB-based products stopped earlier. [1] http://lists.infradead.org/pipermail/linux-amlogic/2017-October/004860.html Fixes: bba8e3f4 ("ARM64: dts: meson-gx: Add firmware reserved memory zones") Reported-by:
Jerome Brunet <jbrunet@baylibre.com> Signed-off-by:
Neil Armstrong <narmstrong@baylibre.com> Signed-off-by:
Kevin Hilman <khilman@baylibre.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Stanimir Varbanov authored
commit e69b987a upstream. This addresses the wrong behavior of the decoder stop command by rewriting it. The new implementation enqueues an empty buffer on the decoder input buffer queue to signal end-of-stream. The client should stop queuing buffers on the V4L2 Output queue and continue queuing/dequeuing buffers on the Capture queue. This process will continue until the client receives a buffer with the V4L2_BUF_FLAG_LAST flag raised, which means that this is the last decoded buffer with data. Signed-off-by:
Stanimir Varbanov <stanimir.varbanov@linaro.org> Tested-by:
Nicolas Dufresne <nicolas.dufresne@collabora.com> Signed-off-by:
Hans Verkuil <hans.verkuil@cisco.com> Signed-off-by:
Mauro Carvalho Chehab <mchehab@osg.samsung.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
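From the client's point of view the drain sequence described above looks roughly like this (userspace sketch using the standard V4L2 ioctls from <linux/videodev2.h>; error handling and buffer re-queueing elided):

    struct v4l2_decoder_cmd cmd = { .cmd = V4L2_DEC_CMD_STOP };
    struct v4l2_plane planes[VIDEO_MAX_PLANES];
    struct v4l2_buffer buf;

    ioctl(fd, VIDIOC_DECODER_CMD, &cmd);    /* stop queueing OUTPUT buffers */

    for (;;) {
            memset(&buf, 0, sizeof(buf));
            memset(planes, 0, sizeof(planes));
            buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
            buf.memory = V4L2_MEMORY_MMAP;
            buf.m.planes = planes;
            buf.length = VIDEO_MAX_PLANES;

            if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0)
                    break;

            /* ...consume the decoded frame... */

            if (buf.flags & V4L2_BUF_FLAG_LAST)
                    break;          /* last buffer with data; drain complete */
    }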
-
Stanimir Varbanov authored
commit 5232c37c upstream. This fixes the wrongly filled bytesused field of the v4l2_plane structure by including data_offset in the plane. Also, fill data_offset and bytesused for capture-type buffers only. Signed-off-by:
Stanimir Varbanov <stanimir.varbanov@linaro.org> Signed-off-by:
Hans Verkuil <hans.verkuil@cisco.com> Signed-off-by:
Mauro Carvalho Chehab <mchehab@osg.samsung.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Stanimir Varbanov authored
commit cd1a77e3 upstream. This change will fix an issue with dma_free size found with DMA API debug enabled. Signed-off-by:
Stanimir Varbanov <stanimir.varbanov@linaro.org> Signed-off-by:
Hans Verkuil <hans.verkuil@cisco.com> Signed-off-by:
Mauro Carvalho Chehab <mchehab@osg.samsung.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ricardo Ribalda Delgado authored
commit 9cac9d2f upstream. VIDIOC_DQEVENT and VIDIOC_QUERY_EXT_CTRL should give the same output for the control flags field. This patch creates a new function, user_flags(), which calculates the user-exported flags value (which is different from the kernel-internal flags structure). This function is then used by all the code that exports the internal flags to userspace. Reported-by:
Dimitrios Katsaros <patcherwork@gmail.com> Signed-off-by:
Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com> Signed-off-by:
Hans Verkuil <hans.verkuil@cisco.com> Signed-off-by:
Mauro Carvalho Chehab <mchehab@s-opensource.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
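The helper itself is small; roughly as below (a sketch consistent with the description, the HAS_PAYLOAD bit being the piece both ioctl paths must agree on):

    static u32 user_flags(const struct v4l2_ctrl *ctrl)
    {
            u32 flags = ctrl->flags;

            /* userspace sees HAS_PAYLOAD for pointer-type controls; the
             * kernel-internal flags field does not carry this bit */
            if (ctrl->is_ptr)
                    flags |= V4L2_CTRL_FLAG_HAS_PAYLOAD;

            return flags;
    }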
-
Johan Hovold authored
commit 6c3b047f upstream. Make sure we actually have an Interface Association Descriptor before dereferencing it during probe, to avoid a NULL-pointer dereference. Fixes: e0d3bafd ("V4L/DVB (10954): Add cx231xx USB driver") Reported-by:
Andrey Konovalov <andreyknvl@google.com> Signed-off-by:
Johan Hovold <johan@kernel.org> Tested-by:
Andrey Konovalov <andreyknvl@google.com> Signed-off-by:
Hans Verkuil <hans.verkuil@cisco.com> Signed-off-by:
Mauro Carvalho Chehab <mchehab@osg.samsung.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Sean Young authored
commit 829bbf26 upstream. When receiving an NEC repeat, rc_repeat() is called and then rc_keydown() with the last decoded scancode. That last call is redundant. Fixes: 265a2988 ("media: rc-core: consistent use of rc_repeat()") Signed-off-by:
Sean Young <sean@mess.org> Signed-off-by:
Mauro Carvalho Chehab <mchehab@s-opensource.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Sean Young authored
commit 3e45067f upstream. The ioctl LIRC_SET_REC_TIMEOUT would set a timeout of 704ns if called with a timeout of 4294968us: the microsecond value is multiplied by 1000, and 4294968 * 1000 = 4,294,968,000 wraps a 32-bit value (2^32 = 4,294,967,296) around to 704. Signed-off-by:
Sean Young <sean@mess.org> Signed-off-by:
Mauro Carvalho Chehab <mchehab@s-opensource.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
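A guard of roughly this shape avoids the wrap (sketch; the helper name and the rc_dev field semantics, with timeouts stored in nanoseconds, are assumptions based on the 4.14-era code):

    static int set_rec_timeout_sketch(struct rc_dev *dev, u32 timeout_us)
    {
            u32 timeout_ns;

            /* 4294968us * 1000 = 4294968000, which wraps a u32 to 704 */
            if (timeout_us > U32_MAX / 1000)
                    return -EINVAL;

            timeout_ns = timeout_us * 1000;
            if (timeout_ns < dev->min_timeout || timeout_ns > dev->max_timeout)
                    return -EINVAL;

            dev->timeout = timeout_ns;
            return 0;
    }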
-
Michele Baldessari authored
commit b3120d2c upstream. Firmware load on AS102 is using the stack which is not allowed any longer. We currently fail with: kernel: transfer buffer not dma capable kernel: ------------[ cut here ]------------ kernel: WARNING: CPU: 0 PID: 598 at drivers/usb/core/hcd.c:1595 usb_hcd_map_urb_for_dma+0x41d/0x620 kernel: Modules linked in: amd64_edac_mod(-) edac_mce_amd as102_fe dvb_as102(+) kvm_amd kvm snd_hda_codec_realtek dvb_core snd_hda_codec_generic snd_hda_codec_hdmi snd_hda_intel snd_hda_codec irqbypass crct10dif_pclmul crc32_pclmul snd_hda_core snd_hwdep snd_seq ghash_clmulni_intel sp5100_tco fam15h_power wmi k10temp i2c_piix4 snd_seq_device snd_pcm snd_timer parport_pc parport tpm_infineon snd tpm_tis soundcore tpm_tis_core tpm shpchp acpi_cpufreq xfs libcrc32c amdgpu amdkfd amd_iommu_v2 radeon hid_logitech_hidpp i2c_algo_bit drm_kms_helper crc32c_intel ttm drm r8169 mii hid_logitech_dj kernel: CPU: 0 PID: 598 Comm: systemd-udevd Not tainted 4.13.10-200.fc26.x86_64 #1 kernel: Hardware name: ASUS All Series/AM1I-A, BIOS 0505 03/13/2014 kernel: task: ffff979933b24c80 task.stack: ffffaf83413a4000 kernel: RIP: 0010:usb_hcd_map_urb_for_dma+0x41d/0x620 systemd-fsck[659]: /dev/sda2: clean, 49/128016 files, 268609/512000 blocks kernel: RSP: 0018:ffffaf83413a7728 EFLAGS: 00010282 systemd-udevd[604]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. kernel: RAX: 000000000000001f RBX: ffff979930bce780 RCX: 0000000000000000 kernel: RDX: 0000000000000000 RSI: ffff97993ec0e118 RDI: ffff97993ec0e118 kernel: RBP: ffffaf83413a7768 R08: 000000000000039a R09: 0000000000000000 kernel: R10: 0000000000000001 R11: 00000000ffffffff R12: 00000000fffffff5 kernel: R13: 0000000001400000 R14: 0000000000000001 R15: ffff979930806800 kernel: FS: 00007effaca5c8c0(0000) GS:ffff97993ec00000(0000) knlGS:0000000000000000 kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 kernel: CR2: 00007effa9fca962 CR3: 0000000233089000 CR4: 00000000000406f0 kernel: Call Trace: kernel: usb_hcd_submit_urb+0x493/0xb40 kernel: ? page_cache_tree_insert+0x100/0x100 kernel: ? xfs_iunlock+0xd5/0x100 [xfs] kernel: ? xfs_file_buffered_aio_read+0x57/0xc0 [xfs] kernel: usb_submit_urb+0x22d/0x560 kernel: usb_start_wait_urb+0x6e/0x180 kernel: usb_bulk_msg+0xb8/0x160 kernel: as102_send_ep1+0x49/0xe0 [dvb_as102] kernel: ? devres_add+0x3f/0x50 kernel: as102_firmware_upload.isra.0+0x1dc/0x210 [dvb_as102] kernel: as102_fw_upload+0xb6/0x1f0 [dvb_as102] kernel: as102_dvb_register+0x2af/0x2d0 [dvb_as102] kernel: as102_usb_probe+0x1f3/0x260 [dvb_as102] kernel: usb_probe_interface+0x124/0x300 kernel: driver_probe_device+0x2ff/0x450 kernel: __driver_attach+0xa4/0xe0 kernel: ? driver_probe_device+0x450/0x450 kernel: bus_for_each_dev+0x6e/0xb0 kernel: driver_attach+0x1e/0x20 kernel: bus_add_driver+0x1c7/0x270 kernel: driver_register+0x60/0xe0 kernel: usb_register_driver+0x81/0x150 kernel: ? 0xffffffffc0807000 kernel: as102_usb_driver_init+0x1e/0x1000 [dvb_as102] kernel: do_one_initcall+0x50/0x190 kernel: ? __vunmap+0x81/0xb0 kernel: ? kfree+0x154/0x170 kernel: ? kmem_cache_alloc_trace+0x15f/0x1c0 kernel: ? do_init_module+0x27/0x1e9 kernel: do_init_module+0x5f/0x1e9 kernel: load_module+0x2602/0x2c30 kernel: SYSC_init_module+0x170/0x1a0 kernel: ? 
SYSC_init_module+0x170/0x1a0 kernel: SyS_init_module+0xe/0x10 kernel: do_syscall_64+0x67/0x140 kernel: entry_SYSCALL64_slow_path+0x25/0x25 kernel: RIP: 0033:0x7effab6cf3ea kernel: RSP: 002b:00007fff5cfcbbc8 EFLAGS: 00000246 ORIG_RAX: 00000000000000af kernel: RAX: ffffffffffffffda RBX: 00005569e0b83760 RCX: 00007effab6cf3ea kernel: RDX: 00007effac2099c5 RSI: 0000000000009a13 RDI: 00005569e0b98c50 kernel: RBP: 00007effac2099c5 R08: 00005569e0b83ed0 R09: 0000000000001d80 kernel: R10: 00007effab98db00 R11: 0000000000000246 R12: 00005569e0b98c50 kernel: R13: 00005569e0b81c60 R14: 0000000000020000 R15: 00005569dfadfdf7 kernel: Code: 48 39 c8 73 30 80 3d 59 60 9d 00 00 41 bc f5 ff ff ff 0f 85 26 ff ff ff 48 c7 c7 b8 6b d0 92 c6 05 3f 60 9d 00 01 e8 24 3d ad ff <0f> ff 8b 53 64 e9 09 ff ff ff 65 48 8b 0c 25 00 d3 00 00 48 8b kernel: ---[ end trace c4cae366180e70ec ]--- kernel: as10x_usb: error during firmware upload part1 Let's allocate the structure dynamically so we can get the firmware loaded correctly: [ 14.243057] as10x_usb: firmware: as102_data1_st.hex loaded with success [ 14.500777] as10x_usb: firmware: as102_data2_st.hex loaded with success Signed-off-by:
Michele Baldessari <michele@acksyn.org> Signed-off-by:
Mauro Carvalho Chehab <mchehab@s-opensource.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
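The shape of the fix, as a condensed sketch (the packet type name follows the driver, the surrounding upload loop is elided): allocate the firmware packet with kmalloc() so the buffer handed down to usb_bulk_msg() is DMA-capable, and free it once the upload is done.

    struct as102_fw_pkt_t *pkt;

    pkt = kmalloc(sizeof(*pkt), GFP_KERNEL);
    if (!pkt)
            return -ENOMEM;

    /* ...build each firmware fragment in *pkt and send it via
     * as102_send_ep1(), exactly as before, just not from the stack... */

    kfree(pkt);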
-
Nicholas Piggin authored
commit 35602f82 upstream. While mapping hints with a length that crosses 128TB are disallowed, MAP_FIXED allocations that cross 128TB are allowed. These are failing on hash (on radix they succeed). Add an additional case for fixed mappings to expand the addr_limit when crossing 128TB. Fixes: f4ea6dcb ("powerpc/mm: Enable mappings above 128TB") Signed-off-by:
Nicholas Piggin <npiggin@gmail.com> Reviewed-by:
Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by:
Michael Ellerman <mpe@ellerman.id.au> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Nicholas Piggin authored
commit effc1b25 upstream. Hash unconditionally resets the addr_limit to the default (128TB) when the mm context is initialised. If a process has > 128TB mappings when it forks, the child will not get the 512TB addr_limit, so accesses to valid > 128TB mappings will fail in the child. Fix this by only resetting the addr_limit to the default if it was 0. Non-zero indicates it was duplicated from the parent (0 means exec()). Fixes: f4ea6dcb ("powerpc/mm: Enable mappings above 128TB") Signed-off-by:
Nicholas Piggin <npiggin@gmail.com> Reviewed-by:
Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by:
Michael Ellerman <mpe@ellerman.id.au> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
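The guard amounts to a one-liner in the hash mm-context initialisation; a sketch, assuming DEFAULT_MAP_WINDOW_USER64 as the name of the 128TB default:

    /* exec() starts with addr_limit == 0; fork() inherits a non-zero
     * value from the parent and must keep it */
    if (!mm->context.addr_limit)
            mm->context.addr_limit = DEFAULT_MAP_WINDOW_USER64;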
-
Nicholas Piggin authored
commit 6a72dc03 upstream. When allocating VA space with a hint that crosses 128TB, the SLB addr_limit variable is not expanded if addr is not > 128TB, but the slice allocation looks at task_size, which is 512TB. This results in slice_check_fit() incorrectly succeeding because the slice_count truncates off bit 128 of the requested mask, so the comparison to the available mask succeeds. Fix this by using mm->context.addr_limit instead of mm->task_size for testing allocation limits. This causes such allocations to fail. Fixes: f4ea6dcb ("powerpc/mm: Enable mappings above 128TB") Reported-by:
Florian Weimer <fweimer@redhat.com> Signed-off-by:
Nicholas Piggin <npiggin@gmail.com> Reviewed-by:
Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by:
Michael Ellerman <mpe@ellerman.id.au> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Michael Ellerman authored
commit 7ece3709 upstream. Currently userspace is able to request mmap() search between 128T-512T by specifying a hint address that is greater than 128T. But that means a hint of 128T exactly will return an address below 128T, which is confusing and wrong. So fix the logic to check the hint is greater than *or equal* to 128T. Fixes: f4ea6dcb ("powerpc/mm: Enable mappings above 128TB") Suggested-by:
Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Suggested-by:
Nicholas Piggin <npiggin@gmail.com> [mpe: Split out of Nick's bigger patch] Signed-off-by:
Michael Ellerman <mpe@ellerman.id.au> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Nicholas Piggin authored
commit 85e3f1ad upstream. Radix VA space allocations test addresses against mm->task_size which is 512TB, even in cases where the intention is to limit allocation to below 128TB. This results in mmap with a hint address below 128TB but address + length above 128TB succeeding when it should fail (as hash does after the previous patch). Set the high address limit to be considered up front, and base subsequent allocation checks on that consistently. Fixes: f4ea6dcb ("powerpc/mm: Enable mappings above 128TB") Signed-off-by:
Nicholas Piggin <npiggin@gmail.com> Reviewed-by:
Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by:
Michael Ellerman <mpe@ellerman.id.au> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Michael Ellerman authored
commit 475b581f upstream.

On 64-bit Book3s, when we take an instruction fault the reason for the fault may be reported in SRR1. For data faults the reason is reported in DSISR (Data Storage Instruction Status Register). The reasons reported in each do not necessarily correspond, so we mask the SRR1 bits before copying them to the DSISR, which is then used by the page fault code.

Prior to commit b4c001dc ("powerpc/mm: Use symbolic constants for filtering SRR1 bits on ISIs") we used a hard-coded mask of 0x58200000, which corresponds to:

 DSISR_NOHPTE       0x40000000  /* no translation found */
 DSISR_NOEXEC_OR_G  0x10000000  /* exec of no-exec or guarded */
 DSISR_PROTFAULT    0x08000000  /* protection fault */
 DSISR_KEYFAULT     0x00200000  /* Storage Key fault */

That commit added a #define for the mask, DSISR_SRR1_MATCH_64S, but incorrectly used a different similarly named DSISR_BAD_FAULT_64S. This had the effect of changing the mask to 0xa43a0000, which omits everything but DSISR_KEYFAULT. Luckily this had no visible effect, because in practice we hardly use the DSISR bits. The lack of DSISR_NOHPTE means a TLB flush optimisation was missed in the native HPTE code, and DSISR_NOEXEC_OR_G and DSISR_PROTFAULT are both only used to trigger rare warnings. So we got lucky, but let's fix it.

The new value only has bits between 17 and 30 set, so we can continue to use andis.

Fixes: b4c001dc ("powerpc/mm: Use symbolic constants for filtering SRR1 bits on ISIs") Signed-off-by:
Michael Ellerman <mpe@ellerman.id.au> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
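Expressed with the constants listed above, the corrected mask is simply their OR (a sketch of the define; the four bits together give the original hard-coded 0x58200000):

    #define DSISR_SRR1_MATCH_64S  (DSISR_NOHPTE | DSISR_NOEXEC_OR_G | \
                                   DSISR_PROTFAULT | DSISR_KEYFAULT)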
-
Naveen N. Rao authored
commit 46725b17 upstream. When a uprobe is installed on an instruction that we currently do not emulate, we copy the instruction into a xol buffer and single step that instruction. If that instruction generates a fault, we abort the single stepping before invoking the signal handler. Once the signal handler is done, the uprobe trap is hit again since the instruction is retried, and the process repeats. We use uprobe_deny_signal() to detect if the xol instruction triggered a signal. If so, we clear TIF_SIGPENDING and set TIF_UPROBE so that the signal is not handled until after the single stepping is aborted. In this case, uprobe_deny_signal() returns true and get_signal() ends up returning 0. However, in do_signal(), we are not looking at the return value, but depending on ksig.sig for further action, all with an uninitialized ksig that is not touched in this scenario. Fix this by initializing ksig.sig to 0. Fixes: 129b69df ("powerpc: Use get_signal() signal_setup_done()") Reported-by:
Anton Blanchard <anton@samba.org> Signed-off-by:
Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by:
Michael Ellerman <mpe@ellerman.id.au> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
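The fix is essentially an initializer in do_signal(); a sketch:

    /* ksig.sig must be well defined even when get_signal() returns 0
     * because uprobe_deny_signal() swallowed the signal */
    struct ksignal ksig = { .sig = 0 };
    int ret;

    ret = get_signal(&ksig);
    /* from here on, ksig.sig == 0 reliably means "nothing to deliver" */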
-
Michael Ellerman authored
commit f3f1dfd6 upstream. init_imc_pmu() uses topology_physical_package_id() to detect the node id of the processor it is on to get local memory, but that's wrong, and can lead to crashes. Fix it to use cpu_to_node(). Fixes: 885dcd70 ("powerpc/perf: Add nest IMC PMU support") Reported-By:
Rob Lippert <rlippert@google.com> Tested-By:
Madhavan Srinivasan <maddy@linux.vnet.ibm.com> Signed-off-by:
Madhavan Srinivasan <maddy@linux.vnet.ibm.com> Signed-off-by:
Michael Ellerman <mpe@ellerman.id.au> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
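The allocation site then picks the memory node like this (sketch; the exact allocation call used by the IMC code may differ):

    int nid = cpu_to_node(cpu);     /* not topology_physical_package_id() */
    struct page *page;

    page = alloc_pages_node(nid, GFP_KERNEL | __GFP_ZERO, get_order(size));
    if (!page)
            return -ENOMEM;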
-