- 25 Jan, 2016 4 commits
-
-
Junxiao Bi authored
commit 854ee2e9 upstream. Commit 8f1eb487 ("ocfs2: fix umask ignored issue") introduced a regression: the SGID bit of a sub-directory was not inherited from its parent directory. This is because SGID is set in "inode->i_mode" in ocfs2_get_init_inode(), but is later overwritten by "mode", which does not have SGID set. Fixes: 8f1eb487 ("ocfs2: fix umask ignored issue") Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com> Cc: Mark Fasheh <mfasheh@suse.de> Cc: Joel Becker <jlbec@evilplan.org> Acked-by: Srinivas Eeda <srinivas.eeda@oracle.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
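A minimal userspace illustration of the bug pattern (names and modes are hypothetical, not the ocfs2 code): a blind overwrite of an i_mode that already carries S_ISGID loses the inherited bit, while OR-ing the bit back in preserves it.

    #include <stdio.h>
    #include <sys/stat.h>

    int main(void)
    {
        mode_t inherited = S_IFDIR | 0755 | S_ISGID; /* as set up by the init-inode helper */
        mode_t requested = S_IFDIR | 0755;           /* caller-supplied mode, no SGID */

        mode_t buggy = requested;                         /* overwrite drops S_ISGID */
        mode_t fixed = requested | (inherited & S_ISGID); /* keep the inherited bit */

        printf("buggy: %o, fixed: %o\n", (unsigned)buggy, (unsigned)fixed);
        return 0;
    }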
-
Seth Jennings authored
commit 26bbe7ef upstream. Commits bdee237c ("x86: mm: Use 2GB memory block size on large-memory x86-64 systems") and 982792c7 ("x86, mm: probe memory block size for generic x86 64bit") introduced large block sizes for x86. This made it possible to have multiple sections per memory block, where previously there was only ever one section per block. Since blocks consist of contiguous ranges of sections, there can be holes in the blocks where sections are not present. If one attempts to offline such a block, a crash occurs, since the code is not designed to deal with this. This patch is a quick fix to guard against the crash by not allowing blocks with non-present sections to be offlined. Addresses https://bugzilla.kernel.org/show_bug.cgi?id=107781 Signed-off-by: Seth Jennings <sjennings@variantweb.net> Reported-by: Andrew Banman <abanman@sgi.com> Cc: Daniel J Blueman <daniel@numascale.com> Cc: Yinghai Lu <yinghai@kernel.org> Cc: Greg KH <greg@kroah.com> Cc: Russ Anderson <rja@sgi.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
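A sketch of the guard described above, modeled on drivers/base/memory.c (simplified; the field and helper names follow that file, but treat the exact shape as an assumption):

    /* Refuse to offline a memory block unless every section in it is present. */
    static int memory_subsys_offline(struct device *dev)
    {
        struct memory_block *mem = to_memory_block(dev);

        if (mem->state == MEM_OFFLINE)
            return 0;

        /* Can't offline a block with non-present sections. */
        if (mem->section_count != sections_per_block)
            return -EINVAL;

        return memory_block_change_state(mem, MEM_OFFLINE, MEM_ONLINE);
    }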
-
Naoya Horiguchi authored
commit 0d777df5 upstream. Currently at the beginning of hugetlb_fault(), we call huge_pte_offset() and check whether the obtained *ptep is a migration/hwpoison entry or not. And if not, then we get to call huge_pte_alloc(). This is racy because the *ptep could turn into a migration/hwpoison entry after the huge_pte_offset() check. This race results in a BUG_ON() in huge_pte_alloc(). We don't have to call huge_pte_alloc() when huge_pte_offset() returns non-NULL, so let's fix this bug by moving the code into the else block. Note that the *ptep could turn into a migration/hwpoison entry after this block, but that's not a problem because we have another !pte_present check later (we never go into hugetlb_no_page() in that case). Fixes: 290408d4 ("hugetlb: hugepage migration core") Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com> Acked-by: David Rientjes <rientjes@google.com> Cc: Hugh Dickins <hughd@google.com> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Mike Kravetz <mike.kravetz@oracle.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
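A condensed sketch of the reworked flow in hugetlb_fault() (heavily simplified, error paths elided):

    ptep = huge_pte_offset(mm, address);
    if (ptep) {
        entry = huge_ptep_get(ptep);
        if (unlikely(is_hugetlb_entry_migration(entry))) {
            migration_entry_wait_huge(vma, mm, ptep);
            return 0;
        } else if (unlikely(is_hugetlb_entry_hwpoisoned(entry)))
            return VM_FAULT_HWPOISON_LARGE |
                   VM_FAULT_SET_HINDEX(hstate_index(h));
    } else {
        /* Allocate only when huge_pte_offset() found nothing, so a
         * migration/hwpoison entry can never reach huge_pte_alloc(). */
        ptep = huge_pte_alloc(mm, address, huge_page_size(h));
        if (!ptep)
            return VM_FAULT_OOM;
    }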
-
Michal Hocko authored
commit 373ccbe5 upstream. Tetsuo Handa has reported that the system might basically livelock in an OOM condition without triggering the OOM killer. The issue is caused by an internal dependency of the direct reclaim on vmstat counter updates (via zone_reclaimable), which are performed from the workqueue context. If all the current workers get assigned to an allocation request, though, they will be looping inside the allocator trying to reclaim memory, but zone_reclaimable can see stalled numbers, so it will consider a zone reclaimable even though it has been scanned way too much. WQ concurrency logic will not consider this situation a congested workqueue, because it relies on the fact that a worker would sleep in such a situation. This also means that it doesn't try to spawn new workers or invoke the rescuer thread if one is assigned to the queue. In order to fix this issue we need to do two things. First we have to let the wq concurrency code know that we are in trouble, so we have to do a short sleep. In order to prevent the issues handled by 0e093d99 ("writeback: do not sleep on the congestion queue if there are no congested BDIs or if significant congestion is not being encountered in the current zone") we limit the sleep to worker threads, which are the ones of interest anyway. The second thing to do is to create a dedicated workqueue for vmstat and mark it WQ_MEM_RECLAIM to note that it participates in reclaim and to have a spare worker thread for it. Signed-off-by: Michal Hocko <mhocko@suse.com> Reported-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Cc: Tejun Heo <tj@kernel.org> Cc: Cristopher Lameter <clameter@sgi.com> Cc: Joonsoo Kim <js1304@gmail.com> Cc: Arkadiusz Miskiewicz <arekm@maven.pl> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Ben Hutchings <ben@decadent.org.uk>
[ luis: backported to 3.16, based on Ben's backport to 3.2:
  - use queue_delayed_work instead of queue_delayed_work_on in function vmstat_update()
  - change start_cpu_timer() instead of vmstat_shepherd()
  - adjusted context ]
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
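A sketch of the dedicated-workqueue half of the fix (simplified; per the backport note, 3.16 queues the per-cpu work from start_cpu_timer()):

    static struct workqueue_struct *vmstat_wq;

    /* WQ_MEM_RECLAIM guarantees a rescuer thread, so vmstat updates can
     * make progress even when every regular worker is stuck allocating. */
    vmstat_wq = alloc_workqueue("vmstat", WQ_FREEZABLE | WQ_MEM_RECLAIM, 0);

    /* The per-cpu update work then targets the dedicated queue: */
    queue_delayed_work_on(cpu, vmstat_wq, &per_cpu(vmstat_work, cpu),
                          __round_jiffies_relative(HZ, cpu));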
-
- 18 Jan, 2016 13 commits
-
-
Naoya Horiguchi authored
commit a88c7695 upstream. When dequeue_huge_page_vma() in alloc_huge_page() fails, we fall back on alloc_buddy_huge_page() to directly create a hugepage from the buddy allocator. In that case, however, if alloc_buddy_huge_page() succeeds we don't decrement h->resv_huge_pages, which means that a successful hugetlb_fault() returns without releasing the reserve count. As a result, subsequent hugetlb_fault() calls might fail even though there are still free hugepages. This patch simply adds the decrementing code on that code path. I reproduced this problem when testing the v4.3 kernel in the following situation:
- the test machine/VM is a NUMA system,
- hugepage overcommitting is enabled,
- most of the hugepages are allocated and there's only one free hugepage, which is on node 0 (for example),
- another program, which calls set_mempolicy(MPOL_BIND) to bind itself to node 1, tries to allocate a hugepage,
- the allocation should fail, but the reserve count is still held.
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: David Rientjes <rientjes@google.com> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Hillf Danton <hillf.zj@alibaba-inc.com> Cc: Mike Kravetz <mike.kravetz@oracle.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
[ luis: backported to 3.16:
  - use 'chg' instead of 'gbl_chg'
  - adjusted context ]
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
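A sketch of the added decrement in alloc_huge_page(), as described above (simplified; the 3.16 backport uses 'chg' where upstream has 'gbl_chg'):

    page = alloc_buddy_huge_page(h, NUMA_NO_NODE);
    if (page) {
        spin_lock(&hugetlb_lock);
        if (!avoid_reserve && vma_has_reserves(vma, chg)) {
            /* Consume the reservation, just as the dequeue path would. */
            SetPagePrivate(page);
            h->resv_huge_pages--;
        }
        list_move(&page->lru, &h->hugepage_activelist);
        spin_unlock(&hugetlb_lock);
    }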
-
Mikulas Patocka authored
commit e46e31a3 upstream. When using the Promise TX2+ SATA controller on PA-RISC, the system often crashes with a kernel panic; for example, just writing data with the dd utility will make it crash.

	Kernel panic - not syncing: drivers/parisc/sba_iommu.c: I/O MMU @ 000000000000a000 is out of mapping resources
	CPU: 0 PID: 18442 Comm: mkspadfs Not tainted 4.4.0-rc2 #2
	Backtrace:
	[<000000004021497c>] show_stack+0x14/0x20
	[<0000000040410bf0>] dump_stack+0x88/0x100
	[<000000004023978c>] panic+0x124/0x360
	[<0000000040452c18>] sba_alloc_range+0x698/0x6a0
	[<0000000040453150>] sba_map_sg+0x260/0x5b8
	[<000000000c18dbb4>] ata_qc_issue+0x264/0x4a8 [libata]
	[<000000000c19535c>] ata_scsi_translate+0xe4/0x220 [libata]
	[<000000000c19a93c>] ata_scsi_queuecmd+0xbc/0x320 [libata]
	[<0000000040499bbc>] scsi_dispatch_cmd+0xfc/0x130
	[<000000004049da34>] scsi_request_fn+0x6e4/0x970
	[<00000000403e95a8>] __blk_run_queue+0x40/0x60
	[<00000000403e9d8c>] blk_run_queue+0x3c/0x68
	[<000000004049a534>] scsi_run_queue+0x2a4/0x360
	[<000000004049be68>] scsi_end_request+0x1a8/0x238
	[<000000004049de84>] scsi_io_completion+0xfc/0x688
	[<0000000040493c74>] scsi_finish_command+0x17c/0x1d0

The cause of the crash is not exhaustion of the IOMMU space; there are plenty of free pages. The function sba_alloc_range is called with size 0x11000, thus the pages_needed variable is 0x11. The function sba_search_bitmap is called with bits_wanted 0x11 and the boundary size is 0x10 (because dma_get_seg_boundary(dev) returns 0xffff). The function sba_search_bitmap attempts to allocate 17 pages that must not cross a 16-page boundary - it can't satisfy this requirement (iommu_is_span_boundary always returns true) and fails even if there are many free entries in the IOMMU space. How did it happen that we try to allocate 17 pages that don't cross a 16-page boundary? The cause is in the function iommu_coalesce_chunks. This function tries to coalesce adjacent entries in the scatterlist. It does several checks to decide if it may coalesce one entry with the next; one of those checks is this:

	if (startsg->length + dma_len > max_seg_size)
		break;

When it finishes coalescing adjacent entries, it allocates the mapping:

	sg_dma_len(contig_sg) = dma_len;
	dma_len = ALIGN(dma_len + dma_offset, IOVP_SIZE);
	sg_dma_address(contig_sg) = PIDE_FLAG
			| (iommu_alloc_range(ioc, dev, dma_len) << IOVP_SHIFT)
			| dma_offset;

It is possible that (startsg->length + dma_len > max_seg_size) is false (we are just near the 0x10000 max_seg_size boundary), so the function decides to coalesce this entry with the next entry. When the coalescing succeeds, the function performs

	dma_len = ALIGN(dma_len + dma_offset, IOVP_SIZE);

And now, because of the non-zero dma_offset, dma_len is greater than 0x10000. iommu_alloc_range (a pointer to sba_alloc_range) is called and it attempts to allocate 17 pages for a device that must not cross a 16-page boundary. To fix the bug, we must make sure that dma_len, after the addition of dma_offset and alignment, doesn't cross the segment boundary; i.e. change

	if (startsg->length + dma_len > max_seg_size)
		break;

to

	if (ALIGN(dma_len + dma_offset + startsg->length, IOVP_SIZE) > max_seg_size)
		break;

This patch makes this change (it precalculates max_seg_boundary at the beginning of the function iommu_coalesce_chunks). I also added a check that the mapping length doesn't exceed dma_get_seg_boundary(dev) (it is not needed for the Promise TX2+ SATA, but it may be needed for other devices that have dma_get_seg_boundary lower than dma_get_max_seg_size).
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Helge Deller <deller@gmx.de> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
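A sketch of the precalculation at the top of iommu_coalesce_chunks(), derived from the description above (treat the exact constants and shape as assumptions):

    unsigned int max_seg_size = min(dma_get_max_seg_size(dev),
                                    (unsigned)DMA_CHUNK_SIZE);
    unsigned int max_seg_boundary = dma_get_seg_boundary(dev) + 1;

    if (max_seg_boundary)   /* check if the addition above didn't overflow */
        max_seg_size = min(max_seg_size, max_seg_boundary);

    /* ... later, inside the coalescing loop: */
    if (ALIGN(dma_len + dma_offset + startsg->length, IOVP_SIZE) > max_seg_size)
        break;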
-
Alan Stern authored
commit ad87e032 upstream. Some USB device / host controller combinations seem to have problems with Link Power Management. For example, Steinar found that his xHCI controller wouldn't handle bandwidth calculations correctly for two video cards simultaneously when LPM was enabled, even though the bus had plenty of bandwidth available. This patch introduces a new quirk flag for devices that should have LPM disabled, and creates quirk entries for Steinar's devices. Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Reported-by: Steinar H. Gunderson <sgunderson@bigfoot.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> [ luis: backported to 3.16: adjusted context ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
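A hedged sketch of how such a quirk is wired up (the flag name USB_QUIRK_NO_LPM and the vendor/product IDs here are illustrative, not taken from the commit text):

    /* drivers/usb/core/quirks.c: table entry keyed by VID:PID */
    { USB_DEVICE(0x1edb, 0xbd3b), .driver_info = USB_QUIRK_NO_LPM },

    /* hub.c (simplified): honour the quirk before any LPM setup */
    if (udev->quirks & USB_QUIRK_NO_LPM)
        return 0;   /* leave the link in U0, never enable LPM */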
-
Mathias Nyman authored
commit f69115fd upstream. According to the USB 2 spec, ports need to signal resume for at least 20ms, in practice even longer, before moving to the U0 state. Both the host and devices can initiate resume. On device-initiated resume, a port status interrupt with the port in resume state is issued. The interrupt handler tags a resume_done[port] timestamp with current time + USB_RESUME_TIMEOUT, and kicks the roothub timer. The roothub timer requests the port status, finds the port in resume state, checks if the resume_done[port] timestamp has passed, and sets the port to the U0 state. On host-initiated resume, the current code sets the port to resume state, sleeps 20ms, and finally sets the port to the U0 state. This should also be changed to work in a similar way as the device-initiated resume, with timestamp tagging, but that is not yet tested and will be a separate fix later. There are a few issues with this approach:
1. A host-initiated resume will also generate a resume event. The event handler will find the port in resume state, believe it's a device-initiated resume, and act accordingly.
2. A port status request might cut the resume signalling short if a get_port_status request is handled during the host resume signalling. The port will be found in resume state, but the timestamp is not set, leading to time_after_eq(jiffies, timestamp) returning true, as timestamp = 0. get_port_status will proceed with moving the port to U0.
3. If an error, or anything else, happens to the port during device-initiated resume signalling, it will leave all the device resume parameters hanging uncleared, preventing further suspend, returning -EBUSY, and causing the pm thread to busyloop trying to enter suspend.
Fix this by using the existing resuming_ports bitfield to indicate that resume signalling timing is taken care of. Check that resume_done[port] is set before using it for timestamp comparison, and also clear out any resume-signalling-related variables if the port is not in U0 or Resume state. This issue was discovered when a PM thread busylooped, trying to runtime suspend the xHCI USB 2 roothub on a Dell XPS. Reported-by: Daniel J Blueman <daniel@quora.org> Tested-by: Daniel J Blueman <daniel@quora.org> Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
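A hedged sketch of the get_port_status-side check (simplified; the identifiers follow the xhci hub code, but treat the exact shape as an assumption):

    if ((raw_port_status & PORT_PLS_MASK) == XDEV_RESUME) {
        unsigned long timeout = bus_state->resume_done[wIndex];

        /* Only trust the timestamp once device-initiated resume armed
         * it; a bare 0 would make time_after_eq() trivially true. */
        if (timeout && time_after_eq(jiffies, timeout) &&
            test_bit(wIndex, &bus_state->resuming_ports)) {
            /* 20ms+ of resume signalling done: move the port to U0 */
        }
    }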
-
Kirill A. Shutemov authored
commit 9f5bd308 upstream. There are a few defects in vga_get() related to signal handling:
- we shouldn't check for pending signals in the TASK_UNINTERRUPTIBLE case;
- if we find a pending signal, we must remove ourselves from the wait queue and change the task state back to running;
- -ERESTARTSYS is more appropriate, I guess.
Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name> Reviewed-by: David Herrmann <dh.herrmann@gmail.com> Signed-off-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
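A condensed sketch of the corrected wait loop in vga_get() (the acquisition logic itself is elided):

    for (;;) {
        /* try to acquire the device; break out of the loop on success */
        add_wait_queue(&vga_wait_queue, &wait);
        set_current_state(interruptible ?
                          TASK_INTERRUPTIBLE : TASK_UNINTERRUPTIBLE);
        if (interruptible && signal_pending(current)) {
            /* undo the wait before bailing out */
            __set_current_state(TASK_RUNNING);
            remove_wait_queue(&vga_wait_queue, &wait);
            rc = -ERESTARTSYS;
            break;
        }
        schedule();
        remove_wait_queue(&vga_wait_queue, &wait);
    }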
-
Joe Thornber authored
commit ed8b45a3 upstream. If dm_btree_del()'s call to push_frame() fails, e.g. due to btree_node_validator finding invalid metadata, the dm_btree_del() error path must unlock all frames (which have active dm-bufio buffers) that were pushed onto the del_stack. Otherwise, dm_bufio_client_destroy() will BUG_ON() because dm-bufio buffers have leaked, e.g.:

	device-mapper: bufio: leaked buffer 3, hold count 1, list 0

Signed-off-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Jan Stancek authored
commit 27f972d3 upstream. We encountered a panic on boot in ipmi_si on a Dell PER320 due to an uninitialized timer, as follows:

	static int smi_start_processing(void *send_info, ipmi_smi_t intf)
	{
		/* Try to claim any interrupts. */
		if (new_smi->irq_setup)
			new_smi->irq_setup(new_smi);

	--> IRQ arrives here and the irq handler tries to modify the
	    uninitialized timer, which triggers BUG_ON(!timer->function)
	    in __mod_timer().

	Call Trace:
	<IRQ>
	[<ffffffffa0532617>] start_new_msg+0x47/0x80 [ipmi_si]
	[<ffffffffa053269e>] start_check_enables+0x4e/0x60 [ipmi_si]
	[<ffffffffa0532bd8>] smi_event_handler+0x1e8/0x640 [ipmi_si]
	[<ffffffff810f5584>] ? __rcu_process_callbacks+0x54/0x350
	[<ffffffffa053327c>] si_irq_handler+0x3c/0x60 [ipmi_si]
	[<ffffffff810efaf0>] handle_IRQ_event+0x60/0x170
	[<ffffffff810f245e>] handle_edge_irq+0xde/0x180
	[<ffffffff8100fc59>] handle_irq+0x49/0xa0
	[<ffffffff8154643c>] do_IRQ+0x6c/0xf0
	[<ffffffff8100ba53>] ret_from_intr+0x0/0x11

		/* Set up the timer that drives the interface. */
		setup_timer(&new_smi->si_timer, smi_timeout, (long)new_smi);

The following patch fixes the problem. To: Openipmi-developer@lists.sourceforge.net To: Corey Minyard <minyard@acm.org> CC: linux-kernel@vger.kernel.org Signed-off-by: Jan Stancek <jstancek@redhat.com> Signed-off-by: Tony Camuso <tcamuso@redhat.com> Signed-off-by: Corey Minyard <cminyard@mvista.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
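A simplified sketch of the fix: initialise the timer before the IRQ is claimed, so an interrupt arriving immediately finds a valid timer->function instead of tripping the BUG_ON in __mod_timer().

    static int smi_start_processing(void *send_info, ipmi_smi_t intf)
    {
        struct smi_info *new_smi = send_info;

        /* Set up the timer that drives the interface -- before any
         * IRQ can fire. */
        setup_timer(&new_smi->si_timer, smi_timeout, (long)new_smi);

        /* Try to claim any interrupts. */
        if (new_smi->irq_setup)
            new_smi->irq_setup(new_smi);

        return 0;
    }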
-
Joe Thornber authored
commit 50dd842a upstream. When applying block operations (BOPs), do not remove them from the uncommitted BOP ring-buffer until after they've been applied -- in case we recurse. Also, perform the BOP_INC operation, in dm_sm_metadata_create() and sm_metadata_extend(), in terms of the uncommitted BOP ring-buffer rather than using direct calls to sm_ll_inc(). Signed-off-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Joe Thornber authored
commit 49e99fc7 upstream. When you take a metadata snapshot the btree roots for the mapping and details tree need to have their reference counts incremented so they persist for the lifetime of the metadata snap. The roots being incremented were those currently written in the superblock, which could possibly be out of date if concurrent IO is triggering new mappings, breaking of sharing, etc. Fix this by performing a commit with the metadata lock held while taking a metadata snapshot. Signed-off-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Takashi Iwai authored
commit 9a811230 upstream. Lenovo Thinkpad T440s suffers from constant background noises, and it seems to be a generic hardware issue on this model: https://forums.lenovo.com/t5/ThinkPad-T400-T500-and-newer-T/T440s-speaker-noise/td-p/1339883 As the noise comes from the analog loopback path, disabling the path is the easy workaround. Also, the machine gives significant cracking noises at PM suspend. A workaround found by trial-and-error is to disable the shutup callback currently used for ALC269-variant. This patch addresses these noise issues by introducing a new fixup chain. Although the same workaround might be applicable to other Thinkpad models, it's applied only to T440s (17aa:220c) in this patch, so far, just to be safe (you chicken!). As a compromise, a new model option string "tp440" is provided now, though, so that owners of other Thinkpad models can test it more easily. Bugzilla: https://bugzilla.opensuse.org/show_bug.cgi?id=958504 Reported-and-tested-by: Tim Hardeck <thardeck@suse.de> Signed-off-by: Takashi Iwai <tiwai@suse.de> [ luis: backported to 3.16: adjusted context ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Oded Gabbay authored
commit 361c32d3 upstream. This patch makes the VCE IB test pass on Big-Endian systems. It converts the contents of the VCE message to little-endian. Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Oded Gabbay <oded.gabbay@gmail.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> [ luis: backported to 3.16: adjusted context ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Oded Gabbay authored
commit 687f4b98 upstream. This patch fixes the VCE ring test when running on Big-Endian machines. Every write to the ring needs to be translated to little-endian. Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Oded Gabbay <oded.gabbay@gmail.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> [ luis: backported to 3.16: adjusted context ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
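The pattern behind these big-endian fixes, as an illustrative one-liner (not the exact diff): wrap each DWORD written to the ring in cpu_to_le32(), a no-op on little-endian hosts and a byte swap on big-endian ones, since the engine consumes a little-endian command stream.

    radeon_ring_write(ring, cpu_to_le32(VCE_CMD_END));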
-
Oded Gabbay authored
commit 5f3e226f upstream. This patch makes the IB test on the GFX ring pass for CI-based cards installed in Big-Endian machines. Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Oded Gabbay <oded.gabbay@gmail.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
- 11 Jan, 2016 23 commits
-
-
Al Viro authored
commit 4ad78628 upstream. For block devices the pagecache is associated with the inode on bdevfs, not with the aliasing ones on the mountable filesystems. The latter have their own ->i_data empty and ->i_mapping pointing to the (unique per major/minor) bdevfs inode. That guarantees cache coherence between all block device inodes with the same device number. Eviction of an alias inode has no business trying to evict the pages belonging to the bdevfs one; moreover, ->i_mapping is only safe to access when the thing is opened. At the time of ->evict_inode() the victim is definitely *not* opened. We are about to kill the address space embedded into struct inode (inode->i_data) and that's what we need to empty of any pages. The 9p instance tries to empty inode->i_mapping instead, which is both unsafe and bogus - if we have several device nodes with the same device number in different places, closing one of them should not try to empty the (shared) page cache. Fortunately, other instances in the tree are OK; they are evicting from &inode->i_data instead, as the 9p one should. Reported-by: "Suzuki K. Poulose" <Suzuki.Poulose@arm.com> Tested-by: "Suzuki K. Poulose" <Suzuki.Poulose@arm.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
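A sketch of the corrected 9p eviction (simplified; teardown of fscache cookies and fids is elided):

    void v9fs_evict_inode(struct inode *inode)
    {
        /* Empty the embedded mapping -- never the possibly shared
         * ->i_mapping alias, which for device nodes belongs to bdevfs. */
        truncate_inode_pages_final(&inode->i_data);   /* was: inode->i_mapping */
        clear_inode(inode);
    }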
-
Hui Wang authored
commit 23adc192 upstream. We have two latest thinkpad laptop models which are all based on the Intel skylake platforms, and all of them have the codec alc293 on them. When the machines boot to the desktop, an greeting dialogue shows up with the notification sound. But on these two models, there is noise with the notification sound. We have 3 SKUs for each of the models, all of them have this problem. So far, this problem is only specific to these two thinkpad models, we did not find this problem on the old thinkpad models with the codec alc293 or alc292. A workaround for this problem is disabling the aamix. BugLink: https://bugs.launchpad.net/bugs/1523517Signed-off-by: Hui Wang <hui.wang@canonical.com> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Sagi Grimberg authored
commit 09c0c0be upstream. When using work request based memory registration (fast_reg) we must reserve SQ entries for registration and invalidation in addition to send operations. Each IO consumes 3 SQ entries (registration, send, invalidation) so we need to allocate 3x larger send-queue instead of 2x. Signed-off-by: Sagi Grimberg <sagig@mellanox.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Doug Ledford <dledford@redhat.com>
[ luis: backported to 3.16:
  - change function srp_create_target_ib() instead of srp_create_ch_ib()
  - adjusted context ]
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
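Illustrative sizing only (condensed, not the exact driver code): with work-request based registration each command can post a registration, a send and an invalidation work request, so three SQ slots must be reserved per outstanding command instead of two.

    unsigned int wrs_per_cmd = use_memory_registration ? 3 : 2;

    init_attr->cap.max_send_wr = wrs_per_cmd * target->queue_size;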
-
Trond Myklebust authored
commit 756b9b37 upstream. The NFSv4.1 callback channel is currently broken because the receive message will keep shrinking, since the backchannel receive buffer size never gets reset. The easiest solution to this problem is, instead of changing the receive buffer, to adjust the copied request. Fixes: 38b7631f ("nfs4: limit callback decoding to received bytes") Cc: Benjamin Coddington <bcodding@redhat.com> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Benjamin Coddington authored
commit 38b7631f upstream. A truncated cb_compound request will cause the client to decode null or data from a previous callback in the NFSv4.1 backchannel case, or uninitialized data in the NFSv4.0 case. This is because the path through svc_process_common() advances the request's iov_base and decrements iov_len without adjusting the overall xdr_buf's len field. That causes xdr_init_decode() to set up the xdr_stream with an incorrect length in nfs4_callback_compound(). Fixing this for the nfs4.1 backchannel case first requires setting the correct iov_len and page_len based on the length of received data, in the same manner as the nfs4.0 case. Then the request's xdr_buf length can be adjusted for both cases based upon the remaining iov_len and page_len. Signed-off-by: Benjamin Coddington <bcodding@redhat.com> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Suman Anna authored
commit c13f99b7 upstream. The virtio core uses a static ida named virtio_index_ida for assigning index numbers to virtio devices during registration. The ida core may allocate some internal idr cache layers and an ida bitmap upon any ida allocation, and all these layers are truly freed only upon the ida destruction. The virtio_index_ida is not destroyed at present, leading to a memory leak when using the virtio core as a module and at least one virtio device is registered and unregistered. Fix this by invoking ida_destroy() in the virtio core module exit. Signed-off-by: Suman Anna <s-anna@ti.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
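A sketch of the fix as described: destroy the ida at module exit so its cached idr layers and bitmap are handed back to the allocator.

    static void __exit virtio_exit(void)
    {
        bus_unregister(&virtio_bus);
        ida_destroy(&virtio_index_ida);   /* the added call */
    }
    module_exit(virtio_exit);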
-
David Henningsson authored
commit 02f6ff90 upstream. On the internal mic of the Packard Bell DOTS, one channel has an inverted signal. Add a quirk to fix this up. BugLink: https://bugs.launchpad.net/bugs/1523232 Signed-off-by: David Henningsson <david.henningsson@canonical.com> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Takashi Iwai authored
commit a74a8216 upstream. The rme96 driver needs to reset the DAC depending on the sample rate, and this results in the volume suddenly jumping to maximum. This is because of a missing call to snd_rme96_apply_dac_volume(). However, calling this function right after the DAC reset still may not work, and we need some delay before this call. Since the DAC reset and the procedure after that are performed in the spinlock, we delay the DAC volume restore to the end, after the spinlock. Reported-and-tested-by: Sylvain LABOISNE <maeda1@free.fr> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Chunfeng Yun authored
commit 096b110a upstream. If a full-speed hub is connected to a high-speed hub which supports MTT, the MTT field of its slot context will be set to 1 when the xHCI driver sets up the xHCI virtual device in xhci_setup_addressable_virt_dev(). Once the USB core fetches its hub descriptor and needs to update the xHC's internal data structures for the device, the HUB field of its slot context will be set to 1 too. Since MTT was also set before, this will cause the Configure Endpoint command to fail. So in this case, we should clear MTT to 0 for the full-speed hub, according to section 6.2.2. Signed-off-by: Chunfeng Yun <chunfeng.yun@mediatek.com> Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
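A hedged sketch of the hub-update path (simplified; 'tt' stands in for the transaction translator info passed to the hub-update callback, and the exact shape is an assumption):

    slot_ctx->dev_info |= cpu_to_le32(DEV_HUB);
    if (tt->multi)
        slot_ctx->dev_info |= cpu_to_le32(DEV_MTT);
    else if (hdev->speed == USB_SPEED_FULL)
        /* HUB=1 plus a stale MTT=1 makes Configure Endpoint fail */
        slot_ctx->dev_info &= cpu_to_le32(~DEV_MTT);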
-
Peter Zijlstra authored
commit 642c2d67 upstream. Dmitry reported a fairly silly recursive lock deadlock for PERF_EVENT_IOC_PERIOD, fix this by explicitly doing the inactive part of __perf_event_period() instead of calling that function. Reported-by: Dmitry Vyukov <dvyukov@google.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Potapenko <glider@google.com> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Eric Dumazet <edumazet@google.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Kostya Serebryany <kcc@google.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Sasha Levin <sasha.levin@oracle.com> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Fixes: c7999c6f ("perf: Fix PERF_EVENT_IOC_PERIOD migration race") Link: http://lkml.kernel.org/r/20151130115615.GJ17308@twins.programming.kicks-ass.net Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Ken Xue authored
commit 4fd41a85 upstream. The routines in scsi_pm.c assume that if a runtime-PM callback is invoked for a SCSI device, it can only mean that the device's driver has asked the block layer to handle the runtime power management (by calling blk_pm_runtime_init(), which among other things sets q->dev). However, this assumption turns out to be wrong for things like the ses driver. Normally ses devices are not allowed to do runtime PM, but userspace can override this setting. If this happens, the kernel gets a NULL pointer dereference when blk_post_runtime_resume() tries to use the uninitialized q->dev pointer. This patch fixes the problem by checking q->dev in the block layer before handling runtime PM. Since ses doesn't define any PM callbacks or call blk_pm_runtime_init(), the crash no longer occurs. This fixes Bugzilla #101371. https://bugzilla.kernel.org/show_bug.cgi?id=101371 More discussion can be found at the link below. http://marc.info/?l=linux-scsi&m=144163730531875&w=2 Signed-off-by: Ken Xue <Ken.Xue@amd.com> Acked-by: Alan Stern <stern@rowland.harvard.edu> Cc: Xiangliang Yu <Xiangliang.Yu@amd.com> Cc: James E.J. Bottomley <JBottomley@odin.com> Cc: Jens Axboe <axboe@kernel.dk> Cc: Michael Terry <Michael.terry@canonical.com> Signed-off-by: Jens Axboe <axboe@fb.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
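A sketch of the guard, mirrored in each blk_*_runtime_* helper (body condensed):

    int blk_pre_runtime_suspend(struct request_queue *q)
    {
        int ret = 0;

        /* blk_pm_runtime_init() was never called: nothing to manage. */
        if (!q->dev)
            return ret;

        /* ... normal runtime-suspend accounting follows ... */
        return ret;
    }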
-
Mike Snitzer authored
commit 30ce6e1c upstream. The block allocated at the start of btree_split_sibling() is never released if later insert_at() fails. Fix this by releasing the previously allocated bufio block using unlock_block(). Reported-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Ben Hutchings authored
commit 5377adb0 upstream. usb_parse_ss_endpoint_companion() now decodes the burst multiplier correctly in order to check that it's <= 3, but still uses the wrong expression when warning that it's > 3. Fixes: ff30cbc8 ("usb: Use the USB_SS_MULT() macro to get the ...") Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Alexey Khoroshilov authored
commit f9fa1887 upstream. qset_fill_page_list() does not check for dma mapping errors. Found by Linux Driver Verification project (linuxtesting.org). Signed-off-by: Alexey Khoroshilov <khoroshilov@ispras.ru> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
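An illustrative instance of the missing check (variable names hypothetical): every dma_map_single() result must be validated with dma_mapping_error() before the handle is used or stored.

    dma_addr_t dma_addr = dma_map_single(whc->wusbhc.dev, pl_virt, pl_len,
                                         DMA_TO_DEVICE);
    if (dma_mapping_error(whc->wusbhc.dev, dma_addr))
        return -EFAULT;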
-
Hans Yang authored
commit 464ad8c4 upstream. When a USB 3.0 mass storage device is disconnected in the transporting state, the storage device driver may handle it as a transport error and reset the device by invoking usb_reset_and_verify_device(), and the following can happen:

	in usb_reset_and_verify_device():
		udev->bos = NULL;

For U1/U2-enabled devices, the driver will disable LPM, and in some conditions:

	from usb_unlocked_disable_lpm()
	 --> usb_disable_lpm()
	 --> usb_enable_lpm()
		udev->bos->ss_cap->bU1devExitLat;

And it causes a NULL pointer dereference and kernel panic:

	[ 157.976257] Unable to handle kernel NULL pointer dereference at virtual address 00000010
	...
	[ 158.026400] PC is at usb_enable_link_state+0x34/0x2e0
	[ 158.031442] LR is at usb_enable_lpm+0x98/0xac
	...
	[ 158.137368] [<ffffffc0006a1cac>] usb_enable_link_state+0x34/0x2e0
	[ 158.143451] [<ffffffc0006a1fec>] usb_enable_lpm+0x94/0xac
	[ 158.148840] [<ffffffc0006a20e8>] usb_disable_lpm+0xa8/0xb4
	...
	[ 158.214954] Kernel panic - not syncing: Fatal exception

This commit moves 'udev->bos = NULL' to after usb_unlocked_disable_lpm() to prevent the NULL pointer access. The issue can be reproduced with the following setup:
1) A SS pen drive behind a SS hub connected to the host.
2) Transporting data between the pen drive and the host.
3) Abruptly disconnect hub and pen drive from host.
4) With a chance it crashes.
Signed-off-by: Hans Yang <hansy@nvidia.com> Acked-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
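A sketch of the reordering in usb_reset_and_verify_device() (simplified; the error label name is an assumption):

    /* Disable LPM while udev->bos is still valid... */
    ret = usb_unlocked_disable_lpm(udev);   /* dereferences udev->bos */
    if (ret) {
        dev_err(&udev->dev, "%s Failed to disable LPM\n", __func__);
        goto re_enumerate_no_bos;
    }

    /* ...and only then detach the BOS descriptor for the reset. */
    bos = udev->bos;
    udev->bos = NULL;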
-
Guillaume Delbergue authored
commit d5d4fdd8 upstream. This patch is specifically for PCI support on the Versatile PB board using a DT. Currently, the dynamic IRQ mapping is broken when using DTs. For example, on QEMU, the SCSI driver is unable to request the IRQ. To fix this issue, this patch replaces the current dynamic mechanism with a static value as is done in the non-DT case. Signed-off-by: Guillaume Delbergue <guillaume.delbergue@greensocs.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Al Viro authored
commit 9225c0b7 upstream. missing get_user() Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> [ luis: backported to 3.16: adjusted context ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Dmitry Katsubo authored
commit 9fa62b1a upstream. The patch extends the family of SATA-to-USB JMicron adapters that need FUA to be disabled and applies the same policy to the uas driver. See details in http://unix.stackexchange.com/questions/237204/ Signed-off-by: Dmitry Katsubo <dmitry.katsubo@gmail.com> Tested-by: Dmitry Katsubo <dmitry.katsubo@gmail.com> Acked-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> [ luis: backported to 3.16: adjusted context ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Mikulas Patocka authored
commit d98f1cd0 upstream. When I connect an Intel SSD to the SATA SIL controller (PCI ID 1095:3114), any TRIM command results in I/O errors being reported in the log. There is another similar error reported with TRIM and the SIL controller: https://bugs.centos.org/view.php?id=5880 Apparently the controller doesn't support TRIM commands. This patch disables TRIM support on the SATA SIL controller.

	ata7.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
	ata7.00: BMDMA2 stat 0x50001
	ata7.00: failed command: DATA SET MANAGEMENT
	ata7.00: cmd 06/01:01:00:00:00/00:00:00:00:00/a0 tag 0 dma 512 out
	         res 51/04:01:00:00:00/00:00:00:00:00/a0 Emask 0x1 (device error)
	ata7.00: status: { DRDY ERR }
	ata7.00: error: { ABRT }
	ata7.00: device reported invalid CHS sector 0
	sd 8:0:0:0: [sdb] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
	sd 8:0:0:0: [sdb] tag#0 Sense Key : Illegal Request [current] [descriptor]
	sd 8:0:0:0: [sdb] tag#0 Add. Sense: Unaligned write command
	sd 8:0:0:0: [sdb] tag#0 CDB: Write same(16) 93 08 00 00 00 00 00 21 95 88 00 20 00 00 00 00
	blk_update_request: I/O error, dev sdb, sector 2200968

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Xiangliang Yu authored
commit 023113d2 upstream. The current code doesn't update the port value of the Port Multiplier (PM) when sending the FIS of softreset to the device, so the command will fail if FBS is enabled. There are two ways to fix the issue: the first is to disable FBS before sending the softreset command to the PM device, and the second is to update the port value of the PM when sending the command. For the first way, I can't find any related rule in the AHCI spec. The second way can avoid disabling FBS and has better performance. Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com> Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Thomas Hellstrom authored
commit 025af189 upstream. In ttm_write_lock(), the uninterruptible path should call __ttm_write_lock() not __ttm_read_lock(). This fixes a vmwgfx hang on F23 start up. syeh: Extracted this from one of Thomas' internal patches. Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Reviewed-by: Sinclair Yeh <syeh@vmware.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Jan Kara authored
commit bc23f0c8 upstream. Ted and Namjae have reported that truncated pages don't get reclaimed in a timely fashion after being truncated in data=journal mode. The following test triggers the issue easily:

	for (i = 0; i < 1000; i++) {
		pwrite(fd, buf, 1024*1024, 0);
		fsync(fd);
		fsync(fd);
		ftruncate(fd, 0);
	}

The reason is that journal_unmap_buffer() finds that truncated buffers are not journalled (jh->b_transaction == NULL), but they are part of the checkpoint list of a transaction (jh->b_cp_transaction != NULL) and have already been written out (!buffer_dirty(bh)). We clean such buffers but we leave them in the checkpoint list. Since the checkpoint transaction holds a reference to the journal head, these buffers cannot be released until the checkpoint transaction is cleaned up. And at that point we don't call release_buffer_page() anymore, so pages detached from the mapping are lingering in the system waiting for reclaim to find them and free them. Fix the problem by removing buffers from transaction checkpoint lists when journal_unmap_buffer() finds out they don't have to be there anymore. Reported-and-tested-by: Namjae Jeon <namjae.jeon@samsung.com> Fixes: de1b7941 Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
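A self-contained version of the reproducer quoted above (the file path is illustrative; assumes a filesystem mounted with data=journal):

    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        char *buf = malloc(1024 * 1024);
        int fd = open("/mnt/test/file", O_CREAT | O_RDWR, 0644);

        if (fd < 0 || !buf)
            return 1;
        memset(buf, 0xab, 1024 * 1024);
        for (int i = 0; i < 1000; i++) {
            pwrite(fd, buf, 1024 * 1024, 0);
            fsync(fd);
            fsync(fd);
            ftruncate(fd, 0);
        }
        close(fd);
        free(buf);
        return 0;
    }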
-
David Turner authored
commit a4dad1ae upstream. In ext4, the bottom two bits of {a,c,m}time_extra are used to extend the {a,c,m}time fields, deferring the year 2038 problem to the year 2446. When decoding these extended fields, for times whose bottom 32 bits would represent a negative number, sign extension causes the 64-bit extended timestamp to be negative as well, which is not what's intended. This patch corrects that issue, so that the only negative {a,c,m}times are those between 1901 and 1970 (as per 32-bit signed timestamps). Some older kernels might have written pre-1970 dates with 1,1 in the extra bits. This patch treats those incorrectly-encoded dates as pre-1970, instead of post-2311, until kernel 4.20 is released. Hopefully by then e2fsck will have fixed up the bad data. Also add a comment explaining the encoding of ext4's extra {a,c,m}time bits. Signed-off-by: David Turner <novalis@novalis.org> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Reported-by: Mark Harris <mh8928@yahoo.com> Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=23732 Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
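A hedged userspace sketch of the decoding described above (the constants mirror ext4's two epoch bits; the legacy pre-1970 workaround mentioned in the message is omitted):

    #include <stdint.h>

    #define EXT4_EPOCH_BITS 2
    #define EXT4_EPOCH_MASK ((1u << EXT4_EPOCH_BITS) - 1)

    static int64_t decode_time(int32_t seconds, uint32_t extra)
    {
        uint32_t epoch = extra & EXT4_EPOCH_MASK;

        if (epoch)
            /* Treat the low 32 bits as unsigned before adding the epoch:
             * sign-extending them is exactly the bug being fixed, turning
             * post-2038 times into bogus negative timestamps. */
            return (int64_t)((uint64_t)(uint32_t)seconds +
                             ((uint64_t)epoch << 32));

        /* epoch 0: a plain signed 32-bit time, 1901..2038 */
        return seconds;
    }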
-