- 11 Jun, 2019 40 commits
-
Louis Li authored
commit ce0e22f5 upstream. [What] vce ring test fails consistently during resume in the s3 cycle, due to mismatched read & write pointers. On debug/analysis it is found that the rptr to be compared is not being correctly updated/read, which leads to this failure. Below is the failure signature: [drm:amdgpu_vce_ring_test_ring] *ERROR* amdgpu: ring 12 test failed [drm:amdgpu_device_ip_resume_phase2] *ERROR* resume of IP block <vce_v3_0> failed -110 [drm:amdgpu_device_resume] *ERROR* amdgpu_device_ip_resume failed (-110). [How] Fetch rptr appropriately, i.e. move its read location further down in the code flow. With this patch applied the s3 failure is no longer seen for >5k s3 cycles, whereas it is otherwise pretty consistent. V2: remove redundant fetch of rptr Signed-off-by: Louis Li <Ching-shih.Li@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
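A minimal sketch of the reordering described above, assuming the relevant code is the VCE ring-test helper; the names follow the amdgpu ring API, but the function itself is illustrative, not the upstream diff. The point is that rptr is sampled only after the ring has been allocated for the test and immediately before the polling loop, so a stale value cannot survive the suspend/resume transition:

```c
/* Illustrative sketch (not the upstream patch): fetch rptr late, right
 * before it is used, so the comparison below sees a current value.
 * Assumes the in-tree amdgpu headers (amdgpu.h, amdgpu_vce.h) and
 * <linux/delay.h>.
 */
static int vce_ring_test_sketch(struct amdgpu_ring *ring, unsigned int timeout_us)
{
	uint32_t rptr;
	unsigned int i;
	int r;

	r = amdgpu_ring_alloc(ring, 16);	/* reserve space on the ring */
	if (r)
		return r;

	rptr = amdgpu_ring_get_rptr(ring);	/* moved down: read rptr only now */

	amdgpu_ring_write(ring, VCE_CMD_END);	/* submit a trivial command */
	amdgpu_ring_commit(ring);

	for (i = 0; i < timeout_us; i++) {
		if (amdgpu_ring_get_rptr(ring) != rptr)
			return 0;		/* ring advanced: test passed */
		udelay(1);
	}

	return -ETIMEDOUT;
}
```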
-
Harry Wentland authored
commit ada637e7 upstream. [WHY] We only want to load DMCU FW on Picasso and Raven 2, not on Raven 1. Signed-off-by: Harry Wentland <harry.wentland@amd.com> Reviewed-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alex Deucher authored
commit 5887a599 upstream. Not necessary on soc15 and breaks driver reload on server cards. Acked-by: Amber Lin <Amber.Lin@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Chris Wilson authored
commit d90c06d5 upstream. This was supposed to be a mask of all known rings, but it is being used by execbuffer to filter out invalid rings, and so is instead mapping high unused values onto valid rings. Instead of a mask of all known rings, we need it to be the mask of all possible rings. Fixes: 549f7365 ("drm/i915: Enable SandyBridge blitter ring") Fixes: de1add36 ("drm/i915: Decouple execbuf uAPI from internal implementation") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: <stable@vger.kernel.org> # v4.6+ Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190301140404.26690-21-chris@chris-wilson.co.uk Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
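A small self-contained illustration of the failure mode (the mask widths and ring count below are made up for the example, not the i915 values): masking a user-supplied ring id with a mask sized to the known rings silently aliases out-of-range ids onto valid rings, whereas masking with all possible bits of the field keeps the range check meaningful.

```c
#include <stdio.h>

#define KNOWN_RING_MASK    0x3   /* hypothetical: mask covering 4 known rings */
#define POSSIBLE_RING_MASK 0x3f  /* hypothetical: full 6-bit ring id field    */
#define NUM_RINGS          4

int main(void)
{
	unsigned int user_flags = 5;	/* invalid ring id from userspace */

	/* Buggy pattern: 5 & 0x3 == 1, so an invalid id maps onto ring 1. */
	printf("masked with known rings:    %u\n", user_flags & KNOWN_RING_MASK);

	/* Fixed pattern: keep every bit of the field, then range-check it. */
	unsigned int ring = user_flags & POSSIBLE_RING_MASK;
	printf("masked with possible rings: %u (%s)\n", ring,
	       ring >= NUM_RINGS ? "rejected" : "accepted");
	return 0;
}
```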
-
Aaron Liu authored
commit bdb1ccb0 upstream. In amdgpu_atif_handler, when a hotplug event is received, remove the ATPX_DGPU_REQ_POWER_FOR_DISPLAYS check. Checking this bit can cause the system resume to be missed. Signed-off-by: Aaron Liu <aaron.liu@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Christian König authored
commit 2e26ccb1 upstream. Instead of the closest reference divider prefer the lowest; this fixes flickering issues on HP Compaq nx9420. Bugs: https://bugs.freedesktop.org/show_bug.cgi?id=108514 Suggested-by: Paul Dufresne <dufresnep@gmail.com> Signed-off-by: Christian König <christian.koenig@amd.com> Acked-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alex Deucher authored
commit 9d6fea57 upstream. In case we need to use them for GPU reset prior to initializing the asic. Fixes a crash if the driver attempts to reset the GPU at driver load time. Acked-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Mario Kleiner authored
commit 0cbd0adc upstream. As discussed with Nicholas and Daniel Vetter (patchwork link to discussion below), the VRR timestamping behaviour produced utterly useless and bogus vblank/pageflip timestamps. We have found a way to fix this and provide sane behaviour. As of Linux 5.2, the amdgpu driver will be able to provide exactly the same vblank / pageflip timestamp semantic in variable refresh rate mode as in standard fixed refresh rate mode. This is achieved by deferring core vblank handling (drm_crtc_handle_vblank()) until the end of front porch, and also deferring the sending of pageflip completion events until end of front porch, when we can safely compute correct pageflip/vblank timestamps. The same approach will be possible for other VRR capable kms drivers, so we can actually have sane and useful timestamps in VRR mode. This patch removes the section of the docs that describes the broken timestamp behaviour present in Linux 5.0/5.1. Fixes: ab7a664f ("drm: Document variable refresh properties") Link: https://patchwork.freedesktop.org/patch/285333/ Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> Reviewed-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com> Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190418060157.18968-1-mario.kleiner.de@gmail.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ryan Pavlik authored
commit 29054230 upstream. Add two EDID vendor/product pairs used across a variety of Sensics products, as well as the OSVR HDK and HDK 2. Signed-off-by: Ryan Pavlik <ryan.pavlik@collabora.com> Signed-off-by: Daniel Stone <daniels@collabora.com> Reviewed-by: Daniel Stone <daniels@collabora.com> Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de> Link: https://patchwork.freedesktop.org/patch/msgid/20181203164644.13974-1-ryan.pavlik@collabora.com Cc: <stable@vger.kernel.org> # v4.15+ Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Dave Airlie authored
commit b30a43ac upstream. There was a nouveau DDX that relied on legacy context ioctls to work, but we fixed it years ago; give distros that have a modern DDX the option to break the uAPI and close the mess of holes that legacy context support is. Full context of the story: commit 0e975980 Author: Peter Antoine <peter.antoine@intel.com> Date: Tue Jun 23 08:18:49 2015 +0100 drm: Turn off Legacy Context Functions The context functions are not used by the i915 driver and should not be used by modeset drivers. These driver functions contain several bugs and security holes. This change makes these functions optional; they can be turned on by a setting and are turned off by default for modeset drivers, with the exception of the nouveau driver, which may require them with an old version of libdrm. The previous attempt was commit 7c510133 Author: Daniel Vetter <daniel.vetter@ffwll.ch> Date: Thu Aug 8 15:41:21 2013 +0200 drm: mark context support as a legacy subsystem but this had to be reverted commit c21eb21c Author: Dave Airlie <airlied@redhat.com> Date: Fri Sep 20 08:32:59 2013 +1000 Revert "drm: mark context support as a legacy subsystem" v2: remove returns from void function, and formatting (Daniel Vetter) v3: - s/Nova/nouveau/ in the commit message, and add references to the previous attempts - drop the part touching the drm hw lock, that should be a separate patch. Signed-off-by: Peter Antoine <peter.antoine@intel.com> (v2) Cc: Peter Antoine <peter.antoine@intel.com> (v2) Reviewed-by: Peter Antoine <peter.antoine@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> v2: move DRM_VM dependency into legacy config. v3: fix missing dep (kbuild robot) Cc: stable@vger.kernel.org Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Andres Rodriguez authored
commit 30d62d44 upstream. Add vendor/product pairs for the Valve Index HMDs. Signed-off-by: Andres Rodriguez <andresx7@gmail.com> Cc: Dave Airlie <airlied@redhat.com> Cc: <stable@vger.kernel.org> # v4.15 Signed-off-by: Dave Airlie <airlied@redhat.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190502193157.15692-1-andresx7@gmail.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Helen Koike authored
commit 474d952b upstream. Async update callbacks are expected to set the old_fb in the new_state so prepare/cleanup framebuffers are balanced. Cc: <stable@vger.kernel.org> # v4.14+ Fixes: 224a4c97 ("drm/msm: update cursors asynchronously through atomic") Suggested-by: Boris Brezillon <boris.brezillon@collabora.com> Signed-off-by: Helen Koike <helen.koike@collabora.com> Acked-by: Rob Clark <robdclark@gmail.com> Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190603165610.24614-4-helen.koike@collabora.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
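A hedged sketch of the contract being restored (generic DRM-helper-shaped code, not the msm driver itself): at the end of the async update the callback hands the old framebuffer back through the passed-in state, so the subsequent cleanup_fb() unreferences the buffer that was actually displaced.

```c
#include <drm/drm_plane.h>
#include <drm/drm_framebuffer.h>

/* Sketch of an atomic_async_update callback keeping prepare/cleanup balanced. */
static void async_update_sketch(struct drm_plane *plane,
				struct drm_plane_state *new_state)
{
	struct drm_framebuffer *old_fb = plane->state->fb;

	/* apply the new state in place on plane->state and program the HW */
	plane->state->fb = new_state->fb;
	plane->state->crtc_x = new_state->crtc_x;
	plane->state->crtc_y = new_state->crtc_y;
	/* ... hardware programming elided ... */

	/* hand the old fb back so cleanup_fb() releases the right buffer */
	new_state->fb = old_fb;
}
```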
-
Patrik Jakobsson authored
commit 7c420636 upstream. Some machines have an lvds child device in vbt even though a panel is not attached. To make detection more reliable we now also check the lvds config bits available in the vbt. Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1665766 Cc: stable@vger.kernel.org Reviewed-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Patrik Jakobsson <patrik.r.jakobsson@gmail.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190416114607.1072-1-patrik.r.jakobsson@gmail.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Helen Koike authored
commit c16b8555 upstream. Async update callbacks are expected to set the old_fb in the new_state so prepare/cleanup framebuffers are balanced. Calling drm_atomic_set_fb_for_plane() (which gets a reference on the new fb and puts the old fb) is not required, as it's taken care of by drm_mode_cursor_universal() when calling drm_atomic_helper_update_plane(). Cc: <stable@vger.kernel.org> # v4.19+ Fixes: 539c320b ("drm/vc4: update cursors asynchronously through atomic") Suggested-by: Boris Brezillon <boris.brezillon@collabora.com> Signed-off-by: Helen Koike <helen.koike@collabora.com> Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com> Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190603165610.24614-5-helen.koike@collabora.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Helen Koike authored
commit d985a353 upstream. In the case of async update, modifications are done in place, i.e. in the current plane state, so the new_state is prepared and the new_state is cleaned up (instead of the old_state, unlike what happens in a normal sync update). To clean up the old_fb properly, it needs to be placed in the new_state at the end of async_update, so the cleanup call will unreference the old_fb correctly. Also, the previous code had a: plane_state = plane->funcs->atomic_duplicate_state(plane); ... swap(plane_state, plane->state); if (plane->state->fb && plane->state->fb != new_state->fb) { ... } Which was wrong, as the fbs were just assigned to be equal, so this if statement never evaluates to true. Another detail is that the function drm_crtc_vblank_get() can only be called when vop->is_enabled is true; otherwise it has no effect and throws a WARN_ON(). Calling drm_atomic_set_fb_for_plane() (which gets a reference on the new fb and puts the old fb) is not required, as it is taken care of by drm_mode_cursor_universal() when calling drm_atomic_helper_update_plane(). Fixes: 15609559 ("drm/rockchip: update cursors asynchronously through atomic.") Cc: <stable@vger.kernel.org> # v4.20+ Signed-off-by: Helen Koike <helen.koike@collabora.com> Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190603165610.24614-2-helen.koike@collabora.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Dan Carpenter authored
commit bd17cc5a upstream. The limit here is supposed to be how much of the page is left, but it's just using PAGE_SIZE as the limit. The other thing to remember is that snprintf() returns the number of bytes which would have been copied if we had had enough room. So that means that if we run out of space then this code would end up passing a negative value as the limit and the kernel would print an error message. I have changed the code to use scnprintf(), which returns the number of bytes that were successfully printed (not counting the NUL terminator). Fixes: c92316bf ("test_firmware: add batched firmware tests") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Cc: stable <stable@vger.kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
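A hedged sketch of the pattern the fix relies on (not the test_firmware code itself): because scnprintf() reports the bytes actually written, the running length can never exceed the buffer size, so the remaining-space argument of the next call can never go negative the way it could with snprintf().

```c
#include <linux/kernel.h>	/* scnprintf() */

/* Sketch: append several entries into one PAGE_SIZE buffer safely. */
static ssize_t dump_items_sketch(char *buf, const char * const *items, int n)
{
	ssize_t len = 0;
	int i;

	for (i = 0; i < n; i++) {
		/* scnprintf() returns bytes written, so len <= PAGE_SIZE always
		 * and PAGE_SIZE - len never underflows */
		len += scnprintf(buf + len, PAGE_SIZE - len,
				 "item %d: %s\n", i, items[i]);
	}

	return len;
}
```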
-
Dan Carpenter authored
commit 110080ce upstream. There are a couple of potential integer overflows here. round_up(m->size + (m->addr & ~PAGE_MASK), PAGE_SIZE); The first thing is that the "m->size + (...)" addition could overflow, and the second is that round_up() overflows to zero if the result is within PAGE_SIZE of the type max. In this code, the "m->size" variable is a u64 but we're saving the result in "map_size" which is an unsigned long and genwqe_user_vmap() takes an unsigned long as well. So I have used ULONG_MAX as the upper bound. From a practical perspective unsigned long is fine/better than trying to change all the types to u64. Fixes: eaf4722d ("GenWQE Character device and DDCB queue") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Cc: stable <stable@vger.kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
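A hedged sketch of the guard this implies (names are illustrative, not the genwqe ones): the size is checked against ULONG_MAX before the addition and the round_up(), so neither can wrap.

```c
#include <linux/kernel.h>	/* round_up() */
#include <linux/mm.h>		/* PAGE_SIZE, PAGE_MASK */

/* Sketch: compute the page-rounded mapping size, rejecting inputs where
 * either the addition or round_up() would overflow an unsigned long.
 */
static int calc_map_size_sketch(u64 size, unsigned long addr,
				unsigned long *map_size)
{
	unsigned long offs = addr & ~PAGE_MASK;

	/* the sum must stay at least PAGE_SIZE below ULONG_MAX so that
	 * round_up() cannot wrap to zero either */
	if (size > ULONG_MAX - PAGE_SIZE - offs)
		return -EINVAL;

	*map_size = round_up(size + offs, PAGE_SIZE);
	return 0;
}
```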
-
Paul Burton authored
commit e4f2d1af upstream. The pistachio platform uses the U-Boot bootloader & generally boots a kernel in the uImage format. As such it's useful to build one when building the kernel, but to do so currently requires the user to manually specify a uImage target on the make command line. Make uImage.gz the pistachio platform's default build target, so that the default is to build a kernel image that we can actually boot on a board such as the MIPS Creator Ci40. Marked for stable backport as far back as v4.1, where pistachio support was introduced. This is primarily useful for CI systems such as kernelci.org, which will benefit from us building a suitable image which can then be booted as part of automated testing, extending our test coverage to the affected stable branches. Signed-off-by: Paul Burton <paul.burton@mips.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Kevin Hilman <khilman@baylibre.com> Tested-by: Kevin Hilman <khilman@baylibre.com> URL: https://groups.io/g/kernelci/message/388 Cc: stable@vger.kernel.org # v4.1+ Cc: linux-mips@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Paul Burton authored
commit 074a1e11 upstream. The virt_addr_valid() function is meant to return true iff virt_to_page() will return a valid struct page reference. This is true iff the address provided is found within the unmapped address range between PAGE_OFFSET & MAP_BASE, but we don't currently check for that condition. Instead we simply mask the address to obtain what will be a physical address if the virtual address is indeed in the desired range, shift it to form a PFN & then call pfn_valid(). This can incorrectly return true if called with a virtual address which, after masking, happens to form a physical address corresponding to a valid PFN. For example we may vmalloc an address in the kernel mapped region starting at MAP_BASE & obtain the virtual address: addr = 0xc000000000002000 When masked by virt_to_phys(), which uses __pa() & in turn CPHYSADDR(), we obtain the following (bogus) physical address: addr = 0x2000 In a common system with PHYS_OFFSET=0 this will correspond to a valid struct page which should really be accessed by virtual address PAGE_OFFSET+0x2000, causing virt_addr_valid() to incorrectly return 1 indicating that the original address corresponds to a struct page. This is equivalent to the ARM64 change made in commit ca219452 ("arm64: Correctly bounds check virt_addr_valid"). This fixes fallout when hardened usercopy is enabled caused by the related commit 517e1fbe ("mm/usercopy: Drop extra is_vmalloc_or_module() check") which removed a check for the vmalloc range that was present from the introduction of the hardened usercopy feature. Signed-off-by: Paul Burton <paul.burton@mips.com> References: ca219452 ("arm64: Correctly bounds check virt_addr_valid") References: 517e1fbe ("mm/usercopy: Drop extra is_vmalloc_or_module() check") Reported-by: Julien Cristau <jcristau@debian.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Tested-by: YunQiang Su <ysu@wavecomp.com> URL: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=929366 Cc: stable@vger.kernel.org # v4.12+ Cc: linux-mips@vger.kernel.org Cc: Yunqiang Su <ysu@wavecomp.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
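A hedged sketch of the check described above (MIPS-specific constants are assumed from the surrounding text; treat it as pseudocode for the idea rather than the exact arch implementation): only addresses inside the unmapped kernel segment between PAGE_OFFSET and MAP_BASE are allowed through to the pfn_valid() check, so a vmalloc address can no longer alias onto a valid PFN.

```c
/* Sketch: bounds-check the virtual address before trusting the PFN math.
 * Assumes the MIPS arch headers for PAGE_OFFSET, MAP_BASE, virt_to_phys(),
 * PFN_DOWN() and pfn_valid().
 */
static bool virt_addr_valid_sketch(const volatile void *kaddr)
{
	unsigned long vaddr = (unsigned long)kaddr;

	/* outside [PAGE_OFFSET, MAP_BASE) means not a directly-mapped address */
	if (vaddr < PAGE_OFFSET || vaddr >= MAP_BASE)
		return false;

	return pfn_valid(PFN_DOWN(virt_to_phys(kaddr)));
}
```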
-
Roger Pau Monne authored
commit 1d5c76e6 upstream. There's no reason to request physically contiguous memory for those allocations. [boris: added CC to stable] Cc: stable@vger.kernel.org Reported-by: Ian Jackson <ian.jackson@citrix.com> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Juergen Gross <jgross@suse.com> Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Sagi Grimberg authored
commit 5651cd3c upstream. When the controller supports less queues than requested, we should make sure that queue mapping does the right thing and not assume that all queues are available. This fixes a crash when the controller supports less queues than requested. The rules are: 1. if no write/poll queues are requested, we assign the available queues to the default queue map. The default and read queue maps share the existing queues. 2. if write queues are requested: - first make sure that read queue map gets the requested nr_io_queues count - then grant the default queue map the minimum between the requested nr_write_queues and the remaining queues. If there are no available queues to dedicate to the default queue map, fallback to (1) and share all the queues in the existing queue map. 3. if poll queues are requested: - map the remaining queues to the poll queue map. Also, provide a log indication on how we constructed the different queue maps. Reported-by: Harris, James R <james.r.harris@intel.com> Reviewed-by: Max Gurtovoy <maxg@mellanox.com> Tested-by: Jim Harris <james.r.harris@intel.com> Cc: <stable@vger.kernel.org> # v5.0+ Signed-off-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
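The three rules above map naturally onto a small helper; the following is a hedged sketch of one literal reading of them (function and parameter names are illustrative, not the nvme driver's own).

```c
#include <linux/minmax.h>	/* min() */

/* Sketch: distribute the queues the controller actually granted. */
static void distribute_queues_sketch(unsigned int granted,
				     unsigned int want_read,
				     unsigned int want_write,
				     unsigned int want_poll,
				     unsigned int *nr_default,
				     unsigned int *nr_read,
				     unsigned int *nr_poll)
{
	unsigned int avail = granted;

	if (!want_write) {
		/* rule 1: default and read maps share all available queues */
		*nr_default = avail;
		*nr_read = 0;
	} else {
		/* rule 2: serve the read map first ... */
		*nr_read = min(want_read, avail);
		/* ... then give the default map the smaller of the write
		 * request and whatever remains */
		*nr_default = min(want_write, avail - *nr_read);
		if (*nr_default == 0) {
			/* nothing left to dedicate: fall back to rule 1 */
			*nr_read = 0;
			*nr_default = avail;
		}
	}
	avail -= *nr_default + *nr_read;

	/* rule 3: remaining queues (if any) feed the poll map */
	*nr_poll = min(want_poll, avail);
}
```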
-
Gerald Schaefer authored
commit 962f0af8 upstream. Commit 0aaba41b ("s390: remove all code using the access register mode") removed access register mode from the kernel, and also from the address space detection logic. However, user space could still switch to access register mode (trans_exc_code == 1), and exceptions in that mode would not be correctly assigned. Fix this by adding a check for trans_exc_code == 1 to get_fault_type(), and remove the wrong comment line before that function. Fixes: 0aaba41b ("s390: remove all code using the access register mode") Reviewed-by: Janosch Frank <frankja@linux.ibm.com> Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: <stable@vger.kernel.org> # v4.15+ Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Robert Hancock authored
commit 49b80958 upstream. This driver does not support reading more than 255 bytes at once because the register for storing the number of bytes to read is only 8 bits. Add a max_read_len quirk to enforce this. This was found when using this driver with the SFP driver, which was previously reading all 256 bytes in the SFP EEPROM in one transaction. This caused a bunch of hard-to-debug errors in the xiic driver since the driver/logic was treating the number of bytes to read as zero. Rejecting transactions that aren't supported at least allows the problem to be diagnosed more easily. Signed-off-by: Robert Hancock <hancock@sedsystems.ca> Reviewed-by: Michal Simek <michal.simek@xilinx.com> Signed-off-by: Wolfram Sang <wsa@the-dreams.de> Cc: stable@kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
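A hedged sketch of what such a quirk looks like (the structure and field are the real i2c-core ones, but the variable naming and wiring are illustrative): with max_read_len set, the core rejects oversized reads with a clear error instead of handing the driver a length its 8-bit register cannot represent.

```c
#include <linux/i2c.h>

/* Sketch: cap reads at what the 8-bit length register can express. */
static const struct i2c_adapter_quirks xiic_quirks_sketch = {
	.max_read_len = 255,	/* length register is only 8 bits wide */
};

/* wired up in probe, roughly: adapter->quirks = &xiic_quirks_sketch; */
```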
-
Jann Horn authored
commit de9f8696 upstream. get_desc() computes a pointer into the LDT while holding a lock that protects the LDT from being freed, but then drops the lock and returns the (now potentially dangling) pointer to its caller. Fix it by giving the caller a copy of the LDT entry instead. Fixes: 670f928b ("x86/insn-eval: Add utility function to get segment descriptor") Cc: stable@vger.kernel.org Signed-off-by: Jann Horn <jannh@google.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
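A hedged sketch of the safer calling convention described above (not the exact x86 insn-eval code; field names follow the x86 mm context but treat this as a sketch): the descriptor is copied into caller-provided storage while the LDT lock is held, so the caller never sees a pointer that can dangle.

```c
/* Sketch only (assumes x86 <asm/desc.h> / <asm/mmu.h> definitions):
 * return a copy of the LDT entry rather than a pointer into the live table.
 */
static bool get_ldt_desc_copy_sketch(struct desc_struct *out, unsigned short sel)
{
	bool success = false;
	unsigned int index = sel >> 3;		/* selector -> descriptor index */

	mutex_lock(&current->mm->context.lock);
	if (current->mm->context.ldt &&
	    index < current->mm->context.ldt->nr_entries) {
		*out = current->mm->context.ldt->entries[index];
		success = true;
	}
	mutex_unlock(&current->mm->context.lock);

	return success;		/* *out stays valid even after the LDT is freed */
}
```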
-
Jiri Kosina authored
commit ec527c31 upstream. As explained in 0cc3cd21 ("cpu/hotplug: Boot HT siblings at least once") we always, no matter what, have to bring up x86 HT siblings during boot at least once in order to avoid the first MCE bringing the system to its knees. That means that whenever 'nosmt' is supplied on the kernel command-line, all the HT siblings are as a result sitting in mwait or cpuidle after going through the online-offline cycle at least once. This causes a serious issue though when a kernel, which saw 'nosmt' on its commandline, is going to perform resume from hibernation: if the resume from the hibernated image is successful, cr3 is flipped in order to point to the address space of the kernel that is being resumed, which in turn means that all the HT siblings are all of a sudden mwaiting on an address which is no longer valid. That results in a triple fault shortly after cr3 is switched, and the machine reboots. Fix this by always waking up all the SMT siblings before initiating the 'restore from hibernation' process; this guarantees that all the HT siblings will be properly carried over to the resumed kernel waiting in resume_play_dead(), and acted upon accordingly afterwards, based on the target kernel configuration. Symmetrically, the resumed kernel has to push the SMT siblings to mwait again in case it has SMT disabled; this means it has to online all the siblings when resuming (so that they come out of hlt) and offline them again to let them reach mwait. Cc: 4.19+ <stable@vger.kernel.org> # v4.19+ Debugged-by: Thomas Gleixner <tglx@linutronix.de> Fixes: 0cc3cd21 ("cpu/hotplug: Boot HT siblings at least once") Signed-off-by: Jiri Kosina <jkosina@suse.cz> Acked-by: Pavel Machek <pavel@ucw.cz> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Faiz Abbas authored
commit 73979931 upstream. In the call to regmap_update_bits() for SLOTTYPE, the mask and value fields are exchanged. Fix this. Signed-off-by: Faiz Abbas <faiz_abbas@ti.com> Fixes: 41fd4cae ("mmc: sdhci_am654: Add Initial Support for AM654 SDHCI driver") Cc: stable@vger.kernel.org Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
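For reference, a hedged illustration of the argument order (the register and mask definitions below are placeholders, not the sdhci_am654 ones): regmap_update_bits() takes the bit mask as the third argument and the value to store in that field as the fourth, which is exactly what the swap fixes.

```c
#include <linux/regmap.h>
#include <linux/bits.h>

/* Placeholder register/field definitions purely for illustration. */
#define CTL_CFG_SKETCH		0x14
#define SLOTTYPE_MASK_SKETCH	GENMASK(31, 30)

static int set_slot_type_sketch(struct regmap *map, unsigned int slottype_val)
{
	/* regmap_update_bits(map, reg, mask, val): mask third, value fourth */
	return regmap_update_bits(map, CTL_CFG_SKETCH,
				  SLOTTYPE_MASK_SKETCH, slottype_val);
}
```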
-
Takeshi Saito authored
commit 51b72656 upstream. If an SCC error occurs during a read/write command execution, a false positive CRC error message is output. mmcblk0: response CRC error sending r/w cmd command, card status 0x900 check_scc_error() checks SCC_RVSREQ.RVSERR bit. RVSERR detects a correction error in the next (up or down) delay tap position. However, since the command is successful, only retuning needs to be executed. This has been confirmed by HW engineers. Thus, on SCC error, set retuning flag instead of setting an error code. Fixes: b85fb0a1 ("mmc: tmio: Fix SCC error detection") Signed-off-by: Takeshi Saito <takeshi.saito.xv@renesas.com> [wsa: updated comment and commit message, removed some braces] Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Reviewed-by: Simon Horman <horms+renesas@verge.net.au> Reviewed-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> Cc: stable@vger.kernel.org Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Dan Carpenter authored
commit 61009f82 upstream. We accidentally changed the error code from -EAGAIN to 1 when we did the blk-mq conversion. Maybe a contributing factor to this mistake is that it wasn't obvious that the "while (chunk) {" condition is always true. I have cleaned that up as well. Fixes: d0be1227 ("mspro_block: convert to blk-mq") Cc: stable@vger.kernel.org Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
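A hedged sketch of the two points made above (generic block-driver-shaped code with hypothetical helpers, not the mspro_block source): report -EAGAIN rather than a bare positive value when nothing could be issued, and drop the always-true loop condition.

```c
/* Illustrative sketch only; fetch_next_request()/start_request() are
 * hypothetical stand-ins for the driver's own queue handling.
 */
static int issue_requests_sketch(struct request_queue *q)
{
	struct request *req;
	bool issued = false;

	while (true) {				/* was effectively: while (chunk), always true */
		req = fetch_next_request(q);	/* hypothetical helper */
		if (!req)
			break;
		start_request(req);		/* hypothetical helper */
		issued = true;
	}

	return issued ? 0 : -EAGAIN;		/* the conversion accidentally returned 1 here */
}
```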
-
Masahiro Yamada authored
commit 913ab978 upstream. To print the pathname that will be used by shell in the current environment, 'command -v' is a standardized way. [1] 'which' is also often used in scripts, but it is less portable. When I worked on commit bd55f96f ("kbuild: refactor cc-cross-prefix implementation"), I was eager to use 'command -v' but it did not work. (The reason is explained below.) I kept 'which' as before but got rid of '> /dev/null 2>&1' as I thought it was no longer needed. Sorry, I was wrong. It works well on my Ubuntu machine, but Alexey Brodkin reports noisy warnings on CentOS7 when 'which' fails to find the given command in the PATH environment. $ which foo which: no foo in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin) Given that behavior of 'which' depends on system (and it may not be installed by default), I want to try 'command -v' once again. The specification [1] clearly describes the behavior of 'command -v' when the given command is not found: Otherwise, no output shall be written and the exit status shall reflect that the name was not found. However, we need a little magic to use 'command -v' from Make. $(shell ...) passes the argument to a subshell for execution, and returns the standard output of the command. Here is a trick. GNU Make may optimize this by executing the command directly instead of forking a subshell, if no shell special characters are found in the command and omitting the subshell will not change the behavior. In this case, no shell special character is used. So, Make will try to run it directly. However, 'command' is a shell-builtin command, then Make would fail to find it in the PATH environment: $ make ARCH=m68k defconfig make: command: Command not found make: command: Command not found make: command: Command not found In fact, Make has a table of shell-builtin commands because it must ask the shell to execute them. Until recently, 'command' was missing in the table. This issue was fixed by the following commit: | commit 1af314465e5dfe3e8baa839a32a72e83c04f26ef | Author: Paul Smith <psmith@gnu.org> | Date: Sun Nov 12 18:10:28 2017 -0500 | | * job.c: Add "command" as a known shell built-in. | | This is not a POSIX shell built-in but it's common in UNIX shells. | Reported by Nick Bowler <nbowler@draconx.ca>. Because the latest release is GNU Make 4.2.1 in 2016, this commit is not included in any released versions. (But some distributions may have back-ported it.) We need to trick Make to spawn a subshell. There are various ways to do so: 1) Use a shell special character '~' as dummy $(shell : ~; command -v $(c)gcc) 2) Use a variable reference that always expands to the empty string (suggested by David Laight) $(shell command$${x:+} -v $(c)gcc) 3) Use redirect $(shell command -v $(c)gcc 2>/dev/null) I chose 3) to not confuse people. The stderr would not be polluted anyway, but it will provide extra safety, and is easy to understand. Tested on Make 3.81, 3.82, 4.0, 4.1, 4.2, 4.2.1 [1] http://pubs.opengroup.org/onlinepubs/9699919799/utilities/command.html Fixes: bd55f96f ("kbuild: refactor cc-cross-prefix implementation") Cc: linux-stable <stable@vger.kernel.org> # 5.1 Reported-by: Alexey Brodkin <abrodkin@synopsys.com> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Tested-by: Alexey Brodkin <abrodkin@synopsys.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Kees Cook authored
commit 8880fa32 upstream. The ram pstore backend has always had the crash dumper frontend enabled unconditionally. However, it was possible to effectively disable it by setting a record_size=0. All the machinery would run (storing dumps to the temporary crash buffer), but 0 bytes would ultimately get stored due to there being no przs allocated for dumps. Commit 89d328f6 ("pstore/ram: Correctly calculate usable PRZ bytes"), however, assumed that there would always be at least one allocated dprz for calculating the size of the temporary crash buffer. This was, of course, not the case when record_size=0, and would lead to a NULL deref trying to find the dprz buffer size: BUG: unable to handle kernel NULL pointer dereference at (null) ... IP: ramoops_probe+0x285/0x37e (fs/pstore/ram.c:808) cxt->pstore.bufsize = cxt->dprzs[0]->buffer_size; Instead, we need to only enable the frontends based on the success of the prz initialization and only take the needed actions when those zones are available. (This also fixes a possible error in detecting if the ftrace frontend should be enabled.) Reported-and-tested-by: Yaro Slav <yaro330@gmail.com> Fixes: 89d328f6 ("pstore/ram: Correctly calculate usable PRZ bytes") Cc: stable@vger.kernel.org Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
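A hedged sketch of the idea described above (the context structure is a cut-down illustration, not the real ramoops_context): each frontend is only advertised, and its buffers only touched, when the corresponding zones were actually allocated, so record_size=0 simply leaves the dmesg frontend off instead of dereferencing a missing dump zone.

```c
#include <linux/pstore.h>
#include <linux/pstore_ram.h>

/* Minimal illustrative context; the real ramoops_context has more fields. */
struct ramoops_ctx_sketch {
	struct pstore_info pstore;
	struct persistent_ram_zone **dprzs;	/* dump zones, may be NULL */
	unsigned int max_dump_cnt;
	unsigned long console_size, pmsg_size;
	unsigned int max_ftrace_cnt;
};

static void enable_frontends_sketch(struct ramoops_ctx_sketch *cxt)
{
	cxt->pstore.flags = 0;
	if (cxt->max_dump_cnt) {		/* dump zones exist */
		cxt->pstore.flags |= PSTORE_FLAGS_DMESG;
		cxt->pstore.bufsize = cxt->dprzs[0]->buffer_size;
	}
	if (cxt->console_size)
		cxt->pstore.flags |= PSTORE_FLAGS_CONSOLE;
	if (cxt->max_ftrace_cnt)		/* fixes the ftrace-enable check too */
		cxt->pstore.flags |= PSTORE_FLAGS_FTRACE;
	if (cxt->pmsg_size)
		cxt->pstore.flags |= PSTORE_FLAGS_PMSG;
}
```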
-
Pi-Hsun Shih authored
commit a9fb94a9 upstream. Set tfm to NULL in free_buf_for_compression() after crypto_free_comp(). This avoids a use-after-free when allocate_buf_for_compression() and free_buf_for_compression() are called twice. Although free_buf_for_compression() freed the tfm, allocate_buf_for_compression() won't reinitialize the tfm since the tfm pointer is not NULL. Fixes: 95047b05 ("pstore: Refactor compression initialization") Signed-off-by: Pi-Hsun Shih <pihsun@chromium.org> Cc: stable@vger.kernel.org Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
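A hedged sketch of the shape of the fix (the static variables mirror those named in the description; this is not the verbatim pstore code): once the transform pointer is cleared, a second free is a no-op and the next allocation path knows it has to create a fresh transform.

```c
#include <linux/crypto.h>
#include <linux/slab.h>

static struct crypto_comp *tfm;		/* compression transform, may be NULL */
static char *big_oops_buf;
static size_t big_oops_buf_sz;

static void free_buf_for_compression_sketch(void)
{
	if (tfm) {
		crypto_free_comp(tfm);
		tfm = NULL;	/* the added line: lets allocation re-create it later */
	}
	kfree(big_oops_buf);
	big_oops_buf = NULL;
	big_oops_buf_sz = 0;
}
```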
-
Miklos Szeredi authored
commit a2bc9236 upstream. Prior to sending COPY_FILE_RANGE to userspace filesystem, we must flush all dirty pages in both the source and destination files. This patch adds the missing flush of the source file. Tested on libfuse-3.5.0 with: libfuse/example/passthrough_ll /mnt/fuse/ -o writeback libfuse/test/test_syscalls /mnt/fuse/tmp/test Fixes: 88bc7d50 ("fuse: add support for copy_file_range()") Cc: <stable@vger.kernel.org> # v4.20 Signed-off-by: Miklos Szeredi <mszeredi@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
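A hedged sketch of the missing step (generic VFS-level code, not the fuse implementation; the function name is made up): dirty pages of the source range are written back as well as the destination's before the COPY_FILE_RANGE request goes to the userspace server, so the server-side copy sees the data the client holds in its page cache.

```c
#include <linux/fs.h>
#include <linux/pagemap.h>

/* Sketch: flush both files' dirty pages for the affected ranges. */
static int flush_before_copy_sketch(struct inode *src, loff_t pos_in,
				    struct inode *dst, loff_t pos_out,
				    size_t len)
{
	int err;

	/* the previously missing flush of the source file */
	err = filemap_write_and_wait_range(src->i_mapping,
					   pos_in, pos_in + len - 1);
	if (err)
		return err;

	return filemap_write_and_wait_range(dst->i_mapping,
					    pos_out, pos_out + len - 1);
}
```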
-
Miklos Szeredi authored
commit 35d6fcbb upstream. Do the proper cleanup in case the size check fails. Tested with xfstests:generic/228 Reported-by: kbuild test robot <lkp@intel.com> Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Fixes: 0cbade02 ("fuse: honor RLIMIT_FSIZE in fuse_file_fallocate") Cc: Liu Bo <bo.liu@linux.alibaba.com> Cc: <stable@vger.kernel.org> # v3.5 Signed-off-by: Miklos Szeredi <mszeredi@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Yihao Wu authored
commit ba851a39 upstream. When a waiter is woken by CB_NOTIFY_LOCK, it will retry nfs4_proc_setlk(). The waiter may fail in nfs4_proc_setlk() and sleep again. However, the waiter is already removed from clp->cl_lock_waitq when handling CB_NOTIFY_LOCK in nfs4_wake_lock_waiter(). So any subsequent CB_NOTIFY_LOCK won't wake this waiter anymore. We should put the waiter back to clp->cl_lock_waitq before retrying. Cc: stable@vger.kernel.org #4.9+ Signed-off-by: Yihao Wu <wuyihao@linux.alibaba.com> Reviewed-by: Jeff Layton <jlayton@kernel.org> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Yihao Wu authored
commit 52b042ab upstream. Commit b7dbcc0e "NFSv4.1: Fix a race where CB_NOTIFY_LOCK fails to wake a waiter" found this bug. However it didn't fix it. This commit replaces schedule_timeout() with wait_woken() and default_wake_function() with woken_wake_function() in function nfs4_retry_setlk() and nfs4_wake_lock_waiter(). wait_woken() uses memory barriers in its implementation to avoid potential race condition when putting a process into sleeping state and then waking it up. Fixes: a1d617d8 ("nfs: allow blocking locks to be awoken by lock callbacks") Cc: stable@vger.kernel.org #4.9+ Signed-off-by: Yihao Wu <wuyihao@linux.alibaba.com> Reviewed-by: Jeff Layton <jlayton@kernel.org> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
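For context, a hedged sketch of the generic wait_woken() pattern the fix switches to (taken from the common wait-queue API, not the nfs4_retry_setlk() body; event_happened() is a hypothetical predicate): woken_wake_function() records the wakeup in the wait entry itself, so wait_woken() cannot miss a wakeup that races with going to sleep.

```c
#include <linux/wait.h>
#include <linux/sched.h>

static bool event_happened(void);	/* hypothetical condition to wait for */

/* Sketch: sleep until event_happened() or the timeout expires. */
static long wait_for_event_sketch(wait_queue_head_t *wq, long timeout)
{
	DEFINE_WAIT_FUNC(wait, woken_wake_function);

	add_wait_queue(wq, &wait);
	while (!event_happened() && timeout)
		timeout = wait_woken(&wait, TASK_INTERRUPTIBLE, timeout);
	remove_wait_queue(wq, &wait);

	return timeout;		/* remaining time, or 0 on timeout */
}
```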
-
Trond Myklebust authored
commit 7987b694 upstream. The addition of rpc_check_timeout() to call_decode causes an Oops when the RPCSEC_GSS credential is rejected. The reason is that rpc_decode_header() will call xprt_release() in order to free task->tk_rqstp, which is needed by rpc_check_timeout() to check whether or not we should exit due to a soft timeout. The fix is to move the call to xprt_release() into call_decode() so we can perform it after rpc_check_timeout(). Reported-by: Olga Kornievskaia <olga.kornievskaia@gmail.com> Reported-by: Nick Bowler <nbowler@draconx.ca> Fixes: cea57789 ("SUNRPC: Clean up") Cc: stable@vger.kernel.org # v5.1+ Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Olga Kornievskaia authored
commit ec6017d9 upstream. If call_status returns ENOTCONN, we need to re-establish the connection state after. Otherwise the client goes into an infinite loop of call_encode, call_transmit, call_status (ENOTCONN), call_encode. Fixes: c8485e4d ("SUNRPC: Handle ECONNREFUSED correctly in xprt_transmit()") Signed-off-by: Olga Kornievskaia <kolga@netapp.com> Cc: stable@vger.kernel.org # v2.6.29+ Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Helge Deller authored
commit 527a1d1e upstream. According to the found documentation, data cache flushes and sync instructions are needed on the PCX-U+ (PA8200, e.g. C200/C240) platforms, while PCX-W (PA8500, e.g. C360) platforms apparently don't need those flushes when changing the IO PDIR data structures. We have no documentation for PCX-W+ (PA8600) and PCX-W2 (PA8700) CPUs, but Carlo Pisani reported that his C3600 machine (PA8600, PCX-W+) failed when the fdc instructions were removed. His firmware didn't set the NIOP bit, so one may assume it's a firmware bug since other C3750 machines had the bit set. Even if documentation (as mentioned above) states that PCX-W (PA8500, e.g. J5000) does not need fdc flushes, Sven could show that an Adaptec 29320A PCI-X SCSI controller reliably failed on a dd command during the first five minutes in his J5000 when fdc flushes were missing. Going forward, we will now NOT replace the fdc and sync assembler instructions by NOPS if: a) the NP iopdir_fdc bit was set by firmware, or b) we find a CPU up to and including a PCX-W+ (PA8600). This fixes the HPMC crashes on a C240 and C36XX machines. For other machines we rely on the firmware to set the bit when needed. In case one finds HPMC issues, people could try to boot their machines with the "no-alternatives" kernel option to turn off any alternative patching. Reported-by: Sven Schnelle <svens@stackframe.org> Reported-by: Carlo Pisani <carlojpisani@gmail.com> Tested-by: Sven Schnelle <svens@stackframe.org> Fixes: 3847dab7 ("parisc: Add alternative coding infrastructure") Signed-off-by: Helge Deller <deller@gmx.de> Cc: stable@vger.kernel.org # 5.0+ Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
John David Anglin authored
commit 63923d2c upstream. We only support I/O to kernel space. Using %sr1 to load the coherence index may be racy unless interrupts are disabled. This patch changes the code used to load the coherence index to use implicit space register selection. This saves one instruction and eliminates the race. Tested on rp3440, c8000 and c3750. Signed-off-by: John David Anglin <dave.anglin@bell.net> Cc: stable@vger.kernel.org Signed-off-by: Helge Deller <deller@gmx.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Eugeniy Paltsev authored
commit a8c715b4 upstream. As of today, if a userspace process tries to access a kernel virtual address (0x7000_0000 to 0x7ffff_ffff) for which a legit kernel mapping already exists, that process hangs instead of being killed with SIGSEGV. Fix that by ensuring that do_page_fault() handles kernel vaddr only if in kernel mode. And given this, we can also simplify the code a bit. Now a vmalloc fault implies kernel mode so its failure (for some reason) can reuse the @no_context label and we can remove @bad_area_nosemaphore. User test to reproduce the original problem: ------------------------>8----------------- #include <stdlib.h> #include <stdint.h> int main(int argc, char *argv[]) { volatile uint32_t temp; temp = *(uint32_t *)(0x70000000); } ------------------------>8----------------- Cc: <stable@vger.kernel.org> Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com> Signed-off-by: Vineet Gupta <vgupta@synopsys.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
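A hedged sketch of the control-flow change described above (illustrative only, not the ARC fault handler itself; the helper names are hypothetical): a fault on a kernel virtual address is treated as a vmalloc fault only when the CPU was in kernel mode, while a user-mode access to that range falls through to the normal SIGSEGV path instead of hanging.

```c
/* Illustrative sketch, not the arch/arc source. */
static void do_page_fault_sketch(unsigned long address, struct pt_regs *regs)
{
	if (address >= VMALLOC_START && !user_mode(regs)) {
		/* kernel-mode fault on a kernel address: sync page tables */
		if (fixup_kernel_vaddr_fault(address))	/* hypothetical helper */
			goto no_context;
		return;
	}

	/* user-mode access to a kernel address (or any other bad access)
	 * continues into the usual find_vma()/SIGSEGV handling */
	handle_user_fault(address, regs);		/* hypothetical helper */
	return;

no_context:
	/* exception-table fixup or kernel oops path */
	kernel_fault_die(address, regs);		/* hypothetical helper */
}
```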
-