- 17 Jan, 2013 40 commits
-
Mark Brown authored
commit db04328c upstream. If count is less than the size of a register then we may hit integer wraparound when trying to move backwards to check if we're still in the buffer. Instead move the position forwards to check if it's still in the buffer; we are unlikely to be able to allocate a buffer sufficiently big to overflow here.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
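
A minimal, self-contained C sketch of the bounds check described above (illustrative only; the names are hypothetical, not the regmap debugfs code). Comparing "position plus register size" against the end of the buffer avoids the unsigned wraparound that the backwards check hits when count is smaller than the register size:

        #include <stddef.h>
        #include <stdio.h>

        /* Broken: when count < reg_size, count - reg_size wraps around to a
         * huge value and the check wrongly passes. */
        static int fits_backwards(size_t pos, size_t count, size_t reg_size)
        {
                return pos <= count - reg_size;
        }

        /* Fixed: move the position forwards instead; no subtraction, so no
         * wraparound (a buffer big enough to overflow this is unrealistic). */
        static int fits_forwards(size_t pos, size_t count, size_t reg_size)
        {
                return pos + reg_size <= count;
        }

        int main(void)
        {
                /* count = 2 bytes, register = 4 bytes: nothing fits. */
                printf("backwards: %d (wrong), forwards: %d (right)\n",
                       fits_backwards(0, 2, 4), fits_forwards(0, 2, 4));
                return 0;
        }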
-
Zhang Rui authored
commit b7e38304 upstream. When the system enters power off, the _PSW of the Lid device is enabled. But this may cause the system to reboot instead of powering off. A proper way to fix this is to always disable lid wakeup capability for S5. References: https://bugzilla.kernel.org/show_bug.cgi?id=35262
Signed-off-by: Zhang Rui <rui.zhang@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Joseph Salisbury <joseph.salisbury@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Namjae Jeon authored
commit fb719c59 upstream. Incrementing lenExtents even while writing to a hole is bad for performance, as calls to udf_discard_prealloc and udf_truncate_tail_extent would not return early if isize != lenExtents.
Signed-off-by: Namjae Jeon <namjae.jeon@samsung.com>
Signed-off-by: Ashish Sangwan <a.sangwan@samsung.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Shuah Khan <shuah.khan@hp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Namjae Jeon authored
commit 2fb7d99d upstream. Need to brelse the buffer_head stored in cur_epos and next_epos.
Signed-off-by: Namjae Jeon <namjae.jeon@samsung.com>
Signed-off-by: Ashish Sangwan <a.sangwan@samsung.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Shuah Khan <shuah.khan@hp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ed Cashin authored
commit 0a41409c upstream. Before the aoe driver was an I/O request handler, it was a make_request-style block driver. Even so, there was a problem where sysfs expected a request queue to exist, so one was provided in commit 7135a71b ("aoe: allocate unused request_queue for sysfs"). During the transition to the request-handler style, a patch was merged that was based on a driver without the noop queue, and the noop queue remained in place after the patch was merged, even though a new functional queue was introduced by the patch, allocated through blk_init_queue. The user impact is a memory leak proportional to the number of AoE targets discovered. This patch removes the memory leak and cleans up vestiges of the old do-nothing queue from the aoeblk_gdalloc function.
Signed-off-by: Ed Cashin <ecashin@coraid.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Guo Chao authored
commit 0ecaef06 upstream. If the checksum fails, we should also release the buffer read from the previous iteration.
Signed-off-by: Guo Chao <yan@linux.vnet.ibm.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Reviewed-by: "Darrick J. Wong" <darrick.wong@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Theodore Ts'o authored
commit 0e9a9a1a upstream. When trying to mount a file system which does not contain a journal, but which does have an orphan list containing an inode which needs to be truncated, the mount call will hang forever in ext4_orphan_cleanup() because ext4_orphan_del() will return immediately without removing the inode from the orphan list, leading to an uninterruptible loop in kernel code which will busy out one of the CPUs on the system. This can be trivially reproduced by trying to mount the file system found in tests/f_orphan_extents_inode/image.gz from the e2fsprogs source tree. If a malicious user were to put this on a USB stick, and mount it on a Linux desktop which has automatic mounts enabled, this could be considered a potential denial of service attack. (Not a big deal in practice, but professional paranoids worry about such things, and have even been known to allocate CVE numbers for such problems.)
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Reviewed-by: Zheng Liu <wenqing.lz@taobao.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Theodore Ts'o authored
commit 721e3eba upstream. Commit c278531d added a warning when ext4_flush_unwritten_io() is called without i_mutex being taken. It had previously not been taken during orphan cleanup since races weren't possible at that point in the mount process, but as a result of commit c278531d, we will now see a kernel WARN_ON in this case. Take the i_mutex in ext4_orphan_cleanup() to suppress this warning.
Reported-by: Alexander Beregalov <a.beregalov@gmail.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Reviewed-by: Zheng Liu <wenqing.lz@taobao.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
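
A hedged sketch of the locking change described above (a simplification; the surrounding orphan-list walk and error handling are omitted): the truncate of each orphaned inode is wrapped in i_mutex so the WARN_ON added by commit c278531d no longer triggers.

        /* inside ext4_orphan_cleanup(), for each orphaned inode (sketch) */
        mutex_lock(&inode->i_mutex);
        ext4_truncate(inode);   /* may flush unwritten I/O, which expects i_mutex */
        mutex_unlock(&inode->i_mutex);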
-
Michael Tokarev authored
commit d096ad0f upstream. When a journal-less ext4 filesystem is mounted on a read-only block device (blockdev --setro will do), each remount (for other, unrelated flags, like suid=>nosuid etc.) results in a series of scary messages from the kernel telling about I/O errors on the device. This is because of the following code in ext4_remount():

        if (sbi->s_journal == NULL)
                ext4_commit_super(sb, 1);

at the end of the remount procedure, which forces writing (flushing) of the superblock regardless of whether it is dirty or not, whether the filesystem is read-only or not, and whether the device itself is read-only or not. We only need to call ext4_commit_super when the file system had been previously mounted read/write. Thanks to Eric Sandeen for help in diagnosing this issue.
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
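
The shape of the fix described above, as a hedged fragment (old_sb_flags stands for the superblock flags saved at the start of ext4_remount(); the exact variable name may differ in the real code):

        /* Only force a superblock write-out if the filesystem was previously
         * mounted read/write; a ro->ro remount has nothing to flush. */
        if (sbi->s_journal == NULL && !(old_sb_flags & MS_RDONLY))
                ext4_commit_super(sb, 1);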
-
Jan Kara authored
commit d7961c7f upstream. The following race is possible between start_this_handle() and someone calling jbd2_journal_flush():

  Process A                                  Process B

  start_this_handle().
    if (journal->j_barrier_count)            # false
    if (!journal->j_running_transaction) {   # true
      read_unlock(&journal->j_state_lock);
                                             jbd2_journal_lock_updates()
                                             jbd2_journal_flush()
                                               write_lock(&journal->j_state_lock);
                                               if (journal->j_running_transaction) {
                                                 # false
                                               ... wait for committing trans ...
                                               write_unlock(&journal->j_state_lock);
                                               ...
      write_lock(&journal->j_state_lock);
      if (!journal->j_running_transaction) { # true
        jbd2_get_transaction(journal, new_transaction);
        write_unlock(&journal->j_state_lock);
        goto repeat; # eventually blocks on j_barrier_count > 0
                                               ...
                                               J_ASSERT(!journal->j_running_transaction);
                                                 # fails

We fix the race by rechecking j_barrier_count after reacquiring j_state_lock in exclusive mode.
Reported-by: yjwsignal@empal.com
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
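
A hedged sketch of the recheck in start_this_handle() (simplified; the retry loop and error paths are omitted):

        write_lock(&journal->j_state_lock);
        /* Recheck under the exclusive lock: a barrier may have been raised by
         * jbd2_journal_lock_updates() while j_state_lock was dropped. */
        if (!journal->j_running_transaction &&
            journal->j_barrier_count == 0) {
                jbd2_get_transaction(journal, new_transaction);
                /* ... proceed to attach the handle to the new transaction ... */
        }
        write_unlock(&journal->j_state_lock);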
-
Jan Kara authored
commit 261cb20c upstream. Currently we allow enabling the dioread_nolock mount option on remount for filesystems where blocksize < PAGE_CACHE_SIZE. This isn't really supported, so fix the bug by moving the check for blocksize != PAGE_CACHE_SIZE into parse_options(). Change the original PAGE_SIZE to PAGE_CACHE_SIZE along the way because that's what we are really interested in.
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Reviewed-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
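
A hedged sketch of where the check ends up (the message text and exact helpers are approximations, not the verbatim ext4 code); placing it in parse_options() makes it cover both mount and remount:

        if (test_opt(sb, DIOREAD_NOLOCK) &&
            sb->s_blocksize != PAGE_CACHE_SIZE) {
                ext4_msg(sb, KERN_ERR,
                         "dioread_nolock requires a block size equal to the page cache size");
                return 0;
        }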
-
Forrest Liu authored
commit c36575e6 upstream. When the depth of the extent tree is greater than 1, the logical start value of an interior node is not correctly updated in ext4_ext_rm_idx.
Signed-off-by: Forrest Liu <forrestl@synology.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Reviewed-by: Ashish Sangwan <ashishsangwan2@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Rafael J. Wysocki authored
commit 9f6d8f6a upstream. Currently, the PM core disables runtime PM for all devices right after executing subsystem/driver .suspend() callbacks for them and re-enables it right before executing subsystem/driver .resume() callbacks for them. This may lead to problems when there are two devices such that the .suspend() callback executed for one of them depends on runtime PM working for the other. In that case, if runtime PM has already been disabled for the second device, the first one's .suspend() won't work correctly (and analogously for resume). To make those issues go away, make the PM core disable runtime PM for devices right before executing subsystem/driver .suspend_late() callbacks for them and enable runtime PM for them right after executing subsystem/driver .resume_early() callbacks for them. This way the potential conflicts between .suspend_late()/.resume_early() and their runtime PM counterparts are still prevented from happening, but the subtle ordering issues related to disabling/enabling runtime PM for devices during system suspend/resume are much easier to avoid.
Reported-and-tested-by: Jan-Matthias Braun <jan_braun@gmx.net>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Kevin Hilman <khilman@deeprootsystems.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Seth Forshee authored
commit e04c200f upstream. BugLink: http://bugs.launchpad.net/bugs/1086921
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
Signed-off-by: Matthew Garrett <matthew.garrett@nebula.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Lothar Waßmann authored
commit 6c1ecba8 upstream. The VDCTRL4 register does not provide the MXS SET/CLR/TOGGLE feature. The write in mxsfb_disable_controller() sets the data_cnt for the LCD DMA to 0, which obviously means the max. count for the LCD DMA and leads to overwriting arbitrary memory when the display is unblanked.
Signed-off-by: Lothar Waßmann <LW@KARO-electronics.de>
Acked-by: Juergen Beisert <jbe@pengutronix.de>
Tested-by: Lauri Hintsala <lauri.hintsala@bluegiga.net>
Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
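
A hedged sketch of the corrected access pattern (register and bit names are recalled from the driver and may not match exactly; treat them as illustrative): since VDCTRL4 has no SET/CLR/TOGGLE shadow registers, bits must be cleared with a read-modify-write that preserves the data_cnt field.

        /* Clear only the sync-signals bit; keep data_cnt intact. */
        reg = readl(host->base + LCDC_VDCTRL4);
        writel(reg & ~VDCTRL4_SYNC_SIGNALS_ON, host->base + LCDC_VDCTRL4);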
-
Hante Meuleman authored
commit 619c5a9a upstream. RSN IEs got incorrectly parsed and therefore AP mode using WPA2 security was not working.
Reviewed-by: Arend Van Spriel <arend@broadcom.com>
Reviewed-by: Pieter-Paul Giesberts <pieterpg@broadcom.com>
Signed-off-by: Hante Meuleman <meuleman@broadcom.com>
Signed-off-by: Arend van Spriel <arend@broadcom.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Sivaram Nair authored
commit 92638e2f upstream. The ready_waiting_counts atomic variable is compared against the wrong online cpu count. The latter is computed incorrectly using logical-OR instead of bit-OR. This patch fixes that.
Signed-off-by: Sivaram Nair <sivaramn@nvidia.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Colin Cross <ccross@android.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
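
A self-contained illustration of the operator mix-up described above (the CPU numbers are made up): logical OR collapses its operands to 0 or 1, so it cannot build a mask of online CPUs, while bitwise OR does.

        #include <stdio.h>

        int main(void)
        {
                unsigned int cpu0 = 0, cpu1 = 1;

                unsigned int wrong = (1 << cpu0) || (1 << cpu1); /* 1: a boolean, not a mask */
                unsigned int right = (1 << cpu0) | (1 << cpu1);  /* 0x3: both CPUs set */

                printf("logical OR: 0x%x, bitwise OR: 0x%x\n", wrong, right);
                return 0;
        }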
-
Ian Campbell authored
commit d9a58a78 upstream. Using RX_COPY_THRESHOLD is incorrect if the SKB is actually smaller than that. We have already accounted for this in NETFRONT_SKB_CB(skb)->pull_to, so use that instead. Fixes WARN_ON from skb_try_coalesce.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: Sander Eikelenboom <linux@eikelenboom.it>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: annie li <annie.li@oracle.com>
Cc: xen-devel@lists.xen.org
Cc: netdev@vger.kernel.org
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
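
A hedged fragment of the receive-path pattern described above (simplified; the surrounding context is omitted and may differ in the driver): pull up to the per-skb target recorded in the control block rather than a fixed threshold that may exceed the packet's length.

        unsigned int pull_to = NETFRONT_SKB_CB(skb)->pull_to;

        if (pull_to > skb_headlen(skb))
                __pskb_pull_tail(skb, pull_to - skb_headlen(skb));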
-
Kees Cook authored
commit 7b9205bd upstream. The seccomp path was using AUDIT_ANOM_ABEND from when seccomp mode 1 could only kill a process. While we still want to make sure an audit record is forced on a kill, this should use a separate record type since seccomp mode 2 introduces other behaviors. In the case of "handled" behaviors (process wasn't killed), only emit a record if the process is under inspection. This change also fixes userspace examination of seccomp audit events, since it was considered malformed due to missing fields of the AUDIT_ANOM_ABEND event type.
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Eric Paris <eparis@redhat.com>
Cc: Jeff Layton <jlayton@redhat.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Julien Tinnes <jln@google.com>
Acked-by: Will Drewry <wad@chromium.org>
Acked-by: Steve Grubb <sgrubb@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Chris Verges authored
commit 0602934f upstream. If an LM73 device does not exist on an I2C bus, attempts to communicate with the device result in an error code returned from the i2c read/write functions. The current lm73 driver casts that return value from a s32 type to a s16 type, then converts it to a temperature in celsius. Because negative temperatures are valid, it is difficult to distinguish between an error code printed to the response buffer and a negative temperature recorded by the sensor. The solution is to evaluate the return value from the i2c functions before performing any temperature calculations. If the i2c function did not succeed, the error code should be passed back through the virtual file system layer instead of being printed into the response buffer.

Before:
        $ cat /sys/class/hwmon/hwmon0/device/temp1_input
        -46

After:
        $ cat /sys/class/hwmon/hwmon0/device/temp1_input
        cat: read error: No such device or address

Signed-off-by: Chris Verges <kg4ysn@gmail.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
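
A self-contained C model of the error-propagation pattern described above (the bus read and the scaling are stand-ins, not the lm73 driver code): the negative error code must be checked before any conversion, otherwise it becomes indistinguishable from a legitimately negative temperature.

        #include <errno.h>
        #include <stdio.h>

        /* Stand-in for the smbus read: raw register value >= 0, or -errno. */
        static int bus_read_word(int device_present)
        {
                return device_present ? 0x1900 : -ENXIO;
        }

        /* Return 0 and fill *out, or propagate the negative error code. */
        static int read_temp(int device_present, int *out)
        {
                int raw = bus_read_word(device_present);

                if (raw < 0)
                        return raw;             /* hand -ENXIO back to the caller */
                *out = (raw >> 7) * 250;        /* purely illustrative scaling */
                return 0;
        }

        int main(void)
        {
                int t;

                if (read_temp(1, &t) == 0)
                        printf("temperature: %d\n", t);
                if (read_temp(0, &t) < 0)
                        printf("device missing: error propagated, not printed\n");
                return 0;
        }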
-
Malcolm Priestley authored
commit 70e22779 upstream. The timer appears to run too fast/race on 64 bit systems. Using msecs_to_jiffies seems to cause a deadlock on 64 bit. A calculation of (MSecond * HZ) / 1000 appears to run satisfactorily. Change BSSIDInfoCount to u32. After this patch the driver can successfully connect on little endian 64/32 bit systems.
Signed-off-by: Malcolm Priestley <tvboxspy@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Malcolm Priestley authored
commit c0d05b30 upstream. Fixes issues with the size of the long type on 64-bit systems.
Signed-off-by: Malcolm Priestley <tvboxspy@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Malcolm Priestley authored
commit b4dc03af upstream. Fixes long warning messages from patch "[PATCH 08/14] staging: vt6656: 64 bit fixes: correct all type sizes".
Signed-off-by: Malcolm Priestley <tvboxspy@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Malcolm Priestley authored
commit 77304928 upstream. After this patch all BYTE/WORD/DWORD types can be replaced with the appropriate u8/u16/u32 sizes.
Signed-off-by: Malcolm Priestley <tvboxspy@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Malcolm Priestley authored
commit a552397d upstream. Replace variables affected by the size of long with u32.
Signed-off-by: Malcolm Priestley <tvboxspy@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Malcolm Priestley authored
commit ab1dd996 upstream. Calling RFbSetPower with a zero uCH value will cause an out-of-bounds array reference. This causes 64 bit kernels to oops on boot. Note: the driver does not function on 64 bit kernels and should be blacklisted on them.
Signed-off-by: Malcolm Priestley <tvboxspy@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Joe Thornber authored
commit b7ca9c92 upstream. Change existing users of the function dm_cell_release_singleton to share cell_defer_except instead, and then remove the now-unused function. Everywhere that calls dm_cell_release_singleton, the bio in question is the holder of the cell. If there are no non-holder entries in the cell then cell_defer_except behaves exactly like dm_cell_release_singleton. Conversely, if there *are* non-holder entries then dm_cell_release_singleton must not be used because those entries would need to be deferred. Consequently, it is safe to replace use of dm_cell_release_singleton with cell_defer_except. This patch is a pre-requisite for "dm thin: fix race between simultaneous io and discards to same block".
Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alasdair G Kergon authored
commit e910d7eb upstream. Abort dm ioctl processing if userspace changes the data_size parameter after we validated it but before we finished copying the data buffer from userspace. The dm ioctl parameters are processed in the following sequence:

  1. ctl_ioctl() calls copy_params();
  2. copy_params() makes a first copy of the fixed-sized portion of the userspace parameters into the local variable "tmp";
  3. copy_params() then validates tmp.data_size and allocates a new structure big enough to hold the complete data and copies the whole userspace buffer there;
  4. ctl_ioctl() reads userspace data the second time and copies the whole buffer into the pointer "param";
  5. ctl_ioctl() reads param->data_size without any validation and stores it in the variable "input_param_size";
  6. "input_param_size" is further used as the authoritative size of the kernel buffer.

The problem is that userspace code could change the contents of user memory between steps 2 and 4. In particular, the data_size parameter can be changed to an invalid value after the kernel has validated it. This lets userspace force the kernel to access invalid kernel memory. The fix is to ensure that the size has not changed at step 4. This patch shouldn't have a security impact because CAP_SYS_ADMIN is required to run this code, but it should be fixed anyway.
Reported-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
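
A self-contained C model of the double-fetch check described above (copy_in() stands in for copy_from_user(); everything here is a simplification, not the dm code): the size validated from the first, partial copy must match the size seen in the second, full copy before it is trusted.

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        struct params {
                uint32_t data_size;     /* total size claimed by the caller */
                char     data[60];
        };

        /* Stand-in for copy_from_user(): "user" memory could be modified by
         * another thread between our two reads. */
        static void copy_in(void *dst, const struct params *user_buf, size_t len)
        {
                memcpy(dst, user_buf, len);
        }

        static int ctl_ioctl(const struct params *user_buf)
        {
                struct params first, full;

                /* 1st fetch: just the fixed-size header, to learn data_size. */
                copy_in(&first, user_buf, sizeof(first.data_size));
                if (first.data_size < sizeof(first.data_size) ||
                    first.data_size > sizeof(struct params))
                        return -1;

                /* 2nd fetch: the whole buffer, using the validated size. */
                copy_in(&full, user_buf, first.data_size);

                /* The fix: abort if userspace changed data_size in between. */
                if (full.data_size != first.data_size)
                        return -1;

                return 0;
        }

        int main(void)
        {
                struct params p = { .data_size = sizeof(struct params) };

                printf("well-behaved caller: %d\n", ctl_ioctl(&p));
                return 0;
        }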
-
Mikulas Patocka authored
commit 550929fa upstream. This patch fixes a compilation failure on sparc32 by renaming struct node. struct node is already defined in include/linux/node.h. On sparc32, it happens to be included through other dependencies and persistent-data doesn't compile because of conflicting declarations.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Mike Snitzer authored
commit c1a94672 upstream. WRITE SAME bios are not yet handled correctly by device-mapper, so disable their use on device-mapper devices by setting max_write_same_sectors to zero. As an example, a ciphertext device is incompatible because the data gets changed according to the location at which it is written, and so the dm crypt target cannot support it.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
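
The fix itself amounts to a one-liner in the device-mapper queue-limits path; a hedged fragment (the surrounding function is omitted):

        /* device-mapper cannot pass WRITE SAME through safely yet,
         * so advertise no support for it on dm devices. */
        limits->max_write_same_sectors = 0;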
-
Tatyana Nikolova authored
commit 7bfcfa51 upstream. The terminate timer needs to be initialized just once.
Signed-off-by: Tatyana Nikolova <Tatyana.E.Nikolova@intel.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Signed-off-by: CAI Qian <caiqian@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Tatyana Nikolova authored
commit 7d9c199a upstream.
Signed-off-by: Tatyana Nikolova <Tatyana.E.Nikolova@intel.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Signed-off-by: CAI Qian <caiqian@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Daniel Vetter authored
commit 93927ca5 upstream. This partially reverts

    commit 6c085a72
    Author: Chris Wilson <chris@chris-wilson.co.uk>
    Date:   Mon Aug 20 11:40:46 2012 +0200

        drm/i915: Track unbound pages

Closer inspection of that patch revealed a bunch of unrelated changes in the shrinker:

  - The shrinker count is now in pages instead of objects.
  - For counting the shrinkable objects the old code only looked at the inactive list, the new code looks at all bound objects (including pinned ones). That is obviously in addition to the new unbound list.
  - The shrinker count is no longer scaled with sysctl_vfs_cache_pressure. Note though that with the default tuning value of vfs_cache_pressure = 100 this doesn't affect the shrinker behaviour.
  - When actually shrinking objects, the old code first dropped purgeable objects, then normal (inactive) objects. Only then did it, in a last-ditch effort, idle the gpu and evict everything. The new code omits the intermediate step of evicting normal inactive objects.

Save for the first change, which seems benign, and the shrinker count scaling, which is a bit of a different story, the end result of all these changes is that the shrinker is _much_ more likely to fall back to the last-ditch resort of idling the gpu and evicting everything. The old code could only do that if something else evicted lots of objects meanwhile (since without any other changes the nr_to_scan will be smaller than the object count).

Reverting the vfs_cache_pressure behaviour itself is a bit bogus: Only dentry/inode object caches should scale their shrinker counts with vfs_cache_pressure. Originally I've had that change reverted, too. But Chris Wilson insisted that it's too bogus and shouldn't again see the light of day. Hence revert all these other changes and restore the old shrinker behaviour, with the minor adjustment that we now first scan the unbound list, then the inactive list for each object category (purgeable or normal).

A similar patch has been tested by a few people affected by the gen4/5 hangs which started to appear in 3.7, which some people bisected to the "drm/i915: Track unbound pages" commit. But just disabling the unbound logic alone didn't change things at all. Note that this patch doesn't fix the referenced bugs, it only hides the underlying bug(s) well enough to restore pre-3.7 behaviour. The key to achieving that is to massively reduce the likelihood of going into a full gpu stall and evicting everything.

v2: Reword commit message a bit, taking Chris Wilson's comment into account.

v3: On Chris Wilson's insistence, do not reinstate the rather bogus vfs_cache_pressure change.

Tested-by: Greg KH <gregkh@linuxfoundation.org>
Tested-by: Dave Kleikamp <dave.kleikamp@oracle.com>
References: https://bugs.freedesktop.org/show_bug.cgi?id=55984
References: https://bugs.freedesktop.org/show_bug.cgi?id=57122
References: https://bugs.freedesktop.org/show_bug.cgi?id=56916
References: https://bugs.freedesktop.org/show_bug.cgi?id=57136
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Chris Wilson authored
commit 93be8788 upstream. As along the error path we do not correct the user pin-count for the failure, we may end up with userspace believing that it has a pinned object at offset 0 (when interrupted by a signal for example).
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Chris Wilson authored
commit 901593f2 upstream. Avoid clobbering adjacent blocks if they happen to expire earlier and amalgamate together to form the requested hole. In passing this fixes a regression from

    commit ea7b1dd4
    Author: Daniel Vetter <daniel.vetter@ffwll.ch>
    Date:   Fri Feb 18 17:59:12 2011 +0100

        drm: mm: track free areas implicitly

which swaps the end address for size (with a potential overflow) and effectively causes the eviction code to clobber almost all earlier buffers above the evictee.

v2: Check the original hole not the adjusted, as the coloring may confuse us when later searching for the overlapping nodes. Also make sure that we do apply the range restriction and color adjustment in the same order for both scanning, searching and insertion.

v3: Send the version that was actually tested.

Note that this seems to be duct tape of decent quality to paper over some of our unbind-related gpu hangs reported since 3.7. It is not fully effective though, and certainly doesn't fix the underlying bug.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
[danvet: Added note plus bugzilla link and tested-by.]
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=55984
Tested-by: Norbert Preining <preining@logic.at>
Acked-by: Dave Airlie <airlied@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Seung-Woo Kim authored
commit be8a42ae upstream. Increasing the ref counts of both the dma-buf and the gem object when importing a dma-buf that came from our own gem object causes a memory leak: the release function of the dma-buf cannot be called because its f_count was increased by the gem import, and the gem ref count cannot be decreased because of the exported dma-buf. So I add a dma_buf_put() for gem objects imported from their own dma-buf in each driver having prime_import and prime_export capabilities. With this, only the gem ref count is increased when importing a gem object that was exported by the same driver.
Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com>
Signed-off-by: Kyungmin.park <kyungmin.park@samsung.com>
Cc: Inki Dae <inki.dae@samsung.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Rob Clark <rob.clark@linaro.org>
Cc: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
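
A hedged sketch of the import-path pattern the commit describes (driver_dmabuf_ops is a placeholder name and real drivers differ in detail): when a driver is handed back a dma-buf it exported itself, it takes only a GEM reference and drops the extra dma-buf reference taken for the import.

        struct drm_gem_object *obj;

        if (dma_buf->ops == &driver_dmabuf_ops) {
                obj = dma_buf->priv;
                if (obj->dev == dev) {
                        drm_gem_object_reference(obj);
                        dma_buf_put(dma_buf);   /* drop the ref taken for the import */
                        return obj;
                }
        }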
-
Dave Airlie authored
commit 5b42427f upstream. As pointed out by Seung-Woo Kim, this should have been passing flags like nouveau/radeon have.
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Daniel Vetter authored
commit b0a2658a upstream. This piece of neat lore has been ported painstakingly and bug-for-bug compatible from the old crtc helper code. Imo it's utter nonsense. If you disconnect a cable and, before you reconnect it, userspace (or the kernel) does a set_crtc call, this will result in that connector getting disabled. Which will result in a nice black screen when plugging in the cable again. There's absolutely no reason the kernel does such policy enforcements - if userspace tries to set up a mode on something disconnected we might fail loudly (since the dp link training fails), but silently adjusting the output configuration behind userspace's back is a recipe for disaster. Specifically I think that this could explain some of our MI_WAIT hangs around suspend, where userspace issues a scanline wait on a disabled pipe. This mechanism here could explain how that pipe got disabled without userspace noticing. Note that this fixes a NULL deref at BIOS takeover when the firmware sets up a disconnected output in a clone configuration with a connected output on the 2nd pipe: When doing the full modeset we don't have a mode for the 2nd pipe and OOPS. On the first pipe this doesn't matter, since at boot-up the fbdev helpers will set up the chosen configuration on that one first. Since this is now the umpteenth bug around handling these imo brain-dead semantics correctly, I think it's time to kill it and see whether there's any userspace out there which relies on this. It also nicely demonstrates that we have a tiny window where DP hotplug can still kill the driver.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=58396
Tested-by: Peter Ujfalusi <peter.ujfalusi@gmail.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Chris Wilson authored
commit b4a98e57 upstream. If we accumulate unpin tasks because we are pageflipping faster than the system can schedule its workers, we can effectively create a pin-leak. The solution taken here is to limit the number of unpin tasks we have per-crtc and to flush those outstanding tasks if we accumulate too many. This should prevent any jitter in the normal case, and also prevent the hang if we should run too fast. Note: It is important that we switch from the system workqueue to our own dev_priv->wq, since all work items on that queue are guaranteed to only need the dev->struct_mutex and not any modeset resources. For otherwise if we have a work item ahead in the queue which needs the modeset lock (like the output detect work used by both polling or hpd), this work and so the unpin work will never execute since the pageflip code already holds that lock. Unfortunately there's no lockdep support for this scenario in the workqueue code.
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=46991
Reported-and-tested-by: Tvrtko Ursulin <tvrtko.ursulin@onelan.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Added note about workqueue deadlock.]
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=56337
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Tested-by: Daniel Gnoutcheff <daniel@gnoutcheff.name>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Chris Wilson authored
commit e7d841ca upstream. Before queuing the flip, but crucially after attaching the unpin-work to the crtc, we continue to set up the unpin-work. However, should the hardware fire early, we see the connected unpin-work and queue the task. The task then promptly runs and unpins the fb before we finish taking the required references or even pinning it... Havoc. To close the race, we use the flip-pending atomic to indicate when the flip is finally set up and enqueued. So during the flip-done processing, we can check more accurately whether the flip was expected.

v2: Add the appropriate mb() to ensure that the writes to the page-flip worker are complete prior to marking it active and emitting the MI_FLIP. On the read side, the mb should be enforced by the spinlocks.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Review the barriers a bit, we need a write barrier both before and after updating ->pending. Similarly we need a read barrier in the interrupt handler both before and after reading ->pending. With well-ordered irqs only one barrier in each place should be required, but since this patch explicitly sets out to combat spurious interrupts with its staged activation of the unpin work we need to go full-bore on the barriers, too. Discussed with Chris Wilson on irc and changes acked by him.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-