- 19 Feb, 2015 3 commits
-
-
Ville Syrjälä authored
commit 7d47559e upstream. The flip stall detector kicks in when pending>=INTEL_FLIP_COMPLETE. That means if we first call intel_prepare_page_flip() but don't call intel_finish_page_flip(), the next stall check will erroneously think the page flip was somehow stuck. With enough debug spew emitted from the interrupt handler my 830 hangs when this happens. My theory is that the previous vblank interrupt gets sufficiently delayed that the handler will see the pending bit set in IIR, but ISR still has the bit set as well (ie. the flip was processed by CS but didn't complete yet). In this case the handler will proceed to call intel_check_page_flip() immediately after intel_prepare_page_flip(). It then tries to print a backtrace for the stuck flip WARN, which apparently results in way too much debug spew delaying interrupt processing further. That then seems to cause an endless loop in the interrupt handler, and the machine is dead until the watchdog kicks in and reboots. At least limiting the number of iterations of the loop in the interrupt handler also prevented the hang. So it seems better to not call intel_prepare_page_flip() without immediately calling intel_finish_page_flip(). The IIR/ISR trickery avoids races here so this is a perfectly safe thing to do. v2: Fix typo in commit message (checkpatch) Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=88381 Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=85888 Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
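In sketch form, the pairing described above looks roughly like this (approximate names, not the literal patch; read_isr() and flip_pending_bit stand in for the real register access and bit):

    /* Sketch of gen2-style flip handling in the interrupt handler:
     * IIR says a flip-pending interrupt fired, but only treat the
     * flip as done if ISR has also cleared the pending bit. */
    if (iir & flip_pending_bit) {
            if (read_isr() & flip_pending_bit) {
                    /* CS has issued the flip but it hasn't completed yet:
                     * don't prepare, just let the stall check run. */
                    intel_check_page_flip(dev, pipe);
            } else {
                    /* Flip really completed: prepare and finish together. */
                    intel_prepare_page_flip(dev, plane);
                    intel_finish_page_flip(dev, pipe);
            }
    }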
-
Ville Syrjälä authored
commit 38af6096 upstream. Looks like 830M doesn't quite like it when you try to move a plane from one pipe to another. It seems that the plane's old pipe has to be active even if the plane is already disabled, otherwise the relevant register just won't accept new values. The following commit: commit 1f1c2e24 Author: Ville Syrjälä <ville.syrjala@linux.intel.com> Date: Thu Nov 28 17:30:01 2013 +0200 drm/i915: Swap primary planes on gen2 for FBC caused a regression on 830M. It will attempt to swap the planes when the driver is loaded, but at that time only pipe A might be active, so plane A gets disabled, but plane B won't get enabled since pipe B is not active when we try to move the plane over to pipe A. There's no reason to swap planes on 830M since it doesn't support FBC. Change the logic a bit to limit the plane swapping to platforms which actually support FBC. This should avoid getting a black screen on 830M. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Ville Syrjälä authored
commit 1f1c2e24 upstream. Only plane A is FBC capable on gen2 (like gen3), but the panel fitter is hooked up to pipe B, so we want to prefer pipe B + plane A. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> [danvet: Add the code comment Chris requested in his review.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> [ kamal: 3.13-stable prereq for: 7d47559e "drm/i915: Don't call intel_prepare_page_flip() multiple times on gen2-4" ] Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
- 18 Feb, 2015 14 commits
-
-
Steev Klimaszewski authored
commit 007487f1 upstream. Currently we enable Exynos devices in the multi v7 defconfig; however, when testing on my ODROID-U3, I noticed that USB was not working. Enabling this option causes USB to work, which enables networking support as well since the ODROID-U3 has networking on the USB bus. [arnd] Support for odroid-u3 was added in 3.10, so it would be nice to backport this fix at least that far. Signed-off-by: Steev Klimaszewski <steev@gentoo.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Tomi Valkeinen authored
commit 30ea9c52 upstream. fb_deferred_io_fsync() returns the value of schedule_delayed_work() as an error code, but schedule_delayed_work() does not return an error. It returns true/false depending on whether the work was already queued. Fix this by ignoring the return value of schedule_delayed_work(). Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
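A minimal sketch of the described fix, assuming the standard fb_info deferred-io fields (the flush call here is illustrative, not a claim about the exact patch):

    /* Kick the deferred-io work; ignore the bool return of
     * schedule_delayed_work() (it only says whether the work was
     * already queued, it is not an error code). */
    schedule_delayed_work(&info->deferred_work, 0);
    flush_delayed_work(&info->deferred_work);   /* illustrative: wait for the write-out */
    return 0;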
-
Tomi Valkeinen authored
commit 92b004d1 upstream. If the probe of an fb driver has been deferred due to missing dependencies, and the probe is later run when a module is loaded, the fbdev framework will try to find a logo to use. However, the logos are __initdata, and have already been freed. This sometimes causes page faults, if the logo memory is not mapped; sometimes other random crashes, as the logo data is invalid; and sometimes nothing, if the fbdev decides to reject the logo (e.g. the random value depicting the logo's height is too big). This patch adds a late_initcall function to mark the logos as freed. In reality the logos are freed later, and fbdev probe may be run between this late_initcall and the freeing of the logos. In that case we will miss drawing the logo, even if it would be possible. Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Andrew Jackson authored
commit 3475c3d0 upstream. Flush the FIFOs when the stream is prepared for use. This avoids an inadvertent swapping of the left/right channels if the FIFOs are not empty at startup. Signed-off-by: Andrew Jackson <Andrew.Jackson@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Rabin Vincent authored
commit 7e77bdeb upstream. If a request is backlogged, its complete() handler will get called twice: once with -EINPROGRESS, and once with the final error code. af_alg's complete handler, unlike other users, does not handle the -EINPROGRESS but instead always completes the completion that recvmsg() is waiting on. This can lead to a return to user space while the request is still pending in the driver. If userspace closes the sockets before the requests are handled by the driver, this will lead to use-after-frees (and potential crashes) in the kernel due to the tfm having been freed. The crashes can be easily reproduced (for example) by reducing the max queue length in cryptd.c and running the following (from http://www.chronox.de/libkcapi.html) on AES-NI capable hardware: $ while true; do kcapi -x 1 -e -c '__ecb-aes-aesni' \ -k 00000000000000000000000000000000 \ -p 00000000000000000000000000000000 >/dev/null & done Signed-off-by: Rabin Vincent <rabin.vincent@axis.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
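The pattern being fixed looks roughly like this (a sketch of the callback shape, not the exact patch): the completion callback has to treat -EINPROGRESS as a "request was backlogged and is now being processed" notification rather than as the final result.

    static void af_alg_complete(struct crypto_async_request *req, int err)
    {
            struct af_alg_completion *completion = req->data;

            /* Backlog notification only; the real completion follows later. */
            if (err == -EINPROGRESS)
                    return;

            completion->err = err;
            complete(&completion->completion);
    }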
-
Jan Kara authored
commit e237ec37 upstream. Check that length specified in a component of a symlink fits in the input buffer we are reading. Also properly ignore component length for component types that do not use it. Otherwise we read memory after end of buffer for corrupted udf image. Reported-by: Carl Henrik Lunde <chlunde@ping.uio.no> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Jan Kara authored
commit 0e5cc9a4 upstream. Symlink reading code does not check whether the resulting path fits into the page provided by the generic code. This isn't as easy as just checking the symlink size because of various encoding conversions we perform on path. So we have to check whether there is still enough space in the buffer on the fly. Reported-by: Carl Henrik Lunde <chlunde@ping.uio.no> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
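Conceptually, the on-the-fly check amounts to something like this (variable names and the conversion helper are hypothetical, for illustration only): before appending each converted path component, verify that it still fits in the page-sized output buffer.

    /* Sketch: 'tolen' tracks remaining space in the output page. */
    comp_len = convert_component(pc, p);       /* hypothetical helper */
    if (comp_len > tolen)
            return -ENAMETOOLONG;              /* would overflow the page */
    tolen -= comp_len;
    p += comp_len;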
-
Jan Kara authored
commit a1d47b26 upstream. UDF specification allows arbitrarily large symlinks. However we support only symlinks at most one block large. Check the length of the symlink so that we don't access memory beyond end of the symlink block. Reported-by: Carl Henrik Lunde <chlunde@gmail.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Jan Kara authored
commit e159332b upstream. Verify that inode size is sane when loading inode with data stored in ICB. Otherwise we may get confused later when working with the inode and inode size is too big. Reported-by: Carl Henrik Lunde <chlunde@ping.uio.no> Signed-off-by: Jan Kara <jack@suse.cz> [ luis: backported to 3.16: - Adjusted exit paths as commit 6d3d5e86 ("udf: Make udf_read_inode() and udf_iget() return error") is not present in 3.16 kernel ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Paolo Bonzini authored
commit a629df7e upstream. Since most virtual machines raise this message once, it is a bit annoying. Make it KERN_DEBUG severity. Fixes: 7a2e8aaf Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
John David Anglin authored
commit 45db0738 upstream. The __ldcw macro has a problem when its argument needs to be reloaded from memory. The output memory operand and the input register operand both need to be reloaded using a register in class R1_REGS when generating 64-bit code. This fails because there's only a single register in the class. Instead, use a memory clobber. This also makes the __ldcw macro a compiler memory barrier. Signed-off-by: John David Anglin <dave.anglin@bell.net> Signed-off-by: Helge Deller <deller@gmx.de> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
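The resulting macro is roughly of this shape (a sketch reconstructed from the description; the exact parisc assembly and operand constraints may differ slightly): the word being operated on is no longer described as an output operand needing R1_REGS, and the "memory" clobber makes the macro a compiler barrier.

    #define __ldcw(a)                                               \
    ({                                                              \
            unsigned __ret;                                         \
            __asm__ __volatile__("ldcw 0(%1),%0"                    \
                                 : "=r" (__ret)                     \
                                 : "r" (a)                          \
                                 : "memory");                       \
            __ret;                                                  \
    })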
-
Richard Guy Briggs authored
commit 041d7b98 upstream. A regression was caused by commit 780a7654: audit: Make testing for a valid loginuid explicit. (which in turn attempted to fix a regression caused by e1760bd5) When audit_krule_to_data() fills in the rules to get a listing, there was a missing clause to convert back from AUDIT_LOGINUID_SET to AUDIT_LOGINUID. This broke userspace by not returning the same information that was sent and expected. The rule: auditctl -a exit,never -F auid=-1 gives: auditctl -l LIST_RULES: exit,never f24=0 syscall=all when it should give: LIST_RULES: exit,never auid=-1 (0xffffffff) syscall=all Tag it so that it is reported the same way it was set. Create a new private flags audit_krule field (pflags) to store it that won't interact with the public one from the API. Signed-off-by: Richard Guy Briggs <rgb@redhat.com> Signed-off-by: Paul Moore <pmoore@redhat.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Chris Wilson authored
commit add284a3 upstream. In order to act as a full command barrier by itself, we need to tell the pipecontrol to actually stall the command streamer while the flush runs. We require the full command barrier before operations like MI_SET_CONTEXT, which currently rely on a prior invalidate flush. References: https://bugs.freedesktop.org/show_bug.cgi?id=83677 Cc: Simon Farnsworth <simon@farnz.org.uk> Cc: Daniel Vetter <daniel@ffwll.ch> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Chris Wilson authored
commit 148b83d0 upstream. In the gen7 pipe control there is an extra bit to flush the media caches, so let's set it during cache invalidation flushes. v2: Rename to MEDIA_STATE_CLEAR to be more inline with spec. Cc: Simon Farnsworth <simon@farnz.org.uk> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
- 10 Feb, 2015 1 commit
-
-
Mathias Krause authored
The backport of commit 5d26a105 ("crypto: prefix module autoloading with "crypto-"") lost the MODULE_ALIAS_CRYPTO() annotation of crc32c.c. Add it to fix the reported filesystem related regressions. Signed-off-by: Mathias Krause <minipli@googlemail.com> Reported-by: Philip Müller <philm@manjaro.org> Cc: Kees Cook <keescook@chromium.org> Cc: Rob McCathie <rob@manjaro.org> Cc: Luis Henriques <luis.henriques@canonical.com> Cc: Kamal Mostafa <kamal@canonical.com> Cc: Jiri Slaby <jslaby@suse.cz> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
- 09 Feb, 2015 3 commits
-
-
Kees Cook authored
commit 4943ba16 upstream. This adds the module loading prefix "crypto-" to the template lookup as well. For example, attempting to load 'vfat(blowfish)' via AF_ALG now correctly includes the "crypto-" prefix at every level, correctly rejecting "vfat": net-pf-38 algif-hash crypto-vfat(blowfish) crypto-vfat(blowfish)-all crypto-vfat Reported-by: Mathias Krause <minipli@googlemail.com> Signed-off-by: Kees Cook <keescook@chromium.org> Acked-by: Mathias Krause <minipli@googlemail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Mathias Krause authored
commit 3e14dcf7 upstream. Commit 5d26a105 ("crypto: prefix module autoloading with "crypto-"") changed the automatic module loading when requesting crypto algorithms to prefix all module requests with "crypto-". This requires all crypto modules to have a crypto specific module alias even if their file name would otherwise match the requested crypto algorithm. Even though commit 5d26a105 added those aliases for a vast amount of modules, it was missing a few. Add the required MODULE_ALIAS_CRYPTO annotations to those files to make them get loaded automatically, again. This fixes, e.g., requesting 'ecb(blowfish-generic)', which used to work with kernels v3.18 and below. Also change MODULE_ALIAS() lines to MODULE_ALIAS_CRYPTO(). The former won't work for crypto modules any more. Fixes: 5d26a105 ("crypto: prefix module autoloading with "crypto-"") Cc: Kees Cook <keescook@chromium.org> Signed-off-by: Mathias Krause <minipli@googlemail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Kees Cook authored
commit 5d26a105 upstream. This prefixes all crypto module loading with "crypto-" so we never run the risk of exposing module auto-loading to userspace via a crypto API, as demonstrated by Mathias Krause: https://lkml.org/lkml/2013/3/4/70 Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
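For illustration, a module implementing an algorithm declares the crypto-specific alias like this; roughly speaking, the macro records a "crypto-"-prefixed module alias (and, depending on kernel version, the unprefixed name as well), so request_module("crypto-crc32c") resolves while arbitrary user-controlled strings no longer load unrelated modules:

    /* In the module that provides the "crc32c" algorithm: */
    MODULE_ALIAS_CRYPTO("crc32c");
    /* roughly equivalent to also providing: MODULE_ALIAS("crypto-crc32c") */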
-
- 03 Feb, 2015 1 commit
-
-
Kamal Mostafa authored
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
- 29 Jan, 2015 18 commits
-
-
Thomas Gleixner authored
commit a5fd9733 upstream. commit 4dbd2771 "tick: export nohz tick idle symbols for module use" was merged via the thermal tree without an explicit ack from the relevant maintainers. The exports are abused by the intel powerclamp driver which implements a fake idle state from a sched FIFO task. This causes all kinds of wreckage in the NOHZ core code which rightfully assumes that tick_nohz_idle_enter/exit() are only called from the idle task itself. Recent changes in the NOHZ core lead to a failure of the powerclamp driver and now people try to hack completely broken and backwards workarounds into the NOHZ core code. This is completely unacceptable and just papers over the real problem. There are way more subtle issues lurking around the corner. The real solution is to fix the powerclamp driver by rewriting it with a sane concept, but that's beyond the scope of this. So the only solution for now is to remove the calls into the core NOHZ code from the powerclamp trainwreck along with the exports. Fixes: d6d71ee4 "PM: Introduce Intel PowerClamp Driver" Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Preeti U Murthy <preeti@linux.vnet.ibm.com> Cc: Viresh Kumar <viresh.kumar@linaro.org> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Fengguang Wu <fengguang.wu@intel.com> Cc: Frederic Weisbecker <frederic@kernel.org> Cc: Pan Jacob jun <jacob.jun.pan@intel.com> Cc: LKP <lkp@01.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Zhang Rui <rui.zhang@intel.com> Link: http://lkml.kernel.org/r/alpine.DEB.2.11.1412181110110.17382@nanos Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Junxiao Bi authored
commit 136f49b9 upstream. For a buffered write, the page lock is taken in write_begin and released in write_end. In ocfs2_write_end_nolock(), before it unlocks the page in ocfs2_free_write_ctxt(), it calls ocfs2_run_deallocs(), which asks for the read lock of journal->j_trans_barrier. Holding the page lock while asking for journal->j_trans_barrier breaks the locking order. This can cause a deadlock with the journal commit threads: ocfs2cmt first takes the write lock of journal->j_trans_barrier, then wakes up kjournald2 to do the commit work, and finally waits until it is done. To commit the journal, kjournald2 needs to flush data first, which requires taking the cache page lock. Since some ocfs2 cluster locks are held by the write process, this deadlock may hang the whole cluster. Unlocking the pages before ocfs2_run_deallocs() fixes the locking order; the unlock is also placed before ocfs2_commit_trans() so that the page lock is released before j_trans_barrier, preserving the unlocking order. Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com> Reviewed-by: Wengang Wang <wen.gang.wang@oracle.com> Reviewed-by: Mark Fasheh <mfasheh@suse.de> Cc: Joel Becker <jlbec@evilplan.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Eric W. Biederman authored
commit c297abfd upstream. While reviewing the code of umount_tree I realized that when we append to a preexisting unmounted list we do not change pprev of the former first item in the list. Which means later in namespace_unlock hlist_del_init(&mnt->mnt_hash) on the former first item of the list will stomp unmounted.first leaving it set to some random mount point which we are likely to free soon. This isn't likely to hit, but if it does I don't know how anyone could track it down. [ This happened because we don't have all the same operations for hlist's as we do for normal doubly-linked lists. In particular, list_splice() is easy on our standard doubly-linked lists, while hlist_splice() doesn't exist and needs both start/end entries of the hlist. And commit 38129a13 incorrectly open-coded that missing hlist_splice(). We should think about making these kinds of "mindless" conversions easier to get right by adding the missing hlist helpers - Linus ] Fixes: 38129a13 switch mnt_hash to hlist Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
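The missing helper mentioned in the bracketed note would look something like this hypothetical sketch; the key step the open-coded version dropped is updating the old first element's pprev when splicing a chain onto the head:

    /* Hypothetical helper: splice the chain first..last onto head. */
    static inline void hlist_splice(struct hlist_node *first,
                                    struct hlist_node *last,
                                    struct hlist_head *head)
    {
            last->next = head->first;
            if (head->first)
                    head->first->pprev = &last->next;  /* the forgotten update */
            head->first = first;
            first->pprev = &head->first;
    }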
-
Jiri Jaburek authored
commit d70a1b98 upstream. The Arcam rPAC seems to have the same problem - whenever anything (alsamixer, udevd, 3.9+ kernel from 60af3d03, ..) attempts to access mixer / control interface of the card, the firmware "locks up" the entire device, resulting in SNDRV_PCM_IOCTL_HW_PARAMS failed (-5): Input/output error from alsa-lib. Other operating systems can somehow read the mixer (there seems to be playback volume/mute), but any manipulation is ignored by the device (which has hardware volume controls). Signed-off-by: Jiri Jaburek <jjaburek@redhat.com> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Andy Lutomirski authored
commit 3fb2f423 upstream. It turns out that there's a lurking ABI issue. GCC, when compiling this in a 32-bit program: struct user_desc desc = { .entry_number = idx, .base_addr = base, .limit = 0xfffff, .seg_32bit = 1, .contents = 0, /* Data, grow-up */ .read_exec_only = 0, .limit_in_pages = 1, .seg_not_present = 0, .useable = 0, }; will leave .lm uninitialized. This means that anything in the kernel that reads user_desc.lm for 32-bit tasks is unreliable. Revert the .lm check in set_thread_area(). The value never did anything in the first place. Fixes: 0e58af4e ("x86/tls: Disallow unusual TLS segments") Signed-off-by: Andy Lutomirski <luto@amacapital.net> Acked-by: Thomas Gleixner <tglx@linutronix.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: http://lkml.kernel.org/r/d7875b60e28c512f6a6fc0baf5714d58e7eaadbb.1418856405.git.luto@amacapital.net Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Dan Carpenter authored
commit 021b77be upstream. Probably this code was syncing a lot more often than intended because the do_sync variable wasn't set to zero. Fixes: c62988ec ('ceph: avoid meaningless calling ceph_caps_revoking if sync_mode == WB_SYNC_ALL.') Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Ilya Dryomov <idryomov@redhat.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Johannes Berg authored
commit 28a9bc68 upstream. When writing the code to allow per-station GTKs, I neglected to take into account the management frame keys (index 4 and 5) when freeing the station and only added code to free the first four data frame keys. Fix this by iterating the array of keys over the right length. Fixes: e31b8213 ("cfg80211/mac80211: allow per-station GTKs") Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
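In shape, the fix iterates over every per-station key slot rather than only the data-frame ones (the free helper here is hypothetical; the slot counts follow mac80211's 4 data-frame plus 2 management-frame defaults):

    /* Sketch: walk all GTK slots, data and management frame keys alike. */
    for (i = 0; i < NUM_DEFAULT_KEYS + NUM_DEFAULT_MGMT_KEYS; i++) {
            key = rcu_dereference_protected(sta->gtk[i], true);
            if (key)
                    free_sta_key(key);      /* hypothetical free helper */
    }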
-
Nicholas Bellinger authored
commit 6bf6ca75 upstream. This patch changes iscsit_do_tx_data() to fail on short writes when kernel_sendmsg() returns a value different than requested transfer length, returning -EPIPE and thus causing a connection reset to occur. This avoids a potential bug in the original code where a short write would result in kernel_sendmsg() being called again with the original iovec base + length. In practice this has not been an issue because iscsit_do_tx_data() is only used for transferring 48 byte headers + 4 byte digests, along with seldom used control payloads from NOPIN + TEXT_RSP + REJECT with less than 32k of data. So following Al's audit of iovec consumers, go ahead and fail the connection on short writes for now, and remove the bogus logic ahead of his proper upstream fix. Reported-by: Al Viro <viro@zeniv.linux.org.uk> Cc: David S. Miller <davem@davemloft.net> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
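The resulting behavior is roughly as follows (a sketch with approximate local names): any kernel_sendmsg() return that differs from the requested length is treated as a fatal, connection-resetting error rather than retried with the stale iovec.

    tx_sent = kernel_sendmsg(conn->sock, &msg, iov, iov_count, length);
    if (tx_sent != length) {
            pr_err("kernel_sendmsg() returned %d, expected %d bytes\n",
                   tx_sent, length);
            return -EPIPE;          /* force a connection reset */
    }
    return length;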
-
Larry Finger authored
commit 9a1dce3a upstream. The setting of this flag was missed in previous modifications. Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Long Li authored
commit e86fb5e8 upstream. When ring buffer returns an error indicating retry, storvsc may not return a proper error code to SCSI when bounce buffer is not used. This has introduced I/O freeze on RAID running atop storvsc devices. This patch fixes it by always returning a proper error code. Signed-off-by: Long Li <longli@microsoft.com> Reviewed-by: K. Y. Srinivasan <kys@microsoft.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Martin K. Petersen authored
commit 198a956a upstream. The Microsoft iSCSI target does not support REPORT SUPPORTED OPERATION CODES. Blacklist these devices so we don't attempt to send the command. Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Tested-by: Mike Christie <michaelc@cs.wisc.edu> Reported-by: jazz@deti74.ru Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Vineet Gupta authored
commit e8ef060b upstream. This allows the sdplite/Zebu images to run on OSCI simulation platform Signed-off-by: Vineet Gupta <vgupta@synopsys.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Paul Mackerras authored
commit 8117ac6a upstream. Currently, when going idle, we set the flag indicating that we are in nap mode (paca->kvm_hstate.hwthread_state) and then execute the nap (or sleep or rvwinkle) instruction, all with the MMU on. This is bad for two reasons: (a) the architecture specifies that those instructions must be executed with the MMU off, and in fact with only the SF, HV, ME and possibly RI bits set, and (b) this introduces a race, because as soon as we set the flag, another thread can switch the MMU to a guest context. If the race is lost, this thread will typically start looping on relocation-on ISIs at 0xc...4400. This fixes it by setting the MSR as required by the architecture before setting the flag or executing the nap/sleep/rvwinkle instruction. [ shreyas@linux.vnet.ibm.com: Edited to handle LE ] Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Shreyas B. Prabhu <shreyas@linux.vnet.ibm.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: linuxppc-dev@lists.ozlabs.org Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Andy Lutomirski authored
commit 0e58af4e upstream. Users have no business installing custom code segments into the GDT, and segments that are not present but are otherwise valid are a historical source of interesting attacks. For completeness, block attempts to set the L bit. (Prior to this patch, the L bit would have been silently dropped.) This is an ABI break. I've checked glibc, musl, and Wine, and none of them look like they'll have any trouble. Note to stable maintainers: this is a hardening patch that fixes no known bugs. Given the possibility of ABI issues, this probably shouldn't be backported quickly. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Acked-by: H. Peter Anvin <hpa@zytor.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: security@kernel.org <security@kernel.org> Cc: Willy Tarreau <w@1wt.eu> Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Thomas Gleixner authored
commit c291ee62 upstream. Since the rework of the sparse interrupt code to actually free the unused interrupt descriptors there exists a race between the /proc interfaces to the irq subsystem and the code which frees the interrupt descriptor. The race (originally shown as a two-column CPU0/CPU1 diagram): one CPU runs show_interrupts() and looks up desc = irq_to_desc(X); meanwhile the other CPU runs free_desc(desc), which does remove_from_radix_tree() and kfree(desc); the first CPU then takes raw_spinlock_irq(&desc->lock) on the freed memory. /proc/interrupts is the only interface which can actively corrupt kernel memory via the lock access. /proc/stat can only read from freed memory. Extremely hard to trigger, but possible. The interfaces in /proc/irq/N/ are not affected by this because the removal of the proc file is serialized in procfs against concurrent readers/writers. The removal happens before the descriptor is freed. For architectures which have CONFIG_SPARSE_IRQ=n this is a non issue as the descriptor is never freed. It's merely cleared out with the irq descriptor lock held. So any concurrent proc access will either see the old correct value or the cleared out ones. Protect the lookup and access to the irq descriptor in show_interrupts() with the sparse_irq_lock. Provide kstat_irqs_usr() which is protecting the lookup and access with sparse_irq_lock and switch /proc/stat to use it. Document the existing kstat_irqs interfaces so it's clear that the caller needs to take care about protection. The users of these interfaces are either not affected due to SPARSE_IRQ=n or already protected against removal. Fixes: 1f5a5b87 "genirq: Implement a sane sparse_irq allocator" Signed-off-by: Thomas Gleixner <tglx@linutronix.de> [ kamal: backport to 3.13-stable: context ] Signed-off-by: Kamal Mostafa <kamal@canonical.com>
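A sketch of the protected lookup helper this describes (the locking primitive names follow the sparse-irq code; treat the details as approximate):

    unsigned int kstat_irqs_usr(unsigned int irq)
    {
            unsigned int sum;

            irq_lock_sparse();      /* keep free_desc() from racing the lookup */
            sum = kstat_irqs(irq);
            irq_unlock_sparse();

            return sum;
    }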
-
Sagi Grimberg authored
commit b02efbfc upstream. In situations such as bond failover, The new session establishment implicitly invokes the termination of the old connection. So, we don't want to wait for the old connection wait_conn to completely terminate before we accept the new connection and post a login response. The solution is to defer the comp_wait completion and the conn_put to a work so wait_conn will effectively be non-blocking (flush errors are assumed to come very fast). We allocate isert_release_wq with WQ_UNBOUND and WQ_UNBOUND_MAX_ACTIVE to spread the concurrency of release works. Reported-by: Slava Shwartsman <valyushash@gmail.com> Signed-off-by: Sagi Grimberg <sagig@mellanox.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
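The workqueue setup and deferral amount to roughly this sketch (the release_work field and work function are assumed names for illustration):

    /* Unbound workqueue so many connection-release works can run in parallel. */
    isert_release_wq = alloc_workqueue("isert_release_wq", WQ_UNBOUND,
                                       WQ_UNBOUND_MAX_ACTIVE);
    if (!isert_release_wq)
            return -ENOMEM;

    /* ... later, instead of blocking in wait_conn: */
    INIT_WORK(&isert_conn->release_work, isert_release_work);  /* hypothetical fn */
    queue_work(isert_release_wq, &isert_conn->release_work);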
-
Sagi Grimberg authored
commit ca6c1d82 upstream. The np listener cm_id will also get ADDR_CHANGE event upcall (in case it is bound to a specific IP). Handle it correctly by creating a new cm_id and implicitly destroy the old one. Since this is the second event a listener np cm_id may encounter, we move the np cm_id event handling to a routine. Squashed: iser-target: Move cma_id setup to a function Reported-by: Slava Shwartsman <valyushash@gmail.com> Signed-off-by: Sagi Grimberg <sagig@mellanox.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Sagi Grimberg authored
commit 19e2090f upstream. Take isert_conn pointer from cm_id->qp->qp_context. This will allow us to know that the cm_id context is always the network portal. This will make the cm_id event check (connection or network portal) more reliable. In order to avoid a NULL dereference in cma_id->qp->qp_context we destroy the qp after we destroy the cm_id (and make the dereference safe). Session establishment/teardown sequences can happen in parallel, so we should take into account that connected_handler might race with the connection teardown flow. Also, protect the isert_conn->conn_device->active_qps decrement within the error path during QP creation failure and the normal teardown path in isert_connect_release(). Squashed: iser-target: Decrement completion context active_qps in error flow Signed-off-by: Sagi Grimberg <sagig@mellanox.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-