- 05 Aug, 2019 4 commits
-
-
Sean Nyekjaer authored
BugLink: https://bugs.launchpad.net/bugs/1838467

[ Upstream commit 0df82dcd ]

Fully compatible with the mcp2515, the mcp25625 has an integrated transceiver. This patch adds the mcp25625 to the device tree bindings documentation.

Signed-off-by: Sean Nyekjaer <sean@geanix.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Connor Kuehl <connor.kuehl@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
-
Thomas Pedersen authored
BugLink: https://bugs.launchpad.net/bugs/1838467

[ Upstream commit 55184244 ]

ifmsh->csa is an RCU-protected pointer. The writer context in ieee80211_mesh_finish_csa() is already mutually exclusive with wdev->sdata.mtx, but the RCU checker did not know this. Use rcu_dereference_protected() to avoid a warning.

Fixes the following warning:

    [ 12.519089] =============================
    [ 12.520042] WARNING: suspicious RCU usage
    [ 12.520652] 5.1.0-rc7-wt+ #16 Tainted: G        W
    [ 12.521409] -----------------------------
    [ 12.521972] net/mac80211/mesh.c:1223 suspicious rcu_dereference_check() usage!
    [ 12.522928] other info that might help us debug this:
    [ 12.523984] rcu_scheduler_active = 2, debug_locks = 1
    [ 12.524855] 5 locks held by kworker/u8:2/152:
    [ 12.525438] #0: 00000000057be08c ((wq_completion)phy0){+.+.}, at: process_one_work+0x1a2/0x620
    [ 12.526607] #1: 0000000059c6b07a ((work_completion)(&sdata->csa_finalize_work)){+.+.}, at: process_one_work+0x1a2/0x620
    [ 12.528001] #2: 00000000f184ba7d (&wdev->mtx){+.+.}, at: ieee80211_csa_finalize_work+0x2f/0x90
    [ 12.529116] #3: 00000000831a1f54 (&local->mtx){+.+.}, at: ieee80211_csa_finalize_work+0x47/0x90
    [ 12.530233] #4: 00000000fd06f988 (&local->chanctx_mtx){+.+.}, at: ieee80211_csa_finalize_work+0x51/0x90

Signed-off-by: Thomas Pedersen <thomas@eero.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Connor Kuehl <connor.kuehl@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
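For reference, a minimal sketch of the locking pattern this change relies on: the writer passes a lockdep expression to rcu_dereference_protected() so the RCU checker knows which lock serializes updates. The structure and field names below are illustrative only, not the actual mac80211 code:

    #include <linux/rcupdate.h>
    #include <linux/lockdep.h>
    #include <linux/mutex.h>
    #include <linux/slab.h>

    /* Illustrative types only; not the real mac80211 structures. */
    struct csa_settings;
    struct mesh_state {
            struct csa_settings __rcu *csa;  /* RCU-protected pointer       */
            struct mutex mtx;                /* writer-side lock (wdev mtx) */
    };

    /* Writer context: caller already holds ms->mtx. */
    static void finish_csa(struct mesh_state *ms)
    {
            struct csa_settings *tmp;

            /*
             * rcu_dereference_protected() documents the lock that makes this
             * access safe, so the RCU checker no longer flags it as suspicious.
             */
            tmp = rcu_dereference_protected(ms->csa, lockdep_is_held(&ms->mtx));
            RCU_INIT_POINTER(ms->csa, NULL);

            synchronize_rcu();   /* wait for readers before freeing */
            kfree(tmp);
    }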
-
Chang-Hsien Tsai authored
BugLink: https://bugs.launchpad.net/bugs/1838467

[ Upstream commit f7c2d64b ]

If the trace for read is larger than 4096, the return value sz will be 4096. This results in an off-by-one error on buf:

    static char buf[4096];
    ssize_t sz;

    sz = read(trace_fd, buf, sizeof(buf));
    if (sz > 0) {
        buf[sz] = 0;
        puts(buf);
    }

Signed-off-by: Chang-Hsien Tsai <luke.tw@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Connor Kuehl <connor.kuehl@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
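A small standalone illustration of the fix implied above: reserve one byte for the terminator so buf[sz] can never index past the end of the array. The trace file path here is only an example, not taken from the sample in question:

    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>

    int main(void)
    {
            static char buf[4096];
            ssize_t sz;
            /* Example path only; the real sample reads an ftrace pipe fd. */
            int trace_fd = open("/sys/kernel/debug/tracing/trace_pipe", O_RDONLY);

            if (trace_fd < 0)
                    return 1;

            /* Read at most sizeof(buf) - 1 so the NUL terminator always fits. */
            sz = read(trace_fd, buf, sizeof(buf) - 1);
            if (sz > 0) {
                    buf[sz] = '\0';   /* now guaranteed to be in bounds */
                    puts(buf);
            }

            close(trace_fd);
            return 0;
    }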
-
Aaron Ma authored
BugLink: https://bugs.launchpad.net/bugs/1838467

[ Upstream commit aa440de3 ]

Adding 2 new touchpad PNPIDs to enable middle button support.

Signed-off-by: Aaron Ma <aaron.ma@canonical.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Connor Kuehl <connor.kuehl@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
-
- 01 Aug, 2019 8 commits
-
-
Kleber Sacilotto de Souza authored
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Thomas Gleixner authored
Intel provided the following information:

    On all current Atom processors, instructions that use a segment register value (e.g. a load or store) will not speculatively execute before the last writer of that segment retires. Thus they will not use a speculatively written segment value.

That means on ATOMs there is no speculation through SWAPGS, so the SWAPGS entry paths can be excluded from the extra LFENCE if PTI is disabled.

Create a separate bug flag for the through SWAPGS speculation and mark all out-of-order ATOMs and AMD/HYGON CPUs as not affected. The in-order ATOMs are excluded from the whole mitigation mess anyway.

Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Tyler Hicks <tyhicks@canonical.com>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>

CVE-2019-1125

(backported from commit f36cf386)
[tyhicks: Backport to Xenial:
 - Dropped VULNWL_HYGON() change since this kernel version doesn't know about Hygon processors
 - Rename X86_FEATURE_PTI to X86_FEATURE_KAISER]
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
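As a rough illustration of the whitelist approach described above (a hedged sketch only: the bug flag is named X86_BUG_SWAPGS in the upstream patch, while the whitelist helper here is hypothetical and the real cpu_vuln_whitelist table plumbing differs):

    #include <asm/processor.h>

    /* Hypothetical helper standing in for the vulnerability whitelist lookup. */
    static bool __init cpu_no_swapgs_speculation(struct cpuinfo_x86 *c);

    static void __init sketch_set_swapgs_bug(struct cpuinfo_x86 *c)
    {
            /*
             * Out-of-order Atoms and AMD/HYGON parts are whitelisted and never
             * get the bug bit; everything else is treated as affected.
             */
            if (cpu_no_swapgs_speculation(c))
                    return;

            setup_force_cpu_bug(X86_BUG_SWAPGS);
    }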
-
Josh Poimboeuf authored
Somehow the swapgs mitigation entry code patch ended up with a JMPQ instruction instead of JMP, where only the short jump is needed. Some assembler versions apparently fail to optimize JMPQ into a two-byte JMP when possible, instead always using a 7-byte JMP with relocation. For some reason that makes the entry code explode with a #GP during boot.

Change it back to "JMP" as originally intended.

Fixes: 18ec54fd ("x86/speculation: Prepare entry code for Spectre v1 swapgs mitigations")
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

CVE-2019-1125

(backported from commit 64dbc122)
[tyhicks: Adjust context in entry_64.S]
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Josh Poimboeuf authored
The previous commit added macro calls in the entry code which mitigate the Spectre v1 swapgs issue if the X86_FEATURE_FENCE_SWAPGS_* features are enabled. Enable those features where applicable.

The mitigations may be disabled with "nospectre_v1" or "mitigations=off".

There are different features which can affect the risk of attack:

- When FSGSBASE is enabled, unprivileged users are able to place any value in GS, using the wrgsbase instruction. This means they can write a GS value which points to any value in kernel space, which can be useful with the following gadget in an interrupt/exception/NMI handler:

      if (coming from user space)
          swapgs
      mov %gs:<percpu_offset>, %reg1
      // dependent load or store based on the value of %reg1
      // for example: mov %(reg1), %reg2

  If an interrupt is coming from user space, and the entry code speculatively skips the swapgs (due to user branch mistraining), it may speculatively execute the GS-based load and a subsequent dependent load or store, exposing the kernel data to an L1 side channel leak.

  Note that, on Intel, a similar attack exists in the above gadget when coming from kernel space, if the swapgs gets speculatively executed to switch back to the user GS. On AMD, this variant isn't possible because swapgs is serializing with respect to future GS-based accesses.

  NOTE: The FSGSBASE patch set hasn't been merged yet, so the above case doesn't exist quite yet.

- When FSGSBASE is disabled, the issue is mitigated somewhat because unprivileged users must use prctl(ARCH_SET_GS) to set GS, which restricts GS values to user space addresses only. That means the gadget would need an additional step, since the target kernel address needs to be read from user space first. Something like:

      if (coming from user space)
          swapgs
      mov %gs:<percpu_offset>, %reg1
      mov (%reg1), %reg2
      // dependent load or store based on the value of %reg2
      // for example: mov %(reg2), %reg3

  It's difficult to audit for this gadget in all the handlers, so while there are no known instances of it, it's entirely possible that it exists somewhere (or could be introduced in the future). Without tooling to analyze all such code paths, consider it vulnerable.

  Effects of SMAP on the !FSGSBASE case:

  - If SMAP is enabled, and the CPU reports RDCL_NO (i.e., not susceptible to Meltdown), the kernel is prevented from speculatively reading user space memory, even L1 cached values. This effectively disables the !FSGSBASE attack vector.

  - If SMAP is enabled, but the CPU *is* susceptible to Meltdown, SMAP still prevents the kernel from speculatively reading user space memory. But it does *not* prevent the kernel from reading the user value from L1, if it has already been cached. This is probably only a small hurdle for an attacker to overcome.

Thanks to Dave Hansen for contributing the speculative_smap() function.

Thanks to Andrew Cooper for providing the inside scoop on whether swapgs is serializing on AMD.

[ tglx: Fixed the USER fence decision and polished the comment as suggested by Dave Hansen ]

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Dave Hansen <dave.hansen@intel.com>

CVE-2019-1125

(backported from commit a2059825)
[tyhicks: Backport to Xenial:
 - Adjust the file path to and context in kernel-parameters.txt
 - Rename X86_FEATURE_PTI to X86_FEATURE_KAISER]
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
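A simplified sketch of the enabling logic described above: the user-entry LFENCE is only forced when PTI (KAISER in this Xenial backport) is off, while the kernel-entry LFENCE is always needed on affected CPUs. Feature and helper names follow the upstream patch, but the function is heavily abbreviated and omits the command-line handling:

    #include <asm/processor.h>

    /* Abbreviated sketch; nospectre_v1/mitigations=off handling omitted. */
    static void __init sketch_spectre_v1_swapgs_enable(void)
    {
            if (!boot_cpu_has_bug(X86_BUG_SWAPGS))
                    return;

            /*
             * User entry paths already get a serializing CR3 write when
             * PTI/KAISER is enabled, so the extra LFENCE is only needed
             * when it is not.
             */
            if (!boot_cpu_has(X86_FEATURE_KAISER))
                    setup_force_cpu_cap(X86_FEATURE_FENCE_SWAPGS_USER);

            /* Kernel entry paths never write CR3, so always fence them. */
            setup_force_cpu_cap(X86_FEATURE_FENCE_SWAPGS_KERNEL);
    }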
-
Josh Poimboeuf authored
Spectre v1 isn't only about array bounds checks. It can affect any conditional checks. The kernel entry code interrupt, exception, and NMI handlers all have conditional swapgs checks. Those may be problematic in the context of Spectre v1, as kernel code can speculatively run with a user GS. For example:

    if (coming from user space)
        swapgs
    mov %gs:<percpu_offset>, %reg
    mov (%reg), %reg1

When coming from user space, the CPU can speculatively skip the swapgs, and then do a speculative percpu load using the user GS value. So the user can speculatively force a read of any kernel value. If a gadget exists which uses the percpu value as an address in another load/store, then the contents of the kernel value may become visible via an L1 side channel attack.

A similar attack exists when coming from kernel space. The CPU can speculatively do the swapgs, causing the user GS to get used for the rest of the speculative window.

The mitigation is similar to a traditional Spectre v1 mitigation, except:

a) index masking isn't possible; because the index (percpu offset) isn't user-controlled; and

b) an lfence is needed in both the "from user" swapgs path and the "from kernel" non-swapgs path (because of the two attacks described above).

The user entry swapgs paths already have SWITCH_TO_KERNEL_CR3, which has a CR3 write when PTI is enabled. Since CR3 writes are serializing, the lfences can be skipped in those cases. On the other hand, the kernel entry swapgs paths don't depend on PTI.

To avoid unnecessary lfences for the user entry case, create two separate features for alternative patching:

    X86_FEATURE_FENCE_SWAPGS_USER
    X86_FEATURE_FENCE_SWAPGS_KERNEL

Use these features in entry code to patch in lfences where needed.

The features aren't enabled yet, so there's no functional change.

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Dave Hansen <dave.hansen@intel.com>

CVE-2019-1125

(backported from commit 18ec54fd)
[tyhicks: Backport to Xenial:
 - Adjust context in calling.h
 - Minor rework of fencing in entry_64.S due to differences in entry points
 - Add a FENCE_SWAPGS_KERNEL_ENTRY to a swapgs, in NMI entry path, that wasn't present in newer kernels
 - Indent macros in calling.h to match the rest of the file's style
 - Move include of calling.h later in entry_64.S to fix build failure]
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Fenghua Yu authored
It's a waste for the four X86_FEATURE_CQM_* feature bits to occupy two whole feature bits words. To better utilize feature words, re-define word 11 to host scattered features and move the four X86_FEATURE_CQM_* features into Linux-defined word 11. More scattered features can be added in word 11 in the future.

Rename leaf 11 in cpuid_leafs to CPUID_LNX_4 to reflect that it's a Linux-defined leaf. Rename leaf 12 as CPUID_DUMMY, which will be replaced by a meaningful name in the next patch when CPUID.7.1:EAX occupies word 12.

The maximum number of RMIDs and the cache occupancy scale are retrieved from CPUID.0xf.1 after scattered CQM features are enumerated. Carve out the code into a separate function.

KVM doesn't support resctrl now. So it's safe to move the X86_FEATURE_CQM_* features to scattered features word 11 for KVM.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Aaron Lewis <aaronlewis@google.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Babu Moger <babu.moger@amd.com>
Cc: "Chang S. Bae" <chang.seok.bae@intel.com>
Cc: "Sean J Christopherson" <sean.j.christopherson@intel.com>
Cc: Frederic Weisbecker <frederic@kernel.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jann Horn <jannh@google.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: kvm ML <kvm@vger.kernel.org>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Nadav Amit <namit@vmware.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Pavel Tatashin <pasha.tatashin@oracle.com>
Cc: Peter Feiner <pfeiner@google.com>
Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
Cc: Ravi V Shankar <ravi.v.shankar@intel.com>
Cc: Sherry Hurwitz <sherry.hurwitz@amd.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Thomas Lendacky <Thomas.Lendacky@amd.com>
Cc: x86 <x86@kernel.org>
Link: https://lkml.kernel.org/r/1560794416-217638-2-git-send-email-fenghua.yu@intel.com

CVE-2019-1125

(backported from commit acec0ce0)
[tyhicks: Backport to Xenial:
 - Adjust context in cpufeatures.h
 - No need to modify cpuid.h since we're missing commit d6321d49 ("KVM: x86: generalize guest_cpuid_has_ helpers")
 - No need to modify cpuid-deps.c since we're missing commit 0b00de85 ("x86/cpuid: Add generic table for CPUID dependencies")
 - Change CPUID_EDX to CR_EDX since we're missing commit 47f10a36 ("x86/cpuid: Cleanup cpuid_regs definitions")]
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Borislav Petkov authored
... into a separate function for better readability. Split out from a patch from Fenghua Yu <fenghua.yu@intel.com> to keep the mechanical, sole code movement separate for easy review.

No functional changes.

Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: x86@kernel.org

CVE-2019-1125

(cherry picked from commit 45fc56e6)
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Kleber Sacilotto de Souza authored
Ignore: yes

Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
- 29 Jul, 2019 9 commits
-
-
Sultan Alsawaf authored
Signed-off-by: Sultan Alsawaf <sultan.alsawaf@canonical.com>
-
Sultan Alsawaf authored
BugLink: https://bugs.launchpad.net/bugs/1837609

Signed-off-by: Sultan Alsawaf <sultan.alsawaf@canonical.com>
-
Sultan Alsawaf authored
Ignore: yes

Signed-off-by: Sultan Alsawaf <sultan.alsawaf@canonical.com>
-
Sultan Alsawaf authored
BugLink: http://bugs.launchpad.net/bugs/1786013

Signed-off-by: Sultan Alsawaf <sultan.alsawaf@canonical.com>
-
Sultan Alsawaf authored
BugLink: http://bugs.launchpad.net/bugs/1786013

Signed-off-by: Sultan Alsawaf <sultan.alsawaf@canonical.com>
-
Alexander Duyck authored
BugLink: https://bugs.launchpad.net/bugs/1836760

Change the ethtool link settings call to just read the cached state out of the adapter structure instead of trying to recheck the value from the PF. Doing this should prevent excessive reading of the mailbox.

Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Reviewed-by: "Guilherme G. Piccoli" <gpiccoli@canonical.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
(backported from commit 1e1b0c65)
[gpiccoli:
 * Context adjustment for v4.4.
 * Changed the patch to use the old ethtool API for get link settings (get_link_ksettings -> get_settings); the new API was introduced in v4.6 by commit 3f1ac7a7.]
Signed-off-by: Guilherme G. Piccoli <gpiccoli@canonical.com>
Acked-by: Khalid Elmously <khalid.elmously@canonical.com>
Acked-by: Connor Kuehl <connor.kuehl@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
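A hedged sketch of what an old-API get_settings handler looks like when it reports cached state rather than querying the PF over the mailbox; the private-state struct and field names below are stand-ins, not the actual ixgbevf driver code:

    #include <linux/ethtool.h>
    #include <linux/netdevice.h>

    /* Stand-in for the driver's cached link state (updated by its watchdog). */
    struct vf_priv {
            bool link_up;
            u32  link_speed;   /* assumed to already be an ethtool SPEED_* value */
    };

    static int sketch_get_settings(struct net_device *netdev,
                                   struct ethtool_cmd *ecmd)
    {
            struct vf_priv *priv = netdev_priv(netdev);

            ecmd->supported   = SUPPORTED_10000baseT_Full;
            ecmd->autoneg     = AUTONEG_DISABLE;
            ecmd->transceiver = XCVR_DUMMY1;
            ecmd->port        = PORT_OTHER;

            /* No mailbox request here: just report what was cached earlier. */
            if (priv->link_up) {
                    ethtool_cmd_speed_set(ecmd, priv->link_speed);
                    ecmd->duplex = DUPLEX_FULL;
            } else {
                    ethtool_cmd_speed_set(ecmd, SPEED_UNKNOWN);
                    ecmd->duplex = DUPLEX_UNKNOWN;
            }

            return 0;
    }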
-
Stephan Mueller authored
According to SP800-56A section 5.6.2.1, the public key to be processed for the ECDH operation shall be checked for appropriateness. When the public key is considered to be an ephemeral key, the partial validation test as defined in SP800-56A section 5.6.2.3.4 can be applied.

The partial verification test requires the presence of the field elements of a and b. For the implemented NIST curves, b is defined in FIPS 186-4 appendix D.1.2. The element a is implicitly given with the Weierstrass equation given in D.1.2 where a = p - 3.

Without the test, the NIST ACVP testing fails. After adding this check, the NIST ACVP testing passes.

CVE-2018-5383

Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
(cherry picked from commit ea169a30)
Signed-off-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Acked-by: Connor Kuehl <connor.kuehl@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
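For orientation, the partial validation in SP800-56A 5.6.2.3.4 amounts to three checks on the peer point Q = (x, y) over the prime p, with a = p - 3 and b from FIPS 186-4 D.1.2: Q is not the point at infinity, both coordinates lie in [0, p - 1], and Q satisfies the curve equation y^2 = x^3 - 3x + b (mod p). A schematic sketch with hypothetical big-number helpers (bn_*), not the kernel's internal vli_* API:

    #include <linux/errno.h>
    #include <linux/types.h>

    /* Hypothetical arbitrary-precision helpers used only for illustration. */
    struct bn;
    bool bn_is_zero(const struct bn *a);
    int bn_cmp(const struct bn *a, const struct bn *b);
    struct bn *bn_mod_square(const struct bn *a, const struct bn *p);
    struct bn *bn_mod_mul(const struct bn *a, const struct bn *b, const struct bn *p);
    struct bn *bn_mod_mul_word(const struct bn *a, unsigned int w, const struct bn *p);
    struct bn *bn_mod_add(const struct bn *a, const struct bn *b, const struct bn *p);
    struct bn *bn_mod_sub(const struct bn *a, const struct bn *b, const struct bn *p);

    static int partial_pubkey_check(const struct bn *p, const struct bn *b,
                                    const struct bn *x, const struct bn *y)
    {
            const struct bn *lhs, *rhs;

            /* 1) Q must not be the point at infinity. */
            if (bn_is_zero(x) && bn_is_zero(y))
                    return -EINVAL;

            /* 2) Both coordinates must lie in the interval [0, p - 1]. */
            if (bn_cmp(x, p) >= 0 || bn_cmp(y, p) >= 0)
                    return -EINVAL;

            /* 3) Q must satisfy y^2 == x^3 - 3x + b (mod p), since a = p - 3. */
            lhs = bn_mod_square(y, p);                           /* y^2       */
            rhs = bn_mod_mul(bn_mod_square(x, p), x, p);         /* x^3       */
            rhs = bn_mod_sub(rhs, bn_mod_mul_word(x, 3, p), p);  /* x^3 - 3x  */
            rhs = bn_mod_add(rhs, b, p);                         /* + b       */

            return bn_cmp(lhs, rhs) == 0 ? 0 : -EINVAL;
    }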
-
Salvatore Benedetto authored
* Convert both smp and selftest to the crypto kpp API
* Remove module ecc as it is no longer required
* Add ecdh_helper functions for wrapping kpp async calls

This patch has been tested *only* with selftest, which is called on module loading.

Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>

CVE-2018-5383

(cherry picked from commit 58771c1c)
Signed-off-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Acked-by: Connor Kuehl <connor.kuehl@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
-
Paolo Pisati authored
CVE-2018-5383

Signed-off-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Acked-by: Connor Kuehl <connor.kuehl@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
Signed-off-by: Sultan Alsawaf <sultan.alsawaf@canonical.com>
-
- 23 Jul, 2019 19 commits
-
-
Pierre authored
If crypto_get_default_rng returns an error, the function ecc_gen_privkey should return an error. Instead, it currently tries to use the default_rng nevertheless, thus creating a kernel panic with a NULL pointer dereference. Returning the error directly, as was supposedly intended when looking at the code, fixes this.

Signed-off-by: Pierre Ducroquet <pinaraf@pinaraf.info>
Reviewed-by: PrasannaKumar Muralidharan <prasannatsmkumar@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

CVE-2018-5383

(cherry picked from commit 4c0e22c9)
Signed-off-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Acked-by: Connor Kuehl <connor.kuehl@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
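A minimal sketch of the fix pattern described above, using the kernel's default-RNG helpers (crypto_get_default_rng(), crypto_default_rng, crypto_rng_get_bytes(), crypto_put_default_rng()); the surrounding key-generation logic is omitted:

    #include <crypto/rng.h>
    #include <linux/types.h>

    /* Fill nbytes of private-key material from the default RNG. */
    static int sketch_gen_privkey_bytes(u8 *priv, unsigned int nbytes)
    {
            int err;

            err = crypto_get_default_rng();
            if (err)
                    return err;  /* previously ignored -> NULL crypto_default_rng deref */

            err = crypto_rng_get_bytes(crypto_default_rng, priv, nbytes);
            crypto_put_default_rng();

            return err;
    }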
-
Tudor-Dan Ambarus authored
Add support for generating ecc private keys.

Generation of ecc private keys is helpful in a user-space to kernel ecdh offload because the keys are not revealed to user-space. Private key generation is also helpful to implement forward secrecy.

If the user provides a NULL ecc private key, the kernel will generate it and further use it for ecdh.

Move ecdh's object files below drbg's. drbg must be present in the kernel at the time of calling.

Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Reviewed-by: Stephan Müller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

CVE-2018-5383

(cherry picked from commit 6755fd26)
Signed-off-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Acked-by: Connor Kuehl <connor.kuehl@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
-
Tudor-Dan Ambarus authored
Rename ecdh_make_pub_key() to ecc_make_pub_key(). ecdh_make_pub_key() is not dh specific and the reference to dh is wrong.

Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

CVE-2018-5383

(cherry picked from commit 7380c56d)
Signed-off-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Acked-by: Connor Kuehl <connor.kuehl@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
-
Tudor-Dan Ambarus authored
The ecc software implementation works with chunks of u64 data. There were some unnecessary casts to u8 and then back to u64 for the ecc keys. This patch removes the unnecessary casts.

Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

CVE-2018-5383

(cherry picked from commit ad269597)
Signed-off-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Acked-by: Connor Kuehl <connor.kuehl@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
-
Tudor-Dan Ambarus authored
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

CVE-2018-5383

(cherry picked from commit 099054d7)
Signed-off-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Acked-by: Connor Kuehl <connor.kuehl@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
-
Tudor-Dan Ambarus authored
While here, add missing argument description (ndigits).

Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

CVE-2018-5383

(cherry picked from commit c0ca1215)
Signed-off-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Acked-by: Connor Kuehl <connor.kuehl@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
-
Stephan Mueller authored
Add the KPP API documentation to the kernel crypto API Sphinx documentation. This addition includes the documentation of the ECDH and DH helpers which are needed to create the appropriate input data for the crypto_kpp_set_secret function.

Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>

CVE-2018-5383

(cherry picked from commit 8d23da22)
Signed-off-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Acked-by: Connor Kuehl <connor.kuehl@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
-
Stephen Rothwell authored
There is another ecdh_shared_secret in net/bluetooth/ecc.c

Fixes: 3c4b2390 ("crypto: ecdh - Add ECDH software support")
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

CVE-2018-5383

(cherry picked from commit 8f44df15)
Signed-off-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Acked-by: Connor Kuehl <connor.kuehl@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
-
Salvatore Benedetto authored
* Implement ECDH under the kpp API
* Provide ECC software support for curves P-192 and P-256
* Add a kpp test for ECDH with data generated by OpenSSL

Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

CVE-2018-5383

(cherry picked from commit 3c4b2390)
Signed-off-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Acked-by: Connor Kuehl <connor.kuehl@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
-
Salvatore Benedetto authored
* Implement MPI-based Diffie-Hellman under the kpp API
* The test provided uses data generated by OpenSSL

Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

CVE-2018-5383

(cherry picked from commit 802c7f1c)
Signed-off-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Acked-by: Connor Kuehl <connor.kuehl@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
-
Salvatore Benedetto authored
Add the key-agreement protocol primitives (kpp) API, which allows implementing primitives required by protocols such as DH and ECDH.

The API is composed mainly of the following functions:

* set_secret() - It allows the user to set his secret, also referred to as his private key, along with the parameters known to both parties involved in the key-agreement session.
* generate_public_key() - It generates the public key to be sent to the other counterpart involved in the key-agreement session. The function has to be called after set_params() and set_secret().
* generate_secret() - It generates the shared secret for the session.

Other functions such as init() and exit() are provided to allow cryptographic hardware to be initialized properly before use.

Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

CVE-2018-5383

(cherry picked from commit 4e5f2c40)
Signed-off-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Acked-by: Connor Kuehl <connor.kuehl@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
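To make the flow concrete, here is a hedged sketch of driving ECDH through this kpp interface from kernel code: allocate the transform, hand over the encoded parameters via the set_secret path, then compute the shared secret from the peer's public key. Function names follow the kpp and ecdh helper headers introduced by this series, but treat the exact signatures as an approximation; error handling and the asynchronous completion path are trimmed, and passing a NULL private key relies on the key-generation support added elsewhere in this series:

    #include <crypto/kpp.h>
    #include <crypto/ecdh.h>
    #include <linux/scatterlist.h>
    #include <linux/slab.h>

    /* Abbreviated ECDH key agreement over the kpp API (software ecdh is sync). */
    static int sketch_ecdh_kpp(u8 *peer_pub, unsigned int peer_pub_len,
                               u8 *shared, unsigned int shared_len)
    {
            struct ecdh params = {
                    .curve_id = ECC_CURVE_NIST_P256,
                    .key = NULL,      /* NULL -> let the kernel generate the private key */
                    .key_size = 0,
            };
            unsigned int buf_len = crypto_ecdh_key_len(&params);
            char *buf = kmalloc(buf_len, GFP_KERNEL);
            struct crypto_kpp *tfm = crypto_alloc_kpp("ecdh", 0, 0);
            struct kpp_request *req = kpp_request_alloc(tfm, GFP_KERNEL);
            struct scatterlist src, dst;
            int err;

            /* set_secret(): encoded curve id + private key go to the transform. */
            crypto_ecdh_encode_key(buf, buf_len, &params);
            err = crypto_kpp_set_secret(tfm, buf, buf_len);

            /*
             * generate_public_key(): our own public value would be produced here
             * with crypto_kpp_generate_public_key(req) and an output scatterlist.
             */

            /* Compute the shared secret: peer public key in, shared secret out. */
            sg_init_one(&src, peer_pub, peer_pub_len);
            sg_init_one(&dst, shared, shared_len);
            kpp_request_set_input(req, &src, peer_pub_len);
            kpp_request_set_output(req, &dst, shared_len);
            err = crypto_kpp_compute_shared_secret(req);

            kpp_request_free(req);
            crypto_free_kpp(tfm);
            kfree(buf);
            return err;
    }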
-
Greg Kroah-Hartman authored
BugLink: https://bugs.launchpad.net/bugs/1836668

Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Robin Gong authored
BugLink: https://bugs.launchpad.net/bugs/1836668

commit 3f93a4f2 upstream.

It is possible for an irq triggered by channel0 to be received later, after clks are disabled, once firmware is loaded during sdma probe. If that happens then clearing them by writing to SDMA_H_INTR won't work and the kernel will hang processing infinite interrupts. Actually, there is no need for an interrupt triggered on channel0, since the current code polls SDMA_H_STATSTOP to know when channel0 is done rather than using the interrupt; just clear BD_INTR to disable the channel0 interrupt and avoid the above case.

This issue was brought by commit 1d069bfa ("dmaengine: imx-sdma: ack channel 0 IRQ in the interrupt handler") which didn't take care of the above case.

Fixes: 1d069bfa ("dmaengine: imx-sdma: ack channel 0 IRQ in the interrupt handler")
Cc: stable@vger.kernel.org #5.0+
Signed-off-by: Robin Gong <yibin.gong@nxp.com>
Reported-by: Sven Van Asbroeck <thesven73@gmail.com>
Tested-by: Sven Van Asbroeck <thesven73@gmail.com>
Reviewed-by: Michael Olbrich <m.olbrich@pengutronix.de>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Paolo Bonzini authored
BugLink: https://bugs.launchpad.net/bugs/1836668

commit 3f16a5c3 upstream.

This warning can be triggered easily by userspace, so it should certainly not cause a panic if panic_on_warn is set.

Reported-by: syzbot+c03f30b4f4c46bdf8575@syzkaller.appspotmail.com
Suggested-by: Alexander Potapenko <glider@google.com>
Acked-by: Alexander Potapenko <glider@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Kees Cook authored
BugLink: https://bugs.launchpad.net/bugs/1836668

Commit dbbb08f5 upstream.

Adjust vdso_{start|end} to be char arrays to avoid compile-time analysis that flags "too large" memcmp() calls with CONFIG_FORTIFY_SOURCE.

Cc: Jisheng Zhang <jszhang@marvell.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
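A tiny illustration of why the type change matters for FORTIFY_SOURCE: declared as plain chars, each symbol looks like a one-byte object, so a fortified memcmp() over the whole vDSO image is flagged as "too large"; declared as incomplete char arrays, the symbols carry no compile-time size and the bogus bound disappears. Generic sketch, not the arm64 code itself:

    #include <string.h>

    /*
     * Before: extern char vdso_start, vdso_end;
     *   memcmp(&vdso_start, image, image_size) looks like a read of
     *   image_size bytes from a 1-byte object -> fortified warning.
     *
     * After: incomplete array types have no known size, so no bound check.
     */
    extern char vdso_start[], vdso_end[];

    static int vdso_matches(const void *image, size_t image_size)
    {
            /* Plain pointer arithmetic on char *, no fortify complaints. */
            if ((size_t)(vdso_end - vdso_start) != image_size)
                    return 0;

            return memcmp(vdso_start, image, image_size) == 0;
    }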
-
Vineet Gupta authored
BugLink: https://bugs.launchpad.net/bugs/1836668

commit af1be2e2 upstream.

ARC gcc prior to the GNU 2018.03 release didn't have a target-specific __builtin_trap() implementation, generating a default abort() call instead. Implement the abort() call - emulating what newer gcc does for the same, as suggested by Arnd.

Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Linus Torvalds authored
BugLink: https://bugs.launchpad.net/bugs/1836668

[ Upstream commit 423ea325 ]

Make the forward declaration actually match the real function definition, something that previous versions of gcc had just ignored.

This is another patch to fix new warnings from gcc-9 before I start the merge window pulls. I don't want to miss legitimate new warnings just because my system update brought a new compiler with new warnings.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Nikolay Borisov authored
BugLink: https://bugs.launchpad.net/bugs/1836668

commit debd1c06 upstream.

Recent FITRIM work, namely bbbf7243 ("btrfs: combine device update operations during transaction commit"), combined the way certain operations are recorded in a transaction. As a result an ASSERT was added in dev_replace_finish to ensure the new code works correctly. Unfortunately I got reports that it's possible to trigger the assert, meaning that during a device replace it's possible to have an unfinished chunk allocation on the source device.

This is supposed to be prevented by the fact that a transaction is committed before finishing the replace operation and later acquiring the chunk mutex. This is not sufficient since by the time the transaction is committed and the chunk mutex acquired it's possible to allocate a chunk depending on the workload being executed on the replaced device. This bug has been present ever since device replace was introduced but there was never code which checks for it.

The correct way to fix it is to ensure that there is no pending device modification operation when the chunk mutex is acquired, and if there is, repeat the transaction commit. Unfortunately it's not possible to just exclude the source device from btrfs_fs_devices::dev_alloc_list since this causes ENOSPC to be hit in transaction commit. Fixing that in another way would need to add special cases to handle the last writes and forbid new ones. The looped transaction fix is more obvious, and can be easily backported. The runtime of dev-replace is long so there's no noticeable delay caused by that.

Reported-by: David Sterba <dsterba@suse.com>
Fixes: 391cd9df ("Btrfs: fix unprotected alloc list insertion during the finishing procedure of replace")
CC: stable@vger.kernel.org # 4.4+
Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
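A schematic of the looped commit described above, with hypothetical helper names (the real logic lives in the btrfs dev-replace finish path and uses btrfs internals): keep committing the current transaction until, with the chunk mutex held, no device modification is still pending.

    #include <linux/types.h>

    /* Hypothetical placeholders standing in for btrfs internals. */
    struct fs_ctx;
    int commit_current_transaction(struct fs_ctx *fs);
    void lock_chunk_mutex(struct fs_ctx *fs);
    void unlock_chunk_mutex(struct fs_ctx *fs);
    bool has_pending_device_ops(struct fs_ctx *fs);

    static int settle_pending_device_ops(struct fs_ctx *fs)
    {
            int ret;

            while (1) {
                    /* Flush whatever chunk allocations are already in flight. */
                    ret = commit_current_transaction(fs);
                    if (ret)
                            return ret;

                    lock_chunk_mutex(fs);
                    if (!has_pending_device_ops(fs))
                            return 0;   /* finish the replace with the mutex held */

                    /* A new chunk landed meanwhile: drop the lock, commit again. */
                    unlock_chunk_mutex(fs);
            }
    }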
-
Herbert Xu authored
BugLink: https://bugs.launchpad.net/bugs/1836668

commit c8ea9fce upstream.

Sometimes mpi_powm will leak karactx because a memory allocation failure causes a bail-out that skips the freeing of karactx. This patch moves the freeing of karactx to the end of the function like everything else so that it can't be skipped.

Reported-by: syzbot+f7baccc38dcc1e094e77@syzkaller.appspotmail.com
Fixes: cdec9cb5 ("crypto: GnuPG based MPI lib - source files...")
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
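The fix pattern described above is the classic single-exit cleanup: every bail-out goes through one label that releases karactx, so an allocation failure can no longer skip the free. A generic sketch with placeholder buffers, not the MPI internals:

    #include <linux/slab.h>
    #include <linux/errno.h>

    static int sketch_powm_like(size_t n)
    {
            void *karactx = NULL, *scratch = NULL;
            int rc = 0;

            karactx = kmalloc(n, GFP_KERNEL);
            if (!karactx)
                    return -ENOMEM;

            scratch = kmalloc(n, GFP_KERNEL);
            if (!scratch) {
                    rc = -ENOMEM;
                    goto out;  /* the buggy version returned here and leaked karactx */
            }

            /* ... the actual exponentiation would run here ... */

    out:
            kfree(scratch);
            kfree(karactx);  /* freed at the single exit, so it can't be skipped */
            return rc;
    }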
-