1. 07 Aug, 2018 2 commits
    • crypto: x86/aegis,morus - Fix and simplify CPUID checks · 877ccce7
      Ondrej Mosnacek authored
      It turns out I had misunderstood how the x86_match_cpu() function works.
      It evaluates a logical OR of the matching conditions, not logical AND.
      This caused the CPU feature checks for AEGIS to pass even if only SSE2
      (but not AES-NI) was supported (or vice versa), leading to potential
      crashes if something tried to use the registered algs.
      
      This patch switches the checks to a simpler method that is used, e.g., in
      the Camellia x86 code (the pattern is sketched after this entry).
      
      The patch also removes the MODULE_DEVICE_TABLE declarations which
      actually seem to cause the modules to be auto-loaded at boot, which is
      not desired. The crypto API on-demand module loading is sufficient.
      
      Fixes: 1d373d4e ("crypto: x86 - Add optimized AEGIS implementations")
      Fixes: 6ecc9d9f ("crypto: x86 - Add optimized MORUS implementations")
      Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
      Tested-by: Milan Broz <gmazyland@gmail.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      877ccce7
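      For context, a minimal sketch of the direct feature-check pattern the patch
      switches to. This is an assumption for illustration only, not the actual
      AEGIS/MORUS glue code: the init function name and the crypto_algs array are
      hypothetical, and the real code may check additional features and handle
      errors differently.

          /* Sketch only: check each required CPU feature explicitly at module
           * init, instead of an x86_match_cpu() table (whose entries are ORed
           * together). */
          static int __init aegis128_aesni_sketch_init(void)
          {
                  /* Require *both* features; refuse to load if either is missing. */
                  if (!boot_cpu_has(X86_FEATURE_XMM2) ||
                      !boot_cpu_has(X86_FEATURE_AES))
                          return -ENODEV;

                  return crypto_register_aeads(crypto_algs, ARRAY_SIZE(crypto_algs));
          }
          module_init(aegis128_aesni_sketch_init);
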
    • crypto: arm64 - revert NEON yield for fast AEAD implementations · f10dc56c
      Ard Biesheuvel authored
      As it turns out, checking the TIF_NEED_RESCHED flag after each
      iteration results in a significant performance regression (~10%)
      when running fast algorithms (i.e., ones that use special instructions
      and operate in the < 4 cycles per byte range) on in-order cores with
      comparatively slow memory accesses such as the Cortex-A53.
      
      Given the speed of these ciphers, and the fact that the page-based
      nature of the AEAD scatterwalk API guarantees that the core NEON
      transform is never invoked with more than a single page's worth of
      input, we can estimate the worst-case duration of any resulting
      scheduling blackout: on a 1 GHz Cortex-A53 running with 64k pages,
      processing a page's worth of input at 4 cycles per byte results in
      a delay of ~250 us, which is a reasonable upper bound (the arithmetic
      is reproduced after this entry).
      
      So let's remove the yield checks from the fused AES-CCM and AES-GCM
      routines entirely.
      
      This reverts commit 7b67ae4d and
      partially reverts commit 7c50136a.
      
      Fixes: 7c50136a ("crypto: arm64/aes-ghash - yield NEON after every ...")
      Fixes: 7b67ae4d ("crypto: arm64/aes-ccm - yield NEON after every ...")
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      f10dc56c
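      A small stand-alone check of the bound quoted above, using only the figures
      stated in the message (64k pages, ~4 cycles per byte, 1 GHz clock); the
      message rounds the result down to ~250 us:

          #include <stdio.h>

          int main(void)
          {
                  const double page_bytes = 64 * 1024;  /* 64k page size        */
                  const double cycles_per_byte = 4.0;   /* "fast" cipher bound  */
                  const double clock_hz = 1e9;          /* 1 GHz Cortex-A53     */

                  double delay_us = page_bytes * cycles_per_byte / clock_hz * 1e6;

                  /* Prints: worst-case scheduling blackout: ~262 us */
                  printf("worst-case scheduling blackout: ~%.0f us\n", delay_us);
                  return 0;
          }
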
  2. 20 Jul, 2018 1 commit
    • crypto: padlock-aes - Fix Nano workaround data corruption · 46d8c4b2
      Herbert Xu authored
      This was detected by the self-test thanks to Ard's chunking patch.
      
      I finally got around to testing this out on my ancient Via box.  It
      turns out that the workaround got the assembly wrong and we end up
      doing count + initial cycles of the loop instead of just count.
      
      This obviously causes corruption, either by overwriting the source
      that is yet to be processed, or writing over the end of the buffer.
      
      On CPUs that don't require the workaround, only ECB is affected.
      On Nano CPUs, both ECB and CBC are affected.
      
      This patch fixes it by doing the subtraction prior to the assembly
      (a simplified sketch follows this entry).
      
      Fixes: a76c1c23 ("crypto: padlock-aes - work around Nano CPU...")
      Cc: <stable@vger.kernel.org>
      Reported-by: Jamie Heilman <jamie@audible.transient.net>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      46d8c4b2
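      A simplified, user-space sketch of the off-by-'initial' pattern described
      above. This is NOT the actual padlock-aes code: process_blocks() is a
      hypothetical stand-in for the REP XCRYPT inline assembly, and the real
      driver's block handling differs in detail.

          #include <stdint.h>
          #include <stdio.h>

          #define AES_BLOCK_SIZE 16

          static void process_blocks(const uint8_t *in, uint8_t *out,
                                     unsigned int nblocks)
          {
                  printf("processing %u blocks\n", nblocks);  /* hardware stand-in */
                  (void)in; (void)out;
          }

          /* 'count' is the total number of blocks to process; the workaround
           * handles the first 'initial' blocks separately. */
          static void ecb_crypt_sketch(const uint8_t *in, uint8_t *out,
                                       unsigned int count, unsigned int initial)
          {
                  if (initial)
                          process_blocks(in, out, initial);

                  /* The buggy version handed 'count' to the second pass, so
                   * 'count + initial' blocks were processed in total.  The fix
                   * is to subtract in C before the assembly runs: */
                  count -= initial;
                  process_blocks(in + initial * AES_BLOCK_SIZE,
                                 out + initial * AES_BLOCK_SIZE, count);
          }

          int main(void)
          {
                  uint8_t buf[8 * AES_BLOCK_SIZE] = { 0 };

                  ecb_crypt_sketch(buf, buf, 8, 2);  /* 8 blocks total: 2 + 6 */
                  return 0;
          }
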
  3. 13 Jul, 2018 1 commit
  4. 01 Jul, 2018 2 commits
  5. 15 Jun, 2018 5 commits
    • hwrng: core - Always drop the RNG in hwrng_unregister() · 837bf7cc
      Michael Büsch authored
      enable_best_rng() is used in hwrng_unregister() to switch away from the
      currently active RNG, if that is the one currently being removed.
      However, enable_best_rng() might fail if the next RNG's init routine
      fails. In that case enable_best_rng() will return an error code and
      the currently active RNG will remain active.
      After unregistering, this might lead to crashes due to use-after-free.
      
      Fix this by dropping the currently active RNG if enable_best_rng()
      fails. This will result in no RNG being active if the next-best
      one failed to initialize (see the sketch after this entry).
      
      This problem was introduced by 142a27f0
      
      Fixes: 142a27f0 ("hwrng: core - Reset user selected rng by...")
      Reported-by: Wirz <spam@lukas-wirz.de>
      Tested-by: Wirz <spam@lukas-wirz.de>
      Signed-off-by: Michael Büsch <m@bues.ch>
      Cc: stable@vger.kernel.org
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      837bf7cc
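      An approximate sketch of the error-handling shape described above; the
      helper and variable names (enable_best_rng(), drop_current_rng(),
      current_rng, cur_rng_set_by_user) follow drivers/char/hw_random/core.c,
      but the exact code of the fix may differ.

          /* Sketch: in hwrng_unregister(), after removing 'rng' from the list. */
          if (current_rng == rng) {
                  err = enable_best_rng();
                  if (err) {
                          /* Switching to the next-best RNG failed; do not keep
                           * the soon-to-be-freed device as current_rng.  Drop
                           * it and run with no active RNG instead. */
                          drop_current_rng();
                          cur_rng_set_by_user = 0;
                  }
          }
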
    • crypto: morus640 - Fix out-of-bounds access · a81ae809
      Ondrej Mosnáček authored
      We must load the block from the temporary variable here, not directly
      from the input (the tail-handling pattern is sketched after this entry).
      
      Also add the forgotten zeroing-out of the uninitialized part of the
      temporary block (as is done correctly in morus1280.c).
      
      Fixes: 396be41f ("crypto: morus - Add generic MORUS AEAD implementations")
      Reported-by: syzbot+1fafa9c4cf42df33f716@syzkaller.appspotmail.com
      Reported-by: syzbot+d82643ba80bf6937cd44@syzkaller.appspotmail.com
      Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      a81ae809
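      A generic sketch of the tail-block handling described above (not the actual
      MORUS code; process_tail() and the XOR "mix" are illustrative stand-ins):
      the partial tail is copied into a zeroed temporary block, and the transform
      then reads from that block rather than from the shorter input.

          #include <stddef.h>
          #include <stdint.h>
          #include <string.h>

          #define BLOCK_SIZE 16

          static void process_tail(uint8_t state[BLOCK_SIZE],
                                   const uint8_t *src, size_t len)
          {
                  uint8_t tmp[BLOCK_SIZE];

                  memcpy(tmp, src, len);                   /* copy what is there */
                  memset(tmp + len, 0, BLOCK_SIZE - len);  /* zero the rest      */

                  /* The bug was effectively reading BLOCK_SIZE bytes from 'src'
                   * here instead of from the padded 'tmp'. */
                  for (size_t i = 0; i < BLOCK_SIZE; i++)
                          state[i] ^= tmp[i];              /* stand-in for the real mix */
          }
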
    • crypto: don't optimize keccakf() · f044a84e
      Dmitry Vyukov authored
      keccakf() is the only function in the kernel that uses the __optimize()
      macro. __optimize() breaks the frame-pointer unwinder because optimized
      code uses RBP, and, amusingly, it always led to degraded performance as
      well, since gcc does not inline across different optimization levels:
      keccakf() wasn't inlined into its callers and keccakf_round() wasn't
      inlined into keccakf().
      
      Drop __optimize() to resolve both problems (the inlining effect is
      illustrated after this entry).
      Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
      Fixes: 83dee2ce ("crypto: sha3-generic - rewrite KECCAK transform to help the compiler optimize")
      Reported-by: syzbot+37035ccfa9a0a017ffcf@syzkaller.appspotmail.com
      Reported-by: syzbot+e073e4740cfbb3ae200b@syzkaller.appspotmail.com
      Cc: linux-crypto@vger.kernel.org
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      f044a84e
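      A minimal user-space illustration of the inlining point made above (an
      assumption for illustration only, not the kernel code; the function names
      are made up): pinning one function to a different optimization level with
      GCC's optimize attribute tends to keep it from being inlined into its
      callers, as the message describes.

          #include <stdio.h>

          /* Pinned to -O3, like code built under the __optimize() macro. */
          __attribute__((optimize("O3")))
          static unsigned int round_step(unsigned int x)
          {
                  return (x * 2654435761u) >> 16;
          }

          /* Compiled at the translation unit's optimization level; with the
           * mismatched levels above, gcc keeps round_step() as a real call. */
          static unsigned int transform(unsigned int x)
          {
                  return round_step(x) ^ round_step(x + 1);
          }

          int main(void)
          {
                  printf("%u\n", transform(42));
                  return 0;
          }
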
    • crypto: arm64/aes-blk - fix and move skcipher_walk_done out of kernel_neon_begin, _end · 6e88f012
      Jia He authored
      On an arm64 server (QDF2400), I hit a might-sleep warning similar to [1]:
      [    7.019116] BUG: sleeping function called from invalid context at ./include/crypto/algapi.h:416
      [    7.027863] in_atomic(): 1, irqs_disabled(): 0, pid: 410, name: cryptomgr_test
      [    7.035106] 1 lock held by cryptomgr_test/410:
      [    7.039549]  #0:         (ptrval) (&drbg->drbg_mutex){+.+.}, at: drbg_instantiate+0x34/0x398
      [    7.048038] CPU: 9 PID: 410 Comm: cryptomgr_test Not tainted 4.17.0-rc6+ #27
      [    7.068228]  dump_backtrace+0x0/0x1c0
      [    7.071890]  show_stack+0x24/0x30
      [    7.075208]  dump_stack+0xb0/0xec
      [    7.078523]  ___might_sleep+0x160/0x238
      [    7.082360]  skcipher_walk_done+0x118/0x2c8
      [    7.086545]  ctr_encrypt+0x98/0x130
      [    7.090035]  simd_skcipher_encrypt+0x68/0xc0
      [    7.094304]  drbg_kcapi_sym_ctr+0xd4/0x1f8
      [    7.098400]  drbg_ctr_update+0x98/0x330
      [    7.102236]  drbg_seed+0x1b8/0x2f0
      [    7.105637]  drbg_instantiate+0x2ac/0x398
      [    7.109646]  drbg_kcapi_seed+0xbc/0x188
      [    7.113482]  crypto_rng_reset+0x4c/0xb0
      [    7.117319]  alg_test_drbg+0xec/0x330
      [    7.120981]  alg_test.part.6+0x1c8/0x3c8
      [    7.124903]  alg_test+0x58/0xa0
      [    7.128044]  cryptomgr_test+0x50/0x58
      [    7.131708]  kthread+0x134/0x138
      [    7.134936]  ret_from_fork+0x10/0x1c
      
      It seems there is a bug in Ard Biesheuvel's commit: skcipher_walk_done()
      may sleep, so it must be called outside the kernel_neon_begin()/
      kernel_neon_end() section (sketched after this entry).
      Fixes: 68338174 ("crypto: arm64/aes-blk - move kernel mode neon en/disable into loop")
      
      [1] https://www.spinics.net/lists/linux-crypto/msg33103.html
      
      Signed-off-by: jia.he@hxt-semitech.com
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: <stable@vger.kernel.org> # 4.17
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      6e88f012
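      A fragment sketching the shape of the fix named in the subject line (not
      the actual arm64 glue code; the NEON transform call is elided): the
      potentially sleeping skcipher_walk_done() call is made only after
      kernel_neon_end() has re-enabled preemption.

          while (walk.nbytes > 0) {
                  kernel_neon_begin();
                  /* ... NEON transform on walk.src.virt.addr / walk.dst.virt.addr ... */
                  kernel_neon_end();

                  /* Called outside the non-preemptible NEON section. */
                  err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
          }
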
    • crypto: chtls - use after free in chtls_pt_recvmsg() · f70b359b
      Dan Carpenter authored
      We call chtls_free_skb() but then dereference "skb" on the next lines.
      Also, "skb" can't be NULL; we just dereferenced it on the line before.
      
      I have moved the free down a couple of lines to fix this issue (the
      pattern is illustrated after this entry).
      
      Fixes: 17a7d24a ("crypto: chtls - generic handling of data and hdr")
      Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      f70b359b
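      A generic illustration of the use-after-free shape being fixed (not the
      chtls code itself; struct item, free_item() and consume() are hypothetical
      stand-ins for the skb handling in chtls_pt_recvmsg()):

          #include <stdio.h>
          #include <stdlib.h>

          struct item { int len; };

          static void free_item(struct item *it) { free(it); }

          static int consume(struct item *it)
          {
                  int total = 0;

                  /* Buggy shape, as described above:
                   *         free_item(it);
                   *         total += it->len;   // use after free
                   * Fixed shape: keep the free below the last use of 'it'. */
                  total += it->len;
                  free_item(it);
                  return total;
          }

          int main(void)
          {
                  struct item *it = malloc(sizeof(*it));

                  if (!it)
                          return 1;
                  it->len = 42;
                  printf("%d\n", consume(it));
                  return 0;
          }
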
  6. 30 May, 2018 26 commits
  7. 26 May, 2018 3 commits