25 Aug, 2018 (8 commits)
    • crypto: vmx - Fix sleep-in-atomic bugs · 0522236d
      Ondrej Mosnacek authored
      This patch fixes sleep-in-atomic bugs in AES-CBC and AES-XTS VMX
      implementations. The problem is that the blkcipher_* functions should
      not be called in atomic context.
      
      The bugs can be reproduced via the AF_ALG interface by trying to
      encrypt/decrypt sufficiently large buffers (at least 64 KiB) using the
      VMX implementations of 'cbc(aes)' or 'xts(aes)'. Such operations then
      trigger a BUG in crypto_yield():
      
      [  891.863680] BUG: sleeping function called from invalid context at include/crypto/algapi.h:424
      [  891.864622] in_atomic(): 1, irqs_disabled(): 0, pid: 12347, name: kcapi-enc
      [  891.864739] 1 lock held by kcapi-enc/12347:
      [  891.864811]  #0: 00000000f5d42c46 (sk_lock-AF_ALG){+.+.}, at: skcipher_recvmsg+0x50/0x530
      [  891.865076] CPU: 5 PID: 12347 Comm: kcapi-enc Not tainted 4.19.0-0.rc0.git3.1.fc30.ppc64le #1
      [  891.865251] Call Trace:
      [  891.865340] [c0000003387578c0] [c000000000d67ea4] dump_stack+0xe8/0x164 (unreliable)
      [  891.865511] [c000000338757910] [c000000000172a58] ___might_sleep+0x2f8/0x310
      [  891.865679] [c000000338757990] [c0000000006bff74] blkcipher_walk_done+0x374/0x4a0
      [  891.865825] [c0000003387579e0] [d000000007e73e70] p8_aes_cbc_encrypt+0x1c8/0x260 [vmx_crypto]
      [  891.865993] [c000000338757ad0] [c0000000006c0ee0] skcipher_encrypt_blkcipher+0x60/0x80
      [  891.866128] [c000000338757b10] [c0000000006ec504] skcipher_recvmsg+0x424/0x530
      [  891.866283] [c000000338757bd0] [c000000000b00654] sock_recvmsg+0x74/0xa0
      [  891.866403] [c000000338757c10] [c000000000b00f64] ___sys_recvmsg+0xf4/0x2f0
      [  891.866515] [c000000338757d90] [c000000000b02bb8] __sys_recvmsg+0x68/0xe0
      [  891.866631] [c000000338757e30] [c00000000000bbe4] system_call+0x5c/0x70
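      
      A minimal sketch of the fix pattern (illustrative only; function
      names are taken from the vmx driver, the loop shape is
      approximate): confine the vector-enabled, preemption-disabled
      window to the per-chunk cipher call, and invoke the walk helper,
      which may call crypto_yield() and sleep, only after preemption has
      been re-enabled.
      
          while ((nbytes = walk.nbytes)) {
                  preempt_disable();
                  pagefault_disable();
                  enable_kernel_vsx();
                  /* process only the full blocks of this chunk */
                  aes_p8_cbc_encrypt(walk.src.virt.addr, walk.dst.virt.addr,
                                     nbytes & ~(AES_BLOCK_SIZE - 1),
                                     &ctx->enc_key, walk.iv, 1);
                  disable_kernel_vsx();
                  pagefault_enable();
                  preempt_enable();
      
                  /* may sleep, so it must run outside the atomic section */
                  nbytes &= AES_BLOCK_SIZE - 1;
                  ret = blkcipher_walk_done(desc, &walk, nbytes);
          }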
      
      Fixes: 8c755ace ("crypto: vmx - Adding CBC routines for VMX module")
      Fixes: c07f5d3d ("crypto: vmx - Adding support for XTS")
      Cc: stable@vger.kernel.org
      Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      0522236d
    • crypto: arm64/aes-gcm-ce - fix scatterwalk API violation · c2b24c36
      Ard Biesheuvel authored
      Commit 71e52c27 ("crypto: arm64/aes-ce-gcm - operate on
      two input blocks at a time") modified the granularity at which
      the AES/GCM code processes its input to allow subsequent changes
      to be applied that improve performance by using aggregation to
      process multiple input blocks at once.
      
      For this reason, it doubled the algorithm's 'chunksize' property
      to 2 x AES_BLOCK_SIZE, but retained the non-SIMD fallback path that
      processes a single block at a time. In some cases, this violates the
      skcipher scatterwalk API, by calling skcipher_walk_done() with a
      non-zero residue value for a chunk that is expected to be handled
      in its entirety. This results in a WARN_ON() being hit by the TLS
      selftest code, and is likely to break other use cases as well.
      Unfortunately, none of the current test cases exercises this exact
      code path.
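      
      A hedged sketch of the contract (hypothetical walk loop, not the
      upstream diff; helper names are from the skcipher walk API): with
      .chunksize set to 2 * AES_BLOCK_SIZE, every step except the final
      one must consume a whole number of chunks before calling
      skcipher_walk_done().
      
          err = skcipher_walk_aead_encrypt(&walk, req, false);
          while (walk.nbytes) {
                  unsigned int n = walk.nbytes;
      
                  /* all but the final step must consume whole chunks */
                  if (n < walk.total)
                          n = round_down(n, 2 * AES_BLOCK_SIZE);
      
                  /* ... process n bytes, two blocks at a time ... */
      
                  err = skcipher_walk_done(&walk, walk.nbytes - n);
          }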
      
      Fixes: 71e52c27 ("crypto: arm64/aes-ce-gcm - operate on two ...")
      Reported-by: Vakul Garg <vakul.garg@nxp.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Tested-by: Vakul Garg <vakul.garg@nxp.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      c2b24c36
    • crypto: aesni - Use unaligned loads from gcm_context_data · e5b954e8
      Dave Watson authored
      A regression was reported bisecting to 1476db2d
      ("Move HashKey computation from stack to gcm_context"). That diff
      moved the HashKey computation from the stack, which was explicitly
      aligned in the asm, to a struct provided by the C code, relying on
      AESNI_ALIGN_ATTR for alignment. It appears some compilers may not
      align this struct correctly, resulting in a crash on the movdqa
      instruction when attempting to encrypt or decrypt data.
      
      Fix by using unaligned loads for the HashKeys.  On modern
      hardware there is no perf difference between the unaligned and
      aligned loads.  All other accesses to gcm_context_data already use
      unaligned loads.
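      
      A user-space illustration of the underlying difference (not kernel
      code; the kernel fix swaps movdqa for movdqu in the asm): the
      aligned-load intrinsic compiles to movdqa and faults on a
      misaligned pointer, while the unaligned variant compiles to movdqu
      and tolerates any alignment.
      
          #include <emmintrin.h>
      
          __m128i load_hashkey(const void *p)
          {
                  /* movdqu: safe for any alignment, no penalty on
                   * modern CPUs */
                  return _mm_loadu_si128((const __m128i *)p);
                  /* _mm_load_si128() would emit movdqa and fault if p
                   * is not 16-byte aligned (the crash described above) */
          }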
      Reported-by: Mauro Rossi <issor.oruam@gmail.com>
      Fixes: 1476db2d ("Move HashKey computation from stack to gcm_context")
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Dave Watson <davejwatson@fb.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      e5b954e8
    • crypto: chtls - fix null dereference chtls_free_uld() · 65b2c12d
      Ganesh Goudar authored
      Call chtls_free_uld() only for an initialized cdev; this fixes a
      NULL dereference in chtls_free_uld().
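      
      The shape of the guard, as a sketch (variable naming approximate):
      
          /* only tear down a cdev that was actually initialized */
          if (cdev)
                  chtls_free_uld(cdev);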
      Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com>
      Signed-off-by: Atul Gupta <atul.gupta@chelsio.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      65b2c12d
    • crypto: arm64/sm4-ce - check for the right CPU feature bit · 7fa885e2
      Ard Biesheuvel authored
      ARMv8.2 specifies special instructions for the SM3 cryptographic
      hash and the SM4 symmetric cipher. While it is unlikely that a
      core would implement one and not the other, we should use the SM4
      instructions only if the SM4 CPU feature bit is set, but the code
      currently checks the SM3 feature bit instead. So fix that.
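      
      The gist of the one-line fix, as a sketch (module_cpu_feature_match()
      is the standard arm64 hook for feature-gated module loading; the
      init function name is approximate):
      
          -module_cpu_feature_match(SM3, sm4_ce_mod_init);
          +module_cpu_feature_match(SM4, sm4_ce_mod_init);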
      
      Fixes: e99ce921 ("crypto: arm64 - add support for SM4...")
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      7fa885e2
    • crypto: caam - fix DMA mapping direction for RSA forms 2 & 3 · f1bf9e60
      Horia Geantă authored
      The crypto engine needs some temporary locations in external
      memory to run RSA decrypt forms 2 and 3 (CRT).
      These are named "tmp1" and "tmp2" in the PDB.
      
      Update the DMA mapping direction of tmp1 and tmp2 from TO_DEVICE
      to BIDIRECTIONAL, since the engine needs both read and write
      access.
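      
      An illustrative sketch of the change (field and variable names
      approximate): map the scratch buffers with a direction that covers
      both engine reads and engine writes.
      
          pdb->tmp1_dma = dma_map_single(dev, key->tmp1, p_sz,
                                         DMA_BIDIRECTIONAL); /* was DMA_TO_DEVICE */
          pdb->tmp2_dma = dma_map_single(dev, key->tmp2, q_sz,
                                         DMA_BIDIRECTIONAL); /* was DMA_TO_DEVICE */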
      
      Cc: <stable@vger.kernel.org> # 4.13+
      Fixes: 52e26d77 ("crypto: caam - add support for RSA key form 2")
      Fixes: 4a651b12 ("crypto: caam - add support for RSA key form 3")
      Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      f1bf9e60
    • crypto: caam/qi - fix error path in xts setkey · ad876a18
      Horia Geantă authored
      The xts setkey callback returns 0 on some error paths.
      Fix this by returning -EINVAL instead.
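      
      A sketch of the fix pattern (label and flag names follow the
      4.12-era ablkcipher API; the surrounding code is approximate):
      
          badkey:
                  crypto_ablkcipher_set_flags(ablkcipher,
                                              CRYPTO_TFM_RES_BAD_KEY_LEN);
                  return -EINVAL; /* was: return 0 */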
      
      Cc: <stable@vger.kernel.org> # 4.12+
      Fixes: b189817c ("crypto: caam/qi - add ablkcipher and authenc algorithms")
      Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      ad876a18
    • crypto: caam/jr - fix descriptor DMA unmapping · cc98963d
      Horia Geantă authored
      The descriptor address needs to be swapped to CPU endianness
      before being DMA unmapped.
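      
      A hedged sketch (helper and field names approximate;
      caam_dma_to_cpu() is the driver's endianness accessor): convert
      the ring entry before handing it to the DMA API.
      
          dma_addr_t desc_dma = caam_dma_to_cpu(jrp->outring[hw_idx].desc);
      
          dma_unmap_single(dev, desc_dma, desc_size, DMA_TO_DEVICE);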
      
      Cc: <stable@vger.kernel.org> # 4.8+
      Fixes: 261ea058 ("crypto: caam - handle core endianness != caam endianness")
      Reported-by: Laurentiu Tudor <laurentiu.tudor@nxp.com>
      Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      cc98963d