  16 Mar, 2018 12 commits
    • crypto: arm64/sha256-neon - play nice with CONFIG_PREEMPT kernels · 1a3713c7
      Ard Biesheuvel authored
      Tweak the SHA256 update routines to invoke the SHA256 block transform
      block by block, to avoid excessive scheduling delays caused by the
      NEON algorithm running with preemption disabled.
      
      Also, remove a stale comment which no longer applies now that kernel
      mode NEON is actually disallowed in some contexts.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      1a3713c7
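      A minimal sketch of the resulting update path, assuming the usual arm64
      glue structure (sha256_base_do_update() and the kernel_neon_begin()/
      kernel_neon_end() pair are real kernel APIs; the 8-block chunk limit
      and the helper name are illustrative):

          /* Process at most a few blocks per NEON region so that
           * preemption is never disabled for long stretches. */
          static int sha256_update_neon_sketch(struct shash_desc *desc,
                                               const u8 *data, unsigned int len)
          {
                  while (len) {
                          unsigned int chunk = min(len, 8U * SHA256_BLOCK_SIZE);

                          kernel_neon_begin();
                          sha256_base_do_update(desc, data, chunk,
                                          (sha256_block_fn *)sha256_block_neon);
                          kernel_neon_end();

                          data += chunk;
                          len -= chunk;
                  }
                  return 0;
          }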
    • crypto: arm64/aes-blk - add 4 way interleave to CBC-MAC encrypt path · 870c163a
      Ard Biesheuvel authored
      CBC MAC is strictly sequential, and so the current AES code simply
      processes the input one block at a time. However, we are about to add
      yield support, which adds a bit of overhead, and which we prefer to
      align with other modes in terms of granularity (i.e., it is better to
      have all routines yield every 64 bytes and not have an exception for
      CBC MAC, which yields every 16 bytes).
      
      So unroll the loop by 4. We still cannot perform the AES algorithm in
      parallel, but we can at least merge the loads and stores.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      870c163a
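      The interleave itself lives in the arm64 assembly, but the shape of
      the transformation can be sketched in C; aes_encrypt_block() below is
      a hypothetical stand-in for the cipher, not a kernel API, while
      crypto_xor() is real:

          /* 4-way unrolled CBC-MAC inner loop: the four input loads are
           * merged up front, but the cipher invocations stay sequential,
           * since each one consumes the previous MAC value. */
          while (blocks >= 4) {
                  u8 b0[16], b1[16], b2[16], b3[16];

                  memcpy(b0, in +  0, 16);        /* merged loads */
                  memcpy(b1, in + 16, 16);
                  memcpy(b2, in + 32, 16);
                  memcpy(b3, in + 48, 16);
                  in += 64;

                  crypto_xor(mac, b0, 16); aes_encrypt_block(key, mac);
                  crypto_xor(mac, b1, 16); aes_encrypt_block(key, mac);
                  crypto_xor(mac, b2, 16); aes_encrypt_block(key, mac);
                  crypto_xor(mac, b3, 16); aes_encrypt_block(key, mac);

                  blocks -= 4;
          }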
    • crypto: arm64/aes-blk - add 4 way interleave to CBC encrypt path · a8f8a69e
      Ard Biesheuvel authored
      CBC encryption is strictly sequential, and so the current AES code
      simply processes the input one block at a time. However, we are
      about to add yield support, which adds a bit of overhead, and which
      we prefer to align with other modes in terms of granularity (i.e.,
      it is better to have all routines yield every 64 bytes and not have
      an exception for CBC encrypt, which yields every 16 bytes).
      
      So unroll the loop by 4. We still cannot perform the AES algorithm in
      parallel, but we can at least merge the loads and stores.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      a8f8a69e
    • crypto: arm64/aes-blk - remove configurable interleave · 55868b45
      Ard Biesheuvel authored
      The AES block mode implementation using Crypto Extensions or plain NEON
      was written before real hardware existed, and so its interleave factor
      was made build-time configurable (along with an option to instantiate
      all interleaved sequences inline rather than as subroutines).
      
      We ended up using INTERLEAVE=4 with inlining disabled for both flavors
      of the core AES routines, so let's stick with that, and remove the option
      to configure this at build time. This makes the code easier to modify,
      which is nice now that we're adding yield support.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      55868b45
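      For illustration, the knob being removed followed roughly this
      preprocessor pattern (a reconstruction; the real code is assembler
      macros in aes-modes.S, and the INTERLEAVE_INLINE name is assumed):

          #ifndef INTERLEAVE
          #define INTERLEAVE 4            /* the only value ever used */
          #endif

          #if INTERLEAVE == 4
                  /* emit the 4-way interleaved sequence */
          #elif INTERLEAVE == 2
                  /* emit a 2-way interleaved sequence */
          #else
                  /* emit a one-block-at-a-time sequence */
          #endif

          #ifdef INTERLEAVE_INLINE
                  /* expand each sequence inline rather than as a subroutine */
          #endif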
    • crypto: arm64/chacha20 - move kernel mode neon en/disable into loop · 4bf7e7a1
      Ard Biesheuvel authored
      When kernel mode NEON was first introduced on arm64, the preserve and
      restore of the userland NEON state was completely unoptimized, and
      involved saving all registers on each call to kernel_neon_begin(),
      and restoring them on each call to kernel_neon_end(). For this reason,
      the NEON crypto code that was introduced at the time keeps the NEON
      enabled throughout the execution of the crypto API methods, which may
      include calls back into the crypto API that could result in memory
      allocation or other actions that we should avoid when running with
      preemption disabled.
      
      Since then, we have optimized the kernel mode NEON handling, which now
      restores lazily (upon return to userland), and so the preserve action
      is only costly the first time it is called after entering the kernel.
      
      So let's put the kernel_neon_begin() and kernel_neon_end() calls around
      the actual invocations of the NEON crypto code, and run the remainder of
      the code with kernel mode NEON disabled (and preemption enabled).
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      4bf7e7a1
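      In the skcipher walk loop, the pattern described above looks roughly
      as follows (skcipher_walk_virt()/skcipher_walk_done() are real APIs;
      chacha20_doneon() stands for the glue helper that wraps the NEON asm,
      and the surrounding declarations are elided):

          err = skcipher_walk_virt(&walk, req, true);

          while (walk.nbytes > 0) {
                  unsigned int nbytes = walk.nbytes;

                  if (nbytes < walk.total)
                          nbytes = round_down(nbytes, walk.stride);

                  kernel_neon_begin();    /* preemption disabled here only */
                  chacha20_doneon(state, walk.dst.virt.addr,
                                  walk.src.virt.addr, nbytes);
                  kernel_neon_end();      /* preemptible again between chunks */

                  err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
          }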
    • crypto: arm64/aes-bs - move kernel mode neon en/disable into loop · 78ad7b08
      Ard Biesheuvel authored
      When kernel mode NEON was first introduced on arm64, the preserve and
      restore of the userland NEON state was completely unoptimized, and
      involved saving all registers on each call to kernel_neon_begin(),
      and restoring them on each call to kernel_neon_end(). For this reason,
      the NEON crypto code that was introduced at the time keeps the NEON
      enabled throughout the execution of the crypto API methods, which may
      include calls back into the crypto API that could result in memory
      allocation or other actions that we should avoid when running with
      preemption disabled.
      
      Since then, we have optimized the kernel mode NEON handling, which now
      restores lazily (upon return to userland), and so the preserve action
      is only costly the first time it is called after entering the kernel.
      
      So let's put the kernel_neon_begin() and kernel_neon_end() calls around
      the actual invocations of the NEON crypto code, and run the remainder of
      the code with kernel mode NEON disabled (and preemption enabled).
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      78ad7b08
    • crypto: arm64/aes-blk - move kernel mode neon en/disable into loop · 68338174
      Ard Biesheuvel authored
      When kernel mode NEON was first introduced on arm64, the preserve and
      restore of the userland NEON state was completely unoptimized, and
      involved saving all registers on each call to kernel_neon_begin(),
      and restoring them on each call to kernel_neon_end(). For this reason,
      the NEON crypto code that was introduced at the time keeps the NEON
      enabled throughout the execution of the crypto API methods, which may
      include calls back into the crypto API that could result in memory
      allocation or other actions that we should avoid when running with
      preemption disabled.
      
      Since then, we have optimized the kernel mode NEON handling, which now
      restores lazily (upon return to userland), and so the preserve action
      is only costly the first time it is called after entering the kernel.
      
      So let's put the kernel_neon_begin() and kernel_neon_end() calls around
      the actual invocations of the NEON crypto code, and run the remainder of
      the code with kernel mode NEON disabled (and preemption enabled).
      
      Note that this requires some reshuffling of the registers in the asm
      code, because the XTS routines can no longer rely on the registers to
      retain their contents between invocations.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      68338174
    • crypto: arm64/aes-ce-ccm - move kernel mode neon en/disable into loop · bd2ad885
      Ard Biesheuvel authored
      When kernel mode NEON was first introduced on arm64, the preserve and
      restore of the userland NEON state was completely unoptimized, and
      involved saving all registers on each call to kernel_neon_begin(),
      and restoring them on each call to kernel_neon_end(). For this reason,
      the NEON crypto code that was introduced at the time keeps the NEON
      enabled throughout the execution of the crypto API methods, which may
      include calls back into the crypto API that could result in memory
      allocation or other actions that we should avoid when running with
      preemption disabled.
      
      Since then, we have optimized the kernel mode NEON handling, which now
      restores lazily (upon return to userland), and so the preserve action
      is only costly the first time it is called after entering the kernel.
      
      So let's put the kernel_neon_begin() and kernel_neon_end() calls around
      the actual invocations of the NEON crypto code, and run the remainder of
      the code with kernel mode NEON disabled (and preemption enabled).
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      bd2ad885
    • crypto: testmgr - add a new test case for CRC-T10DIF · 702202f1
      Ard Biesheuvel authored
      In order to be able to test yield support under preempt, add a test
      vector for CRC-T10DIF that is long enough to require multiple
      iterations of the primary loop of the accelerated x86 and arm64
      implementations (and thus allow preemption between them).
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      702202f1
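      The shape of such an entry in the crypto/testmgr.h test vector table,
      with placeholder values (the real vector carries a few hundred bytes
      of fixed data and its precomputed CRC):

          static const struct hash_testvec crct10dif_tv_template[] = {
                  /* ... existing short vectors ... */
                  {
                          .plaintext = "... several hundred bytes, enough to "
                                       "span multiple iterations of the "
                                       "accelerated inner loop ...",
                          .psize     = 319,           /* placeholder length */
                          .digest    = "\x00\x00",    /* placeholder, not a real CRC */
                  },
          };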
    • crypto: ecc - Remove stack VLA usage · 14de5211
      Kees Cook authored
      On the quest to remove all VLAs from the kernel[1], this switches to
      a pair of kmalloc regions instead of using the stack. This also moves
      the get_random_bytes() call to after all allocations (and drops the
      needless "nbytes" variable).
      
      [1] https://lkml.org/lkml/2018/3/7/621
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Reviewed-by: Tudor Ambarus <tudor.ambarus@microchip.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      14de5211
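      A sketch of the transformation described above (names illustrative;
      kmalloc(), kfree() and get_random_bytes() are the real APIs). Before,
      the buffers were stack VLAs sized by a runtime digit count, e.g.
      "u64 priv[ndigits];". After:

          static int ecc_gen_sketch(unsigned int ndigits)
          {
                  size_t len = ndigits * sizeof(u64);
                  u64 *priv, *rand;
                  int ret = -ENOMEM;

                  priv = kmalloc(len, GFP_KERNEL);
                  rand = kmalloc(len, GFP_KERNEL);
                  if (!priv || !rand)
                          goto out;

                  /* issued only after all allocations, as in the patch */
                  get_random_bytes(rand, len);

                  /* ... derive the private key from rand into priv ... */
                  ret = 0;
          out:
                  kfree(priv);
                  kfree(rand);
                  return ret;
          }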
    • crypto: ccp - Validate buffer lengths for copy operations · b698a9f4
      Gary R Hook authored
      The CCP driver copies data between scatter/gather lists and DMA buffers.
      The length of the requested copy operation must be checked against
      the available destination buffer length.
      Reported-by: Maciej S. Szmigiero <mail@maciej.szmigiero.name>
      Signed-off-by: Gary R Hook <gary.hook@amd.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      b698a9f4
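      A hedged sketch of the kind of check the fix adds; the helper below is
      illustrative, while sg_copy_to_buffer()/sg_nents() are real kernel
      APIs:

          /* Refuse to copy more than the destination DMA buffer can hold,
           * instead of silently overrunning it. */
          static int ccp_copy_from_sg_sketch(void *dma_buf, size_t dma_buf_len,
                                             struct scatterlist *sg,
                                             size_t copy_len)
          {
                  if (copy_len > dma_buf_len)
                          return -EINVAL;

                  sg_copy_to_buffer(sg, sg_nents(sg), dma_buf, copy_len);
                  return 0;
          }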
    • crypto: hash - Prevent use of req->result in ahash update · 3d053d53
      Kamil Konieczny authored
      Prevent improper use of the req->result field in the ahash update,
      init, export and import functions in driver code. A driver should use
      the ahash request context if it needs to save internal state.
      Signed-off-by: Kamil Konieczny <k.konieczny@partner.samsung.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      3d053d53
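      A sketch of the rule in driver terms (ahash_request_ctx() is the real
      accessor; the driver names and context layout are illustrative):

          struct mydrv_reqctx {
                  u8 state[SHA256_DIGEST_SIZE];   /* intermediate state */
          };

          static int mydrv_update_sketch(struct ahash_request *req)
          {
                  struct mydrv_reqctx *rctx = ahash_request_ctx(req);

                  /* Do NOT touch req->result here: for update/init/export/
                   * import the caller may not provide a result buffer at
                   * all. Accumulate into the request context instead. */

                  /* ... hash req->src into rctx->state ... */
                  return 0;
          }

          static int mydrv_final_sketch(struct ahash_request *req)
          {
                  struct mydrv_reqctx *rctx = ahash_request_ctx(req);

                  /* req->result is valid (and used) only at finalization */
                  memcpy(req->result, rctx->state, SHA256_DIGEST_SIZE);
                  return 0;
          }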