1. 20 Oct, 2023 6 commits
    • crypto: x86/sha256 - implement ->digest for sha256 · fdcac2dd
      Eric Biggers authored
      Implement a ->digest function for sha256-ssse3, sha256-avx, sha256-avx2,
      and sha256-ni.  This improves the performance of crypto_shash_digest()
      with these algorithms by reducing the number of indirect calls that are
      made.
      
      For now, don't bother with this for sha224, since sha224 is rarely used.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
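The effect of providing ->digest can be illustrated with a toy, self-contained model (the `toy_*` names are hypothetical, not the real shash API): when the ops table supplies a one-shot digest, the generic front end makes one indirect call instead of three.

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of an shash-style ops table; names are hypothetical. */
struct toy_hash_ops {
	void (*init)(unsigned int *state);
	void (*update)(unsigned int *state, const unsigned char *d, size_t n);
	void (*final)(unsigned int *state, unsigned int *out);
	/* optional one-shot path; NULL if the algorithm lacks it */
	void (*digest)(const unsigned char *d, size_t n, unsigned int *out);
};

static int indirect_calls;	/* counts calls made through the ops table */

static void toy_init(unsigned int *s)
{
	indirect_calls++;
	*s = 0;
}

static void toy_update(unsigned int *s, const unsigned char *d, size_t n)
{
	indirect_calls++;
	while (n--)
		*s += *d++;
}

static void toy_final(unsigned int *s, unsigned int *out)
{
	indirect_calls++;
	*out = *s;
}

static void toy_digest(const unsigned char *d, size_t n, unsigned int *out)
{
	unsigned int s = 0;

	indirect_calls++;
	while (n--)
		s += *d++;
	*out = s;
}

/* Generic front end: prefer ->digest, fall back to init/update/final. */
static void do_digest(const struct toy_hash_ops *ops,
		      const unsigned char *d, size_t n, unsigned int *out)
{
	if (ops->digest) {
		ops->digest(d, n, out);
	} else {
		unsigned int state;

		ops->init(&state);
		ops->update(&state, d, n);
		ops->final(&state, out);
	}
}
```

In this toy model the one-shot path costs one indirect call versus three; the real commit gets a similar reduction for sha256 on x86.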
    • crypto: arm64/sha2-ce - implement ->digest for sha256 · 1efcbf0e
      Eric Biggers authored
      Implement a ->digest function for sha256-ce.  This improves the
      performance of crypto_shash_digest() with this algorithm by reducing the
      number of indirect calls that are made.  This only adds ~112 bytes of
      code, mostly for the inlined init, as the finup function is tail-called.
      
      For now, don't bother with this for sha224, since sha224 is rarely used.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: shash - fold shash_digest_unaligned() into crypto_shash_digest() · 2e02c25a
      Eric Biggers authored
      Fold shash_digest_unaligned() into its only remaining caller.  Also,
      avoid a redundant check of CRYPTO_TFM_NEED_KEY by replacing the call to
      crypto_shash_init() with shash->init(desc).  Finally, replace
      shash_update_unaligned() + shash_final_unaligned() with
      shash_finup_unaligned() which does exactly that.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: shash - optimize the default digest and finup · 313a4074
      Eric Biggers authored
      For an shash algorithm that doesn't implement ->digest, currently
      crypto_shash_digest() with aligned input makes 5 indirect calls: 1 to
      shash_digest_unaligned(), 1 to ->init, 2 to ->update ('alignmask + 1'
      bytes, then the rest), then 1 to ->final.  This is true even if the
      algorithm implements ->finup.  This is caused by an unnecessary fallback
      to code meant to handle unaligned inputs.  In fact,
      crypto_shash_digest() already does the needed alignment check earlier.
      Therefore, optimize the number of indirect calls for aligned inputs to 3
      when the algorithm implements ->finup.  It remains at 5 when the
      algorithm implements neither ->finup nor ->digest.
      
      Similarly, for an shash algorithm that doesn't implement ->finup,
      currently crypto_shash_finup() with aligned input makes 4 indirect
      calls: 1 to shash_finup_unaligned(), 2 to ->update, and
      1 to ->final.  Optimize this to 3 calls.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
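The fallback chain this commit optimizes can be sketched in a simplified, self-contained model (hypothetical `toy_*` names; the toy omits the alignment wrappers, so the call counts are lower than the commit's): the default digest should reuse ->finup when the algorithm provides it, rather than always going through update + final.

```c
#include <assert.h>
#include <stddef.h>

/* Toy sketch of an shash-style fallback chain; names are hypothetical. */
struct toy_ops {
	void (*init)(unsigned int *state);
	void (*update)(unsigned int *state, const unsigned char *d, size_t n);
	void (*final)(unsigned int *state, unsigned int *out);
	/* optional combined update+final; NULL if not implemented */
	void (*finup)(unsigned int *state, const unsigned char *d, size_t n,
		      unsigned int *out);
};

static int indirect_calls;	/* counts calls made through the ops table */

static void toy_init(unsigned int *s)
{
	indirect_calls++;
	*s = 0;
}

static void toy_update(unsigned int *s, const unsigned char *d, size_t n)
{
	indirect_calls++;
	while (n--)
		*s += *d++;
}

static void toy_final(unsigned int *s, unsigned int *out)
{
	indirect_calls++;
	*out = *s;
}

static void toy_finup(unsigned int *s, const unsigned char *d, size_t n,
		      unsigned int *out)
{
	indirect_calls++;
	while (n--)
		*s += *d++;
	*out = *s;
}

/* finup: one indirect call if implemented, else update + final (two). */
static void do_finup(const struct toy_ops *ops, unsigned int *state,
		     const unsigned char *d, size_t n, unsigned int *out)
{
	if (ops->finup)
		ops->finup(state, d, n, out);
	else {
		ops->update(state, d, n);
		ops->final(state, out);
	}
}

/* Default digest: init, then finup -- so an algorithm that implements
 * ->finup is no longer forced through the update + final path. */
static void do_digest(const struct toy_ops *ops,
		      const unsigned char *d, size_t n, unsigned int *out)
{
	unsigned int state;

	ops->init(&state);
	do_finup(ops, &state, d, n, out);
}
```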
    • crypto: xts - use 'spawn' for underlying single-block cipher · bb40d326
      Eric Biggers authored
      Since commit adad556e ("crypto: api - Fix built-in testing
      dependency failures"), the following warning appears when booting an
      x86_64 kernel that is configured with
      CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y and CONFIG_CRYPTO_AES_NI_INTEL=y,
      even when CONFIG_CRYPTO_XTS=y and CONFIG_CRYPTO_AES=y:
      
          alg: skcipher: skipping comparison tests for xts-aes-aesni because xts(ecb(aes-generic)) is unavailable
      
      This is caused by an issue in the xts template: it allocates an
      "aes" single-block cipher without declaring a dependency on it via the
      crypto_spawn mechanism.  The issue was exposed by the above commit
      because it reversed the order in which the algorithms are tested.
      
      Specifically, when "xts(ecb(aes-generic))" is instantiated and tested
      during the comparison tests for "xts-aes-aesni", the "xts" template
      allocates an "aes" crypto_cipher for encrypting tweaks.  This resolves
      to "aes-aesni".  (Getting "aes-aesni" instead of "aes-generic" here is a
      bit weird, but it's apparently intended.)  Due to the above-mentioned
      commit, the testing of "aes-aesni", and the finalization of its
      registration, now happens at this point instead of before.  At the end
      of that, crypto_remove_spawns() unregisters all algorithm instances that
      depend on a lower-priority "aes" implementation such as "aes-generic"
      but that do not depend on "aes-aesni".  However, because "xts" does not
      use the crypto_spawn mechanism for its "aes", its dependency on
      "aes-aesni" is not recognized by crypto_remove_spawns().  Thus,
      crypto_remove_spawns() unexpectedly unregisters "xts(ecb(aes-generic))".
      
      Fix this issue by making the "xts" template use the crypto_spawn
      mechanism for its "aes" dependency, like what other templates do.
      
      Note, this fix could be applied as far back as commit f1c131b4
      ("crypto: xts - Convert to skcipher").  However, the issue only got
      exposed by the much more recent changes to how the crypto API runs the
      self-tests, so there should be no need to backport this to very old
      kernels.  Also, an alternative fix would be to flip the list iteration
      order in crypto_start_tests() to restore the original testing order.
      I'm thinking we should do that too, since the original order seems more
      natural, but it shouldn't be relied on for correctness.
      
      Fixes: adad556e ("crypto: api - Fix built-in testing dependency failures")
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: virtio - handle config changed by work queue · 9da27466
      zhenwei pi authored
      MST pointed out that the config change callback is also handled
      incorrectly in this driver: it takes a mutex from interrupt context.
      
      Handle the config change from a work queue instead.
      
      Cc: Gonglei (Arei) <arei.gonglei@huawei.com>
      Cc: Halil Pasic <pasic@linux.ibm.com>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
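The defer-to-process-context pattern behind this fix can be sketched in plain, self-contained C, with a POSIX thread standing in for the kernel workqueue (the `toy_*`/`config_*` names are hypothetical, not the driver's API):

```c
#include <assert.h>
#include <pthread.h>

/* Sketch of deferring mutex-taking work out of an interrupt-style
 * callback; a plain thread models the kernel work item. */
static pthread_mutex_t config_lock = PTHREAD_MUTEX_INITIALIZER;
static int config_generation;

/* Runs in schedulable (process) context, where sleeping locks are legal. */
static void *config_work(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&config_lock);
	config_generation++;
	pthread_mutex_unlock(&config_lock);
	return NULL;
}

/* The "interrupt" callback must not sleep, so it only queues the work
 * instead of taking config_lock itself. */
static int toy_config_changed(pthread_t *worker)
{
	return pthread_create(worker, NULL, config_work, NULL);
}
```

The kernel equivalent queues a pre-initialized work item rather than spawning a thread, but the division of labor is the same: the atomic-context callback only schedules; the work function takes the mutex.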
  2. 14 Oct, 2023 1 commit
    • crypto: hisilicon/qm - alloc buffer to set and get xqc · 5b90073d
      Weili Qian authored
      Previously, temporarily allocated memory was used to set or get the
      xqc information, and the driver released this memory as soon as a
      hardware mailbox operation exceeded the driver's waiting time.
      However, the hardware does not cancel the operation on timeout, so
      it may still write data to the already-released memory.
      
      Therefore, reserve memory for the xqc configuration when the driver
      is bound to a device, and use that reserved memory for all
      subsequent xqc configuration, preventing the hardware from accessing
      released memory.
      Signed-off-by: Weili Qian <qianweili@huawei.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
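The reserved-buffer pattern can be sketched as follows (hypothetical `toy_*` names; plain `calloc` stands in for DMA-coherent allocation): the buffer's lifetime is tied to the device binding, not to a single mailbox operation, so a late hardware write can never target freed memory.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy sketch of reserving the xqc buffer at bind time; names are
 * hypothetical, not the hisilicon/qm driver's API. */
struct toy_qm {
	unsigned char *xqc_buf;	/* lives for the whole device binding */
	size_t xqc_len;
};

/* Allocate once when the driver binds to the device. */
static int toy_qm_bind(struct toy_qm *qm, size_t len)
{
	qm->xqc_buf = calloc(1, len);
	if (!qm->xqc_buf)
		return -1;
	qm->xqc_len = len;
	return 0;
}

/* Even if this mailbox operation timed out, the buffer would not be
 * freed here, so a late hardware write cannot land in released memory. */
static int toy_qm_set_xqc(struct toy_qm *qm, const void *cfg, size_t len)
{
	if (len > qm->xqc_len)
		return -1;
	memcpy(qm->xqc_buf, cfg, len);
	return 0;
}

/* The buffer is released only when the device binding goes away. */
static void toy_qm_unbind(struct toy_qm *qm)
{
	free(qm->xqc_buf);
	qm->xqc_buf = NULL;
	qm->xqc_len = 0;
}
```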
  3. 13 Oct, 2023 27 commits
  4. 12 Oct, 2023 3 commits
  5. 05 Oct, 2023 3 commits