1. 12 Apr, 2024 11 commits
  2. 05 Apr, 2024 13 commits
    • crypto: jitter - Replace http with https · 4ad27a8b
      Thorsten Blum authored
      The PDF is also available via https.
      Signed-off-by: Thorsten Blum <thorsten.blum@toblux.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • Thorsten Blum · 8fa5f4f0
    • crypto: x86/aes-xts - wire up VAES + AVX10/512 implementation · aa2197f5
      Eric Biggers authored
      Add an AES-XTS implementation "xts-aes-vaes-avx10_512" for x86_64 CPUs
      with the VAES, VPCLMULQDQ, and either AVX10/512 or AVX512BW + AVX512VL
      extensions.  This implementation uses zmm registers to operate on four
      AES blocks at a time.  The assembly code is instantiated using a macro
      so that most of the source code is shared with other implementations.
      
      To avoid downclocking on older Intel CPU models, an exclusion list is
      used to prevent this 512-bit implementation from being used by default
      on some CPU models.  They will use xts-aes-vaes-avx10_256 instead.  For
      now, this exclusion list is simply coded into aesni-intel_glue.c.  It
      may make sense to eventually move it into a more central location.
      
      xts-aes-vaes-avx10_512 is slightly faster than xts-aes-vaes-avx10_256 on
      some current CPUs.  E.g., on AMD Zen 4, AES-256-XTS decryption
      throughput increases by 13% with 4096-byte inputs, or 14% with 512-byte
      inputs.  On Intel Sapphire Rapids, AES-256-XTS decryption throughput
      increases by 2% with 4096-byte inputs, or 3% with 512-byte inputs.
      
      Future CPUs may provide stronger 512-bit support, in which case a larger
      benefit should be seen.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: x86/aes-xts - wire up VAES + AVX10/256 implementation · ee63fea0
      Eric Biggers authored
      Add an AES-XTS implementation "xts-aes-vaes-avx10_256" for x86_64 CPUs
      with the VAES, VPCLMULQDQ, and either AVX10/256 or AVX512BW + AVX512VL
      extensions.  This implementation avoids using zmm registers, instead
      using ymm registers to operate on two AES blocks at a time.  The
      assembly code is instantiated using a macro so that most of the source
      code is shared with other implementations.
      
      This is the optimal implementation on CPUs that support VAES and AVX512
      but where the zmm registers should not be used due to downclocking
      effects, for example Intel's Ice Lake.  It should also be the optimal
      implementation on future CPUs that support AVX10/256 but not AVX10/512.
      
      The performance is slightly better than that of xts-aes-vaes-avx2, which
      uses the same 256-bit vector length, due to factors such as being able
      to use ymm16-ymm31 to cache the AES round keys, and being able to use
      the vpternlogd instruction to do XORs more efficiently.  For example, on
      Ice Lake, the throughput of decrypting 4096-byte messages with
      AES-256-XTS is 6.6% higher with xts-aes-vaes-avx10_256 than with
      xts-aes-vaes-avx2.  While this is a small improvement, it is
      straightforward to provide this implementation (xts-aes-vaes-avx10_256)
      as long as we are providing xts-aes-vaes-avx2 and xts-aes-vaes-avx10_512
      anyway, due to the way the _aes_xts_crypt macro is structured.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: x86/aes-xts - wire up VAES + AVX2 implementation · e787060b
      Eric Biggers authored
      Add an AES-XTS implementation "xts-aes-vaes-avx2" for x86_64 CPUs with
      the VAES, VPCLMULQDQ, and AVX2 extensions, but not AVX512 or AVX10.
      This implementation uses ymm registers to operate on two AES blocks at a
      time.  The assembly code is instantiated using a macro so that most of
      the source code is shared with other implementations.
      
      This is the optimal implementation on AMD Zen 3.  It should also be the
      optimal implementation on Intel Alder Lake, which similarly supports
      VAES but not AVX512.  Comparing to xts-aes-aesni-avx on Zen 3,
      xts-aes-vaes-avx2 provides 70% higher AES-256-XTS decryption throughput
      with 4096-byte messages, or 23% higher with 512-byte messages.
      
      A large improvement is also seen with CPUs that do support AVX512 (e.g.,
      98% higher AES-256-XTS decryption throughput on Ice Lake with 4096-byte
      messages), though the following patches add AVX512 optimized
      implementations to get a bit more performance on those CPUs.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: x86/aes-xts - wire up AESNI + AVX implementation · 996f4dcb
      Eric Biggers authored
      Add an AES-XTS implementation "xts-aes-aesni-avx" for x86_64 CPUs that
      have the AES-NI and AVX extensions but not VAES.  It's similar to the
      existing xts-aes-aesni in that it uses xmm registers to operate on one AES
      block at a time.  It differs from xts-aes-aesni in the following ways:
      
      - It uses the VEX-coded (non-destructive) instructions from AVX.
        This improves performance slightly.
      - It incorporates some additional optimizations such as interleaving the
        tweak computation with AES en/decryption, handling single-page
        messages more efficiently, and caching the first round key.
      - It supports only 64-bit (x86_64).
      - It's generated by an assembly macro that will also be used to generate
        VAES-based implementations.
      
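      The tweak computation being interleaved here is the standard XTS step of multiplying the current tweak by x in GF(2^128). A minimal sketch in plain C, not the kernel's optimized assembly (assumes a little-endian host):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Advance an XTS tweak by one block: multiply by x in GF(2^128),
 * reducing modulo x^128 + x^7 + x^2 + x + 1 (the 0x87 constant).
 * Illustrative only; the kernel interleaves this with AES rounds. */
static void xts_next_tweak(uint8_t out[16], const uint8_t in[16])
{
    uint64_t lo, hi;

    memcpy(&lo, in, 8);        /* little-endian host assumed */
    memcpy(&hi, in + 8, 8);

    uint64_t carry = hi >> 63; /* bit shifted out of the top */

    hi = (hi << 1) | (lo >> 63);
    lo = (lo << 1) ^ (carry ? 0x87 : 0);

    memcpy(out, &lo, 8);
    memcpy(out + 8, &hi, 8);
}
```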
      The performance improvement over xts-aes-aesni varies from small to
      large, depending on the CPU and other factors such as the size of the
      messages en/decrypted.  For example, the following increases in
      AES-256-XTS decryption throughput are seen on the following CPUs:
      
          CPU                   | 4096-byte messages | 512-byte messages |
          ----------------------+--------------------+-------------------+
          Intel Skylake         |        6%          |       31%         |
          Intel Cascade Lake    |        4%          |       26%         |
          AMD Zen 1             |        61%         |       73%         |
          AMD Zen 2             |        36%         |       59%         |
      
      (The above CPUs don't support VAES, so they can't use VAES instead.)
      
      While this isn't as large an improvement as what VAES provides, this
      still seems worthwhile.  This implementation is fairly easy to provide
      based on the assembly macro that's needed for VAES anyway, and it will
      be the best implementation on a large number of CPUs (very roughly, the
      CPUs launched by Intel and AMD from 2011 to 2018).
      
      This makes the existing xts-aes-aesni *mostly* obsolete.  For now, leave
      it in place to support 32-bit kernels and also CPUs like Intel Westmere
      that support AES-NI but not AVX.  (We could potentially remove it anyway
      and just rely on the indirect acceleration via ecb-aes-aesni in those
      cases, but that change will need to be considered separately.)
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: x86/aes-xts - add AES-XTS assembly macro for modern CPUs · d6371688
      Eric Biggers authored
      Add an assembly file aes-xts-avx-x86_64.S which contains a macro that
      expands into AES-XTS implementations for x86_64 CPUs that support at
      least AES-NI and AVX, optionally also taking advantage of VAES,
      VPCLMULQDQ, and AVX512 or AVX10.
      
      This patch doesn't expand the macro at all.  Later patches will do so,
      adding each implementation individually so that the motivation and use
      case for each individual implementation can be fully presented.
      
      The file also provides a function aes_xts_encrypt_iv() which handles the
      encryption of the IV (tweak), using AES-NI and AVX.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • x86: add kconfig symbols for assembler VAES and VPCLMULQDQ support · 7d4700d1
      Eric Biggers authored
      Add config symbols AS_VAES and AS_VPCLMULQDQ that expose whether the
      assembler supports the vector AES and carryless multiplication
      cryptographic extensions.
      Reviewed-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: ecdh - explicitly zeroize private_key · 73e5984e
      Joachim Vandersmissen authored
      private_key is overwritten with the key parameter passed in by the
      caller (if present), or alternatively a newly generated private key.
      However, it is possible that the caller provides a key (or the newly
      generated key) which is shorter than the previous key. In that
      scenario, some key material from the previous key would not be
      overwritten. The easiest solution is to explicitly zeroize the entire
      private_key array first.
      
      Note that this patch slightly changes the behavior of this function:
      previously, if the ecc_gen_privkey failed, the old private_key would
      remain. Now, the private_key is always zeroized. This behavior is
      consistent with the case where params.key is set and ecc_is_key_valid
      fails.
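      The fix boils down to wiping the full fixed-size buffer before installing a possibly shorter key. A userspace sketch of the pattern (buffer size and names hypothetical; the kernel uses memzero_explicit() on the real private_key array):

```c
#include <assert.h>
#include <string.h>

#define PRIVKEY_MAX 66  /* hypothetical maximum key size in bytes */

/* Zeroize the whole buffer first so a shorter new key cannot leave
 * trailing bytes of the previous key behind. */
static void install_private_key(unsigned char priv[PRIVKEY_MAX],
                                const unsigned char *key, size_t key_len)
{
    memset(priv, 0, PRIVKEY_MAX);   /* stand-in for memzero_explicit() */
    if (key)
        memcpy(priv, key, key_len);
}
```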
      Signed-off-by: Joachim Vandersmissen <git@jvdsn.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: fips - Remove the now superfluous sentinel element from ctl_table array · 5adf213c
      Joel Granados authored
      This commit comes at the tail end of a greater effort to remove the
      empty elements at the end of the ctl_table arrays (sentinels), which
      will reduce the overall build-time size of the kernel and run-time
      memory bloat by ~64 bytes per sentinel (further information:
      https://lore.kernel.org/all/ZO5Yx5JFogGi%2FcBo@bombadil.infradead.org/)
      
      Remove sentinel from crypto_sysctl_table
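      Illustratively, dropping the sentinel trades a zeroed terminator entry for an explicit element count (entry names are examples, not the exact crypto_sysctl_table contents):

```c
#include <assert.h>
#include <stddef.h>

struct ctl_entry { const char *procname; };  /* stand-in for struct ctl_table */

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

/* Before: a zeroed sentinel terminated the array, costing one extra
 * entry (~64 bytes for the real struct ctl_table) per table. */
static const struct ctl_entry with_sentinel[] = {
    { "fips_enabled" },
    { NULL },                       /* sentinel */
};

/* After: no sentinel; registration takes ARRAY_SIZE(table) instead
 * of scanning for the terminator. */
static const struct ctl_entry no_sentinel[] = {
    { "fips_enabled" },
};
```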
      Signed-off-by: Joel Granados <j.granados@samsung.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: jitter - Use kvfree_sensitive() to fix Coccinelle warning · 6e61ee1c
      Thorsten Blum authored
      Replace memzero_explicit() and kvfree() with kvfree_sensitive() to fix
      the following Coccinelle/coccicheck warning reported by
      kfree_sensitive.cocci:
      
      	WARNING opportunity for kfree_sensitive/kvfree_sensitive
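      The combined helper is equivalent to wiping then freeing in one call. A userspace approximation of the pattern (the kernel's kvfree_sensitive() additionally handles both kmalloc and vmalloc memory):

```c
#include <assert.h>
#include <stdlib.h>

/* Zeroize in a way the compiler must not optimize away
 * (memzero_explicit() analogue). */
static void explicit_zeroize(void *p, size_t len)
{
    volatile unsigned char *v = p;

    while (len--)
        *v++ = 0;
}

/* What kvfree_sensitive(p, len) folds into one call: wipe, then free. */
static void free_sensitive(void *p, size_t len)
{
    if (p) {
        explicit_zeroize(p, len);
        free(p);
    }
}
```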
      Signed-off-by: Thorsten Blum <thorsten.blum@toblux.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • dt-bindings: crypto: ti,omap-sham: Convert to dtschema · a00dce05
      Animesh Agarwal authored
      Convert the OMAP SoC SHA crypto Module bindings to DT Schema.
      Signed-off-by: Animesh Agarwal <animeshagarwal28@gmail.com>
      Reviewed-by: Conor Dooley <conor.dooley@microchip.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: qat - Avoid -Wflex-array-member-not-at-end warnings · 140e4c85
      Gustavo A. R. Silva authored
      -Wflex-array-member-not-at-end is coming in GCC-14, and we are getting
      ready to enable it globally.
      
      Use the `__struct_group()` helper to separate the flexible array
      from the rest of the members in flexible `struct qat_alg_buf_list`,
      through tagged `struct qat_alg_buf_list_hdr`, and avoid embedding the
      flexible-array member in the middle of `struct qat_alg_fixed_buf_list`.
      
      Also, use `container_of()` whenever we need to retrieve a pointer to
      the flexible structure.
      
      So, with these changes, fix the following warnings:
      drivers/crypto/intel/qat/qat_common/qat_bl.h:25:33: warning: structure containing a flexible array member is not at the end of another structure [-Wflex-array-member-not-at-end]
      (the same warning is emitted eight times at this location)
      
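      The `__struct_group()` helper roughly expands to a union of a tagged header struct with the leading members, which other structs can then embed without dragging in the flexible array. A userspace approximation (field names are illustrative, not the real qat_bl.h layout):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Tagged header covering the leading members. */
struct buf_list_hdr {
    uint64_t resrvd;
    uint32_t num_bufs;
};

/* Roughly what __struct_group() produces: the same members are
 * addressable both directly and through the tagged header. */
struct buf_list {
    union {
        struct buf_list_hdr hdr;
        struct {
            uint64_t resrvd;
            uint32_t num_bufs;
        };
    };
    uint64_t buffers[];            /* flexible array stays at the end */
};

/* A fixed-size variant can now embed just the header, avoiding a
 * flexible-array member in the middle of the struct. */
struct fixed_buf_list {
    struct buf_list_hdr sgl_hdr;
    uint64_t buffers[4];
};
```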
      Link: https://github.com/KSPP/linux/issues/202
      Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
      Acked-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  3. 02 Apr, 2024 16 commits
    • hwrng: mxc-rnga - Drop usage of platform_driver_probe() · a9a72140
      Uwe Kleine-König authored
      There are considerations to drop platform_driver_probe() as a concept
      that isn't relevant any more today. It comes with added complexity
      that makes many users get it wrong. (E.g. this driver should have
      marked the driver struct with __refdata.)
      
      Convert the driver to the more usual module_platform_driver().
      
      This fixes a W=1 build warning:
      
      	WARNING: modpost: drivers/char/hw_random/mxc-rnga: section mismatch in reference: mxc_rnga_driver+0x10 (section: .data) -> mxc_rnga_remove (section: .exit.text)
      
      with CONFIG_HW_RANDOM_MXC_RNGA=m.
      Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: x86/aesni - Update aesni_set_key() to return void · e3299a4c
      Chang S. Bae authored
      The aesni_set_key() implementation has no error case, yet its prototype
      specifies to return an error code.
      
      Modify the function prototype to return void and adjust the related code.
      Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
      Reviewed-by: Eric Biggers <ebiggers@google.com>
      Cc: Eric Biggers <ebiggers@kernel.org>
      Cc: Ard Biesheuvel <ardb@kernel.org>
      Cc: linux-crypto@vger.kernel.org
      Cc: x86@kernel.org
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: x86/aesni - Rearrange AES key size check · d50b35f0
      Chang S. Bae authored
      aes_expandkey() already includes an AES key size check. If AES-NI is
      unusable, invoke the function without the size check.
      
      Also, use aes_check_keylen() instead of open-coding the check.
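      For reference, the key-length check amounts to accepting exactly the three AES key sizes. A minimal sketch of the contract that aes_check_keylen() enforces (not the kernel code itself):

```c
#include <assert.h>
#include <errno.h>

/* AES accepts exactly 128-, 192-, or 256-bit keys; everything else
 * is rejected. */
static int check_aes_keylen(unsigned int keylen)
{
    switch (keylen) {
    case 16:
    case 24:
    case 32:
        return 0;
    default:
        return -EINVAL;
    }
}
```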
      Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
      Cc: Eric Biggers <ebiggers@kernel.org>
      Cc: Ard Biesheuvel <ardb@kernel.org>
      Cc: linux-crypto@vger.kernel.org
      Cc: x86@kernel.org
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: bcm - Fix pointer arithmetic · 2b3460cb
      Aleksandr Mishin authored
      In spu2_dump_omd(), the value of ptr is increased by ciph_key_len
      instead of hash_iv_len, which could lead to reading beyond the
      buffer boundaries.
      Fix this bug by changing ciph_key_len to hash_iv_len.
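      The shape of the bug in miniature (lengths and layout illustrative): the cursor must advance by the length of the field just dumped, not by an unrelated length.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Walk past a cipher key and a hash IV laid out back to back.
 * Advancing by the wrong length (as the bug did) leaves the cursor
 * past or short of the next field. Illustrative sketch only. */
static const uint8_t *skip_omd_fields(const uint8_t *ptr,
                                      size_t ciph_key_len,
                                      size_t hash_iv_len)
{
    /* ... dump cipher key bytes ... */
    ptr += ciph_key_len;   /* skip the cipher key */
    /* ... dump hash IV bytes ... */
    ptr += hash_iv_len;    /* the fix: was ciph_key_len */
    return ptr;
}
```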
      
      Found by Linux Verification Center (linuxtesting.org) with SVACE.
      
      Fixes: 9d12ba86 ("crypto: brcm - Add Broadcom SPU driver")
      Signed-off-by: Aleksandr Mishin <amishin@t-argos.ru>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: iaa - Fix some errors in IAA documentation · 616ce45c
      Jerry Snitselaar authored
      This cleans up the following issues I ran into when trying to use the
      scripts and commands in the iaa-crypto.rst document.
      
      - Fix incorrect arguments being passed to accel-config
        config-wq.
          - Replace --device_name with --driver-name.
          - Replace --driver_name with --driver-name.
          - Replace --size with --wq-size.
          - Add missing --priority argument.
      - Add missing accel-config config-engine command after the
        config-wq commands.
      - Fix wq name passed to accel-config config-wq.
      - Add rmmod/modprobe of iaa_crypto to the script that disables,
        then enables, all devices and workqueues, to avoid enable-wq
        failing with -EEXIST when trying to register the compression
        algorithm.
      - Fix device name in cases where iaa was used instead of iax.
      
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: linux-crypto@vger.kernel.org
      Cc: linux-doc@vger.kernel.org
      Signed-off-by: Jerry Snitselaar <jsnitsel@redhat.com>
      Reviewed-by: Tom Zanussi <tom.zanussi@linux.intel.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: ecdsa - Fix module auto-load on add-key · 48e4fd6d
      Stefan Berger authored
      Add module alias with the algorithm cra_name similar to what we have for
      RSA-related and other algorithms.
      
      The kernel attempts to modprobe asymmetric algorithms using the names
      "crypto-$cra_name" and "crypto-$cra_name-all." However, since these
      aliases are currently missing, the modules are not loaded. For instance,
      when using the `add_key` function, the hash algorithm is typically
      loaded automatically, but the asymmetric algorithm is not.
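      The lookup name the kernel constructs is simply the cra_name with a "crypto-" prefix; the actual fix is a MODULE_ALIAS_CRYPTO() line in the module source. A small sketch of the name construction:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build the modprobe alias the kernel requests for an algorithm:
 * "crypto-<cra_name>" (a "-all" variant is requested as well). */
static void crypto_alias(char *out, size_t n, const char *cra_name)
{
    snprintf(out, n, "crypto-%s", cra_name);
}
```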
      
      Steps to test:
      
      1. Create certificate
      
        openssl req -x509 -sha256 -newkey ec \
        -pkeyopt "ec_paramgen_curve:secp384r1" -keyout key.pem -days 365 \
        -subj '/CN=test' -nodes -outform der -out nist-p384.der
      
      2. Optionally, trace module requests with: trace-cmd stream -e module &
      
      3. Trigger add_key call for the cert:
      
         # keyctl padd asymmetric "" @u < nist-p384.der
         641069229
         # lsmod | head -2
         Module                  Size  Used by
         ecdsa_generic          16384  0
      
      Fixes: c12d448b ("crypto: ecdsa - Register NIST P384 and extend test suite")
      Cc: stable@vger.kernel.org
      Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
      Reviewed-by: Vitaly Chikunov <vt@altlinux.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: ecc - update ecc_gen_privkey for FIPS 186-5 · dbad7b69
      Joachim Vandersmissen authored
      FIPS 186-5 [1] was released approximately 1 year ago. The most
      interesting change for ecc_gen_privkey is the removal of curves with
      order < 224 bits. This minimum is now checked in step 1. It is
      unlikely that there is still any benefit in generating private keys for
      curves with n < 224, as those curves provide less than 112 bits of
      security strength and are therefore unsafe for any modern usage.
      
      This patch also updates the documentation for __ecc_is_key_valid and
      ecc_gen_privkey to clarify which FIPS 186-5 method is being used to
      generate private keys. Previous documentation mentioned that "extra
      random bits" was used. However, this did not match the code. Instead,
      the code currently uses (and always has used) the "rejection sampling"
      ("testing candidates" in FIPS 186-4) method.
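      Rejection sampling ("testing candidates") in miniature: draw a candidate, reject anything outside [1, n-1], and redraw. A toy sketch with a small candidate width and a simple deterministic generator (the real code draws full-width candidates against the curve order, from a CSPRNG):

```c
#include <assert.h>
#include <stdint.h>

/* Tiny deterministic PRNG (xorshift64) standing in for a CSPRNG;
 * for illustration only, never for real key generation. */
static uint64_t xs_state = 88172645463325252ULL;

static uint64_t xorshift64(void)
{
    xs_state ^= xs_state << 13;
    xs_state ^= xs_state >> 7;
    xs_state ^= xs_state << 17;
    return xs_state;
}

/* Rejection sampling: redraw until the candidate lies in [1, n-1].
 * Uses 10-bit toy candidates, so it only supports n <= 1024. */
static uint64_t gen_priv_toy(uint64_t n)
{
    uint64_t d;

    do {
        d = xorshift64() & 0x3FF;  /* toy candidate width */
    } while (d == 0 || d >= n);    /* reject and redraw */

    return d;
}
```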
      
      [1]: https://doi.org/10.6028/NIST.FIPS.186-5
      Signed-off-by: Joachim Vandersmissen <git@jvdsn.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: ecrdsa - Fix module auto-load on add_key · eb5739a1
      Vitaly Chikunov authored
      Add module alias with the algorithm cra_name similar to what we have for
      RSA-related and other algorithms.
      
      The kernel attempts to modprobe asymmetric algorithms using the names
      "crypto-$cra_name" and "crypto-$cra_name-all." However, since these
      aliases are currently missing, the modules are not loaded. For instance,
      when using the `add_key` function, the hash algorithm is typically
      loaded automatically, but the asymmetric algorithm is not.
      
      Steps to test:
      
      1. The cert is generated using the ima-evm-utils test suite with
         `gen-keys.sh`; an example cert is provided below:
      
        $ base64 -d >test-gost2012_512-A.cer <<EOF
        MIIB/DCCAWagAwIBAgIUK8+whWevr3FFkSdU9GLDAM7ure8wDAYIKoUDBwEBAwMFADARMQ8wDQYD
        VQQDDAZDQSBLZXkwIBcNMjIwMjAxMjIwOTQxWhgPMjA4MjEyMDUyMjA5NDFaMBExDzANBgNVBAMM
        BkNBIEtleTCBoDAXBggqhQMHAQEBAjALBgkqhQMHAQIBAgEDgYQABIGALXNrTJGgeErBUOov3Cfo
        IrHF9fcj8UjzwGeKCkbCcINzVUbdPmCopeJRHDJEvQBX1CQUPtlwDv6ANjTTRoq5nCk9L5PPFP1H
        z73JIXHT0eRBDVoWy0cWDRz1mmQlCnN2HThMtEloaQI81nTlKZOcEYDtDpi5WODmjEeRNQJMdqCj
        UDBOMAwGA1UdEwQFMAMBAf8wHQYDVR0OBBYEFCwfOITMbE9VisW1i2TYeu1tAo5QMB8GA1UdIwQY
        MBaAFCwfOITMbE9VisW1i2TYeu1tAo5QMAwGCCqFAwcBAQMDBQADgYEAmBfJCMTdC0/NSjz4BBiQ
        qDIEjomO7FEHYlkX5NGulcF8FaJW2jeyyXXtbpnub1IQ8af1KFIpwoS2e93LaaofxpWlpQLlju6m
        KYLOcO4xK3Whwa2hBAz9YbpUSFjvxnkS2/jpH2MsOSXuUEeCruG/RkHHB3ACef9umG6HCNQuAPY=
        EOF
      
      2. Optionally, trace module requests with: trace-cmd stream -e module &
      
      3. Trigger add_key call for the cert:
      
        # keyctl padd asymmetric "" @u <test-gost2012_512-A.cer
        939910969
        # lsmod | head -3
        Module                  Size  Used by
        ecrdsa_generic         16384  0
        streebog_generic       28672  0
      Reported-by: Paul Wolneykien <manowar@altlinux.org>
      Cc: stable@vger.kernel.org
      Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
      Tested-by: Stefan Berger <stefanb@linux.ibm.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • hwrng: core - Convert sprintf/snprintf to sysfs_emit · 90d012fb
      Li Zhijian authored
      Per filesystems/sysfs.rst, show() should only use sysfs_emit()
      or sysfs_emit_at() when formatting the value to be returned to user space.
      
      Coccinelle complains that a couple of functions still use
      snprintf(). Convert them to sysfs_emit().
      
      Any remaining sprintf() calls are converted as well.
      
      Generally, this patch is generated by
      make coccicheck M=<path/to/file> MODE=patch \
      COCCI=scripts/coccinelle/api/device_attr_show.cocci
      
      No functional change intended
      
      Cc: Olivia Mackall <olivia@selenic.com>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: linux-crypto@vger.kernel.org
      Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • dt-bindings: crypto: ice: Document sc7280 inline crypto engine · 355577ef
      Luca Weiss authored
      Document the compatible used for the inline crypto engine found on
      SC7280.
      Signed-off-by: Luca Weiss <luca.weiss@fairphone.com>
      Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: remove CONFIG_CRYPTO_STATS · 29ce50e0
      Eric Biggers authored
      Remove support for the "Crypto usage statistics" feature
      (CONFIG_CRYPTO_STATS).  This feature does not appear to have ever been
      used, and it is harmful because it significantly reduces performance and
      is a large maintenance burden.
      
      Covering each of these points in detail:
      
      1. Feature is not being used
      
      Since these generic crypto statistics are only readable using netlink,
      it's fairly straightforward to look for programs that use them.  I'm
      unable to find any evidence that any such programs exist.  For example,
      Debian Code Search returns no hits except the kernel header and kernel
      code itself and translations of the kernel header:
      https://codesearch.debian.net/search?q=CRYPTOCFGA_STAT&literal=1&perpkg=1
      
      The patch series that added this feature in 2018
      (https://lore.kernel.org/linux-crypto/1537351855-16618-1-git-send-email-clabbe@baylibre.com/)
      said "The goal is to have an ifconfig for crypto device."  This doesn't
      appear to have happened.
      
      It's not clear that there is real demand for crypto statistics.  Just
      because the kernel provides other types of statistics such as I/O and
      networking statistics and some people find those useful does not mean
      that crypto statistics are useful too.
      
      Further evidence that programs are not using CONFIG_CRYPTO_STATS is that
      it was able to be disabled in RHEL and Fedora as a bug fix
      (https://gitlab.com/redhat/centos-stream/src/kernel/centos-stream-9/-/merge_requests/2947).
      
      Even further evidence comes from the fact that there are and have been
      bugs in how the stats work, but they were never reported.  For example,
      before Linux v6.7 hash stats were double-counted in most cases.
      
      There has also never been any documentation for this feature, so it
      might be hard to use even if someone wanted to.
      
      2. CONFIG_CRYPTO_STATS significantly reduces performance
      
      Enabling CONFIG_CRYPTO_STATS significantly reduces the performance of
      the crypto API, even if no program ever retrieves the statistics.  This
      primarily affects systems with a large number of CPUs.  For example,
      https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2039576 reported
      that Lustre client encryption performance improved from 21.7GB/s to
      48.2GB/s by disabling CONFIG_CRYPTO_STATS.
      
      It can be argued that this means that CONFIG_CRYPTO_STATS should be
      optimized with per-cpu counters similar to many of the networking
      counters.  But no one has done this in 5+ years.  This is consistent
      with the fact that the feature appears to be unused, so there seems to
      be little interest in improving it as opposed to just disabling it.
      
      It can be argued that because CONFIG_CRYPTO_STATS is off by default,
      performance doesn't matter.  But Linux distros tend to err on the side
      of enabling options.  The option is enabled in Ubuntu and Arch Linux,
      and until recently was enabled in RHEL and Fedora (see above).  So, even
      just having the option available is harmful to users.
      
      3. CONFIG_CRYPTO_STATS is a large maintenance burden
      
      There are over 1000 lines of code associated with CONFIG_CRYPTO_STATS,
      spread among 32 files.  It significantly complicates much of the
      implementation of the crypto API.  After the initial submission, many
      fixes and refactorings have consumed effort of multiple people to keep
      this feature "working".  We should be spending this effort elsewhere.
      Acked-by: Ard Biesheuvel <ardb@kernel.org>
      Acked-by: Corentin Labbe <clabbe@baylibre.com>
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: nx - Avoid -Wflex-array-member-not-at-end warning · 1e6b251c
      Gustavo A. R. Silva authored
      -Wflex-array-member-not-at-end is coming in GCC-14, and we are getting
      ready to enable it globally. So, we are deprecating flexible-array
      members in the middle of another structure.
      
      There is currently an object (`header`) in `struct nx842_crypto_ctx`
      that contains a flexible structure (`struct nx842_crypto_header`):
      
      struct nx842_crypto_ctx {
      	...
              struct nx842_crypto_header header;
              struct nx842_crypto_header_group group[NX842_CRYPTO_GROUP_MAX];
      	...
      };
      
      So, in order to avoid ending up with a flexible-array member in the
      middle of another struct, we use the `struct_group_tagged()` helper to
      separate the flexible array from the rest of the members in the flexible
      structure:
      
      struct nx842_crypto_header {
      	struct_group_tagged(nx842_crypto_header_hdr, hdr,
      
      		... the rest of the members
      
      	);
              struct nx842_crypto_header_group group[];
      } __packed;
      
      With the change described above, we can now declare an object of the
      type of the tagged struct, without embedding the flexible array in the
      middle of another struct:
      
      struct nx842_crypto_ctx {
      	...
              struct nx842_crypto_header_hdr header;
              struct nx842_crypto_header_group group[NX842_CRYPTO_GROUP_MAX];
      	...
       } __packed;
      
      We also use `container_of()` whenever we need to retrieve a pointer to
      the flexible structure, through which we can access the flexible
      array if needed.
      
      So, with these changes, fix the following warning:
      
      In file included from drivers/crypto/nx/nx-842.c:55:
      drivers/crypto/nx/nx-842.h:174:36: warning: structure containing a flexible array member is not at the end of another structure [-Wflex-array-member-not-at-end]
        174 |         struct nx842_crypto_header header;
            |                                    ^~~~~~
      Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: starfive - Use dma for aes requests · 7467147e
      Jia Jie Ho authored
      Convert the AES module to use DMA for data transfers to reduce CPU
      load and stay compatible with future variants.
      Signed-off-by: Jia Jie Ho <jiajie.ho@starfivetech.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: starfive - Skip unneeded key free · a05c821e
      Jia Jie Ho authored
      Skip the unneeded kfree_sensitive() if the RSA module is using the fallback algorithm.
      Signed-off-by: Jia Jie Ho <jiajie.ho@starfivetech.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: starfive - Update hash dma usage · b6e9eb69
      Jia Jie Ho authored
      The hash code currently uses a software fallback for non-word-aligned
      input scatterlists. Add support for unaligned cases, utilizing the
      data valid mask for DMA.
      Signed-off-by: Jia Jie Ho <jiajie.ho@starfivetech.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • dt-bindings: crypto: starfive: Add jh8100 support · 2ccf7a5d
      Jia Jie Ho authored
      Add compatible string and additional interrupt for StarFive JH8100
      crypto engine.
      Signed-off-by: Jia Jie Ho <jiajie.ho@starfivetech.com>
      Acked-by: Conor Dooley <conor.dooley@microchip.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>