Commit 84c7d76b authored by Linus Torvalds

Merge tag 'v6.10-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto updates from Herbert Xu:
 "API:
   - Remove crypto stats interface

  Algorithms:
   - Add faster AES-XTS on modern x86_64 CPUs
   - Forbid curves with order less than 224 bits in ecc (FIPS 186-5)
   - Add ECDSA NIST P521

  Drivers:
   - Expose otp zone in atmel
   - Add dh fallback for primes > 4K in qat
   - Add interface for live migration in qat
   - Use dma for aes requests in starfive
   - Add full DMA support for stm32mpx in stm32
   - Add Tegra Security Engine driver

  Others:
   - Introduce scope-based x509_certificate allocation"

* tag 'v6.10-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (123 commits)
  crypto: atmel-sha204a - provide the otp content
  crypto: atmel-sha204a - add reading from otp zone
  crypto: atmel-i2c - rename read function
  crypto: atmel-i2c - add missing arg description
  crypto: iaa - Use kmemdup() instead of kzalloc() and memcpy()
  crypto: sahara - use 'time_left' variable with wait_for_completion_timeout()
  crypto: api - use 'time_left' variable with wait_for_completion_killable_timeout()
  crypto: caam - i.MX8ULP donot have CAAM page0 access
  crypto: caam - init-clk based on caam-page0-access
  crypto: starfive - Use fallback for unaligned dma access
  crypto: starfive - Do not free stack buffer
  crypto: starfive - Skip unneeded fallback allocation
  crypto: starfive - Skip dma setup for zeroed message
  crypto: hisilicon/sec2 - fix for register offset
  crypto: hisilicon/debugfs - mask the unnecessary info from the dump
  crypto: qat - specify firmware files for 402xx
  crypto: x86/aes-gcm - simplify GCM hash subkey derivation
  crypto: x86/aes-gcm - delete unused GCM assembly code
  crypto: x86/aes-xts - simplify loop in xts_crypt_slowpath()
  hwrng: stm32 - repair clock handling
  ...
parents 87caef42 13909a0c
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/crypto/nvidia,tegra234-se-aes.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: NVIDIA Tegra Security Engine for AES algorithms

description:
  The Tegra Security Engine accelerates the following AES encryption/decryption
  algorithms - AES-ECB, AES-CBC, AES-OFB, AES-XTS, AES-CTR, AES-GCM, AES-CCM,
  AES-CMAC

maintainers:
  - Akhil R <akhilrajeev@nvidia.com>

properties:
  compatible:
    const: nvidia,tegra234-se-aes

  reg:
    maxItems: 1

  clocks:
    maxItems: 1

  iommus:
    maxItems: 1

  dma-coherent: true

required:
  - compatible
  - reg
  - clocks
  - iommus

additionalProperties: false

examples:
  - |
    #include <dt-bindings/memory/tegra234-mc.h>
    #include <dt-bindings/clock/tegra234-clock.h>

    crypto@15820000 {
        compatible = "nvidia,tegra234-se-aes";
        reg = <0x15820000 0x10000>;
        clocks = <&bpmp TEGRA234_CLK_SE>;
        iommus = <&smmu TEGRA234_SID_SES_SE1>;
        dma-coherent;
    };
...
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/crypto/nvidia,tegra234-se-hash.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: NVIDIA Tegra Security Engine for HASH algorithms

description:
  The Tegra Security HASH Engine accelerates the following HASH functions -
  SHA1, SHA224, SHA256, SHA384, SHA512, SHA3-224, SHA3-256, SHA3-384, SHA3-512,
  HMAC(SHA224), HMAC(SHA256), HMAC(SHA384), HMAC(SHA512)

maintainers:
  - Akhil R <akhilrajeev@nvidia.com>

properties:
  compatible:
    const: nvidia,tegra234-se-hash

  reg:
    maxItems: 1

  clocks:
    maxItems: 1

  iommus:
    maxItems: 1

  dma-coherent: true

required:
  - compatible
  - reg
  - clocks
  - iommus

additionalProperties: false

examples:
  - |
    #include <dt-bindings/memory/tegra234-mc.h>
    #include <dt-bindings/clock/tegra234-clock.h>

    crypto@15840000 {
        compatible = "nvidia,tegra234-se-hash";
        reg = <0x15840000 0x10000>;
        clocks = <&bpmp TEGRA234_CLK_SE>;
        iommus = <&smmu TEGRA234_SID_SES_SE2>;
        dma-coherent;
    };
...
OMAP SoC SHA crypto Module

Required properties:

- compatible : Should contain entries for this and backward compatible
  SHAM versions:
  - "ti,omap2-sham" for OMAP2 & OMAP3.
  - "ti,omap4-sham" for OMAP4 and AM33XX.
  - "ti,omap5-sham" for OMAP5, DRA7 and AM43XX.
- ti,hwmods: Name of the hwmod associated with the SHAM module
- reg : Offset and length of the register set for the module
- interrupts : the interrupt-specifier for the SHAM module.

Optional properties:
- dmas: DMA specifiers for the rx dma. See the DMA client binding,
  Documentation/devicetree/bindings/dma/dma.txt
- dma-names: DMA request name. Should be "rx" if a dma is present.

Example:
	/* AM335x */
	sham: sham@53100000 {
		compatible = "ti,omap4-sham";
		ti,hwmods = "sham";
		reg = <0x53100000 0x200>;
		interrupts = <109>;
		dmas = <&edma 36>;
		dma-names = "rx";
	};
@@ -15,6 +15,7 @@ properties:
       - enum:
           - qcom,sa8775p-inline-crypto-engine
           - qcom,sc7180-inline-crypto-engine
+          - qcom,sc7280-inline-crypto-engine
           - qcom,sm8450-inline-crypto-engine
           - qcom,sm8550-inline-crypto-engine
           - qcom,sm8650-inline-crypto-engine
......
@@ -12,7 +12,9 @@ maintainers:

 properties:
   compatible:
-    const: starfive,jh7110-crypto
+    enum:
+      - starfive,jh7110-crypto
+      - starfive,jh8100-crypto

   reg:
     maxItems: 1
@@ -28,7 +30,10 @@ properties:
       - const: ahb

   interrupts:
-    maxItems: 1
+    minItems: 1
+    items:
+      - description: SHA2 module irq
+      - description: SM3 module irq

   resets:
     maxItems: 1
@@ -54,6 +59,27 @@ required:

 additionalProperties: false

+allOf:
+  - if:
+      properties:
+        compatible:
+          const: starfive,jh7110-crypto
+    then:
+      properties:
+        interrupts:
+          maxItems: 1
+
+  - if:
+      properties:
+        compatible:
+          const: starfive,jh8100-crypto
+    then:
+      properties:
+        interrupts:
+          minItems: 2
+
 examples:
   - |
     crypto: crypto@16000000 {
......
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/crypto/ti,omap-sham.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: OMAP SoC SHA crypto Module

maintainers:
  - Animesh Agarwal <animeshagarwal28@gmail.com>

properties:
  compatible:
    enum:
      - ti,omap2-sham
      - ti,omap4-sham
      - ti,omap5-sham

  reg:
    maxItems: 1

  interrupts:
    maxItems: 1

  dmas:
    maxItems: 1

  dma-names:
    const: rx

  ti,hwmods:
    description: Name of the hwmod associated with the SHAM module
    $ref: /schemas/types.yaml#/definitions/string
    enum: [sham]

dependencies:
  dmas: [dma-names]

additionalProperties: false

required:
  - compatible
  - ti,hwmods
  - reg
  - interrupts

examples:
  - |
    sham@53100000 {
        compatible = "ti,omap4-sham";
        ti,hwmods = "sham";
        reg = <0x53100000 0x200>;
        interrupts = <109>;
        dmas = <&edma 36>;
        dma-names = "rx";
    };
@@ -179,7 +179,9 @@ has the old 'iax' device naming in place) ::

   # configure wq1.0

-  accel-config config-wq --group-id=0 --mode=dedicated --type=kernel --name="iaa_crypto" --device_name="crypto" iax1/wq1.0
+  accel-config config-wq --group-id=0 --mode=dedicated --type=kernel --priority=10 --name="iaa_crypto" --driver-name="crypto" iax1/wq1.0
+
+  accel-config config-engine iax1/engine1.0 --group-id=0

   # enable IAA device iax1
@@ -321,33 +323,30 @@ driver will generate statistics which can be accessed in debugfs at::

   # ls -al /sys/kernel/debug/iaa-crypto/
   total 0
-  drwxr-xr-x 2 root root 0 Mar  3 09:35 .
-  drwx------ 47 root root 0 Mar  3 09:35 ..
-  -rw-r--r-- 1 root root 0 Mar  3 09:35 max_acomp_delay_ns
-  -rw-r--r-- 1 root root 0 Mar  3 09:35 max_adecomp_delay_ns
-  -rw-r--r-- 1 root root 0 Mar  3 09:35 max_comp_delay_ns
-  -rw-r--r-- 1 root root 0 Mar  3 09:35 max_decomp_delay_ns
-  -rw-r--r-- 1 root root 0 Mar  3 09:35 stats_reset
-  -rw-r--r-- 1 root root 0 Mar  3 09:35 total_comp_bytes_out
-  -rw-r--r-- 1 root root 0 Mar  3 09:35 total_comp_calls
-  -rw-r--r-- 1 root root 0 Mar  3 09:35 total_decomp_bytes_in
-  -rw-r--r-- 1 root root 0 Mar  3 09:35 total_decomp_calls
-  -rw-r--r-- 1 root root 0 Mar  3 09:35 wq_stats
+  drwxr-xr-x 2 root root 0 Mar  3 07:55 .
+  drwx------ 53 root root 0 Mar  3 07:55 ..
+  -rw-r--r-- 1 root root 0 Mar  3 07:55 global_stats
+  -rw-r--r-- 1 root root 0 Mar  3 07:55 stats_reset
+  -rw-r--r-- 1 root root 0 Mar  3 07:55 wq_stats

-Most of the above statistics are self-explanatory.  The wq_stats file
-shows per-wq stats, a set for each iaa device and wq in addition to
-some global stats::
+The global_stats file shows a set of global statistics collected since
+the driver has been loaded or reset::

-  # cat wq_stats
+  # cat global_stats
   global stats:
-    total_comp_calls: 100
-    total_decomp_calls: 100
-    total_comp_bytes_out: 22800
-    total_decomp_bytes_in: 22800
+    total_comp_calls: 4300
+    total_decomp_calls: 4164
+    total_sw_decomp_calls: 0
+    total_comp_bytes_out: 5993989
+    total_decomp_bytes_in: 5993989
     total_completion_einval_errors: 0
     total_completion_timeout_errors: 0
-    total_completion_comp_buf_overflow_errors: 0
+    total_completion_comp_buf_overflow_errors: 136

+The wq_stats file shows per-wq stats, a set for each iaa device and wq
+in addition to some global stats::
+
+  # cat wq_stats
   iaa device:
     id: 1
     n_wqs: 1
@@ -379,21 +378,36 @@ some global stats::
   iaa device:
     id: 5
     n_wqs: 1
-    comp_calls: 100
-    comp_bytes: 22800
-    decomp_calls: 100
-    decomp_bytes: 22800
+    comp_calls: 1360
+    comp_bytes: 1999776
+    decomp_calls: 0
+    decomp_bytes: 0
     wqs:
       name: iaa_crypto
-      comp_calls: 100
-      comp_bytes: 22800
-      decomp_calls: 100
-      decomp_bytes: 22800
+      comp_calls: 1360
+      comp_bytes: 1999776
+      decomp_calls: 0
+      decomp_bytes: 0
+
+  iaa device:
+    id: 7
+    n_wqs: 1
+    comp_calls: 2940
+    comp_bytes: 3994213
+    decomp_calls: 4164
+    decomp_bytes: 5993989
+    wqs:
+      name: iaa_crypto
+      comp_calls: 2940
+      comp_bytes: 3994213
+      decomp_calls: 4164
+      decomp_bytes: 5993989
+  ...

-Writing 0 to 'stats_reset' resets all the stats, including the
+Writing to 'stats_reset' resets all the stats, including the
 per-device and per-wq stats::

-  # echo 0 > stats_reset
+  # echo 1 > stats_reset
   # cat wq_stats
   global stats:
     total_comp_calls: 0
@@ -536,12 +550,20 @@ The below script automatically does that::

   echo "End Disable IAA"

+  echo "Reload iaa_crypto module"
+
+  rmmod iaa_crypto
+  modprobe iaa_crypto
+
+  echo "End Reload iaa_crypto module"
+
   #
   # configure iaa wqs and devices
   #
   echo "Configure IAA"
   for ((i = 1; i < ${num_iaa} * 2; i += 2)); do
-    accel-config config-wq --group-id=0 --mode=dedicated --size=128 --priority=10 --type=kernel --name="iaa_crypto" --driver_name="crypto" iax${i}/wq${i}
+    accel-config config-wq --group-id=0 --mode=dedicated --wq-size=128 --priority=10 --type=kernel --name="iaa_crypto" --driver-name="crypto" iax${i}/wq${i}.0
+
+    accel-config config-engine iax${i}/engine${i}.0 --group-id=0
   done
   echo "End Configure IAA"
@@ -552,10 +574,10 @@ The below script automatically does that::

   echo "Enable IAA"

   for ((i = 1; i < ${num_iaa} * 2; i += 2)); do
-    echo enable iaa iaa${i}
-    accel-config enable-device iaa${i}
-    echo enable wq iaa${i}/wq${i}.0
-    accel-config enable-wq iaa${i}/wq${i}.0
+    echo enable iaa iax${i}
+    accel-config enable-device iax${i}
+    echo enable wq iax${i}/wq${i}.0
+    accel-config enable-wq iax${i}/wq${i}.0
   done

   echo "End Enable IAA"
......
@@ -21764,6 +21764,11 @@ M:	Prashant Gaikwad <pgaikwad@nvidia.com>
 S:	Supported
 F:	drivers/clk/tegra/

+TEGRA CRYPTO DRIVERS
+M:	Akhil R <akhilrajeev@nvidia.com>
+S:	Supported
+F:	drivers/crypto/tegra/*
+
 TEGRA DMA DRIVERS
 M:	Laxman Dewangan <ldewangan@nvidia.com>
 M:	Jon Hunter <jonathanh@nvidia.com>
......
@@ -25,33 +25,28 @@
	.endm

	/* preload all round keys */
-	.macro		load_round_keys, rounds, rk
-	cmp		\rounds, #12
-	blo		2222f		/* 128 bits */
-	beq		1111f		/* 192 bits */
-	ld1		{v17.4s-v18.4s}, [\rk], #32
-1111:	ld1		{v19.4s-v20.4s}, [\rk], #32
-2222:	ld1		{v21.4s-v24.4s}, [\rk], #64
-	ld1		{v25.4s-v28.4s}, [\rk], #64
-	ld1		{v29.4s-v31.4s}, [\rk]
+	.macro		load_round_keys, rk, nr, tmp
+	add		\tmp, \rk, \nr, sxtw #4
+	sub		\tmp, \tmp, #160
+	ld1		{v17.4s-v20.4s}, [\rk]
+	ld1		{v21.4s-v24.4s}, [\tmp], #64
+	ld1		{v25.4s-v28.4s}, [\tmp], #64
+	ld1		{v29.4s-v31.4s}, [\tmp]
	.endm

	/* prepare for encryption with key in rk[] */
	.macro		enc_prepare, rounds, rk, temp
-	mov		\temp, \rk
-	load_round_keys	\rounds, \temp
+	load_round_keys	\rk, \rounds, \temp
	.endm

	/* prepare for encryption (again) but with new key in rk[] */
	.macro		enc_switch_key, rounds, rk, temp
-	mov		\temp, \rk
-	load_round_keys	\rounds, \temp
+	load_round_keys	\rk, \rounds, \temp
	.endm

	/* prepare for decryption with key in rk[] */
	.macro		dec_prepare, rounds, rk, temp
-	mov		\temp, \rk
-	load_round_keys	\rounds, \temp
+	load_round_keys	\rk, \rounds, \temp
	.endm

	.macro		do_enc_Nx, de, mc, k, i0, i1, i2, i3, i4
@@ -110,14 +105,13 @@

	/* up to 5 interleaved blocks */
	.macro		do_block_Nx, enc, rounds, i0, i1, i2, i3, i4
-	cmp		\rounds, #12
-	blo		2222f		/* 128 bits */
-	beq		1111f		/* 192 bits */
+	tbz		\rounds, #2, .L\@	/* 128 bits */
	round_Nx	\enc, v17, \i0, \i1, \i2, \i3, \i4
	round_Nx	\enc, v18, \i0, \i1, \i2, \i3, \i4
-1111:	round_Nx	\enc, v19, \i0, \i1, \i2, \i3, \i4
+	tbz		\rounds, #1, .L\@	/* 192 bits */
+	round_Nx	\enc, v19, \i0, \i1, \i2, \i3, \i4
	round_Nx	\enc, v20, \i0, \i1, \i2, \i3, \i4
-2222:	.irp		key, v21, v22, v23, v24, v25, v26, v27, v28, v29
+.L\@:	.irp		key, v21, v22, v23, v24, v25, v26, v27, v28, v29
	round_Nx	\enc, \key, \i0, \i1, \i2, \i3, \i4
	.endr
	fin_round_Nx	\enc, v30, v31, \i0, \i1, \i2, \i3, \i4
......
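The rewritten load_round_keys above replaces the cmp/blo/beq ladder with pure address arithmetic: v17-v20 always receive the first four round keys, and v21-v31 always receive the last eleven, wherever the round count puts them. A minimal C sketch of that offset computation (illustrative model only; the helper name is made up):

```c
#include <stdint.h>

/*
 * Model of "add \tmp, \rk, \nr, sxtw #4; sub \tmp, \tmp, #160":
 * a schedule for nr rounds holds nr + 1 round keys of 16 bytes each,
 * so the last eleven of them begin at ((nr + 1) - 11) * 16, which
 * simplifies to nr * 16 - 160.
 */
static const uint8_t *last_eleven_round_keys(const uint8_t *rk, int nr)
{
	return rk + nr * 16 - 160;	/* nr is 10, 12 or 14 */
}
```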
@@ -99,16 +99,16 @@
	ld1		{v15.4s}, [\rk]
	add		\rkp, \rk, #16
	mov		\i, \rounds
-1111:	eor		\in\().16b, \in\().16b, v15.16b		/* ^round key */
+.La\@:	eor		\in\().16b, \in\().16b, v15.16b		/* ^round key */
	movi		v15.16b, #0x40
	tbl		\in\().16b, {\in\().16b}, v13.16b	/* ShiftRows */
	sub_bytes	\in
-	subs		\i, \i, #1
+	sub		\i, \i, #1
	ld1		{v15.4s}, [\rkp], #16
-	beq		2222f
+	cbz		\i, .Lb\@
	mix_columns	\in, \enc
-	b		1111b
-2222:	eor		\in\().16b, \in\().16b, v15.16b		/* ^round key */
+	b		.La\@
+.Lb\@:	eor		\in\().16b, \in\().16b, v15.16b		/* ^round key */
	.endm

	.macro		encrypt_block, in, rounds, rk, rkp, i
@@ -206,7 +206,7 @@
	ld1		{v15.4s}, [\rk]
	add		\rkp, \rk, #16
	mov		\i, \rounds
-1111:	eor		\in0\().16b, \in0\().16b, v15.16b	/* ^round key */
+.La\@:	eor		\in0\().16b, \in0\().16b, v15.16b	/* ^round key */
	eor		\in1\().16b, \in1\().16b, v15.16b	/* ^round key */
	eor		\in2\().16b, \in2\().16b, v15.16b	/* ^round key */
	eor		\in3\().16b, \in3\().16b, v15.16b	/* ^round key */
@@ -216,13 +216,13 @@
	tbl		\in2\().16b, {\in2\().16b}, v13.16b	/* ShiftRows */
	tbl		\in3\().16b, {\in3\().16b}, v13.16b	/* ShiftRows */
	sub_bytes_4x	\in0, \in1, \in2, \in3
-	subs		\i, \i, #1
+	sub		\i, \i, #1
	ld1		{v15.4s}, [\rkp], #16
-	beq		2222f
+	cbz		\i, .Lb\@
	mix_columns_2x	\in0, \in1, \enc
	mix_columns_2x	\in2, \in3, \enc
-	b		1111b
-2222:	eor		\in0\().16b, \in0\().16b, v15.16b	/* ^round key */
+	b		.La\@
+.Lb\@:	eor		\in0\().16b, \in0\().16b, v15.16b	/* ^round key */
	eor		\in1\().16b, \in1\().16b, v15.16b	/* ^round key */
	eor		\in2\().16b, \in2\().16b, v15.16b	/* ^round key */
	eor		\in3\().16b, \in3\().16b, v15.16b	/* ^round key */
......
@@ -760,7 +760,6 @@ CONFIG_CRYPTO_USER_API_HASH=m
 CONFIG_CRYPTO_USER_API_SKCIPHER=m
 CONFIG_CRYPTO_USER_API_RNG=m
 CONFIG_CRYPTO_USER_API_AEAD=m
-CONFIG_CRYPTO_STATS=y
 CONFIG_CRYPTO_CRC32_S390=y
 CONFIG_CRYPTO_SHA512_S390=m
 CONFIG_CRYPTO_SHA1_S390=m
......
@@ -745,7 +745,6 @@ CONFIG_CRYPTO_USER_API_HASH=m
 CONFIG_CRYPTO_USER_API_SKCIPHER=m
 CONFIG_CRYPTO_USER_API_RNG=m
 CONFIG_CRYPTO_USER_API_AEAD=m
-CONFIG_CRYPTO_STATS=y
 CONFIG_CRYPTO_CRC32_S390=y
 CONFIG_CRYPTO_SHA512_S390=m
 CONFIG_CRYPTO_SHA1_S390=m
......
@@ -25,6 +25,16 @@ config AS_GFNI
	help
	  Supported by binutils >= 2.30 and LLVM integrated assembler

+config AS_VAES
+	def_bool $(as-instr,vaesenc %ymm0$(comma)%ymm1$(comma)%ymm2)
+	help
+	  Supported by binutils >= 2.30 and LLVM integrated assembler
+
+config AS_VPCLMULQDQ
+	def_bool $(as-instr,vpclmulqdq \$0x10$(comma)%ymm0$(comma)%ymm1$(comma)%ymm2)
+	help
+	  Supported by binutils >= 2.30 and LLVM integrated assembler
+
 config AS_WRUSS
	def_bool $(as-instr,wrussq %rax$(comma)(%rbx))
	help
......
@@ -48,7 +48,8 @@ chacha-x86_64-$(CONFIG_AS_AVX512) += chacha-avx512vl-x86_64.o

 obj-$(CONFIG_CRYPTO_AES_NI_INTEL) += aesni-intel.o
 aesni-intel-y := aesni-intel_asm.o aesni-intel_glue.o
-aesni-intel-$(CONFIG_64BIT) += aesni-intel_avx-x86_64.o aes_ctrby8_avx-x86_64.o
+aesni-intel-$(CONFIG_64BIT) += aesni-intel_avx-x86_64.o \
+			       aes_ctrby8_avx-x86_64.o aes-xts-avx-x86_64.o

 obj-$(CONFIG_CRYPTO_SHA1_SSSE3) += sha1-ssse3.o
 sha1-ssse3-y := sha1_avx2_x86_64_asm.o sha1_ssse3_asm.o sha1_ssse3_glue.o
......
@@ -154,5 +154,6 @@ SYM_TYPED_FUNC_START(nh_avx2)
	vpaddq		T1, T0, T0
	vpaddq		T4, T0, T0
	vmovdqu		T0, (HASH)
+	vzeroupper
	RET
 SYM_FUNC_END(nh_avx2)
@@ -716,6 +716,7 @@ SYM_TYPED_FUNC_START(sha256_transform_rorx)
	popq	%r13
	popq	%r12
	popq	%rbx
+	vzeroupper
	RET
 SYM_FUNC_END(sha256_transform_rorx)
......
@@ -62,20 +62,41 @@

 #define SHA256CONSTANTS	%rax

-#define MSG		%xmm0
+#define MSG		%xmm0  /* sha256rnds2 implicit operand */
 #define STATE0		%xmm1
 #define STATE1		%xmm2
-#define MSGTMP0		%xmm3
-#define MSGTMP1		%xmm4
-#define MSGTMP2		%xmm5
-#define MSGTMP3		%xmm6
-#define MSGTMP4		%xmm7
+#define MSG0		%xmm3
+#define MSG1		%xmm4
+#define MSG2		%xmm5
+#define MSG3		%xmm6
+#define TMP		%xmm7

 #define SHUF_MASK	%xmm8

 #define ABEF_SAVE	%xmm9
 #define CDGH_SAVE	%xmm10

+.macro do_4rounds	i, m0, m1, m2, m3
+.if \i < 16
+	movdqu		\i*4(DATA_PTR), \m0
+	pshufb		SHUF_MASK, \m0
+.endif
+	movdqa		(\i-32)*4(SHA256CONSTANTS), MSG
+	paddd		\m0, MSG
+	sha256rnds2	STATE0, STATE1
+.if \i >= 12 && \i < 60
+	movdqa		\m0, TMP
+	palignr		$4, \m3, TMP
+	paddd		TMP, \m1
+	sha256msg2	\m0, \m1
+.endif
+	punpckhqdq	MSG, MSG
+	sha256rnds2	STATE1, STATE0
+.if \i >= 4 && \i < 52
+	sha256msg1	\m0, \m3
+.endif
+.endm
+
 /*
  * Intel SHA Extensions optimized implementation of a SHA-256 update function
  *
@@ -86,9 +107,6 @@
  * store partial blocks.  All message padding and hash value initialization must
  * be done outside the update function.
  *
- * The indented lines in the loop are instructions related to rounds processing.
- * The non-indented lines are instructions related to the message schedule.
- *
  * void sha256_ni_transform(uint32_t *digest, const void *data,
		uint32_t numBlocks);
  * digest : pointer to digest
@@ -108,202 +126,29 @@ SYM_TYPED_FUNC_START(sha256_ni_transform)
	 *  Need to reorder these appropriately
	 *  DCBA, HGFE -> ABEF, CDGH
	 */
-	movdqu		0*16(DIGEST_PTR), STATE0
-	movdqu		1*16(DIGEST_PTR), STATE1
-
-	pshufd		$0xB1, STATE0, STATE0		/* CDAB */
-	pshufd		$0x1B, STATE1, STATE1		/* EFGH */
-	movdqa		STATE0, MSGTMP4
-	palignr		$8, STATE1, STATE0		/* ABEF */
-	pblendw		$0xF0, MSGTMP4, STATE1		/* CDGH */
+	movdqu		0*16(DIGEST_PTR), STATE0	/* DCBA */
+	movdqu		1*16(DIGEST_PTR), STATE1	/* HGFE */
+
+	movdqa		STATE0, TMP
+	punpcklqdq	STATE1, STATE0			/* FEBA */
+	punpckhqdq	TMP, STATE1			/* DCHG */
+	pshufd		$0x1B, STATE0, STATE0		/* ABEF */
+	pshufd		$0xB1, STATE1, STATE1		/* CDGH */

	movdqa		PSHUFFLE_BYTE_FLIP_MASK(%rip), SHUF_MASK
-	lea		K256(%rip), SHA256CONSTANTS
+	lea		K256+32*4(%rip), SHA256CONSTANTS

 .Lloop0:
	/* Save hash values for addition after rounds */
	movdqa		STATE0, ABEF_SAVE
	movdqa		STATE1, CDGH_SAVE

-	/* Rounds 0-3 */
-	movdqu		0*16(DATA_PTR), MSG
-	pshufb		SHUF_MASK, MSG
-	movdqa		MSG, MSGTMP0
-	paddd		0*16(SHA256CONSTANTS), MSG
-	sha256rnds2	STATE0, STATE1
-	pshufd		$0x0E, MSG, MSG
-	sha256rnds2	STATE1, STATE0
-
-	/* Rounds 4-7 */
-	movdqu		1*16(DATA_PTR), MSG
-	pshufb		SHUF_MASK, MSG
-	movdqa		MSG, MSGTMP1
-	paddd		1*16(SHA256CONSTANTS), MSG
-	sha256rnds2	STATE0, STATE1
-	pshufd		$0x0E, MSG, MSG
-	sha256rnds2	STATE1, STATE0
-	sha256msg1	MSGTMP1, MSGTMP0
-
-	/* Rounds 8-11 */
-	movdqu		2*16(DATA_PTR), MSG
-	pshufb		SHUF_MASK, MSG
-	movdqa		MSG, MSGTMP2
-	paddd		2*16(SHA256CONSTANTS), MSG
-	sha256rnds2	STATE0, STATE1
-	pshufd		$0x0E, MSG, MSG
-	sha256rnds2	STATE1, STATE0
-	sha256msg1	MSGTMP2, MSGTMP1
-
-	/* Rounds 12-15 */
-	movdqu		3*16(DATA_PTR), MSG
-	pshufb		SHUF_MASK, MSG
-	movdqa		MSG, MSGTMP3
-	paddd		3*16(SHA256CONSTANTS), MSG
-	sha256rnds2	STATE0, STATE1
-	movdqa		MSGTMP3, MSGTMP4
-	palignr		$4, MSGTMP2, MSGTMP4
-	paddd		MSGTMP4, MSGTMP0
-	sha256msg2	MSGTMP3, MSGTMP0
-	pshufd		$0x0E, MSG, MSG
-	sha256rnds2	STATE1, STATE0
-	sha256msg1	MSGTMP3, MSGTMP2
-
-	/* Rounds 16-19 */
-	movdqa		MSGTMP0, MSG
-	paddd		4*16(SHA256CONSTANTS), MSG
-	sha256rnds2	STATE0, STATE1
-	movdqa		MSGTMP0, MSGTMP4
-	palignr		$4, MSGTMP3, MSGTMP4
-	paddd		MSGTMP4, MSGTMP1
-	sha256msg2	MSGTMP0, MSGTMP1
-	pshufd		$0x0E, MSG, MSG
-	sha256rnds2	STATE1, STATE0
-	sha256msg1	MSGTMP0, MSGTMP3
-
-	/* Rounds 20-23 */
-	movdqa		MSGTMP1, MSG
-	paddd		5*16(SHA256CONSTANTS), MSG
-	sha256rnds2	STATE0, STATE1
-	movdqa		MSGTMP1, MSGTMP4
-	palignr		$4, MSGTMP0, MSGTMP4
-	paddd		MSGTMP4, MSGTMP2
-	sha256msg2	MSGTMP1, MSGTMP2
-	pshufd		$0x0E, MSG, MSG
-	sha256rnds2	STATE1, STATE0
-	sha256msg1	MSGTMP1, MSGTMP0
-
-	/* Rounds 24-27 */
-	movdqa		MSGTMP2, MSG
-	paddd		6*16(SHA256CONSTANTS), MSG
-	sha256rnds2	STATE0, STATE1
-	movdqa		MSGTMP2, MSGTMP4
-	palignr		$4, MSGTMP1, MSGTMP4
-	paddd		MSGTMP4, MSGTMP3
-	sha256msg2	MSGTMP2, MSGTMP3
-	pshufd		$0x0E, MSG, MSG
-	sha256rnds2	STATE1, STATE0
-	sha256msg1	MSGTMP2, MSGTMP1
-
-	/* Rounds 28-31 */
-	movdqa		MSGTMP3, MSG
-	paddd		7*16(SHA256CONSTANTS), MSG
-	sha256rnds2	STATE0, STATE1
-	movdqa		MSGTMP3, MSGTMP4
-	palignr		$4, MSGTMP2, MSGTMP4
-	paddd		MSGTMP4, MSGTMP0
-	sha256msg2	MSGTMP3, MSGTMP0
-	pshufd		$0x0E, MSG, MSG
-	sha256rnds2	STATE1, STATE0
-	sha256msg1	MSGTMP3, MSGTMP2
-
-	/* Rounds 32-35 */
-	movdqa		MSGTMP0, MSG
-	paddd		8*16(SHA256CONSTANTS), MSG
-	sha256rnds2	STATE0, STATE1
-	movdqa		MSGTMP0, MSGTMP4
-	palignr		$4, MSGTMP3, MSGTMP4
-	paddd		MSGTMP4, MSGTMP1
-	sha256msg2	MSGTMP0, MSGTMP1
-	pshufd		$0x0E, MSG, MSG
-	sha256rnds2	STATE1, STATE0
-	sha256msg1	MSGTMP0, MSGTMP3
-
-	/* Rounds 36-39 */
-	movdqa		MSGTMP1, MSG
-	paddd		9*16(SHA256CONSTANTS), MSG
-	sha256rnds2	STATE0, STATE1
-	movdqa		MSGTMP1, MSGTMP4
-	palignr		$4, MSGTMP0, MSGTMP4
-	paddd		MSGTMP4, MSGTMP2
-	sha256msg2	MSGTMP1, MSGTMP2
-	pshufd		$0x0E, MSG, MSG
-	sha256rnds2	STATE1, STATE0
-	sha256msg1	MSGTMP1, MSGTMP0
-
-	/* Rounds 40-43 */
-	movdqa		MSGTMP2, MSG
-	paddd		10*16(SHA256CONSTANTS), MSG
-	sha256rnds2	STATE0, STATE1
-	movdqa		MSGTMP2, MSGTMP4
-	palignr		$4, MSGTMP1, MSGTMP4
-	paddd		MSGTMP4, MSGTMP3
-	sha256msg2	MSGTMP2, MSGTMP3
-	pshufd		$0x0E, MSG, MSG
-	sha256rnds2	STATE1, STATE0
-	sha256msg1	MSGTMP2, MSGTMP1
-
-	/* Rounds 44-47 */
-	movdqa		MSGTMP3, MSG
-	paddd		11*16(SHA256CONSTANTS), MSG
-	sha256rnds2	STATE0, STATE1
-	movdqa		MSGTMP3, MSGTMP4
-	palignr		$4, MSGTMP2, MSGTMP4
-	paddd		MSGTMP4, MSGTMP0
-	sha256msg2	MSGTMP3, MSGTMP0
-	pshufd		$0x0E, MSG, MSG
-	sha256rnds2	STATE1, STATE0
-	sha256msg1	MSGTMP3, MSGTMP2
-
-	/* Rounds 48-51 */
-	movdqa		MSGTMP0, MSG
-	paddd		12*16(SHA256CONSTANTS), MSG
-	sha256rnds2	STATE0, STATE1
-	movdqa		MSGTMP0, MSGTMP4
-	palignr		$4, MSGTMP3, MSGTMP4
-	paddd		MSGTMP4, MSGTMP1
-	sha256msg2	MSGTMP0, MSGTMP1
-	pshufd		$0x0E, MSG, MSG
-	sha256rnds2	STATE1, STATE0
-	sha256msg1	MSGTMP0, MSGTMP3
-
-	/* Rounds 52-55 */
-	movdqa		MSGTMP1, MSG
-	paddd		13*16(SHA256CONSTANTS), MSG
-	sha256rnds2	STATE0, STATE1
-	movdqa		MSGTMP1, MSGTMP4
-	palignr		$4, MSGTMP0, MSGTMP4
-	paddd		MSGTMP4, MSGTMP2
-	sha256msg2	MSGTMP1, MSGTMP2
-	pshufd		$0x0E, MSG, MSG
-	sha256rnds2	STATE1, STATE0
-
-	/* Rounds 56-59 */
-	movdqa		MSGTMP2, MSG
-	paddd		14*16(SHA256CONSTANTS), MSG
-	sha256rnds2	STATE0, STATE1
-	movdqa		MSGTMP2, MSGTMP4
-	palignr		$4, MSGTMP1, MSGTMP4
-	paddd		MSGTMP4, MSGTMP3
-	sha256msg2	MSGTMP2, MSGTMP3
-	pshufd		$0x0E, MSG, MSG
-	sha256rnds2	STATE1, STATE0
-
-	/* Rounds 60-63 */
-	movdqa		MSGTMP3, MSG
-	paddd		15*16(SHA256CONSTANTS), MSG
-	sha256rnds2	STATE0, STATE1
-	pshufd		$0x0E, MSG, MSG
-	sha256rnds2	STATE1, STATE0
+	.irp i, 0, 16, 32, 48
+	do_4rounds	(\i + 0),  MSG0, MSG1, MSG2, MSG3
+	do_4rounds	(\i + 4),  MSG1, MSG2, MSG3, MSG0
+	do_4rounds	(\i + 8),  MSG2, MSG3, MSG0, MSG1
+	do_4rounds	(\i + 12), MSG3, MSG0, MSG1, MSG2
+	.endr

	/* Add current hash values with previously saved */
	paddd		ABEF_SAVE, STATE0
@@ -315,14 +160,14 @@ SYM_TYPED_FUNC_START(sha256_ni_transform)
	jne		.Lloop0

	/* Write hash values back in the correct order */
-	pshufd		$0x1B, STATE0, STATE0		/* FEBA */
-	pshufd		$0xB1, STATE1, STATE1		/* DCHG */
-	movdqa		STATE0, MSGTMP4
-	pblendw		$0xF0, STATE1, STATE0		/* DCBA */
-	palignr		$8, MSGTMP4, STATE1		/* HGFE */
-
-	movdqu		STATE0, 0*16(DIGEST_PTR)
-	movdqu		STATE1, 1*16(DIGEST_PTR)
+	movdqa		STATE0, TMP
+	punpcklqdq	STATE1, STATE0			/* GHEF */
+	punpckhqdq	TMP, STATE1			/* ABCD */
+	pshufd		$0xB1, STATE0, STATE0		/* HGFE */
+	pshufd		$0x1B, STATE1, STATE1		/* DCBA */
+
+	movdqu		STATE1, 0*16(DIGEST_PTR)
+	movdqu		STATE0, 1*16(DIGEST_PTR)

 .Ldone_hash:
......
@@ -680,6 +680,7 @@ SYM_TYPED_FUNC_START(sha512_transform_rorx)
	pop	%r12
	pop	%rbx

+	vzeroupper
	RET
 SYM_FUNC_END(sha512_transform_rorx)
......
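The do_4rounds macro in the sha256_ni diff above makes the round structure explicit: every four rounds consume one schedule register, and the roles of MSG0-MSG3 rotate by one position each step, so sixteen macro expansions inside the .irp replace the former hand-unrolled 64-round body. A small standalone C sketch that prints the expansion pattern (illustrative only, not kernel code):

```c
#include <stdio.h>

/* Prints the sixteen do_4rounds expansions generated by the .irp block. */
int main(void)
{
	static const char *m[4] = { "MSG0", "MSG1", "MSG2", "MSG3" };

	for (int i = 0; i < 64; i += 4)
		printf("do_4rounds %2d, %s, %s, %s, %s\n", i,
		       m[(i / 4) % 4], m[(i / 4 + 1) % 4],
		       m[(i / 4 + 2) % 4], m[(i / 4 + 3) % 4]);
	return 0;
}
```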
@@ -1456,26 +1456,6 @@ config CRYPTO_USER_API_ENABLE_OBSOLETE
	  already been phased out from internal use by the kernel, and are
	  only useful for userspace clients that still rely on them.

-config CRYPTO_STATS
-	bool "Crypto usage statistics"
-	depends on CRYPTO_USER
-	help
-	  Enable the gathering of crypto stats.
-
-	  Enabling this option reduces the performance of the crypto API.  It
-	  should only be enabled when there is actually a use case for it.
-
-	  This collects data sizes, numbers of requests, and numbers
-	  of errors processed by:
-	  - AEAD ciphers (encrypt, decrypt)
-	  - asymmetric key ciphers (encrypt, decrypt, verify, sign)
-	  - symmetric key ciphers (encrypt, decrypt)
-	  - compression algorithms (compress, decompress)
-	  - hash algorithms (hash)
-	  - key-agreement protocol primitives (setsecret, generate
-	    public key, compute shared secret)
-	  - RNG (generate, seed)
-
 endmenu

 config CRYPTO_HASH_INFO
......
@@ -69,8 +69,6 @@ cryptomgr-y := algboss.o testmgr.o

 obj-$(CONFIG_CRYPTO_MANAGER2) += cryptomgr.o
 obj-$(CONFIG_CRYPTO_USER) += crypto_user.o
-crypto_user-y := crypto_user_base.o
-crypto_user-$(CONFIG_CRYPTO_STATS) += crypto_user_stat.o
 obj-$(CONFIG_CRYPTO_CMAC) += cmac.o
 obj-$(CONFIG_CRYPTO_HMAC) += hmac.o
 obj-$(CONFIG_CRYPTO_VMAC) += vmac.o
......
@@ -93,32 +93,6 @@ static unsigned int crypto_acomp_extsize(struct crypto_alg *alg)
	return extsize;
 }

-static inline int __crypto_acomp_report_stat(struct sk_buff *skb,
-					     struct crypto_alg *alg)
-{
-	struct comp_alg_common *calg = __crypto_comp_alg_common(alg);
-	struct crypto_istat_compress *istat = comp_get_stat(calg);
-	struct crypto_stat_compress racomp;
-
-	memset(&racomp, 0, sizeof(racomp));
-
-	strscpy(racomp.type, "acomp", sizeof(racomp.type));
-	racomp.stat_compress_cnt = atomic64_read(&istat->compress_cnt);
-	racomp.stat_compress_tlen = atomic64_read(&istat->compress_tlen);
-	racomp.stat_decompress_cnt = atomic64_read(&istat->decompress_cnt);
-	racomp.stat_decompress_tlen = atomic64_read(&istat->decompress_tlen);
-	racomp.stat_err_cnt = atomic64_read(&istat->err_cnt);
-
-	return nla_put(skb, CRYPTOCFGA_STAT_ACOMP, sizeof(racomp), &racomp);
-}
-
-#ifdef CONFIG_CRYPTO_STATS
-int crypto_acomp_report_stat(struct sk_buff *skb, struct crypto_alg *alg)
-{
-	return __crypto_acomp_report_stat(skb, alg);
-}
-#endif
-
 static const struct crypto_type crypto_acomp_type = {
	.extsize = crypto_acomp_extsize,
	.init_tfm = crypto_acomp_init_tfm,
@@ -127,9 +101,6 @@ static const struct crypto_type crypto_acomp_type = {
 #endif
 #if IS_ENABLED(CONFIG_CRYPTO_USER)
	.report = crypto_acomp_report,
-#endif
-#ifdef CONFIG_CRYPTO_STATS
-	.report_stat = crypto_acomp_report_stat,
 #endif
	.maskclear = ~CRYPTO_ALG_TYPE_MASK,
	.maskset = CRYPTO_ALG_TYPE_ACOMPRESS_MASK,
@@ -184,13 +155,9 @@ EXPORT_SYMBOL_GPL(acomp_request_free);

 void comp_prepare_alg(struct comp_alg_common *alg)
 {
-	struct crypto_istat_compress *istat = comp_get_stat(alg);
	struct crypto_alg *base = &alg->base;

	base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
-
-	if (IS_ENABLED(CONFIG_CRYPTO_STATS))
-		memset(istat, 0, sizeof(*istat));
 }

 int crypto_register_acomp(struct acomp_alg *alg)
......
@@ -20,15 +20,6 @@

 #include "internal.h"

-static inline struct crypto_istat_aead *aead_get_stat(struct aead_alg *alg)
-{
-#ifdef CONFIG_CRYPTO_STATS
-	return &alg->stat;
-#else
-	return NULL;
-#endif
-}
-
 static int setkey_unaligned(struct crypto_aead *tfm, const u8 *key,
			    unsigned int keylen)
 {
@@ -45,8 +36,7 @@ static int setkey_unaligned(struct crypto_aead *tfm, const u8 *key,
	alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
	memcpy(alignbuffer, key, keylen);
	ret = crypto_aead_alg(tfm)->setkey(tfm, alignbuffer, keylen);
-	memset(alignbuffer, 0, keylen);
-	kfree(buffer);
+	kfree_sensitive(buffer);
	return ret;
 }

@@ -90,62 +80,28 @@ int crypto_aead_setauthsize(struct crypto_aead *tfm, unsigned int authsize)
 }
 EXPORT_SYMBOL_GPL(crypto_aead_setauthsize);

-static inline int crypto_aead_errstat(struct crypto_istat_aead *istat, int err)
-{
-	if (!IS_ENABLED(CONFIG_CRYPTO_STATS))
-		return err;
-
-	if (err && err != -EINPROGRESS && err != -EBUSY)
-		atomic64_inc(&istat->err_cnt);
-
-	return err;
-}
-
 int crypto_aead_encrypt(struct aead_request *req)
 {
	struct crypto_aead *aead = crypto_aead_reqtfm(req);
-	struct aead_alg *alg = crypto_aead_alg(aead);
-	struct crypto_istat_aead *istat;
-	int ret;
-
-	istat = aead_get_stat(alg);
-
-	if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
-		atomic64_inc(&istat->encrypt_cnt);
-		atomic64_add(req->cryptlen, &istat->encrypt_tlen);
-	}

	if (crypto_aead_get_flags(aead) & CRYPTO_TFM_NEED_KEY)
-		ret = -ENOKEY;
-	else
-		ret = alg->encrypt(req);
+		return -ENOKEY;

-	return crypto_aead_errstat(istat, ret);
+	return crypto_aead_alg(aead)->encrypt(req);
 }
 EXPORT_SYMBOL_GPL(crypto_aead_encrypt);

 int crypto_aead_decrypt(struct aead_request *req)
 {
	struct crypto_aead *aead = crypto_aead_reqtfm(req);
-	struct aead_alg *alg = crypto_aead_alg(aead);
-	struct crypto_istat_aead *istat;
-	int ret;
-
-	istat = aead_get_stat(alg);
-
-	if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
-		atomic64_inc(&istat->encrypt_cnt);
-		atomic64_add(req->cryptlen, &istat->encrypt_tlen);
-	}

	if (crypto_aead_get_flags(aead) & CRYPTO_TFM_NEED_KEY)
-		ret = -ENOKEY;
-	else if (req->cryptlen < crypto_aead_authsize(aead))
-		ret = -EINVAL;
-	else
-		ret = alg->decrypt(req);
+		return -ENOKEY;
+
+	if (req->cryptlen < crypto_aead_authsize(aead))
+		return -EINVAL;

-	return crypto_aead_errstat(istat, ret);
+	return crypto_aead_alg(aead)->decrypt(req);
 }
 EXPORT_SYMBOL_GPL(crypto_aead_decrypt);

@@ -215,26 +171,6 @@ static void crypto_aead_free_instance(struct crypto_instance *inst)
	aead->free(aead);
 }

-static int __maybe_unused crypto_aead_report_stat(
-	struct sk_buff *skb, struct crypto_alg *alg)
-{
-	struct aead_alg *aead = container_of(alg, struct aead_alg, base);
-	struct crypto_istat_aead *istat = aead_get_stat(aead);
-	struct crypto_stat_aead raead;
-
-	memset(&raead, 0, sizeof(raead));
-
-	strscpy(raead.type, "aead", sizeof(raead.type));
-
-	raead.stat_encrypt_cnt = atomic64_read(&istat->encrypt_cnt);
-	raead.stat_encrypt_tlen = atomic64_read(&istat->encrypt_tlen);
-	raead.stat_decrypt_cnt = atomic64_read(&istat->decrypt_cnt);
-	raead.stat_decrypt_tlen = atomic64_read(&istat->decrypt_tlen);
-	raead.stat_err_cnt = atomic64_read(&istat->err_cnt);
-
-	return nla_put(skb, CRYPTOCFGA_STAT_AEAD, sizeof(raead), &raead);
-}
-
 static const struct crypto_type crypto_aead_type = {
	.extsize = crypto_alg_extsize,
	.init_tfm = crypto_aead_init_tfm,
@@ -244,9 +180,6 @@ static const struct crypto_type crypto_aead_type = {
 #endif
 #if IS_ENABLED(CONFIG_CRYPTO_USER)
	.report = crypto_aead_report,
-#endif
-#ifdef CONFIG_CRYPTO_STATS
-	.report_stat = crypto_aead_report_stat,
 #endif
	.maskclear = ~CRYPTO_ALG_TYPE_MASK,
	.maskset = CRYPTO_ALG_TYPE_MASK,
@@ -277,7 +210,6 @@ EXPORT_SYMBOL_GPL(crypto_has_aead);

 static int aead_prepare_alg(struct aead_alg *alg)
 {
-	struct crypto_istat_aead *istat = aead_get_stat(alg);
	struct crypto_alg *base = &alg->base;

	if (max3(alg->maxauthsize, alg->ivsize, alg->chunksize) >
@@ -291,9 +223,6 @@ static int aead_prepare_alg(struct aead_alg *alg)
	base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
	base->cra_flags |= CRYPTO_ALG_TYPE_AEAD;

-	if (IS_ENABLED(CONFIG_CRYPTO_STATS))
-		memset(istat, 0, sizeof(*istat));
-
	return 0;
 }
......
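The setkey_unaligned() hunk above folds an explicit memset()+kfree() pair into kfree_sensitive(), which zeroes the allocation before freeing it so a copied key cannot linger in freed memory (the same change lands in crypto/cipher.c further down). A minimal sketch of the idiom, with a hypothetical helper name:

```c
#include <linux/slab.h>
#include <linux/types.h>

static void drop_key_copy(u8 *buffer)
{
	/* before: memset(buffer, 0, keylen); kfree(buffer); */
	kfree_sensitive(buffer);	/* zeroes the object, then frees it */
}
```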
@@ -27,22 +27,6 @@

 #define CRYPTO_ALG_TYPE_AHASH_MASK	0x0000000e

-static inline struct crypto_istat_hash *ahash_get_stat(struct ahash_alg *alg)
-{
-	return hash_get_stat(&alg->halg);
-}
-
-static inline int crypto_ahash_errstat(struct ahash_alg *alg, int err)
-{
-	if (!IS_ENABLED(CONFIG_CRYPTO_STATS))
-		return err;
-
-	if (err && err != -EINPROGRESS && err != -EBUSY)
-		atomic64_inc(&ahash_get_stat(alg)->err_cnt);
-
-	return err;
-}
-
 /*
  * For an ahash tfm that is using an shash algorithm (instead of an ahash
  * algorithm), this returns the underlying shash tfm.
@@ -344,75 +328,47 @@ static void ahash_restore_req(struct ahash_request *req, int err)
 int crypto_ahash_update(struct ahash_request *req)
 {
	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ahash_alg *alg;

	if (likely(tfm->using_shash))
		return shash_ahash_update(req, ahash_request_ctx(req));

-	alg = crypto_ahash_alg(tfm);
-	if (IS_ENABLED(CONFIG_CRYPTO_STATS))
-		atomic64_add(req->nbytes, &ahash_get_stat(alg)->hash_tlen);
-	return crypto_ahash_errstat(alg, alg->update(req));
+	return crypto_ahash_alg(tfm)->update(req);
 }
 EXPORT_SYMBOL_GPL(crypto_ahash_update);

 int crypto_ahash_final(struct ahash_request *req)
 {
	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ahash_alg *alg;

	if (likely(tfm->using_shash))
		return crypto_shash_final(ahash_request_ctx(req), req->result);

-	alg = crypto_ahash_alg(tfm);
-	if (IS_ENABLED(CONFIG_CRYPTO_STATS))
-		atomic64_inc(&ahash_get_stat(alg)->hash_cnt);
-	return crypto_ahash_errstat(alg, alg->final(req));
+	return crypto_ahash_alg(tfm)->final(req);
 }
 EXPORT_SYMBOL_GPL(crypto_ahash_final);

 int crypto_ahash_finup(struct ahash_request *req)
 {
	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ahash_alg *alg;

	if (likely(tfm->using_shash))
		return shash_ahash_finup(req, ahash_request_ctx(req));

-	alg = crypto_ahash_alg(tfm);
-	if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
-		struct crypto_istat_hash *istat = ahash_get_stat(alg);
-
-		atomic64_inc(&istat->hash_cnt);
-		atomic64_add(req->nbytes, &istat->hash_tlen);
-	}
-	return crypto_ahash_errstat(alg, alg->finup(req));
+	return crypto_ahash_alg(tfm)->finup(req);
 }
 EXPORT_SYMBOL_GPL(crypto_ahash_finup);

 int crypto_ahash_digest(struct ahash_request *req)
 {
	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ahash_alg *alg;
-	int err;

	if (likely(tfm->using_shash))
		return shash_ahash_digest(req, prepare_shash_desc(req, tfm));

-	alg = crypto_ahash_alg(tfm);
-	if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
-		struct crypto_istat_hash *istat = ahash_get_stat(alg);
-
-		atomic64_inc(&istat->hash_cnt);
-		atomic64_add(req->nbytes, &istat->hash_tlen);
-	}
-
	if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
-		err = -ENOKEY;
-	else
-		err = alg->digest(req);
+		return -ENOKEY;

-	return crypto_ahash_errstat(alg, err);
+	return crypto_ahash_alg(tfm)->digest(req);
 }
 EXPORT_SYMBOL_GPL(crypto_ahash_digest);

@@ -571,12 +527,6 @@ static void crypto_ahash_show(struct seq_file *m, struct crypto_alg *alg)
			       __crypto_hash_alg_common(alg)->digestsize);
 }

-static int __maybe_unused crypto_ahash_report_stat(
-	struct sk_buff *skb, struct crypto_alg *alg)
-{
-	return crypto_hash_report_stat(skb, alg, "ahash");
-}
-
 static const struct crypto_type crypto_ahash_type = {
	.extsize = crypto_ahash_extsize,
	.init_tfm = crypto_ahash_init_tfm,
@@ -586,9 +536,6 @@ static const struct crypto_type crypto_ahash_type = {
 #endif
 #if IS_ENABLED(CONFIG_CRYPTO_USER)
	.report = crypto_ahash_report,
-#endif
-#ifdef CONFIG_CRYPTO_STATS
-	.report_stat = crypto_ahash_report_stat,
 #endif
	.maskclear = ~CRYPTO_ALG_TYPE_MASK,
	.maskset = CRYPTO_ALG_TYPE_AHASH_MASK,
......
@@ -70,30 +70,6 @@ static void crypto_akcipher_free_instance(struct crypto_instance *inst)
	akcipher->free(akcipher);
 }

-static int __maybe_unused crypto_akcipher_report_stat(
-	struct sk_buff *skb, struct crypto_alg *alg)
-{
-	struct akcipher_alg *akcipher = __crypto_akcipher_alg(alg);
-	struct crypto_istat_akcipher *istat;
-	struct crypto_stat_akcipher rakcipher;
-
-	istat = akcipher_get_stat(akcipher);
-
-	memset(&rakcipher, 0, sizeof(rakcipher));
-
-	strscpy(rakcipher.type, "akcipher", sizeof(rakcipher.type));
-	rakcipher.stat_encrypt_cnt = atomic64_read(&istat->encrypt_cnt);
-	rakcipher.stat_encrypt_tlen = atomic64_read(&istat->encrypt_tlen);
-	rakcipher.stat_decrypt_cnt = atomic64_read(&istat->decrypt_cnt);
-	rakcipher.stat_decrypt_tlen = atomic64_read(&istat->decrypt_tlen);
-	rakcipher.stat_sign_cnt = atomic64_read(&istat->sign_cnt);
-	rakcipher.stat_verify_cnt = atomic64_read(&istat->verify_cnt);
-	rakcipher.stat_err_cnt = atomic64_read(&istat->err_cnt);
-
-	return nla_put(skb, CRYPTOCFGA_STAT_AKCIPHER,
-		       sizeof(rakcipher), &rakcipher);
-}
-
 static const struct crypto_type crypto_akcipher_type = {
	.extsize = crypto_alg_extsize,
	.init_tfm = crypto_akcipher_init_tfm,
@@ -103,9 +79,6 @@ static const struct crypto_type crypto_akcipher_type = {
 #endif
 #if IS_ENABLED(CONFIG_CRYPTO_USER)
	.report = crypto_akcipher_report,
-#endif
-#ifdef CONFIG_CRYPTO_STATS
-	.report_stat = crypto_akcipher_report_stat,
 #endif
	.maskclear = ~CRYPTO_ALG_TYPE_MASK,
	.maskset = CRYPTO_ALG_TYPE_AHASH_MASK,
@@ -131,15 +104,11 @@ EXPORT_SYMBOL_GPL(crypto_alloc_akcipher);

 static void akcipher_prepare_alg(struct akcipher_alg *alg)
 {
-	struct crypto_istat_akcipher *istat = akcipher_get_stat(alg);
	struct crypto_alg *base = &alg->base;

	base->cra_type = &crypto_akcipher_type;
	base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
	base->cra_flags |= CRYPTO_ALG_TYPE_AKCIPHER;
-
-	if (IS_ENABLED(CONFIG_CRYPTO_STATS))
-		memset(istat, 0, sizeof(*istat));
 }

 static int akcipher_default_op(struct akcipher_request *req)
......
@@ -138,9 +138,6 @@ static int cryptomgr_schedule_probe(struct crypto_larval *larval)
			goto err_free_param;
	}

-	if (!i)
-		goto err_free_param;
-
	param->tb[i + 1] = NULL;

	param->type.attr.rta_len = sizeof(param->type);
......
@@ -202,18 +202,18 @@ static void crypto_start_test(struct crypto_larval *larval)
 static struct crypto_alg *crypto_larval_wait(struct crypto_alg *alg)
 {
	struct crypto_larval *larval = (void *)alg;
-	long timeout;
+	long time_left;

	if (!crypto_boot_test_finished())
		crypto_start_test(larval);

-	timeout = wait_for_completion_killable_timeout(
+	time_left = wait_for_completion_killable_timeout(
		&larval->completion, 60 * HZ);

	alg = larval->adult;
-	if (timeout < 0)
+	if (time_left < 0)
		alg = ERR_PTR(-EINTR);
-	else if (!timeout)
+	else if (!time_left)
		alg = ERR_PTR(-ETIMEDOUT);
	else if (!alg)
		alg = ERR_PTR(-ENOENT);
......
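The rename above (and the matching one in the sahara driver) is purely about readability: wait_for_completion_killable_timeout() does not return the timeout that was passed in, it returns the time remaining, so 'time_left' describes the value where 'timeout' did not. A minimal sketch of the convention, with a hypothetical caller:

```c
#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/jiffies.h>

static int wait_one_minute(struct completion *done)
{
	long time_left = wait_for_completion_killable_timeout(done, 60 * HZ);

	if (time_left < 0)
		return time_left;	/* fatal signal: -ERESTARTSYS */
	if (!time_left)
		return -ETIMEDOUT;	/* the full 60 seconds elapsed */
	return 0;			/* completed with jiffies to spare */
}
```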
@@ -234,6 +234,7 @@ static int software_key_query(const struct kernel_pkey_params *params,
	info->key_size = len * 8;

	if (strncmp(pkey->pkey_algo, "ecdsa", 5) == 0) {
+		int slen = len;
		/*
		 * ECDSA key sizes are much smaller than RSA, and thus could
		 * operate on (hashed) inputs that are larger than key size.
@@ -247,8 +248,19 @@ static int software_key_query(const struct kernel_pkey_params *params,
		 * Verify takes ECDSA-Sig (described in RFC 5480) as input,
		 * which is actually 2 'key_size'-bit integers encoded in
		 * ASN.1.  Account for the ASN.1 encoding overhead here.
+		 *
+		 * NIST P192/256/384 may prepend a '0' to a coordinate to
+		 * indicate a positive integer. NIST P521 never needs it.
		 */
-		info->max_sig_size = 2 * (len + 3) + 2;
+		if (strcmp(pkey->pkey_algo, "ecdsa-nist-p521") != 0)
+			slen += 1;
+		/* Length of encoding the x & y coordinates */
+		slen = 2 * (slen + 2);
+		/*
+		 * If coordinate encoding takes at least 128 bytes then an
+		 * additional byte for length encoding is needed.
+		 */
+		info->max_sig_size = 1 + (slen >= 128) + 1 + slen;
	} else {
		info->max_data_size = len;
		info->max_sig_size = len;
......
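The new bound can be checked by hand. For P-256 (len = 32), each coordinate may need one leading zero byte (33), each INTEGER adds two bytes of tag and length, and the outer SEQUENCE adds two more, giving 72 bytes; P-521 (len = 66) skips the pad byte but crosses the 128-byte long-form-length threshold, giving 139. A standalone C model of the arithmetic (illustrative only, not the kernel function):

```c
/* Mirrors the slen computation in software_key_query() above. */
static unsigned int ecdsa_max_sig_size(unsigned int len, int is_p521)
{
	unsigned int slen = len + (is_p521 ? 0 : 1);	/* optional 0x00 pad */

	slen = 2 * (slen + 2);			/* two INTEGERs: tag + len + value */
	return 1 + (slen >= 128) + 1 + slen;	/* SEQUENCE tag + length bytes */
}
```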
@@ -60,24 +60,23 @@ EXPORT_SYMBOL_GPL(x509_free_certificate);
 */
 struct x509_certificate *x509_cert_parse(const void *data, size_t datalen)
 {
-	struct x509_certificate *cert;
-	struct x509_parse_context *ctx;
+	struct x509_certificate *cert __free(x509_free_certificate);
+	struct x509_parse_context *ctx __free(kfree) = NULL;
	struct asymmetric_key_id *kid;
	long ret;

-	ret = -ENOMEM;
	cert = kzalloc(sizeof(struct x509_certificate), GFP_KERNEL);
	if (!cert)
-		goto error_no_cert;
+		return ERR_PTR(-ENOMEM);

	cert->pub = kzalloc(sizeof(struct public_key), GFP_KERNEL);
	if (!cert->pub)
-		goto error_no_ctx;
+		return ERR_PTR(-ENOMEM);

	cert->sig = kzalloc(sizeof(struct public_key_signature), GFP_KERNEL);
	if (!cert->sig)
-		goto error_no_ctx;
+		return ERR_PTR(-ENOMEM);

	ctx = kzalloc(sizeof(struct x509_parse_context), GFP_KERNEL);
	if (!ctx)
-		goto error_no_ctx;
+		return ERR_PTR(-ENOMEM);

	ctx->cert = cert;
	ctx->data = (unsigned long)data;
@@ -85,7 +84,7 @@ struct x509_certificate *x509_cert_parse(const void *data, size_t datalen)
	/* Attempt to decode the certificate */
	ret = asn1_ber_decoder(&x509_decoder, ctx, data, datalen);
	if (ret < 0)
-		goto error_decode;
+		return ERR_PTR(ret);

	/* Decode the AuthorityKeyIdentifier */
	if (ctx->raw_akid) {
@@ -95,20 +94,19 @@ struct x509_certificate *x509_cert_parse(const void *data, size_t datalen)
			  ctx->raw_akid, ctx->raw_akid_size);
		if (ret < 0) {
			pr_warn("Couldn't decode AuthKeyIdentifier\n");
-			goto error_decode;
+			return ERR_PTR(ret);
		}
	}

-	ret = -ENOMEM;
	cert->pub->key = kmemdup(ctx->key, ctx->key_size, GFP_KERNEL);
	if (!cert->pub->key)
-		goto error_decode;
+		return ERR_PTR(-ENOMEM);

	cert->pub->keylen = ctx->key_size;

	cert->pub->params = kmemdup(ctx->params, ctx->params_size, GFP_KERNEL);
	if (!cert->pub->params)
-		goto error_decode;
+		return ERR_PTR(-ENOMEM);

	cert->pub->paramlen = ctx->params_size;
	cert->pub->algo = ctx->key_algo;
@@ -116,33 +114,23 @@ struct x509_certificate *x509_cert_parse(const void *data, size_t datalen)
	/* Grab the signature bits */
	ret = x509_get_sig_params(cert);
	if (ret < 0)
-		goto error_decode;
+		return ERR_PTR(ret);

	/* Generate cert issuer + serial number key ID */
	kid = asymmetric_key_generate_id(cert->raw_serial,
					 cert->raw_serial_size,
					 cert->raw_issuer,
					 cert->raw_issuer_size);
-	if (IS_ERR(kid)) {
-		ret = PTR_ERR(kid);
-		goto error_decode;
-	}
+	if (IS_ERR(kid))
+		return ERR_CAST(kid);
	cert->id = kid;

	/* Detect self-signed certificates */
	ret = x509_check_for_self_signed(cert);
	if (ret < 0)
-		goto error_decode;
-
-	kfree(ctx);
-	return cert;
-
-error_decode:
-	kfree(ctx);
-error_no_ctx:
-	x509_free_certificate(cert);
-error_no_cert:
-	return ERR_PTR(ret);
+		return ERR_PTR(ret);
+
+	return_ptr(cert);
 }
 EXPORT_SYMBOL_GPL(x509_cert_parse);

@@ -546,6 +534,9 @@ int x509_extract_key_data(void *context, size_t hdrlen,
	case OID_id_ansip384r1:
		ctx->cert->pub->pkey_algo = "ecdsa-nist-p384";
		break;
+	case OID_id_ansip521r1:
+		ctx->cert->pub->pkey_algo = "ecdsa-nist-p521";
+		break;
	default:
		return -ENOPKG;
	}
......
@@ -5,6 +5,7 @@
 * Written by David Howells (dhowells@redhat.com)
 */

+#include <linux/cleanup.h>
 #include <linux/time.h>
 #include <crypto/public_key.h>
 #include <keys/asymmetric-type.h>
@@ -44,6 +45,8 @@ struct x509_certificate {
 * x509_cert_parser.c
 */
 extern void x509_free_certificate(struct x509_certificate *cert);
+DEFINE_FREE(x509_free_certificate, struct x509_certificate *,
+	    if (!IS_ERR(_T)) x509_free_certificate(_T))
 extern struct x509_certificate *x509_cert_parse(const void *data, size_t datalen);
 extern int x509_decode_time(time64_t *_t, size_t hdrlen,
			    unsigned char tag,
......
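DEFINE_FREE() above is the <linux/cleanup.h> half of the "scope-based x509_certificate allocation" mentioned in the merge summary: a pointer declared with __free() runs its registered destructor when it goes out of scope, and return_ptr() transfers ownership out without triggering it. A minimal sketch of the pattern with a made-up 'thing' type (the x509 conversions below follow the same shape):

```c
#include <linux/cleanup.h>
#include <linux/err.h>
#include <linux/slab.h>

struct thing { int val; };

DEFINE_FREE(thing_free, struct thing *, if (_T) kfree(_T))

static struct thing *thing_create(int val)
{
	struct thing *t __free(thing_free) = kzalloc(sizeof(*t), GFP_KERNEL);

	if (!t)
		return ERR_PTR(-ENOMEM);
	if (val < 0)
		return ERR_PTR(-EINVAL);	/* 't' is freed automatically */
	t->val = val;
	return_ptr(t);				/* hand ownership to the caller */
}
```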
@@ -161,12 +161,11 @@ int x509_check_for_self_signed(struct x509_certificate *cert)
  */
 static int x509_key_preparse(struct key_preparsed_payload *prep)
 {
-	struct asymmetric_key_ids *kids;
-	struct x509_certificate *cert;
+	struct x509_certificate *cert __free(x509_free_certificate);
+	struct asymmetric_key_ids *kids __free(kfree) = NULL;
+	char *p, *desc __free(kfree) = NULL;
 	const char *q;
 	size_t srlen, sulen;
-	char *desc = NULL, *p;
-	int ret;

 	cert = x509_cert_parse(prep->data, prep->datalen);
 	if (IS_ERR(cert))
@@ -188,9 +187,8 @@ static int x509_key_preparse(struct key_preparsed_payload *prep)
 	}

 	/* Don't permit addition of blacklisted keys */
-	ret = -EKEYREJECTED;
 	if (cert->blacklisted)
-		goto error_free_cert;
+		return -EKEYREJECTED;

 	/* Propose a description */
 	sulen = strlen(cert->subject);
@@ -202,10 +200,9 @@ static int x509_key_preparse(struct key_preparsed_payload *prep)
 		q = cert->raw_serial;
 	}

-	ret = -ENOMEM;
 	desc = kmalloc(sulen + 2 + srlen * 2 + 1, GFP_KERNEL);
 	if (!desc)
-		goto error_free_cert;
+		return -ENOMEM;
 	p = memcpy(desc, cert->subject, sulen);
 	p += sulen;
 	*p++ = ':';
@@ -215,16 +212,14 @@ static int x509_key_preparse(struct key_preparsed_payload *prep)
 	kids = kmalloc(sizeof(struct asymmetric_key_ids), GFP_KERNEL);
 	if (!kids)
-		goto error_free_desc;
+		return -ENOMEM;
 	kids->id[0] = cert->id;
 	kids->id[1] = cert->skid;
 	kids->id[2] = asymmetric_key_generate_id(cert->raw_subject,
 						 cert->raw_subject_size,
 						 "", 0);
-	if (IS_ERR(kids->id[2])) {
-		ret = PTR_ERR(kids->id[2]);
-		goto error_free_kids;
-	}
+	if (IS_ERR(kids->id[2]))
+		return PTR_ERR(kids->id[2]);

 	/* We're pinning the module by being linked against it */
 	__module_get(public_key_subtype.owner);
@@ -242,15 +237,7 @@ static int x509_key_preparse(struct key_preparsed_payload *prep)
 	cert->sig = NULL;
 	desc = NULL;
 	kids = NULL;
-	ret = 0;
-
-error_free_kids:
-	kfree(kids);
-error_free_desc:
-	kfree(desc);
-error_free_cert:
-	x509_free_certificate(cert);
-	return ret;
+	return 0;
 }

 static struct asymmetric_key_parser x509_key_parser = {
...
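The last hunk shows the one subtlety of __free(): ownership transfer. On success the prepared payload has taken over cert->sig, desc and kids, so the function nulls those pointers before returning; the cleanup handlers still run, but kfree(NULL) is a no-op and x509_free_certificate() now frees a certificate whose transferred members have been detached. Forgetting this step would turn the automatic cleanup into a double free.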
@@ -34,8 +34,7 @@ static int setkey_unaligned(struct crypto_cipher *tfm, const u8 *key,
 	alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
 	memcpy(alignbuffer, key, keylen);
 	ret = cia->cia_setkey(crypto_cipher_tfm(tfm), alignbuffer, keylen);
-	memset(alignbuffer, 0, keylen);
-	kfree(buffer);
+	kfree_sensitive(buffer);
 	return ret;
 }
...
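This memset()+kfree() pair is the pattern the series keeps replacing: a plain memset of a buffer about to be freed is a dead store the compiler may legally elide, and here it also cleared only the aligned window rather than the whole allocation. kfree_sensitive() zeroizes with memzero_explicit() — a memset variant the optimizer must not drop — before freeing. A rough sketch of its behaviour (illustrative only; the real implementation lives in the slab code):

/* Simplified kfree_sensitive() look-alike, for illustration. */
static void kfree_sensitive_sketch(const void *objp)
{
	if (!objp)
		return;
	memzero_explicit((void *)objp, ksize(objp)); /* whole allocation */
	kfree(objp);
}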
@@ -13,14 +13,11 @@
 struct acomp_req;
 struct comp_alg_common;
-struct sk_buff;

 int crypto_init_scomp_ops_async(struct crypto_tfm *tfm);
 struct acomp_req *crypto_acomp_scomp_alloc_ctx(struct acomp_req *req);
 void crypto_acomp_scomp_free_ctx(struct acomp_req *req);

-int crypto_acomp_report_stat(struct sk_buff *skb, struct crypto_alg *alg);
-
 void comp_prepare_alg(struct comp_alg_common *alg);

 #endif /* _LOCAL_CRYPTO_COMPRESS_H */
@@ -18,7 +18,6 @@
 #include <crypto/internal/rng.h>
 #include <crypto/akcipher.h>
 #include <crypto/kpp.h>
-#include <crypto/internal/cryptouser.h>

 #include "internal.h"
@@ -33,7 +32,7 @@ struct crypto_dump_info {
 	u16 nlmsg_flags;
 };

-struct crypto_alg *crypto_alg_match(struct crypto_user_alg *p, int exact)
+static struct crypto_alg *crypto_alg_match(struct crypto_user_alg *p, int exact)
 {
 	struct crypto_alg *q, *alg = NULL;
@@ -387,6 +386,13 @@ static int crypto_del_rng(struct sk_buff *skb, struct nlmsghdr *nlh,
 	return crypto_del_default_rng();
 }

+static int crypto_reportstat(struct sk_buff *in_skb, struct nlmsghdr *in_nlh,
+			     struct nlattr **attrs)
+{
+	/* No longer supported */
+	return -ENOTSUPP;
+}
+
 #define MSGSIZE(type) sizeof(struct type)

 static const int crypto_msg_min[CRYPTO_NR_MSGTYPES] = {
...
// SPDX-License-Identifier: GPL-2.0
/*
* Crypto user configuration API.
*
* Copyright (C) 2017-2018 Corentin Labbe <clabbe@baylibre.com>
*
*/
#include <crypto/algapi.h>
#include <crypto/internal/cryptouser.h>
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/string.h>
#include <net/netlink.h>
#include <net/sock.h>
#define null_terminated(x) (strnlen(x, sizeof(x)) < sizeof(x))
struct crypto_dump_info {
struct sk_buff *in_skb;
struct sk_buff *out_skb;
u32 nlmsg_seq;
u16 nlmsg_flags;
};
static int crypto_report_cipher(struct sk_buff *skb, struct crypto_alg *alg)
{
struct crypto_stat_cipher rcipher;
memset(&rcipher, 0, sizeof(rcipher));
strscpy(rcipher.type, "cipher", sizeof(rcipher.type));
return nla_put(skb, CRYPTOCFGA_STAT_CIPHER, sizeof(rcipher), &rcipher);
}
static int crypto_report_comp(struct sk_buff *skb, struct crypto_alg *alg)
{
struct crypto_stat_compress rcomp;
memset(&rcomp, 0, sizeof(rcomp));
strscpy(rcomp.type, "compression", sizeof(rcomp.type));
return nla_put(skb, CRYPTOCFGA_STAT_COMPRESS, sizeof(rcomp), &rcomp);
}
static int crypto_reportstat_one(struct crypto_alg *alg,
struct crypto_user_alg *ualg,
struct sk_buff *skb)
{
memset(ualg, 0, sizeof(*ualg));
strscpy(ualg->cru_name, alg->cra_name, sizeof(ualg->cru_name));
strscpy(ualg->cru_driver_name, alg->cra_driver_name,
sizeof(ualg->cru_driver_name));
strscpy(ualg->cru_module_name, module_name(alg->cra_module),
sizeof(ualg->cru_module_name));
ualg->cru_type = 0;
ualg->cru_mask = 0;
ualg->cru_flags = alg->cra_flags;
ualg->cru_refcnt = refcount_read(&alg->cra_refcnt);
if (nla_put_u32(skb, CRYPTOCFGA_PRIORITY_VAL, alg->cra_priority))
goto nla_put_failure;
if (alg->cra_flags & CRYPTO_ALG_LARVAL) {
struct crypto_stat_larval rl;
memset(&rl, 0, sizeof(rl));
strscpy(rl.type, "larval", sizeof(rl.type));
if (nla_put(skb, CRYPTOCFGA_STAT_LARVAL, sizeof(rl), &rl))
goto nla_put_failure;
goto out;
}
if (alg->cra_type && alg->cra_type->report_stat) {
if (alg->cra_type->report_stat(skb, alg))
goto nla_put_failure;
goto out;
}
switch (alg->cra_flags & (CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_LARVAL)) {
case CRYPTO_ALG_TYPE_CIPHER:
if (crypto_report_cipher(skb, alg))
goto nla_put_failure;
break;
case CRYPTO_ALG_TYPE_COMPRESS:
if (crypto_report_comp(skb, alg))
goto nla_put_failure;
break;
default:
pr_err("ERROR: Unhandled alg %d in %s\n",
alg->cra_flags & (CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_LARVAL),
__func__);
}
out:
return 0;
nla_put_failure:
return -EMSGSIZE;
}
static int crypto_reportstat_alg(struct crypto_alg *alg,
struct crypto_dump_info *info)
{
struct sk_buff *in_skb = info->in_skb;
struct sk_buff *skb = info->out_skb;
struct nlmsghdr *nlh;
struct crypto_user_alg *ualg;
int err = 0;
nlh = nlmsg_put(skb, NETLINK_CB(in_skb).portid, info->nlmsg_seq,
CRYPTO_MSG_GETSTAT, sizeof(*ualg), info->nlmsg_flags);
if (!nlh) {
err = -EMSGSIZE;
goto out;
}
ualg = nlmsg_data(nlh);
err = crypto_reportstat_one(alg, ualg, skb);
if (err) {
nlmsg_cancel(skb, nlh);
goto out;
}
nlmsg_end(skb, nlh);
out:
return err;
}
int crypto_reportstat(struct sk_buff *in_skb, struct nlmsghdr *in_nlh,
struct nlattr **attrs)
{
struct net *net = sock_net(in_skb->sk);
struct crypto_user_alg *p = nlmsg_data(in_nlh);
struct crypto_alg *alg;
struct sk_buff *skb;
struct crypto_dump_info info;
int err;
if (!null_terminated(p->cru_name) || !null_terminated(p->cru_driver_name))
return -EINVAL;
alg = crypto_alg_match(p, 0);
if (!alg)
return -ENOENT;
err = -ENOMEM;
skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_ATOMIC);
if (!skb)
goto drop_alg;
info.in_skb = in_skb;
info.out_skb = skb;
info.nlmsg_seq = in_nlh->nlmsg_seq;
info.nlmsg_flags = 0;
err = crypto_reportstat_alg(alg, &info);
drop_alg:
crypto_mod_put(alg);
if (err) {
kfree_skb(skb);
return err;
}
return nlmsg_unicast(net->crypto_nlsk, skb, NETLINK_CB(in_skb).portid);
}
MODULE_LICENSE("GPL");
@@ -60,6 +60,8 @@ const struct ecc_curve *ecc_get_curve(unsigned int curve_id)
 		return &nist_p256;
 	case ECC_CURVE_NIST_P384:
 		return &nist_p384;
+	case ECC_CURVE_NIST_P521:
+		return &nist_p521;
 	default:
 		return NULL;
 	}
@@ -689,7 +691,7 @@ static void vli_mmod_barrett(u64 *result, u64 *product, const u64 *mod,
 static void vli_mmod_fast_192(u64 *result, const u64 *product,
 			      const u64 *curve_prime, u64 *tmp)
 {
-	const unsigned int ndigits = 3;
+	const unsigned int ndigits = ECC_CURVE_NIST_P192_DIGITS;
 	int carry;

 	vli_set(result, product, ndigits);
@@ -717,7 +719,7 @@ static void vli_mmod_fast_256(u64 *result, const u64 *product,
 			      const u64 *curve_prime, u64 *tmp)
 {
 	int carry;
-	const unsigned int ndigits = 4;
+	const unsigned int ndigits = ECC_CURVE_NIST_P256_DIGITS;

 	/* t */
 	vli_set(result, product, ndigits);
@@ -800,7 +802,7 @@ static void vli_mmod_fast_384(u64 *result, const u64 *product,
 			      const u64 *curve_prime, u64 *tmp)
 {
 	int carry;
-	const unsigned int ndigits = 6;
+	const unsigned int ndigits = ECC_CURVE_NIST_P384_DIGITS;

 	/* t */
 	vli_set(result, product, ndigits);
@@ -902,6 +904,28 @@ static void vli_mmod_fast_384(u64 *result, const u64 *product,
 #undef AND64H
 #undef AND64L

+/*
+ * Computes result = product % curve_prime
+ * from "Recommendations for Discrete Logarithm-Based Cryptography:
+ * Elliptic Curve Domain Parameters" section G.1.4
+ */
+static void vli_mmod_fast_521(u64 *result, const u64 *product,
+			      const u64 *curve_prime, u64 *tmp)
+{
+	const unsigned int ndigits = ECC_CURVE_NIST_P521_DIGITS;
+	size_t i;
+
+	/* Initialize result with lowest 521 bits from product */
+	vli_set(result, product, ndigits);
+	result[8] &= 0x1ff;
+
+	for (i = 0; i < ndigits; i++)
+		tmp[i] = (product[8 + i] >> 9) | (product[9 + i] << 55);
+	tmp[8] &= 0x1ff;
+
+	vli_mod_add(result, result, tmp, curve_prime, ndigits);
+}
+
 /* Computes result = product % curve_prime for different curve_primes.
  *
  * Note that curve_primes are distinguished just by heuristic check and
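Why a single shift-and-add suffices here: P-521 is a Mersenne prime, so reduction needs no multiplication at all. Splitting the up-to-1042-bit product A at bit 521,

\[
p = 2^{521} - 1 \;\Rightarrow\; 2^{521} \equiv 1 \pmod{p}, \qquad
A = A_1 \cdot 2^{521} + A_0 \equiv A_0 + A_1 \pmod{p},
\quad 0 \le A_0, A_1 < 2^{521}.
\]

In the code, result holds A_0 (521 = 8·64 + 9 bits, hence the 0x1ff mask on digit 8), tmp holds A_1 (the product shifted right by eight digits plus nine bits, hence the paired >> 9 and << 55), and the final vli_mod_add() folds the two halves together modulo p.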
@@ -932,15 +956,18 @@ static bool vli_mmod_fast(u64 *result, u64 *product,
 	}

 	switch (ndigits) {
-	case 3:
+	case ECC_CURVE_NIST_P192_DIGITS:
 		vli_mmod_fast_192(result, product, curve_prime, tmp);
 		break;
-	case 4:
+	case ECC_CURVE_NIST_P256_DIGITS:
 		vli_mmod_fast_256(result, product, curve_prime, tmp);
 		break;
-	case 6:
+	case ECC_CURVE_NIST_P384_DIGITS:
 		vli_mmod_fast_384(result, product, curve_prime, tmp);
 		break;
+	case ECC_CURVE_NIST_P521_DIGITS:
+		vli_mmod_fast_521(result, product, curve_prime, tmp);
+		break;
 	default:
 		pr_err_ratelimited("ecc: unsupported digits size!\n");
 		return false;
@@ -1295,6 +1322,9 @@ static void ecc_point_mult(struct ecc_point *result,
 	carry = vli_add(sk[0], scalar, curve->n, ndigits);
 	vli_add(sk[1], sk[0], curve->n, ndigits);
 	scalar = sk[!carry];
-	num_bits = sizeof(u64) * ndigits * 8 + 1;
+	if (curve->nbits == 521) /* NIST P521 */
+		num_bits = curve->nbits + 2;
+	else
+		num_bits = sizeof(u64) * ndigits * 8 + 1;

 	vli_set(rx[1], point->x, ndigits);
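The special case follows from the regularization just above it: the ladder walks sk = scalar + n or scalar + 2n, and since scalar < n this satisfies sk < 3n < 2^(nbits + 2), i.e. at most two bits wider than the order. For the other supported curves the order spans the full digit array, so the generic sizeof(u64) * ndigits * 8 + 1 bound is fine; P-521's nine digits cover 576 bits while its adjusted scalar never exceeds 523 bits, so the bound is taken from curve->nbits instead.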
@@ -1416,6 +1446,12 @@ void ecc_point_mult_shamir(const struct ecc_point *result,
 }
 EXPORT_SYMBOL(ecc_point_mult_shamir);

+/*
+ * This function performs checks equivalent to Appendix A.4.2 of FIPS 186-5.
+ * Whereas A.4.2 results in an integer in the interval [1, n-1], this function
+ * ensures that the integer is in the range of [2, n-3]. We are slightly
+ * stricter because of the currently used scalar multiplication algorithm.
+ */
 static int __ecc_is_key_valid(const struct ecc_curve *curve,
 			      const u64 *private_key, unsigned int ndigits)
 {
@@ -1455,31 +1491,29 @@ int ecc_is_key_valid(unsigned int curve_id, unsigned int ndigits,
 EXPORT_SYMBOL(ecc_is_key_valid);

 /*
- * ECC private keys are generated using the method of extra random bits,
- * equivalent to that described in FIPS 186-4, Appendix B.4.1.
- *
- * d = (c mod(n–1)) + 1    where c is a string of random bits, 64 bits longer
- *                         than requested
- * 0 <= c mod(n-1) <= n-2  and implies that
- * 1 <= d <= n-1
+ * ECC private keys are generated using the method of rejection sampling,
+ * equivalent to that described in FIPS 186-5, Appendix A.2.2.
  *
  * This method generates a private key uniformly distributed in the range
- * [1, n-1].
+ * [2, n-3].
  */
-int ecc_gen_privkey(unsigned int curve_id, unsigned int ndigits, u64 *privkey)
+int ecc_gen_privkey(unsigned int curve_id, unsigned int ndigits,
+		    u64 *private_key)
 {
 	const struct ecc_curve *curve = ecc_get_curve(curve_id);
-	u64 priv[ECC_MAX_DIGITS];
 	unsigned int nbytes = ndigits << ECC_DIGITS_TO_BYTES_SHIFT;
 	unsigned int nbits = vli_num_bits(curve->n, ndigits);
 	int err;

-	/* Check that N is included in Table 1 of FIPS 186-4, section 6.1.1 */
-	if (nbits < 160 || ndigits > ARRAY_SIZE(priv))
+	/*
+	 * Step 1 & 2: check that N is included in Table 1 of FIPS 186-5,
+	 * section 6.1.1.
+	 */
+	if (nbits < 224)
 		return -EINVAL;

 	/*
-	 * FIPS 186-4 recommends that the private key should be obtained from a
+	 * FIPS 186-5 recommends that the private key should be obtained from a
 	 * RBG with a security strength equal to or greater than the security
 	 * strength associated with N.
 	 *
@@ -1492,17 +1526,17 @@ int ecc_gen_privkey(unsigned int curve_id, unsigned int ndigits, u64 *privkey)
 	if (crypto_get_default_rng())
 		return -EFAULT;

-	err = crypto_rng_get_bytes(crypto_default_rng, (u8 *)priv, nbytes);
+	/* Step 3: obtain N returned_bits from the DRBG. */
+	err = crypto_rng_get_bytes(crypto_default_rng,
+				   (u8 *)private_key, nbytes);
 	crypto_put_default_rng();
 	if (err)
 		return err;

-	/* Make sure the private key is in the valid range. */
-	if (__ecc_is_key_valid(curve, priv, ndigits))
+	/* Step 4: make sure the private key is in the valid range. */
+	if (__ecc_is_key_valid(curve, private_key, ndigits))
 		return -EINVAL;

-	ecc_swap_digits(priv, privkey, ndigits);
-
 	return 0;
 }
 EXPORT_SYMBOL(ecc_gen_privkey);
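Steps 3 and 4 above are one round of FIPS 186-5 A.2.2 ("Key Pair Generation by Testing Candidates"), i.e. rejection sampling: the DRBG output is the candidate and __ecc_is_key_valid() is the acceptance test, with the caller free to try again on -EINVAL. A toy, self-contained illustration of the sampling loop itself — 64-bit integers and rand() stand in for the bignum digits and the DRBG; these are assumptions for illustration, not kernel APIs:

#include <stdint.h>
#include <stdlib.h>

/* rand() is NOT a DRBG; real code draws exactly the bit width of n
 * from a vetted RNG, as ecc_gen_privkey() does. */
static uint64_t toy_rand64(void)
{
	return ((uint64_t)rand() << 33) ^ ((uint64_t)rand() << 2) ^
	       (uint64_t)rand();
}

/* Rejection sampling: returns d uniform on [2, n - 3]. */
uint64_t toy_gen_privkey(uint64_t n)
{
	uint64_t d;

	do {
		d = toy_rand64();	/* Step 3: candidate bits */
	} while (d < 2 || d > n - 3);	/* Step 4: reject and retry */

	return d;
}

Because every candidate is drawn with the same width as n and out-of-range values are discarded rather than folded back with a modulo, the accepted values are exactly uniform — the bias that the old "extra random bits" method had to dilute with 64 surplus bits simply never arises.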
@@ -1512,23 +1546,20 @@ int ecc_make_pub_key(unsigned int curve_id, unsigned int ndigits,
 {
 	int ret = 0;
 	struct ecc_point *pk;
-	u64 priv[ECC_MAX_DIGITS];
 	const struct ecc_curve *curve = ecc_get_curve(curve_id);

-	if (!private_key || !curve || ndigits > ARRAY_SIZE(priv)) {
+	if (!private_key) {
 		ret = -EINVAL;
 		goto out;
 	}

-	ecc_swap_digits(private_key, priv, ndigits);
-
 	pk = ecc_alloc_point(ndigits);
 	if (!pk) {
 		ret = -ENOMEM;
 		goto out;
 	}

-	ecc_point_mult(pk, &curve->g, priv, NULL, curve, ndigits);
+	ecc_point_mult(pk, &curve->g, private_key, NULL, curve, ndigits);

 	/* SP800-56A rev 3 5.6.2.1.3 key check */
 	if (ecc_is_pubkey_valid_full(curve, pk)) {
@@ -1612,13 +1643,11 @@ int crypto_ecdh_shared_secret(unsigned int curve_id, unsigned int ndigits,
 {
 	int ret = 0;
 	struct ecc_point *product, *pk;
-	u64 priv[ECC_MAX_DIGITS];
 	u64 rand_z[ECC_MAX_DIGITS];
 	unsigned int nbytes;
 	const struct ecc_curve *curve = ecc_get_curve(curve_id);

-	if (!private_key || !public_key || !curve ||
-	    ndigits > ARRAY_SIZE(priv) || ndigits > ARRAY_SIZE(rand_z)) {
+	if (!private_key || !public_key || ndigits > ARRAY_SIZE(rand_z)) {
 		ret = -EINVAL;
 		goto out;
 	}
@@ -1639,15 +1668,13 @@ int crypto_ecdh_shared_secret(unsigned int curve_id, unsigned int ndigits,
 	if (ret)
 		goto err_alloc_product;

-	ecc_swap_digits(private_key, priv, ndigits);
-
 	product = ecc_alloc_point(ndigits);
 	if (!product) {
 		ret = -ENOMEM;
 		goto err_alloc_product;
 	}

-	ecc_point_mult(product, pk, priv, rand_z, curve, ndigits);
+	ecc_point_mult(product, pk, private_key, rand_z, curve, ndigits);

 	if (ecc_point_is_zero(product)) {
 		ret = -EFAULT;
@@ -1657,7 +1684,6 @@ int crypto_ecdh_shared_secret(unsigned int curve_id, unsigned int ndigits,
 	ecc_swap_digits(product->x, secret, ndigits);

 err_validity:
-	memzero_explicit(priv, sizeof(priv));
 	memzero_explicit(rand_z, sizeof(rand_z));
 	ecc_free_point(product);
 err_alloc_product:
...
@@ -17,6 +17,7 @@ static u64 nist_p192_b[] = { 0xFEB8DEECC146B9B1ull, 0x0FA7E9AB72243049ull,
 			     0x64210519E59C80E7ull };
 static struct ecc_curve nist_p192 = {
 	.name = "nist_192",
+	.nbits = 192,
 	.g = {
 		.x = nist_p192_g_x,
 		.y = nist_p192_g_y,
@@ -43,6 +44,7 @@ static u64 nist_p256_b[] = { 0x3BCE3C3E27D2604Bull, 0x651D06B0CC53B0F6ull,
 			     0xB3EBBD55769886BCull, 0x5AC635D8AA3A93E7ull };
 static struct ecc_curve nist_p256 = {
 	.name = "nist_256",
+	.nbits = 256,
 	.g = {
 		.x = nist_p256_g_x,
 		.y = nist_p256_g_y,
@@ -75,6 +77,7 @@ static u64 nist_p384_b[] = { 0x2a85c8edd3ec2aefull, 0xc656398d8a2ed19dull,
 			     0x988e056be3f82d19ull, 0xb3312fa7e23ee7e4ull };
 static struct ecc_curve nist_p384 = {
 	.name = "nist_384",
+	.nbits = 384,
 	.g = {
 		.x = nist_p384_g_x,
 		.y = nist_p384_g_y,
@@ -86,6 +89,51 @@ static struct ecc_curve nist_p384 = {
 	.b = nist_p384_b
 };

+/* NIST P-521 */
+static u64 nist_p521_g_x[] = { 0xf97e7e31c2e5bd66ull, 0x3348b3c1856a429bull,
+			       0xfe1dc127a2ffa8deull, 0xa14b5e77efe75928ull,
+			       0xf828af606b4d3dbaull, 0x9c648139053fb521ull,
+			       0x9e3ecb662395b442ull, 0x858e06b70404e9cdull,
+			       0xc6ull };
+static u64 nist_p521_g_y[] = { 0x88be94769fd16650ull, 0x353c7086a272c240ull,
+			       0xc550b9013fad0761ull, 0x97ee72995ef42640ull,
+			       0x17afbd17273e662cull, 0x98f54449579b4468ull,
+			       0x5c8a5fb42c7d1bd9ull, 0x39296a789a3bc004ull,
+			       0x118ull };
+static u64 nist_p521_p[] = { 0xffffffffffffffffull, 0xffffffffffffffffull,
+			     0xffffffffffffffffull, 0xffffffffffffffffull,
+			     0xffffffffffffffffull, 0xffffffffffffffffull,
+			     0xffffffffffffffffull, 0xffffffffffffffffull,
+			     0x1ffull };
+static u64 nist_p521_n[] = { 0xbb6fb71e91386409ull, 0x3bb5c9b8899c47aeull,
+			     0x7fcc0148f709a5d0ull, 0x51868783bf2f966bull,
+			     0xfffffffffffffffaull, 0xffffffffffffffffull,
+			     0xffffffffffffffffull, 0xffffffffffffffffull,
+			     0x1ffull };
+static u64 nist_p521_a[] = { 0xfffffffffffffffcull, 0xffffffffffffffffull,
+			     0xffffffffffffffffull, 0xffffffffffffffffull,
+			     0xffffffffffffffffull, 0xffffffffffffffffull,
+			     0xffffffffffffffffull, 0xffffffffffffffffull,
+			     0x1ffull };
+static u64 nist_p521_b[] = { 0xef451fd46b503f00ull, 0x3573df883d2c34f1ull,
+			     0x1652c0bd3bb1bf07ull, 0x56193951ec7e937bull,
+			     0xb8b489918ef109e1ull, 0xa2da725b99b315f3ull,
+			     0x929a21a0b68540eeull, 0x953eb9618e1c9a1full,
+			     0x051ull };
+static struct ecc_curve nist_p521 = {
+	.name = "nist_521",
+	.nbits = 521,
+	.g = {
+		.x = nist_p521_g_x,
+		.y = nist_p521_g_y,
+		.ndigits = 9,
+	},
+	.p = nist_p521_p,
+	.n = nist_p521_n,
+	.a = nist_p521_a,
+	.b = nist_p521_b
+};
+
 /* curve25519 */
 static u64 curve25519_g_x[] = { 0x0000000000000009, 0x0000000000000000,
 				0x0000000000000000, 0x0000000000000000 };
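The new tables follow the library's representation: a field element is a little-endian array of 64-bit digits, least significant digit first. P-521 therefore needs ceil(521 / 64) = 9 digits, and only 521 - 8*64 = 9 bits of the ninth digit are significant — which is why every top digit above fits under the 0x1ff mask (0xc6, 0x118, 0x1ff, 0x051), matching the masking done in vli_mmod_fast_521().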
@@ -95,6 +143,7 @@ static u64 curve25519_a[] = { 0x000000000001DB41, 0x0000000000000000,
 				0x0000000000000000, 0x0000000000000000 };
 static const struct ecc_curve ecc_25519 = {
 	.name = "curve25519",
+	.nbits = 255,
 	.g = {
 		.x = curve25519_g_x,
 		.ndigits = 4,
...
@@ -28,23 +28,28 @@ static int ecdh_set_secret(struct crypto_kpp *tfm, const void *buf,
 {
 	struct ecdh_ctx *ctx = ecdh_get_ctx(tfm);
 	struct ecdh params;
+	int ret = 0;

 	if (crypto_ecdh_decode_key(buf, len, &params) < 0 ||
 	    params.key_size > sizeof(u64) * ctx->ndigits)
 		return -EINVAL;

+	memset(ctx->private_key, 0, sizeof(ctx->private_key));
+
 	if (!params.key || !params.key_size)
 		return ecc_gen_privkey(ctx->curve_id, ctx->ndigits,
 				       ctx->private_key);

-	memcpy(ctx->private_key, params.key, params.key_size);
+	ecc_digits_from_bytes(params.key, params.key_size,
+			      ctx->private_key, ctx->ndigits);

 	if (ecc_is_key_valid(ctx->curve_id, ctx->ndigits,
 			     ctx->private_key, params.key_size) < 0) {
 		memzero_explicit(ctx->private_key, params.key_size);
-		return -EINVAL;
+		ret = -EINVAL;
 	}
-	return 0;
+
+	return ret;
 }

 static int ecdh_compute_value(struct kpp_request *req)
...
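ecc_digits_from_bytes() is what lets odd-sized keys (P-521's are 66 bytes) work where the old memcpy() could not: the big-endian byte string is right-aligned into little-endian u64 digits, with the unused high digits implicitly zero. A self-contained sketch of such a conversion — an illustration of the idea, not the kernel's exact implementation:

#include <stdint.h>

/* Pack a big-endian byte string into little-endian 64-bit digits,
 * zero-extending on the left when nbytes < 8 * ndigits. */
static void digits_from_bytes(const uint8_t *in, unsigned int nbytes,
			      uint64_t *out, unsigned int ndigits)
{
	unsigned int i, b;

	for (i = 0; i < ndigits; i++) {
		uint64_t d = 0;

		/* consume up to 8 bytes from the least significant end */
		for (b = 0; b < 8 && nbytes > 0; b++)
			d |= (uint64_t)in[--nbytes] << (8 * b);
		out[i] = d;
	}
}

For a 66-byte P-521 key this fills digits 0-7 from the low 64 bytes and leaves the remaining two bytes in digit 8, exactly matching the nine-digit curve tables.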
@@ -35,8 +35,8 @@ struct ecdsa_signature_ctx {
 static int ecdsa_get_signature_rs(u64 *dest, size_t hdrlen, unsigned char tag,
 				  const void *value, size_t vlen, unsigned int ndigits)
 {
-	size_t keylen = ndigits * sizeof(u64);
-	ssize_t diff = vlen - keylen;
+	size_t bufsize = ndigits * sizeof(u64);
+	ssize_t diff = vlen - bufsize;
 	const char *d = value;
 	u8 rs[ECC_MAX_BYTES];
@@ -58,7 +58,7 @@ static int ecdsa_get_signature_rs(u64 *dest, size_t hdrlen, unsigned char tag,
 		if (diff)
 			return -EINVAL;
 	}
-	if (-diff >= keylen)
+	if (-diff >= bufsize)
 		return -EINVAL;

 	if (diff) {
@@ -122,7 +122,7 @@ static int _ecdsa_verify(struct ecc_ctx *ctx, const u64 *hash, const u64 *r, con
 	/* res.x = res.x mod n (if res.x > order) */
 	if (unlikely(vli_cmp(res.x, curve->n, ndigits) == 1))
-		/* faster alternative for NIST p384, p256 & p192 */
+		/* faster alternative for NIST p521, p384, p256 & p192 */
 		vli_sub(res.x, res.x, curve->n, ndigits);

 	if (!vli_cmp(res.x, r, ndigits))
@@ -138,7 +138,7 @@ static int ecdsa_verify(struct akcipher_request *req)
 {
 	struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
 	struct ecc_ctx *ctx = akcipher_tfm_ctx(tfm);
-	size_t keylen = ctx->curve->g.ndigits * sizeof(u64);
+	size_t bufsize = ctx->curve->g.ndigits * sizeof(u64);
 	struct ecdsa_signature_ctx sig_ctx = {
 		.curve = ctx->curve,
 	};
@@ -165,14 +165,14 @@ static int ecdsa_verify(struct akcipher_request *req)
 		goto error;

 	/* if the hash is shorter then we will add leading zeros to fit to ndigits */
-	diff = keylen - req->dst_len;
+	diff = bufsize - req->dst_len;
 	if (diff >= 0) {
 		if (diff)
 			memset(rawhash, 0, diff);
 		memcpy(&rawhash[diff], buffer + req->src_len, req->dst_len);
 	} else if (diff < 0) {
 		/* given hash is longer, we take the left-most bytes */
-		memcpy(&rawhash, buffer + req->src_len, keylen);
+		memcpy(&rawhash, buffer + req->src_len, bufsize);
 	}

 	ecc_swap_digits((u64 *)rawhash, hash, ctx->curve->g.ndigits);
@@ -222,28 +222,32 @@ static int ecdsa_ecc_ctx_reset(struct ecc_ctx *ctx)
 static int ecdsa_set_pub_key(struct crypto_akcipher *tfm, const void *key, unsigned int keylen)
 {
 	struct ecc_ctx *ctx = akcipher_tfm_ctx(tfm);
+	unsigned int digitlen, ndigits;
 	const unsigned char *d = key;
-	const u64 *digits = (const u64 *)&d[1];
-	unsigned int ndigits;
 	int ret;

 	ret = ecdsa_ecc_ctx_reset(ctx);
 	if (ret < 0)
 		return ret;

-	if (keylen < 1 || (((keylen - 1) >> 1) % sizeof(u64)) != 0)
+	if (keylen < 1 || ((keylen - 1) & 1) != 0)
 		return -EINVAL;
 	/* we only accept uncompressed format indicated by '4' */
 	if (d[0] != 4)
 		return -EINVAL;

 	keylen--;
-	ndigits = (keylen >> 1) / sizeof(u64);
+	digitlen = keylen >> 1;
+
+	ndigits = DIV_ROUND_UP(digitlen, sizeof(u64));
 	if (ndigits != ctx->curve->g.ndigits)
 		return -EINVAL;

-	ecc_swap_digits(digits, ctx->pub_key.x, ndigits);
-	ecc_swap_digits(&digits[ndigits], ctx->pub_key.y, ndigits);
+	d++;
+
+	ecc_digits_from_bytes(d, digitlen, ctx->pub_key.x, ndigits);
+	ecc_digits_from_bytes(&d[digitlen], digitlen, ctx->pub_key.y, ndigits);

 	ret = ecc_is_pubkey_valid_full(ctx->curve, &ctx->pub_key);

 	ctx->pub_key_set = ret == 0;
@@ -262,9 +266,31 @@ static unsigned int ecdsa_max_size(struct crypto_akcipher *tfm)
 {
 	struct ecc_ctx *ctx = akcipher_tfm_ctx(tfm);

-	return ctx->pub_key.ndigits << ECC_DIGITS_TO_BYTES_SHIFT;
+	return DIV_ROUND_UP(ctx->curve->nbits, 8);
+}
+
+static int ecdsa_nist_p521_init_tfm(struct crypto_akcipher *tfm)
+{
+	struct ecc_ctx *ctx = akcipher_tfm_ctx(tfm);
+
+	return ecdsa_ecc_ctx_init(ctx, ECC_CURVE_NIST_P521);
 }

+static struct akcipher_alg ecdsa_nist_p521 = {
+	.verify = ecdsa_verify,
+	.set_pub_key = ecdsa_set_pub_key,
+	.max_size = ecdsa_max_size,
+	.init = ecdsa_nist_p521_init_tfm,
+	.exit = ecdsa_exit_tfm,
+	.base = {
+		.cra_name = "ecdsa-nist-p521",
+		.cra_driver_name = "ecdsa-nist-p521-generic",
+		.cra_priority = 100,
+		.cra_module = THIS_MODULE,
+		.cra_ctxsize = sizeof(struct ecc_ctx),
+	},
+};
+
 static int ecdsa_nist_p384_init_tfm(struct crypto_akcipher *tfm)
 {
 	struct ecc_ctx *ctx = akcipher_tfm_ctx(tfm);
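Worked example for the new key-parsing path: a P-521 uncompressed public key is 1 + 2*66 = 133 bytes (0x04, then 66 bytes each of X and Y). keylen - 1 = 132 is even, digitlen = 66, and DIV_ROUND_UP(66, 8) = 9 matches the curve's nine digits — whereas the old (((keylen - 1) >> 1) % sizeof(u64)) test would have rejected the key because 66 is not a multiple of 8. ecdsa_max_size() likewise now reports DIV_ROUND_UP(521, 8) = 66 bytes per signature component instead of the padded 9 * 8 = 72.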
@@ -348,8 +374,15 @@ static int __init ecdsa_init(void)
 	if (ret)
 		goto nist_p384_error;

+	ret = crypto_register_akcipher(&ecdsa_nist_p521);
+	if (ret)
+		goto nist_p521_error;
+
 	return 0;

+nist_p521_error:
+	crypto_unregister_akcipher(&ecdsa_nist_p384);
+
 nist_p384_error:
 	crypto_unregister_akcipher(&ecdsa_nist_p256);
@@ -365,6 +398,7 @@ static void __exit ecdsa_exit(void)
 	crypto_unregister_akcipher(&ecdsa_nist_p192);
 	crypto_unregister_akcipher(&ecdsa_nist_p256);
 	crypto_unregister_akcipher(&ecdsa_nist_p384);
+	crypto_unregister_akcipher(&ecdsa_nist_p521);
 }

 subsys_initcall(ecdsa_init);
@@ -373,4 +407,8 @@ module_exit(ecdsa_exit);
 MODULE_LICENSE("GPL");
 MODULE_AUTHOR("Stefan Berger <stefanb@linux.ibm.com>");
 MODULE_DESCRIPTION("ECDSA generic algorithm");
+MODULE_ALIAS_CRYPTO("ecdsa-nist-p192");
+MODULE_ALIAS_CRYPTO("ecdsa-nist-p256");
+MODULE_ALIAS_CRYPTO("ecdsa-nist-p384");
+MODULE_ALIAS_CRYPTO("ecdsa-nist-p521");
 MODULE_ALIAS_CRYPTO("ecdsa-generic");
@@ -294,4 +294,5 @@ module_exit(ecrdsa_mod_fini);
 MODULE_LICENSE("GPL");
 MODULE_AUTHOR("Vitaly Chikunov <vt@altlinux.org>");
 MODULE_DESCRIPTION("EC-RDSA generic algorithm");
+MODULE_ALIAS_CRYPTO("ecrdsa");
 MODULE_ALIAS_CRYPTO("ecrdsa-generic");
@@ -47,6 +47,7 @@ static u64 cp256a_b[] = {
 static struct ecc_curve gost_cp256a = {
 	.name = "cp256a",
+	.nbits = 256,
 	.g = {
 		.x = cp256a_g_x,
 		.y = cp256a_g_y,
@@ -80,6 +81,7 @@ static u64 cp256b_b[] = {
 static struct ecc_curve gost_cp256b = {
 	.name = "cp256b",
+	.nbits = 256,
 	.g = {
 		.x = cp256b_g_x,
 		.y = cp256b_g_y,
@@ -117,6 +119,7 @@ static u64 cp256c_b[] = {
 static struct ecc_curve gost_cp256c = {
 	.name = "cp256c",
+	.nbits = 256,
 	.g = {
 		.x = cp256c_g_x,
 		.y = cp256c_g_y,
@@ -166,6 +169,7 @@ static u64 tc512a_b[] = {
 static struct ecc_curve gost_tc512a = {
 	.name = "tc512a",
+	.nbits = 512,
 	.g = {
 		.x = tc512a_g_x,
 		.y = tc512a_g_y,
@@ -211,6 +215,7 @@ static u64 tc512b_b[] = {
 static struct ecc_curve gost_tc512b = {
 	.name = "tc512b",
+	.nbits = 512,
 	.g = {
 		.x = tc512b_g_x,
 		.y = tc512b_g_y,
...
@@ -63,7 +63,6 @@ static struct ctl_table crypto_sysctl_table[] = {
 		.mode = 0444,
 		.proc_handler = proc_dostring
 	},
-	{}
 };

 static struct ctl_table_header *crypto_sysctls;
...
@@ -8,39 +8,9 @@
 #define _LOCAL_CRYPTO_HASH_H

 #include <crypto/internal/hash.h>
-#include <linux/cryptouser.h>

 #include "internal.h"

-static inline struct crypto_istat_hash *hash_get_stat(
-	struct hash_alg_common *alg)
-{
-#ifdef CONFIG_CRYPTO_STATS
-	return &alg->stat;
-#else
-	return NULL;
-#endif
-}
-
-static inline int crypto_hash_report_stat(struct sk_buff *skb,
-					  struct crypto_alg *alg,
-					  const char *type)
-{
-	struct hash_alg_common *halg = __crypto_hash_alg_common(alg);
-	struct crypto_istat_hash *istat = hash_get_stat(halg);
-	struct crypto_stat_hash rhash;
-
-	memset(&rhash, 0, sizeof(rhash));
-
-	strscpy(rhash.type, type, sizeof(rhash.type));
-
-	rhash.stat_hash_cnt = atomic64_read(&istat->hash_cnt);
-	rhash.stat_hash_tlen = atomic64_read(&istat->hash_tlen);
-	rhash.stat_err_cnt = atomic64_read(&istat->err_cnt);
-
-	return nla_put(skb, CRYPTOCFGA_STAT_HASH, sizeof(rhash), &rhash);
-}
-
 extern const struct crypto_type crypto_shash_type;

 int hash_prepare_alg(struct hash_alg_common *alg);
...
@@ -61,8 +61,7 @@ void *jent_kvzalloc(unsigned int len)
 void jent_kvzfree(void *ptr, unsigned int len)
 {
-	memzero_explicit(ptr, len);
-	kvfree(ptr);
+	kvfree_sensitive(ptr, len);
 }

 void *jent_zalloc(unsigned int len)
...
@@ -157,8 +157,8 @@ struct rand_data {
 /*
  * See the SP 800-90B comment #10b for the corrected cutoff for the SP 800-90B
  * APT.
- * http://www.untruth.org/~josh/sp80090b/UL%20SP800-90B-final%20comments%20v1.9%2020191212.pdf
- * In in the syntax of R, this is C = 2 + qbinom(1 − 2^(−30), 511, 2^(-1/osr)).
+ * https://www.untruth.org/~josh/sp80090b/UL%20SP800-90B-final%20comments%20v1.9%2020191212.pdf
+ * In the syntax of R, this is C = 2 + qbinom(1 − 2^(−30), 511, 2^(-1/osr)).
  * (The original formula wasn't correct because the first symbol must
  * necessarily have been observed, so there is no chance of observing 0 of these
  * symbols.)
...
@@ -66,29 +66,6 @@ static void crypto_kpp_free_instance(struct crypto_instance *inst)
 	kpp->free(kpp);
 }

-static int __maybe_unused crypto_kpp_report_stat(
-	struct sk_buff *skb, struct crypto_alg *alg)
-{
-	struct kpp_alg *kpp = __crypto_kpp_alg(alg);
-	struct crypto_istat_kpp *istat;
-	struct crypto_stat_kpp rkpp;
-
-	istat = kpp_get_stat(kpp);
-
-	memset(&rkpp, 0, sizeof(rkpp));
-
-	strscpy(rkpp.type, "kpp", sizeof(rkpp.type));
-
-	rkpp.stat_setsecret_cnt = atomic64_read(&istat->setsecret_cnt);
-	rkpp.stat_generate_public_key_cnt =
-		atomic64_read(&istat->generate_public_key_cnt);
-	rkpp.stat_compute_shared_secret_cnt =
-		atomic64_read(&istat->compute_shared_secret_cnt);
-	rkpp.stat_err_cnt = atomic64_read(&istat->err_cnt);
-
-	return nla_put(skb, CRYPTOCFGA_STAT_KPP, sizeof(rkpp), &rkpp);
-}
-
 static const struct crypto_type crypto_kpp_type = {
 	.extsize = crypto_alg_extsize,
 	.init_tfm = crypto_kpp_init_tfm,
@@ -98,9 +75,6 @@ static const struct crypto_type crypto_kpp_type = {
 #endif
 #if IS_ENABLED(CONFIG_CRYPTO_USER)
 	.report = crypto_kpp_report,
-#endif
-#ifdef CONFIG_CRYPTO_STATS
-	.report_stat = crypto_kpp_report_stat,
 #endif
 	.maskclear = ~CRYPTO_ALG_TYPE_MASK,
 	.maskset = CRYPTO_ALG_TYPE_MASK,
@@ -131,15 +105,11 @@ EXPORT_SYMBOL_GPL(crypto_has_kpp);
 static void kpp_prepare_alg(struct kpp_alg *alg)
 {
-	struct crypto_istat_kpp *istat = kpp_get_stat(alg);
 	struct crypto_alg *base = &alg->base;

 	base->cra_type = &crypto_kpp_type;
 	base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
 	base->cra_flags |= CRYPTO_ALG_TYPE_KPP;
-
-	if (IS_ENABLED(CONFIG_CRYPTO_STATS))
-		memset(istat, 0, sizeof(*istat));
 }

 int crypto_register_kpp(struct kpp_alg *alg)
...
@@ -29,25 +29,6 @@ static inline struct lskcipher_alg *__crypto_lskcipher_alg(
 	return container_of(alg, struct lskcipher_alg, co.base);
 }

-static inline struct crypto_istat_cipher *lskcipher_get_stat(
-	struct lskcipher_alg *alg)
-{
-	return skcipher_get_stat_common(&alg->co);
-}
-
-static inline int crypto_lskcipher_errstat(struct lskcipher_alg *alg, int err)
-{
-	struct crypto_istat_cipher *istat = lskcipher_get_stat(alg);
-
-	if (!IS_ENABLED(CONFIG_CRYPTO_STATS))
-		return err;
-
-	if (err)
-		atomic64_inc(&istat->err_cnt);
-
-	return err;
-}
-
 static int lskcipher_setkey_unaligned(struct crypto_lskcipher *tfm,
 				      const u8 *key, unsigned int keylen)
 {
@@ -147,20 +128,13 @@ static int crypto_lskcipher_crypt(struct crypto_lskcipher *tfm, const u8 *src,
 		 u32 flags))
 {
 	unsigned long alignmask = crypto_lskcipher_alignmask(tfm);
-	struct lskcipher_alg *alg = crypto_lskcipher_alg(tfm);
-	int ret;

 	if (((unsigned long)src | (unsigned long)dst | (unsigned long)iv) &
-	    alignmask) {
-		ret = crypto_lskcipher_crypt_unaligned(tfm, src, dst, len, iv,
-						       crypt);
-		goto out;
-	}
+	    alignmask)
+		return crypto_lskcipher_crypt_unaligned(tfm, src, dst, len, iv,
+							crypt);

-	ret = crypt(tfm, src, dst, len, iv, CRYPTO_LSKCIPHER_FLAG_FINAL);
-
-out:
-	return crypto_lskcipher_errstat(alg, ret);
+	return crypt(tfm, src, dst, len, iv, CRYPTO_LSKCIPHER_FLAG_FINAL);
 }

 int crypto_lskcipher_encrypt(struct crypto_lskcipher *tfm, const u8 *src,
@@ -168,13 +142,6 @@ int crypto_lskcipher_encrypt(struct crypto_lskcipher *tfm, const u8 *src,
 {
 	struct lskcipher_alg *alg = crypto_lskcipher_alg(tfm);

-	if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
-		struct crypto_istat_cipher *istat = lskcipher_get_stat(alg);
-
-		atomic64_inc(&istat->encrypt_cnt);
-		atomic64_add(len, &istat->encrypt_tlen);
-	}
-
 	return crypto_lskcipher_crypt(tfm, src, dst, len, iv, alg->encrypt);
 }
 EXPORT_SYMBOL_GPL(crypto_lskcipher_encrypt);
@@ -184,13 +151,6 @@ int crypto_lskcipher_decrypt(struct crypto_lskcipher *tfm, const u8 *src,
 {
 	struct lskcipher_alg *alg = crypto_lskcipher_alg(tfm);

-	if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
-		struct crypto_istat_cipher *istat = lskcipher_get_stat(alg);
-
-		atomic64_inc(&istat->decrypt_cnt);
-		atomic64_add(len, &istat->decrypt_tlen);
-	}
-
 	return crypto_lskcipher_crypt(tfm, src, dst, len, iv, alg->decrypt);
 }
 EXPORT_SYMBOL_GPL(crypto_lskcipher_decrypt);
@@ -320,28 +280,6 @@ static int __maybe_unused crypto_lskcipher_report(
 				       sizeof(rblkcipher), &rblkcipher);
 }

-static int __maybe_unused crypto_lskcipher_report_stat(
-	struct sk_buff *skb, struct crypto_alg *alg)
-{
-	struct lskcipher_alg *skcipher = __crypto_lskcipher_alg(alg);
-	struct crypto_istat_cipher *istat;
-	struct crypto_stat_cipher rcipher;
-
-	istat = lskcipher_get_stat(skcipher);
-
-	memset(&rcipher, 0, sizeof(rcipher));
-
-	strscpy(rcipher.type, "cipher", sizeof(rcipher.type));
-
-	rcipher.stat_encrypt_cnt = atomic64_read(&istat->encrypt_cnt);
-	rcipher.stat_encrypt_tlen = atomic64_read(&istat->encrypt_tlen);
-	rcipher.stat_decrypt_cnt = atomic64_read(&istat->decrypt_cnt);
-	rcipher.stat_decrypt_tlen = atomic64_read(&istat->decrypt_tlen);
-	rcipher.stat_err_cnt = atomic64_read(&istat->err_cnt);
-
-	return nla_put(skb, CRYPTOCFGA_STAT_CIPHER, sizeof(rcipher), &rcipher);
-}
-
 static const struct crypto_type crypto_lskcipher_type = {
 	.extsize = crypto_alg_extsize,
 	.init_tfm = crypto_lskcipher_init_tfm,
@@ -351,9 +289,6 @@ static const struct crypto_type crypto_lskcipher_type = {
 #endif
 #if IS_ENABLED(CONFIG_CRYPTO_USER)
 	.report = crypto_lskcipher_report,
-#endif
-#ifdef CONFIG_CRYPTO_STATS
-	.report_stat = crypto_lskcipher_report_stat,
 #endif
 	.maskclear = ~CRYPTO_ALG_TYPE_MASK,
 	.maskset = CRYPTO_ALG_TYPE_MASK,
...
@@ -30,30 +30,24 @@ static int crypto_default_rng_refcnt;
 int crypto_rng_reset(struct crypto_rng *tfm, const u8 *seed, unsigned int slen)
 {
-	struct rng_alg *alg = crypto_rng_alg(tfm);
 	u8 *buf = NULL;
 	int err;

-	if (IS_ENABLED(CONFIG_CRYPTO_STATS))
-		atomic64_inc(&rng_get_stat(alg)->seed_cnt);
-
 	if (!seed && slen) {
 		buf = kmalloc(slen, GFP_KERNEL);
-		err = -ENOMEM;
 		if (!buf)
-			goto out;
+			return -ENOMEM;

 		err = get_random_bytes_wait(buf, slen);
 		if (err)
-			goto free_buf;
+			goto out;
 		seed = buf;
 	}

-	err = alg->seed(tfm, seed, slen);
-free_buf:
-	kfree_sensitive(buf);
+	err = crypto_rng_alg(tfm)->seed(tfm, seed, slen);
 out:
-	return crypto_rng_errstat(alg, err);
+	kfree_sensitive(buf);
+	return err;
 }
 EXPORT_SYMBOL_GPL(crypto_rng_reset);
@@ -91,27 +85,6 @@ static void crypto_rng_show(struct seq_file *m, struct crypto_alg *alg)
 	seq_printf(m, "seedsize     : %u\n", seedsize(alg));
 }

-static int __maybe_unused crypto_rng_report_stat(
-	struct sk_buff *skb, struct crypto_alg *alg)
-{
-	struct rng_alg *rng = __crypto_rng_alg(alg);
-	struct crypto_istat_rng *istat;
-	struct crypto_stat_rng rrng;
-
-	istat = rng_get_stat(rng);
-
-	memset(&rrng, 0, sizeof(rrng));
-
-	strscpy(rrng.type, "rng", sizeof(rrng.type));
-
-	rrng.stat_generate_cnt = atomic64_read(&istat->generate_cnt);
-	rrng.stat_generate_tlen = atomic64_read(&istat->generate_tlen);
-	rrng.stat_seed_cnt = atomic64_read(&istat->seed_cnt);
-	rrng.stat_err_cnt = atomic64_read(&istat->err_cnt);
-
-	return nla_put(skb, CRYPTOCFGA_STAT_RNG, sizeof(rrng), &rrng);
-}
-
 static const struct crypto_type crypto_rng_type = {
 	.extsize = crypto_alg_extsize,
 	.init_tfm = crypto_rng_init_tfm,
@@ -120,9 +93,6 @@ static const struct crypto_type crypto_rng_type = {
 #endif
 #if IS_ENABLED(CONFIG_CRYPTO_USER)
 	.report = crypto_rng_report,
-#endif
-#ifdef CONFIG_CRYPTO_STATS
-	.report_stat = crypto_rng_report_stat,
 #endif
 	.maskclear = ~CRYPTO_ALG_TYPE_MASK,
 	.maskset = CRYPTO_ALG_TYPE_MASK,
@@ -199,7 +169,6 @@ EXPORT_SYMBOL_GPL(crypto_del_default_rng);
 int crypto_register_rng(struct rng_alg *alg)
 {
-	struct crypto_istat_rng *istat = rng_get_stat(alg);
 	struct crypto_alg *base = &alg->base;

 	if (alg->seedsize > PAGE_SIZE / 8)
@@ -209,9 +178,6 @@ int crypto_register_rng(struct rng_alg *alg)
 	base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
 	base->cra_flags |= CRYPTO_ALG_TYPE_RNG;

-	if (IS_ENABLED(CONFIG_CRYPTO_STATS))
-		memset(istat, 0, sizeof(*istat));
-
 	return crypto_register_alg(base);
 }
 EXPORT_SYMBOL_GPL(crypto_register_rng);
...
@@ -270,9 +270,6 @@ static const struct crypto_type crypto_scomp_type = {
 #endif
 #if IS_ENABLED(CONFIG_CRYPTO_USER)
 	.report = crypto_scomp_report,
-#endif
-#ifdef CONFIG_CRYPTO_STATS
-	.report_stat = crypto_acomp_report_stat,
 #endif
 	.maskclear = ~CRYPTO_ALG_TYPE_MASK,
 	.maskset = CRYPTO_ALG_TYPE_MASK,
...
@@ -16,18 +16,6 @@
 #include "hash.h"

-static inline struct crypto_istat_hash *shash_get_stat(struct shash_alg *alg)
-{
-	return hash_get_stat(&alg->halg);
-}
-
-static inline int crypto_shash_errstat(struct shash_alg *alg, int err)
-{
-	if (IS_ENABLED(CONFIG_CRYPTO_STATS) && err)
-		atomic64_inc(&shash_get_stat(alg)->err_cnt);
-
-	return err;
-}
-
 int shash_no_setkey(struct crypto_shash *tfm, const u8 *key,
 		    unsigned int keylen)
 {
@@ -61,29 +49,13 @@ EXPORT_SYMBOL_GPL(crypto_shash_setkey);
 int crypto_shash_update(struct shash_desc *desc, const u8 *data,
 			unsigned int len)
 {
-	struct shash_alg *shash = crypto_shash_alg(desc->tfm);
-	int err;
-
-	if (IS_ENABLED(CONFIG_CRYPTO_STATS))
-		atomic64_add(len, &shash_get_stat(shash)->hash_tlen);
-
-	err = shash->update(desc, data, len);
-
-	return crypto_shash_errstat(shash, err);
+	return crypto_shash_alg(desc->tfm)->update(desc, data, len);
 }
 EXPORT_SYMBOL_GPL(crypto_shash_update);

 int crypto_shash_final(struct shash_desc *desc, u8 *out)
 {
-	struct shash_alg *shash = crypto_shash_alg(desc->tfm);
-	int err;
-
-	if (IS_ENABLED(CONFIG_CRYPTO_STATS))
-		atomic64_inc(&shash_get_stat(shash)->hash_cnt);
-
-	err = shash->final(desc, out);
-
-	return crypto_shash_errstat(shash, err);
+	return crypto_shash_alg(desc->tfm)->final(desc, out);
 }
 EXPORT_SYMBOL_GPL(crypto_shash_final);
@@ -99,20 +71,7 @@ static int shash_default_finup(struct shash_desc *desc, const u8 *data,
 int crypto_shash_finup(struct shash_desc *desc, const u8 *data,
 		       unsigned int len, u8 *out)
 {
-	struct crypto_shash *tfm = desc->tfm;
-	struct shash_alg *shash = crypto_shash_alg(tfm);
-	int err;
-
-	if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
-		struct crypto_istat_hash *istat = shash_get_stat(shash);
-
-		atomic64_inc(&istat->hash_cnt);
-		atomic64_add(len, &istat->hash_tlen);
-	}
-
-	err = shash->finup(desc, data, len, out);
-
-	return crypto_shash_errstat(shash, err);
+	return crypto_shash_alg(desc->tfm)->finup(desc, data, len, out);
 }
 EXPORT_SYMBOL_GPL(crypto_shash_finup);
@@ -129,22 +88,11 @@ int crypto_shash_digest(struct shash_desc *desc, const u8 *data,
 			unsigned int len, u8 *out)
 {
 	struct crypto_shash *tfm = desc->tfm;
-	struct shash_alg *shash = crypto_shash_alg(tfm);
-	int err;
-
-	if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
-		struct crypto_istat_hash *istat = shash_get_stat(shash);
-
-		atomic64_inc(&istat->hash_cnt);
-		atomic64_add(len, &istat->hash_tlen);
-	}

 	if (crypto_shash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
-		err = -ENOKEY;
-	else
-		err = shash->digest(desc, data, len, out);
+		return -ENOKEY;

-	return crypto_shash_errstat(shash, err);
+	return crypto_shash_alg(tfm)->digest(desc, data, len, out);
 }
 EXPORT_SYMBOL_GPL(crypto_shash_digest);
@@ -265,12 +213,6 @@ static void crypto_shash_show(struct seq_file *m, struct crypto_alg *alg)
 	seq_printf(m, "digestsize   : %u\n", salg->digestsize);
 }

-static int __maybe_unused crypto_shash_report_stat(
-	struct sk_buff *skb, struct crypto_alg *alg)
-{
-	return crypto_hash_report_stat(skb, alg, "shash");
-}
-
 const struct crypto_type crypto_shash_type = {
 	.extsize = crypto_alg_extsize,
 	.init_tfm = crypto_shash_init_tfm,
@@ -280,9 +222,6 @@ const struct crypto_type crypto_shash_type = {
 #endif
 #if IS_ENABLED(CONFIG_CRYPTO_USER)
 	.report = crypto_shash_report,
-#endif
-#ifdef CONFIG_CRYPTO_STATS
-	.report_stat = crypto_shash_report_stat,
 #endif
 	.maskclear = ~CRYPTO_ALG_TYPE_MASK,
 	.maskset = CRYPTO_ALG_TYPE_MASK,
@@ -350,7 +289,6 @@ EXPORT_SYMBOL_GPL(crypto_clone_shash);
 int hash_prepare_alg(struct hash_alg_common *alg)
 {
-	struct crypto_istat_hash *istat = hash_get_stat(alg);
 	struct crypto_alg *base = &alg->base;

 	if (alg->digestsize > HASH_MAX_DIGESTSIZE)
@@ -362,9 +300,6 @@ int hash_prepare_alg(struct hash_alg_common *alg)
 	base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;

-	if (IS_ENABLED(CONFIG_CRYPTO_STATS))
-		memset(istat, 0, sizeof(*istat));
-
 	return 0;
 }
...
@@ -45,16 +45,6 @@ static int __maybe_unused crypto_sig_report(struct sk_buff *skb,
 	return nla_put(skb, CRYPTOCFGA_REPORT_AKCIPHER, sizeof(rsig), &rsig);
 }

-static int __maybe_unused crypto_sig_report_stat(struct sk_buff *skb,
-						 struct crypto_alg *alg)
-{
-	struct crypto_stat_akcipher rsig = {};
-
-	strscpy(rsig.type, "sig", sizeof(rsig.type));
-
-	return nla_put(skb, CRYPTOCFGA_STAT_AKCIPHER, sizeof(rsig), &rsig);
-}
-
 static const struct crypto_type crypto_sig_type = {
 	.extsize = crypto_alg_extsize,
 	.init_tfm = crypto_sig_init_tfm,
@@ -63,9 +53,6 @@ static const struct crypto_type crypto_sig_type = {
 #endif
 #if IS_ENABLED(CONFIG_CRYPTO_USER)
 	.report = crypto_sig_report,
-#endif
-#ifdef CONFIG_CRYPTO_STATS
-	.report_stat = crypto_sig_report_stat,
 #endif
 	.maskclear = ~CRYPTO_ALG_TYPE_MASK,
 	.maskset = CRYPTO_ALG_TYPE_SIG_MASK,
...
@@ -89,25 +89,6 @@ static inline struct skcipher_alg *__crypto_skcipher_alg(
 	return container_of(alg, struct skcipher_alg, base);
 }
 
-static inline struct crypto_istat_cipher *skcipher_get_stat(
-	struct skcipher_alg *alg)
-{
-	return skcipher_get_stat_common(&alg->co);
-}
-
-static inline int crypto_skcipher_errstat(struct skcipher_alg *alg, int err)
-{
-	struct crypto_istat_cipher *istat = skcipher_get_stat(alg);
-
-	if (!IS_ENABLED(CONFIG_CRYPTO_STATS))
-		return err;
-
-	if (err && err != -EINPROGRESS && err != -EBUSY)
-		atomic64_inc(&istat->err_cnt);
-
-	return err;
-}
-
 static int skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
 {
 	u8 *addr;
@@ -654,23 +635,12 @@ int crypto_skcipher_encrypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
-	int ret;
-
-	if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
-		struct crypto_istat_cipher *istat = skcipher_get_stat(alg);
-
-		atomic64_inc(&istat->encrypt_cnt);
-		atomic64_add(req->cryptlen, &istat->encrypt_tlen);
-	}
 
 	if (crypto_skcipher_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
-		ret = -ENOKEY;
-	else if (alg->co.base.cra_type != &crypto_skcipher_type)
-		ret = crypto_lskcipher_encrypt_sg(req);
-	else
-		ret = alg->encrypt(req);
-
-	return crypto_skcipher_errstat(alg, ret);
+		return -ENOKEY;
+
+	if (alg->co.base.cra_type != &crypto_skcipher_type)
+		return crypto_lskcipher_encrypt_sg(req);
+
+	return alg->encrypt(req);
 }
 EXPORT_SYMBOL_GPL(crypto_skcipher_encrypt);
 
@@ -678,23 +648,12 @@ int crypto_skcipher_decrypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
-	int ret;
-
-	if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
-		struct crypto_istat_cipher *istat = skcipher_get_stat(alg);
-
-		atomic64_inc(&istat->decrypt_cnt);
-		atomic64_add(req->cryptlen, &istat->decrypt_tlen);
-	}
 
 	if (crypto_skcipher_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
-		ret = -ENOKEY;
-	else if (alg->co.base.cra_type != &crypto_skcipher_type)
-		ret = crypto_lskcipher_decrypt_sg(req);
-	else
-		ret = alg->decrypt(req);
-
-	return crypto_skcipher_errstat(alg, ret);
+		return -ENOKEY;
+
+	if (alg->co.base.cra_type != &crypto_skcipher_type)
+		return crypto_lskcipher_decrypt_sg(req);
+
+	return alg->decrypt(req);
 }
 EXPORT_SYMBOL_GPL(crypto_skcipher_decrypt);
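With the stats plumbing gone, both entry points reduce to guard clauses; the calling convention is unchanged. A hedged caller sketch (names hypothetical):

#include <linux/printk.h>
#include <crypto/skcipher.h>

/* Hypothetical caller: as before the cleanup, the request either
 * completes with 0, fails with -ENOKEY when no key was set, or
 * returns the algorithm's own error code (possibly -EINPROGRESS
 * for async transforms). */
static int example_encrypt(struct skcipher_request *req)
{
	int err = crypto_skcipher_encrypt(req);

	if (err == -ENOKEY)
		pr_warn("skcipher used before setkey()\n");

	return err;
}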
@@ -846,28 +805,6 @@ static int __maybe_unused crypto_skcipher_report(
 			  sizeof(rblkcipher), &rblkcipher);
 }
 
-static int __maybe_unused crypto_skcipher_report_stat(
-	struct sk_buff *skb, struct crypto_alg *alg)
-{
-	struct skcipher_alg *skcipher = __crypto_skcipher_alg(alg);
-	struct crypto_istat_cipher *istat;
-	struct crypto_stat_cipher rcipher;
-
-	istat = skcipher_get_stat(skcipher);
-
-	memset(&rcipher, 0, sizeof(rcipher));
-
-	strscpy(rcipher.type, "cipher", sizeof(rcipher.type));
-
-	rcipher.stat_encrypt_cnt = atomic64_read(&istat->encrypt_cnt);
-	rcipher.stat_encrypt_tlen = atomic64_read(&istat->encrypt_tlen);
-	rcipher.stat_decrypt_cnt = atomic64_read(&istat->decrypt_cnt);
-	rcipher.stat_decrypt_tlen = atomic64_read(&istat->decrypt_tlen);
-	rcipher.stat_err_cnt = atomic64_read(&istat->err_cnt);
-
-	return nla_put(skb, CRYPTOCFGA_STAT_CIPHER, sizeof(rcipher), &rcipher);
-}
-
 static const struct crypto_type crypto_skcipher_type = {
 	.extsize = crypto_skcipher_extsize,
 	.init_tfm = crypto_skcipher_init_tfm,
@@ -877,9 +814,6 @@ static const struct crypto_type crypto_skcipher_type = {
 #endif
 #if IS_ENABLED(CONFIG_CRYPTO_USER)
 	.report = crypto_skcipher_report,
-#endif
-#ifdef CONFIG_CRYPTO_STATS
-	.report_stat = crypto_skcipher_report_stat,
 #endif
 	.maskclear = ~CRYPTO_ALG_TYPE_MASK,
 	.maskset = CRYPTO_ALG_TYPE_SKCIPHER_MASK,
@@ -935,7 +869,6 @@ EXPORT_SYMBOL_GPL(crypto_has_skcipher);
 int skcipher_prepare_alg_common(struct skcipher_alg_common *alg)
 {
-	struct crypto_istat_cipher *istat = skcipher_get_stat_common(alg);
 	struct crypto_alg *base = &alg->base;
 
 	if (alg->ivsize > PAGE_SIZE / 8 || alg->chunksize > PAGE_SIZE / 8 ||
@@ -948,9 +881,6 @@ int skcipher_prepare_alg_common(struct skcipher_alg_common *alg)
 	base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
 
-	if (IS_ENABLED(CONFIG_CRYPTO_STATS))
-		memset(istat, 0, sizeof(*istat));
-
 	return 0;
 }
...
@@ -10,16 +10,6 @@
 #include <crypto/internal/skcipher.h>
 #include "internal.h"
 
-static inline struct crypto_istat_cipher *skcipher_get_stat_common(
-	struct skcipher_alg_common *alg)
-{
-#ifdef CONFIG_CRYPTO_STATS
-	return &alg->stat;
-#else
-	return NULL;
-#endif
-}
-
 int crypto_lskcipher_encrypt_sg(struct skcipher_request *req);
 int crypto_lskcipher_decrypt_sg(struct skcipher_request *req);
 int crypto_init_lskcipher_ops_sg(struct crypto_tfm *tfm);
...
@@ -5097,6 +5097,13 @@ static const struct alg_test_desc alg_test_descs[] = {
 		.suite = {
 			.akcipher = __VECS(ecdsa_nist_p384_tv_template)
 		}
+	}, {
+		.alg = "ecdsa-nist-p521",
+		.test = alg_test_akcipher,
+		.fips_allowed = 1,
+		.suite = {
+			.akcipher = __VECS(ecdsa_nist_p521_tv_template)
+		}
 	}, {
 		.alg = "ecrdsa",
 		.test = alg_test_akcipher,
...
This diff is collapsed.
@@ -382,7 +382,7 @@ static ssize_t rng_current_show(struct device *dev,
 	if (IS_ERR(rng))
 		return PTR_ERR(rng);
 
-	ret = snprintf(buf, PAGE_SIZE, "%s\n", rng ? rng->name : "none");
+	ret = sysfs_emit(buf, "%s\n", rng ? rng->name : "none");
 	put_rng(rng);
 
 	return ret;
...
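The snprintf() to sysfs_emit() conversion follows the documented sysfs rule that show() callbacks should use sysfs_emit(), which verifies that buf is the page-aligned sysfs buffer and bounds the write to PAGE_SIZE itself. A minimal sketch of the pattern; the attribute and its name are hypothetical:

#include <linux/device.h>
#include <linux/sysfs.h>

/* Hypothetical show() callback: sysfs_emit() replaces the
 * snprintf(buf, PAGE_SIZE, ...) idiom and does the bounding itself. */
static ssize_t example_show(struct device *dev,
			    struct device_attribute *attr, char *buf)
{
	return sysfs_emit(buf, "%s\n", dev_name(dev));
}
static DEVICE_ATTR_RO(example);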
@@ -131,7 +131,7 @@ static void mxc_rnga_cleanup(struct hwrng *rng)
 	__raw_writel(ctrl & ~RNGA_CONTROL_GO, mxc_rng->mem + RNGA_CONTROL);
 }
 
-static int __init mxc_rnga_probe(struct platform_device *pdev)
+static int mxc_rnga_probe(struct platform_device *pdev)
 {
 	int err;
 	struct mxc_rng *mxc_rng;
@@ -176,7 +176,7 @@ static int __init mxc_rnga_probe(struct platform_device *pdev)
 	return err;
 }
 
-static void __exit mxc_rnga_remove(struct platform_device *pdev)
+static void mxc_rnga_remove(struct platform_device *pdev)
 {
 	struct mxc_rng *mxc_rng = platform_get_drvdata(pdev);
 
@@ -197,10 +197,11 @@ static struct platform_driver mxc_rnga_driver = {
 		.name = "mxc_rnga",
 		.of_match_table = mxc_rnga_of_match,
 	},
-	.remove_new = __exit_p(mxc_rnga_remove),
+	.probe = mxc_rnga_probe,
+	.remove_new = mxc_rnga_remove,
 };
 
-module_platform_driver_probe(mxc_rnga_driver, mxc_rnga_probe);
+module_platform_driver(mxc_rnga_driver);
 
 MODULE_AUTHOR("Freescale Semiconductor, Inc.");
 MODULE_DESCRIPTION("H/W RNGA driver for i.MX");
...
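Dropping the __init/__exit annotations and putting .probe in the driver structure is what allows the switch from module_platform_driver_probe() to module_platform_driver(): probe and remove stay resident, so the device can be bound and unbound more than once (for example via sysfs or deferred probe). A stripped-down sketch of the resulting shape, with hypothetical names:

#include <linux/module.h>
#include <linux/platform_device.h>

/* Hypothetical driver skeleton: probe/remove live in the ops struct
 * and are not discarded after init, so module_platform_driver()
 * applies instead of module_platform_driver_probe(). */
static int example_probe(struct platform_device *pdev)
{
	return 0;
}

static void example_remove(struct platform_device *pdev)
{
}

static struct platform_driver example_driver = {
	.driver = {
		.name = "example",
	},
	.probe = example_probe,
	.remove_new = example_remove,
};
module_platform_driver(example_driver);

MODULE_DESCRIPTION("Example only");
MODULE_LICENSE("GPL");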
@@ -220,7 +220,8 @@ static int stm32_rng_read(struct hwrng *rng, void *data, size_t max, bool wait)
 			if (err && i > RNG_NB_RECOVER_TRIES) {
 				dev_err((struct device *)priv->rng.priv,
 					"Couldn't recover from seed error\n");
-				return -ENOTRECOVERABLE;
+				retval = -ENOTRECOVERABLE;
+				goto exit_rpm;
 			}
 
 			continue;
@@ -238,7 +239,8 @@ static int stm32_rng_read(struct hwrng *rng, void *data, size_t max, bool wait)
 			if (err && i > RNG_NB_RECOVER_TRIES) {
 				dev_err((struct device *)priv->rng.priv,
 					"Couldn't recover from seed error");
-				return -ENOTRECOVERABLE;
+				retval = -ENOTRECOVERABLE;
+				goto exit_rpm;
 			}
 
 			continue;
@@ -250,6 +252,7 @@ static int stm32_rng_read(struct hwrng *rng, void *data, size_t max, bool wait)
 		max -= sizeof(u32);
 	}
 
+exit_rpm:
 	pm_runtime_mark_last_busy((struct device *) priv->rng.priv);
 	pm_runtime_put_sync_autosuspend((struct device *) priv->rng.priv);
 
@@ -353,13 +356,15 @@ static int stm32_rng_init(struct hwrng *rng)
 	err = readl_relaxed_poll_timeout_atomic(priv->base + RNG_SR, reg,
 						reg & RNG_SR_DRDY,
 						10, 100000);
-	if (err | (reg & ~RNG_SR_DRDY)) {
+	if (err || (reg & ~RNG_SR_DRDY)) {
 		clk_disable_unprepare(priv->clk);
 		dev_err((struct device *)priv->rng.priv,
 			"%s: timeout:%x SR: %x!\n", __func__, err, reg);
 		return -EINVAL;
 	}
 
+	clk_disable_unprepare(priv->clk);
+
 	return 0;
 }
@@ -384,6 +389,11 @@ static int __maybe_unused stm32_rng_runtime_suspend(struct device *dev)
 static int __maybe_unused stm32_rng_suspend(struct device *dev)
 {
 	struct stm32_rng_private *priv = dev_get_drvdata(dev);
+	int err;
+
+	err = clk_prepare_enable(priv->clk);
+	if (err)
+		return err;
 
 	if (priv->data->has_cond_reset) {
 		priv->pm_conf.nscr = readl_relaxed(priv->base + RNG_NSCR);
@@ -465,6 +475,8 @@ static int __maybe_unused stm32_rng_resume(struct device *dev)
 		writel_relaxed(reg, priv->base + RNG_CR);
 	}
 
+	clk_disable_unprepare(priv->clk);
+
 	return 0;
 }
...
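Two patterns carry this clock-handling repair: error paths inside the read loop now funnel through a single exit_rpm label so the runtime-PM reference taken at entry is always dropped, and each suspend/resume register window is bracketed by its own clk_prepare_enable()/clk_disable_unprepare() pair so the clock enable count returns to zero on every exit. A condensed sketch of the single-exit shape; the helpers are hypothetical:

#include <linux/device.h>
#include <linux/pm_runtime.h>

static bool example_seed_ok(struct device *dev);
static int example_fill(struct device *dev, void *data, size_t max);

/* Hypothetical read path: pm_runtime_get_sync() at entry must be
 * paired with a put on every exit, hence the goto instead of a bare
 * return on the unrecoverable-error path. */
static int example_read(struct device *dev, void *data, size_t max)
{
	int retval = 0;

	pm_runtime_get_sync(dev);

	if (!example_seed_ok(dev)) {
		retval = -ENOTRECOVERABLE;
		goto exit_rpm;
	}

	retval = example_fill(dev, data, max);

exit_rpm:
	pm_runtime_mark_last_busy(dev);
	pm_runtime_put_sync_autosuspend(dev);
	return retval;
}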
@@ -644,6 +644,14 @@ config CRYPTO_DEV_ROCKCHIP_DEBUG
 	  This will create /sys/kernel/debug/rk3288_crypto/stats for displaying
 	  the number of requests per algorithm and other internal stats.
 
+config CRYPTO_DEV_TEGRA
+	tristate "Enable Tegra Security Engine"
+	depends on TEGRA_HOST1X
+	select CRYPTO_ENGINE
+	help
+	  Select this to enable Tegra Security Engine which accelerates various
+	  AES encryption/decryption and HASH algorithms.
+
 config CRYPTO_DEV_ZYNQMP_AES
 	tristate "Support for Xilinx ZynqMP AES hw accelerator"
...
@@ -41,6 +41,7 @@ obj-$(CONFIG_CRYPTO_DEV_SAHARA) += sahara.o
 obj-$(CONFIG_CRYPTO_DEV_SL3516) += gemini/
 obj-y += stm32/
 obj-$(CONFIG_CRYPTO_DEV_TALITOS) += talitos.o
+obj-$(CONFIG_CRYPTO_DEV_TEGRA) += tegra/
 obj-$(CONFIG_CRYPTO_DEV_VIRTIO) += virtio/
 #obj-$(CONFIG_CRYPTO_DEV_VMX) += vmx/
 obj-$(CONFIG_CRYPTO_DEV_BCM_SPU) += bcm/
...
This diff is collapsed.
@@ -495,7 +495,7 @@ static void spu2_dump_omd(u8 *omd, u16 hash_key_len, u16 ciph_key_len,
 	if (hash_iv_len) {
 		packet_log("  Hash IV Length %u bytes\n", hash_iv_len);
 		packet_dump("  hash IV: ", ptr, hash_iv_len);
-		ptr += ciph_key_len;
+		ptr += hash_iv_len;
 	}
 
 	if (ciph_iv_len) {
...
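This one-liner fixes a cursor bug in the metadata dump: after printing the hash IV, the pointer was advanced by the cipher-key length instead of the hash-IV length, skewing every field dumped after it. The invariant, sketched with a hypothetical field layout:

#include <linux/types.h>

/* Hypothetical walk over packed, variable-length OMD fields: each
 * advance must use the length of the field just consumed. The bug
 * stepped past the hash IV by ciph_key_len instead of hash_iv_len. */
static const u8 *example_skip_omd(const u8 *omd, u16 hash_key_len,
				  u16 ciph_key_len, u16 hash_iv_len)
{
	const u8 *ptr = omd;

	ptr += hash_key_len;	/* past the hash key */
	ptr += ciph_key_len;	/* past the cipher key */
	ptr += hash_iv_len;	/* past the hash IV (not ciph_key_len) */

	return ptr;
}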
This diff is collapsed.
@@ -481,8 +481,10 @@ static void sec_alg_resource_free(struct sec_ctx *ctx,
 	if (ctx->pbuf_supported)
 		sec_free_pbuf_resource(dev, qp_ctx->res);
-	if (ctx->alg_type == SEC_AEAD)
+	if (ctx->alg_type == SEC_AEAD) {
 		sec_free_mac_resource(dev, qp_ctx->res);
+		sec_free_aiv_resource(dev, qp_ctx->res);
+	}
 }
 
 static int sec_alloc_qp_ctx_resource(struct sec_ctx *ctx, struct sec_qp_ctx *qp_ctx)
...
This diff is collapsed.