Commit d4c90396 authored by Linus Torvalds

Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto updates from Herbert Xu:
 "Here is the crypto update for 4.3:

  API:

   - the AEAD interface transition is now complete.
   - add top-level skcipher interface.

  Drivers:

   - x86-64 acceleration for chacha20/poly1305.
   - add sunxi-ss Allwinner Security System crypto accelerator.
   - add RSA algorithm to qat driver.
   - add SRIOV support to qat driver.
   - add LS1021A support to caam.
   - add i.MX6 support to caam"

* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (163 commits)
  crypto: algif_aead - fix for multiple operations on AF_ALG sockets
  crypto: qat - enable legacy VFs
  MPI: Fix mpi_read_buffer
  crypto: qat - silence a static checker warning
  crypto: vmx - Fixing opcode issue
  crypto: caam - Use the preferred style for memory allocations
  crypto: caam - Propagate the real error code in caam_probe
  crypto: caam - Fix the error handling in caam_probe
  crypto: caam - fix writing to JQCR_MS when using service interface
  crypto: hash - Add AHASH_REQUEST_ON_STACK
  crypto: testmgr - Use new skcipher interface
  crypto: skcipher - Add top-level skcipher interface
  crypto: cmac - allow usage in FIPS mode
  crypto: sahara - Use dmam_alloc_coherent
  crypto: caam - Add support for LS1021A
  crypto: qat - Don't move data inside output buffer
  crypto: vmx - Fixing GHASH Key issue on little endian
  crypto: vmx - Fixing AES-CTR counter bug
  crypto: null - Add missing Kconfig tristate for NULL2
  crypto: nx - Add forward declaration for struct crypto_aead
  ...
parents f36fc04e bf433416
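Two of the API items above are new interfaces rather than fixes. Below is a minimal sketch (not part of the merged diffs) of how a caller might use the new top-level skcipher interface, assuming the include/crypto/skcipher.h API as merged in this pull; error handling is trimmed and a synchronous implementation is assumed. The on-stack request mirrors the new AHASH_REQUEST_ON_STACK helper.

/*
 * Hedged sketch: consuming the new top-level skcipher interface.
 * Assumes crypto_alloc_skcipher()/SKCIPHER_REQUEST_ON_STACK() as added
 * by this pull; demo_skcipher() is a hypothetical caller.
 */
#include <crypto/skcipher.h>
#include <linux/err.h>

static int demo_skcipher(struct scatterlist *src, struct scatterlist *dst,
			 unsigned int len, u8 *key, u8 *iv)
{
	struct crypto_skcipher *tfm;
	int err;

	tfm = crypto_alloc_skcipher("cbc(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_skcipher_setkey(tfm, key, 16);
	if (!err) {
		/* on-stack request, analogous to AHASH_REQUEST_ON_STACK */
		SKCIPHER_REQUEST_ON_STACK(req, tfm);

		skcipher_request_set_tfm(req, tfm);
		skcipher_request_set_callback(req, 0, NULL, NULL);
		skcipher_request_set_crypt(req, src, dst, len, iv);
		err = crypto_skcipher_encrypt(req);
	}

	crypto_free_skcipher(tfm);
	return err;
}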
@@ -585,7 +585,7 @@ kernel crypto API            |   IPSEC Layer
 +-----------+                |
 |           |   (1)
 |   aead    | <-----------------------------------  esp_output
-| (seqniv)  | ---+
+| (seqiv)   | ---+
 +-----------+    |
                  |   (2)
 +-----------+    |
@@ -1101,7 +1101,7 @@ kernel crypto API              |   Caller
   </para>
   <para>
-   [1] http://www.chronox.de/libkcapi.html
+   [1] <ulink url="http://www.chronox.de/libkcapi.html">http://www.chronox.de/libkcapi.html</ulink>
   </para>
  </sect1>
@@ -1661,7 +1661,7 @@ read(opfd, out, outlen);
   </para>
   <para>
-   [1] http://www.chronox.de/libkcapi.html
+   [1] <ulink url="http://www.chronox.de/libkcapi.html">http://www.chronox.de/libkcapi.html</ulink>
   </para>
  </sect1>
@@ -1687,7 +1687,7 @@ read(opfd, out, outlen);
 !Pinclude/linux/crypto.h Block Cipher Algorithm Definitions
 !Finclude/linux/crypto.h crypto_alg
 !Finclude/linux/crypto.h ablkcipher_alg
-!Finclude/linux/crypto.h aead_alg
+!Finclude/crypto/aead.h aead_alg
 !Finclude/linux/crypto.h blkcipher_alg
 !Finclude/linux/crypto.h cipher_alg
 !Finclude/crypto/rng.h rng_alg
...
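The documentation hunks above reference the AF_ALG userspace interface (the `read(opfd, out, outlen)` context line) and libkcapi. For orientation, a minimal userspace sketch of that interface, not part of the diff, hashing a buffer with sha256 through the kernel crypto API:

/*
 * Hedged userspace sketch of the AF_ALG socket interface the DocBook
 * text above documents; error handling omitted for brevity.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/if_alg.h>

int main(void)
{
	struct sockaddr_alg sa = {
		.salg_family = AF_ALG,
		.salg_type   = "hash",
		.salg_name   = "sha256",
	};
	unsigned char digest[32];
	int tfmfd, opfd, i;

	tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
	bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa));
	opfd = accept(tfmfd, NULL, 0);

	write(opfd, "hello", 5);
	read(opfd, digest, sizeof(digest)); /* the read(opfd, out, outlen) pattern above */

	for (i = 0; i < 32; i++)
		printf("%02x", digest[i]);
	printf("\n");
	close(opfd);
	close(tfmfd);
	return 0;
}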
@@ -106,6 +106,18 @@ PROPERTIES
           to the interrupt parent to which the child domain
           is being mapped.
+   - clocks
+      Usage: required if SEC 4.0 requires explicit enablement of clocks
+      Value type: <prop_encoded-array>
+      Definition:  A list of phandle and clock specifier pairs describing
+          the clocks required for enabling and disabling SEC 4.0.
+
+   - clock-names
+      Usage: required if SEC 4.0 requires explicit enablement of clocks
+      Value type: <string>
+      Definition: A list of clock name strings in the same order as the
+          clocks property.
+
    Note: All other standard properties (see the ePAPR) are allowed
    but are optional.
@@ -120,6 +132,11 @@ EXAMPLE
 		ranges = <0 0x300000 0x10000>;
 		interrupt-parent = <&mpic>;
 		interrupts = <92 2>;
+		clocks = <&clks IMX6QDL_CLK_CAAM_MEM>,
+			 <&clks IMX6QDL_CLK_CAAM_ACLK>,
+			 <&clks IMX6QDL_CLK_CAAM_IPG>,
+			 <&clks IMX6QDL_CLK_EIM_SLOW>;
+		clock-names = "mem", "aclk", "ipg", "emi_slow";
 	};
 =====================================================================
...
* Allwinner Security System found on A20 SoC
Required properties:
- compatible : Should be "allwinner,sun4i-a10-crypto".
- reg: Should contain the Security System register location and length.
- interrupts: Should contain the IRQ line for the Security System.
- clocks : List of clock specifiers, corresponding to ahb and ss.
- clock-names : Name of the functional clock, should be
* "ahb" : AHB gating clock
* "mod" : SS controller clock
Optional properties:
- resets : phandle + reset specifier pair
- reset-names : must contain "ahb"
Example:
crypto: crypto-engine@01c15000 {
compatible = "allwinner,sun4i-a10-crypto";
reg = <0x01c15000 0x1000>;
interrupts = <GIC_SPI 86 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&ahb_gates 5>, <&ss_clk>;
clock-names = "ahb", "mod";
};
@@ -556,6 +556,12 @@ S:	Maintained
 F:	Documentation/i2c/busses/i2c-ali1563
 F:	drivers/i2c/busses/i2c-ali1563.c
+ALLWINNER SECURITY SYSTEM
+M:	Corentin Labbe <clabbe.montjoie@gmail.com>
+L:	linux-crypto@vger.kernel.org
+S:	Maintained
+F:	drivers/crypto/sunxi-ss/
+
 ALPHA PORT
 M:	Richard Henderson <rth@twiddle.net>
 M:	Ivan Kokshaysky <ink@jurassic.park.msu.ru>
@@ -5078,9 +5084,21 @@ T:	git git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux.git
 S:	Maintained
 F:	arch/ia64/
+IBM Power VMX Cryptographic instructions
+M:	Leonidas S. Barbosa <leosilva@linux.vnet.ibm.com>
+M:	Paulo Flabiano Smorigo <pfsmorigo@linux.vnet.ibm.com>
+L:	linux-crypto@vger.kernel.org
+S:	Supported
+F:	drivers/crypto/vmx/Makefile
+F:	drivers/crypto/vmx/Kconfig
+F:	drivers/crypto/vmx/vmx.c
+F:	drivers/crypto/vmx/aes*
+F:	drivers/crypto/vmx/ghash*
+F:	drivers/crypto/vmx/ppc-xlate.pl
+
 IBM Power in-Nest Crypto Acceleration
-M:	Marcelo Henrique Cerri <mhcerri@linux.vnet.ibm.com>
-M:	Fionnuala Gunter <fin@linux.vnet.ibm.com>
+M:	Leonidas S. Barbosa <leosilva@linux.vnet.ibm.com>
+M:	Paulo Flabiano Smorigo <pfsmorigo@linux.vnet.ibm.com>
 L:	linux-crypto@vger.kernel.org
 S:	Supported
 F:	drivers/crypto/nx/Makefile
@@ -5092,7 +5110,7 @@ F:	drivers/crypto/nx/nx_csbcpb.h
 F:	drivers/crypto/nx/nx_debugfs.h
 IBM Power 842 compression accelerator
-M:	Dan Streetman <ddstreet@us.ibm.com>
+M:	Dan Streetman <ddstreet@ieee.org>
 S:	Supported
 F:	drivers/crypto/nx/Makefile
 F:	drivers/crypto/nx/Kconfig
...
@@ -836,10 +836,31 @@ aips-bus@02100000 { /* AIPS2 */
 			reg = <0x02100000 0x100000>;
 			ranges;
-			caam@02100000 {
-				reg = <0x02100000 0x40000>;
-				interrupts = <0 105 IRQ_TYPE_LEVEL_HIGH>,
-					     <0 106 IRQ_TYPE_LEVEL_HIGH>;
+			crypto: caam@2100000 {
+				compatible = "fsl,sec-v4.0";
+				fsl,sec-era = <4>;
+				#address-cells = <1>;
+				#size-cells = <1>;
+				reg = <0x2100000 0x10000>;
+				ranges = <0 0x2100000 0x10000>;
+				interrupt-parent = <&intc>;
+				clocks = <&clks IMX6QDL_CLK_CAAM_MEM>,
+					 <&clks IMX6QDL_CLK_CAAM_ACLK>,
+					 <&clks IMX6QDL_CLK_CAAM_IPG>,
+					 <&clks IMX6QDL_CLK_EIM_SLOW>;
+				clock-names = "mem", "aclk", "ipg", "emi_slow";
+
+				sec_jr0: jr0@1000 {
+					compatible = "fsl,sec-v4.0-job-ring";
+					reg = <0x1000 0x1000>;
+					interrupts = <GIC_SPI 105 IRQ_TYPE_LEVEL_HIGH>;
+				};
+
+				sec_jr1: jr1@2000 {
+					compatible = "fsl,sec-v4.0-job-ring";
+					reg = <0x2000 0x1000>;
+					interrupts = <GIC_SPI 106 IRQ_TYPE_LEVEL_HIGH>;
+				};
 			};
 		aipstz@0217c000 { /* AIPSTZ2 */
...
@@ -738,6 +738,33 @@ aips2: aips-bus@02100000 {
 			reg = <0x02100000 0x100000>;
 			ranges;
+			crypto: caam@2100000 {
+				compatible = "fsl,sec-v4.0";
+				fsl,sec-era = <4>;
+				#address-cells = <1>;
+				#size-cells = <1>;
+				reg = <0x2100000 0x10000>;
+				ranges = <0 0x2100000 0x10000>;
+				interrupt-parent = <&intc>;
+				clocks = <&clks IMX6SX_CLK_CAAM_MEM>,
+					 <&clks IMX6SX_CLK_CAAM_ACLK>,
+					 <&clks IMX6SX_CLK_CAAM_IPG>,
+					 <&clks IMX6SX_CLK_EIM_SLOW>;
+				clock-names = "mem", "aclk", "ipg", "emi_slow";
+
+				sec_jr0: jr0@1000 {
+					compatible = "fsl,sec-v4.0-job-ring";
+					reg = <0x1000 0x1000>;
+					interrupts = <GIC_SPI 105 IRQ_TYPE_LEVEL_HIGH>;
+				};
+
+				sec_jr1: jr1@2000 {
+					compatible = "fsl,sec-v4.0-job-ring";
+					reg = <0x2000 0x1000>;
+					interrupts = <GIC_SPI 106 IRQ_TYPE_LEVEL_HIGH>;
+				};
+			};
+
 			usbotg1: usb@02184000 {
 				compatible = "fsl,imx6sx-usb", "fsl,imx27-usb";
 				reg = <0x02184000 0x200>;
...
@@ -678,6 +678,14 @@ ohci0: usb@01c14400 {
 			status = "disabled";
 		};
+		crypto: crypto-engine@01c15000 {
+			compatible = "allwinner,sun4i-a10-crypto";
+			reg = <0x01c15000 0x1000>;
+			interrupts = <86>;
+			clocks = <&ahb_gates 5>, <&ss_clk>;
+			clock-names = "ahb", "mod";
+		};
+
 		spi2: spi@01c17000 {
 			compatible = "allwinner,sun4i-a10-spi";
 			reg = <0x01c17000 0x1000>;
...
@@ -367,6 +367,14 @@ mmc3_clk: clk@01c20094 {
 						     "mmc3_sample";
 		};
+		ss_clk: clk@01c2009c {
+			#clock-cells = <0>;
+			compatible = "allwinner,sun4i-a10-mod0-clk";
+			reg = <0x01c2009c 0x4>;
+			clocks = <&osc24M>, <&pll6 0>;
+			clock-output-names = "ss";
+		};
+
 		spi0_clk: clk@01c200a0 {
 			#clock-cells = <0>;
 			compatible = "allwinner,sun4i-a10-mod0-clk";
@@ -894,6 +902,16 @@ gmac: ethernet@01c30000 {
 			#size-cells = <0>;
 		};
+		crypto: crypto-engine@01c15000 {
+			compatible = "allwinner,sun4i-a10-crypto";
+			reg = <0x01c15000 0x1000>;
+			interrupts = <GIC_SPI 80 IRQ_TYPE_LEVEL_HIGH>;
+			clocks = <&ahb1_gates 5>, <&ss_clk>;
+			clock-names = "ahb", "mod";
+			resets = <&ahb1_rst 5>;
+			reset-names = "ahb";
+		};
+
 		timer@01c60000 {
 			compatible = "allwinner,sun6i-a31-hstimer",
 				     "allwinner,sun7i-a20-hstimer";
...
@@ -754,6 +754,14 @@ ohci0: usb@01c14400 {
 			status = "disabled";
 		};
+		crypto: crypto-engine@01c15000 {
+			compatible = "allwinner,sun4i-a10-crypto";
+			reg = <0x01c15000 0x1000>;
+			interrupts = <GIC_SPI 86 IRQ_TYPE_LEVEL_HIGH>;
+			clocks = <&ahb_gates 5>, <&ss_clk>;
+			clock-names = "ahb", "mod";
+		};
+
 		spi2: spi@01c17000 {
 			compatible = "allwinner,sun4i-a10-spi";
 			reg = <0x01c17000 0x1000>;
...
@@ -354,8 +354,7 @@ CONFIG_PROVE_LOCKING=y
 # CONFIG_FTRACE is not set
 # CONFIG_ARM_UNWIND is not set
 CONFIG_SECURITYFS=y
-# CONFIG_CRYPTO_ANSI_CPRNG is not set
-# CONFIG_CRYPTO_HW is not set
+CONFIG_CRYPTO_DEV_FSL_CAAM=y
 CONFIG_CRC_CCITT=m
 CONFIG_CRC_T10DIF=y
 CONFIG_CRC7=m
...
aesbs-core.S
sha256-core.S
sha512-core.S
@@ -124,7 +124,7 @@ static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[])
 		ce_aes_ccm_auth_data(mac, (u8 *)&ltag, ltag.len, &macp, ctx->key_enc,
 				     num_rounds(ctx));
-	scatterwalk_start(&walk, req->assoc);
+	scatterwalk_start(&walk, req->src);
 	do {
 		u32 n = scatterwalk_clamp(&walk, len);
@@ -151,6 +151,10 @@ static int ccm_encrypt(struct aead_request *req)
 	struct crypto_aes_ctx *ctx = crypto_aead_ctx(aead);
 	struct blkcipher_desc desc = { .info = req->iv };
 	struct blkcipher_walk walk;
+	struct scatterlist srcbuf[2];
+	struct scatterlist dstbuf[2];
+	struct scatterlist *src;
+	struct scatterlist *dst;
 	u8 __aligned(8) mac[AES_BLOCK_SIZE];
 	u8 buf[AES_BLOCK_SIZE];
 	u32 len = req->cryptlen;
@@ -168,7 +172,12 @@ static int ccm_encrypt(struct aead_request *req)
 	/* preserve the original iv for the final round */
 	memcpy(buf, req->iv, AES_BLOCK_SIZE);
-	blkcipher_walk_init(&walk, req->dst, req->src, len);
+	src = scatterwalk_ffwd(srcbuf, req->src, req->assoclen);
+	dst = src;
+	if (req->src != req->dst)
+		dst = scatterwalk_ffwd(dstbuf, req->dst, req->assoclen);
+
+	blkcipher_walk_init(&walk, dst, src, len);
 	err = blkcipher_aead_walk_virt_block(&desc, &walk, aead,
 					     AES_BLOCK_SIZE);
@@ -194,7 +203,7 @@ static int ccm_encrypt(struct aead_request *req)
 		return err;
 	/* copy authtag to end of dst */
-	scatterwalk_map_and_copy(mac, req->dst, req->cryptlen,
+	scatterwalk_map_and_copy(mac, dst, req->cryptlen,
 				 crypto_aead_authsize(aead), 1);
 	return 0;
@@ -207,6 +216,10 @@ static int ccm_decrypt(struct aead_request *req)
 	unsigned int authsize = crypto_aead_authsize(aead);
 	struct blkcipher_desc desc = { .info = req->iv };
 	struct blkcipher_walk walk;
+	struct scatterlist srcbuf[2];
+	struct scatterlist dstbuf[2];
+	struct scatterlist *src;
+	struct scatterlist *dst;
 	u8 __aligned(8) mac[AES_BLOCK_SIZE];
 	u8 buf[AES_BLOCK_SIZE];
 	u32 len = req->cryptlen - authsize;
@@ -224,7 +237,12 @@ static int ccm_decrypt(struct aead_request *req)
 	/* preserve the original iv for the final round */
 	memcpy(buf, req->iv, AES_BLOCK_SIZE);
-	blkcipher_walk_init(&walk, req->dst, req->src, len);
+	src = scatterwalk_ffwd(srcbuf, req->src, req->assoclen);
+	dst = src;
+	if (req->src != req->dst)
+		dst = scatterwalk_ffwd(dstbuf, req->dst, req->assoclen);
+
+	blkcipher_walk_init(&walk, dst, src, len);
 	err = blkcipher_aead_walk_virt_block(&desc, &walk, aead,
 					     AES_BLOCK_SIZE);
@@ -250,44 +268,42 @@ static int ccm_decrypt(struct aead_request *req)
 		return err;
 	/* compare calculated auth tag with the stored one */
-	scatterwalk_map_and_copy(buf, req->src, req->cryptlen - authsize,
+	scatterwalk_map_and_copy(buf, src, req->cryptlen - authsize,
 				 authsize, 0);
-	if (memcmp(mac, buf, authsize))
+	if (crypto_memneq(mac, buf, authsize))
 		return -EBADMSG;
 	return 0;
 }
-static struct crypto_alg ccm_aes_alg = {
-	.cra_name		= "ccm(aes)",
-	.cra_driver_name	= "ccm-aes-ce",
-	.cra_priority		= 300,
-	.cra_flags		= CRYPTO_ALG_TYPE_AEAD,
-	.cra_blocksize		= 1,
-	.cra_ctxsize		= sizeof(struct crypto_aes_ctx),
-	.cra_alignmask		= 7,
-	.cra_type		= &crypto_aead_type,
-	.cra_module		= THIS_MODULE,
-	.cra_aead = {
-		.ivsize		= AES_BLOCK_SIZE,
-		.maxauthsize	= AES_BLOCK_SIZE,
-		.setkey		= ccm_setkey,
-		.setauthsize	= ccm_setauthsize,
-		.encrypt	= ccm_encrypt,
-		.decrypt	= ccm_decrypt,
-	}
+static struct aead_alg ccm_aes_alg = {
+	.base = {
+		.cra_name		= "ccm(aes)",
+		.cra_driver_name	= "ccm-aes-ce",
+		.cra_priority		= 300,
+		.cra_blocksize		= 1,
+		.cra_ctxsize		= sizeof(struct crypto_aes_ctx),
+		.cra_alignmask		= 7,
+		.cra_module		= THIS_MODULE,
+	},
+	.ivsize		= AES_BLOCK_SIZE,
+	.maxauthsize	= AES_BLOCK_SIZE,
+	.setkey		= ccm_setkey,
+	.setauthsize	= ccm_setauthsize,
+	.encrypt	= ccm_encrypt,
+	.decrypt	= ccm_decrypt,
 };
 static int __init aes_mod_init(void)
 {
 	if (!(elf_hwcap & HWCAP_AES))
 		return -ENODEV;
-	return crypto_register_alg(&ccm_aes_alg);
+	return crypto_register_aead(&ccm_aes_alg);
 }
 static void __exit aes_mod_exit(void)
 {
-	crypto_unregister_alg(&ccm_aes_alg);
+	crypto_unregister_aead(&ccm_aes_alg);
 }
 module_init(aes_mod_init);
...
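The conversion above shows two idioms of the new AEAD interface: the associated data now sits at the front of req->src/req->dst, so implementations fast-forward past it with scatterwalk_ffwd(), and the tag check moves from memcmp() to the constant-time crypto_memneq(). A simplified sketch of both, not part of the diff:

/*
 * Hedged sketch of the two idioms adopted above; demo_aead_decrypt_tail()
 * is a hypothetical helper, simplified from the real driver flow.
 */
#include <crypto/aead.h>
#include <crypto/algapi.h>	/* crypto_memneq */
#include <crypto/scatterwalk.h>	/* scatterwalk_ffwd, scatterwalk_map_and_copy */

static int demo_aead_decrypt_tail(struct aead_request *req, u8 *mac,
				  unsigned int authsize)
{
	struct scatterlist sg[2];
	struct scatterlist *payload;
	u8 stored[16];

	/* New AEAD layout: req->src begins with assoclen bytes of AAD,
	 * so skip past it before touching the ciphertext. */
	payload = scatterwalk_ffwd(sg, req->src, req->assoclen);

	scatterwalk_map_and_copy(stored, payload, req->cryptlen - authsize,
				 authsize, 0);

	/* crypto_memneq runs in constant time: the comparison leaks no
	 * timing signal about how many tag bytes matched. */
	return crypto_memneq(mac, stored, authsize) ? -EBADMSG : 0;
}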
@@ -29,6 +29,7 @@ static inline void save_early_sprs(struct thread_struct *prev) {}
 extern void enable_kernel_fp(void);
 extern void enable_kernel_altivec(void);
+extern void enable_kernel_vsx(void);
 extern int emulate_altivec(struct pt_regs *);
 extern void __giveup_vsx(struct task_struct *);
 extern void giveup_vsx(struct task_struct *);
...
@@ -204,8 +204,6 @@ EXPORT_SYMBOL_GPL(flush_altivec_to_thread);
 #endif /* CONFIG_ALTIVEC */
 #ifdef CONFIG_VSX
-#if 0
-/* not currently used, but some crazy RAID module might want to later */
 void enable_kernel_vsx(void)
 {
 	WARN_ON(preemptible());
@@ -220,7 +218,6 @@ void enable_kernel_vsx(void)
 #endif /* CONFIG_SMP */
 }
 EXPORT_SYMBOL(enable_kernel_vsx);
-#endif
 void giveup_vsx(struct task_struct *tsk)
 {
...
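The `#if 0` is dropped so the VMX crypto driver added in this pull can use VSX from kernel context. A hedged sketch of the expected usage pattern, assuming the 4.3-era API (no disable_kernel_vsx() counterpart exists yet); demo_vsx_section() is hypothetical:

/*
 * Hedged sketch: bracketing in-kernel VSX usage, loosely modeled on how
 * drivers/crypto/vmx calls the export enabled above.
 */
#include <linux/preempt.h>
#include <asm/processor.h>	/* enable_kernel_vsx(), per the header hunk */

static void demo_vsx_section(void)
{
	preempt_disable();	/* VSX state must not be context-switched away */
	enable_kernel_vsx();	/* now usable thanks to the hunk above */
	/* ... VSX-accelerated code runs here ... */
	preempt_enable();
}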
@@ -20,6 +20,7 @@ obj-$(CONFIG_CRYPTO_BLOWFISH_X86_64) += blowfish-x86_64.o
 obj-$(CONFIG_CRYPTO_TWOFISH_X86_64) += twofish-x86_64.o
 obj-$(CONFIG_CRYPTO_TWOFISH_X86_64_3WAY) += twofish-x86_64-3way.o
 obj-$(CONFIG_CRYPTO_SALSA20_X86_64) += salsa20-x86_64.o
+obj-$(CONFIG_CRYPTO_CHACHA20_X86_64) += chacha20-x86_64.o
 obj-$(CONFIG_CRYPTO_SERPENT_SSE2_X86_64) += serpent-sse2-x86_64.o
 obj-$(CONFIG_CRYPTO_AES_NI_INTEL) += aesni-intel.o
 obj-$(CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL) += ghash-clmulni-intel.o
@@ -30,6 +31,7 @@ obj-$(CONFIG_CRYPTO_CRC32_PCLMUL) += crc32-pclmul.o
 obj-$(CONFIG_CRYPTO_SHA256_SSSE3) += sha256-ssse3.o
 obj-$(CONFIG_CRYPTO_SHA512_SSSE3) += sha512-ssse3.o
 obj-$(CONFIG_CRYPTO_CRCT10DIF_PCLMUL) += crct10dif-pclmul.o
+obj-$(CONFIG_CRYPTO_POLY1305_X86_64) += poly1305-x86_64.o
 # These modules require assembler to support AVX.
 ifeq ($(avx_supported),yes)
@@ -60,6 +62,7 @@ blowfish-x86_64-y := blowfish-x86_64-asm_64.o blowfish_glue.o
 twofish-x86_64-y := twofish-x86_64-asm_64.o twofish_glue.o
 twofish-x86_64-3way-y := twofish-x86_64-asm_64-3way.o twofish_glue_3way.o
 salsa20-x86_64-y := salsa20-x86_64-asm_64.o salsa20_glue.o
+chacha20-x86_64-y := chacha20-ssse3-x86_64.o chacha20_glue.o
 serpent-sse2-x86_64-y := serpent-sse2-x86_64-asm_64.o serpent_sse2_glue.o
 ifeq ($(avx_supported),yes)
@@ -75,6 +78,7 @@ endif
 ifeq ($(avx2_supported),yes)
 camellia-aesni-avx2-y := camellia-aesni-avx2-asm_64.o camellia_aesni_avx2_glue.o
+chacha20-x86_64-y += chacha20-avx2-x86_64.o
 serpent-avx2-y := serpent-avx2-asm_64.o serpent_avx2_glue.o
 endif
@@ -82,8 +86,10 @@ aesni-intel-y := aesni-intel_asm.o aesni-intel_glue.o fpu.o
 aesni-intel-$(CONFIG_64BIT) += aesni-intel_avx-x86_64.o aes_ctrby8_avx-x86_64.o
 ghash-clmulni-intel-y := ghash-clmulni-intel_asm.o ghash-clmulni-intel_glue.o
 sha1-ssse3-y := sha1_ssse3_asm.o sha1_ssse3_glue.o
+poly1305-x86_64-y := poly1305-sse2-x86_64.o poly1305_glue.o
 ifeq ($(avx2_supported),yes)
 sha1-ssse3-y += sha1_avx2_x86_64_asm.o
+poly1305-x86_64-y += poly1305-avx2-x86_64.o
 endif
 crc32c-intel-y := crc32c-intel_glue.o
 crc32c-intel-$(CONFIG_64BIT) += crc32c-pcl-intel-asm_64.o
...
@@ -803,10 +803,7 @@ static int rfc4106_init(struct crypto_aead *aead)
 		return PTR_ERR(cryptd_tfm);
 	*ctx = cryptd_tfm;
-	crypto_aead_set_reqsize(
-		aead,
-		sizeof(struct aead_request) +
-		crypto_aead_reqsize(&cryptd_tfm->base));
+	crypto_aead_set_reqsize(aead, crypto_aead_reqsize(&cryptd_tfm->base));
 	return 0;
 }
@@ -955,8 +952,8 @@ static int helper_rfc4106_encrypt(struct aead_request *req)
 	/* Assuming we are supporting rfc4106 64-bit extended */
 	/* sequence numbers We need to have the AAD length equal */
-	/* to 8 or 12 bytes */
-	if (unlikely(req->assoclen != 8 && req->assoclen != 12))
+	/* to 16 or 20 bytes */
+	if (unlikely(req->assoclen != 16 && req->assoclen != 20))
 		return -EINVAL;
 	/* IV below built */
@@ -992,9 +989,9 @@ static int helper_rfc4106_encrypt(struct aead_request *req)
 	}
 	kernel_fpu_begin();
-	aesni_gcm_enc_tfm(aes_ctx, dst, src, (unsigned long)req->cryptlen, iv,
-		ctx->hash_subkey, assoc, (unsigned long)req->assoclen, dst
-		+ ((unsigned long)req->cryptlen), auth_tag_len);
+	aesni_gcm_enc_tfm(aes_ctx, dst, src, req->cryptlen, iv,
+			  ctx->hash_subkey, assoc, req->assoclen - 8,
+			  dst + req->cryptlen, auth_tag_len);
 	kernel_fpu_end();
 	/* The authTag (aka the Integrity Check Value) needs to be written
@@ -1033,12 +1030,12 @@ static int helper_rfc4106_decrypt(struct aead_request *req)
 	struct scatter_walk dst_sg_walk;
 	unsigned int i;
-	if (unlikely(req->assoclen != 8 && req->assoclen != 12))
+	if (unlikely(req->assoclen != 16 && req->assoclen != 20))
 		return -EINVAL;
 	/* Assuming we are supporting rfc4106 64-bit extended */
 	/* sequence numbers We need to have the AAD length */
-	/* equal to 8 or 12 bytes */
+	/* equal to 16 or 20 bytes */
 	tempCipherLen = (unsigned long)(req->cryptlen - auth_tag_len);
 	/* IV below built */
@@ -1075,8 +1072,8 @@ static int helper_rfc4106_decrypt(struct aead_request *req)
 	kernel_fpu_begin();
 	aesni_gcm_dec_tfm(aes_ctx, dst, src, tempCipherLen, iv,
-		ctx->hash_subkey, assoc, (unsigned long)req->assoclen,
-		authTag, auth_tag_len);
+			  ctx->hash_subkey, assoc, req->assoclen - 8,
+			  authTag, auth_tag_len);
 	kernel_fpu_end();
 	/* Compare generated tag with passed in tag. */
@@ -1105,19 +1102,12 @@ static int rfc4106_encrypt(struct aead_request *req)
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	struct cryptd_aead **ctx = crypto_aead_ctx(tfm);
 	struct cryptd_aead *cryptd_tfm = *ctx;
-	struct aead_request *subreq = aead_request_ctx(req);
-	aead_request_set_tfm(subreq, irq_fpu_usable() ?
-				     cryptd_aead_child(cryptd_tfm) :
-				     &cryptd_tfm->base);
-	aead_request_set_callback(subreq, req->base.flags,
-				  req->base.complete, req->base.data);
-	aead_request_set_crypt(subreq, req->src, req->dst,
-			       req->cryptlen, req->iv);
-	aead_request_set_ad(subreq, req->assoclen);
-	return crypto_aead_encrypt(subreq);
+	aead_request_set_tfm(req, irq_fpu_usable() ?
+				  cryptd_aead_child(cryptd_tfm) :
+				  &cryptd_tfm->base);
+	return crypto_aead_encrypt(req);
 }
 static int rfc4106_decrypt(struct aead_request *req)
@@ -1125,19 +1115,12 @@ static int rfc4106_decrypt(struct aead_request *req)
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	struct cryptd_aead **ctx = crypto_aead_ctx(tfm);
 	struct cryptd_aead *cryptd_tfm = *ctx;
-	struct aead_request *subreq = aead_request_ctx(req);
-	aead_request_set_tfm(subreq, irq_fpu_usable() ?
-				     cryptd_aead_child(cryptd_tfm) :
-				     &cryptd_tfm->base);
-	aead_request_set_callback(subreq, req->base.flags,
-				  req->base.complete, req->base.data);
-	aead_request_set_crypt(subreq, req->src, req->dst,
-			       req->cryptlen, req->iv);
-	aead_request_set_ad(subreq, req->assoclen);
-	return crypto_aead_decrypt(subreq);
+	aead_request_set_tfm(req, irq_fpu_usable() ?
+				  cryptd_aead_child(cryptd_tfm) :
+				  &cryptd_tfm->base);
+	return crypto_aead_decrypt(req);
 }
 #endif
...
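The 8/12 to 16/20 change follows from the new AEAD layout: for rfc4106 the 8-byte explicit IV now rides along in the associated-data region, so callers set assoclen to cover AAD plus IV and the implementation subtracts 8 before handing the real AAD length to the GCM core. A hedged sketch of a caller under this convention (demo_rfc4106_setup() is hypothetical; lengths are illustrative):

/*
 * Hedged sketch: setting up an rfc4106(gcm(aes)) request under the new
 * AEAD convention the hunks above implement.
 */
#include <crypto/aead.h>

static void demo_rfc4106_setup(struct aead_request *req,
			       struct scatterlist *src,
			       struct scatterlist *dst, u8 *iv)
{
	/* src/dst carry 12 bytes of AAD, then the 8-byte explicit IV,
	 * then the payload; assoclen covers AAD + IV, hence 20 here
	 * (16 when the AAD proper is 8 bytes). */
	aead_request_set_ad(req, 20);
	aead_request_set_crypt(req, src, dst, 64 /* cryptlen */, iv);
}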
/*
* ChaCha20 256-bit cipher algorithm, RFC7539, SIMD glue code
*
* Copyright (C) 2015 Martin Willi
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#include <crypto/algapi.h>
#include <crypto/chacha20.h>
#include <linux/crypto.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <asm/fpu/api.h>
#include <asm/simd.h>
#define CHACHA20_STATE_ALIGN 16
asmlinkage void chacha20_block_xor_ssse3(u32 *state, u8 *dst, const u8 *src);
asmlinkage void chacha20_4block_xor_ssse3(u32 *state, u8 *dst, const u8 *src);
#ifdef CONFIG_AS_AVX2
asmlinkage void chacha20_8block_xor_avx2(u32 *state, u8 *dst, const u8 *src);
static bool chacha20_use_avx2;
#endif
static void chacha20_dosimd(u32 *state, u8 *dst, const u8 *src,
unsigned int bytes)
{
u8 buf[CHACHA20_BLOCK_SIZE];
#ifdef CONFIG_AS_AVX2
if (chacha20_use_avx2) {
while (bytes >= CHACHA20_BLOCK_SIZE * 8) {
chacha20_8block_xor_avx2(state, dst, src);
bytes -= CHACHA20_BLOCK_SIZE * 8;
src += CHACHA20_BLOCK_SIZE * 8;
dst += CHACHA20_BLOCK_SIZE * 8;
state[12] += 8;
}
}
#endif
while (bytes >= CHACHA20_BLOCK_SIZE * 4) {
chacha20_4block_xor_ssse3(state, dst, src);
bytes -= CHACHA20_BLOCK_SIZE * 4;
src += CHACHA20_BLOCK_SIZE * 4;
dst += CHACHA20_BLOCK_SIZE * 4;
state[12] += 4;
}
while (bytes >= CHACHA20_BLOCK_SIZE) {
chacha20_block_xor_ssse3(state, dst, src);
bytes -= CHACHA20_BLOCK_SIZE;
src += CHACHA20_BLOCK_SIZE;
dst += CHACHA20_BLOCK_SIZE;
state[12]++;
}
if (bytes) {
memcpy(buf, src, bytes);
chacha20_block_xor_ssse3(state, buf, buf);
memcpy(dst, buf, bytes);
}
}
static int chacha20_simd(struct blkcipher_desc *desc, struct scatterlist *dst,
struct scatterlist *src, unsigned int nbytes)
{
u32 *state, state_buf[16 + (CHACHA20_STATE_ALIGN / sizeof(u32)) - 1];
struct blkcipher_walk walk;
int err;
if (!may_use_simd())
return crypto_chacha20_crypt(desc, dst, src, nbytes);
state = (u32 *)roundup((uintptr_t)state_buf, CHACHA20_STATE_ALIGN);
blkcipher_walk_init(&walk, dst, src, nbytes);
err = blkcipher_walk_virt_block(desc, &walk, CHACHA20_BLOCK_SIZE);
crypto_chacha20_init(state, crypto_blkcipher_ctx(desc->tfm), walk.iv);
kernel_fpu_begin();
while (walk.nbytes >= CHACHA20_BLOCK_SIZE) {
chacha20_dosimd(state, walk.dst.virt.addr, walk.src.virt.addr,
rounddown(walk.nbytes, CHACHA20_BLOCK_SIZE));
err = blkcipher_walk_done(desc, &walk,
walk.nbytes % CHACHA20_BLOCK_SIZE);
}
if (walk.nbytes) {
chacha20_dosimd(state, walk.dst.virt.addr, walk.src.virt.addr,
walk.nbytes);
err = blkcipher_walk_done(desc, &walk, 0);
}
kernel_fpu_end();
return err;
}
static struct crypto_alg alg = {
.cra_name = "chacha20",
.cra_driver_name = "chacha20-simd",
.cra_priority = 300,
.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
.cra_blocksize = 1,
.cra_type = &crypto_blkcipher_type,
.cra_ctxsize = sizeof(struct chacha20_ctx),
.cra_alignmask = sizeof(u32) - 1,
.cra_module = THIS_MODULE,
.cra_u = {
.blkcipher = {
.min_keysize = CHACHA20_KEY_SIZE,
.max_keysize = CHACHA20_KEY_SIZE,
.ivsize = CHACHA20_IV_SIZE,
.geniv = "seqiv",
.setkey = crypto_chacha20_setkey,
.encrypt = chacha20_simd,
.decrypt = chacha20_simd,
},
},
};
static int __init chacha20_simd_mod_init(void)
{
if (!cpu_has_ssse3)
return -ENODEV;
#ifdef CONFIG_AS_AVX2
chacha20_use_avx2 = cpu_has_avx && cpu_has_avx2 &&
cpu_has_xfeatures(XSTATE_SSE | XSTATE_YMM, NULL);
#endif
return crypto_register_alg(&alg);
}
static void __exit chacha20_simd_mod_fini(void)
{
crypto_unregister_alg(&alg);
}
module_init(chacha20_simd_mod_init);
module_exit(chacha20_simd_mod_fini);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Martin Willi <martin@strongswan.org>");
MODULE_DESCRIPTION("chacha20 cipher algorithm, SIMD accelerated");
MODULE_ALIAS_CRYPTO("chacha20");
MODULE_ALIAS_CRYPTO("chacha20-simd");
/*
* Poly1305 authenticator algorithm, RFC7539, x64 AVX2 functions
*
* Copyright (C) 2015 Martin Willi
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#include <linux/linkage.h>
.data
.align 32
ANMASK: .octa 0x0000000003ffffff0000000003ffffff
.octa 0x0000000003ffffff0000000003ffffff
ORMASK: .octa 0x00000000010000000000000001000000
.octa 0x00000000010000000000000001000000
.text
#define h0 0x00(%rdi)
#define h1 0x04(%rdi)
#define h2 0x08(%rdi)
#define h3 0x0c(%rdi)
#define h4 0x10(%rdi)
#define r0 0x00(%rdx)
#define r1 0x04(%rdx)
#define r2 0x08(%rdx)
#define r3 0x0c(%rdx)
#define r4 0x10(%rdx)
#define u0 0x00(%r8)
#define u1 0x04(%r8)
#define u2 0x08(%r8)
#define u3 0x0c(%r8)
#define u4 0x10(%r8)
#define w0 0x14(%r8)
#define w1 0x18(%r8)
#define w2 0x1c(%r8)
#define w3 0x20(%r8)
#define w4 0x24(%r8)
#define y0 0x28(%r8)
#define y1 0x2c(%r8)
#define y2 0x30(%r8)
#define y3 0x34(%r8)
#define y4 0x38(%r8)
#define m %rsi
#define hc0 %ymm0
#define hc1 %ymm1
#define hc2 %ymm2
#define hc3 %ymm3
#define hc4 %ymm4
#define hc0x %xmm0
#define hc1x %xmm1
#define hc2x %xmm2
#define hc3x %xmm3
#define hc4x %xmm4
#define t1 %ymm5
#define t2 %ymm6
#define t1x %xmm5
#define t2x %xmm6
#define ruwy0 %ymm7
#define ruwy1 %ymm8
#define ruwy2 %ymm9
#define ruwy3 %ymm10
#define ruwy4 %ymm11
#define ruwy0x %xmm7
#define ruwy1x %xmm8
#define ruwy2x %xmm9
#define ruwy3x %xmm10
#define ruwy4x %xmm11
#define svxz1 %ymm12
#define svxz2 %ymm13
#define svxz3 %ymm14
#define svxz4 %ymm15
#define d0 %r9
#define d1 %r10
#define d2 %r11
#define d3 %r12
#define d4 %r13
ENTRY(poly1305_4block_avx2)
# %rdi: Accumulator h[5]
# %rsi: 64 byte input block m
# %rdx: Poly1305 key r[5]
# %rcx: Quadblock count
# %r8: Poly1305 derived key r^2 u[5], r^3 w[5], r^4 y[5],
# This four-block variant uses loop unrolled block processing. It
# requires 4 Poly1305 keys: r, r^2, r^3 and r^4:
# h = (h + m) * r => h = (h + m1) * r^4 + m2 * r^3 + m3 * r^2 + m4 * r
vzeroupper
push %rbx
push %r12
push %r13
# combine r0,u0,w0,y0
vmovd y0,ruwy0x
vmovd w0,t1x
vpunpcklqdq t1,ruwy0,ruwy0
vmovd u0,t1x
vmovd r0,t2x
vpunpcklqdq t2,t1,t1
vperm2i128 $0x20,t1,ruwy0,ruwy0
# combine r1,u1,w1,y1 and s1=r1*5,v1=u1*5,x1=w1*5,z1=y1*5
vmovd y1,ruwy1x
vmovd w1,t1x
vpunpcklqdq t1,ruwy1,ruwy1
vmovd u1,t1x
vmovd r1,t2x
vpunpcklqdq t2,t1,t1
vperm2i128 $0x20,t1,ruwy1,ruwy1
vpslld $2,ruwy1,svxz1
vpaddd ruwy1,svxz1,svxz1
# combine r2,u2,w2,y2 and s2=r2*5,v2=u2*5,x2=w2*5,z2=y2*5
vmovd y2,ruwy2x
vmovd w2,t1x
vpunpcklqdq t1,ruwy2,ruwy2
vmovd u2,t1x
vmovd r2,t2x
vpunpcklqdq t2,t1,t1
vperm2i128 $0x20,t1,ruwy2,ruwy2
vpslld $2,ruwy2,svxz2
vpaddd ruwy2,svxz2,svxz2
# combine r3,u3,w3,y3 and s3=r3*5,v3=u3*5,x3=w3*5,z3=y3*5
vmovd y3,ruwy3x
vmovd w3,t1x
vpunpcklqdq t1,ruwy3,ruwy3
vmovd u3,t1x
vmovd r3,t2x
vpunpcklqdq t2,t1,t1
vperm2i128 $0x20,t1,ruwy3,ruwy3
vpslld $2,ruwy3,svxz3
vpaddd ruwy3,svxz3,svxz3
# combine r4,u4,w4,y4 and s4=r4*5,v4=u4*5,x4=w4*5,z4=y4*5
vmovd y4,ruwy4x
vmovd w4,t1x
vpunpcklqdq t1,ruwy4,ruwy4
vmovd u4,t1x
vmovd r4,t2x
vpunpcklqdq t2,t1,t1
vperm2i128 $0x20,t1,ruwy4,ruwy4
vpslld $2,ruwy4,svxz4
vpaddd ruwy4,svxz4,svxz4
.Ldoblock4:
# hc0 = [m[48-51] & 0x3ffffff, m[32-35] & 0x3ffffff,
# m[16-19] & 0x3ffffff, m[ 0- 3] & 0x3ffffff + h0]
vmovd 0x00(m),hc0x
vmovd 0x10(m),t1x
vpunpcklqdq t1,hc0,hc0
vmovd 0x20(m),t1x
vmovd 0x30(m),t2x
vpunpcklqdq t2,t1,t1
vperm2i128 $0x20,t1,hc0,hc0
vpand ANMASK(%rip),hc0,hc0
vmovd h0,t1x
vpaddd t1,hc0,hc0
# hc1 = [(m[51-54] >> 2) & 0x3ffffff, (m[35-38] >> 2) & 0x3ffffff,
# (m[19-22] >> 2) & 0x3ffffff, (m[ 3- 6] >> 2) & 0x3ffffff + h1]
vmovd 0x03(m),hc1x
vmovd 0x13(m),t1x
vpunpcklqdq t1,hc1,hc1
vmovd 0x23(m),t1x
vmovd 0x33(m),t2x
vpunpcklqdq t2,t1,t1
vperm2i128 $0x20,t1,hc1,hc1
vpsrld $2,hc1,hc1
vpand ANMASK(%rip),hc1,hc1
vmovd h1,t1x
vpaddd t1,hc1,hc1
# hc2 = [(m[54-57] >> 4) & 0x3ffffff, (m[38-41] >> 4) & 0x3ffffff,
# (m[22-25] >> 4) & 0x3ffffff, (m[ 6- 9] >> 4) & 0x3ffffff + h2]
vmovd 0x06(m),hc2x
vmovd 0x16(m),t1x
vpunpcklqdq t1,hc2,hc2
vmovd 0x26(m),t1x
vmovd 0x36(m),t2x
vpunpcklqdq t2,t1,t1
vperm2i128 $0x20,t1,hc2,hc2
vpsrld $4,hc2,hc2
vpand ANMASK(%rip),hc2,hc2
vmovd h2,t1x
vpaddd t1,hc2,hc2
# hc3 = [(m[57-60] >> 6) & 0x3ffffff, (m[41-44] >> 6) & 0x3ffffff,
# (m[25-28] >> 6) & 0x3ffffff, (m[ 9-12] >> 6) & 0x3ffffff + h3]
vmovd 0x09(m),hc3x
vmovd 0x19(m),t1x
vpunpcklqdq t1,hc3,hc3
vmovd 0x29(m),t1x
vmovd 0x39(m),t2x
vpunpcklqdq t2,t1,t1
vperm2i128 $0x20,t1,hc3,hc3
vpsrld $6,hc3,hc3
vpand ANMASK(%rip),hc3,hc3
vmovd h3,t1x
vpaddd t1,hc3,hc3
# hc4 = [(m[60-63] >> 8) | (1<<24), (m[44-47] >> 8) | (1<<24),
# (m[28-31] >> 8) | (1<<24), (m[12-15] >> 8) | (1<<24) + h4]
vmovd 0x0c(m),hc4x
vmovd 0x1c(m),t1x
vpunpcklqdq t1,hc4,hc4
vmovd 0x2c(m),t1x
vmovd 0x3c(m),t2x
vpunpcklqdq t2,t1,t1
vperm2i128 $0x20,t1,hc4,hc4
vpsrld $8,hc4,hc4
vpor ORMASK(%rip),hc4,hc4
vmovd h4,t1x
vpaddd t1,hc4,hc4
# t1 = [ hc0[3] * r0, hc0[2] * u0, hc0[1] * w0, hc0[0] * y0 ]
vpmuludq hc0,ruwy0,t1
# t1 += [ hc1[3] * s4, hc1[2] * v4, hc1[1] * x4, hc1[0] * z4 ]
vpmuludq hc1,svxz4,t2
vpaddq t2,t1,t1
# t1 += [ hc2[3] * s3, hc2[2] * v3, hc2[1] * x3, hc2[0] * z3 ]
vpmuludq hc2,svxz3,t2
vpaddq t2,t1,t1
# t1 += [ hc3[3] * s2, hc3[2] * v2, hc3[1] * x2, hc3[0] * z2 ]
vpmuludq hc3,svxz2,t2
vpaddq t2,t1,t1
# t1 += [ hc4[3] * s1, hc4[2] * v1, hc4[1] * x1, hc4[0] * z1 ]
vpmuludq hc4,svxz1,t2
vpaddq t2,t1,t1
	# d0 = t1[0] + t1[1] + t1[2] + t1[3]
vpermq $0xee,t1,t2
vpaddq t2,t1,t1
vpsrldq $8,t1,t2
vpaddq t2,t1,t1
vmovq t1x,d0
# t1 = [ hc0[3] * r1, hc0[2] * u1,hc0[1] * w1, hc0[0] * y1 ]
vpmuludq hc0,ruwy1,t1
# t1 += [ hc1[3] * r0, hc1[2] * u0, hc1[1] * w0, hc1[0] * y0 ]
vpmuludq hc1,ruwy0,t2
vpaddq t2,t1,t1
# t1 += [ hc2[3] * s4, hc2[2] * v4, hc2[1] * x4, hc2[0] * z4 ]
vpmuludq hc2,svxz4,t2
vpaddq t2,t1,t1
# t1 += [ hc3[3] * s3, hc3[2] * v3, hc3[1] * x3, hc3[0] * z3 ]
vpmuludq hc3,svxz3,t2
vpaddq t2,t1,t1
# t1 += [ hc4[3] * s2, hc4[2] * v2, hc4[1] * x2, hc4[0] * z2 ]
vpmuludq hc4,svxz2,t2
vpaddq t2,t1,t1
	# d1 = t1[0] + t1[1] + t1[2] + t1[3]
vpermq $0xee,t1,t2
vpaddq t2,t1,t1
vpsrldq $8,t1,t2
vpaddq t2,t1,t1
vmovq t1x,d1
# t1 = [ hc0[3] * r2, hc0[2] * u2, hc0[1] * w2, hc0[0] * y2 ]
vpmuludq hc0,ruwy2,t1
# t1 += [ hc1[3] * r1, hc1[2] * u1, hc1[1] * w1, hc1[0] * y1 ]
vpmuludq hc1,ruwy1,t2
vpaddq t2,t1,t1
# t1 += [ hc2[3] * r0, hc2[2] * u0, hc2[1] * w0, hc2[0] * y0 ]
vpmuludq hc2,ruwy0,t2
vpaddq t2,t1,t1
# t1 += [ hc3[3] * s4, hc3[2] * v4, hc3[1] * x4, hc3[0] * z4 ]
vpmuludq hc3,svxz4,t2
vpaddq t2,t1,t1
# t1 += [ hc4[3] * s3, hc4[2] * v3, hc4[1] * x3, hc4[0] * z3 ]
vpmuludq hc4,svxz3,t2
vpaddq t2,t1,t1
# d2 = t1[0] + t1[1] + t1[2] + t1[3]
vpermq $0xee,t1,t2
vpaddq t2,t1,t1
vpsrldq $8,t1,t2
vpaddq t2,t1,t1
vmovq t1x,d2
# t1 = [ hc0[3] * r3, hc0[2] * u3, hc0[1] * w3, hc0[0] * y3 ]
vpmuludq hc0,ruwy3,t1
# t1 += [ hc1[3] * r2, hc1[2] * u2, hc1[1] * w2, hc1[0] * y2 ]
vpmuludq hc1,ruwy2,t2
vpaddq t2,t1,t1
# t1 += [ hc2[3] * r1, hc2[2] * u1, hc2[1] * w1, hc2[0] * y1 ]
vpmuludq hc2,ruwy1,t2
vpaddq t2,t1,t1
# t1 += [ hc3[3] * r0, hc3[2] * u0, hc3[1] * w0, hc3[0] * y0 ]
vpmuludq hc3,ruwy0,t2
vpaddq t2,t1,t1
# t1 += [ hc4[3] * s4, hc4[2] * v4, hc4[1] * x4, hc4[0] * z4 ]
vpmuludq hc4,svxz4,t2
vpaddq t2,t1,t1
# d3 = t1[0] + t1[1] + t1[2] + t1[3]
vpermq $0xee,t1,t2
vpaddq t2,t1,t1
vpsrldq $8,t1,t2
vpaddq t2,t1,t1
vmovq t1x,d3
# t1 = [ hc0[3] * r4, hc0[2] * u4, hc0[1] * w4, hc0[0] * y4 ]
vpmuludq hc0,ruwy4,t1
# t1 += [ hc1[3] * r3, hc1[2] * u3, hc1[1] * w3, hc1[0] * y3 ]
vpmuludq hc1,ruwy3,t2
vpaddq t2,t1,t1
# t1 += [ hc2[3] * r2, hc2[2] * u2, hc2[1] * w2, hc2[0] * y2 ]
vpmuludq hc2,ruwy2,t2
vpaddq t2,t1,t1
# t1 += [ hc3[3] * r1, hc3[2] * u1, hc3[1] * w1, hc3[0] * y1 ]
vpmuludq hc3,ruwy1,t2
vpaddq t2,t1,t1
# t1 += [ hc4[3] * r0, hc4[2] * u0, hc4[1] * w0, hc4[0] * y0 ]
vpmuludq hc4,ruwy0,t2
vpaddq t2,t1,t1
# d4 = t1[0] + t1[1] + t1[2] + t1[3]
vpermq $0xee,t1,t2
vpaddq t2,t1,t1
vpsrldq $8,t1,t2
vpaddq t2,t1,t1
vmovq t1x,d4
# d1 += d0 >> 26
mov d0,%rax
shr $26,%rax
add %rax,d1
# h0 = d0 & 0x3ffffff
mov d0,%rbx
and $0x3ffffff,%ebx
# d2 += d1 >> 26
mov d1,%rax
shr $26,%rax
add %rax,d2
# h1 = d1 & 0x3ffffff
mov d1,%rax
and $0x3ffffff,%eax
mov %eax,h1
# d3 += d2 >> 26
mov d2,%rax
shr $26,%rax
add %rax,d3
# h2 = d2 & 0x3ffffff
mov d2,%rax
and $0x3ffffff,%eax
mov %eax,h2
# d4 += d3 >> 26
mov d3,%rax
shr $26,%rax
add %rax,d4
# h3 = d3 & 0x3ffffff
mov d3,%rax
and $0x3ffffff,%eax
mov %eax,h3
# h0 += (d4 >> 26) * 5
mov d4,%rax
shr $26,%rax
lea (%eax,%eax,4),%eax
add %eax,%ebx
# h4 = d4 & 0x3ffffff
mov d4,%rax
and $0x3ffffff,%eax
mov %eax,h4
# h1 += h0 >> 26
mov %ebx,%eax
shr $26,%eax
add %eax,h1
# h0 = h0 & 0x3ffffff
andl $0x3ffffff,%ebx
mov %ebx,h0
add $0x40,m
dec %rcx
jnz .Ldoblock4
vzeroupper
pop %r13
pop %r12
pop %rbx
ret
ENDPROC(poly1305_4block_avx2)
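The carry chain at the end of the routine above is the standard radix-2^26 Poly1305 reduction; as a summary (not from the diff), in math form:

\begin{align*}
p &= 2^{130}-5, \qquad h=\sum_{i=0}^{4} h_i\,2^{26i}, \quad 0 \le h_i < 2^{26},\\
2^{130} &\equiv 5 \pmod p \;\Longrightarrow\; \text{a carry } c \text{ out of } h_4 \text{ folds back as } h_0 \leftarrow h_0 + 5c,\\
h' &= \bigl((h+m_1)\,r^4 + m_2\,r^3 + m_3\,r^2 + m_4\,r\bigr) \bmod p \quad \text{(the unrolled four-block step).}
\end{align*}

The `lea (%eax,%eax,4),%eax` is exactly the 5c fold, and the 0x3ffffff masks extract the 26-bit limbs; ANMASK and ORMASK apply the same masking and the high-bit injection per 32-bit lane.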
/*
* Poly1305 authenticator algorithm, RFC7539, SIMD glue code
*
* Copyright (C) 2015 Martin Willi
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#include <crypto/algapi.h>
#include <crypto/internal/hash.h>
#include <crypto/poly1305.h>
#include <linux/crypto.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <asm/fpu/api.h>
#include <asm/simd.h>
struct poly1305_simd_desc_ctx {
struct poly1305_desc_ctx base;
/* derived key u set? */
bool uset;
#ifdef CONFIG_AS_AVX2
/* derived keys r^3, r^4 set? */
bool wset;
#endif
/* derived Poly1305 key r^2 */
u32 u[5];
/* ... silently appended r^3 and r^4 when using AVX2 */
};
asmlinkage void poly1305_block_sse2(u32 *h, const u8 *src,
const u32 *r, unsigned int blocks);
asmlinkage void poly1305_2block_sse2(u32 *h, const u8 *src, const u32 *r,
unsigned int blocks, const u32 *u);
#ifdef CONFIG_AS_AVX2
asmlinkage void poly1305_4block_avx2(u32 *h, const u8 *src, const u32 *r,
unsigned int blocks, const u32 *u);
static bool poly1305_use_avx2;
#endif
static int poly1305_simd_init(struct shash_desc *desc)
{
struct poly1305_simd_desc_ctx *sctx = shash_desc_ctx(desc);
sctx->uset = false;
#ifdef CONFIG_AS_AVX2
sctx->wset = false;
#endif
return crypto_poly1305_init(desc);
}
static void poly1305_simd_mult(u32 *a, const u32 *b)
{
u8 m[POLY1305_BLOCK_SIZE];
memset(m, 0, sizeof(m));
/* The poly1305 block function adds a hi-bit to the accumulator which
* we don't need for key multiplication; compensate for it. */
a[4] -= 1 << 24;
poly1305_block_sse2(a, m, b, 1);
}
static unsigned int poly1305_simd_blocks(struct poly1305_desc_ctx *dctx,
const u8 *src, unsigned int srclen)
{
struct poly1305_simd_desc_ctx *sctx;
unsigned int blocks, datalen;
BUILD_BUG_ON(offsetof(struct poly1305_simd_desc_ctx, base));
sctx = container_of(dctx, struct poly1305_simd_desc_ctx, base);
if (unlikely(!dctx->sset)) {
datalen = crypto_poly1305_setdesckey(dctx, src, srclen);
src += srclen - datalen;
srclen = datalen;
}
#ifdef CONFIG_AS_AVX2
if (poly1305_use_avx2 && srclen >= POLY1305_BLOCK_SIZE * 4) {
if (unlikely(!sctx->wset)) {
if (!sctx->uset) {
memcpy(sctx->u, dctx->r, sizeof(sctx->u));
poly1305_simd_mult(sctx->u, dctx->r);
sctx->uset = true;
}
memcpy(sctx->u + 5, sctx->u, sizeof(sctx->u));
poly1305_simd_mult(sctx->u + 5, dctx->r);
memcpy(sctx->u + 10, sctx->u + 5, sizeof(sctx->u));
poly1305_simd_mult(sctx->u + 10, dctx->r);
sctx->wset = true;
}
blocks = srclen / (POLY1305_BLOCK_SIZE * 4);
poly1305_4block_avx2(dctx->h, src, dctx->r, blocks, sctx->u);
src += POLY1305_BLOCK_SIZE * 4 * blocks;
srclen -= POLY1305_BLOCK_SIZE * 4 * blocks;
}
#endif
if (likely(srclen >= POLY1305_BLOCK_SIZE * 2)) {
if (unlikely(!sctx->uset)) {
memcpy(sctx->u, dctx->r, sizeof(sctx->u));
poly1305_simd_mult(sctx->u, dctx->r);
sctx->uset = true;
}
blocks = srclen / (POLY1305_BLOCK_SIZE * 2);
poly1305_2block_sse2(dctx->h, src, dctx->r, blocks, sctx->u);
src += POLY1305_BLOCK_SIZE * 2 * blocks;
srclen -= POLY1305_BLOCK_SIZE * 2 * blocks;
}
if (srclen >= POLY1305_BLOCK_SIZE) {
poly1305_block_sse2(dctx->h, src, dctx->r, 1);
srclen -= POLY1305_BLOCK_SIZE;
}
return srclen;
}
static int poly1305_simd_update(struct shash_desc *desc,
const u8 *src, unsigned int srclen)
{
struct poly1305_desc_ctx *dctx = shash_desc_ctx(desc);
unsigned int bytes;
/* kernel_fpu_begin/end is costly, use fallback for small updates */
if (srclen <= 288 || !may_use_simd())
return crypto_poly1305_update(desc, src, srclen);
kernel_fpu_begin();
if (unlikely(dctx->buflen)) {
bytes = min(srclen, POLY1305_BLOCK_SIZE - dctx->buflen);
memcpy(dctx->buf + dctx->buflen, src, bytes);
src += bytes;
srclen -= bytes;
dctx->buflen += bytes;
if (dctx->buflen == POLY1305_BLOCK_SIZE) {
poly1305_simd_blocks(dctx, dctx->buf,
POLY1305_BLOCK_SIZE);
dctx->buflen = 0;
}
}
if (likely(srclen >= POLY1305_BLOCK_SIZE)) {
bytes = poly1305_simd_blocks(dctx, src, srclen);
src += srclen - bytes;
srclen = bytes;
}
kernel_fpu_end();
if (unlikely(srclen)) {
dctx->buflen = srclen;
memcpy(dctx->buf, src, srclen);
}
return 0;
}
static struct shash_alg alg = {
.digestsize = POLY1305_DIGEST_SIZE,
.init = poly1305_simd_init,
.update = poly1305_simd_update,
.final = crypto_poly1305_final,
.setkey = crypto_poly1305_setkey,
.descsize = sizeof(struct poly1305_simd_desc_ctx),
.base = {
.cra_name = "poly1305",
.cra_driver_name = "poly1305-simd",
.cra_priority = 300,
.cra_flags = CRYPTO_ALG_TYPE_SHASH,
.cra_alignmask = sizeof(u32) - 1,
.cra_blocksize = POLY1305_BLOCK_SIZE,
.cra_module = THIS_MODULE,
},
};
static int __init poly1305_simd_mod_init(void)
{
if (!cpu_has_xmm2)
return -ENODEV;
#ifdef CONFIG_AS_AVX2
poly1305_use_avx2 = cpu_has_avx && cpu_has_avx2 &&
cpu_has_xfeatures(XSTATE_SSE | XSTATE_YMM, NULL);
alg.descsize = sizeof(struct poly1305_simd_desc_ctx);
if (poly1305_use_avx2)
alg.descsize += 10 * sizeof(u32);
#endif
return crypto_register_shash(&alg);
}
static void __exit poly1305_simd_mod_exit(void)
{
crypto_unregister_shash(&alg);
}
module_init(poly1305_simd_mod_init);
module_exit(poly1305_simd_mod_exit);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Martin Willi <martin@strongswan.org>");
MODULE_DESCRIPTION("Poly1305 authenticator");
MODULE_ALIAS_CRYPTO("poly1305");
MODULE_ALIAS_CRYPTO("poly1305-simd");
@@ -48,6 +48,8 @@ config CRYPTO_AEAD
 config CRYPTO_AEAD2
 	tristate
 	select CRYPTO_ALGAPI2
+	select CRYPTO_NULL2
+	select CRYPTO_RNG2
 config CRYPTO_BLKCIPHER
 	tristate
@@ -150,12 +152,16 @@ config CRYPTO_GF128MUL
 config CRYPTO_NULL
 	tristate "Null algorithms"
-	select CRYPTO_ALGAPI
-	select CRYPTO_BLKCIPHER
-	select CRYPTO_HASH
+	select CRYPTO_NULL2
 	help
 	  These are 'Null' algorithms, used by IPsec, which do nothing.
+config CRYPTO_NULL2
+	tristate
+	select CRYPTO_ALGAPI2
+	select CRYPTO_BLKCIPHER2
+	select CRYPTO_HASH2
+
 config CRYPTO_PCRYPT
 	tristate "Parallel crypto engine"
 	depends on SMP
@@ -200,6 +206,7 @@ config CRYPTO_AUTHENC
 	select CRYPTO_BLKCIPHER
 	select CRYPTO_MANAGER
 	select CRYPTO_HASH
+	select CRYPTO_NULL
 	help
 	  Authenc: Combined mode wrapper for IPsec.
 	  This is required for IPSec.
@@ -470,6 +477,18 @@ config CRYPTO_POLY1305
 	  It is used for the ChaCha20-Poly1305 AEAD, specified in RFC7539 for use
 	  in IETF protocols. This is the portable C implementation of Poly1305.
+config CRYPTO_POLY1305_X86_64
+	tristate "Poly1305 authenticator algorithm (x86_64/SSE2/AVX2)"
+	depends on X86 && 64BIT
+	select CRYPTO_POLY1305
+	help
+	  Poly1305 authenticator algorithm, RFC7539.
+
+	  Poly1305 is an authenticator algorithm designed by Daniel J. Bernstein.
+	  It is used for the ChaCha20-Poly1305 AEAD, specified in RFC7539 for use
+	  in IETF protocols. This is the x86_64 assembler implementation using SIMD
+	  instructions.
+
 config CRYPTO_MD4
 	tristate "MD4 digest algorithm"
 	select CRYPTO_HASH
@@ -1213,6 +1232,21 @@ config CRYPTO_CHACHA20
 	  See also:
 	  <http://cr.yp.to/chacha/chacha-20080128.pdf>
+config CRYPTO_CHACHA20_X86_64
+	tristate "ChaCha20 cipher algorithm (x86_64/SSSE3/AVX2)"
+	depends on X86 && 64BIT
+	select CRYPTO_BLKCIPHER
+	select CRYPTO_CHACHA20
+	help
+	  ChaCha20 cipher algorithm, RFC7539.
+
+	  ChaCha20 is a 256-bit high-speed stream cipher designed by Daniel J.
+	  Bernstein and further specified in RFC7539 for use in IETF protocols.
+	  This is the x86_64 assembler implementation using SIMD instructions.
+
+	  See also:
+	  <http://cr.yp.to/chacha/chacha-20080128.pdf>
+
 config CRYPTO_SEED
 	tristate "SEED cipher algorithm"
 	select CRYPTO_ALGAPI
...
@@ -17,6 +17,7 @@ obj-$(CONFIG_CRYPTO_AEAD2) += aead.o
 crypto_blkcipher-y := ablkcipher.o
 crypto_blkcipher-y += blkcipher.o
+crypto_blkcipher-y += skcipher.o
 obj-$(CONFIG_CRYPTO_BLKCIPHER2) += crypto_blkcipher.o
 obj-$(CONFIG_CRYPTO_BLKCIPHER2) += chainiv.o
 obj-$(CONFIG_CRYPTO_BLKCIPHER2) += eseqiv.o
@@ -46,7 +47,7 @@ obj-$(CONFIG_CRYPTO_CMAC) += cmac.o
 obj-$(CONFIG_CRYPTO_HMAC) += hmac.o
 obj-$(CONFIG_CRYPTO_VMAC) += vmac.o
 obj-$(CONFIG_CRYPTO_XCBC) += xcbc.o
-obj-$(CONFIG_CRYPTO_NULL) += crypto_null.o
+obj-$(CONFIG_CRYPTO_NULL2) += crypto_null.o
 obj-$(CONFIG_CRYPTO_MD4) += md4.o
 obj-$(CONFIG_CRYPTO_MD5) += md5.o
 obj-$(CONFIG_CRYPTO_RMD128) += rmd128.o
...
@@ -67,12 +67,22 @@ static int crypto_check_alg(struct crypto_alg *alg)
 	return crypto_set_driver_name(alg);
 }
+static void crypto_free_instance(struct crypto_instance *inst)
+{
+	if (!inst->alg.cra_type->free) {
+		inst->tmpl->free(inst);
+		return;
+	}
+
+	inst->alg.cra_type->free(inst);
+}
+
 static void crypto_destroy_instance(struct crypto_alg *alg)
 {
 	struct crypto_instance *inst = (void *)alg;
 	struct crypto_template *tmpl = inst->tmpl;
-	tmpl->free(inst);
+	crypto_free_instance(inst);
 	crypto_tmpl_put(tmpl);
 }
@@ -481,7 +491,7 @@ void crypto_unregister_template(struct crypto_template *tmpl)
 	hlist_for_each_entry_safe(inst, n, list, list) {
 		BUG_ON(atomic_read(&inst->alg.cra_refcnt) != 1);
-		tmpl->free(inst);
+		crypto_free_instance(inst);
 	}
 	crypto_remove_final(&users);
 }
@@ -892,7 +902,7 @@ int crypto_enqueue_request(struct crypto_queue *queue,
 }
 EXPORT_SYMBOL_GPL(crypto_enqueue_request);
-void *__crypto_dequeue_request(struct crypto_queue *queue, unsigned int offset)
+struct crypto_async_request *crypto_dequeue_request(struct crypto_queue *queue)
 {
 	struct list_head *request;
@@ -907,14 +917,7 @@ void *__crypto_dequeue_request(struct crypto_queue *queue, unsigned int offset)
 	request = queue->list.next;
 	list_del(request);
-	return (char *)list_entry(request, struct crypto_async_request, list) -
-	       offset;
-}
-EXPORT_SYMBOL_GPL(__crypto_dequeue_request);
-
-struct crypto_async_request *crypto_dequeue_request(struct crypto_queue *queue)
-{
-	return __crypto_dequeue_request(queue, 0);
+	return list_entry(request, struct crypto_async_request, list);
 }
 EXPORT_SYMBOL_GPL(crypto_dequeue_request);
...
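With the offset variant gone, crypto_dequeue_request() is the single dequeue entry point. A hedged sketch of the queue pattern it serves, loosely modeled on existing drivers and not part of the diff (demo_queue/demo_pump are hypothetical):

/*
 * Hedged sketch: the crypto_queue producer/consumer pattern behind the
 * simplified crypto_dequeue_request() above.
 */
#include <crypto/algapi.h>

static struct crypto_queue demo_queue;

static void demo_queue_setup(void)
{
	crypto_init_queue(&demo_queue, 50 /* max queue length */);
}

static void demo_pump(void)
{
	struct crypto_async_request *req, *backlog;

	backlog = crypto_get_backlog(&demo_queue);
	req = crypto_dequeue_request(&demo_queue);
	if (!req)
		return;

	/* tell a backlogged submitter its request is now in flight */
	if (backlog)
		backlog->complete(backlog, -EINPROGRESS);

	/* ... process req, then call req->complete(req, err) ... */
}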
@@ -248,13 +248,11 @@ static int cryptomgr_schedule_test(struct crypto_alg *alg)
 	type = alg->cra_flags;
 	/* This piece of crap needs to disappear into per-type test hooks. */
-	if ((!((type ^ CRYPTO_ALG_TYPE_BLKCIPHER) &
-	       CRYPTO_ALG_TYPE_BLKCIPHER_MASK) && !(type & CRYPTO_ALG_GENIV) &&
-	      ((alg->cra_flags & CRYPTO_ALG_TYPE_MASK) ==
-	       CRYPTO_ALG_TYPE_BLKCIPHER ? alg->cra_blkcipher.ivsize :
-					   alg->cra_ablkcipher.ivsize)) ||
-	    (!((type ^ CRYPTO_ALG_TYPE_AEAD) & CRYPTO_ALG_TYPE_MASK) &&
-	     alg->cra_type == &crypto_nivaead_type && alg->cra_aead.ivsize))
+	if (!((type ^ CRYPTO_ALG_TYPE_BLKCIPHER) &
+	      CRYPTO_ALG_TYPE_BLKCIPHER_MASK) && !(type & CRYPTO_ALG_GENIV) &&
+	    ((alg->cra_flags & CRYPTO_ALG_TYPE_MASK) ==
+	     CRYPTO_ALG_TYPE_BLKCIPHER ? alg->cra_blkcipher.ivsize :
+					 alg->cra_ablkcipher.ivsize))
 		type |= CRYPTO_ALG_TESTED;
 	param->type = type;
...
@@ -90,6 +90,7 @@ static void aead_put_sgl(struct sock *sk)
 		put_page(sg_page(sg + i));
 		sg_assign_page(sg + i, NULL);
 	}
+	sg_init_table(sg, ALG_MAX_PAGES);
 	sgl->cur = 0;
 	ctx->used = 0;
 	ctx->more = 0;
@@ -514,8 +515,7 @@ static struct proto_ops algif_aead_ops = {
 static void *aead_bind(const char *name, u32 type, u32 mask)
 {
-	return crypto_alloc_aead(name, type | CRYPTO_ALG_AEAD_NEW,
-				 mask | CRYPTO_ALG_AEAD_NEW);
+	return crypto_alloc_aead(name, type, mask);
 }
 static void aead_release(void *private)
...
@@ -13,14 +13,7 @@
 #include <linux/crypto.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
-
-#define CHACHA20_NONCE_SIZE 16
-#define CHACHA20_KEY_SIZE 32
-#define CHACHA20_BLOCK_SIZE 64
-
-struct chacha20_ctx {
-	u32 key[8];
-};
+#include <crypto/chacha20.h>
 
 static inline u32 rotl32(u32 v, u8 n)
 {
@@ -108,7 +101,7 @@ static void chacha20_docrypt(u32 *state, u8 *dst, const u8 *src,
 	}
 }
 
-static void chacha20_init(u32 *state, struct chacha20_ctx *ctx, u8 *iv)
+void crypto_chacha20_init(u32 *state, struct chacha20_ctx *ctx, u8 *iv)
 {
 	static const char constant[16] = "expand 32-byte k";
@@ -129,8 +122,9 @@ static void chacha20_init(u32 *state, struct chacha20_ctx *ctx, u8 *iv)
 	state[14] = le32_to_cpuvp(iv + 8);
 	state[15] = le32_to_cpuvp(iv + 12);
 }
+EXPORT_SYMBOL_GPL(crypto_chacha20_init);
 
-static int chacha20_setkey(struct crypto_tfm *tfm, const u8 *key,
+int crypto_chacha20_setkey(struct crypto_tfm *tfm, const u8 *key,
 			   unsigned int keysize)
 {
 	struct chacha20_ctx *ctx = crypto_tfm_ctx(tfm);
@@ -144,8 +138,9 @@ static int chacha20_setkey(struct crypto_tfm *tfm, const u8 *key,
 	return 0;
 }
+EXPORT_SYMBOL_GPL(crypto_chacha20_setkey);
 
-static int chacha20_crypt(struct blkcipher_desc *desc, struct scatterlist *dst,
-			  struct scatterlist *src, unsigned int nbytes)
+int crypto_chacha20_crypt(struct blkcipher_desc *desc, struct scatterlist *dst,
+			  struct scatterlist *src, unsigned int nbytes)
 {
 	struct blkcipher_walk walk;
@@ -155,7 +150,7 @@ static int chacha20_crypt(struct blkcipher_desc *desc, struct scatterlist *dst,
 	blkcipher_walk_init(&walk, dst, src, nbytes);
 	err = blkcipher_walk_virt_block(desc, &walk, CHACHA20_BLOCK_SIZE);
 
-	chacha20_init(state, crypto_blkcipher_ctx(desc->tfm), walk.iv);
+	crypto_chacha20_init(state, crypto_blkcipher_ctx(desc->tfm), walk.iv);
 
 	while (walk.nbytes >= CHACHA20_BLOCK_SIZE) {
 		chacha20_docrypt(state, walk.dst.virt.addr, walk.src.virt.addr,
@@ -172,6 +167,7 @@ static int chacha20_crypt(struct blkcipher_desc *desc, struct scatterlist *dst,
 	return err;
 }
+EXPORT_SYMBOL_GPL(crypto_chacha20_crypt);
 
 static struct crypto_alg alg = {
 	.cra_name		= "chacha20",
@@ -187,11 +183,11 @@ static struct crypto_alg alg = {
 		.blkcipher = {
 			.min_keysize	= CHACHA20_KEY_SIZE,
 			.max_keysize	= CHACHA20_KEY_SIZE,
-			.ivsize		= CHACHA20_NONCE_SIZE,
+			.ivsize		= CHACHA20_IV_SIZE,
 			.geniv		= "seqiv",
-			.setkey		= chacha20_setkey,
-			.encrypt	= chacha20_crypt,
-			.decrypt	= chacha20_crypt,
+			.setkey		= crypto_chacha20_setkey,
+			.encrypt	= crypto_chacha20_crypt,
+			.decrypt	= crypto_chacha20_crypt,
 		},
 	},
 };
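Renaming the helpers to crypto_chacha20_* and exporting them lets arch-specific drivers share the generic scalar path. Roughly the shape of an accelerated wrapper built on them (my_chacha20_simd is illustrative, not the actual x86-64 glue):

#include <linux/crypto.h>
#include <crypto/chacha20.h>
#include <asm/simd.h>

static int my_chacha20_simd(struct blkcipher_desc *desc,
			    struct scatterlist *dst,
			    struct scatterlist *src, unsigned int nbytes)
{
	/* Short requests, or contexts where SIMD is unusable, fall back
	 * to the exported generic implementation. */
	if (nbytes <= CHACHA20_BLOCK_SIZE || !may_use_simd())
		return crypto_chacha20_crypt(desc, dst, src, nbytes);

	/* ... bulk SIMD processing would go here ... */
	return 0;
}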
...
@@ -176,10 +176,9 @@ static inline void cryptd_check_internal(struct rtattr **tb, u32 *type,
 	algt = crypto_get_attr_type(tb);
 	if (IS_ERR(algt))
 		return;
-	if ((algt->type & CRYPTO_ALG_INTERNAL))
-		*type |= CRYPTO_ALG_INTERNAL;
-	if ((algt->mask & CRYPTO_ALG_INTERNAL))
-		*mask |= CRYPTO_ALG_INTERNAL;
+
+	*type |= algt->type & CRYPTO_ALG_INTERNAL;
+	*mask |= algt->mask & CRYPTO_ALG_INTERNAL;
 }
 
 static int cryptd_blkcipher_setkey(struct crypto_ablkcipher *parent,
@@ -688,16 +687,18 @@ static void cryptd_aead_crypt(struct aead_request *req,
 			int (*crypt)(struct aead_request *req))
 {
 	struct cryptd_aead_request_ctx *rctx;
+	crypto_completion_t compl;
 
 	rctx = aead_request_ctx(req);
+	compl = rctx->complete;
 
 	if (unlikely(err == -EINPROGRESS))
 		goto out;
 	aead_request_set_tfm(req, child);
 	err = crypt( req );
-	req->base.complete = rctx->complete;
 out:
 	local_bh_disable();
-	rctx->complete(&req->base, err);
+	compl(&req->base, err);
 	local_bh_enable();
 }
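The point of latching compl before calling crypt() is ordering: once the child runs, it may reuse the very request context that held rctx->complete (see the reqsize change below), so the pointer must be saved first. A userspace reduction of that hazard, with hypothetical names:

#include <stdio.h>

struct req_ctx { void (*complete)(int); };

static struct req_ctx shared_ctx;	/* region the "child" reuses */

static void done(int err) { printf("done: %d\n", err); }

static int child_crypt(void)
{
	shared_ctx.complete = NULL;	/* child clobbers the shared context */
	return 0;
}

int main(void)
{
	void (*compl)(int);
	int err;

	shared_ctx.complete = done;
	compl = shared_ctx.complete;	/* latch the callback first... */
	err = child_crypt();		/* ...before the context is reused */
	compl(err);			/* still safe to call */
	return 0;
}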
@@ -708,7 +709,7 @@ static void cryptd_aead_encrypt(struct crypto_async_request *areq, int err)
 	struct aead_request *req;
 
 	req = container_of(areq, struct aead_request, base);
-	cryptd_aead_crypt(req, child, err, crypto_aead_crt(child)->encrypt);
+	cryptd_aead_crypt(req, child, err, crypto_aead_alg(child)->encrypt);
 }
 
 static void cryptd_aead_decrypt(struct crypto_async_request *areq, int err)
@@ -718,7 +719,7 @@ static void cryptd_aead_decrypt(struct crypto_async_request *areq, int err)
 	struct aead_request *req;
 
 	req = container_of(areq, struct aead_request, base);
-	cryptd_aead_crypt(req, child, err, crypto_aead_crt(child)->decrypt);
+	cryptd_aead_crypt(req, child, err, crypto_aead_alg(child)->decrypt);
 }
 
 static int cryptd_aead_enqueue(struct aead_request *req,
@@ -756,7 +757,9 @@ static int cryptd_aead_init_tfm(struct crypto_aead *tfm)
 		return PTR_ERR(cipher);
 
 	ctx->child = cipher;
-	crypto_aead_set_reqsize(tfm, sizeof(struct cryptd_aead_request_ctx));
+	crypto_aead_set_reqsize(
+		tfm, max((unsigned)sizeof(struct cryptd_aead_request_ctx),
+			 crypto_aead_reqsize(cipher)));
 	return 0;
 }
@@ -775,7 +778,7 @@ static int cryptd_create_aead(struct crypto_template *tmpl,
 	struct aead_alg *alg;
 	const char *name;
 	u32 type = 0;
-	u32 mask = 0;
+	u32 mask = CRYPTO_ALG_ASYNC;
 	int err;
 
 	cryptd_check_internal(tb, &type, &mask);
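The reqsize change above follows from calling the child through the same aead_request: the parent's per-request context region must be able to hold whatever the child expects, hence the max(). Illustratively:

#include <assert.h>

/* Illustrative numbers only: if cryptd's own per-request context
 * needs 48 bytes while the wrapped cipher declares 112, the shared
 * request must reserve the larger of the two. */
static unsigned int shared_reqsize(unsigned int parent, unsigned int child)
{
	return parent > child ? parent : child;	/* the max() above */
}

int main(void)
{
	assert(shared_reqsize(48, 112) == 112);	/* child dominates */
	assert(shared_reqsize(48, 16) == 48);	/* parent dominates */
	return 0;
}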
...
@@ -25,7 +25,6 @@
 #include <net/netlink.h>
 #include <linux/security.h>
 #include <net/net_namespace.h>
-#include <crypto/internal/aead.h>
 #include <crypto/internal/skcipher.h>
 #include <crypto/internal/rng.h>
 #include <crypto/akcipher.h>
@@ -385,34 +384,6 @@ static struct crypto_alg *crypto_user_skcipher_alg(const char *name, u32 type,
 	return ERR_PTR(err);
 }
 
-static struct crypto_alg *crypto_user_aead_alg(const char *name, u32 type,
-					       u32 mask)
-{
-	int err;
-	struct crypto_alg *alg;
-
-	type &= ~(CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_GENIV);
-	type |= CRYPTO_ALG_TYPE_AEAD;
-
-	mask &= ~(CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_GENIV);
-	mask |= CRYPTO_ALG_TYPE_MASK;
-
-	for (;;) {
-		alg = crypto_lookup_aead(name, type, mask);
-		if (!IS_ERR(alg))
-			return alg;
-
-		err = PTR_ERR(alg);
-		if (err != -EAGAIN)
-			break;
-		if (signal_pending(current)) {
-			err = -EINTR;
-			break;
-		}
-	}
-
-	return ERR_PTR(err);
-}
-
 static int crypto_add_alg(struct sk_buff *skb, struct nlmsghdr *nlh,
 			  struct nlattr **attrs)
 {
@@ -446,9 +417,6 @@ static int crypto_add_alg(struct sk_buff *skb, struct nlmsghdr *nlh,
 	name = p->cru_name;
 
 	switch (p->cru_type & p->cru_mask & CRYPTO_ALG_TYPE_MASK) {
-	case CRYPTO_ALG_TYPE_AEAD:
-		alg = crypto_user_aead_alg(name, p->cru_type, p->cru_mask);
-		break;
 	case CRYPTO_ALG_TYPE_GIVCIPHER:
 	case CRYPTO_ALG_TYPE_BLKCIPHER:
 	case CRYPTO_ALG_TYPE_ABLKCIPHER:
...
@@ -79,7 +79,7 @@ int jent_fips_enabled(void)
 
 void jent_panic(char *s)
 {
-	panic(s);
+	panic("%s", s);
 }
 
 void jent_memcpy(void *dest, const void *src, unsigned int n)
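The jent_panic() change is a standard format-string fix: a message must never be passed as the format argument. A userspace analogue, which -Wformat-security would flag in its bad form:

#include <stdio.h>

int main(void)
{
	/* a diagnostic that happens to contain a conversion specifier */
	const char *msg = "health test failed at 99%s";

	/* printf(msg);     BAD: "%s" would read a nonexistent argument */
	printf("%s\n", msg);	/* GOOD: the text is printed verbatim */
	return 0;
}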
...
@@ -274,11 +274,16 @@ static int pcrypt_create_aead(struct crypto_template *tmpl, struct rtattr **tb,
 			      u32 type, u32 mask)
 {
 	struct pcrypt_instance_ctx *ctx;
+	struct crypto_attr_type *algt;
 	struct aead_instance *inst;
 	struct aead_alg *alg;
 	const char *name;
 	int err;
 
+	algt = crypto_get_attr_type(tb);
+	if (IS_ERR(algt))
+		return PTR_ERR(algt);
+
 	name = crypto_attr_alg_name(tb[1]);
 	if (IS_ERR(name))
 		return PTR_ERR(name);
@@ -299,6 +304,8 @@ static int pcrypt_create_aead(struct crypto_template *tmpl, struct rtattr **tb,
 	if (err)
 		goto out_drop_aead;
 
+	inst->alg.base.cra_flags = CRYPTO_ALG_ASYNC;
+
 	inst->alg.ivsize = crypto_aead_alg_ivsize(alg);
 	inst->alg.maxauthsize = crypto_aead_alg_maxauthsize(alg);
...
@@ -267,12 +267,36 @@ static int rsa_verify(struct akcipher_request *req)
 	return ret;
 }
 
+static int rsa_check_key_length(unsigned int len)
+{
+	switch (len) {
+	case 512:
+	case 1024:
+	case 1536:
+	case 2048:
+	case 3072:
+	case 4096:
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
 static int rsa_setkey(struct crypto_akcipher *tfm, const void *key,
 		      unsigned int keylen)
 {
 	struct rsa_key *pkey = akcipher_tfm_ctx(tfm);
+	int ret;
 
-	return rsa_parse_key(pkey, key, keylen);
+	ret = rsa_parse_key(pkey, key, keylen);
+	if (ret)
+		return ret;
+
+	if (rsa_check_key_length(mpi_get_size(pkey->n) << 3)) {
+		rsa_free_key(pkey);
+		ret = -EINVAL;
+	}
+
+	return ret;
 }
 
 static void rsa_exit_tfm(struct crypto_akcipher *tfm)
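Note the units in rsa_setkey() above: mpi_get_size() reports the modulus size in bytes, so the << 3 converts to bits before the whitelist check. A quick standalone check of that logic, with -22 standing in for -EINVAL:

#include <assert.h>

static int rsa_check_key_length(unsigned int len)
{
	switch (len) {
	case 512: case 1024: case 1536:
	case 2048: case 3072: case 4096:
		return 0;
	}
	return -22;
}

int main(void)
{
	assert(rsa_check_key_length(256 << 3) == 0);	/* 256 bytes = 2048 bits: ok */
	assert(rsa_check_key_length(384 << 3) == 0);	/* 384 bytes = 3072 bits: ok */
	assert(rsa_check_key_length(100 << 3) != 0);	/* odd size: rejected */
	return 0;
}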
...
@@ -28,7 +28,7 @@ int rsa_get_n(void *context, size_t hdrlen, unsigned char tag,
 		return -ENOMEM;
 
 	/* In FIPS mode only allow key size 2K & 3K */
-	if (fips_enabled && (mpi_get_size(key->n) != 256 ||
+	if (fips_enabled && (mpi_get_size(key->n) != 256 &&
 			     mpi_get_size(key->n) != 384)) {
 		pr_err("RSA: key size not allowed in FIPS mode\n");
 		mpi_free(key->n);
@@ -62,7 +62,7 @@ int rsa_get_d(void *context, size_t hdrlen, unsigned char tag,
 		return -ENOMEM;
 
 	/* In FIPS mode only allow key size 2K & 3K */
-	if (fips_enabled && (mpi_get_size(key->d) != 256 ||
+	if (fips_enabled && (mpi_get_size(key->d) != 256 &&
 			     mpi_get_size(key->d) != 384)) {
 		pr_err("RSA: key size not allowed in FIPS mode\n");
 		mpi_free(key->d);
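The || to && change above fixes an always-true predicate: size != 256 || size != 384 holds for every size, since no value equals both, so FIPS mode previously rejected all keys. A standalone truth check:

#include <assert.h>

static int rejected_old(unsigned int n) { return n != 256 || n != 384; }
static int rejected_new(unsigned int n) { return n != 256 && n != 384; }

int main(void)
{
	assert(rejected_old(256) && rejected_old(384));	/* "||": everything rejected */
	assert(!rejected_new(256));	/* 2048-bit modulus accepted */
	assert(!rejected_new(384));	/* 3072-bit modulus accepted */
	assert(rejected_new(128));	/* 1024-bit still rejected */
	return 0;
}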
...
@@ -381,6 +381,9 @@ static void __init imx6q_clocks_init(struct device_node *ccm_node)
 	clk[IMX6QDL_CLK_ASRC]         = imx_clk_gate2_shared("asrc", "asrc_podf", base + 0x68, 6, &share_count_asrc);
 	clk[IMX6QDL_CLK_ASRC_IPG]     = imx_clk_gate2_shared("asrc_ipg", "ahb", base + 0x68, 6, &share_count_asrc);
 	clk[IMX6QDL_CLK_ASRC_MEM]     = imx_clk_gate2_shared("asrc_mem", "ahb", base + 0x68, 6, &share_count_asrc);
+	clk[IMX6QDL_CLK_CAAM_MEM]     = imx_clk_gate2("caam_mem", "ahb", base + 0x68, 8);
+	clk[IMX6QDL_CLK_CAAM_ACLK]    = imx_clk_gate2("caam_aclk", "ahb", base + 0x68, 10);
+	clk[IMX6QDL_CLK_CAAM_IPG]     = imx_clk_gate2("caam_ipg", "ipg", base + 0x68, 12);
 	clk[IMX6QDL_CLK_CAN1_IPG]     = imx_clk_gate2("can1_ipg", "ipg", base + 0x68, 14);
 	clk[IMX6QDL_CLK_CAN1_SERIAL]  = imx_clk_gate2("can1_serial", "can_root", base + 0x68, 16);
 	clk[IMX6QDL_CLK_CAN2_IPG]     = imx_clk_gate2("can2_ipg", "ipg", base + 0x68, 18);
...
@@ -480,4 +480,21 @@ config CRYPTO_DEV_IMGTEC_HASH
 	  hardware hash accelerator. Supporting MD5/SHA1/SHA224/SHA256
 	  hashing algorithms.
 
+config CRYPTO_DEV_SUN4I_SS
+	tristate "Support for Allwinner Security System cryptographic accelerator"
+	depends on ARCH_SUNXI
+	select CRYPTO_MD5
+	select CRYPTO_SHA1
+	select CRYPTO_AES
+	select CRYPTO_DES
+	select CRYPTO_BLKCIPHER
+	help
+	  Some Allwinner SoCs have a crypto accelerator named
+	  Security System. Select this if you want to use it.
+	  The Security System handles AES/DES/3DES ciphers in CBC mode
+	  and the SHA1 and MD5 hash algorithms.
+
+	  To compile this driver as a module, choose M here: the module
+	  will be called sun4i-ss.
+
 endif # CRYPTO_HW
@@ -28,3 +28,4 @@ obj-$(CONFIG_CRYPTO_DEV_UX500) += ux500/
 obj-$(CONFIG_CRYPTO_DEV_QAT) += qat/
 obj-$(CONFIG_CRYPTO_DEV_QCE) += qce/
 obj-$(CONFIG_CRYPTO_DEV_VMX) += vmx/
+obj-$(CONFIG_CRYPTO_DEV_SUN4I_SS) += sunxi-ss/