Commit 44d21c3f authored by Linus Torvalds

Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto update from Herbert Xu:
 "Here is the crypto update for 4.2:

  API:

   - Convert RNG interface to new style.

   - New AEAD interface with one SG list for AD and plain/cipher text.
     All external AEAD users have been converted.

   - New asymmetric key interface (akcipher).

  Algorithms:

   - Chacha20, Poly1305 and RFC7539 support.

   - New RSA implementation.

   - Jitter RNG.

   - DRBG is now seeded with both /dev/random and Jitter RNG.  If kernel
     pool isn't ready then DRBG will be reseeded when it is.

   - DRBG is now the default crypto API RNG, replacing krng.

   - 842 compression (previously part of powerpc nx driver).

  Drivers:

   - Accelerated SHA-512 for arm64.

   - New Marvell CESA driver that supports DMA and more algorithms.

   - Updated powerpc nx 842 support.

   - Added support for SEC1 hardware to talitos"

* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (292 commits)
  crypto: marvell/cesa - remove COMPILE_TEST dependency
  crypto: algif_aead - Temporarily disable all AEAD algorithms
  crypto: af_alg - Forbid the use internal algorithms
  crypto: echainiv - Only hold RNG during initialisation
  crypto: seqiv - Add compatibility support without RNG
  crypto: eseqiv - Offer normal cipher functionality without RNG
  crypto: chainiv - Offer normal cipher functionality without RNG
  crypto: user - Add CRYPTO_MSG_DELRNG
  crypto: user - Move cryptouser.h to uapi
  crypto: rng - Do not free default RNG when it becomes unused
  crypto: skcipher - Allow givencrypt to be NULL
  crypto: sahara - propagate the error on clk_disable_unprepare() failure
  crypto: rsa - fix invalid select for AKCIPHER
  crypto: picoxcell - Update to the current clk API
  crypto: nx - Check for bogus firmware properties
  crypto: marvell/cesa - add DT bindings documentation
  crypto: marvell/cesa - add support for Kirkwood and Dove SoCs
  crypto: marvell/cesa - add support for Orion SoCs
  crypto: marvell/cesa - add allhwsupport module parameter
  crypto: marvell/cesa - add support for all armada SoCs
  ...
parents efdfce2b fe55dfdc
@@ -119,7 +119,7 @@
   <para>
    Note: The terms "transformation" and cipher algorithm are used
-   interchangably.
+   interchangeably.
   </para>
  </sect1>
@@ -536,8 +536,8 @@
   <para>
    For other use cases of AEAD ciphers, the ASCII art applies as
-   well, but the caller may not use the GIVCIPHER interface. In
-   this case, the caller must generate the IV.
+   well, but the caller may not use the AEAD cipher with a separate
+   IV generator. In this case, the caller must generate the IV.
   </para>

   <para>
@@ -584,8 +584,8 @@ kernel crypto API  |  IPSEC Layer
                     |
 +-----------+       |
 |           |        (1)
-| givcipher | <-----------------------------------  esp_output
-| (seqiv)   | ---+
+|    aead   | <-----------------------------------  esp_output
+|  (seqniv) | ---+
 +-----------+    |
                  | (2)
 +-----------+    |
@@ -620,8 +620,8 @@ kernel crypto API  |  IPSEC Layer
  <orderedlist>
   <listitem>
    <para>
-    esp_output() invokes crypto_aead_givencrypt() to trigger an encryption
-    operation of the GIVCIPHER implementation.
+    esp_output() invokes crypto_aead_encrypt() to trigger an encryption
+    operation of the AEAD cipher with IV generator.
    </para>

   <para>
@@ -1563,7 +1563,7 @@ struct sockaddr_alg sa = {
  <sect1><title>Zero-Copy Interface</title>
   <para>
-   In addition to the send/write/read/recv system call familty, the AF_ALG
+   In addition to the send/write/read/recv system call family, the AF_ALG
    interface can be accessed with the zero-copy interface of splice/vmsplice.
    As the name indicates, the kernel tries to avoid a copy operation into
    kernel space.
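For illustration only (this snippet is not part of the patch), a minimal user-space sketch of that zero-copy path, assuming a "hash"/"sha1" transform and omitting all error handling:

/* hypothetical example: hash a page-aligned buffer via vmsplice/splice */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <linux/if_alg.h>

int main(void)
{
	struct sockaddr_alg sa = {
		.salg_family = AF_ALG,
		.salg_type   = "hash",
		.salg_name   = "sha1",
	};
	/* splice/vmsplice want page-aligned source data */
	static char buf[4096] __attribute__((aligned(4096))) = "hello world";
	struct iovec iov = { .iov_base = buf, .iov_len = strlen(buf) };
	unsigned char out[20];
	int tfmfd, opfd, pipes[2], i;

	tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
	bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa));
	opfd = accept(tfmfd, NULL, 0);
	pipe(pipes);

	/* gift the user pages to the kernel instead of copying them */
	vmsplice(pipes[1], &iov, 1, SPLICE_F_GIFT);
	splice(pipes[0], NULL, opfd, NULL, iov.iov_len, 0);

	read(opfd, out, sizeof(out));		/* fetch the SHA-1 digest */
	for (i = 0; i < (int)sizeof(out); i++)
		printf("%02x", out[i]);
	printf("\n");

	close(opfd);
	close(tfmfd);
	close(pipes[0]);
	close(pipes[1]);
	return 0;
}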
@@ -1669,9 +1669,19 @@ read(opfd, out, outlen);
  </chapter>

  <chapter id="API"><title>Programming Interface</title>
+   <para>
+    Please note that the kernel crypto API contains the AEAD givcrypt
+    API (crypto_aead_giv* and aead_givcrypt_* function calls in
+    include/crypto/aead.h). This API is obsolete and will be removed
+    in the future. To obtain the functionality of an AEAD cipher with
+    internal IV generation, use the IV generator as a regular cipher.
+    For example, rfc4106(gcm(aes)) is the AEAD cipher with external
+    IV generation and seqniv(rfc4106(gcm(aes))) implies that the kernel
+    crypto API generates the IV. Different IV generators are available.
+   </para>
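For illustration only (not part of the patch), a minimal kernel-side sketch of allocating the two forms named in the paragraph above; request setup and use of the handles are omitted:

/* hypothetical example: the same AEAD with and without an IV generator */
#include <crypto/aead.h>
#include <linux/err.h>

static int alloc_aead_example(void)
{
	struct crypto_aead *tfm;

	/* caller supplies the IV for every request */
	tfm = crypto_alloc_aead("rfc4106(gcm(aes))", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);
	crypto_free_aead(tfm);

	/* the seqniv template lets the crypto API generate the IV instead */
	tfm = crypto_alloc_aead("seqniv(rfc4106(gcm(aes)))", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);
	crypto_free_aead(tfm);

	return 0;
}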
  <sect1><title>Block Cipher Context Data Structures</title>
 !Pinclude/linux/crypto.h Block Cipher Context Data Structures
-!Finclude/linux/crypto.h aead_request
+!Finclude/crypto/aead.h aead_request
  </sect1>
  <sect1><title>Block Cipher Algorithm Definitions</title>
 !Pinclude/linux/crypto.h Block Cipher Algorithm Definitions
@@ -1680,7 +1690,7 @@ read(opfd, out, outlen);
 !Finclude/linux/crypto.h aead_alg
 !Finclude/linux/crypto.h blkcipher_alg
 !Finclude/linux/crypto.h cipher_alg
-!Finclude/linux/crypto.h rng_alg
+!Finclude/crypto/rng.h rng_alg
  </sect1>
  <sect1><title>Asynchronous Block Cipher API</title>
 !Pinclude/linux/crypto.h Asynchronous Block Cipher API
@@ -1704,26 +1714,27 @@ read(opfd, out, outlen);
 !Finclude/linux/crypto.h ablkcipher_request_set_crypt
  </sect1>
  <sect1><title>Authenticated Encryption With Associated Data (AEAD) Cipher API</title>
-!Pinclude/linux/crypto.h Authenticated Encryption With Associated Data (AEAD) Cipher API
-!Finclude/linux/crypto.h crypto_alloc_aead
-!Finclude/linux/crypto.h crypto_free_aead
-!Finclude/linux/crypto.h crypto_aead_ivsize
-!Finclude/linux/crypto.h crypto_aead_authsize
-!Finclude/linux/crypto.h crypto_aead_blocksize
-!Finclude/linux/crypto.h crypto_aead_setkey
-!Finclude/linux/crypto.h crypto_aead_setauthsize
-!Finclude/linux/crypto.h crypto_aead_encrypt
-!Finclude/linux/crypto.h crypto_aead_decrypt
+!Pinclude/crypto/aead.h Authenticated Encryption With Associated Data (AEAD) Cipher API
+!Finclude/crypto/aead.h crypto_alloc_aead
+!Finclude/crypto/aead.h crypto_free_aead
+!Finclude/crypto/aead.h crypto_aead_ivsize
+!Finclude/crypto/aead.h crypto_aead_authsize
+!Finclude/crypto/aead.h crypto_aead_blocksize
+!Finclude/crypto/aead.h crypto_aead_setkey
+!Finclude/crypto/aead.h crypto_aead_setauthsize
+!Finclude/crypto/aead.h crypto_aead_encrypt
+!Finclude/crypto/aead.h crypto_aead_decrypt
  </sect1>
  <sect1><title>Asynchronous AEAD Request Handle</title>
-!Pinclude/linux/crypto.h Asynchronous AEAD Request Handle
-!Finclude/linux/crypto.h crypto_aead_reqsize
-!Finclude/linux/crypto.h aead_request_set_tfm
-!Finclude/linux/crypto.h aead_request_alloc
-!Finclude/linux/crypto.h aead_request_free
-!Finclude/linux/crypto.h aead_request_set_callback
-!Finclude/linux/crypto.h aead_request_set_crypt
-!Finclude/linux/crypto.h aead_request_set_assoc
+!Pinclude/crypto/aead.h Asynchronous AEAD Request Handle
+!Finclude/crypto/aead.h crypto_aead_reqsize
+!Finclude/crypto/aead.h aead_request_set_tfm
+!Finclude/crypto/aead.h aead_request_alloc
+!Finclude/crypto/aead.h aead_request_free
+!Finclude/crypto/aead.h aead_request_set_callback
+!Finclude/crypto/aead.h aead_request_set_crypt
+!Finclude/crypto/aead.h aead_request_set_assoc
+!Finclude/crypto/aead.h aead_request_set_ad
  </sect1>
  <sect1><title>Synchronous Block Cipher API</title>
 !Pinclude/linux/crypto.h Synchronous Block Cipher API
......
-Freescale SoC SEC Security Engines versions 2.x-3.x
+Freescale SoC SEC Security Engines versions 1.x-2.x-3.x

 Required properties:

 - compatible : Should contain entries for this and backward compatible
-  SEC versions, high to low, e.g., "fsl,sec2.1", "fsl,sec2.0"
+  SEC versions, high to low, e.g., "fsl,sec2.1", "fsl,sec2.0" (SEC2/3)
+  e.g., "fsl,sec1.2", "fsl,sec1.0" (SEC1)
+  warning: SEC1 and SEC2 are mutually exclusive
 - reg : Offset and length of the register set for the device
 - interrupts : the SEC's interrupt number
 - fsl,num-channels : An integer representing the number of channels
......
Marvell Cryptographic Engines And Security Accelerator
Required properties:
- compatible: should be one of the following string
"marvell,orion-crypto"
"marvell,kirkwood-crypto"
"marvell,dove-crypto"
"marvell,armada-370-crypto"
"marvell,armada-xp-crypto"
"marvell,armada-375-crypto"
"marvell,armada-38x-crypto"
- reg: base physical address of the engine and length of memory mapped
region. Can also contain an entry for the SRAM attached to the CESA,
but this representation is deprecated and marvell,crypto-srams should
be used instead
- reg-names: "regs". Can contain an "sram" entry, but this representation
is deprecated and marvell,crypto-srams should be used instead
- interrupts: interrupt number
- clocks: reference to the crypto engines clocks. This property is not
required for orion and kirkwood platforms
- clock-names: "cesaX" and "cesazX", X should be replaced by the crypto engine
id.
This property is not required for the orion and kirkwoord
platforms.
"cesazX" clocks are not required on armada-370 platforms
- marvell,crypto-srams: phandle to crypto SRAM definitions
Optional properties:
- marvell,crypto-sram-size: SRAM size reserved for crypto operations, if not
specified the whole SRAM is used (2KB)
Examples:
crypto@90000 {
compatible = "marvell,armada-xp-crypto";
reg = <0x90000 0x10000>;
reg-names = "regs";
interrupts = <48>, <49>;
clocks = <&gateclk 23>, <&gateclk 23>;
clock-names = "cesa0", "cesa1";
marvell,crypto-srams = <&crypto_sram0>, <&crypto_sram1>;
marvell,crypto-sram-size = <0x600>;
status = "okay";
};
 Marvell Cryptographic Engines And Security Accelerator

 Required properties:
-- compatible : should be "marvell,orion-crypto"
-- reg : base physical address of the engine and length of memory mapped
-        region, followed by base physical address of sram and its memory
-        length
-- reg-names : "regs" , "sram";
-- interrupts : interrupt number
+- compatible: should be one of the following string
+	      "marvell,orion-crypto"
+	      "marvell,kirkwood-crypto"
+	      "marvell,dove-crypto"
+- reg: base physical address of the engine and length of memory mapped
+       region. Can also contain an entry for the SRAM attached to the CESA,
+       but this representation is deprecated and marvell,crypto-srams should
+       be used instead
+- reg-names: "regs". Can contain an "sram" entry, but this representation
+	     is deprecated and marvell,crypto-srams should be used instead
+- interrupts: interrupt number
+- clocks: reference to the crypto engines clocks. This property is only
+	  required for Dove platforms
+- marvell,crypto-srams: phandle to crypto SRAM definitions
+
+Optional properties:
+- marvell,crypto-sram-size: SRAM size reserved for crypto operations, if not
+			    specified the whole SRAM is used (2KB)

 Examples:

 	crypto@30000 {
 		compatible = "marvell,orion-crypto";
-		reg = <0x30000 0x10000>,
-		      <0x4000000 0x800>;
-		reg-names = "regs" , "sram";
+		reg = <0x30000 0x10000>;
+		reg-names = "regs";
 		interrupts = <22>;
+		marvell,crypto-srams = <&crypto_sram>;
+		marvell,crypto-sram-size = <0x600>;
 		status = "okay";
 	};
@@ -4879,13 +4879,23 @@ M:	Marcelo Henrique Cerri <mhcerri@linux.vnet.ibm.com>
 M:	Fionnuala Gunter <fin@linux.vnet.ibm.com>
 L:	linux-crypto@vger.kernel.org
 S:	Supported
-F:	drivers/crypto/nx/
+F:	drivers/crypto/nx/Makefile
+F:	drivers/crypto/nx/Kconfig
+F:	drivers/crypto/nx/nx-aes*
+F:	drivers/crypto/nx/nx-sha*
+F:	drivers/crypto/nx/nx.*
+F:	drivers/crypto/nx/nx_csbcpb.h
+F:	drivers/crypto/nx/nx_debugfs.h

 IBM Power 842 compression accelerator
 M:	Dan Streetman <ddstreet@us.ibm.com>
 S:	Supported
-F:	drivers/crypto/nx/nx-842.c
-F:	include/linux/nx842.h
+F:	drivers/crypto/nx/Makefile
+F:	drivers/crypto/nx/Kconfig
+F:	drivers/crypto/nx/nx-842*
+F:	include/linux/sw842.h
+F:	crypto/842.c
+F:	lib/842/

 IBM Power Linux RAID adapter
 M:	Brian King <brking@us.ibm.com>
......
@@ -53,20 +53,13 @@ config CRYPTO_SHA256_ARM
 	  SHA-256 secure hash standard (DFIPS 180-2) implemented
 	  using optimized ARM assembler and NEON, when available.

-config CRYPTO_SHA512_ARM_NEON
-	tristate "SHA384 and SHA512 digest algorithm (ARM NEON)"
-	depends on KERNEL_MODE_NEON
-	select CRYPTO_SHA512
+config CRYPTO_SHA512_ARM
+	tristate "SHA-384/512 digest algorithm (ARM-asm and NEON)"
 	select CRYPTO_HASH
+	depends on !CPU_V7M
 	help
 	  SHA-512 secure hash standard (DFIPS 180-2) implemented
-	  using ARM NEON instructions, when available.
-
-	  This version of SHA implements a 512 bit hash with 256 bits of
-	  security against collision attacks.
-
-	  This code also includes SHA-384, a 384 bit hash with 192 bits
-	  of security against collision attacks.
+	  using optimized ARM assembler and NEON, when available.

 config CRYPTO_AES_ARM
 	tristate "AES cipher algorithms (ARM-asm)"
......
@@ -7,7 +7,7 @@ obj-$(CONFIG_CRYPTO_AES_ARM_BS) += aes-arm-bs.o
 obj-$(CONFIG_CRYPTO_SHA1_ARM) += sha1-arm.o
 obj-$(CONFIG_CRYPTO_SHA1_ARM_NEON) += sha1-arm-neon.o
 obj-$(CONFIG_CRYPTO_SHA256_ARM) += sha256-arm.o
-obj-$(CONFIG_CRYPTO_SHA512_ARM_NEON) += sha512-arm-neon.o
+obj-$(CONFIG_CRYPTO_SHA512_ARM) += sha512-arm.o

 ce-obj-$(CONFIG_CRYPTO_AES_ARM_CE) += aes-arm-ce.o
 ce-obj-$(CONFIG_CRYPTO_SHA1_ARM_CE) += sha1-arm-ce.o
@@ -30,7 +30,8 @@ sha1-arm-y := sha1-armv4-large.o sha1_glue.o
 sha1-arm-neon-y := sha1-armv7-neon.o sha1_neon_glue.o
 sha256-arm-neon-$(CONFIG_KERNEL_MODE_NEON) := sha256_neon_glue.o
 sha256-arm-y := sha256-core.o sha256_glue.o $(sha256-arm-neon-y)
-sha512-arm-neon-y := sha512-armv7-neon.o sha512_neon_glue.o
+sha512-arm-neon-$(CONFIG_KERNEL_MODE_NEON) := sha512-neon-glue.o
+sha512-arm-y := sha512-core.o sha512-glue.o $(sha512-arm-neon-y)
 sha1-arm-ce-y := sha1-ce-core.o sha1-ce-glue.o
 sha2-arm-ce-y := sha2-ce-core.o sha2-ce-glue.o
 aes-arm-ce-y := aes-ce-core.o aes-ce-glue.o
@@ -45,4 +46,7 @@ $(src)/aesbs-core.S_shipped: $(src)/bsaes-armv7.pl
 $(src)/sha256-core.S_shipped: $(src)/sha256-armv4.pl
 	$(call cmd,perl)

-.PRECIOUS: $(obj)/aesbs-core.S $(obj)/sha256-core.S
+$(src)/sha512-core.S_shipped: $(src)/sha512-armv4.pl
+	$(call cmd,perl)
+
+.PRECIOUS: $(obj)/aesbs-core.S $(obj)/sha256-core.S $(obj)/sha512-core.S
@@ -101,15 +101,14 @@
 	\dround		q10, q11
 	blo		0f		@ AES-128: 10 rounds
 	vld1.8		{q10-q11}, [ip]!
-	beq		1f		@ AES-192: 12 rounds
 	\dround		q12, q13
+	beq		1f		@ AES-192: 12 rounds
 	vld1.8		{q12-q13}, [ip]
 	\dround		q10, q11
 0:	\fround		q12, q13, q14
 	bx		lr

-1:	\dround		q12, q13
-	\fround		q10, q11, q14
+1:	\fround		q10, q11, q14
 	bx		lr
 	.endm
@@ -122,8 +121,8 @@
 	 * q2  : third in/output block (_3x version only)
 	 * q8  : first round key
 	 * q9  : secound round key
+	 * ip  : address of 3rd round key
 	 * q14 : final round key
-	 * r2  : address of round key array
 	 * r3  : number of rounds
 	 */
 	.align		6
......
/*
* sha512-glue.c - accelerated SHA-384/512 for ARM
*
* Copyright (C) 2015 Linaro Ltd <ard.biesheuvel@linaro.org>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <crypto/internal/hash.h>
#include <crypto/sha.h>
#include <crypto/sha512_base.h>
#include <linux/crypto.h>
#include <linux/module.h>
#include <asm/hwcap.h>
#include <asm/neon.h>
#include "sha512.h"
MODULE_DESCRIPTION("Accelerated SHA-384/SHA-512 secure hash for ARM");
MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>");
MODULE_LICENSE("GPL v2");
MODULE_ALIAS_CRYPTO("sha384");
MODULE_ALIAS_CRYPTO("sha512");
MODULE_ALIAS_CRYPTO("sha384-arm");
MODULE_ALIAS_CRYPTO("sha512-arm");
asmlinkage void sha512_block_data_order(u64 *state, u8 const *src, int blocks);
int sha512_arm_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
return sha512_base_do_update(desc, data, len,
(sha512_block_fn *)sha512_block_data_order);
}
int sha512_arm_final(struct shash_desc *desc, u8 *out)
{
sha512_base_do_finalize(desc,
(sha512_block_fn *)sha512_block_data_order);
return sha512_base_finish(desc, out);
}
int sha512_arm_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
sha512_base_do_update(desc, data, len,
(sha512_block_fn *)sha512_block_data_order);
return sha512_arm_final(desc, out);
}
static struct shash_alg sha512_arm_algs[] = { {
.init = sha384_base_init,
.update = sha512_arm_update,
.final = sha512_arm_final,
.finup = sha512_arm_finup,
.descsize = sizeof(struct sha512_state),
.digestsize = SHA384_DIGEST_SIZE,
.base = {
.cra_name = "sha384",
.cra_driver_name = "sha384-arm",
.cra_priority = 250,
.cra_flags = CRYPTO_ALG_TYPE_SHASH,
.cra_blocksize = SHA512_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
}, {
.init = sha512_base_init,
.update = sha512_arm_update,
.final = sha512_arm_final,
.finup = sha512_arm_finup,
.descsize = sizeof(struct sha512_state),
.digestsize = SHA512_DIGEST_SIZE,
.base = {
.cra_name = "sha512",
.cra_driver_name = "sha512-arm",
.cra_priority = 250,
.cra_flags = CRYPTO_ALG_TYPE_SHASH,
.cra_blocksize = SHA512_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
} };
static int __init sha512_arm_mod_init(void)
{
int err;
err = crypto_register_shashes(sha512_arm_algs,
ARRAY_SIZE(sha512_arm_algs));
if (err)
return err;
if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && cpu_has_neon()) {
err = crypto_register_shashes(sha512_neon_algs,
ARRAY_SIZE(sha512_neon_algs));
if (err)
goto err_unregister;
}
return 0;
err_unregister:
crypto_unregister_shashes(sha512_arm_algs,
ARRAY_SIZE(sha512_arm_algs));
return err;
}
static void __exit sha512_arm_mod_fini(void)
{
crypto_unregister_shashes(sha512_arm_algs,
ARRAY_SIZE(sha512_arm_algs));
if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && cpu_has_neon())
crypto_unregister_shashes(sha512_neon_algs,
ARRAY_SIZE(sha512_neon_algs));
}
module_init(sha512_arm_mod_init);
module_exit(sha512_arm_mod_fini);
/*
* sha512-neon-glue.c - accelerated SHA-384/512 for ARM NEON
*
* Copyright (C) 2015 Linaro Ltd <ard.biesheuvel@linaro.org>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <crypto/internal/hash.h>
#include <crypto/sha.h>
#include <crypto/sha512_base.h>
#include <linux/crypto.h>
#include <linux/module.h>
#include <asm/simd.h>
#include <asm/neon.h>
#include "sha512.h"
MODULE_ALIAS_CRYPTO("sha384-neon");
MODULE_ALIAS_CRYPTO("sha512-neon");
asmlinkage void sha512_block_data_order_neon(u64 *state, u8 const *src,
int blocks);
static int sha512_neon_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
struct sha512_state *sctx = shash_desc_ctx(desc);
if (!may_use_simd() ||
(sctx->count[0] % SHA512_BLOCK_SIZE) + len < SHA512_BLOCK_SIZE)
return sha512_arm_update(desc, data, len);
kernel_neon_begin();
sha512_base_do_update(desc, data, len,
(sha512_block_fn *)sha512_block_data_order_neon);
kernel_neon_end();
return 0;
}
static int sha512_neon_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
if (!may_use_simd())
return sha512_arm_finup(desc, data, len, out);
kernel_neon_begin();
if (len)
sha512_base_do_update(desc, data, len,
(sha512_block_fn *)sha512_block_data_order_neon);
sha512_base_do_finalize(desc,
(sha512_block_fn *)sha512_block_data_order_neon);
kernel_neon_end();
return sha512_base_finish(desc, out);
}
static int sha512_neon_final(struct shash_desc *desc, u8 *out)
{
return sha512_neon_finup(desc, NULL, 0, out);
}
struct shash_alg sha512_neon_algs[] = { {
.init = sha384_base_init,
.update = sha512_neon_update,
.final = sha512_neon_final,
.finup = sha512_neon_finup,
.descsize = sizeof(struct sha512_state),
.digestsize = SHA384_DIGEST_SIZE,
.base = {
.cra_name = "sha384",
.cra_driver_name = "sha384-neon",
.cra_priority = 300,
.cra_flags = CRYPTO_ALG_TYPE_SHASH,
.cra_blocksize = SHA384_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
}, {
.init = sha512_base_init,
.update = sha512_neon_update,
.final = sha512_neon_final,
.finup = sha512_neon_finup,
.descsize = sizeof(struct sha512_state),
.digestsize = SHA512_DIGEST_SIZE,
.base = {
.cra_name = "sha512",
.cra_driver_name = "sha512-neon",
.cra_priority = 300,
.cra_flags = CRYPTO_ALG_TYPE_SHASH,
.cra_blocksize = SHA512_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
} };
int sha512_arm_update(struct shash_desc *desc, const u8 *data,
unsigned int len);
int sha512_arm_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out);
extern struct shash_alg sha512_neon_algs[2];
/*
* Glue code for the SHA512 Secure Hash Algorithm assembly implementation
* using NEON instructions.
*
* Copyright © 2014 Jussi Kivilinna <jussi.kivilinna@iki.fi>
*
* This file is based on sha512_ssse3_glue.c:
* Copyright (C) 2013 Intel Corporation
* Author: Tim Chen <tim.c.chen@linux.intel.com>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version.
*
*/
#include <crypto/internal/hash.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/mm.h>
#include <linux/cryptohash.h>
#include <linux/types.h>
#include <linux/string.h>
#include <crypto/sha.h>
#include <asm/byteorder.h>
#include <asm/simd.h>
#include <asm/neon.h>
static const u64 sha512_k[] = {
0x428a2f98d728ae22ULL, 0x7137449123ef65cdULL,
0xb5c0fbcfec4d3b2fULL, 0xe9b5dba58189dbbcULL,
0x3956c25bf348b538ULL, 0x59f111f1b605d019ULL,
0x923f82a4af194f9bULL, 0xab1c5ed5da6d8118ULL,
0xd807aa98a3030242ULL, 0x12835b0145706fbeULL,
0x243185be4ee4b28cULL, 0x550c7dc3d5ffb4e2ULL,
0x72be5d74f27b896fULL, 0x80deb1fe3b1696b1ULL,
0x9bdc06a725c71235ULL, 0xc19bf174cf692694ULL,
0xe49b69c19ef14ad2ULL, 0xefbe4786384f25e3ULL,
0x0fc19dc68b8cd5b5ULL, 0x240ca1cc77ac9c65ULL,
0x2de92c6f592b0275ULL, 0x4a7484aa6ea6e483ULL,
0x5cb0a9dcbd41fbd4ULL, 0x76f988da831153b5ULL,
0x983e5152ee66dfabULL, 0xa831c66d2db43210ULL,
0xb00327c898fb213fULL, 0xbf597fc7beef0ee4ULL,
0xc6e00bf33da88fc2ULL, 0xd5a79147930aa725ULL,
0x06ca6351e003826fULL, 0x142929670a0e6e70ULL,
0x27b70a8546d22ffcULL, 0x2e1b21385c26c926ULL,
0x4d2c6dfc5ac42aedULL, 0x53380d139d95b3dfULL,
0x650a73548baf63deULL, 0x766a0abb3c77b2a8ULL,
0x81c2c92e47edaee6ULL, 0x92722c851482353bULL,
0xa2bfe8a14cf10364ULL, 0xa81a664bbc423001ULL,
0xc24b8b70d0f89791ULL, 0xc76c51a30654be30ULL,
0xd192e819d6ef5218ULL, 0xd69906245565a910ULL,
0xf40e35855771202aULL, 0x106aa07032bbd1b8ULL,
0x19a4c116b8d2d0c8ULL, 0x1e376c085141ab53ULL,
0x2748774cdf8eeb99ULL, 0x34b0bcb5e19b48a8ULL,
0x391c0cb3c5c95a63ULL, 0x4ed8aa4ae3418acbULL,
0x5b9cca4f7763e373ULL, 0x682e6ff3d6b2b8a3ULL,
0x748f82ee5defb2fcULL, 0x78a5636f43172f60ULL,
0x84c87814a1f0ab72ULL, 0x8cc702081a6439ecULL,
0x90befffa23631e28ULL, 0xa4506cebde82bde9ULL,
0xbef9a3f7b2c67915ULL, 0xc67178f2e372532bULL,
0xca273eceea26619cULL, 0xd186b8c721c0c207ULL,
0xeada7dd6cde0eb1eULL, 0xf57d4f7fee6ed178ULL,
0x06f067aa72176fbaULL, 0x0a637dc5a2c898a6ULL,
0x113f9804bef90daeULL, 0x1b710b35131c471bULL,
0x28db77f523047d84ULL, 0x32caab7b40c72493ULL,
0x3c9ebe0a15c9bebcULL, 0x431d67c49c100d4cULL,
0x4cc5d4becb3e42b6ULL, 0x597f299cfc657e2aULL,
0x5fcb6fab3ad6faecULL, 0x6c44198c4a475817ULL
};
asmlinkage void sha512_transform_neon(u64 *digest, const void *data,
const u64 k[], unsigned int num_blks);
static int sha512_neon_init(struct shash_desc *desc)
{
struct sha512_state *sctx = shash_desc_ctx(desc);
sctx->state[0] = SHA512_H0;
sctx->state[1] = SHA512_H1;
sctx->state[2] = SHA512_H2;
sctx->state[3] = SHA512_H3;
sctx->state[4] = SHA512_H4;
sctx->state[5] = SHA512_H5;
sctx->state[6] = SHA512_H6;
sctx->state[7] = SHA512_H7;
sctx->count[0] = sctx->count[1] = 0;
return 0;
}
static int __sha512_neon_update(struct shash_desc *desc, const u8 *data,
unsigned int len, unsigned int partial)
{
struct sha512_state *sctx = shash_desc_ctx(desc);
unsigned int done = 0;
sctx->count[0] += len;
if (sctx->count[0] < len)
sctx->count[1]++;
if (partial) {
done = SHA512_BLOCK_SIZE - partial;
memcpy(sctx->buf + partial, data, done);
sha512_transform_neon(sctx->state, sctx->buf, sha512_k, 1);
}
if (len - done >= SHA512_BLOCK_SIZE) {
const unsigned int rounds = (len - done) / SHA512_BLOCK_SIZE;
sha512_transform_neon(sctx->state, data + done, sha512_k,
rounds);
done += rounds * SHA512_BLOCK_SIZE;
}
memcpy(sctx->buf, data + done, len - done);
return 0;
}
static int sha512_neon_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
struct sha512_state *sctx = shash_desc_ctx(desc);
unsigned int partial = sctx->count[0] % SHA512_BLOCK_SIZE;
int res;
/* Handle the fast case right here */
if (partial + len < SHA512_BLOCK_SIZE) {
sctx->count[0] += len;
if (sctx->count[0] < len)
sctx->count[1]++;
memcpy(sctx->buf + partial, data, len);
return 0;
}
if (!may_use_simd()) {
res = crypto_sha512_update(desc, data, len);
} else {
kernel_neon_begin();
res = __sha512_neon_update(desc, data, len, partial);
kernel_neon_end();
}
return res;
}
/* Add padding and return the message digest. */
static int sha512_neon_final(struct shash_desc *desc, u8 *out)
{
struct sha512_state *sctx = shash_desc_ctx(desc);
unsigned int i, index, padlen;
__be64 *dst = (__be64 *)out;
__be64 bits[2];
static const u8 padding[SHA512_BLOCK_SIZE] = { 0x80, };
/* save number of bits */
bits[1] = cpu_to_be64(sctx->count[0] << 3);
bits[0] = cpu_to_be64(sctx->count[1] << 3 | sctx->count[0] >> 61);
/* Pad out to 112 mod 128 and append length */
index = sctx->count[0] & 0x7f;
padlen = (index < 112) ? (112 - index) : ((128+112) - index);
if (!may_use_simd()) {
crypto_sha512_update(desc, padding, padlen);
crypto_sha512_update(desc, (const u8 *)&bits, sizeof(bits));
} else {
kernel_neon_begin();
/* We need to fill a whole block for __sha512_neon_update() */
if (padlen <= 112) {
sctx->count[0] += padlen;
if (sctx->count[0] < padlen)
sctx->count[1]++;
memcpy(sctx->buf + index, padding, padlen);
} else {
__sha512_neon_update(desc, padding, padlen, index);
}
__sha512_neon_update(desc, (const u8 *)&bits,
sizeof(bits), 112);
kernel_neon_end();
}
/* Store state in digest */
for (i = 0; i < 8; i++)
dst[i] = cpu_to_be64(sctx->state[i]);
/* Wipe context */
memset(sctx, 0, sizeof(*sctx));
return 0;
}
static int sha512_neon_export(struct shash_desc *desc, void *out)
{
struct sha512_state *sctx = shash_desc_ctx(desc);
memcpy(out, sctx, sizeof(*sctx));
return 0;
}
static int sha512_neon_import(struct shash_desc *desc, const void *in)
{
struct sha512_state *sctx = shash_desc_ctx(desc);
memcpy(sctx, in, sizeof(*sctx));
return 0;
}
static int sha384_neon_init(struct shash_desc *desc)
{
struct sha512_state *sctx = shash_desc_ctx(desc);
sctx->state[0] = SHA384_H0;
sctx->state[1] = SHA384_H1;
sctx->state[2] = SHA384_H2;
sctx->state[3] = SHA384_H3;
sctx->state[4] = SHA384_H4;
sctx->state[5] = SHA384_H5;
sctx->state[6] = SHA384_H6;
sctx->state[7] = SHA384_H7;
sctx->count[0] = sctx->count[1] = 0;
return 0;
}
static int sha384_neon_final(struct shash_desc *desc, u8 *hash)
{
u8 D[SHA512_DIGEST_SIZE];
sha512_neon_final(desc, D);
memcpy(hash, D, SHA384_DIGEST_SIZE);
memzero_explicit(D, SHA512_DIGEST_SIZE);
return 0;
}
static struct shash_alg algs[] = { {
.digestsize = SHA512_DIGEST_SIZE,
.init = sha512_neon_init,
.update = sha512_neon_update,
.final = sha512_neon_final,
.export = sha512_neon_export,
.import = sha512_neon_import,
.descsize = sizeof(struct sha512_state),
.statesize = sizeof(struct sha512_state),
.base = {
.cra_name = "sha512",
.cra_driver_name = "sha512-neon",
.cra_priority = 250,
.cra_flags = CRYPTO_ALG_TYPE_SHASH,
.cra_blocksize = SHA512_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
}, {
.digestsize = SHA384_DIGEST_SIZE,
.init = sha384_neon_init,
.update = sha512_neon_update,
.final = sha384_neon_final,
.export = sha512_neon_export,
.import = sha512_neon_import,
.descsize = sizeof(struct sha512_state),
.statesize = sizeof(struct sha512_state),
.base = {
.cra_name = "sha384",
.cra_driver_name = "sha384-neon",
.cra_priority = 250,
.cra_flags = CRYPTO_ALG_TYPE_SHASH,
.cra_blocksize = SHA384_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
} };
static int __init sha512_neon_mod_init(void)
{
if (!cpu_has_neon())
return -ENODEV;
return crypto_register_shashes(algs, ARRAY_SIZE(algs));
}
static void __exit sha512_neon_mod_fini(void)
{
crypto_unregister_shashes(algs, ARRAY_SIZE(algs));
}
module_init(sha512_neon_mod_init);
module_exit(sha512_neon_mod_fini);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("SHA512 Secure Hash Algorithm, NEON accelerated");
MODULE_ALIAS_CRYPTO("sha512");
MODULE_ALIAS_CRYPTO("sha384");
@@ -13,7 +13,7 @@
 #include <crypto/aes.h>
 #include <crypto/algapi.h>
 #include <crypto/scatterwalk.h>
-#include <linux/crypto.h>
+#include <crypto/internal/aead.h>
 #include <linux/module.h>

 #include "aes-ce-setkey.h"
......
@@ -69,10 +69,10 @@ static int octeon_md5_init(struct shash_desc *desc)
 {
 	struct md5_state *mctx = shash_desc_ctx(desc);

-	mctx->hash[0] = cpu_to_le32(0x67452301);
-	mctx->hash[1] = cpu_to_le32(0xefcdab89);
-	mctx->hash[2] = cpu_to_le32(0x98badcfe);
-	mctx->hash[3] = cpu_to_le32(0x10325476);
+	mctx->hash[0] = cpu_to_le32(MD5_H0);
+	mctx->hash[1] = cpu_to_le32(MD5_H1);
+	mctx->hash[2] = cpu_to_le32(MD5_H2);
+	mctx->hash[3] = cpu_to_le32(MD5_H3);
 	mctx->byte_count = 0;

 	return 0;
......
@@ -8,6 +8,7 @@
  * for more details.
  */

+#include <linux/export.h>
 #include <linux/interrupt.h>
 #include <linux/clockchips.h>
 #include <linux/clocksource.h>
@@ -106,6 +107,7 @@ cycles_t get_cycles(void)
 {
 	return nios2_timer_read(&nios2_cs.cs);
 }
+EXPORT_SYMBOL(get_cycles);

 static void nios2_timer_start(struct nios2_timer *timer)
 {
......
@@ -37,10 +37,10 @@ static int ppc_md5_init(struct shash_desc *desc)
 {
 	struct md5_state *sctx = shash_desc_ctx(desc);

-	sctx->hash[0] = 0x67452301;
-	sctx->hash[1] = 0xefcdab89;
-	sctx->hash[2] = 0x98badcfe;
-	sctx->hash[3] = 0x10325476;
+	sctx->hash[0] = MD5_H0;
+	sctx->hash[1] = MD5_H1;
+	sctx->hash[2] = MD5_H2;
+	sctx->hash[3] = MD5_H3;
 	sctx->byte_count = 0;

 	return 0;
......
/*
* ICSWX api
*
* Copyright (C) 2015 IBM Corp.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*
* This provides the Initiate Coprocessor Store Word Indexed (ICSWX)
* instruction. This instruction is used to communicate with PowerPC
* coprocessors. This also provides definitions of the structures used
* to communicate with the coprocessor.
*
* The RFC02130: Coprocessor Architecture document is the reference for
* everything in this file unless otherwise noted.
*/
#ifndef _ARCH_POWERPC_INCLUDE_ASM_ICSWX_H_
#define _ARCH_POWERPC_INCLUDE_ASM_ICSWX_H_
#include <asm/ppc-opcode.h> /* for PPC_ICSWX */
/* Chapter 6.5.8 Coprocessor-Completion Block (CCB) */
#define CCB_VALUE (0x3fffffffffffffff)
#define CCB_ADDRESS (0xfffffffffffffff8)
#define CCB_CM (0x0000000000000007)
#define CCB_CM0 (0x0000000000000004)
#define CCB_CM12 (0x0000000000000003)
#define CCB_CM0_ALL_COMPLETIONS (0x0)
#define CCB_CM0_LAST_IN_CHAIN (0x4)
#define CCB_CM12_STORE (0x0)
#define CCB_CM12_INTERRUPT (0x1)
#define CCB_SIZE (0x10)
#define CCB_ALIGN CCB_SIZE
struct coprocessor_completion_block {
__be64 value;
__be64 address;
} __packed __aligned(CCB_ALIGN);
/* Chapter 6.5.7 Coprocessor-Status Block (CSB) */
#define CSB_V (0x80)
#define CSB_F (0x04)
#define CSB_CH (0x03)
#define CSB_CE_INCOMPLETE (0x80)
#define CSB_CE_TERMINATION (0x40)
#define CSB_CE_TPBC (0x20)
#define CSB_CC_SUCCESS (0)
#define CSB_CC_INVALID_ALIGN (1)
#define CSB_CC_OPERAND_OVERLAP (2)
#define CSB_CC_DATA_LENGTH (3)
#define CSB_CC_TRANSLATION (5)
#define CSB_CC_PROTECTION (6)
#define CSB_CC_RD_EXTERNAL (7)
#define CSB_CC_INVALID_OPERAND (8)
#define CSB_CC_PRIVILEGE (9)
#define CSB_CC_INTERNAL (10)
#define CSB_CC_WR_EXTERNAL (12)
#define CSB_CC_NOSPC (13)
#define CSB_CC_EXCESSIVE_DDE (14)
#define CSB_CC_WR_TRANSLATION (15)
#define CSB_CC_WR_PROTECTION (16)
#define CSB_CC_UNKNOWN_CODE (17)
#define CSB_CC_ABORT (18)
#define CSB_CC_TRANSPORT (20)
#define CSB_CC_SEGMENTED_DDL (31)
#define CSB_CC_PROGRESS_POINT (32)
#define CSB_CC_DDE_OVERFLOW (33)
#define CSB_CC_SESSION (34)
#define CSB_CC_PROVISION (36)
#define CSB_CC_CHAIN (37)
#define CSB_CC_SEQUENCE (38)
#define CSB_CC_HW (39)
#define CSB_SIZE (0x10)
#define CSB_ALIGN CSB_SIZE
struct coprocessor_status_block {
u8 flags;
u8 cs;
u8 cc;
u8 ce;
__be32 count;
__be64 address;
} __packed __aligned(CSB_ALIGN);
/* Chapter 6.5.10 Data-Descriptor List (DDL)
* each list contains one or more Data-Descriptor Entries (DDE)
*/
#define DDE_P (0x8000)
#define DDE_SIZE (0x10)
#define DDE_ALIGN DDE_SIZE
struct data_descriptor_entry {
__be16 flags;
u8 count;
u8 index;
__be32 length;
__be64 address;
} __packed __aligned(DDE_ALIGN);
/* Chapter 6.5.2 Coprocessor-Request Block (CRB) */
#define CRB_SIZE (0x80)
#define CRB_ALIGN (0x100) /* Errata: requires 256 alignment */
/* Coprocessor Status Block field
* ADDRESS address of CSB
* C CCB is valid
* AT 0 = addrs are virtual, 1 = addrs are phys
* M enable perf monitor
*/
#define CRB_CSB_ADDRESS (0xfffffffffffffff0)
#define CRB_CSB_C (0x0000000000000008)
#define CRB_CSB_AT (0x0000000000000002)
#define CRB_CSB_M (0x0000000000000001)
struct coprocessor_request_block {
__be32 ccw;
__be32 flags;
__be64 csb_addr;
struct data_descriptor_entry source;
struct data_descriptor_entry target;
struct coprocessor_completion_block ccb;
u8 reserved[48];
struct coprocessor_status_block csb;
} __packed __aligned(CRB_ALIGN);
/* RFC02167 Initiate Coprocessor Instructions document
* Chapter 8.2.1.1.1 RS
* Chapter 8.2.3 Coprocessor Directive
* Chapter 8.2.4 Execution
*
* The CCW must be converted to BE before passing to icswx()
*/
#define CCW_PS (0xff000000)
#define CCW_CT (0x00ff0000)
#define CCW_CD (0x0000ffff)
#define CCW_CL (0x0000c000)
/* RFC02167 Initiate Coprocessor Instructions document
* Chapter 8.2.1 Initiate Coprocessor Store Word Indexed (ICSWX)
* Chapter 8.2.4.1 Condition Register 0
*/
#define ICSWX_INITIATED (0x8)
#define ICSWX_BUSY (0x4)
#define ICSWX_REJECTED (0x2)
static inline int icswx(__be32 ccw, struct coprocessor_request_block *crb)
{
__be64 ccw_reg = ccw;
u32 cr;
__asm__ __volatile__(
PPC_ICSWX(%1,0,%2) "\n"
"mfcr %0\n"
: "=r" (cr)
: "r" (ccw_reg), "r" (crb)
: "cr0", "memory");
return (int)((cr >> 28) & 0xf);
}
#endif /* _ARCH_POWERPC_INCLUDE_ASM_ICSWX_H_ */
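For illustration only (not part of this patch), a hypothetical sketch of how the structures defined in this header fit together when issuing a single request: the caller is assumed to provide a coherent, properly aligned CRB and a pre-built (already big-endian) CCW, and real users such as the nx-842 driver add timeouts and map each CSB completion code to a specific error.

/* hypothetical example: issue one ICSWX request and poll its status block */
#include <linux/compiler.h>
#include <linux/errno.h>
#include <linux/kernel.h>
#include <asm/byteorder.h>
#include <asm/icswx.h>
#include <asm/page.h>
#include <asm/processor.h>

static int icswx_issue_and_wait(struct coprocessor_request_block *crb,
				void *src, unsigned int slen,
				void *dst, unsigned int dlen)
{
	int ret;

	crb->source.address = cpu_to_be64(__pa(src));
	crb->source.length  = cpu_to_be32(slen);
	crb->target.address = cpu_to_be64(__pa(dst));
	crb->target.length  = cpu_to_be32(dlen);
	/* point the hardware at the CSB; AT=1 marks the addresses as physical */
	crb->csb_addr = cpu_to_be64(__pa(&crb->csb) | CRB_CSB_AT);

	ret = icswx(crb->ccw, crb);
	if (ret != ICSWX_INITIATED)
		return (ret == ICSWX_BUSY) ? -EBUSY : -EINVAL;

	/* busy-wait until the coprocessor marks the status block valid */
	while (!(READ_ONCE(crb->csb.flags) & CSB_V))
		cpu_relax();

	return (crb->csb.cc == CSB_CC_SUCCESS) ? 0 : -EIO;
}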
@@ -136,6 +136,8 @@
 #define PPC_INST_DCBAL			0x7c2005ec
 #define PPC_INST_DCBZL			0x7c2007ec
 #define PPC_INST_ICBT			0x7c00002c
+#define PPC_INST_ICSWX			0x7c00032d
+#define PPC_INST_ICSWEPX		0x7c00076d
 #define PPC_INST_ISEL			0x7c00001e
 #define PPC_INST_ISEL_MASK		0xfc00003e
 #define PPC_INST_LDARX			0x7c0000a8
@@ -403,4 +405,15 @@
 #define MFTMR(tmr, r)		stringify_in_c(.long PPC_INST_MFTMR | \
 					       TMRN(tmr) | ___PPC_RT(r))

+/* Coprocessor instructions */
+#define PPC_ICSWX(s, a, b)	stringify_in_c(.long PPC_INST_ICSWX | \
+					       ___PPC_RS(s) | \
+					       ___PPC_RA(a) | \
+					       ___PPC_RB(b))
+#define PPC_ICSWEPX(s, a, b)	stringify_in_c(.long PPC_INST_ICSWEPX | \
+					       ___PPC_RS(s) | \
+					       ___PPC_RA(a) | \
+					       ___PPC_RB(b))
+
 #endif /* _ASM_POWERPC_PPC_OPCODE_H */
@@ -800,6 +800,7 @@ int of_get_ibm_chip_id(struct device_node *np)
 	}
 	return -1;
 }
+EXPORT_SYMBOL(of_get_ibm_chip_id);

 /**
  * cpu_to_chip_id - Return the cpus chip-id
......
@@ -33,10 +33,10 @@ static int md5_sparc64_init(struct shash_desc *desc)
 {
 	struct md5_state *mctx = shash_desc_ctx(desc);

-	mctx->hash[0] = cpu_to_le32(0x67452301);
-	mctx->hash[1] = cpu_to_le32(0xefcdab89);
-	mctx->hash[2] = cpu_to_le32(0x98badcfe);
-	mctx->hash[3] = cpu_to_le32(0x10325476);
+	mctx->hash[0] = cpu_to_le32(MD5_H0);
+	mctx->hash[1] = cpu_to_le32(MD5_H1);
+	mctx->hash[2] = cpu_to_le32(MD5_H2);
+	mctx->hash[3] = cpu_to_le32(MD5_H3);
 	mctx->byte_count = 0;

 	return 0;
......
@@ -156,7 +156,7 @@ int __init crypto_fpu_init(void)
 	return crypto_register_template(&crypto_fpu_tmpl);
 }

-void __exit crypto_fpu_exit(void)
+void crypto_fpu_exit(void)
 {
 	crypto_unregister_template(&crypto_fpu_tmpl);
 }
......
@@ -882,7 +882,8 @@ static int __init sha1_mb_mod_init(void)
 		INIT_DELAYED_WORK(&cpu_state->flush, mcryptd_flusher);
 		cpu_state->cpu = cpu;
 		cpu_state->alg_state = &sha1_mb_alg_state;
-		cpu_state->mgr = (struct sha1_ctx_mgr *) kzalloc(sizeof(struct sha1_ctx_mgr), GFP_KERNEL);
+		cpu_state->mgr = kzalloc(sizeof(struct sha1_ctx_mgr),
+					GFP_KERNEL);
 		if (!cpu_state->mgr)
 			goto err2;
 		sha1_ctx_mgr_init(cpu_state->mgr);
......
 /*
- * Cryptographic API for the 842 compression algorithm.
+ * Cryptographic API for the 842 software compression algorithm.
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License as published by
@@ -11,173 +11,73 @@
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
  * GNU General Public License for more details.
  *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ * Copyright (C) IBM Corporation, 2011-2015
  *
- * Copyright (C) IBM Corporation, 2011
+ * Original Authors: Robert Jennings <rcj@linux.vnet.ibm.com>
+ *                   Seth Jennings <sjenning@linux.vnet.ibm.com>
  *
- * Authors: Robert Jennings <rcj@linux.vnet.ibm.com>
- *          Seth Jennings <sjenning@linux.vnet.ibm.com>
+ * Rewrite: Dan Streetman <ddstreet@ieee.org>
+ *
+ * This is the software implementation of compression and decompression using
+ * the 842 format.  This uses the software 842 library at lib/842/ which is
+ * only a reference implementation, and is very, very slow as compared to other
+ * software compressors.  You probably do not want to use this software
+ * compression.  If you have access to the PowerPC 842 compression hardware, you
+ * want to use the 842 hardware compression interface, which is at:
+ * drivers/crypto/nx/nx-842-crypto.c
  */

 #include <linux/init.h>
 #include <linux/module.h>
 #include <linux/crypto.h>
-#include <linux/vmalloc.h>
-#include <linux/nx842.h>
-#include <linux/lzo.h>
-#include <linux/timer.h>
-
-static int nx842_uselzo;
-
-struct nx842_ctx {
-	void *nx842_wmem; /* working memory for 842/lzo */
-};
+#include <linux/sw842.h>

-enum nx842_crypto_type {
-	NX842_CRYPTO_TYPE_842,
-	NX842_CRYPTO_TYPE_LZO
+struct crypto842_ctx {
+	char wmem[SW842_MEM_COMPRESS];	/* working memory for compress */
 };

-#define NX842_SENTINEL 0xdeadbeef
-
-struct nx842_crypto_header {
-	unsigned int sentinel; /* debug */
-	enum nx842_crypto_type type;
-};
-
-static int nx842_init(struct crypto_tfm *tfm)
-{
-	struct nx842_ctx *ctx = crypto_tfm_ctx(tfm);
-	int wmemsize;
-
-	wmemsize = max_t(int, nx842_get_workmem_size(), LZO1X_MEM_COMPRESS);
-	ctx->nx842_wmem = kmalloc(wmemsize, GFP_NOFS);
-	if (!ctx->nx842_wmem)
-		return -ENOMEM;
-	return 0;
-}
-
-static void nx842_exit(struct crypto_tfm *tfm)
-{
-	struct nx842_ctx *ctx = crypto_tfm_ctx(tfm);
-
-	kfree(ctx->nx842_wmem);
-}
-
-static void nx842_reset_uselzo(unsigned long data)
+static int crypto842_compress(struct crypto_tfm *tfm,
+			      const u8 *src, unsigned int slen,
+			      u8 *dst, unsigned int *dlen)
 {
-	nx842_uselzo = 0;
-}
-
-static DEFINE_TIMER(failover_timer, nx842_reset_uselzo, 0, 0);
-
-static int nx842_crypto_compress(struct crypto_tfm *tfm, const u8 *src,
-			unsigned int slen, u8 *dst, unsigned int *dlen)
-{
-	struct nx842_ctx *ctx = crypto_tfm_ctx(tfm);
-	struct nx842_crypto_header *hdr;
-	unsigned int tmp_len = *dlen;
-	size_t lzodlen; /* needed for lzo */
-	int err;
-
-	*dlen = 0;
-	hdr = (struct nx842_crypto_header *)dst;
-	hdr->sentinel = NX842_SENTINEL; /* debug */
-	dst += sizeof(struct nx842_crypto_header);
-	tmp_len -= sizeof(struct nx842_crypto_header);
-	lzodlen = tmp_len;
-
-	if (likely(!nx842_uselzo)) {
-		err = nx842_compress(src, slen, dst, &tmp_len, ctx->nx842_wmem);
-
-		if (likely(!err)) {
-			hdr->type = NX842_CRYPTO_TYPE_842;
-			*dlen = tmp_len + sizeof(struct nx842_crypto_header);
-			return 0;
-		}
-
-		/* hardware failed */
-		nx842_uselzo = 1;
+	struct crypto842_ctx *ctx = crypto_tfm_ctx(tfm);

-		/* set timer to check for hardware again in 1 second */
-		mod_timer(&failover_timer, jiffies + msecs_to_jiffies(1000));
-	}
-
-	/* no hardware, use lzo */
-	err = lzo1x_1_compress(src, slen, dst, &lzodlen, ctx->nx842_wmem);
-	if (err != LZO_E_OK)
-		return -EINVAL;
-
-	hdr->type = NX842_CRYPTO_TYPE_LZO;
-	*dlen = lzodlen + sizeof(struct nx842_crypto_header);
-	return 0;
+	return sw842_compress(src, slen, dst, dlen, ctx->wmem);
 }

-static int nx842_crypto_decompress(struct crypto_tfm *tfm, const u8 *src,
-			unsigned int slen, u8 *dst, unsigned int *dlen)
+static int crypto842_decompress(struct crypto_tfm *tfm,
+				const u8 *src, unsigned int slen,
+				u8 *dst, unsigned int *dlen)
 {
-	struct nx842_ctx *ctx = crypto_tfm_ctx(tfm);
-	struct nx842_crypto_header *hdr;
-	unsigned int tmp_len = *dlen;
-	size_t lzodlen; /* needed for lzo */
-	int err;
-
-	*dlen = 0;
-	hdr = (struct nx842_crypto_header *)src;
-
-	if (unlikely(hdr->sentinel != NX842_SENTINEL))
-		return -EINVAL;
-
-	src += sizeof(struct nx842_crypto_header);
-	slen -= sizeof(struct nx842_crypto_header);
-
-	if (likely(hdr->type == NX842_CRYPTO_TYPE_842)) {
-		err = nx842_decompress(src, slen, dst, &tmp_len,
-				       ctx->nx842_wmem);
-		if (err)
-			return -EINVAL;
-		*dlen = tmp_len;
-	} else if (hdr->type == NX842_CRYPTO_TYPE_LZO) {
-		lzodlen = tmp_len;
-		err = lzo1x_decompress_safe(src, slen, dst, &lzodlen);
-		if (err != LZO_E_OK)
-			return -EINVAL;
-		*dlen = lzodlen;
-	} else
-		return -EINVAL;
-
-	return 0;
+	return sw842_decompress(src, slen, dst, dlen);
 }

 static struct crypto_alg alg = {
 	.cra_name		= "842",
+	.cra_driver_name	= "842-generic",
+	.cra_priority		= 100,
 	.cra_flags		= CRYPTO_ALG_TYPE_COMPRESS,
-	.cra_ctxsize		= sizeof(struct nx842_ctx),
+	.cra_ctxsize		= sizeof(struct crypto842_ctx),
 	.cra_module		= THIS_MODULE,
-	.cra_init		= nx842_init,
-	.cra_exit		= nx842_exit,
 	.cra_u			= { .compress = {
-	.coa_compress		= nx842_crypto_compress,
-	.coa_decompress		= nx842_crypto_decompress } }
+	.coa_compress		= crypto842_compress,
+	.coa_decompress		= crypto842_decompress } }
 };

-static int __init nx842_mod_init(void)
+static int __init crypto842_mod_init(void)
 {
-	del_timer(&failover_timer);
 	return crypto_register_alg(&alg);
 }
+module_init(crypto842_mod_init);

-static void __exit nx842_mod_exit(void)
+static void __exit crypto842_mod_exit(void)
 {
 	crypto_unregister_alg(&alg);
 }
-
-module_init(nx842_mod_init);
-module_exit(nx842_mod_exit);
+module_exit(crypto842_mod_exit);

 MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("842 Compression Algorithm");
+MODULE_DESCRIPTION("842 Software Compression Algorithm");
 MODULE_ALIAS_CRYPTO("842");
+MODULE_ALIAS_CRYPTO("842-generic");
+MODULE_AUTHOR("Dan Streetman <ddstreet@ieee.org>");
@@ -127,6 +127,7 @@ EXPORT_SYMBOL_GPL(af_alg_release);

 static int alg_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
 {
+	const u32 forbidden = CRYPTO_ALG_INTERNAL;
 	struct sock *sk = sock->sk;
 	struct alg_sock *ask = alg_sk(sk);
 	struct sockaddr_alg *sa = (void *)uaddr;
@@ -151,7 +152,9 @@ static int alg_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
 	if (IS_ERR(type))
 		return PTR_ERR(type);

-	private = type->bind(sa->salg_name, sa->salg_feat, sa->salg_mask);
+	private = type->bind(sa->salg_name,
+			     sa->salg_feat & ~forbidden,
+			     sa->salg_mask & ~forbidden);
 	if (IS_ERR(private)) {
 		module_put(type->owner);
 		return PTR_ERR(private);
......
@@ -164,7 +164,7 @@ static int rng_setkey(void *private, const u8 *seed, unsigned int seedlen)
 	 * Check whether seedlen is of sufficient size is done in RNG
 	 * implementations.
 	 */
-	return crypto_rng_reset(private, (u8 *)seed, seedlen);
+	return crypto_rng_reset(private, seed, seedlen);
 }

 static const struct af_alg_type algif_type_rng = {
......
@@ -14,6 +14,7 @@
  *
  */

+#include <crypto/aead.h>
 #include <crypto/internal/skcipher.h>
 #include <crypto/scatterwalk.h>
 #include <linux/errno.h>
......