Commit 72f35423 authored by Linus Torvalds

Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto updates from Herbert Xu:
 "API:
   - Fix out-of-sync IVs in self-test for IPsec AEAD algorithms

  Algorithms:
   - Use formally verified implementation of x86/curve25519

  Drivers:
   - Enhance hwrng support in caam

   - Use crypto_engine for skcipher/aead/rsa/hash in caam

   - Add Xilinx AES driver

   - Add uacce driver

   - Register zip engine to uacce in hisilicon

   - Add support for OCTEON TX CPT engine in marvell"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (162 commits)
  crypto: af_alg - bool type cosmetics
  crypto: arm[64]/poly1305 - add artifact to .gitignore files
  crypto: caam - limit single JD RNG output to maximum of 16 bytes
  crypto: caam - enable prediction resistance in HRWNG
  bus: fsl-mc: add api to retrieve mc version
  crypto: caam - invalidate entropy register during RNG initialization
  crypto: caam - check if RNG job failed
  crypto: caam - simplify RNG implementation
  crypto: caam - drop global context pointer and init_done
  crypto: caam - use struct hwrng's .init for initialization
  crypto: caam - allocate RNG instantiation descriptor with GFP_DMA
  crypto: ccree - remove duplicated include from cc_aead.c
  crypto: chelsio - remove set but not used variable 'adap'
  crypto: marvell - enable OcteonTX cpt options for build
  crypto: marvell - add the Virtual Function driver for CPT
  crypto: marvell - add support for OCTEON TX CPT engine
  crypto: marvell - create common Kconfig and Makefile for Marvell
  crypto: arm/neon - memzero_explicit aes-cbc key
  crypto: bcm - Use scnprintf() for avoiding potential buffer overflow
  crypto: atmel-i2c - Fix wakeup fail
  ...
parents 890f0b0d fcb90d51
What: /sys/class/uacce/<dev_name>/api
Date: Feb 2020
KernelVersion: 5.7
Contact: linux-accelerators@lists.ozlabs.org
Description: API of the device
Can be any string and is up to userspace to parse.
Applications use the api attribute to match the correct driver.

What: /sys/class/uacce/<dev_name>/flags
Date: Feb 2020
KernelVersion: 5.7
Contact: linux-accelerators@lists.ozlabs.org
Description: Attributes of the device, see the UACCE_DEV_xxx flags defined in uacce.h

What: /sys/class/uacce/<dev_name>/available_instances
Date: Feb 2020
KernelVersion: 5.7
Contact: linux-accelerators@lists.ozlabs.org
Description: Available instances left on the device
Returns -ENODEV if the uacce_ops get_available_instances callback is not provided

What: /sys/class/uacce/<dev_name>/algorithms
Date: Feb 2020
KernelVersion: 5.7
Contact: linux-accelerators@lists.ozlabs.org
Description: Algorithms supported by this accelerator, separated by newlines.
Can be any string and is up to userspace to parse.

What: /sys/class/uacce/<dev_name>/region_mmio_size
Date: Feb 2020
KernelVersion: 5.7
Contact: linux-accelerators@lists.ozlabs.org
Description: Size (in bytes) of the mmio region queue file

What: /sys/class/uacce/<dev_name>/region_dus_size
Date: Feb 2020
KernelVersion: 5.7
Contact: linux-accelerators@lists.ozlabs.org
Description: Size (in bytes) of the dus region queue file
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/crypto/xlnx,zynqmp-aes.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Xilinx ZynqMP AES-GCM Hardware Accelerator Device Tree Bindings
maintainers:
- Kalyani Akula <kalyani.akula@xilinx.com>
- Michal Simek <michal.simek@xilinx.com>
description: |
The ZynqMP AES-GCM hardened cryptographic accelerator is used to
encrypt or decrypt data with a provided key and initialization vector.
properties:
compatible:
const: xlnx,zynqmp-aes
required:
- compatible
additionalProperties: false
examples:
- |
firmware {
zynqmp_firmware: zynqmp-firmware {
compatible = "xlnx,zynqmp-firmware";
method = "smc";
xlnx_aes: zynqmp-aes {
compatible = "xlnx,zynqmp-aes";
};
};
};
...
.. SPDX-License-Identifier: GPL-2.0
Introduction of Uacce
---------------------
Uacce (Unified/User-space-access-intended Accelerator Framework) aims to
provide Shared Virtual Addressing (SVA) between accelerators and processes,
so an accelerator can access any data structure of the main CPU.
This differs from conventional data sharing between the CPU and an I/O
device, which shares only data content rather than addresses.
Because of the unified address space, the hardware and the user space of a
process can share the same virtual addresses in their communication.
Uacce treats the hardware accelerator as a heterogeneous processor which,
through the IOMMU, shares the same CPU page tables and as a result the same
translation from va to pa.
::
__________________________ __________________________
| | | |
| User application (CPU) | | Hardware Accelerator |
|__________________________| |__________________________|
| |
| va | va
V V
__________ __________
| | | |
| MMU | | IOMMU |
|__________| |__________|
| |
| |
V pa V pa
_______________________________________
| |
| Memory |
|_______________________________________|
Architecture
------------
Uacce is the kernel module in charge of the IOMMU and address sharing.
The user drivers and libraries are called WarpDrive.
The uacce device, built around the IOMMU SVA API, can access multiple
address spaces, including the one without PASID.
A virtual concept, the queue, is used for communication. It provides a
FIFO-like interface and maintains a unified address space between the
application and all involved hardware.
::
___________________ ________________
| | user API | |
| WarpDrive library | ------------> | user driver |
|___________________| |________________|
| |
| |
| queue fd |
| |
| |
v |
___________________ _________ |
| | | | | mmap memory
| Other framework | | uacce | | r/w interface
| crypto/nic/others | |_________| |
|___________________| |
| | |
| register | register |
| | |
| | |
| _________________ __________ |
| | | | | |
------------- | Device Driver | | IOMMU | |
|_________________| |__________| |
| |
| V
| ___________________
| | |
-------------------------- | Device(Hardware) |
|___________________|
How does it work
----------------
Uacce uses mmap and the IOMMU to do the trick.
Uacce creates a chrdev for every device registered to it. A new queue is
created when a user application opens the chrdev, and the file descriptor is
used as the user handle of the queue.
The accelerator device presents itself as a uacce object, which is exported
as a chrdev to user space. The user application communicates with the
hardware by ioctl (the control path) or shared memory (the data path).
The control path to the hardware is via file operations, while the data
path is via the mmap space of the queue fd.
The queue file address space:
::
/**
* enum uacce_qfrt: qfrt type
* @UACCE_QFRT_MMIO: device mmio region
* @UACCE_QFRT_DUS: device user share region
*/
enum uacce_qfrt {
UACCE_QFRT_MMIO = 0,
UACCE_QFRT_DUS = 1,
};
All regions are optional and differ from one device type to another.
Each region can be mmapped only once; otherwise -EEXIST is returned.
The device mmio region is mapped to the hardware mmio space. It is generally
used for doorbells or other notifications to the hardware. It is not fast
enough to serve as a data channel.
The device user share region is used to share data buffers between the user
process and the device.
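As an illustration only, a minimal user-space sketch of this flow could look
like the one below. The device path is hypothetical, and it assumes the
region type is selected via the mmap offset in units of pages (pgoff ==
qfrt), matching the enum above; real region sizes should be read from the
region_mmio_size/region_dus_size sysfs attributes rather than hard-coded.
::
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
enum uacce_qfrt { UACCE_QFRT_MMIO = 0, UACCE_QFRT_DUS = 1 };
int main(void)
{
long page = sysconf(_SC_PAGESIZE);
int fd = open("/dev/hisi_zip-0", O_RDWR);	/* opening creates a queue */
void *mmio, *dus;
if (fd < 0)
return 1;
/* the mmap offset (in pages) selects which region to map */
mmio = mmap(NULL, page, PROT_READ | PROT_WRITE, MAP_SHARED,
fd, UACCE_QFRT_MMIO * page);
dus = mmap(NULL, page, PROT_READ | PROT_WRITE, MAP_SHARED,
fd, UACCE_QFRT_DUS * page);
if (mmio == MAP_FAILED || dus == MAP_FAILED) {
close(fd);
return 1;
}
/* ... ring doorbells via mmio, exchange buffers via dus ... */
munmap(dus, page);
munmap(mmio, page);
close(fd);	/* closing the fd tears down the queue */
return 0;
}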
The Uacce register API
----------------------
The register API is defined in uacce.h.
::
struct uacce_interface {
char name[UACCE_MAX_NAME_SIZE];
unsigned int flags;
const struct uacce_ops *ops;
};
According to the IOMMU capability, uacce_interface flags can be:
::
/**
* UACCE Device flags:
* UACCE_DEV_SVA: Shared Virtual Addresses
* Support PASID
* Support device page faults (PCI PRI or SMMU Stall)
*/
#define UACCE_DEV_SVA BIT(0)
struct uacce_device *uacce_alloc(struct device *parent,
struct uacce_interface *interface);
int uacce_register(struct uacce_device *uacce);
void uacce_remove(struct uacce_device *uacce);
uacce_register results can be:
a. If uacce module is not compiled, ERR_PTR(-ENODEV)
b. Succeed with the desired flags
c. Succeed with the negotiated flags, for example
uacce_interface.flags = UACCE_DEV_SVA but uacce->flags = ~UACCE_DEV_SVA
So the user driver needs to check the return value as well as the negotiated uacce->flags.
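A sketch of the provider side, for illustration: the foo_* names and the ops
table are hypothetical, and only uacce_alloc()/uacce_register()/
uacce_remove() plus the flag negotiation described above are taken from the
API.
::
static const struct uacce_ops foo_uacce_ops;	/* hypothetical ops table */
static int foo_probe(struct platform_device *pdev)
{
struct uacce_interface iface = {
.name  = "foo",
.flags = UACCE_DEV_SVA,		/* the desired flags */
.ops   = &foo_uacce_ops,
};
struct uacce_device *uacce;
int ret;
uacce = uacce_alloc(&pdev->dev, &iface);
if (IS_ERR(uacce))			/* e.g. ERR_PTR(-ENODEV) */
return PTR_ERR(uacce);
ret = uacce_register(uacce);
if (ret) {
uacce_remove(uacce);
return ret;
}
/* check the negotiated flags, not the requested ones */
if (!(uacce->flags & UACCE_DEV_SVA))
dev_info(&pdev->dev, "SVA not available, running degraded\n");
return 0;
}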
The user driver
---------------
The queue file mmap space needs a user driver to wrap the communication
protocol. Uacce provides some attributes in sysfs for the user driver to
match the right accelerator accordingly.
More details are in Documentation/ABI/testing/sysfs-driver-uacce.
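For illustration, a user driver might match a device by scanning the sysfs
class directory and comparing the api attribute; the "hisi_qm_v1" string
below is hypothetical.
::
#include <dirent.h>
#include <stdio.h>
#include <string.h>
static int find_dev(const char *want_api, char *name, size_t len)
{
DIR *d = opendir("/sys/class/uacce");
struct dirent *e;
char path[256], api[64];
FILE *f;
if (!d)
return -1;
while ((e = readdir(d))) {
if (e->d_name[0] == '.')
continue;
snprintf(path, sizeof(path),
"/sys/class/uacce/%s/api", e->d_name);
f = fopen(path, "r");
if (!f)
continue;
/* the api attribute is a free-form string to parse */
if (fgets(api, sizeof(api), f) &&
!strncmp(api, want_api, strlen(want_api))) {
snprintf(name, len, "/dev/%s", e->d_name);
fclose(f);
closedir(d);
return 0;
}
fclose(f);
}
closedir(d);
return -1;
}
int main(void)
{
char dev[64];
if (!find_dev("hisi_qm_v1", dev, sizeof(dev)))
printf("matched %s\n", dev);
return 0;
}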
......@@ -4577,7 +4577,9 @@ S: Supported
F: drivers/scsi/cxgbi/cxgb3i
CXGB4 CRYPTO DRIVER (chcr)
M: Atul Gupta <atul.gupta@chelsio.com>
M: Ayush Sawal <ayush.sawal@chelsio.com>
M: Vinay Kumar Yadav <vinay.yadav@chelsio.com>
M: Rohit Maheshwari <rohitm@chelsio.com>
L: linux-crypto@vger.kernel.org
W: http://www.chelsio.com
S: Supported
......@@ -10066,6 +10068,7 @@ F: Documentation/devicetree/bindings/phy/phy-mvebu-utmi.txt
MARVELL CRYPTO DRIVER
M: Boris Brezillon <bbrezillon@kernel.org>
M: Arnaud Ebalard <arno@natisbad.org>
M: Srujana Challa <schalla@marvell.com>
F: drivers/crypto/marvell/
S: Maintained
L: linux-crypto@vger.kernel.org
......@@ -17139,6 +17142,18 @@ W: http://linuxtv.org
S: Maintained
F: drivers/media/pci/tw686x/
UACCE ACCELERATOR FRAMEWORK
M: Zhangfei Gao <zhangfei.gao@linaro.org>
M: Zhou Wang <wangzhou1@hisilicon.com>
L: linux-accelerators@lists.ozlabs.org
L: linux-kernel@vger.kernel.org
S: Maintained
F: Documentation/ABI/testing/sysfs-driver-uacce
F: Documentation/misc-devices/uacce.rst
F: drivers/misc/uacce/
F: include/linux/uacce.h
F: include/uapi/misc/uacce/
UBI FILE SYSTEM (UBIFS)
M: Richard Weinberger <richard@nod.at>
L: linux-mtd@lists.infradead.org
......
aesbs-core.S
sha256-core.S
sha512-core.S
poly1305-core.S
......@@ -138,6 +138,7 @@ static int aesbs_cbc_setkey(struct crypto_skcipher *tfm, const u8 *in_key,
kernel_neon_begin();
aesbs_convert_key(ctx->key.rk, rk.key_enc, ctx->key.rounds);
kernel_neon_end();
memzero_explicit(&rk, sizeof(rk));
return crypto_cipher_setkey(ctx->enc_tfm, in_key, key_len);
}
......
......@@ -8,6 +8,9 @@
#include <linux/linkage.h>
#include <asm/assembler.h>
.arch armv8-a
.fpu crypto-neon-fp-armv8
SHASH .req q0
T1 .req q1
XL .req q2
......@@ -88,8 +91,6 @@
T3_H .req d17
.text
.arch armv8-a
.fpu crypto-neon-fp-armv8
.macro __pmull_p64, rd, rn, rm, b1, b2, b3, b4
vmull.p64 \rd, \rn, \rm
......
sha256-core.S
sha512-core.S
poly1305-core.S
......@@ -151,6 +151,7 @@ static int aesbs_cbc_setkey(struct crypto_skcipher *tfm, const u8 *in_key,
kernel_neon_begin();
aesbs_convert_key(ctx->key.rk, rk.key_enc, ctx->key.rounds);
kernel_neon_end();
memzero_explicit(&rk, sizeof(rk));
return 0;
}
......
......@@ -91,12 +91,32 @@ static int sha1_ce_final(struct shash_desc *desc, u8 *out)
return sha1_base_finish(desc, out);
}
static int sha1_ce_export(struct shash_desc *desc, void *out)
{
struct sha1_ce_state *sctx = shash_desc_ctx(desc);
memcpy(out, &sctx->sst, sizeof(struct sha1_state));
return 0;
}
static int sha1_ce_import(struct shash_desc *desc, const void *in)
{
struct sha1_ce_state *sctx = shash_desc_ctx(desc);
memcpy(&sctx->sst, in, sizeof(struct sha1_state));
sctx->finalize = 0;
return 0;
}
static struct shash_alg alg = {
.init = sha1_base_init,
.update = sha1_ce_update,
.final = sha1_ce_final,
.finup = sha1_ce_finup,
.import = sha1_ce_import,
.export = sha1_ce_export,
.descsize = sizeof(struct sha1_ce_state),
.statesize = sizeof(struct sha1_state),
.digestsize = SHA1_DIGEST_SIZE,
.base = {
.cra_name = "sha1",
......
......@@ -109,12 +109,32 @@ static int sha256_ce_final(struct shash_desc *desc, u8 *out)
return sha256_base_finish(desc, out);
}
static int sha256_ce_export(struct shash_desc *desc, void *out)
{
struct sha256_ce_state *sctx = shash_desc_ctx(desc);
memcpy(out, &sctx->sst, sizeof(struct sha256_state));
return 0;
}
static int sha256_ce_import(struct shash_desc *desc, const void *in)
{
struct sha256_ce_state *sctx = shash_desc_ctx(desc);
memcpy(&sctx->sst, in, sizeof(struct sha256_state));
sctx->finalize = 0;
return 0;
}
static struct shash_alg algs[] = { {
.init = sha224_base_init,
.update = sha256_ce_update,
.final = sha256_ce_final,
.finup = sha256_ce_finup,
.export = sha256_ce_export,
.import = sha256_ce_import,
.descsize = sizeof(struct sha256_ce_state),
.statesize = sizeof(struct sha256_state),
.digestsize = SHA224_DIGEST_SIZE,
.base = {
.cra_name = "sha224",
......@@ -128,7 +148,10 @@ static struct shash_alg algs[] = { {
.update = sha256_ce_update,
.final = sha256_ce_final,
.finup = sha256_ce_finup,
.export = sha256_ce_export,
.import = sha256_ce_import,
.descsize = sizeof(struct sha256_ce_state),
.statesize = sizeof(struct sha256_state),
.digestsize = SHA256_DIGEST_SIZE,
.base = {
.cra_name = "sha256",
......
......@@ -821,8 +821,8 @@ int af_alg_sendmsg(struct socket *sock, struct msghdr *msg, size_t size,
struct af_alg_tsgl *sgl;
struct af_alg_control con = {};
long copied = 0;
bool enc = 0;
bool init = 0;
bool enc = false;
bool init = false;
int err = 0;
if (msg->msg_controllen) {
......@@ -830,13 +830,13 @@ int af_alg_sendmsg(struct socket *sock, struct msghdr *msg, size_t size,
if (err)
return err;
init = 1;
init = true;
switch (con.op) {
case ALG_OP_ENCRYPT:
enc = 1;
enc = true;
break;
case ALG_OP_DECRYPT:
enc = 0;
enc = false;
break;
default:
return -EINVAL;
......
......@@ -83,7 +83,7 @@ static int hash_sendmsg(struct socket *sock, struct msghdr *msg,
goto unlock;
}
ctx->more = 0;
ctx->more = false;
while (msg_data_left(msg)) {
int len = msg_data_left(msg);
......@@ -211,7 +211,7 @@ static int hash_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
}
if (!result || ctx->more) {
ctx->more = 0;
ctx->more = false;
err = crypto_wait_req(crypto_ahash_final(&ctx->req),
&ctx->wait);
if (err)
......@@ -436,7 +436,7 @@ static int hash_accept_parent_nokey(void *private, struct sock *sk)
ctx->result = NULL;
ctx->len = len;
ctx->more = 0;
ctx->more = false;
crypto_init_wait(&ctx->wait);
ask->private = ctx;
......
......@@ -458,7 +458,7 @@ static int crypto_authenc_esn_create(struct crypto_template *tmpl,
inst->alg.encrypt = crypto_authenc_esn_encrypt;
inst->alg.decrypt = crypto_authenc_esn_decrypt;
inst->free = crypto_authenc_esn_free,
inst->free = crypto_authenc_esn_free;
err = aead_register_instance(tmpl, inst);
if (err) {
......
......@@ -717,7 +717,6 @@ static int crypto_rfc4309_create(struct crypto_template *tmpl,
struct aead_instance *inst;
struct crypto_aead_spawn *spawn;
struct aead_alg *alg;
const char *ccm_name;
int err;
algt = crypto_get_attr_type(tb);
......@@ -729,19 +728,15 @@ static int crypto_rfc4309_create(struct crypto_template *tmpl,
mask = crypto_requires_sync(algt->type, algt->mask);
ccm_name = crypto_attr_alg_name(tb[1]);
if (IS_ERR(ccm_name))
return PTR_ERR(ccm_name);
inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
if (!inst)
return -ENOMEM;
spawn = aead_instance_ctx(inst);
err = crypto_grab_aead(spawn, aead_crypto_instance(inst),
ccm_name, 0, mask);
crypto_attr_alg_name(tb[1]), 0, mask);
if (err)
goto out_free_inst;
goto err_free_inst;
alg = crypto_spawn_aead_alg(spawn);
......@@ -749,11 +744,11 @@ static int crypto_rfc4309_create(struct crypto_template *tmpl,
/* We only support 16-byte blocks. */
if (crypto_aead_alg_ivsize(alg) != 16)
goto out_drop_alg;
goto err_free_inst;
/* Not a stream cipher? */
if (alg->base.cra_blocksize != 1)
goto out_drop_alg;
goto err_free_inst;
err = -ENAMETOOLONG;
if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
......@@ -762,7 +757,7 @@ static int crypto_rfc4309_create(struct crypto_template *tmpl,
snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
"rfc4309(%s)", alg->base.cra_driver_name) >=
CRYPTO_MAX_ALG_NAME)
goto out_drop_alg;
goto err_free_inst;
inst->alg.base.cra_flags = alg->base.cra_flags & CRYPTO_ALG_ASYNC;
inst->alg.base.cra_priority = alg->base.cra_priority;
......@@ -786,17 +781,11 @@ static int crypto_rfc4309_create(struct crypto_template *tmpl,
inst->free = crypto_rfc4309_free;
err = aead_register_instance(tmpl, inst);
if (err)
goto out_drop_alg;
out:
if (err) {
err_free_inst:
crypto_rfc4309_free(inst);
}
return err;
out_drop_alg:
crypto_drop_aead(spawn);
out_free_inst:
kfree(inst);
goto out;
}
static int crypto_cbcmac_digest_setkey(struct crypto_shash *parent,
......
......@@ -369,7 +369,6 @@ static int cryptd_create_skcipher(struct crypto_template *tmpl,
struct skcipherd_instance_ctx *ctx;
struct skcipher_instance *inst;
struct skcipher_alg *alg;
const char *name;
u32 type;
u32 mask;
int err;
......@@ -379,10 +378,6 @@ static int cryptd_create_skcipher(struct crypto_template *tmpl,
cryptd_check_internal(tb, &type, &mask);
name = crypto_attr_alg_name(tb[1]);
if (IS_ERR(name))
return PTR_ERR(name);
inst = kzalloc(sizeof(*inst) + sizeof(*ctx), GFP_KERNEL);
if (!inst)
return -ENOMEM;
......@@ -391,14 +386,14 @@ static int cryptd_create_skcipher(struct crypto_template *tmpl,
ctx->queue = queue;
err = crypto_grab_skcipher(&ctx->spawn, skcipher_crypto_instance(inst),
name, type, mask);
crypto_attr_alg_name(tb[1]), type, mask);
if (err)
goto out_free_inst;
goto err_free_inst;
alg = crypto_spawn_skcipher_alg(&ctx->spawn);
err = cryptd_init_instance(skcipher_crypto_instance(inst), &alg->base);
if (err)
goto out_drop_skcipher;
goto err_free_inst;
inst->alg.base.cra_flags = CRYPTO_ALG_ASYNC |
(alg->base.cra_flags & CRYPTO_ALG_INTERNAL);
......@@ -421,10 +416,8 @@ static int cryptd_create_skcipher(struct crypto_template *tmpl,
err = skcipher_register_instance(tmpl, inst);
if (err) {
out_drop_skcipher:
crypto_drop_skcipher(&ctx->spawn);
out_free_inst:
kfree(inst);
err_free_inst:
cryptd_skcipher_free(inst);
}
return err;
}
......@@ -694,8 +687,7 @@ static int cryptd_create_hash(struct crypto_template *tmpl, struct rtattr **tb,
err = ahash_register_instance(tmpl, inst);
if (err) {
err_free_inst:
crypto_drop_shash(&ctx->spawn);
kfree(inst);
cryptd_hash_free(inst);
}
return err;
}
......@@ -833,17 +825,12 @@ static int cryptd_create_aead(struct crypto_template *tmpl,
struct aead_instance_ctx *ctx;
struct aead_instance *inst;
struct aead_alg *alg;
const char *name;
u32 type = 0;
u32 mask = CRYPTO_ALG_ASYNC;
int err;
cryptd_check_internal(tb, &type, &mask);
name = crypto_attr_alg_name(tb[1]);
if (IS_ERR(name))
return PTR_ERR(name);
inst = kzalloc(sizeof(*inst) + sizeof(*ctx), GFP_KERNEL);
if (!inst)
return -ENOMEM;
......@@ -852,14 +839,14 @@ static int cryptd_create_aead(struct crypto_template *tmpl,
ctx->queue = queue;
err = crypto_grab_aead(&ctx->aead_spawn, aead_crypto_instance(inst),
name, type, mask);
crypto_attr_alg_name(tb[1]), type, mask);
if (err)
goto out_free_inst;
goto err_free_inst;
alg = crypto_spawn_aead_alg(&ctx->aead_spawn);
err = cryptd_init_instance(aead_crypto_instance(inst), &alg->base);
if (err)
goto out_drop_aead;
goto err_free_inst;
inst->alg.base.cra_flags = CRYPTO_ALG_ASYNC |
(alg->base.cra_flags & CRYPTO_ALG_INTERNAL);
......@@ -879,10 +866,8 @@ static int cryptd_create_aead(struct crypto_template *tmpl,
err = aead_register_instance(tmpl, inst);
if (err) {
out_drop_aead:
crypto_drop_aead(&ctx->aead_spawn);
out_free_inst:
kfree(inst);
err_free_inst:
cryptd_aead_free(inst);
}
return err;
}
......
......@@ -260,7 +260,6 @@ static int crypto_rfc3686_create(struct crypto_template *tmpl,
struct skcipher_instance *inst;
struct skcipher_alg *alg;
struct crypto_skcipher_spawn *spawn;
const char *cipher_name;
u32 mask;
int err;
......@@ -272,10 +271,6 @@ static int crypto_rfc3686_create(struct crypto_template *tmpl,
if ((algt->type ^ CRYPTO_ALG_TYPE_SKCIPHER) & algt->mask)
return -EINVAL;
cipher_name = crypto_attr_alg_name(tb[1]);
if (IS_ERR(cipher_name))
return PTR_ERR(cipher_name);
inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
if (!inst)
return -ENOMEM;
......@@ -287,7 +282,7 @@ static int crypto_rfc3686_create(struct crypto_template *tmpl,
spawn = skcipher_instance_ctx(inst);
err = crypto_grab_skcipher(spawn, skcipher_crypto_instance(inst),
cipher_name, 0, mask);
crypto_attr_alg_name(tb[1]), 0, mask);
if (err)
goto err_free_inst;
......@@ -296,20 +291,20 @@ static int crypto_rfc3686_create(struct crypto_template *tmpl,
/* We only support 16-byte blocks. */
err = -EINVAL;
if (crypto_skcipher_alg_ivsize(alg) != CTR_RFC3686_BLOCK_SIZE)
goto err_drop_spawn;
goto err_free_inst;
/* Not a stream cipher? */
if (alg->base.cra_blocksize != 1)
goto err_drop_spawn;
goto err_free_inst;
err = -ENAMETOOLONG;
if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
"rfc3686(%s)", alg->base.cra_name) >= CRYPTO_MAX_ALG_NAME)
goto err_drop_spawn;
goto err_free_inst;
if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
"rfc3686(%s)", alg->base.cra_driver_name) >=
CRYPTO_MAX_ALG_NAME)
goto err_drop_spawn;
goto err_free_inst;
inst->alg.base.cra_priority = alg->base.cra_priority;
inst->alg.base.cra_blocksize = 1;
......@@ -336,17 +331,11 @@ static int crypto_rfc3686_create(struct crypto_template *tmpl,
inst->free = crypto_rfc3686_free;
err = skcipher_register_instance(tmpl, inst);
if (err)
goto err_drop_spawn;
out:
return err;
err_drop_spawn:
crypto_drop_skcipher(spawn);
if (err) {
err_free_inst:
kfree(inst);
goto out;
crypto_rfc3686_free(inst);
}
return err;
}
static struct crypto_template crypto_ctr_tmpls[] = {
......
......@@ -327,7 +327,6 @@ static int crypto_cts_create(struct crypto_template *tmpl, struct rtattr **tb)
struct skcipher_instance *inst;
struct crypto_attr_type *algt;
struct skcipher_alg *alg;
const char *cipher_name;
u32 mask;
int err;
......@@ -340,10 +339,6 @@ static int crypto_cts_create(struct crypto_template *tmpl, struct rtattr **tb)
mask = crypto_requires_sync(algt->type, algt->mask);
cipher_name = crypto_attr_alg_name(tb[1]);
if (IS_ERR(cipher_name))
return PTR_ERR(cipher_name);
inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
if (!inst)
return -ENOMEM;
......@@ -351,7 +346,7 @@ static int crypto_cts_create(struct crypto_template *tmpl, struct rtattr **tb)
spawn = skcipher_instance_ctx(inst);
err = crypto_grab_skcipher(spawn, skcipher_crypto_instance(inst),
cipher_name, 0, mask);
crypto_attr_alg_name(tb[1]), 0, mask);
if (err)
goto err_free_inst;
......@@ -359,15 +354,15 @@ static int crypto_cts_create(struct crypto_template *tmpl, struct rtattr **tb)
err = -EINVAL;
if (crypto_skcipher_alg_ivsize(alg) != alg->base.cra_blocksize)
goto err_drop_spawn;
goto err_free_inst;
if (strncmp(alg->base.cra_name, "cbc(", 4))
goto err_drop_spawn;
goto err_free_inst;
err = crypto_inst_setname(skcipher_crypto_instance(inst), "cts",
&alg->base);
if (err)
goto err_drop_spawn;
goto err_free_inst;
inst->alg.base.cra_flags = alg->base.cra_flags & CRYPTO_ALG_ASYNC;
inst->alg.base.cra_priority = alg->base.cra_priority;
......@@ -391,17 +386,11 @@ static int crypto_cts_create(struct crypto_template *tmpl, struct rtattr **tb)
inst->free = crypto_cts_free;
err = skcipher_register_instance(tmpl, inst);
if (err)
goto err_drop_spawn;
out:
return err;
err_drop_spawn:
crypto_drop_skcipher(spawn);
if (err) {
err_free_inst:
kfree(inst);
goto out;
crypto_cts_free(inst);
}
return err;
}
static struct crypto_template crypto_cts_tmpl = {
......
......@@ -840,7 +840,6 @@ static int crypto_rfc4106_create(struct crypto_template *tmpl,
struct aead_instance *inst;
struct crypto_aead_spawn *spawn;
struct aead_alg *alg;
const char *ccm_name;
int err;
algt = crypto_get_attr_type(tb);
......@@ -852,19 +851,15 @@ static int crypto_rfc4106_create(struct crypto_template *tmpl,
mask = crypto_requires_sync(algt->type, algt->mask);
ccm_name = crypto_attr_alg_name(tb[1]);
if (IS_ERR(ccm_name))
return PTR_ERR(ccm_name);
inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
if (!inst)
return -ENOMEM;
spawn = aead_instance_ctx(inst);
err = crypto_grab_aead(spawn, aead_crypto_instance(inst),
ccm_name, 0, mask);
crypto_attr_alg_name(tb[1]), 0, mask);
if (err)
goto out_free_inst;
goto err_free_inst;
alg = crypto_spawn_aead_alg(spawn);
......@@ -872,11 +867,11 @@ static int crypto_rfc4106_create(struct crypto_template *tmpl,
/* Underlying IV size must be 12. */
if (crypto_aead_alg_ivsize(alg) != GCM_AES_IV_SIZE)
goto out_drop_alg;
goto err_free_inst;
/* Not a stream cipher? */
if (alg->base.cra_blocksize != 1)
goto out_drop_alg;
goto err_free_inst;
err = -ENAMETOOLONG;
if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
......@@ -885,7 +880,7 @@ static int crypto_rfc4106_create(struct crypto_template *tmpl,
snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
"rfc4106(%s)", alg->base.cra_driver_name) >=
CRYPTO_MAX_ALG_NAME)
goto out_drop_alg;
goto err_free_inst;
inst->alg.base.cra_flags = alg->base.cra_flags & CRYPTO_ALG_ASYNC;
inst->alg.base.cra_priority = alg->base.cra_priority;
......@@ -909,17 +904,11 @@ static int crypto_rfc4106_create(struct crypto_template *tmpl,
inst->free = crypto_rfc4106_free;
err = aead_register_instance(tmpl, inst);
if (err)
goto out_drop_alg;
out:
if (err) {
err_free_inst:
crypto_rfc4106_free(inst);
}
return err;
out_drop_alg:
crypto_drop_aead(spawn);
out_free_inst:
kfree(inst);
goto out;
}
static int crypto_rfc4543_setkey(struct crypto_aead *parent, const u8 *key,
......@@ -1071,10 +1060,8 @@ static int crypto_rfc4543_create(struct crypto_template *tmpl,
struct crypto_attr_type *algt;
u32 mask;
struct aead_instance *inst;
struct crypto_aead_spawn *spawn;
struct aead_alg *alg;
struct crypto_rfc4543_instance_ctx *ctx;
const char *ccm_name;
int err;
algt = crypto_get_attr_type(tb);
......@@ -1086,32 +1073,27 @@ static int crypto_rfc4543_create(struct crypto_template *tmpl,
mask = crypto_requires_sync(algt->type, algt->mask);
ccm_name = crypto_attr_alg_name(tb[1]);
if (IS_ERR(ccm_name))
return PTR_ERR(ccm_name);
inst = kzalloc(sizeof(*inst) + sizeof(*ctx), GFP_KERNEL);
if (!inst)
return -ENOMEM;
ctx = aead_instance_ctx(inst);
spawn = &ctx->aead;
err = crypto_grab_aead(spawn, aead_crypto_instance(inst),
ccm_name, 0, mask);
err = crypto_grab_aead(&ctx->aead, aead_crypto_instance(inst),
crypto_attr_alg_name(tb[1]), 0, mask);
if (err)
goto out_free_inst;
goto err_free_inst;
alg = crypto_spawn_aead_alg(spawn);
alg = crypto_spawn_aead_alg(&ctx->aead);
err = -EINVAL;
/* Underlying IV size must be 12. */
if (crypto_aead_alg_ivsize(alg) != GCM_AES_IV_SIZE)
goto out_drop_alg;
goto err_free_inst;
/* Not a stream cipher? */
if (alg->base.cra_blocksize != 1)
goto out_drop_alg;
goto err_free_inst;
err = -ENAMETOOLONG;
if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
......@@ -1120,7 +1102,7 @@ static int crypto_rfc4543_create(struct crypto_template *tmpl,
snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
"rfc4543(%s)", alg->base.cra_driver_name) >=
CRYPTO_MAX_ALG_NAME)
goto out_drop_alg;
goto err_free_inst;
inst->alg.base.cra_flags = alg->base.cra_flags & CRYPTO_ALG_ASYNC;
inst->alg.base.cra_priority = alg->base.cra_priority;
......@@ -1141,20 +1123,14 @@ static int crypto_rfc4543_create(struct crypto_template *tmpl,
inst->alg.encrypt = crypto_rfc4543_encrypt;
inst->alg.decrypt = crypto_rfc4543_decrypt;
inst->free = crypto_rfc4543_free,
inst->free = crypto_rfc4543_free;
err = aead_register_instance(tmpl, inst);
if (err)
goto out_drop_alg;
out:
if (err) {
err_free_inst:
crypto_rfc4543_free(inst);
}
return err;
out_drop_alg:
crypto_drop_aead(spawn);
out_free_inst:
kfree(inst);
goto out;
}
static struct crypto_template crypto_gcm_tmpls[] = {
......
......@@ -41,7 +41,6 @@ static void aead_geniv_free(struct aead_instance *inst)
struct aead_instance *aead_geniv_alloc(struct crypto_template *tmpl,
struct rtattr **tb, u32 type, u32 mask)
{
const char *name;
struct crypto_aead_spawn *spawn;
struct crypto_attr_type *algt;
struct aead_instance *inst;
......@@ -57,10 +56,6 @@ struct aead_instance *aead_geniv_alloc(struct crypto_template *tmpl,
if ((algt->type ^ CRYPTO_ALG_TYPE_AEAD) & algt->mask)
return ERR_PTR(-EINVAL);
name = crypto_attr_alg_name(tb[1]);
if (IS_ERR(name))
return ERR_CAST(name);
inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
if (!inst)
return ERR_PTR(-ENOMEM);
......@@ -71,7 +66,7 @@ struct aead_instance *aead_geniv_alloc(struct crypto_template *tmpl,
mask |= crypto_requires_sync(algt->type, algt->mask);
err = crypto_grab_aead(spawn, aead_crypto_instance(inst),
name, type, mask);
crypto_attr_alg_name(tb[1]), type, mask);
if (err)
goto err_free_inst;
......@@ -82,17 +77,17 @@ struct aead_instance *aead_geniv_alloc(struct crypto_template *tmpl,
err = -EINVAL;
if (ivsize < sizeof(u64))
goto err_drop_alg;
goto err_free_inst;
err = -ENAMETOOLONG;
if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
"%s(%s)", tmpl->name, alg->base.cra_name) >=
CRYPTO_MAX_ALG_NAME)
goto err_drop_alg;
goto err_free_inst;
if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
"%s(%s)", tmpl->name, alg->base.cra_driver_name) >=
CRYPTO_MAX_ALG_NAME)
goto err_drop_alg;
goto err_free_inst;
inst->alg.base.cra_flags = alg->base.cra_flags & CRYPTO_ALG_ASYNC;
inst->alg.base.cra_priority = alg->base.cra_priority;
......@@ -111,10 +106,8 @@ struct aead_instance *aead_geniv_alloc(struct crypto_template *tmpl,
out:
return inst;
err_drop_alg:
crypto_drop_aead(spawn);
err_free_inst:
kfree(inst);
aead_geniv_free(inst);
inst = ERR_PTR(err);
goto out;
}
......
......@@ -343,15 +343,15 @@ static int create(struct crypto_template *tmpl, struct rtattr **tb)
err = -EINVAL;
if (alg->base.cra_blocksize != LRW_BLOCK_SIZE)
goto err_drop_spawn;
goto err_free_inst;
if (crypto_skcipher_alg_ivsize(alg))
goto err_drop_spawn;
goto err_free_inst;
err = crypto_inst_setname(skcipher_crypto_instance(inst), "lrw",
&alg->base);
if (err)
goto err_drop_spawn;
goto err_free_inst;
err = -EINVAL;
cipher_name = alg->base.cra_name;
......@@ -364,20 +364,20 @@ static int create(struct crypto_template *tmpl, struct rtattr **tb)
len = strlcpy(ecb_name, cipher_name + 4, sizeof(ecb_name));
if (len < 2 || len >= sizeof(ecb_name))
goto err_drop_spawn;
goto err_free_inst;
if (ecb_name[len - 1] != ')')
goto err_drop_spawn;
goto err_free_inst;
ecb_name[len - 1] = 0;
if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
"lrw(%s)", ecb_name) >= CRYPTO_MAX_ALG_NAME) {
err = -ENAMETOOLONG;
goto err_drop_spawn;
goto err_free_inst;
}
} else
goto err_drop_spawn;
goto err_free_inst;
inst->alg.base.cra_flags = alg->base.cra_flags & CRYPTO_ALG_ASYNC;
inst->alg.base.cra_priority = alg->base.cra_priority;
......@@ -403,17 +403,11 @@ static int create(struct crypto_template *tmpl, struct rtattr **tb)
inst->free = free;
err = skcipher_register_instance(tmpl, inst);
if (err)
goto err_drop_spawn;
out:
return err;
err_drop_spawn:
crypto_drop_skcipher(spawn);
if (err) {
err_free_inst:
kfree(inst);
goto out;
free(inst);
}
return err;
}
static struct crypto_template crypto_tmpl = {
......
......@@ -23,9 +23,6 @@
#include <linux/types.h>
#include <asm/byteorder.h>
#define MD5_DIGEST_WORDS 4
#define MD5_MESSAGE_BYTES 64
const u8 md5_zero_message_hash[MD5_DIGEST_SIZE] = {
0xd4, 0x1d, 0x8c, 0xd9, 0x8f, 0x00, 0xb2, 0x04,
0xe9, 0x80, 0x09, 0x98, 0xec, 0xf8, 0x42, 0x7e,
......
......@@ -232,17 +232,12 @@ static int pcrypt_create_aead(struct crypto_template *tmpl, struct rtattr **tb,
struct crypto_attr_type *algt;
struct aead_instance *inst;
struct aead_alg *alg;
const char *name;
int err;
algt = crypto_get_attr_type(tb);
if (IS_ERR(algt))
return PTR_ERR(algt);
name = crypto_attr_alg_name(tb[1]);
if (IS_ERR(name))
return PTR_ERR(name);
inst = kzalloc(sizeof(*inst) + sizeof(*ctx), GFP_KERNEL);
if (!inst)
return -ENOMEM;
......@@ -252,21 +247,21 @@ static int pcrypt_create_aead(struct crypto_template *tmpl, struct rtattr **tb,
ctx = aead_instance_ctx(inst);
ctx->psenc = padata_alloc_shell(pencrypt);
if (!ctx->psenc)
goto out_free_inst;
goto err_free_inst;
ctx->psdec = padata_alloc_shell(pdecrypt);
if (!ctx->psdec)
goto out_free_psenc;
goto err_free_inst;
err = crypto_grab_aead(&ctx->spawn, aead_crypto_instance(inst),
name, 0, 0);
crypto_attr_alg_name(tb[1]), 0, 0);
if (err)
goto out_free_psdec;
goto err_free_inst;
alg = crypto_spawn_aead_alg(&ctx->spawn);
err = pcrypt_init_instance(aead_crypto_instance(inst), &alg->base);
if (err)
goto out_drop_aead;
goto err_free_inst;
inst->alg.base.cra_flags = CRYPTO_ALG_ASYNC;
......@@ -286,21 +281,11 @@ static int pcrypt_create_aead(struct crypto_template *tmpl, struct rtattr **tb,
inst->free = pcrypt_free;
err = aead_register_instance(tmpl, inst);
if (err)
goto out_drop_aead;
out:
if (err) {
err_free_inst:
pcrypt_free(inst);
}
return err;
out_drop_aead:
crypto_drop_aead(&ctx->spawn);
out_free_psdec:
padata_free_shell(ctx->psdec);
out_free_psenc:
padata_free_shell(ctx->psenc);
out_free_inst:
kfree(inst);
goto out;
}
static int pcrypt_create(struct crypto_template *tmpl, struct rtattr **tb)
......
......@@ -60,7 +60,7 @@ static int c_show(struct seq_file *m, void *p)
goto out;
}
switch (alg->cra_flags & (CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_LARVAL)) {
switch (alg->cra_flags & CRYPTO_ALG_TYPE_MASK) {
case CRYPTO_ALG_TYPE_CIPHER:
seq_printf(m, "type : cipher\n");
seq_printf(m, "blocksize : %u\n", alg->cra_blocksize);
......
......@@ -37,12 +37,16 @@ int crypto_rng_reset(struct crypto_rng *tfm, const u8 *seed, unsigned int slen)
crypto_stats_get(alg);
if (!seed && slen) {
buf = kmalloc(slen, GFP_KERNEL);
if (!buf)
if (!buf) {
crypto_alg_put(alg);
return -ENOMEM;
}
err = get_random_bytes_wait(buf, slen);
if (err)
if (err) {
crypto_alg_put(alg);
goto out;
}
seed = buf;
}
......
......@@ -596,14 +596,11 @@ static void pkcs1pad_free(struct akcipher_instance *inst)
static int pkcs1pad_create(struct crypto_template *tmpl, struct rtattr **tb)
{
const struct rsa_asn1_template *digest_info;
struct crypto_attr_type *algt;
u32 mask;
struct akcipher_instance *inst;
struct pkcs1pad_inst_ctx *ctx;
struct crypto_akcipher_spawn *spawn;
struct akcipher_alg *rsa_alg;
const char *rsa_alg_name;
const char *hash_name;
int err;
......@@ -616,60 +613,49 @@ static int pkcs1pad_create(struct crypto_template *tmpl, struct rtattr **tb)
mask = crypto_requires_sync(algt->type, algt->mask);
rsa_alg_name = crypto_attr_alg_name(tb[1]);
if (IS_ERR(rsa_alg_name))
return PTR_ERR(rsa_alg_name);
hash_name = crypto_attr_alg_name(tb[2]);
if (IS_ERR(hash_name))
hash_name = NULL;
if (hash_name) {
digest_info = rsa_lookup_asn1(hash_name);
if (!digest_info)
return -EINVAL;
} else
digest_info = NULL;
inst = kzalloc(sizeof(*inst) + sizeof(*ctx), GFP_KERNEL);
if (!inst)
return -ENOMEM;
ctx = akcipher_instance_ctx(inst);
spawn = &ctx->spawn;
ctx->digest_info = digest_info;
err = crypto_grab_akcipher(spawn, akcipher_crypto_instance(inst),
rsa_alg_name, 0, mask);
err = crypto_grab_akcipher(&ctx->spawn, akcipher_crypto_instance(inst),
crypto_attr_alg_name(tb[1]), 0, mask);
if (err)
goto out_free_inst;
goto err_free_inst;
rsa_alg = crypto_spawn_akcipher_alg(spawn);
rsa_alg = crypto_spawn_akcipher_alg(&ctx->spawn);
err = -ENAMETOOLONG;
if (!hash_name) {
hash_name = crypto_attr_alg_name(tb[2]);
if (IS_ERR(hash_name)) {
if (snprintf(inst->alg.base.cra_name,
CRYPTO_MAX_ALG_NAME, "pkcs1pad(%s)",
rsa_alg->base.cra_name) >= CRYPTO_MAX_ALG_NAME)
goto out_drop_alg;
goto err_free_inst;
if (snprintf(inst->alg.base.cra_driver_name,
CRYPTO_MAX_ALG_NAME, "pkcs1pad(%s)",
rsa_alg->base.cra_driver_name) >=
CRYPTO_MAX_ALG_NAME)
goto out_drop_alg;
goto err_free_inst;
} else {
ctx->digest_info = rsa_lookup_asn1(hash_name);
if (!ctx->digest_info) {
err = -EINVAL;
goto err_free_inst;
}
if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
"pkcs1pad(%s,%s)", rsa_alg->base.cra_name,
hash_name) >= CRYPTO_MAX_ALG_NAME)
goto out_drop_alg;
goto err_free_inst;
if (snprintf(inst->alg.base.cra_driver_name,
CRYPTO_MAX_ALG_NAME, "pkcs1pad(%s,%s)",
rsa_alg->base.cra_driver_name,
hash_name) >= CRYPTO_MAX_ALG_NAME)
goto out_drop_alg;
goto err_free_inst;
}
inst->alg.base.cra_flags = rsa_alg->base.cra_flags & CRYPTO_ALG_ASYNC;
......@@ -691,15 +677,10 @@ static int pkcs1pad_create(struct crypto_template *tmpl, struct rtattr **tb)
inst->free = pkcs1pad_free;
err = akcipher_register_instance(tmpl, inst);
if (err)
goto out_drop_alg;
return 0;
out_drop_alg:
crypto_drop_akcipher(spawn);
out_free_inst:
kfree(inst);
if (err) {
err_free_inst:
pkcs1pad_free(inst);
}
return err;
}
......
......@@ -1514,8 +1514,8 @@ static void test_skcipher_speed(const char *algo, int enc, unsigned int secs,
return;
}
pr_info("\ntesting speed of async %s (%s) %s\n", algo,
get_driver_name(crypto_skcipher, tfm), e);
pr_info("\ntesting speed of %s %s (%s) %s\n", async ? "async" : "sync",
algo, get_driver_name(crypto_skcipher, tfm), e);
req = skcipher_request_alloc(tfm, GFP_KERNEL);
if (!req) {
......
......@@ -91,10 +91,11 @@ struct aead_test_suite {
unsigned int einval_allowed : 1;
/*
* Set if the algorithm intentionally ignores the last 8 bytes of the
* AAD buffer during decryption.
* Set if this algorithm requires that the IV be located at the end of
* the AAD buffer, in addition to being given in the normal way. The
* behavior when the two IV copies differ is implementation-defined.
*/
unsigned int esp_aad : 1;
unsigned int aad_iv : 1;
};
struct cipher_test_suite {
......@@ -2167,9 +2168,10 @@ struct aead_extra_tests_ctx {
* here means the full ciphertext including the authentication tag. The
* authentication tag (and hence also the ciphertext) is assumed to be nonempty.
*/
static void mutate_aead_message(struct aead_testvec *vec, bool esp_aad)
static void mutate_aead_message(struct aead_testvec *vec, bool aad_iv,
unsigned int ivsize)
{
const unsigned int aad_tail_size = esp_aad ? 8 : 0;
const unsigned int aad_tail_size = aad_iv ? ivsize : 0;
const unsigned int authsize = vec->clen - vec->plen;
if (prandom_u32() % 2 == 0 && vec->alen > aad_tail_size) {
......@@ -2207,6 +2209,9 @@ static void generate_aead_message(struct aead_request *req,
/* Generate the AAD. */
generate_random_bytes((u8 *)vec->assoc, vec->alen);
if (suite->aad_iv && vec->alen >= ivsize)
/* Avoid implementation-defined behavior. */
memcpy((u8 *)vec->assoc + vec->alen - ivsize, vec->iv, ivsize);
if (inauthentic && prandom_u32() % 2 == 0) {
/* Generate a random ciphertext. */
......@@ -2242,7 +2247,7 @@ static void generate_aead_message(struct aead_request *req,
* Mutate the authentic (ciphertext, AAD) pair to get an
* inauthentic one.
*/
mutate_aead_message(vec, suite->esp_aad);
mutate_aead_message(vec, suite->aad_iv, ivsize);
}
vec->novrfy = 1;
if (suite->einval_allowed)
......@@ -2507,11 +2512,11 @@ static int test_aead_extra(const char *driver,
goto out;
}
err = test_aead_inauthentic_inputs(ctx);
err = test_aead_vs_generic_impl(ctx);
if (err)
goto out;
err = test_aead_vs_generic_impl(ctx);
err = test_aead_inauthentic_inputs(ctx);
out:
kfree(ctx->vec.key);
kfree(ctx->vec.iv);
......@@ -5229,7 +5234,7 @@ static const struct alg_test_desc alg_test_descs[] = {
.aead = {
____VECS(aes_gcm_rfc4106_tv_template),
.einval_allowed = 1,
.esp_aad = 1,
.aad_iv = 1,
}
}
}, {
......@@ -5241,7 +5246,7 @@ static const struct alg_test_desc alg_test_descs[] = {
.aead = {
____VECS(aes_ccm_rfc4309_tv_template),
.einval_allowed = 1,
.esp_aad = 1,
.aad_iv = 1,
}
}
}, {
......@@ -5252,6 +5257,7 @@ static const struct alg_test_desc alg_test_descs[] = {
.aead = {
____VECS(aes_gcm_rfc4543_tv_template),
.einval_allowed = 1,
.aad_iv = 1,
}
}
}, {
......@@ -5267,7 +5273,7 @@ static const struct alg_test_desc alg_test_descs[] = {
.aead = {
____VECS(rfc7539esp_tv_template),
.einval_allowed = 1,
.esp_aad = 1,
.aad_iv = 1,
}
}
}, {
......
......@@ -379,15 +379,15 @@ static int create(struct crypto_template *tmpl, struct rtattr **tb)
err = -EINVAL;
if (alg->base.cra_blocksize != XTS_BLOCK_SIZE)
goto err_drop_spawn;
goto err_free_inst;
if (crypto_skcipher_alg_ivsize(alg))
goto err_drop_spawn;
goto err_free_inst;
err = crypto_inst_setname(skcipher_crypto_instance(inst), "xts",
&alg->base);
if (err)
goto err_drop_spawn;
goto err_free_inst;
err = -EINVAL;
cipher_name = alg->base.cra_name;
......@@ -400,20 +400,20 @@ static int create(struct crypto_template *tmpl, struct rtattr **tb)
len = strlcpy(ctx->name, cipher_name + 4, sizeof(ctx->name));
if (len < 2 || len >= sizeof(ctx->name))
goto err_drop_spawn;
goto err_free_inst;
if (ctx->name[len - 1] != ')')
goto err_drop_spawn;
goto err_free_inst;
ctx->name[len - 1] = 0;
if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
"xts(%s)", ctx->name) >= CRYPTO_MAX_ALG_NAME) {
err = -ENAMETOOLONG;
goto err_drop_spawn;
goto err_free_inst;
}
} else
goto err_drop_spawn;
goto err_free_inst;
inst->alg.base.cra_flags = alg->base.cra_flags & CRYPTO_ALG_ASYNC;
inst->alg.base.cra_priority = alg->base.cra_priority;
......@@ -437,17 +437,11 @@ static int create(struct crypto_template *tmpl, struct rtattr **tb)
inst->free = free;
err = skcipher_register_instance(tmpl, inst);
if (err)
goto err_drop_spawn;
out:
return err;
err_drop_spawn:
crypto_drop_skcipher(&ctx->spawn);
if (err) {
err_free_inst:
kfree(inst);
goto out;
free(inst);
}
return err;
}
static struct crypto_template crypto_tmpl = {
......
......@@ -26,6 +26,8 @@
*/
#define FSL_MC_DEFAULT_DMA_MASK (~0ULL)
static struct fsl_mc_version mc_version;
/**
* struct fsl_mc - Private data of a "fsl,qoriq-mc" platform device
* @root_mc_bus_dev: fsl-mc device representing the root DPRC
......@@ -54,20 +56,6 @@ struct fsl_mc_addr_translation_range {
phys_addr_t start_phys_addr;
};
/**
* struct mc_version
* @major: Major version number: incremented on API compatibility changes
* @minor: Minor version number: incremented on API additions (that are
* backward compatible); reset when major version is incremented
* @revision: Internal revision number: incremented on implementation changes
* and/or bug fixes that have no impact on API
*/
struct mc_version {
u32 major;
u32 minor;
u32 revision;
};
/**
* fsl_mc_bus_match - device to driver matching callback
* @dev: the fsl-mc device to match against
......@@ -338,7 +326,7 @@ EXPORT_SYMBOL_GPL(fsl_mc_driver_unregister);
*/
static int mc_get_version(struct fsl_mc_io *mc_io,
u32 cmd_flags,
struct mc_version *mc_ver_info)
struct fsl_mc_version *mc_ver_info)
{
struct fsl_mc_command cmd = { 0 };
struct dpmng_rsp_get_version *rsp_params;
......@@ -363,6 +351,20 @@ static int mc_get_version(struct fsl_mc_io *mc_io,
return 0;
}
/**
* fsl_mc_get_version - function to retrieve the MC f/w version information
*
* Return: mc version when called after fsl-mc-bus probe; NULL otherwise.
*/
struct fsl_mc_version *fsl_mc_get_version(void)
{
if (mc_version.major)
return &mc_version;
return NULL;
}
EXPORT_SYMBOL_GPL(fsl_mc_get_version);
/**
* fsl_mc_get_root_dprc - function to traverse to the root dprc
*/
......@@ -862,7 +864,6 @@ static int fsl_mc_bus_probe(struct platform_device *pdev)
int container_id;
phys_addr_t mc_portal_phys_addr;
u32 mc_portal_size;
struct mc_version mc_version;
struct resource res;
mc = devm_kzalloc(&pdev->dev, sizeof(*mc), GFP_KERNEL);
......
......@@ -244,7 +244,8 @@ config HW_RANDOM_MXC_RNGA
config HW_RANDOM_IMX_RNGC
tristate "Freescale i.MX RNGC Random Number Generator"
depends on ARCH_MXC
depends on HAS_IOMEM && HAVE_CLK
depends on SOC_IMX25 || COMPILE_TEST
default HW_RANDOM
---help---
This driver provides kernel-side support for the Random Number
......@@ -466,6 +467,13 @@ config HW_RANDOM_NPCM
If unsure, say Y.
config HW_RANDOM_KEYSTONE
depends on ARCH_KEYSTONE || COMPILE_TEST
default HW_RANDOM
tristate "TI Keystone NETCP SA Hardware random number generator"
help
This option enables Keystone's hardware random generator.
endif # HW_RANDOM
config UML_RANDOM
......@@ -482,10 +490,3 @@ config UML_RANDOM
(check your distro, or download from
http://sourceforge.net/projects/gkernel/). rngd periodically reads
/dev/hwrng and injects the entropy into /dev/random.
config HW_RANDOM_KEYSTONE
depends on ARCH_KEYSTONE || COMPILE_TEST
default HW_RANDOM
tristate "TI Keystone NETCP SA Hardware random number generator"
help
This option enables Keystone's hardware random generator.
......@@ -18,12 +18,22 @@
#include <linux/completion.h>
#include <linux/io.h>
#define RNGC_VER_ID 0x0000
#define RNGC_COMMAND 0x0004
#define RNGC_CONTROL 0x0008
#define RNGC_STATUS 0x000C
#define RNGC_ERROR 0x0010
#define RNGC_FIFO 0x0014
/* the fields in the ver id register */
#define RNGC_TYPE_SHIFT 28
#define RNGC_VER_MAJ_SHIFT 8
/* the rng_type field */
#define RNGC_TYPE_RNGB 0x1
#define RNGC_TYPE_RNGC 0x2
#define RNGC_CMD_CLR_ERR 0x00000020
#define RNGC_CMD_CLR_INT 0x00000010
#define RNGC_CMD_SEED 0x00000002
......@@ -31,6 +41,7 @@
#define RNGC_CTRL_MASK_ERROR 0x00000040
#define RNGC_CTRL_MASK_DONE 0x00000020
#define RNGC_CTRL_AUTO_SEED 0x00000010
#define RNGC_STATUS_ERROR 0x00010000
#define RNGC_STATUS_FIFO_LEVEL_MASK 0x00000f00
......@@ -100,15 +111,11 @@ static int imx_rngc_self_test(struct imx_rngc *rngc)
writel(cmd | RNGC_CMD_SELF_TEST, rngc->base + RNGC_COMMAND);
ret = wait_for_completion_timeout(&rngc->rng_op_done, RNGC_TIMEOUT);
if (!ret) {
imx_rngc_irq_mask_clear(rngc);
imx_rngc_irq_mask_clear(rngc);
if (!ret)
return -ETIMEDOUT;
}
if (rngc->err_reg != 0)
return -EIO;
return 0;
return rngc->err_reg ? -EIO : 0;
}
static int imx_rngc_read(struct hwrng *rng, void *data, size_t max, bool wait)
......@@ -165,17 +172,17 @@ static irqreturn_t imx_rngc_irq(int irq, void *priv)
static int imx_rngc_init(struct hwrng *rng)
{
struct imx_rngc *rngc = container_of(rng, struct imx_rngc, rng);
u32 cmd;
u32 cmd, ctrl;
int ret;
/* clear error */
cmd = readl(rngc->base + RNGC_COMMAND);
writel(cmd | RNGC_CMD_CLR_ERR, rngc->base + RNGC_COMMAND);
imx_rngc_irq_unmask(rngc);
/* create seed, repeat while there is some statistical error */
do {
imx_rngc_irq_unmask(rngc);
/* seed creation */
cmd = readl(rngc->base + RNGC_COMMAND);
writel(cmd | RNGC_CMD_SEED, rngc->base + RNGC_COMMAND);
......@@ -184,13 +191,42 @@ static int imx_rngc_init(struct hwrng *rng)
RNGC_TIMEOUT);
if (!ret) {
imx_rngc_irq_mask_clear(rngc);
return -ETIMEDOUT;
ret = -ETIMEDOUT;
goto err;
}
} while (rngc->err_reg == RNGC_ERROR_STATUS_STAT_ERR);
return rngc->err_reg ? -EIO : 0;
if (rngc->err_reg) {
ret = -EIO;
goto err;
}
/*
* enable automatic seeding, the rngc creates a new seed automatically
* after serving 2^20 random 160-bit words
*/
ctrl = readl(rngc->base + RNGC_CONTROL);
ctrl |= RNGC_CTRL_AUTO_SEED;
writel(ctrl, rngc->base + RNGC_CONTROL);
/*
* if initialisation was successful, we keep the interrupt
* unmasked until imx_rngc_cleanup is called
* we mask the interrupt ourselves if we return an error
*/
return 0;
err:
imx_rngc_irq_mask_clear(rngc);
return ret;
}
static void imx_rngc_cleanup(struct hwrng *rng)
{
struct imx_rngc *rngc = container_of(rng, struct imx_rngc, rng);
imx_rngc_irq_mask_clear(rngc);
}
static int imx_rngc_probe(struct platform_device *pdev)
......@@ -198,6 +234,8 @@ static int imx_rngc_probe(struct platform_device *pdev)
struct imx_rngc *rngc;
int ret;
int irq;
u32 ver_id;
u8 rng_type;
rngc = devm_kzalloc(&pdev->dev, sizeof(*rngc), GFP_KERNEL);
if (!rngc)
......@@ -223,6 +261,17 @@ static int imx_rngc_probe(struct platform_device *pdev)
if (ret)
return ret;
ver_id = readl(rngc->base + RNGC_VER_ID);
rng_type = ver_id >> RNGC_TYPE_SHIFT;
/*
* This driver supports only RNGC and RNGB. (There's a different
* driver for RNGA.)
*/
if (rng_type != RNGC_TYPE_RNGC && rng_type != RNGC_TYPE_RNGB) {
ret = -ENODEV;
goto err;
}
ret = devm_request_irq(&pdev->dev,
irq, imx_rngc_irq, 0, pdev->name, (void *)rngc);
if (ret) {
......@@ -235,6 +284,7 @@ static int imx_rngc_probe(struct platform_device *pdev)
rngc->rng.name = pdev->name;
rngc->rng.init = imx_rngc_init;
rngc->rng.read = imx_rngc_read;
rngc->rng.cleanup = imx_rngc_cleanup;
rngc->dev = &pdev->dev;
platform_set_drvdata(pdev, rngc);
......@@ -244,18 +294,21 @@ static int imx_rngc_probe(struct platform_device *pdev)
if (self_test) {
ret = imx_rngc_self_test(rngc);
if (ret) {
dev_err(rngc->dev, "FSL RNGC self test failed.\n");
dev_err(rngc->dev, "self test failed\n");
goto err;
}
}
ret = hwrng_register(&rngc->rng);
if (ret) {
dev_err(&pdev->dev, "FSL RNGC registering failed (%d)\n", ret);
dev_err(&pdev->dev, "hwrng registration failed\n");
goto err;
}
dev_info(&pdev->dev, "Freescale RNGC registered.\n");
dev_info(&pdev->dev,
"Freescale RNG%c registered (HW revision %d.%02d)\n",
rng_type == RNGC_TYPE_RNGB ? 'B' : 'C',
(ver_id >> RNGC_VER_MAJ_SHIFT) & 0xff, ver_id & 0xff);
return 0;
err:
......
......@@ -18,6 +18,7 @@
#include <linux/workqueue.h>
#include <linux/clk.h>
#include <linux/err.h>
#include <linux/io.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
......
......@@ -233,20 +233,6 @@ config CRYPTO_CRC32_S390
It is available with IBM z13 or later.
config CRYPTO_DEV_MARVELL_CESA
tristate "Marvell's Cryptographic Engine driver"
depends on PLAT_ORION || ARCH_MVEBU
select CRYPTO_LIB_AES
select CRYPTO_LIB_DES
select CRYPTO_SKCIPHER
select CRYPTO_HASH
select SRAM
help
This driver allows you to utilize the Cryptographic Engines and
Security Accelerator (CESA) which can be found on MVEBU and ORION
platforms.
This driver supports CPU offload through DMA transfers.
config CRYPTO_DEV_NIAGARA2
tristate "Niagara2 Stream Processing Unit driver"
select CRYPTO_LIB_DES
......@@ -606,6 +592,7 @@ config CRYPTO_DEV_MXS_DCP
source "drivers/crypto/qat/Kconfig"
source "drivers/crypto/cavium/cpt/Kconfig"
source "drivers/crypto/cavium/nitrox/Kconfig"
source "drivers/crypto/marvell/Kconfig"
config CRYPTO_DEV_CAVIUM_ZIP
tristate "Cavium ZIP driver"
......@@ -685,6 +672,29 @@ choice
endchoice
config CRYPTO_DEV_QCE_SW_MAX_LEN
int "Default maximum request size to use software for AES"
depends on CRYPTO_DEV_QCE && CRYPTO_DEV_QCE_SKCIPHER
default 512
help
This sets the default maximum request size to perform AES requests
using software instead of the crypto engine. It can be changed by
setting the aes_sw_max_len parameter.
Small blocks are processed faster in software than hardware.
Considering the 256-bit ciphers, software is 2-3 times faster than
qce at 256-bytes, 30% faster at 512, and about even at 768-bytes.
With 128-bit keys, the break-even point would be around 1024-bytes.
The default is set a little lower, to 512 bytes, to balance the
cost in CPU usage. The minimum recommended setting is 16-bytes
(1 AES block), since AES-GCM will fail if you set it lower.
Setting this to zero will send all requests to the hardware.
Note that 192-bit keys are not supported by the hardware and are
always processed by the software fallback, and all DES requests
are done by the hardware.
config CRYPTO_DEV_QCOM_RNG
tristate "Qualcomm Random Number Generator Driver"
depends on ARCH_QCOM || COMPILE_TEST
......@@ -731,6 +741,18 @@ config CRYPTO_DEV_ROCKCHIP
This driver interfaces with the hardware crypto accelerator.
Supporting cbc/ecb chainmode, and aes/des/des3_ede cipher mode.
config CRYPTO_DEV_ZYNQMP_AES
tristate "Support for Xilinx ZynqMP AES hw accelerator"
depends on ZYNQMP_FIRMWARE || COMPILE_TEST
select CRYPTO_AES
select CRYPTO_ENGINE
select CRYPTO_AEAD
help
Xilinx ZynqMP has AES-GCM engine used for symmetric key
encryption and decryption. This driver interfaces with AES hw
accelerator. Select this if you want to use the ZynqMP module
for AES algorithms.
config CRYPTO_DEV_MEDIATEK
tristate "MediaTek's EIP97 Cryptographic Engine driver"
depends on (ARM && ARCH_MEDIATEK) || COMPILE_TEST
......
......@@ -18,7 +18,7 @@ obj-$(CONFIG_CRYPTO_DEV_GEODE) += geode-aes.o
obj-$(CONFIG_CRYPTO_DEV_HIFN_795X) += hifn_795x.o
obj-$(CONFIG_CRYPTO_DEV_IMGTEC_HASH) += img-hash.o
obj-$(CONFIG_CRYPTO_DEV_IXP4XX) += ixp4xx_crypto.o
obj-$(CONFIG_CRYPTO_DEV_MARVELL_CESA) += marvell/
obj-$(CONFIG_CRYPTO_DEV_MARVELL) += marvell/
obj-$(CONFIG_CRYPTO_DEV_MEDIATEK) += mediatek/
obj-$(CONFIG_CRYPTO_DEV_MXS_DCP) += mxs-dcp.o
obj-$(CONFIG_CRYPTO_DEV_NIAGARA2) += n2_crypto.o
......@@ -47,5 +47,6 @@ obj-$(CONFIG_CRYPTO_DEV_VMX) += vmx/
obj-$(CONFIG_CRYPTO_DEV_BCM_SPU) += bcm/
obj-$(CONFIG_CRYPTO_DEV_SAFEXCEL) += inside-secure/
obj-$(CONFIG_CRYPTO_DEV_ARTPEC6) += axis/
obj-$(CONFIG_CRYPTO_DEV_ZYNQMP_AES) += xilinx/
obj-y += hisilicon/
obj-$(CONFIG_CRYPTO_DEV_AMLOGIC_GXL) += amlogic/
......@@ -565,10 +565,8 @@ static int sun8i_ce_probe(struct platform_device *pdev)
/* Get Non Secure IRQ */
irq = platform_get_irq(pdev, 0);
if (irq < 0) {
dev_err(ce->dev, "Cannot get CryptoEngine Non-secure IRQ\n");
if (irq < 0)
return irq;
}
ce->reset = devm_reset_control_get(&pdev->dev, NULL);
if (IS_ERR(ce->reset)) {
......
......@@ -214,7 +214,7 @@ struct sun8i_cipher_tfm_ctx {
* this template
* @alg: one of sub struct must be used
* @stat_req: number of request done on this template
* @stat_fb: total of all data len done on this template
* @stat_fb: number of request which has fallbacked
*/
struct sun8i_ce_alg_template {
u32 type;
......
......@@ -186,7 +186,7 @@ struct sun8i_cipher_tfm_ctx {
* this template
* @alg: one of sub struct must be used
* @stat_req: number of request done on this template
* @stat_fb: total of all data len done on this template
* @stat_fb: number of request which has fallbacked
*/
struct sun8i_ss_alg_template {
u32 type;
......
......@@ -176,7 +176,8 @@ static int atmel_i2c_wakeup(struct i2c_client *client)
* device is idle, asleep or during waking up. Don't check for error
* when waking up the device.
*/
i2c_master_send(client, i2c_priv->wake_token, i2c_priv->wake_token_sz);
i2c_transfer_buffer_flags(client, i2c_priv->wake_token,
i2c_priv->wake_token_sz, I2C_M_IGNORE_NAK);
/*
* Wait to wake the device. Typical execution times for ecdh and genkey
......
......@@ -366,88 +366,88 @@ static ssize_t spu_debugfs_read(struct file *filp, char __user *ubuf,
ipriv = filp->private_data;
out_offset = 0;
out_offset += snprintf(buf + out_offset, out_count - out_offset,
out_offset += scnprintf(buf + out_offset, out_count - out_offset,
"Number of SPUs.........%u\n",
ipriv->spu.num_spu);
out_offset += snprintf(buf + out_offset, out_count - out_offset,
out_offset += scnprintf(buf + out_offset, out_count - out_offset,
"Current sessions.......%u\n",
atomic_read(&ipriv->session_count));
out_offset += snprintf(buf + out_offset, out_count - out_offset,
out_offset += scnprintf(buf + out_offset, out_count - out_offset,
"Session count..........%u\n",
atomic_read(&ipriv->stream_count));
out_offset += snprintf(buf + out_offset, out_count - out_offset,
out_offset += scnprintf(buf + out_offset, out_count - out_offset,
"Cipher setkey..........%u\n",
atomic_read(&ipriv->setkey_cnt[SPU_OP_CIPHER]));
out_offset += snprintf(buf + out_offset, out_count - out_offset,
out_offset += scnprintf(buf + out_offset, out_count - out_offset,
"Cipher Ops.............%u\n",
atomic_read(&ipriv->op_counts[SPU_OP_CIPHER]));
for (alg = 0; alg < CIPHER_ALG_LAST; alg++) {
for (mode = 0; mode < CIPHER_MODE_LAST; mode++) {
op_cnt = atomic_read(&ipriv->cipher_cnt[alg][mode]);
if (op_cnt) {
out_offset += snprintf(buf + out_offset,
out_offset += scnprintf(buf + out_offset,
out_count - out_offset,
" %-13s%11u\n",
spu_alg_name(alg, mode), op_cnt);
}
}
}
out_offset += snprintf(buf + out_offset, out_count - out_offset,
out_offset += scnprintf(buf + out_offset, out_count - out_offset,
"Hash Ops...............%u\n",
atomic_read(&ipriv->op_counts[SPU_OP_HASH]));
for (alg = 0; alg < HASH_ALG_LAST; alg++) {
op_cnt = atomic_read(&ipriv->hash_cnt[alg]);
if (op_cnt) {
out_offset += snprintf(buf + out_offset,
out_offset += scnprintf(buf + out_offset,
out_count - out_offset,
" %-13s%11u\n",
hash_alg_name[alg], op_cnt);
}
}
out_offset += snprintf(buf + out_offset, out_count - out_offset,
out_offset += scnprintf(buf + out_offset, out_count - out_offset,
"HMAC setkey............%u\n",
atomic_read(&ipriv->setkey_cnt[SPU_OP_HMAC]));
out_offset += snprintf(buf + out_offset, out_count - out_offset,
out_offset += scnprintf(buf + out_offset, out_count - out_offset,
"HMAC Ops...............%u\n",
atomic_read(&ipriv->op_counts[SPU_OP_HMAC]));
for (alg = 0; alg < HASH_ALG_LAST; alg++) {
op_cnt = atomic_read(&ipriv->hmac_cnt[alg]);
if (op_cnt) {
out_offset += snprintf(buf + out_offset,
out_offset += scnprintf(buf + out_offset,
out_count - out_offset,
" %-13s%11u\n",
hash_alg_name[alg], op_cnt);
}
}
out_offset += snprintf(buf + out_offset, out_count - out_offset,
out_offset += scnprintf(buf + out_offset, out_count - out_offset,
"AEAD setkey............%u\n",
atomic_read(&ipriv->setkey_cnt[SPU_OP_AEAD]));
out_offset += snprintf(buf + out_offset, out_count - out_offset,
out_offset += scnprintf(buf + out_offset, out_count - out_offset,
"AEAD Ops...............%u\n",
atomic_read(&ipriv->op_counts[SPU_OP_AEAD]));
for (alg = 0; alg < AEAD_TYPE_LAST; alg++) {
op_cnt = atomic_read(&ipriv->aead_cnt[alg]);
if (op_cnt) {
out_offset += snprintf(buf + out_offset,
out_offset += scnprintf(buf + out_offset,
out_count - out_offset,
" %-13s%11u\n",
aead_alg_name[alg], op_cnt);
}
}
out_offset += snprintf(buf + out_offset, out_count - out_offset,
out_offset += scnprintf(buf + out_offset, out_count - out_offset,
"Bytes of req data......%llu\n",
(u64)atomic64_read(&ipriv->bytes_out));
out_offset += snprintf(buf + out_offset, out_count - out_offset,
out_offset += scnprintf(buf + out_offset, out_count - out_offset,
"Bytes of resp data.....%llu\n",
(u64)atomic64_read(&ipriv->bytes_in));
out_offset += snprintf(buf + out_offset, out_count - out_offset,
out_offset += scnprintf(buf + out_offset, out_count - out_offset,
"Mailbox full...........%u\n",
atomic_read(&ipriv->mb_no_spc));
out_offset += snprintf(buf + out_offset, out_count - out_offset,
out_offset += scnprintf(buf + out_offset, out_count - out_offset,
"Mailbox send failures..%u\n",
atomic_read(&ipriv->mb_send_fail));
out_offset += snprintf(buf + out_offset, out_count - out_offset,
out_offset += scnprintf(buf + out_offset, out_count - out_offset,
"Check ICV errors.......%u\n",
atomic_read(&ipriv->bad_icv));
if (ipriv->spu.spu_type == SPU_TYPE_SPUM)
......@@ -455,7 +455,7 @@ static ssize_t spu_debugfs_read(struct file *filp, char __user *ubuf,
spu_ofifo_ctrl = ioread32(ipriv->spu.reg_vbase[i] +
SPU_OFIFO_CTRL);
fifo_len = spu_ofifo_ctrl & SPU_FIFO_WATERMARK;
out_offset += snprintf(buf + out_offset,
out_offset += scnprintf(buf + out_offset,
out_count - out_offset,
"SPU %d output FIFO high water.....%u\n",
i, fifo_len);
......
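The hunk above swaps every snprintf() accumulator for scnprintf(). The difference matters on truncation: snprintf() returns the length the output would have had, so out_offset can grow past out_count and the next call computes a buffer pointer out of bounds; scnprintf() returns the bytes actually stored. A minimal sketch of the safe accumulator pattern (function name and values hypothetical):

#include <linux/kernel.h>	/* scnprintf() */

static size_t fill_stats(char *buf, size_t len)
{
	size_t off = 0;

	/*
	 * scnprintf() never returns more than the space actually
	 * available, so off can never exceed len and buf + off stays
	 * inside the buffer even when the output is truncated.
	 */
	off += scnprintf(buf + off, len - off, "Number of SPUs.........%u\n", 4);
	off += scnprintf(buf + off, len - off, "Current sessions.......%u\n", 2);

	return off;	/* bytes actually written */
}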
......@@ -13,6 +13,7 @@ config CRYPTO_DEV_FSL_CAAM
depends on FSL_SOC || ARCH_MXC || ARCH_LAYERSCAPE
select SOC_BUS
select CRYPTO_DEV_FSL_CAAM_COMMON
imply FSL_MC_BUS
help
Enables the driver module for Freescale's Cryptographic Accelerator
and Assurance Module (CAAM), also known as the SEC version 4 (SEC4).
......@@ -33,6 +34,7 @@ config CRYPTO_DEV_FSL_CAAM_DEBUG
menuconfig CRYPTO_DEV_FSL_CAAM_JR
tristate "Freescale CAAM Job Ring driver backend"
select CRYPTO_ENGINE
default y
help
Enables the driver module for Job Rings which are part of
......
......@@ -1379,6 +1379,9 @@ void cnstr_shdsc_skcipher_encap(u32 * const desc, struct alginfo *cdata,
const u32 ctx1_iv_off)
{
u32 *key_jump_cmd;
u32 options = cdata->algtype | OP_ALG_AS_INIT | OP_ALG_ENCRYPT;
bool is_chacha20 = ((cdata->algtype & OP_ALG_ALGSEL_MASK) ==
OP_ALG_ALGSEL_CHACHA20);
init_sh_desc(desc, HDR_SHARE_SERIAL | HDR_SAVECTX);
/* Skip if already shared */
......@@ -1417,14 +1420,15 @@ void cnstr_shdsc_skcipher_encap(u32 * const desc, struct alginfo *cdata,
LDST_OFFSET_SHIFT));
/* Load operation */
append_operation(desc, cdata->algtype | OP_ALG_AS_INIT |
OP_ALG_ENCRYPT);
if (is_chacha20)
options |= OP_ALG_AS_FINALIZE;
append_operation(desc, options);
/* Perform operation */
skcipher_append_src_dst(desc);
/* Store IV */
if (ivsize)
if (!is_chacha20 && ivsize)
append_seq_store(desc, ivsize, LDST_SRCDST_BYTE_CONTEXT |
LDST_CLASS_1_CCB | (ctx1_iv_off <<
LDST_OFFSET_SHIFT));
......@@ -1451,6 +1455,8 @@ void cnstr_shdsc_skcipher_decap(u32 * const desc, struct alginfo *cdata,
const u32 ctx1_iv_off)
{
u32 *key_jump_cmd;
bool is_chacha20 = ((cdata->algtype & OP_ALG_ALGSEL_MASK) ==
OP_ALG_ALGSEL_CHACHA20);
init_sh_desc(desc, HDR_SHARE_SERIAL | HDR_SAVECTX);
/* Skip if already shared */
......@@ -1499,7 +1505,7 @@ void cnstr_shdsc_skcipher_decap(u32 * const desc, struct alginfo *cdata,
skcipher_append_src_dst(desc);
/* Store IV */
if (ivsize)
if (!is_chacha20 && ivsize)
append_seq_store(desc, ivsize, LDST_SRCDST_BYTE_CONTEXT |
LDST_CLASS_1_CCB | (ctx1_iv_off <<
LDST_OFFSET_SHIFT));
......@@ -1518,7 +1524,13 @@ EXPORT_SYMBOL(cnstr_shdsc_skcipher_decap);
*/
void cnstr_shdsc_xts_skcipher_encap(u32 * const desc, struct alginfo *cdata)
{
__be64 sector_size = cpu_to_be64(512);
/*
* Set the sector size to a large value, practically disabling
* sector-size segmentation in the XTS implementation. We cannot
* take full advantage of this HW feature with the existing
* crypto API / dm-crypt SW architecture.
*/
__be64 sector_size = cpu_to_be64(BIT(15));
u32 *key_jump_cmd;
init_sh_desc(desc, HDR_SHARE_SERIAL | HDR_SAVECTX);
......@@ -1571,7 +1583,13 @@ EXPORT_SYMBOL(cnstr_shdsc_xts_skcipher_encap);
*/
void cnstr_shdsc_xts_skcipher_decap(u32 * const desc, struct alginfo *cdata)
{
__be64 sector_size = cpu_to_be64(512);
/*
* Set the sector size to a large value, practically disabling
* sector-size segmentation in the XTS implementation. We cannot
* take full advantage of this HW feature with the existing
* crypto API / dm-crypt SW architecture.
*/
__be64 sector_size = cpu_to_be64(BIT(15));
u32 *key_jump_cmd;
init_sh_desc(desc, HDR_SHARE_SERIAL | HDR_SAVECTX);
......
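For context on the BIT(15) value above: the CAAM XTS descriptor re-derives the tweak at every sector boundary, while the kernel crypto API and dm-crypt expect one tweak per request. Raising the sector size to 32768 bytes means any request up to that size is processed with a single tweak. A hedged arithmetic sketch (helper name hypothetical):

#include <linux/bits.h>
#include <linux/kernel.h>	/* DIV_ROUND_UP() */

/* How many times the HW would step the XTS tweak within one request. */
static unsigned int xts_tweak_updates(unsigned int req_len,
				      unsigned int sector_sz)
{
	return DIV_ROUND_UP(req_len, sector_sz);
}

/*
 * xts_tweak_updates(4096, 512)      -> 8 (per-sector segmentation)
 * xts_tweak_updates(4096, BIT(15))  -> 1 (segmentation effectively off)
 */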
......@@ -783,7 +783,7 @@ struct aead_edesc {
unsigned int assoclen;
dma_addr_t assoclen_dma;
struct caam_drv_req drv_req;
struct qm_sg_entry sgt[0];
struct qm_sg_entry sgt[];
};
/*
......@@ -803,7 +803,7 @@ struct skcipher_edesc {
int qm_sg_bytes;
dma_addr_t qm_sg_dma;
struct caam_drv_req drv_req;
struct qm_sg_entry sgt[0];
struct qm_sg_entry sgt[];
};
static struct caam_drv_ctx *get_drv_ctx(struct caam_ctx *ctx,
......
......@@ -114,7 +114,7 @@ struct aead_edesc {
dma_addr_t qm_sg_dma;
unsigned int assoclen;
dma_addr_t assoclen_dma;
struct dpaa2_sg_entry sgt[0];
struct dpaa2_sg_entry sgt[];
};
/*
......@@ -132,7 +132,7 @@ struct skcipher_edesc {
dma_addr_t iv_dma;
int qm_sg_bytes;
dma_addr_t qm_sg_dma;
struct dpaa2_sg_entry sgt[0];
struct dpaa2_sg_entry sgt[];
};
/*
......@@ -146,7 +146,7 @@ struct ahash_edesc {
dma_addr_t qm_sg_dma;
int src_nents;
int qm_sg_bytes;
struct dpaa2_sg_entry sgt[0];
struct dpaa2_sg_entry sgt[];
};
/**
......
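The sgt[0] to sgt[] conversions above (and the ucode one further down) move from the old zero-length-array idiom to C99 flexible array members, which compilers and bounds checkers understand. Allocation pairs naturally with struct_size(); a self-contained sketch with a placeholder entry type:

#include <linux/types.h>
#include <linux/slab.h>
#include <linux/overflow.h>	/* struct_size() */

struct demo_sg_entry {		/* placeholder for qm_sg_entry etc. */
	u64 addr;
	u32 len;
};

struct demo_edesc {
	int src_nents;
	struct demo_sg_entry sgt[];	/* flexible array member */
};

static struct demo_edesc *demo_edesc_alloc(int nents, gfp_t flags)
{
	struct demo_edesc *edesc;

	/*
	 * struct_size() computes sizeof(*edesc) + nents * sizeof(sgt[0])
	 * with overflow checking, which the sgt[0] declarations defeated.
	 */
	edesc = kzalloc(struct_size(edesc, sgt, nents), flags);
	if (edesc)
		edesc->src_nents = nents;
	return edesc;
}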
......@@ -117,76 +117,69 @@ static void rsa_priv_f3_unmap(struct device *dev, struct rsa_edesc *edesc,
static void rsa_pub_done(struct device *dev, u32 *desc, u32 err, void *context)
{
struct akcipher_request *req = context;
struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req);
struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);
struct rsa_edesc *edesc;
int ecode = 0;
if (err)
ecode = caam_jr_strstatus(dev, err);
edesc = container_of(desc, struct rsa_edesc, hw_desc[0]);
edesc = req_ctx->edesc;
rsa_pub_unmap(dev, edesc, req);
rsa_io_unmap(dev, edesc, req);
kfree(edesc);
akcipher_request_complete(req, ecode);
}
static void rsa_priv_f1_done(struct device *dev, u32 *desc, u32 err,
void *context)
{
struct akcipher_request *req = context;
struct rsa_edesc *edesc;
int ecode = 0;
if (err)
ecode = caam_jr_strstatus(dev, err);
edesc = container_of(desc, struct rsa_edesc, hw_desc[0]);
rsa_priv_f1_unmap(dev, edesc, req);
rsa_io_unmap(dev, edesc, req);
kfree(edesc);
akcipher_request_complete(req, ecode);
}
static void rsa_priv_f2_done(struct device *dev, u32 *desc, u32 err,
void *context)
{
struct akcipher_request *req = context;
struct rsa_edesc *edesc;
int ecode = 0;
if (err)
ecode = caam_jr_strstatus(dev, err);
edesc = container_of(desc, struct rsa_edesc, hw_desc[0]);
rsa_priv_f2_unmap(dev, edesc, req);
rsa_io_unmap(dev, edesc, req);
kfree(edesc);
akcipher_request_complete(req, ecode);
/*
* If no backlog flag, the completion of the request is done
* by CAAM, not crypto engine.
*/
if (!edesc->bklog)
akcipher_request_complete(req, ecode);
else
crypto_finalize_akcipher_request(jrp->engine, req, ecode);
}
static void rsa_priv_f3_done(struct device *dev, u32 *desc, u32 err,
void *context)
static void rsa_priv_f_done(struct device *dev, u32 *desc, u32 err,
void *context)
{
struct akcipher_request *req = context;
struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);
struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
struct caam_rsa_key *key = &ctx->key;
struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req);
struct rsa_edesc *edesc;
int ecode = 0;
if (err)
ecode = caam_jr_strstatus(dev, err);
edesc = container_of(desc, struct rsa_edesc, hw_desc[0]);
edesc = req_ctx->edesc;
switch (key->priv_form) {
case FORM1:
rsa_priv_f1_unmap(dev, edesc, req);
break;
case FORM2:
rsa_priv_f2_unmap(dev, edesc, req);
break;
case FORM3:
rsa_priv_f3_unmap(dev, edesc, req);
}
rsa_priv_f3_unmap(dev, edesc, req);
rsa_io_unmap(dev, edesc, req);
kfree(edesc);
akcipher_request_complete(req, ecode);
/*
* If no backlog flag, the completion of the request is done
* by CAAM, not crypto engine.
*/
if (!edesc->bklog)
akcipher_request_complete(req, ecode);
else
crypto_finalize_akcipher_request(jrp->engine, req, ecode);
}
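The !edesc->bklog test above lets one callback serve both submission paths: requests enqueued directly to the job ring are completed immediately, while requests routed through crypto-engine must be finalized via the engine so it can pump its queue. A condensed sketch of that split (driver-internal types as above):

#include <crypto/engine.h>
#include <crypto/internal/akcipher.h>

static void demo_complete(struct caam_drv_private_jr *jrp,
			  struct akcipher_request *req,
			  struct rsa_edesc *edesc, int ecode)
{
	if (!edesc->bklog)
		akcipher_request_complete(req, ecode);	/* direct JR path */
	else
		crypto_finalize_akcipher_request(jrp->engine, req, ecode);
}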
/**
......@@ -334,6 +327,8 @@ static struct rsa_edesc *rsa_edesc_alloc(struct akcipher_request *req,
edesc->src_nents = src_nents;
edesc->dst_nents = dst_nents;
req_ctx->edesc = edesc;
if (!sec4_sg_bytes)
return edesc;
......@@ -364,6 +359,33 @@ static struct rsa_edesc *rsa_edesc_alloc(struct akcipher_request *req,
return ERR_PTR(-ENOMEM);
}
static int akcipher_do_one_req(struct crypto_engine *engine, void *areq)
{
struct akcipher_request *req = container_of(areq,
struct akcipher_request,
base);
struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req);
struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
struct device *jrdev = ctx->dev;
u32 *desc = req_ctx->edesc->hw_desc;
int ret;
req_ctx->edesc->bklog = true;
ret = caam_jr_enqueue(jrdev, desc, req_ctx->akcipher_op_done, req);
if (ret != -EINPROGRESS) {
rsa_pub_unmap(jrdev, req_ctx->edesc, req);
rsa_io_unmap(jrdev, req_ctx->edesc, req);
kfree(req_ctx->edesc);
} else {
ret = 0;
}
return ret;
}
static int set_rsa_pub_pdb(struct akcipher_request *req,
struct rsa_edesc *edesc)
{
......@@ -627,6 +649,53 @@ static int set_rsa_priv_f3_pdb(struct akcipher_request *req,
return -ENOMEM;
}
static int akcipher_enqueue_req(struct device *jrdev,
void (*cbk)(struct device *jrdev, u32 *desc,
u32 err, void *context),
struct akcipher_request *req)
{
struct caam_drv_private_jr *jrpriv = dev_get_drvdata(jrdev);
struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
struct caam_rsa_key *key = &ctx->key;
struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req);
struct rsa_edesc *edesc = req_ctx->edesc;
u32 *desc = edesc->hw_desc;
int ret;
req_ctx->akcipher_op_done = cbk;
/*
* Only backlog requests are sent to the crypto engine, since the
* others can be handled by CAAM if it has room, especially since the
* JR has up to 1024 entries (more than the 10 entries of the
* crypto engine).
*/
if (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)
ret = crypto_transfer_akcipher_request_to_engine(jrpriv->engine,
req);
else
ret = caam_jr_enqueue(jrdev, desc, cbk, req);
if ((ret != -EINPROGRESS) && (ret != -EBUSY)) {
switch (key->priv_form) {
case FORM1:
rsa_priv_f1_unmap(jrdev, edesc, req);
break;
case FORM2:
rsa_priv_f2_unmap(jrdev, edesc, req);
break;
case FORM3:
rsa_priv_f3_unmap(jrdev, edesc, req);
break;
default:
rsa_pub_unmap(jrdev, edesc, req);
}
rsa_io_unmap(jrdev, edesc, req);
kfree(edesc);
}
return ret;
}
static int caam_rsa_enc(struct akcipher_request *req)
{
struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
......@@ -658,11 +727,7 @@ static int caam_rsa_enc(struct akcipher_request *req)
/* Initialize Job Descriptor */
init_rsa_pub_desc(edesc->hw_desc, &edesc->pdb.pub);
ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_pub_done, req);
if (!ret)
return -EINPROGRESS;
rsa_pub_unmap(jrdev, edesc, req);
return akcipher_enqueue_req(jrdev, rsa_pub_done, req);
init_fail:
rsa_io_unmap(jrdev, edesc, req);
......@@ -691,11 +756,7 @@ static int caam_rsa_dec_priv_f1(struct akcipher_request *req)
/* Initialize Job Descriptor */
init_rsa_priv_f1_desc(edesc->hw_desc, &edesc->pdb.priv_f1);
ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f1_done, req);
if (!ret)
return -EINPROGRESS;
rsa_priv_f1_unmap(jrdev, edesc, req);
return akcipher_enqueue_req(jrdev, rsa_priv_f_done, req);
init_fail:
rsa_io_unmap(jrdev, edesc, req);
......@@ -724,11 +785,7 @@ static int caam_rsa_dec_priv_f2(struct akcipher_request *req)
/* Initialize Job Descriptor */
init_rsa_priv_f2_desc(edesc->hw_desc, &edesc->pdb.priv_f2);
ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f2_done, req);
if (!ret)
return -EINPROGRESS;
rsa_priv_f2_unmap(jrdev, edesc, req);
return akcipher_enqueue_req(jrdev, rsa_priv_f_done, req);
init_fail:
rsa_io_unmap(jrdev, edesc, req);
......@@ -757,11 +814,7 @@ static int caam_rsa_dec_priv_f3(struct akcipher_request *req)
/* Initialize Job Descriptor */
init_rsa_priv_f3_desc(edesc->hw_desc, &edesc->pdb.priv_f3);
ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f3_done, req);
if (!ret)
return -EINPROGRESS;
rsa_priv_f3_unmap(jrdev, edesc, req);
return akcipher_enqueue_req(jrdev, rsa_priv_f_done, req);
init_fail:
rsa_io_unmap(jrdev, edesc, req);
......@@ -1054,6 +1107,8 @@ static int caam_rsa_init_tfm(struct crypto_akcipher *tfm)
return -ENOMEM;
}
ctx->enginectx.op.do_one_request = akcipher_do_one_req;
return 0;
}
......
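One detail the hunks above rely on: the crypto engine finds its per-tfm ops via crypto_tfm_ctx(), so struct crypto_engine_ctx must be the first member of the transform context, which is exactly how the caampkc.h hunk below lays out caam_rsa_ctx. A hypothetical init hook making the wiring explicit:

#include <crypto/engine.h>
#include <crypto/internal/akcipher.h>

static int demo_init_tfm(struct crypto_akcipher *tfm)
{
	struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);

	/* the engine calls this back with each queued request */
	ctx->enginectx.op.do_one_request = akcipher_do_one_req;
	return 0;
}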
......@@ -12,6 +12,7 @@
#define _PKC_DESC_H_
#include "compat.h"
#include "pdb.h"
#include <crypto/engine.h>
/**
* caam_priv_key_form - CAAM RSA private key representation
......@@ -87,11 +88,13 @@ struct caam_rsa_key {
/**
* caam_rsa_ctx - per session context.
* @enginectx : crypto engine context
* @key : RSA key in DMA zone
* @dev : device structure
* @padding_dma : dma address of padding, for adding it to the input
*/
struct caam_rsa_ctx {
struct crypto_engine_ctx enginectx;
struct caam_rsa_key key;
struct device *dev;
dma_addr_t padding_dma;
......@@ -103,11 +106,16 @@ struct caam_rsa_ctx {
* @src : input scatterlist (stripped of leading zeros)
* @fixup_src : input scatterlist (that might be stripped of leading zeros)
* @fixup_src_len : length of the fixup_src input scatterlist
* @edesc : s/w-extended rsa descriptor
* @akcipher_op_done : callback used when operation is done
*/
struct caam_rsa_req_ctx {
struct scatterlist src[2];
struct scatterlist *fixup_src;
unsigned int fixup_src_len;
struct rsa_edesc *edesc;
void (*akcipher_op_done)(struct device *jrdev, u32 *desc, u32 err,
void *context);
};
/**
......@@ -117,6 +125,7 @@ struct caam_rsa_req_ctx {
* @mapped_src_nents: number of segments in input h/w link table
* @mapped_dst_nents: number of segments in output h/w link table
* @sec4_sg_bytes : length of h/w link table
* @bklog : stored to determine if the request needs backlog
* @sec4_sg_dma : dma address of h/w link table
* @sec4_sg : pointer to h/w link table
* @pdb : specific RSA Protocol Data Block (PDB)
......@@ -128,6 +137,7 @@ struct rsa_edesc {
int mapped_src_nents;
int mapped_dst_nents;
int sec4_sg_bytes;
bool bklog;
dma_addr_t sec4_sg_dma;
struct sec4_sg_entry *sec4_sg;
union {
......
......@@ -10,6 +10,7 @@
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/sys_soc.h>
#include <linux/fsl/mc.h>
#include "compat.h"
#include "regs.h"
......@@ -36,7 +37,8 @@ static void build_instantiation_desc(u32 *desc, int handle, int do_sk)
init_job_desc(desc, 0);
op_flags = OP_TYPE_CLASS1_ALG | OP_ALG_ALGSEL_RNG |
(handle << OP_ALG_AAI_SHIFT) | OP_ALG_AS_INIT;
(handle << OP_ALG_AAI_SHIFT) | OP_ALG_AS_INIT |
OP_ALG_PR_ON;
/* INIT RNG in non-test mode */
append_operation(desc, op_flags);
......@@ -196,7 +198,7 @@ static int deinstantiate_rng(struct device *ctrldev, int state_handle_mask)
u32 *desc, status;
int sh_idx, ret = 0;
desc = kmalloc(CAAM_CMD_SZ * 3, GFP_KERNEL);
desc = kmalloc(CAAM_CMD_SZ * 3, GFP_KERNEL | GFP_DMA);
if (!desc)
return -ENOMEM;
......@@ -273,17 +275,30 @@ static int instantiate_rng(struct device *ctrldev, int state_handle_mask,
int ret = 0, sh_idx;
ctrl = (struct caam_ctrl __iomem *)ctrlpriv->ctrl;
desc = kmalloc(CAAM_CMD_SZ * 7, GFP_KERNEL);
desc = kmalloc(CAAM_CMD_SZ * 7, GFP_KERNEL | GFP_DMA);
if (!desc)
return -ENOMEM;
for (sh_idx = 0; sh_idx < RNG4_MAX_HANDLES; sh_idx++) {
const u32 rdsta_if = RDSTA_IF0 << sh_idx;
const u32 rdsta_pr = RDSTA_PR0 << sh_idx;
const u32 rdsta_mask = rdsta_if | rdsta_pr;
/*
* If the corresponding bit is set, this state handle
* was initialized by somebody else, so it's left alone.
*/
if ((1 << sh_idx) & state_handle_mask)
continue;
if (rdsta_if & state_handle_mask) {
if (rdsta_pr & state_handle_mask)
continue;
dev_info(ctrldev,
"RNG4 SH%d was previously instantiated without prediction resistance. Tearing it down\n",
sh_idx);
ret = deinstantiate_rng(ctrldev, rdsta_if);
if (ret)
break;
}
/* Create the descriptor for instantiating RNG State Handle */
build_instantiation_desc(desc, sh_idx, gen_sk);
......@@ -303,9 +318,9 @@ static int instantiate_rng(struct device *ctrldev, int state_handle_mask,
if (ret)
break;
rdsta_val = rd_reg32(&ctrl->r4tst[0].rdsta) & RDSTA_IFMASK;
rdsta_val = rd_reg32(&ctrl->r4tst[0].rdsta) & RDSTA_MASK;
if ((status && status != JRSTA_SSRC_JUMP_HALT_CC) ||
!(rdsta_val & (1 << sh_idx))) {
(rdsta_val & rdsta_mask) != rdsta_mask) {
ret = -EAGAIN;
break;
}
......@@ -341,8 +356,12 @@ static void kick_trng(struct platform_device *pdev, int ent_delay)
ctrl = (struct caam_ctrl __iomem *)ctrlpriv->ctrl;
r4tst = &ctrl->r4tst[0];
/* put RNG4 into program mode */
clrsetbits_32(&r4tst->rtmctl, 0, RTMCTL_PRGM);
/*
* Setting both RTMCTL:PRGM and RTMCTL:TRNG_ACC causes TRNG to
* properly invalidate the entropy in the entropy register and
* force re-generation.
*/
clrsetbits_32(&r4tst->rtmctl, 0, RTMCTL_PRGM | RTMCTL_ACC);
/*
* Performance-wise, it does not make sense to
......@@ -372,7 +391,8 @@ static void kick_trng(struct platform_device *pdev, int ent_delay)
* select raw sampling in both entropy shifter
* and statistical checker; put RNG4 into run mode
*/
clrsetbits_32(&r4tst->rtmctl, RTMCTL_PRGM, RTMCTL_SAMP_MODE_RAW_ES_SC);
clrsetbits_32(&r4tst->rtmctl, RTMCTL_PRGM | RTMCTL_ACC,
RTMCTL_SAMP_MODE_RAW_ES_SC);
}
static int caam_get_era_from_hw(struct caam_ctrl __iomem *ctrl)
......@@ -559,6 +579,26 @@ static void caam_remove_debugfs(void *root)
}
#endif
#ifdef CONFIG_FSL_MC_BUS
static bool check_version(struct fsl_mc_version *mc_version, u32 major,
u32 minor, u32 revision)
{
if (mc_version->major > major)
return true;
if (mc_version->major == major) {
if (mc_version->minor > minor)
return true;
if (mc_version->minor == minor &&
mc_version->revision > revision)
return true;
}
return false;
}
#endif
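check_version() gates RNG ownership: only Management Complex firmware 10.20.0 or newer initializes the RNG itself, older firmware leaves it to the kernel. A hedged sketch of the call-site pattern used in the probe hunk below:

#include <linux/fsl/mc.h>

/* true when MC f/w is new enough to own RNG initialization; the
 * probe path defers instead when MC has not come up yet. */
static bool demo_mc_owns_rng(void)
{
	struct fsl_mc_version *mc_version = fsl_mc_get_version();

	return mc_version && check_version(mc_version, 10, 20, 0);
}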
/* Probe routine for CAAM top (controller) level */
static int caam_probe(struct platform_device *pdev)
{
......@@ -577,6 +617,7 @@ static int caam_probe(struct platform_device *pdev)
u8 rng_vid;
int pg_size;
int BLOCK_OFFSET = 0;
bool pr_support = false;
ctrlpriv = devm_kzalloc(&pdev->dev, sizeof(*ctrlpriv), GFP_KERNEL);
if (!ctrlpriv)
......@@ -662,6 +703,21 @@ static int caam_probe(struct platform_device *pdev)
/* Get the IRQ of the controller (for security violations only) */
ctrlpriv->secvio_irq = irq_of_parse_and_map(nprop, 0);
np = of_find_compatible_node(NULL, NULL, "fsl,qoriq-mc");
ctrlpriv->mc_en = !!np;
of_node_put(np);
#ifdef CONFIG_FSL_MC_BUS
if (ctrlpriv->mc_en) {
struct fsl_mc_version *mc_version;
mc_version = fsl_mc_get_version();
if (mc_version)
pr_support = check_version(mc_version, 10, 20, 0);
else
return -EPROBE_DEFER;
}
#endif
/*
* Enable DECO watchdogs and, if this is a PHYS_ADDR_T_64BIT kernel,
......@@ -669,10 +725,6 @@ static int caam_probe(struct platform_device *pdev)
* In case of SoCs with Management Complex, MC f/w performs
* the configuration.
*/
np = of_find_compatible_node(NULL, NULL, "fsl,qoriq-mc");
ctrlpriv->mc_en = !!np;
of_node_put(np);
if (!ctrlpriv->mc_en)
clrsetbits_32(&ctrl->mcr, MCFGR_AWCACHE_MASK,
MCFGR_AWCACHE_CACH | MCFGR_AWCACHE_BUFF |
......@@ -779,7 +831,7 @@ static int caam_probe(struct platform_device *pdev)
* already instantiated, do RNG instantiation
* In case of SoCs with Management Complex, RNG is managed by MC f/w.
*/
if (!ctrlpriv->mc_en && rng_vid >= 4) {
if (!(ctrlpriv->mc_en && pr_support) && rng_vid >= 4) {
ctrlpriv->rng4_sh_init =
rd_reg32(&ctrl->r4tst[0].rdsta);
/*
......@@ -789,11 +841,11 @@ static int caam_probe(struct platform_device *pdev)
* to regenerate these keys before the next POR.
*/
gen_sk = ctrlpriv->rng4_sh_init & RDSTA_SKVN ? 0 : 1;
ctrlpriv->rng4_sh_init &= RDSTA_IFMASK;
ctrlpriv->rng4_sh_init &= RDSTA_MASK;
do {
int inst_handles =
rd_reg32(&ctrl->r4tst[0].rdsta) &
RDSTA_IFMASK;
RDSTA_MASK;
/*
* If either SH were instantiated by somebody else
* (e.g. u-boot) then it is assumed that the entropy
......@@ -833,7 +885,7 @@ static int caam_probe(struct platform_device *pdev)
* Set handles init'ed by this module as the complement of the
* already initialized ones
*/
ctrlpriv->rng4_sh_init = ~ctrlpriv->rng4_sh_init & RDSTA_IFMASK;
ctrlpriv->rng4_sh_init = ~ctrlpriv->rng4_sh_init & RDSTA_MASK;
/* Enable RDB bit so that RNG works faster */
clrsetbits_32(&ctrl->scfgr, 0, SCFGR_RDBENABLE);
......
......@@ -1254,6 +1254,8 @@
#define OP_ALG_ICV_OFF (0 << OP_ALG_ICV_SHIFT)
#define OP_ALG_ICV_ON (1 << OP_ALG_ICV_SHIFT)
#define OP_ALG_PR_ON BIT(1)
#define OP_ALG_DIR_SHIFT 0
#define OP_ALG_DIR_MASK 1
#define OP_ALG_DECRYPT 0
......
......@@ -11,6 +11,7 @@
#define INTERN_H
#include "ctrl.h"
#include <crypto/engine.h>
/* Currently comes from Kconfig param as a ^2 (driver-required) */
#define JOBR_DEPTH (1 << CONFIG_CRYPTO_DEV_FSL_CAAM_RINGSIZE)
......@@ -46,6 +47,7 @@ struct caam_drv_private_jr {
struct caam_job_ring __iomem *rregs; /* JobR's register space */
struct tasklet_struct irqtask;
int irq; /* One per queue */
bool hwrng;
/* Number of scatterlist crypt transforms active on the JobR */
atomic_t tfm_count ____cacheline_aligned;
......@@ -60,6 +62,7 @@ struct caam_drv_private_jr {
int out_ring_read_index; /* Output index "tail" */
int tail; /* entinfo (s/w ring) tail index */
void *outring; /* Base of output ring, DMA-safe */
struct crypto_engine *engine;
};
/*
......@@ -161,7 +164,7 @@ static inline void caam_pkc_exit(void)
#ifdef CONFIG_CRYPTO_DEV_FSL_CAAM_RNG_API
int caam_rng_init(struct device *dev);
void caam_rng_exit(void);
void caam_rng_exit(struct device *dev);
#else
......@@ -170,9 +173,7 @@ static inline int caam_rng_init(struct device *dev)
return 0;
}
static inline void caam_rng_exit(void)
{
}
static inline void caam_rng_exit(struct device *dev) {}
#endif /* CONFIG_CRYPTO_DEV_FSL_CAAM_RNG_API */
......
......@@ -27,7 +27,8 @@ static struct jr_driver_data driver_data;
static DEFINE_MUTEX(algs_lock);
static unsigned int active_devs;
static void register_algs(struct device *dev)
static void register_algs(struct caam_drv_private_jr *jrpriv,
struct device *dev)
{
mutex_lock(&algs_lock);
......@@ -37,7 +38,7 @@ static void register_algs(struct device *dev)
caam_algapi_init(dev);
caam_algapi_hash_init(dev);
caam_pkc_init(dev);
caam_rng_init(dev);
jrpriv->hwrng = !caam_rng_init(dev);
caam_qi_algapi_init(dev);
algs_unlock:
......@@ -53,7 +54,6 @@ static void unregister_algs(void)
caam_qi_algapi_exit();
caam_rng_exit();
caam_pkc_exit();
caam_algapi_hash_exit();
caam_algapi_exit();
......@@ -62,6 +62,15 @@ static void unregister_algs(void)
mutex_unlock(&algs_lock);
}
static void caam_jr_crypto_engine_exit(void *data)
{
struct device *jrdev = data;
struct caam_drv_private_jr *jrpriv = dev_get_drvdata(jrdev);
/* Free the resources of crypto-engine */
crypto_engine_exit(jrpriv->engine);
}
static int caam_reset_hw_jr(struct device *dev)
{
struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);
......@@ -126,6 +135,9 @@ static int caam_jr_remove(struct platform_device *pdev)
jrdev = &pdev->dev;
jrpriv = dev_get_drvdata(jrdev);
if (jrpriv->hwrng)
caam_rng_exit(jrdev->parent);
/*
* Return EBUSY if job ring already allocated.
*/
......@@ -324,8 +336,8 @@ void caam_jr_free(struct device *rdev)
EXPORT_SYMBOL(caam_jr_free);
/**
* caam_jr_enqueue() - Enqueue a job descriptor head. Returns 0 if OK,
* -EBUSY if the queue is full, -EIO if it cannot map the caller's
* caam_jr_enqueue() - Enqueue a job descriptor head. Returns -EINPROGRESS
* if OK, -ENOSPC if the queue is full, -EIO if it cannot map the caller's
* descriptor.
* @dev: device of the job ring to be used. This device should have
* been assigned prior by caam_jr_register().
......@@ -377,7 +389,7 @@ int caam_jr_enqueue(struct device *dev, u32 *desc,
CIRC_SPACE(head, tail, JOBR_DEPTH) <= 0) {
spin_unlock_bh(&jrp->inplock);
dma_unmap_single(dev, desc_dma, desc_size, DMA_TO_DEVICE);
return -EBUSY;
return -ENOSPC;
}
head_entry = &jrp->entinfo[head];
......@@ -414,7 +426,7 @@ int caam_jr_enqueue(struct device *dev, u32 *desc,
spin_unlock_bh(&jrp->inplock);
return 0;
return -EINPROGRESS;
}
EXPORT_SYMBOL(caam_jr_enqueue);
......@@ -505,7 +517,7 @@ static int caam_jr_probe(struct platform_device *pdev)
int error;
jrdev = &pdev->dev;
jrpriv = devm_kmalloc(jrdev, sizeof(*jrpriv), GFP_KERNEL);
jrpriv = devm_kzalloc(jrdev, sizeof(*jrpriv), GFP_KERNEL);
if (!jrpriv)
return -ENOMEM;
......@@ -538,6 +550,25 @@ static int caam_jr_probe(struct platform_device *pdev)
return error;
}
/* Initialize crypto engine */
jrpriv->engine = crypto_engine_alloc_init(jrdev, false);
if (!jrpriv->engine) {
dev_err(jrdev, "Could not init crypto-engine\n");
return -ENOMEM;
}
error = devm_add_action_or_reset(jrdev, caam_jr_crypto_engine_exit,
jrdev);
if (error)
return error;
/* Start crypto engine */
error = crypto_engine_start(jrpriv->engine);
if (error) {
dev_err(jrdev, "Could not start crypto-engine\n");
return error;
}
/* Identify the interrupt */
jrpriv->irq = irq_of_parse_and_map(nprop, 0);
if (!jrpriv->irq) {
......@@ -562,7 +593,7 @@ static int caam_jr_probe(struct platform_device *pdev)
atomic_set(&jrpriv->tfm_count, 0);
register_algs(jrdev->parent);
register_algs(jrpriv, jrdev->parent);
return 0;
}
......
......@@ -108,7 +108,7 @@ int gen_split_key(struct device *jrdev, u8 *key_out,
init_completion(&result.completion);
ret = caam_jr_enqueue(jrdev, desc, split_key_done, &result);
if (!ret) {
if (ret == -EINPROGRESS) {
/* in progress */
wait_for_completion(&result.completion);
ret = result.err;
......
......@@ -4,7 +4,7 @@
* Queue Interface backend functionality
*
* Copyright 2013-2016 Freescale Semiconductor, Inc.
* Copyright 2016-2017, 2019 NXP
* Copyright 2016-2017, 2019-2020 NXP
*/
#include <linux/cpumask.h>
......@@ -124,8 +124,10 @@ int caam_qi_enqueue(struct device *qidev, struct caam_drv_req *req)
do {
ret = qman_enqueue(req->drv_ctx->req_fq, &fd);
if (likely(!ret))
if (likely(!ret)) {
refcount_inc(&req->drv_ctx->refcnt);
return 0;
}
if (ret != -EBUSY)
break;
......@@ -148,11 +150,6 @@ static void caam_fq_ern_cb(struct qman_portal *qm, struct qman_fq *fq,
fd = &msg->ern.fd;
if (qm_fd_get_format(fd) != qm_fd_compound) {
dev_err(qidev, "Non-compound FD from CAAM\n");
return;
}
drv_req = caam_iova_to_virt(priv->domain, qm_fd_addr_get64(fd));
if (!drv_req) {
dev_err(qidev,
......@@ -160,6 +157,13 @@ static void caam_fq_ern_cb(struct qman_portal *qm, struct qman_fq *fq,
return;
}
refcount_dec(&drv_req->drv_ctx->refcnt);
if (qm_fd_get_format(fd) != qm_fd_compound) {
dev_err(qidev, "Non-compound FD from CAAM\n");
return;
}
dma_unmap_single(drv_req->drv_ctx->qidev, qm_fd_addr(fd),
sizeof(drv_req->fd_sgt), DMA_BIDIRECTIONAL);
......@@ -287,9 +291,10 @@ static int kill_fq(struct device *qidev, struct qman_fq *fq)
return ret;
}
static int empty_caam_fq(struct qman_fq *fq)
static int empty_caam_fq(struct qman_fq *fq, struct caam_drv_ctx *drv_ctx)
{
int ret;
int retries = 10;
struct qm_mcr_queryfq_np np;
/* Wait till the older CAAM FQ get empty */
......@@ -304,11 +309,18 @@ static int empty_caam_fq(struct qman_fq *fq)
msleep(20);
} while (1);
/*
* Give extra time for pending jobs from this FQ in holding tanks
* to get processed
*/
msleep(20);
/* Wait until pending jobs from this FQ are processed by CAAM */
do {
if (refcount_read(&drv_ctx->refcnt) == 1)
break;
msleep(20);
} while (--retries);
if (!retries)
dev_warn_once(drv_ctx->qidev, "%d frames from FQID %u still pending in CAAM\n",
refcount_read(&drv_ctx->refcnt), fq->fqid);
return 0;
}
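The new drain loop replaces a blind 20 ms grace period with an observable condition: every frame enqueued to the FQ takes a reference on the driver context, and the context's own initial reference makes 1 the idle value. A self-contained sketch of the pattern:

#include <linux/refcount.h>
#include <linux/delay.h>

/* Poll until all in-flight frames have dropped their reference
 * (count back to the owner's baseline of 1) or retries run out. */
static bool demo_drain(refcount_t *refcnt, int retries)
{
	while (refcount_read(refcnt) > 1 && --retries)
		msleep(20);

	return refcount_read(refcnt) == 1;
}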
......@@ -340,7 +352,7 @@ int caam_drv_ctx_update(struct caam_drv_ctx *drv_ctx, u32 *sh_desc)
drv_ctx->req_fq = new_fq;
/* Empty and remove the older FQ */
ret = empty_caam_fq(old_fq);
ret = empty_caam_fq(old_fq, drv_ctx);
if (ret) {
dev_err(qidev, "Old CAAM FQ empty failed: %d\n", ret);
......@@ -453,6 +465,9 @@ struct caam_drv_ctx *caam_drv_ctx_init(struct device *qidev,
return ERR_PTR(-ENOMEM);
}
/* init reference counter used to track references to request FQ */
refcount_set(&drv_ctx->refcnt, 1);
drv_ctx->qidev = qidev;
return drv_ctx;
}
......@@ -571,6 +586,16 @@ static enum qman_cb_dqrr_result caam_rsp_fq_dqrr_cb(struct qman_portal *p,
return qman_cb_dqrr_stop;
fd = &dqrr->fd;
drv_req = caam_iova_to_virt(priv->domain, qm_fd_addr_get64(fd));
if (unlikely(!drv_req)) {
dev_err(qidev,
"Can't find original request for caam response\n");
return qman_cb_dqrr_consume;
}
refcount_dec(&drv_req->drv_ctx->refcnt);
status = be32_to_cpu(fd->status);
if (unlikely(status)) {
u32 ssrc = status & JRSTA_SSRC_MASK;
......@@ -588,13 +613,6 @@ static enum qman_cb_dqrr_result caam_rsp_fq_dqrr_cb(struct qman_portal *p,
return qman_cb_dqrr_consume;
}
drv_req = caam_iova_to_virt(priv->domain, qm_fd_addr_get64(fd));
if (unlikely(!drv_req)) {
dev_err(qidev,
"Can't find original request for caam response\n");
return qman_cb_dqrr_consume;
}
dma_unmap_single(drv_req->drv_ctx->qidev, qm_fd_addr(fd),
sizeof(drv_req->fd_sgt), DMA_BIDIRECTIONAL);
......
......@@ -3,7 +3,7 @@
* Public definitions for the CAAM/QI (Queue Interface) backend.
*
* Copyright 2013-2016 Freescale Semiconductor, Inc.
* Copyright 2016-2017 NXP
* Copyright 2016-2017, 2020 NXP
*/
#ifndef __QI_H__
......@@ -52,6 +52,7 @@ enum optype {
* @context_a: shared descriptor dma address
* @req_fq: to-CAAM request frame queue
* @rsp_fq: from-CAAM response frame queue
* @refcnt: reference counter incremented for each frame enqueued in to-CAAM FQ
* @cpu: cpu on which to receive CAAM response
* @op_type: operation type
* @qidev: device pointer for CAAM/QI backend
......@@ -62,6 +63,7 @@ struct caam_drv_ctx {
dma_addr_t context_a;
struct qman_fq *req_fq;
struct qman_fq *rsp_fq;
refcount_t refcnt;
int cpu;
enum optype op_type;
struct device *qidev;
......
......@@ -487,7 +487,8 @@ struct rngtst {
/* RNG4 TRNG test registers */
struct rng4tst {
#define RTMCTL_PRGM 0x00010000 /* 1 -> program mode, 0 -> run mode */
#define RTMCTL_ACC BIT(5) /* TRNG access mode */
#define RTMCTL_PRGM BIT(16) /* 1 -> program mode, 0 -> run mode */
#define RTMCTL_SAMP_MODE_VON_NEUMANN_ES_SC 0 /* use von Neumann data in
both entropy shifter and
statistical checker */
......@@ -523,9 +524,11 @@ struct rng4tst {
u32 rsvd1[40];
#define RDSTA_SKVT 0x80000000
#define RDSTA_SKVN 0x40000000
#define RDSTA_PR0 BIT(4)
#define RDSTA_PR1 BIT(5)
#define RDSTA_IF0 0x00000001
#define RDSTA_IF1 0x00000002
#define RDSTA_IFMASK (RDSTA_IF1 | RDSTA_IF0)
#define RDSTA_MASK (RDSTA_PR1 | RDSTA_PR0 | RDSTA_IF1 | RDSTA_IF0)
u32 rdsta;
u32 rsvd2[15];
};
......
......@@ -71,7 +71,7 @@ struct ucode {
char version[VERSION_LEN - 1];
__be32 code_size;
u8 raz[12];
u64 code[0];
u64 code[];
};
/**
......
......@@ -215,6 +215,9 @@ void psp_dev_destroy(struct sp_device *sp)
tee_dev_destroy(psp);
sp_free_psp_irq(sp, psp);
if (sp->clear_psp_master_device)
sp->clear_psp_master_device(sp);
}
void psp_set_sev_irq_handler(struct psp_device *psp, psp_irq_handler_t handler,
......
......@@ -283,11 +283,11 @@ static int sev_get_platform_state(int *state, int *error)
return rc;
}
static int sev_ioctl_do_reset(struct sev_issue_cmd *argp)
static int sev_ioctl_do_reset(struct sev_issue_cmd *argp, bool writable)
{
int state, rc;
if (!capable(CAP_SYS_ADMIN))
if (!writable)
return -EPERM;
/*
......@@ -331,12 +331,12 @@ static int sev_ioctl_do_platform_status(struct sev_issue_cmd *argp)
return ret;
}
static int sev_ioctl_do_pek_pdh_gen(int cmd, struct sev_issue_cmd *argp)
static int sev_ioctl_do_pek_pdh_gen(int cmd, struct sev_issue_cmd *argp, bool writable)
{
struct sev_device *sev = psp_master->sev_data;
int rc;
if (!capable(CAP_SYS_ADMIN))
if (!writable)
return -EPERM;
if (sev->state == SEV_STATE_UNINIT) {
......@@ -348,7 +348,7 @@ static int sev_ioctl_do_pek_pdh_gen(int cmd, struct sev_issue_cmd *argp)
return __sev_do_cmd_locked(cmd, NULL, &argp->error);
}
static int sev_ioctl_do_pek_csr(struct sev_issue_cmd *argp)
static int sev_ioctl_do_pek_csr(struct sev_issue_cmd *argp, bool writable)
{
struct sev_device *sev = psp_master->sev_data;
struct sev_user_data_pek_csr input;
......@@ -356,7 +356,7 @@ static int sev_ioctl_do_pek_csr(struct sev_issue_cmd *argp)
void *blob = NULL;
int ret;
if (!capable(CAP_SYS_ADMIN))
if (!writable)
return -EPERM;
if (copy_from_user(&input, (void __user *)argp->data, sizeof(input)))
......@@ -539,7 +539,7 @@ static int sev_update_firmware(struct device *dev)
return ret;
}
static int sev_ioctl_do_pek_import(struct sev_issue_cmd *argp)
static int sev_ioctl_do_pek_import(struct sev_issue_cmd *argp, bool writable)
{
struct sev_device *sev = psp_master->sev_data;
struct sev_user_data_pek_cert_import input;
......@@ -547,7 +547,7 @@ static int sev_ioctl_do_pek_import(struct sev_issue_cmd *argp)
void *pek_blob, *oca_blob;
int ret;
if (!capable(CAP_SYS_ADMIN))
if (!writable)
return -EPERM;
if (copy_from_user(&input, (void __user *)argp->data, sizeof(input)))
......@@ -698,7 +698,7 @@ static int sev_ioctl_do_get_id(struct sev_issue_cmd *argp)
return ret;
}
static int sev_ioctl_do_pdh_export(struct sev_issue_cmd *argp)
static int sev_ioctl_do_pdh_export(struct sev_issue_cmd *argp, bool writable)
{
struct sev_device *sev = psp_master->sev_data;
struct sev_user_data_pdh_cert_export input;
......@@ -708,7 +708,7 @@ static int sev_ioctl_do_pdh_export(struct sev_issue_cmd *argp)
/* If platform is not in INIT state then transition it to INIT. */
if (sev->state != SEV_STATE_INIT) {
if (!capable(CAP_SYS_ADMIN))
if (!writable)
return -EPERM;
ret = __sev_platform_init_locked(&argp->error);
......@@ -801,6 +801,7 @@ static long sev_ioctl(struct file *file, unsigned int ioctl, unsigned long arg)
void __user *argp = (void __user *)arg;
struct sev_issue_cmd input;
int ret = -EFAULT;
bool writable = file->f_mode & FMODE_WRITE;
if (!psp_master || !psp_master->sev_data)
return -ENODEV;
......@@ -819,25 +820,25 @@ static long sev_ioctl(struct file *file, unsigned int ioctl, unsigned long arg)
switch (input.cmd) {
case SEV_FACTORY_RESET:
ret = sev_ioctl_do_reset(&input);
ret = sev_ioctl_do_reset(&input, writable);
break;
case SEV_PLATFORM_STATUS:
ret = sev_ioctl_do_platform_status(&input);
break;
case SEV_PEK_GEN:
ret = sev_ioctl_do_pek_pdh_gen(SEV_CMD_PEK_GEN, &input);
ret = sev_ioctl_do_pek_pdh_gen(SEV_CMD_PEK_GEN, &input, writable);
break;
case SEV_PDH_GEN:
ret = sev_ioctl_do_pek_pdh_gen(SEV_CMD_PDH_GEN, &input);
ret = sev_ioctl_do_pek_pdh_gen(SEV_CMD_PDH_GEN, &input, writable);
break;
case SEV_PEK_CSR:
ret = sev_ioctl_do_pek_csr(&input);
ret = sev_ioctl_do_pek_csr(&input, writable);
break;
case SEV_PEK_CERT_IMPORT:
ret = sev_ioctl_do_pek_import(&input);
ret = sev_ioctl_do_pek_import(&input, writable);
break;
case SEV_PDH_CERT_EXPORT:
ret = sev_ioctl_do_pdh_export(&input);
ret = sev_ioctl_do_pdh_export(&input, writable);
break;
case SEV_GET_ID:
pr_warn_once("SEV_GET_ID command is deprecated, use SEV_GET_ID2\n");
......@@ -896,9 +897,9 @@ EXPORT_SYMBOL_GPL(sev_guest_df_flush);
static void sev_exit(struct kref *ref)
{
struct sev_misc_dev *misc_dev = container_of(ref, struct sev_misc_dev, refcount);
misc_deregister(&misc_dev->misc);
kfree(misc_dev);
misc_dev = NULL;
}
static int sev_misc_init(struct sev_device *sev)
......@@ -916,7 +917,7 @@ static int sev_misc_init(struct sev_device *sev)
if (!misc_dev) {
struct miscdevice *misc;
misc_dev = devm_kzalloc(dev, sizeof(*misc_dev), GFP_KERNEL);
misc_dev = kzalloc(sizeof(*misc_dev), GFP_KERNEL);
if (!misc_dev)
return -ENOMEM;
......
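The writable flag threaded through the handlers above replaces per-command CAP_SYS_ADMIN checks with a file-mode check: state-changing SEV commands now require a writable /dev/sev descriptor, while pure queries work on a read-only one. A hypothetical userspace sketch:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/psp-sev.h>

int demo_sev_perms(void)
{
	struct sev_user_data_status status = {};
	struct sev_issue_cmd cmd = {
		.cmd  = SEV_PLATFORM_STATUS,
		.data = (__u64)(unsigned long)&status,
	};
	int fd = open("/dev/sev", O_RDONLY);

	if (fd < 0)
		return -1;

	ioctl(fd, SEV_ISSUE_CMD, &cmd);	/* allowed on a read-only fd */

	cmd.cmd = SEV_FACTORY_RESET;
	ioctl(fd, SEV_ISSUE_CMD, &cmd);	/* rejected with EPERM */

	return close(fd);
}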
......@@ -90,6 +90,7 @@ struct sp_device {
/* get and set master device */
struct sp_device*(*get_psp_master_device)(void);
void (*set_psp_master_device)(struct sp_device *);
void (*clear_psp_master_device)(struct sp_device *);
bool irq_registered;
bool use_tasklet;
......
......@@ -146,6 +146,14 @@ static struct sp_device *psp_get_master(void)
return sp_dev_master;
}
static void psp_clear_master(struct sp_device *sp)
{
if (sp == sp_dev_master) {
sp_dev_master = NULL;
dev_dbg(sp->dev, "Cleared sp_dev_master\n");
}
}
static int sp_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
struct sp_device *sp;
......@@ -206,6 +214,7 @@ static int sp_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
pci_set_master(pdev);
sp->set_psp_master_device = psp_set_master;
sp->get_psp_master_device = psp_get_master;
sp->clear_psp_master_device = psp_clear_master;
ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(48));
if (ret) {
......
......@@ -66,7 +66,7 @@ struct aead_req_ctx {
/* used to prevent cache coherence problem */
u8 backup_mac[MAX_MAC_SIZE];
u8 *backup_iv; /* store orig iv */
u32 assoclen; /* internal assoclen */
u32 assoclen; /* size of AAD buffer to authenticate */
dma_addr_t mac_buf_dma_addr; /* internal ICV DMA buffer */
/* buffer for internal ccm configurations */
dma_addr_t ccm_iv0_dma_addr;
......@@ -79,7 +79,6 @@ struct aead_req_ctx {
dma_addr_t gcm_iv_inc2_dma_addr;
dma_addr_t hkey_dma_addr; /* Phys. address of hkey */
dma_addr_t gcm_block_len_dma_addr; /* Phys. address of gcm block len */
bool is_gcm4543;
u8 *icv_virt_addr; /* Virt. address of ICV */
struct async_gen_req_ctx gen_ctx;
......
......@@ -24,14 +24,15 @@ enum cc_sg_cpy_direct {
};
struct cc_mlli {
cc_sram_addr_t sram_addr;
u32 sram_addr;
unsigned int mapped_nents;
unsigned int nents; //sg nents
unsigned int mlli_nents; //mlli nents might be different than the above
};
struct mlli_params {
struct dma_pool *curr_pool;
u8 *mlli_virt_addr;
void *mlli_virt_addr;
dma_addr_t mlli_dma_addr;
u32 mlli_len;
};
......
......@@ -8,10 +8,6 @@
#include "cc_crypto_ctx.h"
#include "cc_debugfs.h"
struct cc_debugfs_ctx {
struct dentry *dir;
};
#define CC_DEBUG_REG(_X) { \
.name = __stringify(_X),\
.offset = CC_REG(_X) \
......@@ -67,13 +63,8 @@ void __exit cc_debugfs_global_fini(void)
int cc_debugfs_init(struct cc_drvdata *drvdata)
{
struct device *dev = drvdata_to_dev(drvdata);
struct cc_debugfs_ctx *ctx;
struct debugfs_regset32 *regset, *verset;
ctx = devm_kzalloc(dev, sizeof(*ctx), GFP_KERNEL);
if (!ctx)
return -ENOMEM;
regset = devm_kzalloc(dev, sizeof(*regset), GFP_KERNEL);
if (!regset)
return -ENOMEM;
......@@ -81,16 +72,18 @@ int cc_debugfs_init(struct cc_drvdata *drvdata)
regset->regs = debug_regs;
regset->nregs = ARRAY_SIZE(debug_regs);
regset->base = drvdata->cc_base;
regset->dev = dev;
ctx->dir = debugfs_create_dir(drvdata->plat_dev->name, cc_debugfs_dir);
drvdata->dir = debugfs_create_dir(drvdata->plat_dev->name,
cc_debugfs_dir);
debugfs_create_regset32("regs", 0400, ctx->dir, regset);
debugfs_create_bool("coherent", 0400, ctx->dir, &drvdata->coherent);
debugfs_create_regset32("regs", 0400, drvdata->dir, regset);
debugfs_create_bool("coherent", 0400, drvdata->dir, &drvdata->coherent);
verset = devm_kzalloc(dev, sizeof(*verset), GFP_KERNEL);
/* Failing here is not important enough to fail the module load */
if (!verset)
goto out;
return 0;
if (drvdata->hw_rev <= CC_HW_REV_712) {
ver_sig_regs[0].offset = drvdata->sig_offset;
......@@ -102,17 +95,13 @@ int cc_debugfs_init(struct cc_drvdata *drvdata)
verset->nregs = ARRAY_SIZE(pid_cid_regs);
}
verset->base = drvdata->cc_base;
verset->dev = dev;
debugfs_create_regset32("version", 0400, ctx->dir, verset);
out:
drvdata->debugfs = ctx;
debugfs_create_regset32("version", 0400, drvdata->dir, verset);
return 0;
}
void cc_debugfs_fini(struct cc_drvdata *drvdata)
{
struct cc_debugfs_ctx *ctx = (struct cc_debugfs_ctx *)drvdata->debugfs;
debugfs_remove_recursive(ctx->dir);
debugfs_remove_recursive(drvdata->dir);
}
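Two things change above: the separate cc_debugfs_ctx allocation collapses into a dentry pointer kept in drvdata, and each regset now sets ->dev so the debugfs core can runtime-resume the device before dumping registers. A minimal sketch of publishing a regset this way (names assumed):

#include <linux/debugfs.h>
#include <linux/device.h>

static struct dentry *demo_publish_regs(struct device *dev,
					void __iomem *base,
					const struct debugfs_reg32 *regs,
					int nregs)
{
	struct debugfs_regset32 *regset;
	struct dentry *dir;

	regset = devm_kzalloc(dev, sizeof(*regset), GFP_KERNEL);
	if (!regset)
		return NULL;

	regset->regs  = regs;
	regset->nregs = nregs;
	regset->base  = base;
	regset->dev   = dev;	/* lets debugfs handle runtime PM */

	dir = debugfs_create_dir(dev_name(dev), NULL);
	debugfs_create_regset32("regs", 0400, dir, regset);
	return dir;
}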
......@@ -80,30 +80,27 @@ int cc_hash_alloc(struct cc_drvdata *drvdata);
int cc_init_hash_sram(struct cc_drvdata *drvdata);
int cc_hash_free(struct cc_drvdata *drvdata);
/*!
* Gets the initial digest length
/**
* cc_digest_len_addr() - Gets the initial digest length
*
* \param drvdata
* \param mode The Hash mode. Supported modes:
* MD5/SHA1/SHA224/SHA256/SHA384/SHA512
* @drvdata: Associated device driver context
* @mode: The Hash mode. Supported modes: MD5/SHA1/SHA224/SHA256/SHA384/SHA512
*
* \return u32 returns the address of the initial digest length in SRAM
* Return:
* Returns the address of the initial digest length in SRAM
*/
cc_sram_addr_t
cc_digest_len_addr(void *drvdata, u32 mode);
u32 cc_digest_len_addr(void *drvdata, u32 mode);
/*!
* Gets the address of the initial digest in SRAM
/**
* cc_larval_digest_addr() - Gets the address of the initial digest in SRAM
* according to the given hash mode
*
* \param drvdata
* \param mode The Hash mode. Supported modes:
* MD5/SHA1/SHA224/SHA256/SHA384/SHA512
* @drvdata: Associated device driver context
* @mode: The Hash mode. Supported modes: MD5/SHA1/SHA224/SHA256/SHA384/SHA512
*
* \return u32 The address of the initial digest in SRAM
* Return:
* The address of the initial digest in SRAM
*/
cc_sram_addr_t cc_larval_digest_addr(void *drvdata, u32 mode);
void cc_hash_global_init(void);
u32 cc_larval_digest_addr(void *drvdata, u32 mode);
#endif /*__CC_HASH_H__*/