- 28 Nov, 2016 26 commits
-
-
Horia Geantă authored
Move split key length and padded length computation from caamalg.c and caamhash.c to key_gen.c. Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Horia Geantă authored
Refactor the generation of the authenc and ablkcipher shared descriptors and export the functionality, such that it can be shared with the upcoming caam/qi (Queue Interface) driver. Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Horia Geantă authored
Remove the dependency on CRYPTO_DEV_FSL_CAAM where it is superfluous: "depends on CRYPTO_DEV_FSL_CAAM && CRYPTO_DEV_FSL_CAAM_JR" is equivalent to "depends on CRYPTO_DEV_FSL_CAAM_JR", since CRYPTO_DEV_FSL_CAAM_JR already depends on CRYPTO_DEV_FSL_CAAM. Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Horia Geantă authored
A few descriptor commands are generated using the generic inline append function "append_cmd". Rewrite them using the specific inline append functions. Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Horia Geantă authored
For authenc / stitched AEAD algorithms, check each of the two keys (authentication, encryption) independently to determine whether inlining is possible. Prioritize inlining the authentication key, since the length of the (split) key is larger than that of the encryption key. For the other algorithms, compute the remaining available bytes only once per tfm and decide whether key inlining is possible based on that. Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
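For illustration, a standalone sketch of this kind of decision; the DESC_BYTES_AVAIL constant and the helper name are assumptions, not symbols from the actual caamalg.c code:

    /* Illustrative sketch only; constants and names are assumptions,
     * not taken from the CAAM driver. */
    #include <stdbool.h>

    #define DESC_BYTES_AVAIL 160    /* assumed space left in the shared descriptor */

    struct inline_choice {
        bool auth_key_inline;
        bool enc_key_inline;
    };

    static struct inline_choice choose_inlining(unsigned int authkeylen_pad,
                                                unsigned int enckeylen)
    {
        struct inline_choice c = { false, false };

        if (authkeylen_pad + enckeylen <= DESC_BYTES_AVAIL) {
            /* Both keys fit: inline both of them. */
            c.auth_key_inline = true;
            c.enc_key_inline = true;
        } else if (authkeylen_pad <= DESC_BYTES_AVAIL) {
            /* Prioritize the larger (split) authentication key. */
            c.auth_key_inline = true;
        } else if (enckeylen <= DESC_BYTES_AVAIL) {
            /* Fall back to inlining only the encryption key. */
            c.enc_key_inline = true;
        }

        return c;
    }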
-
Horia Geantă authored
Information carried by alg_op can be deduced from adata->algtype plus some fixed flags. Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Horia Geantă authored
In preparation of factoring out the shared descriptors, struct alginfo is introduced to group the algorithm related parameters. Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
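As an illustration, a grouping of the parameters described above might look like the sketch below; the field names follow the commit description but are not necessarily the driver's actual struct alginfo layout.

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch of a per-algorithm parameter bundle; names are illustrative. */
    struct alginfo_sketch {
        uint32_t algtype;        /* algorithm/operation selector for the descriptor */
        unsigned int keylen;     /* actual key length */
        unsigned int keylen_pad; /* padded (split) key length */
        uint64_t key;            /* key address (virtual or DMA) */
        bool key_inline;         /* inline the key bytes into the descriptor? */
    };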
-
Horia Geantă authored
append_key_aead() is used in only one place, thus inline it. Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch converts aesbs over to the skcipher interface. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch moves the core CBC implementation into a header file so that it can be reused by drivers implementing CBC. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
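To illustrate what such reusable CBC core logic does, here is a standalone sketch of CBC encryption chaining, assuming a caller-supplied single-block cipher; this is a sketch of the technique, not the kernel's crypto/cbc.h itself.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define BLOCK_SIZE 16

    /* One-block cipher primitive supplied by the user of the helper;
     * encrypting in place (dst == src) is assumed to be allowed. */
    typedef void (*block_fn)(const void *key, uint8_t *dst, const uint8_t *src);

    static void cbc_encrypt(const void *key, block_fn encrypt_block,
                            uint8_t *iv, uint8_t *dst, const uint8_t *src,
                            size_t nbytes)
    {
        while (nbytes >= BLOCK_SIZE) {
            for (size_t i = 0; i < BLOCK_SIZE; i++)
                dst[i] = src[i] ^ iv[i];    /* chain with the previous block */
            encrypt_block(key, dst, dst);
            memcpy(iv, dst, BLOCK_SIZE);    /* ciphertext becomes the next IV */
            src += BLOCK_SIZE;
            dst += BLOCK_SIZE;
            nbytes -= BLOCK_SIZE;
        }
    }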
-
Herbert Xu authored
This patch converts cbc over to the skcipher interface. It also rearranges the code to allow it to be reused by drivers. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch converts aes-ce over to the skcipher interface. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch converts arm64/aes over to the skcipher interface. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch converts aesni (including fpu) over to the skcipher interface. The LRW implementation has been removed as the generic LRW code can now be used directly on top of the accelerated ECB implementation. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
Currently we manually filter out internal algorithms using a list in testmgr. This is dangerous as internal algorithms cannot be safely used even by testmgr. This patch ensures that they're never processed by testmgr at all. This patch also removes an obsolete bypass for nivciphers which no longer exist. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
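A conceptual sketch of the filtering: the CRYPTO_ALG_INTERNAL flag value below matches the kernel's definition, but the helper around it is illustrative rather than the actual testmgr code.

    #include <stdbool.h>
    #include <stdint.h>

    #define CRYPTO_ALG_INTERNAL 0x00002000

    static bool alg_may_be_tested(uint32_t cra_flags)
    {
        /* Internal algorithms are implementation details (helpers behind
         * other algorithms) and must never be instantiated directly, not
         * even by the test manager. */
        return !(cra_flags & CRYPTO_ALG_INTERNAL);
    }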
-
Herbert Xu authored
This patch adds xts helpers that use the skcipher interface rather than blkcipher. This will be used by aesni_intel. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch converts lrw over to the skcipher interface. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch adds the simd skcipher helper which is meant to be a replacement for ablk helper. It replaces the underlying blkcipher interface with skcipher, and also presents the top-level algorithm as an skcipher. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch adds skcipher support to cryptd alongside ablkcipher. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
Currently all bits not set in mask are cleared in crypto_larval_lookup. This is unnecessary as wherever the type bits are used it is always masked anyway. This patch removes the clearing so that we may use bits set in the type but not in the mask for special purposes, e.g., picking up internal algorithms. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
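A sketch of the matching test this reasoning rests on (illustrative helper, not crypto/api.c): bits of the type outside the mask never affect the comparison, so they need not be cleared and can carry extra meaning for special-purpose lookups.

    #include <stdbool.h>
    #include <stdint.h>

    static bool alg_matches(uint32_t cra_flags, uint32_t type, uint32_t mask)
    {
        /* Only bit positions set in 'mask' influence the result, so bits of
         * 'type' outside 'mask' are effectively free for other uses. */
        return ((cra_flags ^ type) & mask) == 0;
    }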
-
Herbert Xu authored
This patch converts xts over to the skcipher interface. It also optimises the implementation to be based on ECB instead of the underlying cipher. For compatibility the existing naming scheme of xts(aes) is maintained as opposed to the more obvious one of xts(ecb(aes)). Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch converts lrw over to the skcipher interface. It also optimises the implementation to be based on ECB instead of the underlying cipher. For compatibility the existing naming scheme of lrw(aes) is maintained as opposed to the more obvious one of lrw(ecb(aes)). Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch makes use of the new skcipher walk interface instead of the obsolete blkcipher walk interface. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch adds the skcipher walk interface which replaces both blkcipher walk and ablkcipher walk. Just like blkcipher walk it can also be used for AEAD algorithms. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
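A hedged sketch of how a cipher implementation typically uses this walk interface; my_cipher_blocks() is a placeholder for the driver's own block-processing routine and is assumed to return the number of bytes it handled.

    #include <crypto/internal/skcipher.h>

    /* Placeholder: processes complete blocks, returns the number of bytes done. */
    static unsigned int my_cipher_blocks(u8 *dst, const u8 *src,
                                         unsigned int nbytes, u8 *iv);

    static int my_skcipher_encrypt(struct skcipher_request *req)
    {
        struct skcipher_walk walk;
        unsigned int nbytes;
        int err;

        err = skcipher_walk_virt(&walk, req, false);

        while ((nbytes = walk.nbytes) != 0) {
            nbytes -= my_cipher_blocks(walk.dst.virt.addr,
                                       walk.src.virt.addr,
                                       nbytes, walk.iv);
            /* Report how many bytes are left unprocessed in this step. */
            err = skcipher_walk_done(&walk, nbytes);
        }

        return err;
    }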
-
Jean Delvare authored
For consistency with the other 246 CRYPTO_*-named kernel configuration options, rename CRYPT_CRC32C_VPMSUM to CRYPTO_CRC32C_VPMSUM. Signed-off-by: Jean Delvare <jdelvare@suse.de> Cc: Anton Blanchard <anton@samba.org> Cc: Herbert Xu <herbert@gondor.apana.org.au> Acked-by: Anton Blanchard <anton@samba.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ard Biesheuvel authored
This integrates both the accelerated scalar and the NEON implementations of SHA-224/256 as well as SHA-384/512 from the OpenSSL project. Relative performance compared to the respective generic C versions:

               | SHA256-scalar | SHA256-NEON* | SHA512 |
    -----------+---------------+--------------+--------+
    Cortex-A53 |     1.63x     |     1.63x    |  2.34x |
    Cortex-A57 |     1.43x     |     1.59x    |  1.95x |
    Cortex-A73 |     1.26x     |     1.56x    |    ?   |

The core crypto code was authored by Andy Polyakov of the OpenSSL project, in collaboration with whom the upstream code was adapted so that this module can be built from the same version of sha512-armv8.pl. The version in this patch was taken from OpenSSL commit 32bbb62ea634 ("sha/asm/sha512-armv8.pl: fix big-endian support in __KERNEL__ case.")

* The core SHA algorithm is fundamentally sequential, but there is a secondary transformation involved, called the schedule update, which can be performed independently. The NEON version of SHA-224/SHA-256 only implements this part of the algorithm using NEON instructions; the sequential part is always done using scalar instructions.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 21 Nov, 2016 2 commits
-
-
PrasannaKumar Muralidharan authored
Since the hw_random core calls ->read with a max of 32 bytes or more, make that assumption explicit. Also remove checks for 'max' being less than 8. Signed-off-by: PrasannaKumar Muralidharan <prasannatsmkumar@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Stephan Mueller authored
The CTR DRBG segments the number of random bytes to be generated into 128 byte blocks. The current code misses the advancement of the output buffer pointer when the requestor asks for more than 128 bytes of data. In this case, the next 128 byte block of random numbers is copied to the beginning of the output buffer again. This implies that only the first 128 bytes of the output buffer would ever be filled. The patch adds the advancement of the buffer pointer to fill the entire buffer. Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
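The general pattern of the fix is illustrated by the standalone sketch below: generate output in 128-byte segments and advance the destination pointer each iteration. This is not the actual drbg_ctr_generate() code; fill_random_block() is a stand-in for the real block generation.

    #include <stddef.h>
    #include <string.h>

    #define DRBG_BLOCKLEN 128

    static void fill_random_block(unsigned char *block)
    {
        /* Stand-in for the DRBG block generation (the real code runs a
         * block cipher in counter mode); zero-fill just to keep the
         * sketch compilable. */
        memset(block, 0, DRBG_BLOCKLEN);
    }

    static void generate(unsigned char *outbuf, size_t outlen)
    {
        unsigned char block[DRBG_BLOCKLEN];

        while (outlen) {
            size_t chunk = outlen < DRBG_BLOCKLEN ? outlen : DRBG_BLOCKLEN;

            fill_random_block(block);
            memcpy(outbuf, block, chunk);
            outbuf += chunk;    /* the pointer advancement the patch adds */
            outlen -= chunk;
        }
    }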
-
- 17 Nov, 2016 6 commits
-
-
Naveen N. Rao authored
First up, clean up the generated .S files properly on a 'make clean'. Secondly, force re-generation of these files when building for a different endianness than what was built previously. Finally, generate the new files in the build tree, rather than the source tree. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Greg Tucker authored
The current multi-buffer hash implementations restrict the total length of a hash job to 512 MB; hashing larger buffers results in an incorrect hash. This extends the limit to 2^62 - 1 bytes. Signed-off-by: Greg Tucker <greg.b.tucker@intel.com> Acked-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Alex Cope authored
GF(2^128) multiplication tables are typically used for secret information, so it's a good idea to zero them on free. Signed-off-by: Alex Cope <alexcope@google.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
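For illustration, the general pattern looks like the sketch below (not the gf128mul code itself); in the kernel, memzero_explicit() or kzfree() would be used so the compiler cannot optimise the clearing away.

    #include <stdlib.h>
    #include <string.h>

    /* Illustrative table type: the contents are derived from a secret key. */
    struct mul_table {
        unsigned char t[16][256][16];
    };

    static void mul_table_free(struct mul_table *tbl)
    {
        if (!tbl)
            return;
        /* Zero the secret-derived data before returning the memory; in the
         * kernel a barrier such as memzero_explicit() keeps the compiler
         * from removing this store as a dead write. */
        memset(tbl, 0, sizeof(*tbl));
        free(tbl);
    }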
-
Wei Yongjun authored
Since clk_prepare_enable() is used to enable trng->clk, use clk_disable_unprepare() to release it on the error path. Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
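A sketch of the error-path pattern involved (function and label names are illustrative): a clock enabled with clk_prepare_enable() needs a matching clk_disable_unprepare() when a later step fails.

    #include <linux/clk.h>

    /* Placeholder for whatever step can fail after the clock is enabled. */
    static int register_the_rng(void);

    static int trng_init_sketch(struct clk *clk)
    {
        int err;

        err = clk_prepare_enable(clk);
        if (err)
            return err;

        err = register_the_rng();
        if (err)
            goto err_clk;

        return 0;

    err_clk:
        clk_disable_unprepare(clk);    /* balance clk_prepare_enable() */
        return err;
    }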
-
Geliang Tang authored
Drop duplicate header types.h from nx.c. Signed-off-by: Geliang Tang <geliangtang@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Geliang Tang authored
Drop duplicate header module.h from jitterentropy-kcapi.c. Signed-off-by: Geliang Tang <geliangtang@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 13 Nov, 2016 6 commits
-
-
Horia Geantă authored
Shared descriptors used by ahash_final() and ahash_finup() are identical, thus get rid of one of them (sh_desc_finup). Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Horia Geantă authored
The pointer to the descriptor buffer is not touched; it always points to the start of the descriptor buffer. Thus, make it const. Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Horia Geantă authored
sec4_sg_entry structure is used only by helper functions in sg_sw_sec4.h. Since SEC HW S/G entries are to be manipulated only indirectly, via these functions, move sec4_sg_entry to the corresponding header. Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Horia Geantă authored
This reverts commit 66d2e202. Quoting from Russell's findings: https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg21136.html

[quote]
Okay, I've re-tested, using a different way of measuring, because using openssl speed is impractical for off-loaded engines. I've decided to use this way to measure the performance:

dd if=/dev/zero bs=1048576 count=128 | /usr/bin/time openssl dgst -md5

For the threaded IRQs case this gives:

0.05user 2.74system 0:05.30elapsed 52%CPU (0avgtext+0avgdata 2400maxresident)k
0.06user 2.52system 0:05.18elapsed 49%CPU (0avgtext+0avgdata 2404maxresident)k
0.12user 2.60system 0:05.61elapsed 48%CPU (0avgtext+0avgdata 2460maxresident)k
=> 5.36s => 25.0MB/s

and the tasklet case:

0.08user 2.53system 0:04.83elapsed 54%CPU (0avgtext+0avgdata 2468maxresident)k
0.09user 2.47system 0:05.16elapsed 49%CPU (0avgtext+0avgdata 2368maxresident)k
0.10user 2.51system 0:04.87elapsed 53%CPU (0avgtext+0avgdata 2460maxresident)k
=> 4.95s => 27.1MB/s

which corresponds to an 8% slowdown for the threaded IRQ case. So, tasklets are indeed faster than threaded IRQs.

[...]

I think I've proven from the above that this patch needs to be reverted due to the performance regression, and that there _is_ most definitely a detrimental effect of switching from tasklets to threaded IRQs.
[/quote]

Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Horia Geantă authored
ablkcipher_edesc_alloc() and ablkcipher_giv_edesc_alloc() don't free / unmap resources on the error path:
- dma_map_sg() could fail, thus make sure the return value is checked
- unmap DMA mappings in case of error
Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
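A sketch of the pattern the fix enforces (the kernel DMA API calls are real, but the surrounding function and build_descriptor() are illustrative placeholders): check the dma_map_sg() return value and unwind the mapping when a later step fails.

    #include <linux/dma-mapping.h>
    #include <linux/errno.h>

    /* Placeholder for descriptor construction that may fail. */
    static int build_descriptor(struct scatterlist *src, int mapped);

    static int map_src_sg(struct device *dev, struct scatterlist *src, int nents)
    {
        int mapped, err;

        mapped = dma_map_sg(dev, src, nents, DMA_TO_DEVICE);
        if (!mapped)        /* dma_map_sg() returns 0 on failure */
            return -ENOMEM;

        err = build_descriptor(src, mapped);
        if (err) {
            /* Unwind the mapping on the error path. */
            dma_unmap_sg(dev, src, nents, DMA_TO_DEVICE);
            return err;
        }

        return 0;
    }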
-
Horia Geantă authored
ERRID is a 4-bit field. Since err_id values are in [0..15] and err_id_list array size is 16, the condition "err_id < ARRAY_SIZE(err_id_list)" is always true. Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-