- 25 Apr, 2019 23 commits
-
-
Gilad Ben-Yossef authored
We were needlessly handling chained scatter/gather lists with specialized code, even though the regular sg APIs handle them just fine. The code handling this also had an (unused) code path with a use-before-init error, flagged by Coverity. Remove all special handling of chained sg and leave their handling to the regular sg APIs. Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Cc: stable@vger.kernel.org # v4.19+ Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
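A minimal sketch of the point above, assuming nothing about the driver itself: the generic scatterlist helpers already follow chain entries via sg_next(), so a plain walk needs no special-casing of chained lists.

```c
#include <linux/scatterlist.h>

/* Illustrative only: sg_nents() and for_each_sg() transparently traverse
 * chained scatterlists, so no dedicated chained-sg code path is needed. */
static unsigned int count_sg_bytes(struct scatterlist *sgl)
{
	struct scatterlist *sg;
	unsigned int total = 0;
	int i;

	for_each_sg(sgl, sg, sg_nents(sgl), i)
		total += sg->length;

	return total;
}
```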
-
Gilad Ben-Yossef authored
Use the proper hash callback completion API instead of open-coding it. Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Gilad Ben-Yossef authored
We were doing backlog notification callbacks via a cipher/hash/aead request structure cast to the base structure, which may or may not work depending on how the structure is laid out in memory and is not safe. Fix it by delegating the backlog notification to the appropriate internal callbacks, which are type aware. Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Cc: stable@vger.kernel.org # v4.19+ Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Gilad Ben-Yossef authored
The new HW uses new standard product and component ID registers, replacing the old ad-hoc version and signature register schemes. Update the driver to support the new HW ID registers. Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Gilad Ben-Yossef authored
We were computing the next IV in software instead of reading it from HW on the premise that this could be quicker due to the small size of IVs, but this proved to be much more hassle-prone and bug-ridden than expected. Move to reading the next IV as computed by the HW. This fixes a number of issues with the next IV being wrong for OFB, CTS-CBC and probably most of the other ciphers as well. Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Gilad Ben-Yossef authored
Adapt the CPP descriptor to the new HW interface. Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Gilad Ben-Yossef authored
Add the registration for the SM4 based policy protected keys ciphers. Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Gilad Ben-Yossef authored
Add the missing logic to set usage policy protections for keys. This enables key policy protection for AES. Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Gilad Ben-Yossef authored
Add the logic needed to track and report CPP operation rejection. The new logic will be used by the CPP feature introduced later. Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Gilad Ben-Yossef authored
Add support for the Security Disabled mode under which only pure cryptographic functionality is enabled and protected keys services are unavailable. Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Gilad Ben-Yossef authored
Refactor to move the descriptor that copies the MLLI line to SRAM to before the key loading descriptor, in preparation for the introduction of CPP later on. Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Gilad Ben-Yossef authored
Refactor the descriptor setup code in order to move the key loading descriptor to the next-to-last position. This has no effect on current functionality but is needed for later support of Content Protection Policy keys. Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Gilad Ben-Yossef authored
Mark the SM4 and missing AES protected-key algorithms, which are identical to the same algorithms with no HW protected keys, as tested. Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Corentin Labbe authored
sun4i-ss does not handle requests whose length is not a multiple of the blocksize. This patch adds a fallback for that case. Fixes: 6298e948 ("crypto: sunxi-ss - Add Allwinner Security System crypto accelerator") Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Corentin Labbe authored
When nbytes < 4, end is wrongly set to a negative value which, being an unsigned int, is then interpreted as a large value, leading to a deadlock in the following code. This patch fixes the problem. Fixes: 6298e948 ("crypto: sunxi-ss - Add Allwinner Security System crypto accelerator") Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
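To illustrate the class of bug (this is a standalone demonstration, not the driver's actual code): subtracting past zero in unsigned arithmetic wraps around to a huge value, which makes a bounded loop effectively endless.

```c
#include <stdio.h>

int main(void)
{
	unsigned int nbytes = 3;	/* request smaller than the 4-byte word size */
	unsigned int end = nbytes - 4;	/* conceptually -1, but unsigned wraps */

	/* A loop such as "while (i < end)" then runs ~4 billion iterations. */
	printf("end = %u\n", end);	/* prints 4294967295 */
	return 0;
}
```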
-
Corentin Labbe authored
ECB algorithms do not need an IV. Fixes: 6298e948 ("crypto: sunxi-ss - Add Allwinner Security System crypto accelerator") Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Corentin Labbe authored
This patch removes the test against areq->info, since sun4i-ss can work without it (ECB). Fixes: 6298e948 ("crypto: sunxi-ss - Add Allwinner Security System crypto accelerator") Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Nagadheeraj Rottela authored
This patch fixes the NITROX-V family part name format. The fix includes ZIP core performance (feature option) in the part name to differentiate various HW devices. The complete HW part name format is: CNN55<core option>-<freq>BG676-<feature option>-<rev> Signed-off-by: Nagadheeraj Rottela <rnagadheeraj@marvell.com> Reviewed-by: Srikanth Jampala <jsrikanth@marvell.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
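For illustration only, here is how a part name in that format could be assembled; the field values below are invented for the example and are not taken from the driver.

```c
#include <stdio.h>

int main(void)
{
	char name[32];
	unsigned int cores = 10, freq = 900, rev = 1;
	const char *zip_feature = "NT";	/* hypothetical feature-option string */

	/* CNN55<core option>-<freq>BG676-<feature option>-<rev> */
	snprintf(name, sizeof(name), "CNN55%u-%uBG676-%s-%u",
		 cores, freq, zip_feature, rev);
	printf("%s\n", name);
	return 0;
}
```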
-
Horia Geantă authored
GCM detection logic has to change for two reasons: some CAAM instantiations with Era < 10, even though they have AES LP, now support GCM mode; and from Era 10 onwards there is a dedicated bit in the AESA_VERSION[AESA_MISC] field for GCM support. For Era 9 and earlier, all AES accelerator versions support GCM, except for AES LP (CHAVID_LS[AESVID]=3) with revision CRNR[AESRN] < 8. For Era 10 and later, bit 9 of the AESA_VERSION register should be used to detect GCM support in the AES accelerator. Note: caam/qi and caam/qi2 are drivers for QI (Queue Interface), which is used in DPAA-based SoCs; for now, we rely on CAAM having an AES HP and this AES accelerator having support for GCM. Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Reviewed-by: Iuliana Prodan <iuliana.prodan@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
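The detection rule described above can be condensed into a small predicate; the sketch below uses placeholder parameters rather than the driver's actual register accessors.

```c
#include <linux/types.h>
#include <linux/bits.h>

/* Sketch only: era, aesvid, aesrn and aesa_version stand in for values read
 * from the CAAM capability registers named in the commit message. */
static bool caam_has_gcm(unsigned int era, u32 aesvid, u32 aesrn,
			 u32 aesa_version)
{
	if (era < 10)
		/* all AES accelerators except AES LP (AESVID == 3) rev < 8 */
		return !(aesvid == 3 && aesrn < 8);

	/* Era 10+: dedicated GCM capability bit in AESA_VERSION */
	return aesa_version & BIT(9);
}
```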
-
Colin Ian King authored
There is a spelling mistake in an error message in the qi_error_list array. Fix this. Signed-off-by: Colin Ian King <colin.king@canonical.com> Reviewed-by: Mukesh Ojha <mojha@codeaurora.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
The flags field in 'struct shash_desc' never actually does anything. The only ostensibly supported flag is CRYPTO_TFM_REQ_MAY_SLEEP. However, no shash algorithm ever sleeps, making this flag a no-op. With this being the case, inevitably some users who can't sleep wrongly pass MAY_SLEEP. These would all need to be fixed if any shash algorithm actually started sleeping. For example, the shash_ahash_*() functions, which wrap a shash algorithm with the ahash API, pass through MAY_SLEEP from the ahash API to the shash API. However, the shash functions are called under kmap_atomic(), so actually they're assumed to never sleep. Even if it turns out that some users do need preemption points while hashing large buffers, we could easily provide a helper function crypto_shash_update_large() which divides the data into smaller chunks and calls crypto_shash_update() and cond_resched() for each chunk. It's not necessary to have a flag in 'struct shash_desc', nor is it necessary to make individual shash algorithms aware of this at all. Therefore, remove shash_desc::flags, and document that the crypto_shash_*() functions can be called from any context. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
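The helper suggested in the message could look roughly like the sketch below; crypto_shash_update_large() does not exist in the tree, the chunk size is arbitrary, and this is only meant to show the chunk-and-yield idea.

```c
#include <crypto/hash.h>
#include <linux/kernel.h>
#include <linux/sched.h>

/* Hypothetical helper: hash a large buffer in chunks, yielding between
 * chunks so sleeping callers get preemption points without any flag. */
static int crypto_shash_update_large(struct shash_desc *desc,
				     const u8 *data, unsigned int len)
{
	const unsigned int chunk = 4096;	/* arbitrary chunk size */
	int err;

	while (len) {
		unsigned int n = min(len, chunk);

		err = crypto_shash_update(desc, data, n);
		if (err)
			return err;

		data += n;
		len -= n;
		cond_resched();
	}
	return 0;
}
```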
-
Eric Biggers authored
The nx driver uses the MAY_SLEEP flag in shash_desc::flags as an indicator to not retry sending the operation to the hardware as many times before returning -EBUSY. This is bogus because (1) that's not what the MAY_SLEEP flag is for, and (2) the shash API doesn't allow failing if the hardware is busy anyway. For now, just make it always retry the larger number of times. This doesn't actually fix this driver, but it at least makes it not use the shash_desc::flags field anymore. Then this field can be removed, as no other drivers use it. Cc: linuxppc-dev@lists.ozlabs.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
The crypto_yield() in shash_ahash_digest() occurs after the entire digest operation already happened, so there's no real point. Remove it. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 19 Apr, 2019 2 commits
-
-
Eric Biggers authored
CCM instances can be created by either the "ccm" template, which only allows choosing the block cipher, e.g. "ccm(aes)"; or by "ccm_base", which allows choosing the ctr and cbcmac implementations, e.g. "ccm_base(ctr(aes-generic),cbcmac(aes-generic))". However, a "ccm_base" instance prevents a "ccm" instance from being registered using the same implementations. Nor will the instance be found by lookups of "ccm". This can be used as a denial of service. Moreover, "ccm_base" instances are never tested by the crypto self-tests, even if there are compatible "ccm" tests. The root cause of these problems is that instances of the two templates use different cra_names. Therefore, fix these problems by making "ccm_base" instances set the same cra_name as "ccm" instances, e.g. "ccm(aes)" instead of "ccm_base(ctr(aes-generic),cbcmac(aes-generic))". This requires extracting the block cipher name from the name of the ctr and cbcmac algorithms. It also requires starting to verify that the algorithms are really ctr and cbcmac using the same block cipher, not something else entirely. But it would be bizarre if anyone were actually using non-ccm-compatible algorithms with ccm_base, so this shouldn't break anyone in practice. Fixes: 4a49b499 ("[CRYPTO] ccm: Added CCM mode") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
GCM instances can be created by either the "gcm" template, which only allows choosing the block cipher, e.g. "gcm(aes)"; or by "gcm_base", which allows choosing the ctr and ghash implementations, e.g. "gcm_base(ctr(aes-generic),ghash-generic)". However, a "gcm_base" instance prevents a "gcm" instance from being registered using the same implementations. Nor will the instance be found by lookups of "gcm". This can be used as a denial of service. Moreover, "gcm_base" instances are never tested by the crypto self-tests, even if there are compatible "gcm" tests. The root cause of these problems is that instances of the two templates use different cra_names. Therefore, fix these problems by making "gcm_base" instances set the same cra_name as "gcm" instances, e.g. "gcm(aes)" instead of "gcm_base(ctr(aes-generic),ghash-generic)". This requires extracting the block cipher name from the name of the ctr algorithm. It also requires starting to verify that the algorithms are really ctr and ghash, not something else entirely. But it would be bizarre if anyone were actually using non-gcm-compatible algorithms with gcm_base, so this shouldn't break anyone in practice. Fixes: d00aa19b ("[CRYPTO] gcm: Allow block cipher parameter") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
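The name-derivation step described here (and in the ccm_base patch above) amounts to recovering the inner cipher name from a "ctr(...)" cra_name; the following is a simplified sketch, not the actual template code.

```c
#include <linux/errno.h>
#include <linux/string.h>

/* Sketch: turn a ctr cra_name such as "ctr(aes)" into "aes", so the
 * instance can be registered under the cra_name "gcm(aes)". */
static int gcm_cipher_name_from_ctr(const char *ctr_name,
				    char *cipher, size_t len)
{
	const char *p, *q;

	if (strncmp(ctr_name, "ctr(", 4))
		return -EINVAL;		/* not a plain ctr(...) name */

	p = ctr_name + 4;
	q = strrchr(p, ')');
	if (!q || (size_t)(q - p) >= len)
		return -EINVAL;

	memcpy(cipher, p, q - p);
	cipher[q - p] = '\0';
	return 0;
}
```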
-
- 18 Apr, 2019 15 commits
-
-
Eric Biggers authored
shash_ahash_digest(), which is the ->digest() method for ahash tfms that use an shash algorithm, has an optimization where crypto_shash_digest() is called if the data is in a single page. But an off-by-one error prevented this path from being taken unless the user happened to provide extra data in the scatterlist. Fix it. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Replace all calls to in_interrupt() in the PowerPC crypto code with !crypto_simd_usable(). This causes the crypto self-tests to test the no-SIMD code paths when CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y. The p8_ghash algorithm is currently failing and needs to be fixed, as it produces the wrong digest when no-SIMD updates are mixed with SIMD ones. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
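The substitution looks roughly as follows; the function and fallback names are placeholders for this sketch, not the p8_ghash code itself.

```c
#include <crypto/internal/simd.h>
#include <linux/preempt.h>
#include <linux/types.h>
#include <asm/switch_to.h>

/* hypothetical scalar fallback, stubbed out for the sketch */
static int generic_fallback_update(const u8 *data, unsigned int len)
{
	return 0;
}

static int p8_example_update(const u8 *data, unsigned int len)
{
	/* ask the simd helper instead of guessing from in_interrupt() */
	if (!crypto_simd_usable())
		return generic_fallback_update(data, len);

	preempt_disable();
	enable_kernel_vsx();
	/* ... vectorized update would go here ... */
	disable_kernel_vsx();
	preempt_enable();
	return 0;
}
```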
-
Eric Biggers authored
The cavium crypto driver adds 'sizeof(struct ablkcipher_request)' to its request size because "the cryptd daemon uses this memory for request_ctx information". This is incorrect and unnecessary; cryptd doesn't require wrapped algorithms to reserve extra request space. So remove this. Also remove the unneeded memset() of the tfm context to 0. It's already zeroed on allocation. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Remove cryptd_alloc_ablkcipher() and the ability of cryptd to create algorithms with the deprecated "ablkcipher" type. This has been unused since commit 0e145b47 ("crypto: ablk_helper - remove ablk_helper"). Instead, cryptd_alloc_skcipher() is used. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Sebastian Andrzej Siewior authored
In commit 71052dcf ("crypto: scompress - Use per-CPU struct instead multiple variables") I accidentally initialized the memory multiple times on a random CPU. I should have initialized the memory on every CPU, as had been done earlier. I didn't notice this because the scheduler didn't move the task to another CPU. Guenter managed to do that and the code crashed as expected. Allocate / free per-CPU memory on each CPU. Fixes: 71052dcf ("crypto: scompress - Use per-CPU struct instead multiple variables") Reported-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Tested-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
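The per-CPU allocation pattern being restored looks roughly like the sketch below; the struct layout and buffer size are placeholders, not the actual scompress definitions.

```c
#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/percpu.h>
#include <linux/vmalloc.h>

#define SCOMP_SCRATCH_SIZE	(128 * 1024)	/* placeholder size */

struct scomp_scratch {				/* placeholder layout */
	void *src;
	void *dst;
};

static DEFINE_PER_CPU(struct scomp_scratch, scomp_scratch);

/* Allocate the scratch buffers for every possible CPU, not just the one
 * the init code happens to run on. */
static int scomp_alloc_all_scratches(void)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		struct scomp_scratch *s = per_cpu_ptr(&scomp_scratch, cpu);

		s->src = vmalloc(SCOMP_SCRATCH_SIZE);
		s->dst = vmalloc(SCOMP_SCRATCH_SIZE);
		if (!s->src || !s->dst)
			return -ENOMEM;
	}
	return 0;
}
```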
-
Zhang Zhijie authored
The Kernel Crypto API requires the next IV data to be output to the IV buffer by the CBC implementation. So the last block of ciphertext should be copied into the assigned IV buffer. Reported-by: Eric Biggers <ebiggers@google.com> Fixes: 433cd2c6 ("crypto: rockchip - add crypto driver for rk3288") Cc: <stable@vger.kernel.org> # v4.5+ Signed-off-by: Zhang Zhijie <zhangzj@rock-chips.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
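The convention can be expressed compactly with the generic scatterwalk helper; this is a sketch of the idea using the skcipher request type, not the driver's exact code.

```c
#include <crypto/scatterwalk.h>
#include <crypto/skcipher.h>

/* Sketch: after CBC encryption, the API expects the IV buffer to hold the
 * next IV, i.e. the last ciphertext block. */
static void cbc_store_next_iv(struct skcipher_request *req,
			      unsigned int ivsize)
{
	/* copy the final ivsize bytes of ciphertext into the request IV */
	scatterwalk_map_and_copy(req->iv, req->dst,
				 req->cryptlen - ivsize, ivsize, 0);
}
```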
-
Eric Biggers authored
Use subsys_initcall for registration of all templates and generic algorithm implementations, rather than module_init. Then change cryptomgr to use arch_initcall, to place it before the subsys_initcalls. This is needed so that when both a generic and optimized implementation of an algorithm are built into the kernel (not loadable modules), the generic implementation is registered before the optimized one. Otherwise, the self-tests for the optimized implementation are unable to allocate the generic implementation for the new comparison fuzz tests. Note that on arm, a side effect of this change is that self-tests for generic implementations may run before the unaligned access handler has been installed. So, unaligned accesses will crash the kernel. This is arguably a good thing as it makes it easier to detect that type of bug. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
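The registration-ordering change boils down to the pattern below; the algorithm and function names are hypothetical placeholders used only to show the initcall levels.

```c
#include <crypto/internal/hash.h>
#include <linux/init.h>

static struct shash_alg example_generic_alg;	/* placeholder: real fields omitted */

static int __init example_generic_mod_init(void)
{
	return crypto_register_shash(&example_generic_alg);
}

/* was: module_init(example_generic_mod_init);
 * Registering at subsys_initcall time puts the generic implementation in
 * place before optimized implementations register and get self-tested. */
subsys_initcall(example_generic_mod_init);
```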
-
Eric Biggers authored
When the extra crypto self-tests are enabled, test each AEAD algorithm against its generic implementation when one is available. This involves: checking the algorithm properties for consistency, then randomly generating test vectors using the generic implementation and running them against the implementation under test. Both good and bad inputs are tested. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
When the extra crypto self-tests are enabled, test each skcipher algorithm against its generic implementation when one is available. This involves: checking the algorithm properties for consistency, then randomly generating test vectors using the generic implementation and running them against the implementation under test. Both good and bad inputs are tested. This has already detected a bug in the skcipher_walk API, a bug in the LRW template, and an inconsistency in the cts implementations. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
When the extra crypto self-tests are enabled, test each hash algorithm against its generic implementation when one is available. This involves: checking the algorithm properties for consistency, then randomly generating test vectors using the generic implementation and running them against the implementation under test. Both good and bad inputs are tested. This has already detected a bug in the x86 implementation of poly1305, bugs in crct10dif, and an inconsistency in cbcmac. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Add some helper functions in preparation for fuzz testing algorithms against their generic implementation. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
In preparation for fuzz testing algorithms against their generic implementation, make error messages in testmgr identify test vectors by name rather than index. Built-in test vectors are simply "named" by their index in testmgr.h, as before. But (in later patches) generated test vectors will be given more descriptive names to help developers debug problems detected with them. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Update testmgr to support testing for specific errors from setkey() and digest() for hashes; setkey() and encrypt()/decrypt() for skciphers and ciphers; and setkey(), setauthsize(), and encrypt()/decrypt() for AEADs. This is useful because algorithms usually restrict the lengths or format of the message, key, and/or authentication tag in some way. And bad inputs should be tested too, not just good inputs. As part of this change, remove the ambiguously-named 'fail' flag and replace it with 'setkey_error = -EINVAL' for the only test vector that used it -- the DES weak key test vector. Note that this tightens the test to require -EINVAL rather than any error code, but AFAICS this won't cause any test failure. Other than that, these new fields aren't set on any test vectors yet. Later patches will do so. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
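An excerpt-style sketch of how a vector in crypto/testmgr.h might look with the new field (the field name follows the commit text; the vector contents are illustrative):

```c
/* DES weak-key vector: setkey() must now return exactly -EINVAL,
 * replacing the old ambiguous 'fail' flag. */
static const struct cipher_testvec des_weak_key_tv = {
	.key		= "\x01\x01\x01\x01\x01\x01\x01\x01",
	.klen		= 8,
	.setkey_error	= -EINVAL,
};
```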
-
Vitaly Chikunov authored
Allow EC-RDSA signatures to be used for IMA by determining the signature type from the hash algorithm name. This works well for EC-RDSA since Streebog and EC-RDSA should always be used together. Cc: Mimi Zohar <zohar@linux.ibm.com> Cc: Dmitry Kasatkin <dmitry.kasatkin@gmail.com> Cc: linux-integrity@vger.kernel.org Signed-off-by: Vitaly Chikunov <vt@altlinux.org> Reviewed-by: Mimi Zohar <zohar@linux.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Vitaly Chikunov authored
Add testmgr test vectors for the EC-RDSA algorithm for each of the five supported parameter sets (curves). Because there are no officially published test vectors for the curves, the vectors are generated by gost-engine. Signed-off-by: Vitaly Chikunov <vt@altlinux.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-