- 11 Dec, 2019 40 commits
-
Herbert Xu authored
This patch switches hmac over to the new init_tfm/exit_tfm interface as opposed to cra_init/cra_exit. This way the shash API can make sure that descsize does not exceed the maximum. This patch also adds the API helper shash_alg_instance. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch switches padlock-sha over to the new init_tfm/exit_tfm interface as opposed to cra_init/cra_exit. This way the shash API can make sure that descsize does not exceed the maximum. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
The shash interface supports a dynamic descsize field because of the presence of fallbacks (it's just padlock-sha actually, perhaps we can remove it one day). As it is, the API does not verify the setting of descsize at all. It is up to the individual algorithms to ensure that descsize does not exceed the specified maximum value of HASH_MAX_DESCSIZE (going above would cause stack corruption). In order to allow the API to impose this limit directly, this patch adds init_tfm/exit_tfm hooks to the shash_alg structure. We can then verify the descsize setting in the API directly. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
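A minimal sketch of what the API-level check can look like once the hooks exist (illustrative only; the hook signatures follow the description above, but the exact wiring lives in crypto/shash.c):

    #include <crypto/internal/hash.h>

    static int crypto_shash_init_tfm(struct crypto_tfm *tfm)
    {
            struct crypto_shash *hash = __crypto_shash_cast(tfm);
            struct shash_alg *shash = crypto_shash_alg(hash);
            int err;

            hash->descsize = shash->descsize;

            if (shash->init_tfm) {
                    err = shash->init_tfm(hash);
                    if (err)
                            return err;
            }

            /* The API, not each driver, now enforces the stack limit. */
            if (hash->descsize > HASH_MAX_DESCSIZE) {
                    if (shash->exit_tfm)
                            shash->exit_tfm(hash);
                    return -EINVAL;
            }

            return 0;
    }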
-
Herbert Xu authored
This patch explains the logic behind crypto_remove_spawns and its underlying crypto_more_spawns. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
Currently when a spawn is removed we will zap its alg field. This is racy because the spawn could belong to an unregistered instance which may dereference the spawn->alg field. This patch fixes this by keeping spawn->alg constant and instead adding a new spawn->dead field to indicate that a spawn is going away. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
The function crypto_spawn_alg is racy because it drops the lock before shooting the dying algorithm. The algorithm could disappear altogether before we shoot it. This patch fixes it by moving the shooting into the locked section. Fixes: 6bfd4809 ("[CRYPTO] api: Added spawns") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
arc4 is no longer considered secure, so it shouldn't be used, even as just an example. Mention serpent and chacha20 instead. Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
We need to check whether spawn->alg is NULL under lock as otherwise the algorithm could be removed from under us after we have checked it and found it to be non-NULL. This could cause us to remove the spawn from a non-existent list. Fixes: 7ede5a5b ("crypto: api - Fix crypto_drop_spawn crash...") Cc: <stable@vger.kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Valdis Klētnieks authored
Building with W=1 causes a warning:

      CC [M]  arch/x86/crypto/chacha_glue.o
    In file included from arch/x86/crypto/chacha_glue.c:10:
    ./include/crypto/internal/chacha.h:37:1: warning: 'inline' is not at beginning of declaration [-Wold-style-declaration]
       37 | static int inline chacha12_setkey(struct crypto_skcipher *tfm, const u8 *key,
          | ^~~~~~

Straighten out the order to match the rest of the header file. Signed-off-by: Valdis Kletnieks <valdis.kletnieks@vt.edu> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
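The offending declaration, before and after (reconstructed from the warning above; the trailing parameter name is assumed):

    /* before: old-style declaration, 'inline' after the return type */
    static int inline chacha12_setkey(struct crypto_skcipher *tfm,
                                      const u8 *key, unsigned int keysize);

    /* after: matches the rest of the header */
    static inline int chacha12_setkey(struct crypto_skcipher *tfm,
                                      const u8 *key, unsigned int keysize);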
-
Tudor Ambarus authored
Move common alg type init to dedicated methods. Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Tudor Ambarus authored
Use core helper functions. Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Tudor Ambarus authored
There is no error handling, so change the return type to void. Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Tudor Ambarus authored
The 'err' member was initialized to 0 but its value never changed. Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Tudor Ambarus authored
The req->iv of the skcipher_request is expected to contain the last used IV. Update the req->iv for CTR mode. Fixes: bd3c7b5c ("crypto: atmel - add Atmel AES driver") Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
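A rough sketch of the expected behaviour (illustrative, not the driver code itself): after the operation completes, the counter in req->iv must have been advanced by the number of blocks processed, so that a caller chaining requests can reuse it:

    #include <linux/kernel.h>
    #include <crypto/aes.h>
    #include <crypto/algapi.h>

    /* Advance a CTR IV by the number of AES blocks covered by nbytes. */
    static void ctr_advance_iv(u8 iv[AES_BLOCK_SIZE], unsigned int nbytes)
    {
            unsigned int nblocks = DIV_ROUND_UP(nbytes, AES_BLOCK_SIZE);

            while (nblocks--)
                    crypto_inc(iv, AES_BLOCK_SIZE);  /* big-endian increment */
    }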
-
Tudor Ambarus authored
A 32 bit block counter is not supported by any of our AES IPs; they all implement a 16 bit block counter. Drop the 32 bit block counter logic. Fixes: fcac8365 ("crypto: atmel-aes - fix the counter overflow in CTR mode") Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
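Illustrative of the constraint (sketch, not the driver code): with only a 16 bit block counter, the driver can process at most this many bytes before the counter wraps and the request has to be split:

    #include <linux/kernel.h>
    #include <crypto/aes.h>

    static size_t ctr16_bytes_before_wrap(u16 ctr_lo, size_t remaining)
    {
            /* blocks left before the low 16 counter bits wrap to zero */
            u32 blocks_left = 0x10000 - ctr_lo;

            return min_t(size_t, remaining,
                         (size_t)blocks_left * AES_BLOCK_SIZE);
    }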
-
Tudor Ambarus authored
ECB mode does not use an IV. Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Tudor Ambarus authored
atmel_tdes_crypt_start() obtained a pointer to the tfm from dd, passed that tfm pointer to atmel_tdes_crypt_{dma,pdc}, and in the callees we obtained dd back from the tfm. Pass the dd pointer directly instead. Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Tudor Ambarus authored
Simplifies the configuration of the TDES IP. Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Tudor Ambarus authored
As stated in the datasheet, writing 0 into the Control Register has no effect. Remove this useless register access. Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Tudor Ambarus authored
Choose label names which say what the goto does and not from where the goto was issued. This avoids adding superfluous labels like "err_aes_buff". Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
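For example (an illustrative function in the recommended style; example_setup() is a hypothetical helper):

    #include <linux/slab.h>

    static void *example_buf;

    static int example_setup(void)
    {
            return 0;       /* stand-in for real hardware setup */
    }

    static int example_init(void)
    {
            int err;

            example_buf = kzalloc(64, GFP_KERNEL);
            if (!example_buf)
                    return -ENOMEM;

            err = example_setup();
            if (err)
                    goto free_buf;  /* label says what the goto does ... */

            return 0;

    free_buf:                       /* ... not where it was issued from */
            kfree(example_buf);
            return err;
    }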
-
Tudor Ambarus authored
In case the probe fails, the device/driver core takes care of printing the driver name, device name and error code. Drop superfluous error message at probe. Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Tudor Ambarus authored
atmel_{sha,tdes}_hw_version_init() calls atmel_{sha,tdes}_hw_init(), which may fail. Check the return code of atmel_{sha,tdes}_hw_init() and propagate the error if needed. Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Tudor Ambarus authored
Hash headers are not used. Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Tudor Ambarus authored
Increase the algorithm priorities so that hardware acceleration is preferred over the software implementations: the generic drivers use a priority of 100. Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
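Illustratively (the driver name and value below are placeholders; the point is simply that anything above the generic drivers' priority of 100 wins algorithm selection):

    #include <crypto/skcipher.h>

    static struct skcipher_alg example_alg = {
            .base.cra_name          = "cbc(aes)",
            .base.cra_driver_name   = "example-hw-cbc-aes",
            /* higher than the 100 used by the generic software driver,
             * so the accelerator is preferred when both are registered
             */
            .base.cra_priority      = 300,
    };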
-
Tudor Ambarus authored
atmel_tdes_write_n() should not modify its value argument. Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
As af_alg_release_parent may be called from BH context (most notably due to an async request that only completes after socket closure, or as reported here because of an RCU-delayed sk_destruct call), we must use bh_lock_sock instead of lock_sock. Reported-by: syzbot+c2f1558d49e25cc36e5e@syzkaller.appspotmail.com Reported-by: Eric Dumazet <eric.dumazet@gmail.com> Fixes: c840ac6a ("crypto: af_alg - Disallow bind/setkey/...") Cc: <stable@vger.kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
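The shape of the fix, roughly (sketch only; the real af_alg_release_parent also handles the parent refcounting under the lock):

    #include <net/sock.h>

    static void release_parent_sketch(struct sock *sk)
    {
            /* lock_sock() may sleep and so must not be used from BH
             * context; bh_lock_sock() is the softirq-safe variant.
             */
            bh_lock_sock(sk);
            /* ... drop this socket's reference to its parent ... */
            bh_unlock_sock(sk);
    }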
-
Daniel Jordan authored
Remove references to unused functions, standardize language, update to reflect new functionality, migrate to rst format, and fix all kernel-doc warnings. Fixes: 815613da ("kernel/padata.c: removed unused code") Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com> Cc: Eric Biggers <ebiggers@kernel.org> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Steffen Klassert <steffen.klassert@secunet.com> Cc: linux-crypto@vger.kernel.org Cc: linux-doc@vger.kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Daniel Jordan authored
reorder_objects is unused since the rework of padata's flushing, so remove it. Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com> Cc: Eric Biggers <ebiggers@kernel.org> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Steffen Klassert <steffen.klassert@secunet.com> Cc: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Daniel Jordan authored
Since commit 63d35788 ("crypto: pcrypt - remove padata cpumask notifier") this feature is unused, so get rid of it. Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com> Cc: Eric Biggers <ebiggers@kernel.org> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Steffen Klassert <steffen.klassert@secunet.com> Cc: linux-crypto@vger.kernel.org Cc: linux-doc@vger.kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Daniel Jordan authored
lockdep complains when padata's paths to update cpumasks via CPU hotplug and sysfs are both taken:

    # echo 0 > /sys/devices/system/cpu/cpu1/online
    # echo ff > /sys/kernel/pcrypt/pencrypt/parallel_cpumask

    ======================================================
    WARNING: possible circular locking dependency detected
    5.4.0-rc8-padata-cpuhp-v3+ #1 Not tainted
    ------------------------------------------------------
    bash/205 is trying to acquire lock:
    ffffffff8286bcd0 (cpu_hotplug_lock.rw_sem){++++}, at: padata_set_cpumask+0x2b/0x120

    but task is already holding lock:
    ffff8880001abfa0 (&pinst->lock){+.+.}, at: padata_set_cpumask+0x26/0x120

    which lock already depends on the new lock.

padata doesn't take cpu_hotplug_lock and pinst->lock in a consistent order. Which should be first? CPU hotplug calls into padata with cpu_hotplug_lock already held, so it should have priority. Fixes: 6751fb3c ("padata: Use get_online_cpus/put_online_cpus") Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com> Cc: Eric Biggers <ebiggers@kernel.org> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Steffen Klassert <steffen.klassert@secunet.com> Cc: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
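A sketch of the resulting ordering in the sysfs path (illustrative; get_online_cpus() takes cpu_hotplug_lock, matching the order the hotplug path already imposes):

    #include <linux/cpu.h>
    #include <linux/mutex.h>
    #include <linux/padata.h>

    static void set_cpumask_sketch(struct padata_instance *pinst)
    {
            get_online_cpus();              /* cpu_hotplug_lock first ...  */
            mutex_lock(&pinst->lock);       /* ... then the instance lock  */

            /* ... validate and apply the new cpumasks ... */

            mutex_unlock(&pinst->lock);
            put_online_cpus();
    }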
-
Daniel Jordan authored
Configuring an instance's parallel mask without any online CPUs...

    echo 2 > /sys/kernel/pcrypt/pencrypt/parallel_cpumask
    echo 0 > /sys/devices/system/cpu/cpu1/online

...makes tcrypt mode=215 crash like this:

    divide error: 0000 [#1] SMP PTI
    CPU: 4 PID: 283 Comm: modprobe Not tainted 5.4.0-rc8-padata-doc-v2+ #2
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS ?-20191013_105130-anatol 04/01/2014
    RIP: 0010:padata_do_parallel+0x114/0x300
    Call Trace:
     pcrypt_aead_encrypt+0xc0/0xd0 [pcrypt]
     crypto_aead_encrypt+0x1f/0x30
     do_mult_aead_op+0x4e/0xdf [tcrypt]
     test_mb_aead_speed.constprop.0.cold+0x226/0x564 [tcrypt]
     do_test+0x28c2/0x4d49 [tcrypt]
     tcrypt_mod_init+0x55/0x1000 [tcrypt]
     ...

cpumask_weight() in padata_cpu_hash() returns 0 because the mask has no CPUs. The problem is __padata_remove_cpu() checks for valid masks too early and so doesn't mark the instance PADATA_INVALID as expected, which would have made padata_do_parallel() return an error before doing the division. Fix by introducing a second padata CPU hotplug state before CPUHP_BRINGUP_CPU so that __padata_remove_cpu() sees the online mask without @cpu. No need for the second argument to padata_replace() since @cpu is now already missing from the online mask. Fixes: 33e54450 ("padata: Handle empty padata cpumasks") Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com> Cc: Eric Biggers <ebiggers@kernel.org> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Cc: Steffen Klassert <steffen.klassert@secunet.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Of the three fields in crt_u.cipher (struct cipher_tfm), ->cit_setkey() is pointless because it always points to setkey() in crypto/cipher.c. ->cit_decrypt_one() and ->cit_encrypt_one() are slightly less pointless, since if the algorithm doesn't have an alignmask, they are set directly to ->cia_encrypt() and ->cia_decrypt(). However, this "optimization" isn't worthwhile because: - The "cipher" algorithm type is the only algorithm still using crt_u, so it's bloating every struct crypto_tfm for every algorithm type. - If the algorithm has an alignmask, this "optimization" actually makes things slower, as it causes 2 indirect calls per block rather than 1. - It adds extra code complexity. - Some templates already call ->cia_encrypt()/->cia_decrypt() directly instead of going through ->cit_encrypt_one()/->cit_decrypt_one(). - The "cipher" algorithm type never gives optimal performance anyway. For that, a higher-level type such as skcipher needs to be used. Therefore, just remove the extra indirection, and make crypto_cipher_setkey(), crypto_cipher_encrypt_one(), and crypto_cipher_decrypt_one() be direct calls into crypto/cipher.c. Also remove the unused function crypto_cipher_cast(). Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
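The end result is roughly this shape (sketch only, with the alignmask handling omitted for brevity): crypto_cipher_encrypt_one() becomes a real function that calls the algorithm's cia_encrypt() directly rather than bouncing through crt_u.cipher function pointers:

    #include <linux/crypto.h>

    void crypto_cipher_encrypt_one(struct crypto_cipher *tfm,
                                   u8 *dst, const u8 *src)
    {
            struct crypto_tfm *base = crypto_cipher_tfm(tfm);

            base->__crt_alg->cra_cipher.cia_encrypt(base, dst, src);
    }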
-
Eric Biggers authored
crt_u.compress (struct compress_tfm) is pointless because its two fields, ->cot_compress() and ->cot_decompress(), always point to crypto_compress() and crypto_decompress(). Remove this pointless indirection, and just make crypto_comp_compress() and crypto_comp_decompress() be direct calls to what used to be crypto_compress() and crypto_decompress(). Also remove the unused function crypto_comp_cast(). Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
The whole point of using an AEAD over length-preserving encryption is that the data is authenticated. However currently the fuzz tests don't test any inauthentic inputs to verify that the data is actually being authenticated. And only two algorithms ("rfc4543(gcm(aes))" and "ccm(aes)") even have any inauthentic test vectors at all. Therefore, update the AEAD fuzz tests to sometimes generate inauthentic test vectors, either by generating a (ciphertext, AAD) pair without using the key, or by mutating an authentic pair that was generated. To avoid flakiness, only assume this works reliably if the auth tag is at least 8 bytes. Also account for the rfc4106, rfc4309, and rfc7539esp algorithms intentionally ignoring the last 8 AAD bytes, and for some algorithms doing extra checks that result in EINVAL rather than EBADMSG. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
In preparation for adding inauthentic input fuzz tests, which don't require that a generic implementation of the algorithm be available, refactor test_aead_vs_generic_impl() so that instead there's a higher-level function test_aead_extra() which initializes a struct aead_extra_tests_ctx and then calls test_aead_vs_generic_impl() with a pointer to that struct. As a bonus, this reduces stack usage. Also switch from crypto_aead_alg(tfm)->maxauthsize to crypto_aead_maxauthsize(), now that the latter is available in <crypto/aead.h>. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
The alignment bug in ghash_setkey() fixed by commit 5c6bc4df ("crypto: ghash - fix unaligned memory access in ghash_setkey()") wasn't reliably detected by the crypto self-tests on ARM because the tests only set the keys directly from the test vectors. To improve test coverage, update the tests to sometimes pass misaligned keys to setkey(). This applies to shash, ahash, skcipher, and aead. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
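Illustrative of the idea (testmgr's real implementation routes this through its existing test-vector machinery): copy the key into a deliberately misaligned buffer before calling setkey():

    #include <linux/slab.h>
    #include <linux/string.h>
    #include <crypto/hash.h>

    static int setkey_misaligned(struct crypto_shash *tfm,
                                 const u8 *key, unsigned int keylen)
    {
            u8 *buf = kmalloc(keylen + 1, GFP_KERNEL);
            int err;

            if (!buf)
                    return -ENOMEM;

            memcpy(buf + 1, key, keylen);   /* +1 forces misalignment */
            err = crypto_shash_setkey(tfm, buf + 1, keylen);
            kfree(buf);
            return err;
    }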
-
Eric Biggers authored
When checking two implementations of the same skcipher algorithm for consistency, require that the minimum key size be the same, not just the maximum key size. There's no good reason to allow different minimum key sizes. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Currently if the comparison fuzz tests encounter an encryption error when generating an skcipher or AEAD test vector, they will still test the decryption side (passing it the uninitialized ciphertext buffer) and expect it to fail with the same error. This is sort of broken because it's not well-defined usage of the API to pass an uninitialized buffer, and furthermore in the AEAD case it's acceptable for the decryption error to be EBADMSG (meaning "inauthentic input") even if the encryption error was something else like EINVAL. Fix this for skcipher by explicitly initializing the ciphertext buffer on error, and for AEAD by skipping the decryption test on error. Reported-by: Pascal Van Leeuwen <pvanleeuwen@verimatrix.com> Fixes: d435e10e ("crypto: testmgr - fuzz skciphers against their generic implementation") Fixes: 40153b10 ("crypto: testmgr - fuzz AEADs against their generic implementation") Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Add a helper function crypto_skcipher_min_keysize() to mirror crypto_skcipher_max_keysize(). This will be used by the self-tests. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
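The helper's likely shape, mirroring the existing accessor in <crypto/skcipher.h>:

    static inline unsigned int crypto_skcipher_min_keysize(
            struct crypto_skcipher *tfm)
    {
            return crypto_skcipher_alg(tfm)->min_keysize;
    }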
-
Eric Biggers authored
Move crypto_aead_maxauthsize() to <crypto/aead.h> so that it's available to users of the API, not just AEAD implementations. This will be used by the self-tests. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
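After the move, the accessor is visible to API users via <crypto/aead.h>, roughly as:

    static inline unsigned int crypto_aead_maxauthsize(struct crypto_aead *aead)
    {
            return crypto_aead_alg(aead)->maxauthsize;
    }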
-