- 09 Mar, 2018 7 commits
-
Tero Kristo authored
On certain platforms like DRA7xx, having memory > 2GB with LPAE enabled imposes the constraint that DMA is only possible within the first 2GB, which is marked as ZONE_DMA. But OpenSSL, when used with cryptodev, does not make sure that the input buffer is DMA capable. So, add a check to verify that the input buffer is capable of DMA.

Signed-off-by: Tero Kristo <t-kristo@ti.com>
Reported-by: Aparna Balasubramanian <aparnab@ti.com>
Reviewed-by: Lokesh Vutla <lokeshvutla@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
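For illustration, a minimal sketch of such a check under the DRA7xx layout described above (the helper name and the copy-path fallback are assumptions, not the driver's actual code):

    #include <linux/scatterlist.h>
    #include <linux/mmzone.h>

    /* Sketch: true only if every page in the scatterlist lies in
     * ZONE_DMA (the first 2GB here) and can therefore be DMAed. */
    static bool sg_is_dma_capable(struct scatterlist *sg)
    {
        for (; sg; sg = sg_next(sg)) {
            if (page_zonenum(sg_page(sg)) != ZONE_DMA)
                return false;    /* caller must copy/bounce instead */
        }
        return true;
    }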
-
LEROY Christophe authored
req_ctx->hw_context is mainly used by the HW, so there is no need to sync the HW and the CPU every time hw_context is DMA mapped. This patch modifies the DMA mapping in order to limit synchronisation to the situations that actually need it.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
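A hedged sketch of the general technique (the struct and helper names here are assumptions, not talitos code): map once while skipping the implicit CPU sync, then sync explicitly only around real CPU accesses.

    #include <linux/dma-mapping.h>

    static int map_hw_context(struct device *dev, struct my_req_ctx *req_ctx,
                              size_t len)
    {
        /* Map once, without automatic CPU<->device syncs. */
        req_ctx->dma = dma_map_single_attrs(dev, req_ctx->hw_context, len,
                                            DMA_BIDIRECTIONAL,
                                            DMA_ATTR_SKIP_CPU_SYNC);
        if (dma_mapping_error(dev, req_ctx->dma))
            return -ENOMEM;
        return 0;
    }

    /* Later, only around a real CPU access to the context: */
    dma_sync_single_for_cpu(dev, req_ctx->dma, len, DMA_BIDIRECTIONAL);
    /* ... CPU reads/updates req_ctx->hw_context here ... */
    dma_sync_single_for_device(dev, req_ctx->dma, len, DMA_BIDIRECTIONAL);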
-
LEROY Christophe authored
Commit 49f9783b ("crypto: talitos - do hw_context DMA mapping outside the requests") introduced a persistent DMA mapping of req_ctx->hw_context. Commit 37b5e889 ("crypto: talitos - chain in buffered data for ahash on SEC1") introduced a persistent DMA mapping of req_ctx->buf. As there is no destructor for req_ctx (the request context), the associated DMA handles were set in ctx (the tfm context). This is wrong, as several hash operations can run with the same ctx. This patch removes this persistent mapping.

Reported-by: Horia Geanta <horia.geanta@nxp.com>
Cc: <stable@vger.kernel.org>
Fixes: 49f9783b ("crypto: talitos - do hw_context DMA mapping outside the requests")
Fixes: 37b5e889 ("crypto: talitos - chain in buffered data for ahash on SEC1")
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Tested-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Colin Ian King authored
Functions cavium_rng_remove and cavium_rng_remove_vf are local to the source and do not need to be in global scope, so make them static. Cleans up sparse warnings:

drivers/char/hw_random/cavium-rng-vf.c:80:7: warning: symbol 'cavium_rng_remove_vf' was not declared. Should it be static?
drivers/char/hw_random/cavium-rng.c:65:7: warning: symbol 'cavium_rng_remove' was not declared. Should it be static?

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
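The fix itself is a one-keyword change per function; the signature below is a sketch inferred from the warnings, not copied from the driver:

    /* before: implicit external linkage, hence the sparse warning */
    void cavium_rng_remove_vf(struct pci_dev *pdev);

    /* after: internal linkage, visible only within this file */
    static void cavium_rng_remove_vf(struct pci_dev *pdev);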
-
Antoine Tenart authored
This patch updates the safexcel_hmac_init_pad() function to also wait for completion when the digest return code is -EBUSY, as that means the request is sitting in the backlog to be processed later.

Fixes: 1b44c5a6 ("crypto: inside-secure - add SafeXcel EIP197 crypto engine driver")
Suggested-by: Ofer Heifetz <oferh@marvell.com>
Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
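The pattern, sketched as a fragment inside the helper (the request and completion variable names are assumptions): both -EINPROGRESS and -EBUSY mean the request will complete asynchronously, -EBUSY specifically meaning it was backlogged, so both must wait:

    ret = crypto_ahash_digest(areq);
    if (ret == -EINPROGRESS || ret == -EBUSY) {
        wait_for_completion(&result.completion);
        ret = result.error;    /* actual status set by the callback */
    }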
-
Antoine Tenart authored
Under heavy traffic the DMA mapping is overwritten by multiple requests, as the DMA address is stored in a global context. This patch moves this information into the per-hash request context so that it cannot be overwritten.

Fixes: 1b44c5a6 ("crypto: inside-secure - add SafeXcel EIP197 crypto engine driver")
Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
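Schematically (struct names as in the driver; the field placement is a sketch with surrounding members elided):

    /* before: one handle per tfm, shared by every in-flight request */
    struct safexcel_ahash_ctx {
        /* ... */
        dma_addr_t result_dma;    /* racy under concurrent requests */
    };

    /* after: each request owns its handle, so nothing is overwritten */
    struct safexcel_ahash_req {
        /* ... */
        dma_addr_t result_dma;
    };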
-
Ofer Heifetz authored
Under heavy traffic the DMA mapping is overwritten by multiple requests, as the DMA address is stored in a global context. This patch moves this information into the per-hash request context so that it cannot be overwritten.

Fixes: 1b44c5a6 ("crypto: inside-secure - add SafeXcel EIP197 crypto engine driver")
Signed-off-by: Ofer Heifetz <oferh@marvell.com>
[Antoine: rebased the patch, small fixes, commit message.]
Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 02 Mar, 2018 33 commits
-
Brijesh Singh authored
Commit 1d57b17c ("crypto: ccp: Define SEV userspace ioctl and command id") added the invalid-length enum but we missed capitalizing it.

Fixes: 1d57b17c (crypto: ccp: Define SEV userspace ioctl ...)
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
CC: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
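That is, in the userspace-visible return-code enum (a sketch; surrounding enumerators elided):

    typedef enum {
        /* ... */
        SEV_RET_INVALID_LEN,    /* previously lowercase, unlike its siblings */
        /* ... */
    } sev_ret_code;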
-
Brijesh Singh authored
Fix sparse warning: Using plain integer as NULL pointer. Replace the assignment of 0 to the pointer with a NULL assignment.

Fixes: 200664d5 (Add Secure Encrypted Virtualization ...)
Cc: Borislav Petkov <bp@suse.de>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Gary Hook <gary.hook@amd.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
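The pattern, schematically (the pointer here is a stand-in, not the actual field):

    void *p;

    p = 0;       /* before: sparse warns "Using plain integer as NULL pointer" */
    p = NULL;    /* after: explicit null pointer constant */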
-
Maciej S. Szmigiero authored
rsa-pkcs1pad uses the value returned from an RSA implementation's max_size callback as the size of the input buffer passed to the RSA implementation for encrypt and sign operations. The CCP RSA implementation uses a hardware input buffer whose size depends only on the current RSA key length, so it should return this key length from the max_size callback, too. This also matches what the kernel software RSA implementation does.

Previously, the value returned from this callback was always the maximum RSA key size the CCP hardware supports. This resulted in this huge buffer being passed by rsa-pkcs1pad to the CCP even for smaller key sizes, and then in a buffer overflow when ccp_run_rsa_cmd() tried to copy this large input buffer into a hardware input buffer sized for the RSA key length.

Signed-off-by: Maciej S. Szmigiero <mail@maciej.szmigiero.name>
Fixes: ceeec0af ("crypto: ccp - Add support for RSA on the CCP")
Cc: stable@vger.kernel.org
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
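A minimal sketch of such a callback (the context struct and field names are assumptions, not the driver's exact ones):

    static unsigned int ccp_rsa_max_size(struct crypto_akcipher *tfm)
    {
        struct ccp_ctx *ctx = akcipher_tfm_ctx(tfm);

        /* report the current key length, not the HW maximum */
        return ctx->u.rsa.key_len;
    }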
-
Sebastian Andrzej Siewior authored
I don't see why we need to take a single write lock and disable interrupts while setting up debugfs. This is what happens when we try anyway:

|ccp 0000:03:00.2: enabling device (0000 -> 0002)
|BUG: sleeping function called from invalid context at kernel/locking/rwsem.c:69
|in_atomic(): 1, irqs_disabled(): 1, pid: 3, name: kworker/0:0
|irq event stamp: 17150
|hardirqs last enabled at (17149): [<0000000097a18c49>] restore_regs_and_return_to_kernel+0x0/0x23
|hardirqs last disabled at (17150): [<000000000773b3a9>] _raw_write_lock_irqsave+0x1b/0x50
|softirqs last enabled at (17148): [<0000000064d56155>] __do_softirq+0x3b8/0x4c1
|softirqs last disabled at (17125): [<0000000092633c18>] irq_exit+0xb1/0xc0
|CPU: 0 PID: 3 Comm: kworker/0:0 Not tainted 4.16.0-rc2+ #30
|Workqueue: events work_for_cpu_fn
|Call Trace:
| dump_stack+0x7d/0xb6
| ___might_sleep+0x1eb/0x250
| down_write+0x17/0x60
| start_creating+0x4c/0xe0
| debugfs_create_dir+0x9/0x100
| ccp5_debugfs_setup+0x191/0x1b0
| ccp5_init+0x8a7/0x8c0
| ccp_dev_init+0xb8/0xe0
| sp_init+0x6c/0x90
| sp_pci_probe+0x26e/0x590
| local_pci_probe+0x3f/0x90
| work_for_cpu_fn+0x11/0x20
| process_one_work+0x1ff/0x650
| worker_thread+0x1d4/0x3a0
| kthread+0xfe/0x130
| ret_from_fork+0x27/0x50

If any locking is required, a simple mutex will do it.

Cc: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
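A hedged sketch of the mutex variant (the directory pointer and function name are assumptions): debugfs_create_dir() can sleep, so a sleeping lock is the appropriate primitive here.

    static DEFINE_MUTEX(ccp_debugfs_lock);
    static struct dentry *ccp_debugfs_dir;

    static void ccp5_debugfs_setup_dir(void)
    {
        mutex_lock(&ccp_debugfs_lock);
        if (!ccp_debugfs_dir)
            ccp_debugfs_dir = debugfs_create_dir("ccp", NULL);
        mutex_unlock(&ccp_debugfs_lock);
    }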
-
Antoine Tenart authored
The Atmel AES driver uses memzero_explicit on the keys on error, but because of a typo the wrong variable is zeroed. Fix this by using the right variable.

Fixes: 89a82ef8 ("crypto: atmel-authenc - add support to authenc(hmac(shaX), Y(aes)) modes")
Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Reviewed-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
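The shape of the bug, schematically (the variable names here are illustrative only):

    err:
        /* before: wiped a look-alike variable, key material survived */
        memzero_explicit(&key, sizeof(key));
        /* after: wipe the buffer that actually holds the keys */
        memzero_explicit(&keys, sizeof(keys));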
-
Rui Miguel Silva authored
i.MX7x uses only two clocks for the CAAM module, so make sure we do not try to use the mem and the emi_slow clocks when running on the imx7d and imx7s machine types.

Cc: "Horia Geantă" <horia.geanta@nxp.com>
Cc: Aymen Sghaier <aymen.sghaier@nxp.com>
Cc: Fabio Estevam <fabio.estevam@nxp.com>
Cc: Peng Fan <peng.fan@nxp.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Lukas Auer <lukas.auer@aisec.fraunhofer.de>
Signed-off-by: Rui Miguel Silva <rui.silva@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
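A sketch of how such gating is typically done (of_machine_is_compatible() is the standard OF helper; the compatible strings and clock handling here are assumptions):

    struct clk *clk_mem, *clk_emi_slow;
    bool imx7 = of_machine_is_compatible("fsl,imx7d") ||
                of_machine_is_compatible("fsl,imx7s");

    if (!imx7) {
        /* only non-i.MX7 parts have these two extra clocks */
        clk_mem = devm_clk_get(dev, "mem");
        clk_emi_slow = devm_clk_get(dev, "emi_slow");
    }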
-
Rui Miguel Silva authored
caam_remove already removes the debugfs entry, so drop the removal done immediately before calling caam_remove. This fixes a NULL dereference in the error paths when caam_probe fails.

Fixes: 67c2315d ("crypto: caam - add Queue Interface (QI) backend support")
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Cc: "Horia Geantă" <horia.geanta@nxp.com>
Cc: Aymen Sghaier <aymen.sghaier@nxp.com>
Cc: Fabio Estevam <fabio.estevam@nxp.com>
Cc: Peng Fan <peng.fan@nxp.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Lukas Auer <lukas.auer@aisec.fraunhofer.de>
Cc: <stable@vger.kernel.org> # 4.12+
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Rui Miguel Silva <rui.silva@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Brijesh Singh authored
Paulian reported the kernel crash below on a Ryzen 5 system:

BUG: unable to handle kernel NULL pointer dereference at 0000000000000073
RIP: 0010:.LC0+0x41f/0xa00
RSP: 0018:ffffa9968003bdd0 EFLAGS: 00010002
RAX: ffffffffb113b130 RBX: 0000000000000000 RCX: 00000000000005a7
RDX: 00000000000000ff RSI: ffff8b46dee651a0 RDI: ffffffffb1bd617c
RBP: 0000000000000246 R08: 00000000000251a0 R09: 0000000000000000
R10: ffffd81f11a38200 R11: ffff8b52e8e0a161 R12: ffffffffb19db220
R13: 0000000000000007 R14: ffffffffb17e4888 R15: 5dccd7affc30a31e
FS: 0000000000000000(0000) GS:ffff8b46dee40000(0000) knlGS:0000000000000000
CR2: 0000000000000073 CR3: 000080128120a000 CR4: 00000000003406e0
Call Trace:
 ? sp_get_psp_master_device+0x56/0x80
 ? map_properties+0x540/0x540
 ? psp_pci_init+0x20/0xe0
 ? map_properties+0x540/0x540
 ? sp_mod_init+0x16/0x1a
 ? do_one_initcall+0x4b/0x190
 ? kernel_init_freeable+0x19b/0x23c
 ? rest_init+0xb0/0xb0
 ? kernel_init+0xa/0x100
 ? ret_from_fork+0x22/0x40

Since Ryzen does not support the PSP/SEV firmware, i->psp_data will be NULL in all sp instances. In that case, 'i' will point to the list head after list_for_each_entry(). Dereferencing the head causes the kernel crash. Add a check so that the get-master-device callback is only called when PSP/SEV is detected.

Reported-by: Paulian Bogdan Marinca <paulian@marinca.net>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
CC: Gary R Hook <gary.hook@amd.com>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
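The underlying list_for_each_entry() pitfall and the safe pattern, sketched (the list and member names here are assumptions):

    struct sp_device *i, *master = NULL;

    list_for_each_entry(i, &sp_units, entry) {
        if (i->psp_data) {    /* NULL on parts without PSP/SEV */
            master = i;
            break;
        }
    }
    /* After the loop, 'i' points at the list head if nothing matched;
     * only 'master' (possibly NULL) is safe to dereference. */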
-
Eric Biggers authored
All users of ablk_helper have been converted over to crypto_simd, so remove ablk_helper.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
There are no users of the original glue_fpu_begin() anymore, so rename glue_skwalk_fpu_begin() to glue_fpu_begin() so that it matches glue_fpu_end() again.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Now that all glue_helper users have been switched from the blkcipher interface over to the skcipher interface, remove the versions of the glue_helper functions that handled the blkcipher interface.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Now that all users of lrw_crypt() have been removed in favor of the LRW template wrapping an ECB mode algorithm, remove lrw_crypt(). Also remove crypto/lrw.h, as that is no longer needed either; and fold 'struct lrw_table_ctx' into 'struct priv', lrw_init_table() into setkey(), and lrw_free_table() into exit_tfm().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Now that all users of xts_crypt() have been removed in favor of the XTS template wrapping an ECB mode algorithm, remove xts_crypt().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Convert the AESNI AVX and AESNI AVX2 implementations of Camellia from the (deprecated) ablkcipher and blkcipher interfaces over to the skcipher interface. Note that this includes replacing the use of ablk_helper with crypto_simd.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
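The crypto_simd replacement pattern, sketched (the algorithm names below are illustrative; internal FPU-requiring implementations are conventionally prefixed with "__"):

    struct simd_skcipher_alg *simd;

    /* wrap the internal skcipher with a SIMD helper that defers to
     * cryptd whenever the FPU is not usable in the current context */
    simd = simd_skcipher_create_compat("cbc(camellia)",
                                       "cbc-camellia-aesni",
                                       "__cbc-camellia-aesni");
    if (IS_ERR(simd))
        return PTR_ERR(simd);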
-
Eric Biggers authored
Convert the x86 asm implementation of Camellia from the (deprecated) blkcipher interface over to the skcipher interface.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
The XTS template now wraps an ECB mode algorithm rather than the block cipher directly. Therefore it is now redundant for crypto modules to wrap their ECB code with generic XTS code themselves via xts_crypt(). Remove the xts-camellia-asm algorithm which did this. Users who request xts(camellia) and previously would have gotten xts-camellia-asm will now get xts(ecb-camellia-asm) instead, which is just as fast.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
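In other words, nothing changes for users of the generic name; resolution now goes through the template (a sketch):

    struct crypto_skcipher *tfm;

    /* instantiates xts(ecb-camellia-asm) once xts-camellia-asm is gone */
    tfm = crypto_alloc_skcipher("xts(camellia)", 0, 0);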
-
Eric Biggers authored
The LRW template now wraps an ECB mode algorithm rather than the block cipher directly. Therefore it is now redundant for crypto modules to wrap their ECB code with generic LRW code themselves via lrw_crypt(). Remove the lrw-camellia-asm algorithm which did this. Users who request lrw(camellia) and previously would have gotten lrw-camellia-asm will now get lrw(ecb-camellia-asm) instead, which is just as fast.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
The LRW template now wraps an ECB mode algorithm rather than the block cipher directly. Therefore it is now redundant for crypto modules to wrap their ECB code with generic LRW code themselves via lrw_crypt(). Remove the lrw-camellia-aesni-avx2 algorithm which did this. Users who request lrw(camellia) and previously would have gotten lrw-camellia-aesni-avx2 will now get lrw(ecb-camellia-aesni-avx2) instead, which is just as fast.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
The LRW template now wraps an ECB mode algorithm rather than the block cipher directly. Therefore it is now redundant for crypto modules to wrap their ECB code with generic LRW code themselves via lrw_crypt(). Remove the lrw-camellia-aesni algorithm which did this. Users who request lrw(camellia) and previously would have gotten lrw-camellia-aesni will now get lrw(ecb-camellia-aesni) instead, which is just as fast.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Convert the x86 asm implementation of Triple DES from the (deprecated) blkcipher interface over to the skcipher interface.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Convert the x86 asm implementation of Blowfish from the (deprecated) blkcipher interface over to the skcipher interface.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Convert the AVX implementation of CAST6 from the (deprecated) ablkcipher and blkcipher interfaces over to the skcipher interface. Note that this includes replacing the use of ablk_helper with crypto_simd.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
The LRW template now wraps an ECB mode algorithm rather than the block cipher directly. Therefore it is now redundant for crypto modules to wrap their ECB code with generic LRW code themselves via lrw_crypt(). Remove the lrw-cast6-avx algorithm which did this. Users who request lrw(cast6) and previously would have gotten lrw-cast6-avx will now get lrw(ecb-cast6-avx) instead, which is just as fast.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Convert the AVX implementation of CAST5 from the (deprecated) ablkcipher and blkcipher interfaces over to the skcipher interface. Note that this includes replacing the use of ablk_helper with crypto_simd.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
With ecb-cast5-avx, if a 128+ byte scatterlist element followed a shorter one, then the algorithm accidentally encrypted/decrypted only 8 bytes instead of the expected 128 bytes. Fix it by setting the encryption/decryption 'fn' correctly.

Fixes: c12ab20b ("crypto: cast5/avx - avoid using temporary stack buffers")
Cc: <stable@vger.kernel.org> # v3.8+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
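The shape of the fix, sketched (the helper names are assumptions): the 16-way routine is only valid for full 128-byte chunks, so 'fn' must be re-chosen for every chunk of the walk rather than once up front.

    while ((nbytes = walk->nbytes) >= CAST5_BLOCK_SIZE) {
        /* before the fix this selection was effectively made once,
         * so a short chunk left the 8-byte routine in place for a
         * following 128-byte chunk */
        fn = (nbytes >= 16 * CAST5_BLOCK_SIZE) ? fn_16way : fn_1way;
        nbytes = fn(desc, walk, nbytes);
        /* ... advance the walk with the remaining byte count ... */
    }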
-
Eric Biggers authored
Convert the AVX implementation of Twofish from the (deprecated) ablkcipher and blkcipher interfaces over to the skcipher interface. Note that this includes replacing the use of ablk_helper with crypto_simd.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
The LRW template now wraps an ECB mode algorithm rather than the block cipher directly. Therefore it is now redundant for crypto modules to wrap their ECB code with generic LRW code themselves via lrw_crypt(). Remove the lrw-twofish-avx algorithm which did this. Users who request lrw(twofish) and previously would have gotten lrw-twofish-avx will now get lrw(ecb-twofish-avx) instead, which is just as fast.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Convert the 3-way implementation of Twofish from the (deprecated) blkcipher interface over to the skcipher interface.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
The XTS template now wraps an ECB mode algorithm rather than the block cipher directly. Therefore it is now redundant for crypto modules to wrap their ECB code with generic XTS code themselves via xts_crypt(). Remove the xts-twofish-3way algorithm which did this. Users who request xts(twofish) and previously would have gotten xts-twofish-3way will now get xts(ecb-twofish-3way) instead, which is just as fast.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
The LRW template now wraps an ECB mode algorithm rather than the block cipher directly. Therefore it is now redundant for crypto modules to wrap their ECB code with generic LRW code themselves via lrw_crypt(). Remove the lrw-twofish-3way algorithm which did this. Users who request lrw(twofish) and previously would have gotten lrw-twofish-3way will now get lrw(ecb-twofish-3way) instead, which is just as fast.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Convert the AVX and AVX2 implementations of Serpent from the (deprecated) ablkcipher and blkcipher interfaces over to the skcipher interface. Note that this includes replacing the use of ablk_helper with crypto_simd.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
The LRW template now wraps an ECB mode algorithm rather than the block cipher directly. Therefore it is now redundant for crypto modules to wrap their ECB code with generic LRW code themselves via lrw_crypt(). Remove the lrw-serpent-avx algorithm which did this. Users who request lrw(serpent) and previously would have gotten lrw-serpent-avx will now get lrw(ecb-serpent-avx) instead, which is just as fast.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
The LRW template now wraps an ECB mode algorithm rather than the block cipher directly. Therefore it is now redundant for crypto modules to wrap their ECB code with generic LRW code themselves via lrw_crypt(). Remove the lrw-serpent-avx2 algorithm which did this. Users who request lrw(serpent) and previously would have gotten lrw-serpent-avx2 will now get lrw(ecb-serpent-avx2) instead, which is just as fast.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-