- 13 Feb, 2023 28 commits
-
-
Herbert Xu authored
Use the request_complete helpers instead of calling the completion function directly. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
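As a hedged illustration of the pattern behind this series of conversions (the driver and function names below are hypothetical), the completion path goes through the type-specific request_complete helper instead of invoking the callback stored in the request base by hand:

  #include <crypto/internal/skcipher.h>

  /* sketch of a driver completion handler after the conversion */
  static void mydrv_skcipher_done(void *data, int err)
  {
          struct skcipher_request *req = data;

          /* previously the driver would have called the callback directly,
           * e.g. req->base.complete(&req->base, err); */
          skcipher_request_complete(req, err);
  }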
-
Herbert Xu authored
Use the request_complete helpers instead of calling the completion function directly. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
Use the request_complete helpers instead of calling the completion function directly. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Acked-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
Use the request_complete helpers instead of calling the completion function directly. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
Use the request_complete helpers instead of calling the completion function directly. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
Use the request_complete helpers instead of calling the completion function directly. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
Use the request_complete helpers instead of calling the completion function directly. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Acked-by: Jesper Nilsson <jesper.nilsson@axis.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
Use the request_complete helpers instead of calling the completion function directly. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
Use the request_complete helpers instead of calling the completion function directly. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
Use the akcipher_request_complete helper instead of calling the completion function directly. In fact the previous code was buggy in that EINPROGRESS was never passed back to the original caller. Fixes: 3d5b1ecd ("crypto: rsa - RSA padding algorithm") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
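A hedged sketch of the bug class fixed here, not the exact rsa-pkcs1pad code (names illustrative): the completion handler must forward -EINPROGRESS to the original caller rather than returning early without any notification:

  #include <crypto/internal/akcipher.h>

  static void pad_child_done(void *data, int err)
  {
          struct akcipher_request *req = data;

          if (err == -EINPROGRESS)
                  goto out;                /* the buggy version simply returned here */

          /* ... post-process the child's result and update err ... */
  out:
          akcipher_request_complete(req, err);
  }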
-
Herbert Xu authored
Use the crypto_request_complete helper instead of calling the completion function directly. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
Use the crypto_request_complete helper instead of calling the completion function directly. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
Use the crypto_request_complete helper instead of calling the completion function directly. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
Use the crypto_request_complete helper instead of calling the completion function directly. This patch also removes the voodoo programming previously used for unaligned ahash operations and replaces it with a sub-request. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
Use the crypto_request_complete helper instead of calling the completion function directly. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
Use the crypto_request_complete helper instead of calling the completion function directly. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
Use the crypto_request_complete helper instead of calling the completion function directly. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
AEAD reuses the existing request object for its child. This is error-prone and unnecessary. This patch adds a subrequest object just like we do for skcipher and hash. This patch also restores the original completion function as we do for skcipher/hash. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
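A hedged sketch of the sub-request pattern described above (template and type names hypothetical): the template keeps its own aead_request in the per-request context and points it back at the caller's original completion function:

  #include <crypto/internal/aead.h>

  struct tmpl_tfm_ctx {
          struct crypto_aead *child;
  };

  struct tmpl_req_ctx {
          struct aead_request subreq;      /* child request, not the caller's object */
  };

  static int tmpl_encrypt(struct aead_request *req)
  {
          struct tmpl_tfm_ctx *ctx = crypto_aead_ctx(crypto_aead_reqtfm(req));
          struct tmpl_req_ctx *rctx = aead_request_ctx(req);
          struct aead_request *subreq = &rctx->subreq;

          aead_request_set_tfm(subreq, ctx->child);
          aead_request_set_callback(subreq, aead_request_flags(req),
                                    req->base.complete, req->base.data);
          aead_request_set_crypt(subreq, req->src, req->dst, req->cryptlen, req->iv);
          aead_request_set_ad(subreq, req->assoclen);

          return crypto_aead_encrypt(subreq);
  }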
-
Herbert Xu authored
This patch replaces the custom crypto completion function with crypto_req_done. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
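For reference, a hedged sketch of the generic pattern crypto_req_done enables (the wrapper name is hypothetical): a standard crypto_wait replaces the driver-private completion, and crypto_wait_req turns the asynchronous return code into a synchronous one:

  #include <linux/crypto.h>
  #include <crypto/skcipher.h>

  static int encrypt_and_wait(struct skcipher_request *req)
  {
          DECLARE_CRYPTO_WAIT(wait);

          skcipher_request_set_callback(req,
                                        CRYPTO_TFM_REQ_MAY_BACKLOG |
                                        CRYPTO_TFM_REQ_MAY_SLEEP,
                                        crypto_req_done, &wait);

          return crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
  }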
-
Herbert Xu authored
The crypto_async_request passed to the completion is not guaranteed to be the original request object. Only the data field can be relied upon. Fix this by storing the socket pointer with the AEAD request. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
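A purely illustrative, hedged sketch of the idea (structure and function names hypothetical, not the actual af_alg code): everything the callback needs is reachable from the data pointer it is given, never from the request pointer it receives:

  #include <net/sock.h>
  #include <crypto/aead.h>

  struct aead_async_ctx {                  /* hypothetical container */
          struct sock *sk;                 /* socket stored alongside the request */
          struct aead_request req;         /* keep last: the request ctx follows it */
  };

  static void aead_recv_done(void *data, int err)
  {
          struct aead_async_ctx *ctx = data;       /* only 'data' may be relied upon */
          struct sock *sk = ctx->sk;

          /* ... complete the AF_ALG operation on sk and release resources ... */
  }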
-
Herbert Xu authored
This patch adds temporary scaffolding so that the Crypto API completion function can take a void * instead of crypto_async_request. Once affected users have been converted this can be removed. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch adds temporary scaffolding so that the Crypto API completion function can take a void * instead of crypto_async_request. Once affected users have been converted this can be removed. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch adds temporary scaffolding so that the Crypto API completion function can take a void * instead of crypto_async_request. Once affected users have been converted this can be removed. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch replaces the custom crypto completion function with crypto_req_done. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch replaces the custom crypto completion function with crypto_req_done. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Acked-by: Jarkko Sakkinen <jarkko@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch adds temporary scaffolding so that the Crypto API completion function can take a void * instead of crypto_async_request. Once affected users have been converted this can be removed. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Acked-by: Jarkko Sakkinen <jarkko@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch adds temporary scaffolding so that the Crypto API completion function can take a void * instead of crypto_async_request. Once affected users have been converted this can be removed. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Acked-by: Mike Snitzer <snitzer@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
The crypto completion function currently takes a pointer to a struct crypto_async_request object. However, in reality the API does not allow the use of any part of the object apart from the data field. For example, ahash/shash will create a fake object on the stack to pass along a different data field. This leads to potential bugs where the user may try to dereference or otherwise use the crypto_async_request object. This patch adds some temporary scaffolding so that the completion function can take a void * instead. Once affected users have been converted this can be removed. The helper crypto_request_complete will remain even after the conversion is complete. It should be used instead of calling the completion function directly. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
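The end state this scaffolding works towards can be sketched as follows (the driver name is hypothetical); the old callback could only ever use the data field legitimately, and the new one receives nothing else:

  #include <crypto/internal/hash.h>

  /* before the series */
  static void mydrv_hash_done_old(struct crypto_async_request *areq, int err)
  {
          struct ahash_request *req = areq->data;  /* only ->data was ever valid */

          ahash_request_complete(req, err);
  }

  /* after the conversion */
  static void mydrv_hash_done_new(void *data, int err)
  {
          struct ahash_request *req = data;

          ahash_request_complete(req, err);
  }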
-
- 10 Feb, 2023 12 commits
-
-
Christophe JAILLET authored
There is no need to use GFP_ATOMIC here: GFP_KERNEL is already used for another memory allocation on the very next line. Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
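For context, a hedged illustration of the change (names hypothetical): the surrounding code can already sleep, so both allocations may use GFP_KERNEL:

  #include <linux/slab.h>

  static int example_alloc(void **a, void **b, size_t a_len, size_t b_len)
  {
          *a = kmalloc(a_len, GFP_KERNEL); /* was GFP_ATOMIC for no reason */
          if (!*a)
                  return -ENOMEM;

          *b = kmalloc(b_len, GFP_KERNEL); /* the neighbouring allocation */
          if (!*b) {
                  kfree(*a);
                  return -ENOMEM;
          }

          return 0;
  }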
-
Neal Liu authored
This patch fixes the following sparse warnings: 1. "incorrect type in assignment (different base types)": fixed by changing the affected variable to the __le32 type. 2. "cast removes address space '__iomem' of expression": fixed by using readb() to avoid dereferencing the __iomem memory directly. Signed-off-by: Neal Liu <neal_liu@aspeedtech.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
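A hedged sketch of both kinds of fix (register offset and names hypothetical): the little-endian value lives in an explicitly __le32 variable, and MMIO is read through readb() rather than through a cast that discards __iomem:

  #include <linux/io.h>
  #include <linux/types.h>
  #include <asm/byteorder.h>

  static u8 example_fixups(void __iomem *base, u32 val, __le32 *out)
  {
          *out = cpu_to_le32(val);         /* store as __le32, keeps sparse happy */

          return readb(base + 0x04);       /* readb() instead of *(u8 *)(base + 0x04) */
  }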
-
Zhang Yiqun authored
This type of request often occurs in AF_ALG cases, so add this vector to the default cipher config array. Signed-off-by: Zhang Yiqun <zhangyiqun@phytium.com.cn> Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
An often overlooked aspect of the skcipher walker API is that an error is not just indicated by a non-zero return value, but by the fact that walk->nbytes is zero. Thus it is an error to call skcipher_walk_done after getting back walk->nbytes == 0 from the previous interaction with the walker. This is because when walk->nbytes is zero the walker is left in an undefined state and any further calls to it may try to free uninitialised stack memory. The sm4 arm64 ccm code gets this wrong and ends up calling skcipher_walk_done even when walk->nbytes is zero. This patch rewrites the loop in a form that resembles other callers. Reported-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com> Fixes: ae1b83c7 ("crypto: arm64/sm4 - add CE implementation for GCM mode") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Tested-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
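For comparison, a hedged sketch of the conventional loop shape the rewrite moves towards (the actual encryption step is elided): skcipher_walk_done() is only ever called while walk.nbytes is non-zero:

  #include <crypto/aead.h>
  #include <crypto/internal/skcipher.h>
  #include <linux/math.h>

  static int example_walk(struct aead_request *req)
  {
          struct skcipher_walk walk;
          unsigned int nbytes;
          int err;

          err = skcipher_walk_aead_encrypt(&walk, req, false);

          while (walk.nbytes) {
                  nbytes = walk.nbytes;
                  if (nbytes < walk.total)
                          nbytes = round_down(nbytes, walk.stride);

                  /* ... encrypt nbytes from walk.src.virt.addr to walk.dst.virt.addr ... */

                  err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
          }

          return err;
  }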
-
Giovanni Cabiddu authored
The functions qat_crypto_get_instance_node() and qat_compression_get_instance_node() allow the caller to get a QAT instance (ring pair) on a device close to the node given as an input parameter. When this is not possible and a QAT device is available elsewhere in the system, these functions return an instance on a remote node and print a message reporting that a device could not be found on the specified node. People read this as an error rather than as informational. The message "Could not find a device on node" indicates that a kernel application is running on a core in a socket that does not have a QAT device directly attached to it and that performance might suffer. Given the nature of the message, it can be considered a debug message, so drop its severity to debug and report it only once to avoid flooding. Suggested-by: Vladis Dronov <vdronov@redhat.com> Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Reviewed-by: Fiona Trahe <fiona.trahe@intel.com> Reviewed-by: Vladis Dronov <vdronov@redhat.com> Tested-by: Vladis Dronov <vdronov@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Giovanni Cabiddu authored
When preparing an AES-CTR request, the driver copies the key provided by the user into a data structure that is accessible by the firmware. If the target device is QAT GEN4, the key size is rounded up to a multiple of 16 since the device expects a rounded-up size. If the key size is rounded up before the copy, the size used for copying the key might be bigger than the size of the region containing the key, causing an out-of-bounds read. Fix by doing the copy first and then updating the keylen. This fixes the following warning reported by KASAN:
[ 138.150574] BUG: KASAN: global-out-of-bounds in qat_alg_skcipher_init_com.isra.0+0x197/0x250 [intel_qat]
[ 138.150641] Read of size 32 at addr ffffffff88c402c0 by task cryptomgr_test/2340
[ 138.150651] CPU: 15 PID: 2340 Comm: cryptomgr_test Not tainted 6.2.0-rc1+ #45
[ 138.150659] Hardware name: Intel Corporation ArcherCity/ArcherCity, BIOS EGSDCRB1.86B.0087.D13.2208261706 08/26/2022
[ 138.150663] Call Trace:
[ 138.150668] <TASK>
[ 138.150922] kasan_check_range+0x13a/0x1c0
[ 138.150931] memcpy+0x1f/0x60
[ 138.150940] qat_alg_skcipher_init_com.isra.0+0x197/0x250 [intel_qat]
[ 138.151006] qat_alg_skcipher_init_sessions+0xc1/0x240 [intel_qat]
[ 138.151073] crypto_skcipher_setkey+0x82/0x160
[ 138.151085] ? prepare_keybuf+0xa2/0xd0
[ 138.151095] test_skcipher_vec_cfg+0x2b8/0x800
Fixes: 67916c95 ("crypto: qat - add AES-CTR support for QAT GEN4 devices") Cc: <stable@vger.kernel.org> Reported-by: Vladis Dronov <vdronov@redhat.com> Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Reviewed-by: Fiona Trahe <fiona.trahe@intel.com> Reviewed-by: Vladis Dronov <vdronov@redhat.com> Tested-by: Vladis Dronov <vdronov@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
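A hedged sketch of the corrected ordering (structure and field names hypothetical, not the real QAT descriptors): copy the key using the caller-supplied length first, then round the length up for the device:

  #include <linux/string.h>
  #include <linux/math.h>
  #include <linux/types.h>

  struct fw_cipher_cd {                    /* hypothetical firmware descriptor */
          u8 aes_key[32];
          u32 key_length;
  };

  static void setup_key(struct fw_cipher_cd *cd, const u8 *key,
                        unsigned int keylen, bool gen4)
  {
          memcpy(cd->aes_key, key, keylen);        /* copy only what the caller gave us */

          if (gen4)
                  keylen = round_up(keylen, 16);   /* the device expects a rounded-up size */

          cd->key_length = keylen;
  }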
-
Tianjia Zhang authored
Errors in the skcipher walker API are indicated not only by a non-zero return value but also by walk->nbytes being zero. This causes the layout of the skcipher walker loop here to differ noticeably from the usual layout, which is not a problem in itself but is likely to cause confusion when reading and maintaining the code. This patch rewrites the skcipher walker loop and separates the encryption of the last chunk from the loop, avoiding incorrect calls to the skcipher walker API. Besides following the usual convention of checking walk->nbytes, this also makes the loop's execution logic clearer and easier to understand. Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Uwe Kleine-König authored
id is unused in atmel_i2c_probe(), and the callers must go to extra effort to determine the right value to pass. So drop the parameter, simplifying both atmel_i2c_probe() and its callers. Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Reviewed-by: Tudor Ambarus <tudor.ambarus@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch replaces the custom crypto completion function with crypto_req_done. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Peter Lafreniere authored
Now that we use the ECB/CBC macros, none of the asm functions in blowfish-x86_64 are called indirectly, so we can safely use SYM_FUNC_START instead of SYM_TYPED_FUNC_START with no functional change, which also lets us remove an include. Signed-off-by: Peter Lafreniere <peter@n8pjl.ca> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Peter Lafreniere authored
We can simplify the blowfish-x86_64 glue code by using the preexisting ECB/CBC helper macros. Additionally, this allows for easier reuse of asm functions in later x86 implementations of blowfish. This involves:
1 - Modifying blowfish_dec_blk_4way() to xor outputs when a flag is passed.
2 - Renaming blowfish_dec_blk_4way() to __blowfish_dec_blk_4way().
3 - Creating two wrapper functions around __blowfish_dec_blk_4way() for use in the ECB/CBC macros.
4 - Removing the custom ecb_encrypt() and cbc_encrypt() routines in favor of macro-based routines.
Signed-off-by: Peter Lafreniere <peter@n8pjl.ca> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Peter Lafreniere authored
The blowfish-x86_64 encryption functions have an unused argument. Remove it. This involves:
1 - Removing the xor_block() macros.
2 - Removing handling of the fourth argument from the __blowfish_enc_blk{,_4way}() functions.
3 - Renaming __blowfish_enc_blk{,_4way}() to blowfish_enc_blk{,_4way}().
4 - Removing the blowfish_enc_blk{,_4way}() wrappers from blowfish_glue.c.
5 - Temporarily using SYM_TYPED_FUNC_START for the now indirectly-callable encryption functions.
Signed-off-by: Peter Lafreniere <peter@n8pjl.ca> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-