- 22 Jun, 2015 9 commits
-
-
Herbert Xu authored
The bit CRYPTO_ALG_INTERNAL was added to stop af_alg from accessing internal algorithms. However, af_alg itself was never modified to actually stop that bit from being used by the user. Therefore the user could always override it by specifying the relevant bit in the type and/or mask. This patch silently discards the bit in both type and mask. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
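A minimal sketch of the idea described above (the helper name is hypothetical, not the actual af_alg code): clear CRYPTO_ALG_INTERNAL from both the user-supplied type and mask before resolving the algorithm, so user space cannot reach internal-only implementations.

```c
#include <linux/types.h>
#include <linux/crypto.h>

/* Sketch only: silently drop the internal-use flag from the
 * user-supplied type and mask. */
static void af_alg_filter_internal(u32 *type, u32 *mask)
{
	*type &= ~CRYPTO_ALG_INTERNAL;
	*mask &= ~CRYPTO_ALG_INTERNAL;
}
```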
-
Herbert Xu authored
This patch changes the RNG allocation so that we only hold a reference to the RNG during initialisation. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
When seqiv is used in compatibility mode, this patch allows it to function even when an RNG is not available. It also changes the RNG allocation for the new explicit seqiv interface so that we only hold a reference to the RNG during initialisation. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
The RNG may not be available during early boot, e.g., the relevant modules may not be included in the initramfs. As the RNG is only needed for IPsec, we should not let this prevent use of ciphers without IV generators, e.g., for disk encryption. This patch postpones the RNG allocation to the init function so that a failure during early boot does not make the RNG unavailable for all subsequent users of the same cipher. More importantly, it lets the cipher live even if RNG allocation fails. Of course we no longer offer IV generation, which will fail with an error if invoked, but all other cipher capabilities will function as usual. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
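A rough sketch of the pattern being described (names and context struct are hypothetical, not the actual IV-generator code): grab the default RNG only in the init path, release it immediately, and keep the transform usable even when the RNG is missing.

```c
#include <linux/types.h>
#include <linux/errno.h>
#include <crypto/rng.h>

/* Hypothetical per-transform state; a real IV generator would keep a
 * salt derived from the RNG rather than the RNG itself. */
struct example_iv_ctx {
	u8 salt[16];
	bool have_salt;
};

static int example_iv_init(struct example_iv_ctx *ctx)
{
	int err = crypto_get_default_rng();

	if (err) {
		/* RNG not available (e.g. early boot): keep the cipher
		 * usable, just without IV generation. */
		ctx->have_salt = false;
		return 0;
	}

	err = crypto_rng_get_bytes(crypto_default_rng, ctx->salt,
				   sizeof(ctx->salt));
	crypto_put_default_rng();	/* held only during initialisation */
	ctx->have_salt = !err;
	return 0;
}
```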
-
Herbert Xu authored
The RNG may not be available during early boot, e.g., the relevant modules may not be included in the initramfs. As the RNG is only needed for IPsec, we should not let this prevent use of ciphers without IV generators, e.g., for disk encryption. This patch postpones the RNG allocation to the init function so that a failure during early boot does not make the RNG unavailable for all subsequent users of the same cipher. More importantly, it lets the cipher live even if RNG allocation fails. Of course we no longer offer IV generation, which will fail with an error if invoked, but all other cipher capabilities will function as usual. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch adds a new crypto_user command that allows the admin to delete the crypto system RNG. Note that this can only be done if the RNG is currently not in use. The next time it is used a new system RNG will be allocated. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
The header file cryptouser.h only contains information that is exported to user-space. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
Currently we free the default RNG when its use count hits zero. This was OK when the IV generators would latch onto the RNG at instance creation time and keep it until the instance was torn down. Now that IV generators only keep the RNG reference during init time, this scheme causes the default RNG to come and go at a high frequency. This is highly undesirable, as we want to keep a single RNG in use unless the admin wants it to be removed. This patch changes the scheme so that the system RNG, once allocated, is never removed unless specifically requested. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
Currently skcipher IV generators must provide givencrypt, as that is the whole point. We are currently replacing skcipher IV generators with explicit IV generators. In order to maintain backwards compatibility, we need to allow the IV generators to still function as a normal skcipher when the RNG is not present (e.g., in the initramfs during boot). In other words, everything but givencrypt and givdecrypt will still work, but those two will fail. Therefore this patch assigns a default givencrypt that simply returns an error should it be NULL. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
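A minimal sketch of such a fallback (the stub name and the exact error code are assumptions): install a default givencrypt that simply fails, so the rest of the skcipher interface keeps working.

```c
#include <linux/errno.h>
#include <crypto/skcipher.h>

/* Default givencrypt used when no IV generator could be set up,
 * e.g. because no RNG was available at init time. */
static int example_null_givencrypt(struct skcipher_givcrypt_request *req)
{
	return -ENOSYS;	/* IV generation unavailable; plain crypt still works */
}
```

The patch then assigns such a stub whenever no givencrypt was provided, instead of leaving the pointer NULL.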
-
- 21 Jun, 2015 4 commits
-
-
Fabio Estevam authored
clk_prepare_enable() may fail, so we had better check its return value and propagate it in the case of error. Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
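A minimal sketch of the pattern (generic names, not the actual driver code): check and propagate the clk_prepare_enable() return value instead of ignoring it, and undo earlier enables on failure.

```c
#include <linux/clk.h>

static int example_enable_clocks(struct clk *clk_ipg, struct clk *clk_ahb)
{
	int ret;

	ret = clk_prepare_enable(clk_ipg);
	if (ret)
		return ret;		/* propagate the failure */

	ret = clk_prepare_enable(clk_ahb);
	if (ret) {
		clk_disable_unprepare(clk_ipg);	/* undo the first enable */
		return ret;
	}

	return 0;
}
```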
-
Tadeusz Struk authored
Should be CRYPTO_AKCIPHER instead of AKCIPHER Reported-by: Andreas Ruprecht <andreas.ruprecht@fau.de> Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Michael van der Westhuizen authored
The picoXcell hardware crypto accelerator driver was using an older version of the clk framework, and not (un)preparing the clock before enabling/disabling it. This change uses the handy clk_prepare_enable function to interact with the current clk framework correctly. Signed-off-by: Michael van der Westhuizen <michael@smart-africa.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
The nx driver reads two crucial parameters from the firmware for each crypto algorithm: the maximum SG list length and the byte limit. Unfortunately those two parameters may be bogus, or worse, they may be absent altogether. When this happens the algorithms will still register successfully but will fail when used or tested. This patch adds checks to report any firmware entries which are found to be bogus, and avoids registering algorithms which have bogus parameters. A warning is also printed when an algorithm is not registered because of this, as there may have been no firmware entries for it at all. Reported-by: Ondrej Moriš <omoris@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
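A sketch of the kind of check described (struct, field names, and message wording are hypothetical): skip registration when the firmware-provided limits are missing or obviously bogus, and say so in the log.

```c
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/printk.h>

/* Hypothetical firmware-derived limits for one algorithm. */
struct example_nx_props {
	u32 max_sg_len;		/* maximum scatter/gather list length */
	u32 max_byte_limit;	/* maximum bytes per operation */
};

static int example_check_props(const struct example_nx_props *p,
			       const char *alg_name)
{
	if (!p->max_sg_len || !p->max_byte_limit) {
		pr_warn("nx: bogus or missing firmware limits for %s, not registering\n",
			alg_name);
		return -EINVAL;
	}
	return 0;
}
```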
-
- 19 Jun, 2015 21 commits
-
-
Boris BREZILLON authored
Add DT bindings documentation for the new marvell-cesa driver. Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Arnaud Ebalard authored
Add the Kirkwood and Dove SoC descriptions, and control the allhwsupport module parameter to avoid probing the CESA IP when the old CESA driver is enabled (unless it is explicitly requested to do so). Signed-off-by: Arnaud Ebalard <arno@natisbad.org> Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Boris BREZILLON authored
Add the Orion SoC description, and select this implementation by default to support non-DT probing: Orion is the only platform where non-DT boards are declaring the CESA block. Control the allhwsupport module parameter to avoid probing the CESA IP when the old CESA driver is enabled (unless it is explicitly requested to do so). Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Boris BREZILLON authored
The old and new marvell CESA drivers both support Orion and Kirkwood SoCs. Add a module parameter to choose whether these SoCs should be attached to the new or the old driver. The default policy is to keep attaching those IPs to the old driver if it is enabled, until we decide the new CESA driver is stable/secure enough. Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
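A minimal sketch of such a switch (the parameter name follows the description above; the default expression and description text are illustrative): expose a module parameter and consult it before binding to the SoCs both drivers can handle.

```c
#include <linux/module.h>
#include <linux/moduleparam.h>

/* Default policy: stay out of the way when the old mv_cesa driver is built. */
static int allhwsupport = !IS_ENABLED(CONFIG_CRYPTO_DEV_MV_CESA);
module_param(allhwsupport, int, 0444);
MODULE_PARM_DESC(allhwsupport,
		 "Claim Orion/Kirkwood engines even when the old mv_cesa driver is enabled");
```

The probe path can then refuse to attach to the overlapping SoCs unless allhwsupport is set, which matches the policy described in the commit.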
-
Boris BREZILLON authored
Add CESA IP description for all the missing armada SoCs (XP, 375 and 38x). Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Arnaud Ebalard authored
Add support for SHA256 operations. Signed-off-by: Arnaud Ebalard <arno@natisbad.org> Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Arnaud Ebalard authored
Add support for MD5 operations. Signed-off-by: Arnaud Ebalard <arno@natisbad.org> Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Arnaud Ebalard authored
Add support for Triple-DES operations. Signed-off-by: Arnaud Ebalard <arno@natisbad.org> Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Boris BREZILLON authored
Add support for DES operations. Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Arnaud Ebalard <arno@natisbad.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Boris BREZILLON authored
The CESA IP supports CPU offload through a dedicated DMA engine (TDMA) which can control the crypto block. When this mode is used, all the required data (operation metadata and payload data) is transferred using DMA, and the results are retrieved through DMA when possible (hash results are not retrieved through DMA yet), thus reducing the involvement of the CPU and providing better performance in most cases (for small requests, the cost of DMA preparation might exceed the performance gain). Note that some CESA IPs do not embed this dedicated DMA, hence the activation of this feature on a per-platform basis. Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Arnaud Ebalard <arno@natisbad.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Boris BREZILLON authored
The existing mv_cesa driver supports some features of the CESA IP but is quite limited, and reworking it to support new features (like involving the TDMA engine to offload the CPU) is almost impossible. This driver has been rewritten from scratch to take those new features into account. This commit introduces the base infrastructure allowing us to add support for DMA optimization. It also includes support for one hash (SHA1) and one cipher (AES) algorithm, and enables those features on the Armada 370 SoC. Other algorithms and platforms will be added later on. Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Arnaud Ebalard <arno@natisbad.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Boris BREZILLON authored
We are about to add a new driver to support new features like using the TDMA engine to offload the CPU. Orion, Dove and Kirkwood platforms are already using the mv_cesa driver, but Orion SoCs do not embed the TDMA engine, which means we will have to differentiate them if we want to get TDMA support on Dove and Kirkwood. On the other hand, the migration from the old driver to the new one is not something all people are willing to do without first auditing the new driver. Hence we have to support the new compatible in the mv_cesa driver so that new platforms with updated DTs can still attach their crypto engine device to this driver. Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Boris BREZILLON authored
The mv_cesa driver currently expects the SRAM memory region to be passed as a platform device resource. This approach has two drawbacks: the DT representation is wrong, and the crypto engine is the only component that can access the SRAM. The last point is particularly annoying in some cases: for example on Armada 370, a small region of the crypto SRAM is used to implement cpuidle, which means you would not be able to enable both cpuidle and the CESA driver. To address that problem, we explicitly define the SRAM device in the DT and then reference the sram node from the crypto engine node. Also note that the old way of retrieving the SRAM memory region is still supported; in other words, backward compatibility is preserved. Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Boris BREZILLON authored
On Dove platforms, the crypto engine requires a clock. Document the clocks property in the mv_cesa bindings doc. Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
Merge the mvebu/drivers branch of the arm-soc tree (git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc), which contains just a single patch, bfa1ce5f ("bus: mvebu-mbus: add mv_mbus_dram_info_nooverlap()"), that happens to be a prerequisite of the new marvell/cesa crypto driver.
-
Dan Streetman authored
Update the "IBM Power in-Nest Crypto Acceleration" and "IBM Power 842 compression accelerator" sections to specify the correct files. The "IBM Power in-Nest Crypto Acceleration" was originally the only NX driver, and so its section listed all drivers/crypto/nx/ files, but now there is also the 842 driver which has its own section. This lists explicitly what files are owned by the Crypto driver and which files are owned by the 842 compression driver. Signed-off-by: Dan Streetman <ddstreet@ieee.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Dan Streetman authored
Add support to the nx-842-pseries.c driver for running in little endian mode. The pSeries platform NX 842 driver currently only works as big endian. This adds cpu_to_be*() and be*_to_cpu() in the appropriate places to work in LE mode also. Signed-off-by: Dan Streetman <ddstreet@ieee.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
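A minimal sketch of the kind of change involved (the descriptor struct and field names are hypothetical, not the actual nx-842-pseries layout): convert CPU-endian values to big endian before handing them to the big-endian firmware interface, and back when reading them.

```c
#include <linux/types.h>
#include <asm/byteorder.h>

/* Hypothetical descriptor shared with big-endian firmware. */
struct example_nx_desc {
	__be32 length;
	__be64 addr;
};

static void example_fill_desc(struct example_nx_desc *d, u64 addr, u32 length)
{
	d->length = cpu_to_be32(length);	/* CPU (LE or BE) -> BE */
	d->addr = cpu_to_be64(addr);
}

static u32 example_read_length(const struct example_nx_desc *d)
{
	return be32_to_cpu(d->length);		/* BE -> CPU */
}
```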
-
Herbert Xu authored
The new aead_edesc_alloc left out the bit indicating the last entry on the source SG list. This patch fixes it. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
I incorrectly removed DESC_MAX_USED_BYTES when enlarging the size of the shared descriptor buffers, thus making it four times larger than what is necessary. This patch restores the division by four calculation. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
The struct aead_instance is meant to extend struct crypto_instance by incorporating the extra members of struct aead_alg. However, the current layout which is copied from shash/ahash does not specify the struct fully. In particular only aead_alg is present. For shash/ahash this works because users there add extra headroom to sizeof(struct crypto_instance) when allocating the instance. Unfortunately for aead, this bit was lost when the new aead_instance was added. Rather than fixing it like shash/ahash, this patch simply expands struct aead_instance to contain what is supposed to be there, i.e., adding struct crypto_instance. In order to not break existing AEAD users, this is done through an anonymous union. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
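The layout described above can be sketched roughly as follows (treat this as an illustration of the anonymous-union idea, not the exact header contents): the padding member lines the embedded crypto_instance's alg up with aead_alg's base, so either view of the same memory is valid.

```c
#include <linux/stddef.h>
#include <linux/crypto.h>
#include <crypto/aead.h>

/* Sketch: both union members describe the same memory, so the instance
 * can be converted to a struct crypto_instance without extra headroom. */
struct example_aead_instance {
	union {
		struct {
			char head[offsetof(struct aead_alg, base)];
			struct crypto_instance base;
		} s;
		struct aead_alg alg;
	};
};
```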
-
Herbert Xu authored
The struct crypto_alg is embedded into various type-specific structs such as aead_alg. This is then used as part of instances such as struct aead_instance. It is also embedded into the generic struct crypto_instance. In order to ensure that struct aead_instance can be converted to struct crypto_instance when necessary, we need to ensure that crypto_alg is aligned properly. This patch adds an alignment attribute to struct crypto_alg to ensure this. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
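A sketch of the kind of annotation meant here (CRYPTO_MINALIGN_ATTR is the existing kernel macro; the struct body is abbreviated and illustrative): placing an alignment attribute on the struct definition makes every embedding of it, in aead_alg or crypto_instance, land at a compatible alignment.

```c
#include <linux/crypto.h>

/* Illustration only: the aligned attribute applies to every embedding
 * of the struct, keeping conversions between container types safe. */
struct example_alg {
	/* ... the usual algorithm members ... */
	int (*cra_init)(struct crypto_tfm *tfm);
	void (*cra_exit)(struct crypto_tfm *tfm);
} CRYPTO_MINALIGN_ATTR;
```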
-
- 18 Jun, 2015 4 commits
-
-
Herbert Xu authored
This patch fixes a number of problems in crypto driver Kconfig entries: 1. Select BLKCIPHER instead of BLKCIPHER2. The latter is internal and should not be used outside of the crypto API itself. 2. Do not select ALGAPI unless you use a legacy type like CRYPTO_ALG_TYPE_CIPHER. 3. Select the algorithm type that you are implementing, e.g., AEAD. 4. Do not select generic C code such as CBC/ECB unless you use them as a fallback. 5. Remove default n since that is the default default. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
The AEAD speed test SG list setup did not correctly mark the AD, potentially causing a crash. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch adds a speed test for rfc4309(ccm(aes)) as mode 212. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Steffen Trumtrar authored
The patch "crypto: caam - Add definition of rd/wr_reg64 for little endian platform" added support for little-endian platforms to the CAAM driver, namely a write and a read function for 64-bit registers. The only user of these functions is the Job Ring driver (drivers/crypto/caam/jr.c), which uses them to set the DMA addresses for the input/output rings. However, at least in the default configuration, the least significant 32 bits are always at the base+0x0004 address, independent of the endianness of the bytes themselves. That means the addresses do not change with the system endianness. DMA addresses are only 32 bits wide on non-64-bit systems; writing the upper 32 bits of this value to the register for the least significant bits results in the DMA address being set to 0. Fix this by always writing the registers in the same way. Suggested-by: Russell King <linux@arm.linux.org.uk> Signed-off-by: Steffen Trumtrar <s.trumtrar@pengutronix.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
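A sketch of the fixed behaviour (assuming the driver's existing 32-bit helper wr_reg32() from caam's regs.h; upper_32_bits()/lower_32_bits() are standard kernel helpers): always put the most significant half at offset 0 and the least significant half at offset 4, regardless of CPU endianness.

```c
#include <linux/io.h>
#include <linux/kernel.h>
#include "regs.h"	/* driver-local: provides wr_reg32() */

/* Sketch: write a 64-bit CAAM register as two fixed-position 32-bit
 * halves so the layout no longer depends on the CPU's endianness. */
static inline void example_wr_reg64(u32 __iomem *reg, u64 data)
{
	wr_reg32(reg, upper_32_bits(data));	/* base + 0x0: MS 32 bits */
	wr_reg32(reg + 1, lower_32_bits(data));	/* base + 0x4: LS 32 bits */
}
```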
-
- 17 Jun, 2015 2 commits
-
-
Tadeusz Struk authored
New test vectors for RSA algorithm. Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Tadeusz Struk authored
Add a new rsa generic SW implementation. This implements only cryptographic primitives. Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com> Added select on ASN1. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-