1. 26 Jun, 2014 3 commits
  2. 25 Jun, 2014 6 commits
  3. 23 Jun, 2014 1 commit
  4. 20 Jun, 2014 28 commits
    • crypto: testmgr - add 4 more test vectors for GHASH · 6c9e3dcd
      Ard Biesheuvel authored
      This adds 4 test vectors for GHASH (one of which exercises chunked mode),
      making a total of 5.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: aes - AES CTR x86_64 "by8" AVX optimization · 22cddcc7
      chandramouli narayanan authored
      This patch introduces a "by8" AES CTR mode AVX optimization inspired by the
      Intel Optimized IPSEC Cryptographic library. For additional information,
      please see:
      http://downloadcenter.intel.com/Detail_Desc.aspx?agr=Y&DwnldID=22972
      
      The functions aes_ctr_enc_128_avx_by8(), aes_ctr_enc_192_avx_by8() and
      aes_ctr_enc_256_avx_by8() are adapted from the Intel Optimized IPSEC
      Cryptographic library. When both the AES and AVX features are enabled on a
      platform, the glue code in the AESNI module overrides the existing "by4"
      CTR mode en/decryption with the "by8" AES CTR mode en/decryption.
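      The override described above amounts to a feature-based dispatch; a minimal
      sketch follows, with hypothetical function and feature names (the real
      decision is made in the AESNI glue code, not by these symbols):

```python
def select_ctr_impl(cpu_features):
    """Illustrative dispatch: prefer the "by8" AVX routine when the CPU
    advertises both the AES-NI and AVX features, else keep the "by4" path.
    (Hypothetical names, for illustration only.)"""
    if {"aes", "avx"} <= set(cpu_features):
        return "aes_ctr_enc_avx_by8"
    return "aes_ctr_enc_by4"

print(select_ctr_impl(["aes", "avx", "sse4_1"]))  # by8 path on AES+AVX CPUs
print(select_ctr_impl(["aes"]))                   # by4 fallback without AVX
```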
      
      On a Haswell desktop, with turbo disabled and all CPUs running at maximum
      frequency, the "by8" CTR mode optimization shows better performance across
      data and key sizes, as measured by tcrypt.
      
      The average performance improvement of the "by8" version over the "by4"
      version is as follows:
      
      For 128 bit key and data sizes >= 256 bytes, there is a 10-16% improvement.
      For 192 bit key and data sizes >= 256 bytes, there is a 20-22% improvement.
      For 256 bit key and data sizes >= 256 bytes, there is a 20-25% improvement.
      
      A typical run of tcrypt with AES CTR mode encryption of the "by4" and "by8"
      optimization shows the following results:
      
      tcrypt with "by4" AES CTR mode encryption optimization on a Haswell Desktop:
      ---------------------------------------------------------------------------
      
      testing speed of __ctr-aes-aesni encryption
      test 0 (128 bit key, 16 byte blocks): 1 operation in 343 cycles (16 bytes)
      test 1 (128 bit key, 64 byte blocks): 1 operation in 336 cycles (64 bytes)
      test 2 (128 bit key, 256 byte blocks): 1 operation in 491 cycles (256 bytes)
      test 3 (128 bit key, 1024 byte blocks): 1 operation in 1130 cycles (1024 bytes)
      test 4 (128 bit key, 8192 byte blocks): 1 operation in 7309 cycles (8192 bytes)
      test 5 (192 bit key, 16 byte blocks): 1 operation in 346 cycles (16 bytes)
      test 6 (192 bit key, 64 byte blocks): 1 operation in 361 cycles (64 bytes)
      test 7 (192 bit key, 256 byte blocks): 1 operation in 543 cycles (256 bytes)
      test 8 (192 bit key, 1024 byte blocks): 1 operation in 1321 cycles (1024 bytes)
      test 9 (192 bit key, 8192 byte blocks): 1 operation in 9649 cycles (8192 bytes)
      test 10 (256 bit key, 16 byte blocks): 1 operation in 369 cycles (16 bytes)
      test 11 (256 bit key, 64 byte blocks): 1 operation in 366 cycles (64 bytes)
      test 12 (256 bit key, 256 byte blocks): 1 operation in 595 cycles (256 bytes)
      test 13 (256 bit key, 1024 byte blocks): 1 operation in 1531 cycles (1024 bytes)
      test 14 (256 bit key, 8192 byte blocks): 1 operation in 10522 cycles (8192 bytes)
      
      testing speed of __ctr-aes-aesni decryption
      test 0 (128 bit key, 16 byte blocks): 1 operation in 336 cycles (16 bytes)
      test 1 (128 bit key, 64 byte blocks): 1 operation in 350 cycles (64 bytes)
      test 2 (128 bit key, 256 byte blocks): 1 operation in 487 cycles (256 bytes)
      test 3 (128 bit key, 1024 byte blocks): 1 operation in 1129 cycles (1024 bytes)
      test 4 (128 bit key, 8192 byte blocks): 1 operation in 7287 cycles (8192 bytes)
      test 5 (192 bit key, 16 byte blocks): 1 operation in 350 cycles (16 bytes)
      test 6 (192 bit key, 64 byte blocks): 1 operation in 359 cycles (64 bytes)
      test 7 (192 bit key, 256 byte blocks): 1 operation in 635 cycles (256 bytes)
      test 8 (192 bit key, 1024 byte blocks): 1 operation in 1324 cycles (1024 bytes)
      test 9 (192 bit key, 8192 byte blocks): 1 operation in 9595 cycles (8192 bytes)
      test 10 (256 bit key, 16 byte blocks): 1 operation in 364 cycles (16 bytes)
      test 11 (256 bit key, 64 byte blocks): 1 operation in 377 cycles (64 bytes)
      test 12 (256 bit key, 256 byte blocks): 1 operation in 604 cycles (256 bytes)
      test 13 (256 bit key, 1024 byte blocks): 1 operation in 1527 cycles (1024 bytes)
      test 14 (256 bit key, 8192 byte blocks): 1 operation in 10549 cycles (8192 bytes)
      
      tcrypt with "by8" AES CTR mode encryption optimization on a Haswell Desktop:
      ---------------------------------------------------------------------------
      
      testing speed of __ctr-aes-aesni encryption
      test 0 (128 bit key, 16 byte blocks): 1 operation in 340 cycles (16 bytes)
      test 1 (128 bit key, 64 byte blocks): 1 operation in 330 cycles (64 bytes)
      test 2 (128 bit key, 256 byte blocks): 1 operation in 450 cycles (256 bytes)
      test 3 (128 bit key, 1024 byte blocks): 1 operation in 1043 cycles (1024 bytes)
      test 4 (128 bit key, 8192 byte blocks): 1 operation in 6597 cycles (8192 bytes)
      test 5 (192 bit key, 16 byte blocks): 1 operation in 339 cycles (16 bytes)
      test 6 (192 bit key, 64 byte blocks): 1 operation in 352 cycles (64 bytes)
      test 7 (192 bit key, 256 byte blocks): 1 operation in 539 cycles (256 bytes)
      test 8 (192 bit key, 1024 byte blocks): 1 operation in 1153 cycles (1024 bytes)
      test 9 (192 bit key, 8192 byte blocks): 1 operation in 8458 cycles (8192 bytes)
      test 10 (256 bit key, 16 byte blocks): 1 operation in 353 cycles (16 bytes)
      test 11 (256 bit key, 64 byte blocks): 1 operation in 360 cycles (64 bytes)
      test 12 (256 bit key, 256 byte blocks): 1 operation in 512 cycles (256 bytes)
      test 13 (256 bit key, 1024 byte blocks): 1 operation in 1277 cycles (1024 bytes)
      test 14 (256 bit key, 8192 byte blocks): 1 operation in 8745 cycles (8192 bytes)
      
      testing speed of __ctr-aes-aesni decryption
      test 0 (128 bit key, 16 byte blocks): 1 operation in 348 cycles (16 bytes)
      test 1 (128 bit key, 64 byte blocks): 1 operation in 335 cycles (64 bytes)
      test 2 (128 bit key, 256 byte blocks): 1 operation in 451 cycles (256 bytes)
      test 3 (128 bit key, 1024 byte blocks): 1 operation in 1030 cycles (1024 bytes)
      test 4 (128 bit key, 8192 byte blocks): 1 operation in 6611 cycles (8192 bytes)
      test 5 (192 bit key, 16 byte blocks): 1 operation in 354 cycles (16 bytes)
      test 6 (192 bit key, 64 byte blocks): 1 operation in 346 cycles (64 bytes)
      test 7 (192 bit key, 256 byte blocks): 1 operation in 488 cycles (256 bytes)
      test 8 (192 bit key, 1024 byte blocks): 1 operation in 1154 cycles (1024 bytes)
      test 9 (192 bit key, 8192 byte blocks): 1 operation in 8390 cycles (8192 bytes)
      test 10 (256 bit key, 16 byte blocks): 1 operation in 357 cycles (16 bytes)
      test 11 (256 bit key, 64 byte blocks): 1 operation in 362 cycles (64 bytes)
      test 12 (256 bit key, 256 byte blocks): 1 operation in 515 cycles (256 bytes)
      test 13 (256 bit key, 1024 byte blocks): 1 operation in 1284 cycles (1024 bytes)
      test 14 (256 bit key, 8192 byte blocks): 1 operation in 8681 cycles (8192 bytes)
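      As a rough cross-check, the per-size throughput gain can be recomputed
      from the 128-bit-key encryption cycle counts quoted above (a sketch; the
      commit's 10-16% figure averages over more sizes and both directions):

```python
# (by4 cycles, by8 cycles) for 128-bit-key CTR encryption on Haswell,
# taken from the tcrypt logs above
cycles = {256: (491, 450), 1024: (1130, 1043), 8192: (7309, 6597)}

for size, (by4, by8) in sorted(cycles.items()):
    gain = (by4 / by8 - 1.0) * 100.0  # percent throughput improvement
    print(f"{size:5d} bytes: {gain:4.1f}% faster")
```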
      
      crypto: Incorporate feedback to AES CTR mode optimization patch
      
      Specifically, the following:
      a) alignment around the main loop in aes_ctrby8_avx_x86_64.S
      b) .rodata around data constants used in the assembly code
      c) the use of CONFIG_AVX in the glue code
      d) fixed-up white space
      e) an informational message for the "by8" AES CTR mode optimization
      f) the "by8" AES CTR mode optimization can simply be enabled
      if the platform supports both the AES and AVX features. The
      optimization works superbly on Sandybridge as well.
      
      Testing on Haswell shows no performance change since the last version.
      
      Testing on Sandybridge shows that the "by8" AES CTR mode optimization
      greatly improves performance.
      
      tcrypt log with "by4" AES CTR mode optimization on Sandybridge
      --------------------------------------------------------------
      
      testing speed of __ctr-aes-aesni encryption
      test 0 (128 bit key, 16 byte blocks): 1 operation in 383 cycles (16 bytes)
      test 1 (128 bit key, 64 byte blocks): 1 operation in 408 cycles (64 bytes)
      test 2 (128 bit key, 256 byte blocks): 1 operation in 707 cycles (256 bytes)
      test 3 (128 bit key, 1024 byte blocks): 1 operation in 1864 cycles (1024 bytes)
      test 4 (128 bit key, 8192 byte blocks): 1 operation in 12813 cycles (8192 bytes)
      test 5 (192 bit key, 16 byte blocks): 1 operation in 395 cycles (16 bytes)
      test 6 (192 bit key, 64 byte blocks): 1 operation in 432 cycles (64 bytes)
      test 7 (192 bit key, 256 byte blocks): 1 operation in 780 cycles (256 bytes)
      test 8 (192 bit key, 1024 byte blocks): 1 operation in 2132 cycles (1024 bytes)
      test 9 (192 bit key, 8192 byte blocks): 1 operation in 15765 cycles (8192 bytes)
      test 10 (256 bit key, 16 byte blocks): 1 operation in 416 cycles (16 bytes)
      test 11 (256 bit key, 64 byte blocks): 1 operation in 438 cycles (64 bytes)
      test 12 (256 bit key, 256 byte blocks): 1 operation in 842 cycles (256 bytes)
      test 13 (256 bit key, 1024 byte blocks): 1 operation in 2383 cycles (1024 bytes)
      test 14 (256 bit key, 8192 byte blocks): 1 operation in 16945 cycles (8192 bytes)
      
      testing speed of __ctr-aes-aesni decryption
      test 0 (128 bit key, 16 byte blocks): 1 operation in 389 cycles (16 bytes)
      test 1 (128 bit key, 64 byte blocks): 1 operation in 409 cycles (64 bytes)
      test 2 (128 bit key, 256 byte blocks): 1 operation in 704 cycles (256 bytes)
      test 3 (128 bit key, 1024 byte blocks): 1 operation in 1865 cycles (1024 bytes)
      test 4 (128 bit key, 8192 byte blocks): 1 operation in 12783 cycles (8192 bytes)
      test 5 (192 bit key, 16 byte blocks): 1 operation in 409 cycles (16 bytes)
      test 6 (192 bit key, 64 byte blocks): 1 operation in 434 cycles (64 bytes)
      test 7 (192 bit key, 256 byte blocks): 1 operation in 792 cycles (256 bytes)
      test 8 (192 bit key, 1024 byte blocks): 1 operation in 2151 cycles (1024 bytes)
      test 9 (192 bit key, 8192 byte blocks): 1 operation in 15804 cycles (8192 bytes)
      test 10 (256 bit key, 16 byte blocks): 1 operation in 421 cycles (16 bytes)
      test 11 (256 bit key, 64 byte blocks): 1 operation in 444 cycles (64 bytes)
      test 12 (256 bit key, 256 byte blocks): 1 operation in 840 cycles (256 bytes)
      test 13 (256 bit key, 1024 byte blocks): 1 operation in 2394 cycles (1024 bytes)
      test 14 (256 bit key, 8192 byte blocks): 1 operation in 16928 cycles (8192 bytes)
      
      tcrypt log with "by8" AES CTR mode optimization on Sandybridge
      --------------------------------------------------------------
      
      testing speed of __ctr-aes-aesni encryption
      test 0 (128 bit key, 16 byte blocks): 1 operation in 383 cycles (16 bytes)
      test 1 (128 bit key, 64 byte blocks): 1 operation in 401 cycles (64 bytes)
      test 2 (128 bit key, 256 byte blocks): 1 operation in 522 cycles (256 bytes)
      test 3 (128 bit key, 1024 byte blocks): 1 operation in 1136 cycles (1024 bytes)
      test 4 (128 bit key, 8192 byte blocks): 1 operation in 7046 cycles (8192 bytes)
      test 5 (192 bit key, 16 byte blocks): 1 operation in 394 cycles (16 bytes)
      test 6 (192 bit key, 64 byte blocks): 1 operation in 418 cycles (64 bytes)
      test 7 (192 bit key, 256 byte blocks): 1 operation in 559 cycles (256 bytes)
      test 8 (192 bit key, 1024 byte blocks): 1 operation in 1263 cycles (1024 bytes)
      test 9 (192 bit key, 8192 byte blocks): 1 operation in 9072 cycles (8192 bytes)
      test 10 (256 bit key, 16 byte blocks): 1 operation in 408 cycles (16 bytes)
      test 11 (256 bit key, 64 byte blocks): 1 operation in 428 cycles (64 bytes)
      test 12 (256 bit key, 256 byte blocks): 1 operation in 595 cycles (256 bytes)
      test 13 (256 bit key, 1024 byte blocks): 1 operation in 1385 cycles (1024 bytes)
      test 14 (256 bit key, 8192 byte blocks): 1 operation in 9224 cycles (8192 bytes)
      
      testing speed of __ctr-aes-aesni decryption
      test 0 (128 bit key, 16 byte blocks): 1 operation in 390 cycles (16 bytes)
      test 1 (128 bit key, 64 byte blocks): 1 operation in 402 cycles (64 bytes)
      test 2 (128 bit key, 256 byte blocks): 1 operation in 530 cycles (256 bytes)
      test 3 (128 bit key, 1024 byte blocks): 1 operation in 1135 cycles (1024 bytes)
      test 4 (128 bit key, 8192 byte blocks): 1 operation in 7079 cycles (8192 bytes)
      test 5 (192 bit key, 16 byte blocks): 1 operation in 414 cycles (16 bytes)
      test 6 (192 bit key, 64 byte blocks): 1 operation in 417 cycles (64 bytes)
      test 7 (192 bit key, 256 byte blocks): 1 operation in 572 cycles (256 bytes)
      test 8 (192 bit key, 1024 byte blocks): 1 operation in 1312 cycles (1024 bytes)
      test 9 (192 bit key, 8192 byte blocks): 1 operation in 9073 cycles (8192 bytes)
      test 10 (256 bit key, 16 byte blocks): 1 operation in 415 cycles (16 bytes)
      test 11 (256 bit key, 64 byte blocks): 1 operation in 454 cycles (64 bytes)
      test 12 (256 bit key, 256 byte blocks): 1 operation in 598 cycles (256 bytes)
      test 13 (256 bit key, 1024 byte blocks): 1 operation in 1407 cycles (1024 bytes)
      test 14 (256 bit key, 8192 byte blocks): 1 operation in 9288 cycles (8192 bytes)
      
      crypto: Fix redundant checks
      
      a) Fix the redundant check for cpu_has_aes
      b) Fix the key length check when invoking the CTR mode "by8"
      encryptor/decryptor.
      
      crypto: fix typo in AES ctr mode transform
      Signed-off-by: Chandramouli Narayanan <mouli@linux.intel.com>
      Reviewed-by: Mathias Krause <minipli@googlemail.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: des_3des - add x86-64 assembly implementation · 6574e6c6
      Jussi Kivilinna authored
      This patch adds an x86_64 assembly implementation of the Triple DES EDE
      cipher algorithm. Two assembly implementations are provided. The first is
      a regular 'one block at a time' encrypt/decrypt function; the second is a
      'three blocks at a time' function that gains a performance increase on
      out-of-order CPUs.
      
      tcrypt test results:
      
      Intel Core i5-4570:
      
      des3_ede-asm vs des3_ede-generic:
      size    ecb-enc ecb-dec cbc-enc cbc-dec ctr-enc ctr-dec
      16B     1.21x   1.22x   1.27x   1.36x   1.25x   1.25x
      64B     1.98x   1.96x   1.23x   2.04x   2.01x   2.00x
      256B    2.34x   2.37x   1.21x   2.40x   2.38x   2.39x
      1024B   2.50x   2.47x   1.22x   2.51x   2.52x   2.51x
      8192B   2.51x   2.53x   1.21x   2.56x   2.54x   2.55x
      Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: caam - remove duplicate FIFOST_CONT_MASK define · be513f44
      Dan Carpenter authored
      The FIFOST_CONT_MASK define is cut and pasted twice, so we can delete the
      second instance.
      Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
      Acked-by: Kim Phillips <kim.phillips@freescale.com>
      Acked-by: Marek Vasut <marex@denx.de>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: crc32c-pclmul - Shrink K_table to 32-bit words · 473946e6
      George Spelvin authored
      There's no need for the K_table to be made of 64-bit words.  For some
      reason, the original authors didn't fully reduce the values modulo the
      CRC32C polynomial, and so had some 33-bit values in there.  They can
      all be reduced to 32 bits.
      
      Doing that cuts the table size in half.  Since the code depends on both
      pclmulq and crc32, SSE 4.1 is obviously present, so we can use pmovzxdq
      to fetch it in the correct format.
      
      This adds (measured on Ivy Bridge) 1 cycle per main loop iteration
      (CRC of up to 3K bytes), less than 0.2%.  The hope is that the reduced
      D-cache footprint will make up the loss in other code.
      
      Two other related fixes:
      * K_table is read-only, so belongs in .rodata, and
      * There's no need for more than 8-byte alignment
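      The reduction applied to the table entries is ordinary carry-less (GF(2))
      polynomial division by the CRC32C generator; a small sketch, assuming the
      standard Castagnoli generator constant:

```python
CRC32C_POLY = 0x11EDC6F41  # the 33-bit CRC32C (Castagnoli) generator polynomial

def gf2_mod(value, poly=CRC32C_POLY):
    """Reduce a GF(2) polynomial, represented as an int, modulo `poly`
    using carry-less subtraction (XOR). For a degree-32 generator the
    result always fits in 32 bits."""
    deg = poly.bit_length() - 1  # degree 32
    while value.bit_length() > deg:
        shift = value.bit_length() - 1 - deg
        value ^= poly << shift
    return value

# A 33-bit table entry reduces to an equivalent 32-bit value:
print(hex(gf2_mod(1 << 32)))
```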
      Acked-by: Tim Chen <tim.c.chen@linux.intel.com>
      Signed-off-by: George Spelvin <linux@horizon.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: qat - Update to makefiles · cea4001a
      Tadeusz Struk authored
      Update to makefiles etc.
      Don't update the firmware/Makefile yet, since there is no FW binary in
      the crypto repo yet. This will be added later.
      
      v3 - removed change to ./firmware/Makefile
      Reviewed-by: Bruce W. Allan <bruce.w.allan@intel.com>
      Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: qat - Intel(R) QAT DH895xcc accelerator · 7afa232e
      Tadeusz Struk authored
      This patch adds DH895xCC hardware specific code. It hooks into the common
      infrastructure and provides acceleration for crypto algorithms.
      Acked-by: John Griffin <john.griffin@intel.com>
      Reviewed-by: Bruce W. Allan <bruce.w.allan@intel.com>
      Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: qat - Intel(R) QAT accelengine part of fw loader · b3416fb8
      Tadeusz Struk authored
      This patch adds the acceleration engine handler part of the firmware loader.
      Acked-by: Bo Cui <bo.cui@intel.com>
      Reviewed-by: Bruce W. Allan <bruce.w.allan@intel.com>
      Signed-off-by: Karen Xiang <karen.xiang@intel.com>
      Signed-off-by: Pingchaox Yang <pingchaox.yang@intel.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: qat - Intel(R) QAT ucode part of fw loader · b4b7e67c
      Tadeusz Struk authored
      This patch adds the microcode part of the firmware loader.
      
      v4 - splits FW loader part into two smaller patches.
      Acked-by: Bo Cui <bo.cui@intel.com>
      Reviewed-by: Bruce W. Allan <bruce.w.allan@intel.com>
      Signed-off-by: Karen Xiang <karen.xiang@intel.com>
      Signed-off-by: Pingchaox Yang <pingchaox.yang@intel.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: qat - Intel(R) QAT crypto interface · d370cec3
      Tadeusz Struk authored
      This patch adds the qat crypto interface.
      Acked-by: John Griffin <john.griffin@intel.com>
      Reviewed-by: Bruce W. Allan <bruce.w.allan@intel.com>
      Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: qat - Intel(R) QAT FW interface · 38154e65
      Tadeusz Struk authored
      This patch adds FW interface structure definitions.
      Acked-by: John Griffin <john.griffin@intel.com>
      Reviewed-by: Bruce W. Allan <bruce.w.allan@intel.com>
      Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: qat - Intel(R) QAT transport code · a672a9dc
      Tadeusz Struk authored
      This patch adds code that implements the communication channel between the
      driver and the firmware.
      Acked-by: John Griffin <john.griffin@intel.com>
      Reviewed-by: Bruce W. Allan <bruce.w.allan@intel.com>
      Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: qat - Intel(R) QAT driver framework · d8cba25d
      Tadeusz Struk authored
      This patch adds a common infrastructure that will be used by all Intel(R)
      QuickAssist Technology (QAT) devices.
      
      v2 - added ./drivers/crypto/qat/Kconfig and ./drivers/crypto/qat/Makefile
      v4 - splits common part into more, smaller patches
      Acked-by: John Griffin <john.griffin@intel.com>
      Reviewed-by: Bruce W. Allan <bruce.w.allan@intel.com>
      Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: ccp - Add platform device support for arm64 · c4f4b325
      Tom Lendacky authored
      Add support for the CCP on arm64 as a platform device.
      Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: ccp - CCP device bindings documentation · 1ad348f4
      Tom Lendacky authored
      This patch provides the documentation of the device bindings
      for the AMD Cryptographic Coprocessor driver.
      Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: ccp - Modify PCI support in prep for arm64 support · 3d77565b
      Tom Lendacky authored
      Modify the PCI device support in preparation for supporting the
      CCP as a platform device for arm64.
      Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: drbg - Add DRBG test code to testmgr · 64d1cdfb
      Stephan Mueller authored
      The DRBG test code implements the CAVS test approach.
      
      As discussed for the test vectors, all DRBG types are covered with
      testing. However, not every backend cipher is covered with testing. To
      prevent the testmgr from reporting missing tests, the NULL test is
      registered for all backend ciphers not covered by specific test cases.
      
      All currently implemented DRBG types and backend ciphers are defined
      in SP800-90A. Therefore, the fips_allowed flag is set for all of them.
      Signed-off-by: Stephan Mueller <smueller@chronox.de>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: drbg - DRBG testmgr test vectors · 3332ee2a
      Stephan Mueller authored
      All types of the DRBG (CTR, HMAC, Hash) are covered with test vectors.
      In addition, all permutations of use cases of the DRBG are covered:
      
              * with and without prediction resistance
              * with and without additional information string
              * with and without personalization string
      
      As the DRBG implementation is agnostic of the specific backend cipher,
      only test vectors for one specific backend cipher are used. For example:
      the Hash DRBG uses the same code paths irrespective of whether SHA-256
      or SHA-512 is used. Thus, the test vectors for SHA-256 also cover the
      DRBG code paths used with SHA-512.
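      The backend-agnostic structure can be illustrated with a toy derivation
      loop whose control flow is identical for every hash; this is a simplified
      sketch, not the actual SP800-90A hash_df:

```python
import hashlib

def toy_df(hash_name, seed, outlen):
    """Toy derivation loop: the code path is the same whether the backend
    is sha256 or sha512 -- only the digest primitive differs."""
    out, counter = b"", 1
    while len(out) < outlen:
        out += hashlib.new(hash_name, bytes([counter]) + seed).digest()
        counter += 1
    return out[:outlen]

# Same code path, different backend digests:
print(toy_df("sha256", b"seed", 16).hex())
print(toy_df("sha512", b"seed", 16).hex())
```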
      Signed-off-by: Stephan Mueller <smueller@chronox.de>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • 5bfcf65b
    • crypto: drbg - DRBG kernel configuration options · 419090c6
      Stephan Mueller authored
      The different DRBG types of CTR, Hash, and HMAC can be enabled or disabled
      at compile time. At least one DRBG type shall be selected.
      
      The default is the HMAC DRBG, as its code base is the smallest.
      Signed-off-by: Stephan Mueller <smueller@chronox.de>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: drbg - header file for DRBG · 3e16f959
      Stephan Mueller authored
      The header file includes the definition of:
      
      * DRBG data structures with
              - struct drbg_state as main structure
              - struct drbg_core referencing the backend ciphers
              - struct drbg_state_ops callback handlers for specific code
                supporting the Hash, HMAC, CTR DRBG implementations
              - struct drbg_conc defining a linked list for input data
              - struct drbg_test_data holding the test "entropy" data for CAVS
                testing and testmgr.c
              - struct drbg_gen allowing test data, additional information
                string and personalization string data to be funneled through
                the kernel crypto API -- the DRBG requires additional
                parameters for the reset and random number generation
                requests beyond those intended by the kernel crypto API
      
      * wrapper functions to the kernel crypto API functions using struct
        drbg_gen to pass through all data needed by the DRBG
      
      * wrapper functions to kernel crypto API functions usable for testing
        code to inject test_data into the DRBG as needed by CAVS testing and
        testmgr.c.
      
      * DRBG flags required for the operation of the DRBG and for selecting
        the particular DRBG type and backend cipher
      
      * getter functions for data from struct drbg_core
      Signed-off-by: Stephan Mueller <smueller@chronox.de>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: drbg - SP800-90A Deterministic Random Bit Generator · 541af946
      Stephan Mueller authored
      This is a clean-room implementation of the DRBG defined in SP800-90A.
      All three viable DRBGs defined in the standard are implemented:
      
       * HMAC: the leanest DRBG, compiled in by default
       * Hash: a more complex DRBG that can be enabled at compile time
       * CTR: the most complex DRBG, which can also be enabled at compile time
      
      The DRBG implementation offers the following:
      
       * All three DRBG types are implemented with a derivation function.
       * All DRBG types are available with and without prediction resistance.
       * All SHA types of SHA-1, SHA-256, SHA-384, SHA-512 are available for
         the HMAC and Hash DRBGs.
       * All AES types of AES-128, AES-192 and AES-256 are available for the
         CTR DRBG.
       * A self test is implemented with drbg_healthcheck().
       * The FIPS 140-2 continuous self test is implemented.
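      The continuous self test named in the last bullet is conceptually simple:
      every freshly generated block is compared with its predecessor, and
      generation fails on a repeat. A minimal sketch, assuming a byte-string
      block interface (the in-kernel version operates on the DRBG's native
      block size):

```python
class ContinuousTest:
    """FIPS 140-2 style continuous RNG self test: reject any generated
    block that is identical to the immediately preceding one."""

    def __init__(self):
        self.prev = None  # no block seen yet

    def check(self, block):
        if block == self.prev:
            raise RuntimeError("continuous self test failed: block repeated")
        self.prev = block
        return block
```

      The first block is accepted unconditionally, since there is nothing to
      compare it against.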
      Signed-off-by: Stephan Mueller <smueller@chronox.de>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: drivers - Add 2 missing __exit_p · 13269ec6
      Jean Delvare authored
      References to __exit functions must be wrapped with __exit_p.
      Signed-off-by: Jean Delvare <jdelvare@suse.de>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Robert Jennings <rcj@linux.vnet.ibm.com>
      Cc: Marcelo Henrique Cerri <mhcerri@linux.vnet.ibm.com>
      Cc: Fionnuala Gunter <fin@linux.vnet.ibm.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: caam - Introduce the use of the managed version of kzalloc · 4776d381
      Himangi Saraogi authored
      This patch moves data allocated using kzalloc to managed data allocated
      using devm_kzalloc and removes the now-unnecessary kfrees in the probe and
      remove functions. Also, linux/device.h is added to make sure the devm_*()
      routine declarations are unambiguously available. Earlier, in the probe
      function, ctrlpriv was leaked on the failure of ctrl = of_iomap(nprop, 0);
      as well as on the failure of ctrlpriv->jrpdev = kzalloc(...);. These
      two bugs have been fixed by the patch.
      
      The following Coccinelle semantic patch was used for making the change:
      
      @platform@
      identifier p, probefn, removefn;
      @@
      struct platform_driver p = {
        .probe = probefn,
        .remove = removefn,
      };
      
      @prb@
      identifier platform.probefn, pdev;
      expression e, e1, e2;
      @@
      probefn(struct platform_device *pdev, ...) {
        <+...
      - e = kzalloc(e1, e2)
      + e = devm_kzalloc(&pdev->dev, e1, e2)
        ...
      ?-kfree(e);
        ...+>
      }
      
      @rem depends on prb@
      identifier platform.removefn;
      expression e;
      @@
      removefn(...) {
        <...
      - kfree(e);
        ...>
      }
      Signed-off-by: Himangi Saraogi <himangi774@gmail.com>
      Acked-by: Julia Lawall <julia.lawall@lip6.fr>
      Reviewed-by: Marek Vasut <marex@denx.de>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: lzo - try kmalloc() before vmalloc() · 42614b05
      Eric Dumazet authored
      zswap allocates one LZO context per online cpu.
      
      Using vmalloc() for small (16KB) memory areas has the drawbacks of slowing
      down /proc/vmallocinfo and /proc/meminfo reads, increasing TLB pressure,
      and poor NUMA locality, as the default NUMA policy at boot time is to
      interleave pages:
      
      edumazet:~# grep lzo /proc/vmallocinfo | head -4
      0xffffc90006062000-0xffffc90006067000   20480 lzo_init+0x1b/0x30 pages=4 vmalloc N0=2 N1=2
      0xffffc90006067000-0xffffc9000606c000   20480 lzo_init+0x1b/0x30 pages=4 vmalloc N0=2 N1=2
      0xffffc9000606c000-0xffffc90006071000   20480 lzo_init+0x1b/0x30 pages=4 vmalloc N0=2 N1=2
      0xffffc90006071000-0xffffc90006076000   20480 lzo_init+0x1b/0x30 pages=4 vmalloc N0=2 N1=2
      
      This patch tries a regular kmalloc() first and falls back to vmalloc() in
      case memory is too fragmented.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: skcipher - Don't use __crypto_dequeue_request() · d656c180
      Marek Vasut authored
      Use skcipher_givcrypt_cast(crypto_dequeue_request(queue)) instead, which
      does the same thing in a much cleaner way. skcipher_givcrypt_cast()
      actually uses container_of() instead of messing around with offsetof(),
      too.
      Signed-off-by: Marek Vasut <marex@denx.de>
      Reported-by: Arnd Bergmann <arnd@arndb.de>
      Cc: Pantelis Antoniou <panto@antoniou-consulting.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: api - Move crypto_yield() to algapi.h · bb55a4c1
      Marek Vasut authored
      It makes no sense for crypto_yield() to be defined in scatterwalk.h, so
      move it into algapi.h, as it is a function internal to the crypto API.
      Signed-off-by: Marek Vasut <marex@denx.de>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  5. 16 Jun, 2014 2 commits
    • Linux 3.16-rc1 · 7171511e
      Linus Torvalds authored
    • Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net · a9be2242
      Linus Torvalds authored
      Pull networking fixes from David Miller:
      
       1) Fix checksumming regressions, from Tom Herbert.
      
       2) Undo unintentional permissions changes for SCTP rto_alpha and
          rto_beta sysfs knobs, from Daniel Borkmann.
      
       3) VXLAN, like other IP tunnels, should advertise its encapsulation
          size using dev->needed_headroom instead of dev->hard_header_len.
          From Cong Wang.
      
      * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net:
        net: sctp: fix permissions for rto_alpha and rto_beta knobs
        vxlan: Checksum fixes
        net: add skb_pop_rcv_encapsulation
        udp: call __skb_checksum_complete when doing full checksum
        net: Fix save software checksum complete
        net: Fix GSO constants to match NETIF flags
        udp: ipv4: do not waste time in __udp4_lib_mcast_demux_lookup
        vxlan: use dev->needed_headroom instead of dev->hard_header_len
        MAINTAINERS: update cxgb4 maintainer