Commit 7b742dac authored by Linus Torvalds

Merge master.kernel.org:/home/davem/BK/crypto-2.5

into home.transmeta.com:/home/torvalds/v2.5/linux
parents 23c0bc6f ada56aa5
Scatterlist Cryptographic API
INTRODUCTION
The Scatterlist Crypto API takes page vectors (scatterlists) as
arguments, and works directly on pages. In some cases (e.g. ECB
mode ciphers), this will allow for pages to be encrypted in-place
with no copying.
One of the initial goals of this design was to readily support IPsec,
so that processing can be applied to paged skb's without the need
for linearization.
DETAILS
At the lowest level are algorithms, which register dynamically with the
API.
'Transforms' are user-instantiated objects, which maintain state, handle all
of the implementation logic (e.g. manipulating page vectors), provide an
abstraction to the underlying algorithms, and handle common logical
operations (e.g. cipher modes, HMAC for digests). However, at the user
level they are very simple.
Conceptually, the API layering looks like this:
[transform api] (user interface)
[transform ops] (per-type logic glue e.g. cipher.c, digest.c)
[algorithm api] (for registering algorithms)
The idea is to make the user interface and algorithm registration API
very simple, while hiding the core logic from both. Many good ideas
from existing APIs such as Cryptoapi and Nettle have been adapted for this.
The API currently supports three types of transforms: Ciphers, Digests and
Compressors. The compression algorithms especially seem to be performing
very well so far.
An asynchronous scheduling interface is in planning but not yet
implemented, as we need to further analyze the requirements of all of
the possible hardware scenarios (e.g. IPsec NIC offload).
Here's an example of how to use the API:
#include <linux/crypto.h>
struct scatterlist sg[2];
char result[128];
struct crypto_tfm *tfm;
tfm = crypto_alloc_tfm("md5", 0);
if (tfm == NULL)
fail();
/* ... set up the scatterlists ... */
crypto_digest_init(tfm);
crypto_digest_update(tfm, sg, 2);
crypto_digest_final(tfm, result);
crypto_free_tfm(tfm);
Many real examples are available in the regression test module (tcrypt.c).
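As one possible way to fill in the elided scatterlist setup above, kernel
virtual buffers can be pointed at directly, the same way digest.c builds
temporary scatterlists for its ipad/opad blocks. In this sketch buf1, buf2,
len1 and len2 are hypothetical, and each buffer is assumed not to cross a
page boundary:

/* Hypothetical buffers; each assumed to lie within a single page. */
sg[0].page = virt_to_page(buf1);
sg[0].offset = ((long)buf1 & ~PAGE_MASK);
sg[0].length = len1;
sg[1].page = virt_to_page(buf2);
sg[1].offset = ((long)buf2 & ~PAGE_MASK);
sg[1].length = len2;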
CONFIGURATION NOTES
As Triple DES is part of the DES module, for those using modular builds,
add the following line to /etc/modules.conf:
alias des3_ede des
DEVELOPER NOTES
None of this code should be called from hardirq context, only softirq and
user contexts.
When using the API for ciphers, performance will be optimal if each
scatterlist contains data which is a multiple of the cipher's block
size (typically 8 bytes). This prevents having to do any copying
across non-aligned page fragment boundaries.
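To illustrate the block-size rule, here is a sketch of ECB DES over a single
scatterlist entry. It assumes crypto_cipher_setkey() and
crypto_cipher_encrypt() wrappers in <linux/crypto.h> that dispatch to the
cit_setkey/cit_encrypt ops installed by cipher.c, by analogy with the
crypto_digest_*() wrappers used above:

struct scatterlist sg[1];
struct crypto_tfm *tfm;
u8 key[8];

tfm = crypto_alloc_tfm("des", CRYPTO_TFM_MODE_ECB);
if (tfm == NULL)
	fail();
/* ... fill key[] and point sg[0] at a multiple of 8 bytes ... */
if (crypto_cipher_setkey(tfm, key, 8))
	fail();
crypto_cipher_encrypt(tfm, sg, 1);
crypto_free_tfm(tfm);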
ADDING NEW ALGORITHMS
When submitting a new algorithm for inclusion, a mandatory requirement
is that at least a few test vectors from known sources (preferably
standards) be included.
Converting existing well known code is preferred, as it is more likely
to have been reviewed and widely tested. If submitting code from LGPL
sources, please consider changing the license to GPL (see section 3 of
the LGPL).
Algorithms submitted must also be generally patent-free (e.g. IDEA
will not be included in the mainline until around 2011), and be based
on a recognized standard and/or have been subjected to appropriate
peer review.
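Concretely, registration for a digest follows the pattern of md4.c later in
this patch; the skeleton below is a sketch with hypothetical foo_* names:

static struct crypto_alg alg = {
	.cra_name	= "foo",
	.cra_flags	= CRYPTO_ALG_TYPE_DIGEST,
	.cra_blocksize	= FOO_HMAC_BLOCK_SIZE,
	.cra_ctxsize	= sizeof(struct foo_ctx),
	.cra_module	= THIS_MODULE,
	.cra_list	= LIST_HEAD_INIT(alg.cra_list),
	.cra_u		= { .digest = {
	.dia_digestsize	= FOO_DIGEST_SIZE,
	.dia_init	= foo_init,
	.dia_update	= foo_update,
	.dia_final	= foo_final } }
};

static int __init init(void)
{
	return crypto_register_alg(&alg);	/* -EEXIST if the name is taken */
}
module_init(init);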
BUGS
Send bug reports to:
James Morris <jmorris@intercode.com.au>
Cc: David S. Miller <davem@redhat.com>
FURTHER INFORMATION
For further patches and various updates, including the current TODO
list, see:
http://samba.org/~jamesm/crypto/
Ongoing development discussion may also be found on
kerneli cryptoapi-devel,
see http://www.kerneli.org/mailman/listinfo/cryptoapi-devel
AUTHORS
James Morris
David S. Miller
Jean-Francois Dive (SHA1 algorithm module)
CREDITS
The following people provided invaluable feedback during the development
of the API:
Alexey Kuznetsov
Rusty Russell
Herbert Valerio Riedel
Jeff Garzik
Michael Richardson
Andrew Morton
Ingo Oeser
Christoph Hellwig
Portions of this API were derived from the following projects:
Kerneli Cryptoapi (http://www.kerneli.org/)
Alexander Kjeldaas
Herbert Valerio Riedel
Kyle McMartin
Jean-Luc Cooke
David Bryson
Clemens Fruhwirth
Tobias Ringstrom
Harald Welte
and;
Nettle (http://www.lysator.liu.se/~nisse/nettle/)
Niels Möller
Original developers of the initial set of crypto algorithms:
Dana L. How (DES)
Andrew Tridgell and Steve French (MD4)
Colin Plumb (MD5)
Steve Reid (SHA1)
USAGI project members (HMAC)
The DES code was subsequently redeveloped by:
Raimar Falke
Gisle Sælensminde
Niels Möller
Please send any credits updates or corrections to:
James Morris <jmorris@intercode.com.au>
@@ -219,7 +219,7 @@ endif
 include arch/$(ARCH)/Makefile
-core-y += kernel/ mm/ fs/ ipc/ security/
+core-y += kernel/ mm/ fs/ ipc/ security/ crypto/
 SUBDIRS += $(patsubst %/,%,$(filter %/, $(init-y) $(init-m) \
 $(core-y) $(core-m) $(drivers-y) $(drivers-m) \
......
@@ -398,4 +398,5 @@ fi
 endmenu
 source security/Config.in
+source crypto/Config.in
 source lib/Config.in
@@ -553,4 +553,5 @@ dep_bool ' Kernel low-level debugging messages via UART2' CONFIG_DEBUG_CLPS71
 endmenu
 source security/Config.in
+source crypto/Config.in
 source lib/Config.in
@@ -229,4 +229,5 @@ fi
 endmenu
 source security/Config.in
+source crypto/Config.in
 source lib/Config.in
@@ -477,6 +477,7 @@ fi
 endmenu
 source security/Config.in
+source crypto/Config.in
 source lib/Config.in
 if [ "$CONFIG_SMP" = "y" ]; then
......
@@ -293,3 +293,4 @@ fi
 endmenu
 source security/Config.in
+source crypto/Config.in
@@ -547,4 +547,5 @@ fi
 endmenu
 source security/Config.in
+source crypto/Config.in
 source lib/Config.in
@@ -497,4 +497,5 @@ fi
 endmenu
 source security/Config.in
+source crypto/Config.in
 source lib/Config.in
@@ -251,4 +251,5 @@ fi
 endmenu
 source security/Config.in
+source crypto/Config.in
 source lib/Config.in
@@ -195,4 +195,5 @@ bool 'Magic SysRq key' CONFIG_MAGIC_SYSRQ
 endmenu
 source security/Config.in
+source crypto/Config.in
 source lib/Config.in
@@ -625,4 +625,5 @@ fi
 endmenu
 source security/Config.in
+source crypto/Config.in
@@ -207,4 +207,5 @@ fi
 endmenu
 source security/Config.in
+source crypto/Config.in
 source lib/Config.in
@@ -76,4 +76,5 @@ bool 'Magic SysRq key' CONFIG_MAGIC_SYSRQ
 endmenu
 source security/Config.in
+source crypto/Config.in
 source lib/Config.in
@@ -80,4 +80,5 @@ bool 'Magic SysRq key' CONFIG_MAGIC_SYSRQ
 endmenu
 source security/Config.in
+source crypto/Config.in
 source lib/Config.in
@@ -369,4 +369,5 @@ fi
 endmenu
 source security/Config.in
+source crypto/Config.in
 source lib/Config.in
@@ -254,4 +254,5 @@ bool 'Spinlock debugging' CONFIG_DEBUG_SPINLOCK
 endmenu
 source security/Config.in
+source crypto/Config.in
 source lib/Config.in
@@ -295,4 +295,5 @@ fi
 endmenu
 source security/Config.in
+source crypto/Config.in
 source lib/Config.in
@@ -237,4 +237,5 @@ fi
 endmenu
 source security/Config.in
+source crypto/Config.in
 source lib/Config.in
#
# Cryptographic API Components
#
CONFIG_CRYPTO
This option provides the core Cryptographic API.
CONFIG_CRYPTO_MD4
MD4 message digest algorithm (RFC1320), including HMAC (RFC2104).
CONFIG_CRYPTO_MD5
MD5 message digest algorithm (RFC1321), including HMAC (RFC2104, RFC2403).
CONFIG_CRYPTO_SHA1
SHA-1 secure hash standard (FIPS 180-1), including HMAC (RFC2104, RFC2404).
CONFIG_CRYPTO_DES
DES cipher algorithm (FIPS 46-2), and Triple DES EDE (FIPS 46-3).
CONFIG_CRYPTO_TEST
Quick & dirty crypto test module.
#
# Cryptographic API Configuration
#
mainmenu_option next_comment
comment 'Cryptographic options'
bool 'Cryptographic API' CONFIG_CRYPTO
if [ "$CONFIG_CRYPTO" = "y" ]; then
tristate ' MD4 digest algorithm' CONFIG_CRYPTO_MD4
tristate ' MD5 digest algorithm' CONFIG_CRYPTO_MD5
tristate ' SHA-1 digest algorithm' CONFIG_CRYPTO_SHA1
tristate ' DES and Triple DES EDE cipher algorithms' CONFIG_CRYPTO_DES
tristate ' Testing module' CONFIG_CRYPTO_TEST
fi
endmenu
#
# Cryptographic API
#
export-objs := api.o
obj-$(CONFIG_CRYPTO) += api.o cipher.o digest.o compress.o
obj-$(CONFIG_KMOD) += autoload.o
obj-$(CONFIG_CRYPTO_MD4) += md4.o
obj-$(CONFIG_CRYPTO_MD5) += md5.o
obj-$(CONFIG_CRYPTO_SHA1) += sha1.o
obj-$(CONFIG_CRYPTO_DES) += des.o
obj-$(CONFIG_CRYPTO_TEST) += tcrypt.o
include $(TOPDIR)/Rules.make
/*
* Scatterlist Cryptographic API.
*
* Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
* Copyright (c) 2002 David S. Miller (davem@redhat.com)
*
* Portions derived from Cryptoapi, by Alexander Kjeldaas <astor@fast.no>
* and Nettle, by Niels Möller.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version.
*
*/
#include <linux/init.h>
#include <linux/crypto.h>
#include <linux/rwsem.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#include "internal.h"
static LIST_HEAD(crypto_alg_list);
static DECLARE_RWSEM(crypto_alg_sem);
static inline int crypto_alg_get(struct crypto_alg *alg)
{
return try_inc_mod_count(alg->cra_module);
}
static inline void crypto_alg_put(struct crypto_alg *alg)
{
if (alg->cra_module)
__MOD_DEC_USE_COUNT(alg->cra_module);
}
struct crypto_alg *crypto_alg_lookup(char *name)
{
struct crypto_alg *q, *alg = NULL;
down_read(&crypto_alg_sem);
list_for_each_entry(q, &crypto_alg_list, cra_list) {
if (!(strcmp(q->cra_name, name))) {
if (crypto_alg_get(q))
alg = q;
break;
}
}
up_read(&crypto_alg_sem);
return alg;
}
static int crypto_init_flags(struct crypto_tfm *tfm, u32 flags)
{
tfm->crt_flags = 0;
switch (crypto_tfm_alg_type(tfm)) {
case CRYPTO_ALG_TYPE_CIPHER:
return crypto_init_cipher_flags(tfm, flags);
case CRYPTO_ALG_TYPE_DIGEST:
return crypto_init_digest_flags(tfm, flags);
case CRYPTO_ALG_TYPE_COMP:
return crypto_init_compress_flags(tfm, flags);
default:
BUG();
}
return -EINVAL;
}
static void crypto_init_ops(struct crypto_tfm *tfm)
{
switch (crypto_tfm_alg_type(tfm)) {
case CRYPTO_ALG_TYPE_CIPHER:
crypto_init_cipher_ops(tfm);
break;
case CRYPTO_ALG_TYPE_DIGEST:
crypto_init_digest_ops(tfm);
break;
case CRYPTO_ALG_TYPE_COMP:
crypto_init_compress_ops(tfm);
break;
default:
BUG();
}
}
struct crypto_tfm *crypto_alloc_tfm(char *name, u32 flags)
{
struct crypto_tfm *tfm = NULL;
struct crypto_alg *alg;
alg = crypto_alg_lookup(name);
#ifdef CONFIG_KMOD
if (alg == NULL) {
crypto_alg_autoload(name);
alg = crypto_alg_lookup(name);
}
#endif
if (alg == NULL)
goto out;
tfm = kmalloc(sizeof(*tfm), GFP_KERNEL);
if (tfm == NULL)
goto out_put;
if (alg->cra_ctxsize) {
tfm->crt_ctx = kmalloc(alg->cra_ctxsize, GFP_KERNEL);
if (tfm->crt_ctx == NULL)
goto out_free_tfm;
}
tfm->__crt_alg = alg;
if (crypto_init_flags(tfm, flags))
goto out_free_ctx;
crypto_init_ops(tfm);
goto out;
out_free_ctx:
if (tfm->__crt_alg->cra_ctxsize)
kfree(tfm->crt_ctx);
out_free_tfm:
kfree(tfm);
tfm = NULL;
out_put:
crypto_alg_put(alg);
out:
return tfm;
}
void crypto_free_tfm(struct crypto_tfm *tfm)
{
if (tfm->__crt_alg->cra_ctxsize)
kfree(tfm->crt_ctx);
if (crypto_tfm_alg_type(tfm) == CRYPTO_ALG_TYPE_CIPHER)
if (tfm->crt_cipher.cit_iv)
kfree(tfm->crt_cipher.cit_iv);
crypto_alg_put(tfm->__crt_alg);
kfree(tfm);
}
static inline int crypto_alg_blocksize_check(struct crypto_alg *alg)
{
return ((alg->cra_flags & CRYPTO_ALG_TYPE_MASK)
== CRYPTO_ALG_TYPE_CIPHER &&
alg->cra_blocksize > CRYPTO_MAX_CIPHER_BLOCK_SIZE);
}
int crypto_register_alg(struct crypto_alg *alg)
{
int ret = 0;
struct crypto_alg *q;
down_write(&crypto_alg_sem);
list_for_each_entry(q, &crypto_alg_list, cra_list) {
if (!(strcmp(q->cra_name, alg->cra_name))) {
ret = -EEXIST;
goto out;
}
}
if (crypto_alg_blocksize_check(alg)) {
printk(KERN_WARNING "%s: blocksize %Zd exceeds max. "
"size %d\n", __FUNCTION__, alg->cra_blocksize,
CRYPTO_MAX_CIPHER_BLOCK_SIZE);
ret = -EINVAL;
}
else
list_add_tail(&alg->cra_list, &crypto_alg_list);
out:
up_write(&crypto_alg_sem);
return ret;
}
int crypto_unregister_alg(struct crypto_alg *alg)
{
int ret = -ENOENT;
struct crypto_alg *q;
BUG_ON(!alg->cra_module);
down_write(&crypto_alg_sem);
list_for_each_entry(q, &crypto_alg_list, cra_list) {
if (alg == q) {
list_del(&alg->cra_list);
ret = 0;
goto out;
}
}
out:
up_write(&crypto_alg_sem);
return ret;
}
static void *c_start(struct seq_file *m, loff_t *pos)
{
struct list_head *v;
loff_t n = *pos;
down_read(&crypto_alg_sem);
list_for_each(v, &crypto_alg_list)
if (!n--)
return list_entry(v, struct crypto_alg, cra_list);
return NULL;
}
static void *c_next(struct seq_file *m, void *p, loff_t *pos)
{
struct list_head *v = p;
(*pos)++;
v = v->next;
return (v == &crypto_alg_list) ?
NULL : list_entry(v, struct crypto_alg, cra_list);
}
static void c_stop(struct seq_file *m, void *p)
{
up_read(&crypto_alg_sem);
}
static int c_show(struct seq_file *m, void *p)
{
struct crypto_alg *alg = (struct crypto_alg *)p;
seq_printf(m, "name : %s\n", alg->cra_name);
seq_printf(m, "module : %s\n", alg->cra_module ?
alg->cra_module->name : "[static]");
seq_printf(m, "blocksize : %Zd\n", alg->cra_blocksize);
switch (alg->cra_flags & CRYPTO_ALG_TYPE_MASK) {
case CRYPTO_ALG_TYPE_CIPHER:
seq_printf(m, "keysize : %Zd\n", alg->cra_cipher.cia_keysize);
seq_printf(m, "ivsize : %Zd\n", alg->cra_cipher.cia_ivsize);
break;
case CRYPTO_ALG_TYPE_DIGEST:
seq_printf(m, "digestsize : %Zd\n",
alg->cra_digest.dia_digestsize);
break;
}
seq_putc(m, '\n');
return 0;
}
static struct seq_operations crypto_seq_ops = {
.start = c_start,
.next = c_next,
.stop = c_stop,
.show = c_show
};
static int crypto_info_open(struct inode *inode, struct file *file)
{
return seq_open(file, &crypto_seq_ops);
}
struct file_operations proc_crypto_ops = {
.open = crypto_info_open,
.read = seq_read,
.llseek = seq_lseek,
.release = seq_release
};
static int __init init_crypto(void)
{
struct proc_dir_entry *proc;
printk(KERN_INFO "Initializing Cryptographic API\n");
proc = create_proc_entry("crypto", 0, NULL);
if (proc)
proc->proc_fops = &proc_crypto_ops;
return 0;
}
__initcall(init_crypto);
EXPORT_SYMBOL_GPL(crypto_register_alg);
EXPORT_SYMBOL_GPL(crypto_unregister_alg);
EXPORT_SYMBOL_GPL(crypto_alloc_tfm);
EXPORT_SYMBOL_GPL(crypto_free_tfm);
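For reference, once the MD5 digest below is registered, c_show() renders its
/proc/crypto entry roughly as follows (illustrative output; blocksize 64 and
digestsize 16 follow from md5.c later in this patch):

name : md5
module : [static]
blocksize : 64
digestsize : 16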
/*
* Cryptographic API.
*
* Algorithm autoloader.
*
* Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version.
*
*/
#include <linux/kernel.h>
#include <linux/crypto.h>
#include <linux/string.h>
#include <linux/kmod.h>
#include "internal.h"
/*
* A far more intelligent version of this is planned. For now, just
* try an exact match on the name of the algorithm.
*/
void crypto_alg_autoload(char *name)
{
request_module(name);
return;
}
/*
* Cryptographic API.
*
* Cipher operations.
*
* Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version.
*
*/
#include <linux/kernel.h>
#include <linux/crypto.h>
#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/highmem.h>
#include <asm/scatterlist.h>
#include "internal.h"
typedef void (cryptfn_t)(void *, u8 *, u8 *);
typedef void (procfn_t)(struct crypto_tfm *, u8 *, cryptfn_t, int enc);
static inline void xor_64(u8 *a, const u8 *b)
{
((u32 *)a)[0] ^= ((u32 *)b)[0];
((u32 *)a)[1] ^= ((u32 *)b)[1];
}
static inline size_t sglen(struct scatterlist *sg, size_t nsg)
{
int i;
size_t n;
for (i = 0, n = 0; i < nsg; i++)
n += sg[i].length;
return n;
}
/*
* Do not call this unless the total length of all of the fragments
* has been verified as multiple of the block size.
*/
static int copy_chunks(struct crypto_tfm *tfm, u8 *buf,
struct scatterlist *sg, int sgidx,
int rlen, int *last, int in)
{
int i, copied, coff, j, aligned;
size_t bsize = crypto_tfm_alg_blocksize(tfm);
for (i = sgidx, j = copied = 0, aligned = 0; copied < bsize; i++) {
int len = sg[i].length;
int clen;
char *p;
if (copied) {
coff = 0;
clen = min_t(int, len, bsize - copied);
if (len == bsize - copied)
aligned = 1; /* last + right aligned */
} else {
coff = len - rlen;
clen = rlen;
}
p = crypto_kmap(sg[i].page) + sg[i].offset + coff;
if (in)
memcpy(&buf[copied], p, clen);
else
memcpy(p, &buf[copied], clen);
crypto_kunmap(p);
*last = aligned ? 0 : clen;
copied += clen;
}
return i - sgidx - 2 + aligned;
}
static inline int gather_chunks(struct crypto_tfm *tfm, u8 *buf,
struct scatterlist *sg,
int sgidx, int rlen, int *last)
{
return copy_chunks(tfm, buf, sg, sgidx, rlen, last, 1);
}
static inline int scatter_chunks(struct crypto_tfm *tfm, u8 *buf,
struct scatterlist *sg,
int sgidx, int rlen, int *last)
{
return copy_chunks(tfm, buf, sg, sgidx, rlen, last, 0);
}
/*
* Generic encrypt/decrypt wrapper for ciphers.
*
* If we find a remnant at the end of a frag, we have to encrypt or
* decrypt across possibly multiple page boundaries via a temporary
* block, then continue processing with a chunk offset until the end
* of a frag is block aligned.
*
* The code is further complicated by having to remap a page after
* processing a block then yielding. The data will be offset from the
* start of page at the scatterlist offset, the chunking offset (coff)
* and the block offset (boff).
*/
static int crypt(struct crypto_tfm *tfm, struct scatterlist *sg,
size_t nsg, cryptfn_t crfn, procfn_t prfn, int enc)
{
int i, coff;
size_t bsize = crypto_tfm_alg_blocksize(tfm);
u8 tmp[CRYPTO_MAX_CIPHER_BLOCK_SIZE];
if (sglen(sg, nsg) % bsize) {
tfm->crt_flags |= CRYPTO_TFM_RES_BAD_BLOCK_LEN;
return -EINVAL;
}
for (i = 0, coff = 0; i < nsg; i++) {
int n = 0, boff = 0;
int len = sg[i].length - coff;
char *p = crypto_kmap(sg[i].page) + sg[i].offset + coff;
while (len) {
if (len < bsize) {
crypto_kunmap(p);
n = gather_chunks(tfm, tmp, sg, i, len, &coff);
prfn(tfm, tmp, crfn, enc);
scatter_chunks(tfm, tmp, sg, i, len, &coff);
crypto_yield(tfm);
goto unmapped;
} else {
prfn(tfm, p, crfn, enc);
crypto_kunmap(p);
crypto_yield(tfm);
/* remap and point to recalculated offset */
boff += bsize;
p = crypto_kmap(sg[i].page)
+ sg[i].offset + coff + boff;
len -= bsize;
/* End of frag with no remnant? */
if (coff && len == 0)
coff = 0;
}
}
crypto_kunmap(p);
unmapped:
i += n;
}
return 0;
}
static void cbc_process(struct crypto_tfm *tfm,
u8 *block, cryptfn_t fn, int enc)
{
if (enc) {
xor_64(tfm->crt_cipher.cit_iv, block);
fn(tfm->crt_ctx, block, tfm->crt_cipher.cit_iv);
memcpy(tfm->crt_cipher.cit_iv, block,
crypto_tfm_alg_blocksize(tfm));
} else {
u8 buf[CRYPTO_MAX_CIPHER_BLOCK_SIZE];
fn(tfm->crt_ctx, buf, block);
xor_64(buf, tfm->crt_cipher.cit_iv);
memcpy(tfm->crt_cipher.cit_iv, block,
crypto_tfm_alg_blocksize(tfm));
memcpy(block, buf, crypto_tfm_alg_blocksize(tfm));
}
}
static void ecb_process(struct crypto_tfm *tfm, u8 *block,
cryptfn_t fn, int enc)
{
fn(tfm->crt_ctx, block, block);
}
static int setkey(struct crypto_tfm *tfm, const u8 *key, size_t keylen)
{
return tfm->__crt_alg->cra_cipher.cia_setkey(tfm->crt_ctx, key,
keylen, &tfm->crt_flags);
}
static int ecb_encrypt(struct crypto_tfm *tfm,
struct scatterlist *sg, size_t nsg)
{
return crypt(tfm, sg, nsg,
tfm->__crt_alg->cra_cipher.cia_encrypt, ecb_process, 1);
}
static int ecb_decrypt(struct crypto_tfm *tfm,
struct scatterlist *sg, size_t nsg)
{
return crypt(tfm, sg, nsg,
tfm->__crt_alg->cra_cipher.cia_decrypt, ecb_process, 1);
}
static int cbc_encrypt(struct crypto_tfm *tfm,
struct scatterlist *sg, size_t nsg)
{
return crypt(tfm, sg, nsg,
tfm->__crt_alg->cra_cipher.cia_encrypt, cbc_process, 1);
}
static int cbc_decrypt(struct crypto_tfm *tfm,
struct scatterlist *sg, size_t nsg)
{
return crypt(tfm, sg, nsg,
tfm->__crt_alg->cra_cipher.cia_decrypt, cbc_process, 0);
}
static int nocrypt(struct crypto_tfm *tfm, struct scatterlist *sg, size_t nsg)
{
return -ENOSYS;
}
int crypto_init_cipher_flags(struct crypto_tfm *tfm, u32 flags)
{
struct crypto_alg *alg = tfm->__crt_alg;
u32 mode = flags & CRYPTO_TFM_MODE_MASK;
tfm->crt_cipher.cit_mode = mode ? mode : CRYPTO_TFM_MODE_ECB;
if (alg->cra_cipher.cia_ivsize && mode != CRYPTO_TFM_MODE_ECB) {
tfm->crt_cipher.cit_iv =
kmalloc(alg->cra_cipher.cia_ivsize, GFP_KERNEL);
if (tfm->crt_cipher.cit_iv == NULL)
return -ENOMEM;
} else
tfm->crt_cipher.cit_iv = NULL;
if (flags & CRYPTO_TFM_REQ_WEAK_KEY)
tfm->crt_flags = CRYPTO_TFM_REQ_WEAK_KEY;
return 0;
}
void crypto_init_cipher_ops(struct crypto_tfm *tfm)
{
struct cipher_tfm *ops = &tfm->crt_cipher;
ops->cit_setkey = setkey;
switch (tfm->crt_cipher.cit_mode) {
case CRYPTO_TFM_MODE_ECB:
ops->cit_encrypt = ecb_encrypt;
ops->cit_decrypt = ecb_decrypt;
break;
case CRYPTO_TFM_MODE_CBC:
ops->cit_encrypt = cbc_encrypt;
ops->cit_decrypt = cbc_decrypt;
break;
case CRYPTO_TFM_MODE_CFB:
ops->cit_encrypt = nocrypt;
ops->cit_decrypt = nocrypt;
break;
case CRYPTO_TFM_MODE_CTR:
ops->cit_encrypt = nocrypt;
ops->cit_decrypt = nocrypt;
break;
default:
BUG();
}
}
/*
* Cryptographic API.
*
* Compression operations.
*
* Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version.
*
*/
#include <linux/types.h>
#include <linux/crypto.h>
#include <linux/errno.h>
#include <asm/scatterlist.h>
#include <linux/string.h>
#include "internal.h"
/*
* This code currently implements blazingly fast and
* lossless Quadruple ROT13 compression.
*/
static void crypto_compress(struct crypto_tfm *tfm)
{
return;
}
static void crypto_decompress(struct crypto_tfm *tfm)
{
return;
}
int crypto_init_compress_flags(struct crypto_tfm *tfm, u32 flags)
{
return crypto_cipher_flags(flags) ? -EINVAL : 0;
}
void crypto_init_compress_ops(struct crypto_tfm *tfm)
{
struct compress_tfm *ops = &tfm->crt_compress;
ops->cot_compress = crypto_compress;
ops->cot_decompress = crypto_decompress;
}
This diff is collapsed.
/*
* Cryptographic API.
*
* Digest operations.
*
* Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
*
* The HMAC implementation is derived from USAGI.
* Copyright (c) 2002 USAGI/WIDE Project
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version.
*
*/
#include <linux/crypto.h>
#include <linux/mm.h>
#include <linux/highmem.h>
#include <asm/scatterlist.h>
#include "internal.h"
static void init(struct crypto_tfm *tfm)
{
tfm->__crt_alg->cra_digest.dia_init(tfm->crt_ctx);
return;
}
static void update(struct crypto_tfm *tfm, struct scatterlist *sg, size_t nsg)
{
int i;
for (i = 0; i < nsg; i++) {
char *p = crypto_kmap(sg[i].page) + sg[i].offset;
tfm->__crt_alg->cra_digest.dia_update(tfm->crt_ctx,
p, sg[i].length);
crypto_kunmap(p);
crypto_yield(tfm);
}
return;
}
static void final(struct crypto_tfm *tfm, u8 *out)
{
tfm->__crt_alg->cra_digest.dia_final(tfm->crt_ctx, out);
return;
}
static void digest(struct crypto_tfm *tfm,
struct scatterlist *sg, size_t nsg, u8 *out)
{
int i;
tfm->crt_digest.dit_init(tfm);
for (i = 0; i < nsg; i++) {
char *p = crypto_kmap(sg[i].page) + sg[i].offset;
tfm->__crt_alg->cra_digest.dia_update(tfm->crt_ctx,
p, sg[i].length);
crypto_kunmap(p);
crypto_yield(tfm);
}
crypto_digest_final(tfm, out);
return;
}
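/*
 * HMAC (RFC 2104): out = H((K ^ opad) || H((K ^ ipad) || text)),
 * where the key is zero-padded to the digest's block size, and keys
 * longer than one block are first hashed down to the digest size.
 */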
static void hmac(struct crypto_tfm *tfm, u8 *key, size_t keylen,
struct scatterlist *sg, size_t nsg, u8 *out)
{
int i;
struct scatterlist tmp;
char ipad[crypto_tfm_alg_blocksize(tfm) + 1];
char opad[crypto_tfm_alg_blocksize(tfm) + 1];
if (keylen > crypto_tfm_alg_blocksize(tfm)) {
tmp.page = virt_to_page(key);
tmp.offset = ((long)key & ~PAGE_MASK);
tmp.length = keylen;
crypto_digest_digest(tfm, &tmp, 1, key);
keylen = crypto_tfm_alg_digestsize(tfm);
}
memset(ipad, 0, sizeof(ipad));
memset(opad, 0, sizeof(opad));
memcpy(ipad, key, keylen);
memcpy(opad, key, keylen);
for (i = 0; i < crypto_tfm_alg_blocksize(tfm); i++) {
ipad[i] ^= 0x36;
opad[i] ^= 0x5c;
}
tmp.page = virt_to_page(ipad);
tmp.offset = ((long)ipad & ~PAGE_MASK);
tmp.length = crypto_tfm_alg_blocksize(tfm);
crypto_digest_init(tfm);
crypto_digest_update(tfm, &tmp, 1);
crypto_digest_update(tfm, sg, nsg);
crypto_digest_final(tfm, out);
tmp.page = virt_to_page(opad);
tmp.offset = ((long)opad & ~PAGE_MASK);
tmp.length = crypto_tfm_alg_blocksize(tfm);
crypto_digest_init(tfm);
crypto_digest_update(tfm, &tmp, 1);
tmp.page = virt_to_page(out);
tmp.offset = ((long)out & ~PAGE_MASK);
tmp.length = crypto_tfm_alg_digestsize(tfm);
crypto_digest_update(tfm, &tmp, 1);
crypto_digest_final(tfm, out);
return;
}
int crypto_init_digest_flags(struct crypto_tfm *tfm, u32 flags)
{
return crypto_cipher_flags(flags) ? -EINVAL : 0;
}
void crypto_init_digest_ops(struct crypto_tfm *tfm)
{
struct digest_tfm *ops = &tfm->crt_digest;
ops->dit_init = init;
ops->dit_update = update;
ops->dit_final = final;
ops->dit_digest = digest;
ops->dit_hmac = hmac;
}
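A caller-side sketch of the HMAC path above: crypto_digest_hmac() is assumed
to be the <linux/crypto.h> wrapper around the dit_hmac op installed here, by
analogy with the crypto_digest_*() wrappers used in digest() itself:

struct scatterlist sg[1];
struct crypto_tfm *tfm;
u8 key[16], mac[16];	/* 16 == MD5 digest size */

tfm = crypto_alloc_tfm("md5", 0);
if (tfm == NULL)
	fail();
/* ... point sg[0] at the message, fill key[] with the secret ... */
crypto_digest_hmac(tfm, key, sizeof(key), sg, 1, mac);
crypto_free_tfm(tfm);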
/*
* Cryptographic API.
*
* Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version.
*
*/
#ifndef _CRYPTO_INTERNAL_H
#define _CRYPTO_INTERNAL_H
#include <linux/mm.h>
#include <linux/highmem.h>
#include <asm/hardirq.h>
#include <asm/softirq.h>
static inline void *crypto_kmap(struct page *page)
{
return kmap_atomic(page, in_softirq() ?
KM_CRYPTO_SOFTIRQ : KM_CRYPTO_USER);
}
static inline void crypto_kunmap(void *vaddr)
{
kunmap_atomic(vaddr, in_softirq() ?
KM_CRYPTO_SOFTIRQ : KM_CRYPTO_USER);
}
static inline void crypto_yield(struct crypto_tfm *tfm)
{
if (!in_softirq())
cond_resched();
}
static inline int crypto_cipher_flags(u32 flags)
{
return flags & (CRYPTO_TFM_MODE_MASK|CRYPTO_TFM_REQ_WEAK_KEY);
}
#ifdef CONFIG_KMOD
void crypto_alg_autoload(char *name);
#endif
int crypto_init_digest_flags(struct crypto_tfm *tfm, u32 flags);
int crypto_init_cipher_flags(struct crypto_tfm *tfm, u32 flags);
int crypto_init_compress_flags(struct crypto_tfm *tfm, u32 flags);
void crypto_init_digest_ops(struct crypto_tfm *tfm);
void crypto_init_cipher_ops(struct crypto_tfm *tfm);
void crypto_init_compress_ops(struct crypto_tfm *tfm);
#endif /* _CRYPTO_INTERNAL_H */
/*
* Cryptographic API.
*
* MD4 Message Digest Algorithm (RFC1320).
*
* Implementation derived from Andrew Tridgell and Steve French's
* CIFS MD4 implementation, and the cryptoapi implementation
* originally based on the public domain implementation written
* by Colin Plumb in 1993.
*
* Copyright (c) Andrew Tridgell 1997-1998.
* Modified by Steve French (sfrench@us.ibm.com) 2002
* Copyright (c) Cryptoapi developers.
* Copyright (c) 2002 David S. Miller (davem@redhat.com)
* Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
*/
#include <linux/init.h>
#include <linux/crypto.h>
#include <linux/kernel.h>
#include <linux/string.h>
#include <asm/byteorder.h>
#define MD4_DIGEST_SIZE 16
#define MD4_HMAC_BLOCK_SIZE 64
#define MD4_BLOCK_WORDS 16
#define MD4_HASH_WORDS 4
struct md4_ctx {
u32 hash[MD4_HASH_WORDS];
u32 block[MD4_BLOCK_WORDS];
u64 byte_count;
};
static u32 lshift(u32 x, int s)
{
x &= 0xFFFFFFFF;
return ((x << s) & 0xFFFFFFFF) | (x >> (32 - s));
}
static inline u32 F(u32 x, u32 y, u32 z)
{
return (x & y) | ((~x) & z);
}
static inline u32 G(u32 x, u32 y, u32 z)
{
return (x & y) | (x & z) | (y & z);
}
static inline u32 H(u32 x, u32 y, u32 z)
{
return x ^ y ^ z;
}
#define ROUND1(a,b,c,d,k,s) (a = lshift(a + F(b,c,d) + k, s))
#define ROUND2(a,b,c,d,k,s) (a = lshift(a + G(b,c,d) + k + (u32)0x5A827999,s))
#define ROUND3(a,b,c,d,k,s) (a = lshift(a + H(b,c,d) + k + (u32)0x6ED9EBA1,s))
/* XXX: this stuff can be optimized */
static inline void le32_to_cpu_array(u32 *buf, unsigned words)
{
while (words--) {
__le32_to_cpus(buf);
buf++;
}
}
static inline void cpu_to_le32_array(u32 *buf, unsigned words)
{
while (words--) {
__cpu_to_le32s(buf);
buf++;
}
}
static inline void md4_transform(u32 *hash, u32 const *in)
{
u32 a, b, c, d;
a = hash[0];
b = hash[1];
c = hash[2];
d = hash[3];
ROUND1(a, b, c, d, in[0], 3);
ROUND1(d, a, b, c, in[1], 7);
ROUND1(c, d, a, b, in[2], 11);
ROUND1(b, c, d, a, in[3], 19);
ROUND1(a, b, c, d, in[4], 3);
ROUND1(d, a, b, c, in[5], 7);
ROUND1(c, d, a, b, in[6], 11);
ROUND1(b, c, d, a, in[7], 19);
ROUND1(a, b, c, d, in[8], 3);
ROUND1(d, a, b, c, in[9], 7);
ROUND1(c, d, a, b, in[10], 11);
ROUND1(b, c, d, a, in[11], 19);
ROUND1(a, b, c, d, in[12], 3);
ROUND1(d, a, b, c, in[13], 7);
ROUND1(c, d, a, b, in[14], 11);
ROUND1(b, c, d, a, in[15], 19);
ROUND2(a, b, c, d, in[0], 3);
ROUND2(d, a, b, c, in[4], 5);
ROUND2(c, d, a, b, in[8], 9);
ROUND2(b, c, d, a, in[12], 13);
ROUND2(a, b, c, d, in[1], 3);
ROUND2(d, a, b, c, in[5], 5);
ROUND2(c, d, a, b, in[9], 9);
ROUND2(b, c, d, a, in[13], 13);
ROUND2(a, b, c, d, in[2], 3);
ROUND2(d, a, b, c, in[6], 5);
ROUND2(c, d, a, b, in[10], 9);
ROUND2(b, c, d, a, in[14], 13);
ROUND2(a, b, c, d, in[3], 3);
ROUND2(d, a, b, c, in[7], 5);
ROUND2(c, d, a, b, in[11], 9);
ROUND2(b, c, d, a, in[15], 13);
ROUND3(a, b, c, d, in[0], 3);
ROUND3(d, a, b, c, in[8], 9);
ROUND3(c, d, a, b, in[4], 11);
ROUND3(b, c, d, a, in[12], 15);
ROUND3(a, b, c, d, in[2], 3);
ROUND3(d, a, b, c, in[10], 9);
ROUND3(c, d, a, b, in[6], 11);
ROUND3(b, c, d, a, in[14], 15);
ROUND3(a, b, c, d, in[1], 3);
ROUND3(d, a, b, c, in[9], 9);
ROUND3(c, d, a, b, in[5], 11);
ROUND3(b, c, d, a, in[13], 15);
ROUND3(a, b, c, d, in[3], 3);
ROUND3(d, a, b, c, in[11], 9);
ROUND3(c, d, a, b, in[7], 11);
ROUND3(b, c, d, a, in[15], 15);
hash[0] += a;
hash[1] += b;
hash[2] += c;
hash[3] += d;
}
static inline void md4_transform_helper(struct md4_ctx *ctx)
{
le32_to_cpu_array(ctx->block, sizeof(ctx->block) / sizeof(u32));
md4_transform(ctx->hash, ctx->block);
}
static void md4_init(void *ctx)
{
struct md4_ctx *mctx = ctx;
mctx->hash[0] = 0x67452301;
mctx->hash[1] = 0xefcdab89;
mctx->hash[2] = 0x98badcfe;
mctx->hash[3] = 0x10325476;
mctx->byte_count = 0;
}
static void md4_update(void *ctx, const u8 *data, size_t len)
{
struct md4_ctx *mctx = ctx;
const u32 avail = sizeof(mctx->block) - (mctx->byte_count & 0x3f);
mctx->byte_count += len;
if (avail > len) {
memcpy((char *)mctx->block + (sizeof(mctx->block) - avail),
data, len);
return;
}
memcpy((char *)mctx->block + (sizeof(mctx->block) - avail),
data, avail);
md4_transform_helper(mctx);
data += avail;
len -= avail;
while (len >= sizeof(mctx->block)) {
memcpy(mctx->block, data, sizeof(mctx->block));
md4_transform_helper(mctx);
data += sizeof(mctx->block);
len -= sizeof(mctx->block);
}
memcpy(mctx->block, data, len);
}
static void md4_final(void *ctx, u8 *out)
{
struct md4_ctx *mctx = ctx;
const int offset = mctx->byte_count & 0x3f;
char *p = (char *)mctx->block + offset;
int padding = 56 - (offset + 1);
*p++ = 0x80;
if (padding < 0) {
memset(p, 0x00, padding + sizeof (u64));
md4_transform_helper(mctx);
p = (char *)mctx->block;
padding = 56;
}
memset(p, 0, padding);
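/* Append the 64-bit bit count as two 32-bit words. These are stored
 * in host order and deliberately excluded from the byte-swap above,
 * since md4_transform() consumes host-order words. */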
mctx->block[14] = mctx->byte_count << 3;
mctx->block[15] = mctx->byte_count >> 29;
le32_to_cpu_array(mctx->block, (sizeof(mctx->block) -
sizeof(u64)) / sizeof(u32));
md4_transform(mctx->hash, mctx->block);
cpu_to_le32_array(mctx->hash, sizeof(mctx->hash) / sizeof(u32));
memcpy(out, mctx->hash, sizeof(mctx->hash));
memset(mctx, 0, sizeof(*mctx));	/* wipe the whole context, not just the pointer */
}
static struct crypto_alg alg = {
.cra_name = "md4",
.cra_flags = CRYPTO_ALG_TYPE_DIGEST,
.cra_blocksize = MD4_HMAC_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct md4_ctx),
.cra_module = THIS_MODULE,
.cra_list = LIST_HEAD_INIT(alg.cra_list),
.cra_u = { .digest = {
.dia_digestsize = MD4_DIGEST_SIZE,
.dia_init = md4_init,
.dia_update = md4_update,
.dia_final = md4_final } }
};
static int __init init(void)
{
return crypto_register_alg(&alg);
}
static void __exit fini(void)
{
crypto_unregister_alg(&alg);
}
module_init(init);
module_exit(fini);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("MD4 Message Digest Algorithm");
/*
* Cryptographic API.
*
* MD5 Message Digest Algorithm (RFC1321).
*
* Derived from cryptoapi implementation, originally based on the
* public domain implementation written by Colin Plumb in 1993.
*
* Copyright (c) Cryptoapi developers.
* Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version.
*
*/
#include <linux/init.h>
#include <linux/module.h>
#include <linux/string.h>
#include <linux/crypto.h>
#include <asm/byteorder.h>
#define MD5_DIGEST_SIZE 16
#define MD5_HMAC_BLOCK_SIZE 64
#define MD5_BLOCK_WORDS 16
#define MD5_HASH_WORDS 4
#define F1(x, y, z) (z ^ (x & (y ^ z)))
#define F2(x, y, z) F1(z, x, y)
#define F3(x, y, z) (x ^ y ^ z)
#define F4(x, y, z) (y ^ (x | ~z))
#define MD5STEP(f, w, x, y, z, in, s) \
(w += f(x, y, z) + in, w = (w<<s | w>>(32-s)) + x)
struct md5_ctx {
u32 hash[MD5_HASH_WORDS];
u32 block[MD5_BLOCK_WORDS];
u64 byte_count;
};
static inline void md5_transform(u32 *hash, u32 const *in)
{
u32 a, b, c, d;
a = hash[0];
b = hash[1];
c = hash[2];
d = hash[3];
MD5STEP(F1, a, b, c, d, in[0] + 0xd76aa478, 7);
MD5STEP(F1, d, a, b, c, in[1] + 0xe8c7b756, 12);
MD5STEP(F1, c, d, a, b, in[2] + 0x242070db, 17);
MD5STEP(F1, b, c, d, a, in[3] + 0xc1bdceee, 22);
MD5STEP(F1, a, b, c, d, in[4] + 0xf57c0faf, 7);
MD5STEP(F1, d, a, b, c, in[5] + 0x4787c62a, 12);
MD5STEP(F1, c, d, a, b, in[6] + 0xa8304613, 17);
MD5STEP(F1, b, c, d, a, in[7] + 0xfd469501, 22);
MD5STEP(F1, a, b, c, d, in[8] + 0x698098d8, 7);
MD5STEP(F1, d, a, b, c, in[9] + 0x8b44f7af, 12);
MD5STEP(F1, c, d, a, b, in[10] + 0xffff5bb1, 17);
MD5STEP(F1, b, c, d, a, in[11] + 0x895cd7be, 22);
MD5STEP(F1, a, b, c, d, in[12] + 0x6b901122, 7);
MD5STEP(F1, d, a, b, c, in[13] + 0xfd987193, 12);
MD5STEP(F1, c, d, a, b, in[14] + 0xa679438e, 17);
MD5STEP(F1, b, c, d, a, in[15] + 0x49b40821, 22);
MD5STEP(F2, a, b, c, d, in[1] + 0xf61e2562, 5);
MD5STEP(F2, d, a, b, c, in[6] + 0xc040b340, 9);
MD5STEP(F2, c, d, a, b, in[11] + 0x265e5a51, 14);
MD5STEP(F2, b, c, d, a, in[0] + 0xe9b6c7aa, 20);
MD5STEP(F2, a, b, c, d, in[5] + 0xd62f105d, 5);
MD5STEP(F2, d, a, b, c, in[10] + 0x02441453, 9);
MD5STEP(F2, c, d, a, b, in[15] + 0xd8a1e681, 14);
MD5STEP(F2, b, c, d, a, in[4] + 0xe7d3fbc8, 20);
MD5STEP(F2, a, b, c, d, in[9] + 0x21e1cde6, 5);
MD5STEP(F2, d, a, b, c, in[14] + 0xc33707d6, 9);
MD5STEP(F2, c, d, a, b, in[3] + 0xf4d50d87, 14);
MD5STEP(F2, b, c, d, a, in[8] + 0x455a14ed, 20);
MD5STEP(F2, a, b, c, d, in[13] + 0xa9e3e905, 5);
MD5STEP(F2, d, a, b, c, in[2] + 0xfcefa3f8, 9);
MD5STEP(F2, c, d, a, b, in[7] + 0x676f02d9, 14);
MD5STEP(F2, b, c, d, a, in[12] + 0x8d2a4c8a, 20);
MD5STEP(F3, a, b, c, d, in[5] + 0xfffa3942, 4);
MD5STEP(F3, d, a, b, c, in[8] + 0x8771f681, 11);
MD5STEP(F3, c, d, a, b, in[11] + 0x6d9d6122, 16);
MD5STEP(F3, b, c, d, a, in[14] + 0xfde5380c, 23);
MD5STEP(F3, a, b, c, d, in[1] + 0xa4beea44, 4);
MD5STEP(F3, d, a, b, c, in[4] + 0x4bdecfa9, 11);
MD5STEP(F3, c, d, a, b, in[7] + 0xf6bb4b60, 16);
MD5STEP(F3, b, c, d, a, in[10] + 0xbebfbc70, 23);
MD5STEP(F3, a, b, c, d, in[13] + 0x289b7ec6, 4);
MD5STEP(F3, d, a, b, c, in[0] + 0xeaa127fa, 11);
MD5STEP(F3, c, d, a, b, in[3] + 0xd4ef3085, 16);
MD5STEP(F3, b, c, d, a, in[6] + 0x04881d05, 23);
MD5STEP(F3, a, b, c, d, in[9] + 0xd9d4d039, 4);
MD5STEP(F3, d, a, b, c, in[12] + 0xe6db99e5, 11);
MD5STEP(F3, c, d, a, b, in[15] + 0x1fa27cf8, 16);
MD5STEP(F3, b, c, d, a, in[2] + 0xc4ac5665, 23);
MD5STEP(F4, a, b, c, d, in[0] + 0xf4292244, 6);
MD5STEP(F4, d, a, b, c, in[7] + 0x432aff97, 10);
MD5STEP(F4, c, d, a, b, in[14] + 0xab9423a7, 15);
MD5STEP(F4, b, c, d, a, in[5] + 0xfc93a039, 21);
MD5STEP(F4, a, b, c, d, in[12] + 0x655b59c3, 6);
MD5STEP(F4, d, a, b, c, in[3] + 0x8f0ccc92, 10);
MD5STEP(F4, c, d, a, b, in[10] + 0xffeff47d, 15);
MD5STEP(F4, b, c, d, a, in[1] + 0x85845dd1, 21);
MD5STEP(F4, a, b, c, d, in[8] + 0x6fa87e4f, 6);
MD5STEP(F4, d, a, b, c, in[15] + 0xfe2ce6e0, 10);
MD5STEP(F4, c, d, a, b, in[6] + 0xa3014314, 15);
MD5STEP(F4, b, c, d, a, in[13] + 0x4e0811a1, 21);
MD5STEP(F4, a, b, c, d, in[4] + 0xf7537e82, 6);
MD5STEP(F4, d, a, b, c, in[11] + 0xbd3af235, 10);
MD5STEP(F4, c, d, a, b, in[2] + 0x2ad7d2bb, 15);
MD5STEP(F4, b, c, d, a, in[9] + 0xeb86d391, 21);
hash[0] += a;
hash[1] += b;
hash[2] += c;
hash[3] += d;
}
/* XXX: this stuff can be optimized */
static inline void le32_to_cpu_array(u32 *buf, unsigned words)
{
while (words--) {
__le32_to_cpus(buf);
buf++;
}
}
static inline void cpu_to_le32_array(u32 *buf, unsigned words)
{
while (words--) {
__cpu_to_le32s(buf);
buf++;
}
}
static inline void md5_transform_helper(struct md5_ctx *ctx)
{
le32_to_cpu_array(ctx->block, sizeof(ctx->block) / sizeof(u32));
md5_transform(ctx->hash, ctx->block);
}
static void md5_init(void *ctx)
{
struct md5_ctx *mctx = ctx;
mctx->hash[0] = 0x67452301;
mctx->hash[1] = 0xefcdab89;
mctx->hash[2] = 0x98badcfe;
mctx->hash[3] = 0x10325476;
mctx->byte_count = 0;
}
static void md5_update(void *ctx, const u8 *data, size_t len)
{
struct md5_ctx *mctx = ctx;
const u32 avail = sizeof(mctx->block) - (mctx->byte_count & 0x3f);
mctx->byte_count += len;
if (avail > len) {
memcpy((char *)mctx->block + (sizeof(mctx->block) - avail),
data, len);
return;
}
memcpy((char *)mctx->block + (sizeof(mctx->block) - avail),
data, avail);
md5_transform_helper(mctx);
data += avail;
len -= avail;
while (len >= sizeof(mctx->block)) {
memcpy(mctx->block, data, sizeof(mctx->block));
md5_transform_helper(mctx);
data += sizeof(mctx->block);
len -= sizeof(mctx->block);
}
memcpy(mctx->block, data, len);
}
static void md5_final(void *ctx, u8 *out)
{
struct md5_ctx *mctx = ctx;
const int offset = mctx->byte_count & 0x3f;
char *p = (char *)mctx->block + offset;
int padding = 56 - (offset + 1);
*p++ = 0x80;
if (padding < 0) {
memset(p, 0x00, padding + sizeof (u64));
md5_transform_helper(mctx);
p = (char *)mctx->block;
padding = 56;
}
memset(p, 0, padding);
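/* Append the 64-bit bit count as two 32-bit words. These are stored
 * in host order and deliberately excluded from the byte-swap above,
 * since md5_transform() consumes host-order words. */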
mctx->block[14] = mctx->byte_count << 3;
mctx->block[15] = mctx->byte_count >> 29;
le32_to_cpu_array(mctx->block, (sizeof(mctx->block) -
sizeof(u64)) / sizeof(u32));
md5_transform(mctx->hash, mctx->block);
cpu_to_le32_array(mctx->hash, sizeof(mctx->hash) / sizeof(u32));
memcpy(out, mctx->hash, sizeof(mctx->hash));
memset(mctx, 0, sizeof(*mctx));	/* wipe the whole context, not just the pointer */
}
static struct crypto_alg alg = {
.cra_name = "md5",
.cra_flags = CRYPTO_ALG_TYPE_DIGEST,
.cra_blocksize = MD5_HMAC_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct md5_ctx),
.cra_module = THIS_MODULE,
.cra_list = LIST_HEAD_INIT(alg.cra_list),
.cra_u = { .digest = {
.dia_digestsize = MD5_DIGEST_SIZE,
.dia_init = md5_init,
.dia_update = md5_update,
.dia_final = md5_final } }
};
static int __init init(void)
{
return crypto_register_alg(&alg);
}
static void __exit fini(void)
{
crypto_unregister_alg(&alg);
}
module_init(init);
module_exit(fini);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("MD5 Message Digest Algorithm");
/*
* Cryptographic API.
*
* SHA1 Secure Hash Algorithm.
*
* Derived from cryptoapi implementation, adapted for in-place
* scatterlist interface. Originally based on the public domain
implementation written by Steve Reid.
*
* Copyright (c) Alan Smithee.
* Copyright (c) Andrew McDonald <andrew@mcdonald.org.uk>
* Copyright (c) Jean-Francois Dive <jef@linuxbe.org>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version.
*
*/
#include <linux/init.h>
#include <linux/module.h>
#include <linux/mm.h>
#include <linux/crypto.h>
#include <asm/scatterlist.h>
#include <asm/byteorder.h>
#define SHA1_DIGEST_SIZE 20
#define SHA1_HMAC_BLOCK_SIZE 64
static inline u32 rol(u32 value, u32 bits)
{
return (((value) << (bits)) | ((value) >> (32 - (bits))));
}
/* blk0() and blk() perform the initial expand. */
/* I got the idea of expanding during the round function from SSLeay */
#define blk0(i) block32[i]
#define blk(i) (block32[i&15] = rol(block32[(i+13)&15]^block32[(i+8)&15] \
^block32[(i+2)&15]^block32[i&15],1))
/* (R0+R1), R2, R3, R4 are the different operations used in SHA1 */
#define R0(v,w,x,y,z,i) z+=((w&(x^y))^y)+blk0(i)+0x5A827999+rol(v,5); \
w=rol(w,30);
#define R1(v,w,x,y,z,i) z+=((w&(x^y))^y)+blk(i)+0x5A827999+rol(v,5); \
w=rol(w,30);
#define R2(v,w,x,y,z,i) z+=(w^x^y)+blk(i)+0x6ED9EBA1+rol(v,5);w=rol(w,30);
#define R3(v,w,x,y,z,i) z+=(((w|x)&y)|(w&x))+blk(i)+0x8F1BBCDC+rol(v,5); \
w=rol(w,30);
#define R4(v,w,x,y,z,i) z+=(w^x^y)+blk(i)+0xCA62C1D6+rol(v,5);w=rol(w,30);
struct sha1_ctx {
u64 count;
u32 state[5];
u8 buffer[64];
};
/* Hash a single 512-bit block. This is the core of the algorithm. */
static void sha1_transform(u32 *state, const u8 *in)
{
u32 a, b, c, d, e;
u32 block32[16];
/* convert/copy data to workspace */
for (a = 0; a < sizeof(block32)/sizeof(u32); a++)
block32[a] = be32_to_cpu (((const u32 *)in)[a]);
/* Copy context->state[] to working vars */
a = state[0];
b = state[1];
c = state[2];
d = state[3];
e = state[4];
/* 4 rounds of 20 operations each. Loop unrolled. */
R0(a,b,c,d,e, 0); R0(e,a,b,c,d, 1); R0(d,e,a,b,c, 2); R0(c,d,e,a,b, 3);
R0(b,c,d,e,a, 4); R0(a,b,c,d,e, 5); R0(e,a,b,c,d, 6); R0(d,e,a,b,c, 7);
R0(c,d,e,a,b, 8); R0(b,c,d,e,a, 9); R0(a,b,c,d,e,10); R0(e,a,b,c,d,11);
R0(d,e,a,b,c,12); R0(c,d,e,a,b,13); R0(b,c,d,e,a,14); R0(a,b,c,d,e,15);
R1(e,a,b,c,d,16); R1(d,e,a,b,c,17); R1(c,d,e,a,b,18); R1(b,c,d,e,a,19);
R2(a,b,c,d,e,20); R2(e,a,b,c,d,21); R2(d,e,a,b,c,22); R2(c,d,e,a,b,23);
R2(b,c,d,e,a,24); R2(a,b,c,d,e,25); R2(e,a,b,c,d,26); R2(d,e,a,b,c,27);
R2(c,d,e,a,b,28); R2(b,c,d,e,a,29); R2(a,b,c,d,e,30); R2(e,a,b,c,d,31);
R2(d,e,a,b,c,32); R2(c,d,e,a,b,33); R2(b,c,d,e,a,34); R2(a,b,c,d,e,35);
R2(e,a,b,c,d,36); R2(d,e,a,b,c,37); R2(c,d,e,a,b,38); R2(b,c,d,e,a,39);
R3(a,b,c,d,e,40); R3(e,a,b,c,d,41); R3(d,e,a,b,c,42); R3(c,d,e,a,b,43);
R3(b,c,d,e,a,44); R3(a,b,c,d,e,45); R3(e,a,b,c,d,46); R3(d,e,a,b,c,47);
R3(c,d,e,a,b,48); R3(b,c,d,e,a,49); R3(a,b,c,d,e,50); R3(e,a,b,c,d,51);
R3(d,e,a,b,c,52); R3(c,d,e,a,b,53); R3(b,c,d,e,a,54); R3(a,b,c,d,e,55);
R3(e,a,b,c,d,56); R3(d,e,a,b,c,57); R3(c,d,e,a,b,58); R3(b,c,d,e,a,59);
R4(a,b,c,d,e,60); R4(e,a,b,c,d,61); R4(d,e,a,b,c,62); R4(c,d,e,a,b,63);
R4(b,c,d,e,a,64); R4(a,b,c,d,e,65); R4(e,a,b,c,d,66); R4(d,e,a,b,c,67);
R4(c,d,e,a,b,68); R4(b,c,d,e,a,69); R4(a,b,c,d,e,70); R4(e,a,b,c,d,71);
R4(d,e,a,b,c,72); R4(c,d,e,a,b,73); R4(b,c,d,e,a,74); R4(a,b,c,d,e,75);
R4(e,a,b,c,d,76); R4(d,e,a,b,c,77); R4(c,d,e,a,b,78); R4(b,c,d,e,a,79);
/* Add the working vars back into context.state[] */
state[0] += a;
state[1] += b;
state[2] += c;
state[3] += d;
state[4] += e;
/* Wipe variables */
a = b = c = d = e = 0;
memset (block32, 0x00, sizeof block32);
}
static void sha1_init(void *ctx)
{
struct sha1_ctx *sctx = ctx;
static const struct sha1_ctx initstate = {
0,
{ 0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476, 0xC3D2E1F0 },
{ 0, }
};
*sctx = initstate;
}
static void sha1_update(void *ctx, const u8 *data, size_t len)
{
struct sha1_ctx *sctx = ctx;
unsigned i, j;
j = (sctx->count >> 3) & 0x3f;
sctx->count += len << 3;
if ((j + len) > 63) {
memcpy(&sctx->buffer[j], data, (i = 64-j));
sha1_transform(sctx->state, sctx->buffer);
for ( ; i + 63 < len; i += 64) {
sha1_transform(sctx->state, &data[i]);
}
j = 0;
}
else i = 0;
memcpy(&sctx->buffer[j], &data[i], len - i);
}
/* Add padding and return the message digest. */
static void sha1_final(void* ctx, u8 *out)
{
struct sha1_ctx *sctx = ctx;
u32 i, j, index, padlen;
u64 t;
u8 bits[8] = { 0, };
static const u8 padding[64] = { 0x80, };
t = sctx->count;
bits[7] = 0xff & t; t>>=8;
bits[6] = 0xff & t; t>>=8;
bits[5] = 0xff & t; t>>=8;
bits[4] = 0xff & t; t>>=8;
bits[3] = 0xff & t; t>>=8;
bits[2] = 0xff & t; t>>=8;
bits[1] = 0xff & t; t>>=8;
bits[0] = 0xff & t;
/* Pad out to 56 mod 64 */
index = (sctx->count >> 3) & 0x3f;
padlen = (index < 56) ? (56 - index) : ((64+56) - index);
sha1_update(sctx, padding, padlen);
/* Append length */
sha1_update(sctx, bits, sizeof bits);
/* Store state in digest */
for (i = j = 0; i < 5; i++, j += 4) {
u32 t2 = sctx->state[i];
out[j+3] = t2 & 0xff; t2>>=8;
out[j+2] = t2 & 0xff; t2>>=8;
out[j+1] = t2 & 0xff; t2>>=8;
out[j ] = t2 & 0xff;
}
/* Wipe context */
memset(sctx, 0, sizeof *sctx);
}
static struct crypto_alg alg = {
.cra_name = "sha1",
.cra_flags = CRYPTO_ALG_TYPE_DIGEST,
.cra_blocksize = SHA1_HMAC_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct sha1_ctx),
.cra_module = THIS_MODULE,
.cra_list = LIST_HEAD_INIT(alg.cra_list),
.cra_u = { .digest = {
.dia_digestsize = SHA1_DIGEST_SIZE,
.dia_init = sha1_init,
.dia_update = sha1_update,
.dia_final = sha1_final } }
};
static int __init init(void)
{
return crypto_register_alg(&alg);
}
static void __exit fini(void)
{
crypto_unregister_alg(&alg);
}
module_init(init);
module_exit(fini);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("SHA1 Secure Hash Algorithm");
@@ -23,7 +23,9 @@ D(7) KM_PTE0,
 D(8) KM_PTE1,
 D(9) KM_IRQ0,
 D(10) KM_IRQ1,
-D(11) KM_TYPE_NR
+D(11) KM_CRYPTO_USER,
+D(12) KM_CRYPTO_SOFTIRQ,
+D(13) KM_TYPE_NR
 };
 #undef D
......
@@ -22,7 +22,9 @@ D(8) KM_PTE1,
 D(9) KM_PTE2,
 D(10) KM_IRQ0,
 D(11) KM_IRQ1,
-D(12) KM_TYPE_NR
+D(12) KM_CRYPTO_USER,
+D(13) KM_CRYPTO_SOFTIRQ,
+D(14) KM_TYPE_NR
 };
 #undef D
......
@@ -21,7 +21,9 @@ D(7) KM_PTE0,
 D(8) KM_PTE1,
 D(9) KM_IRQ0,
 D(10) KM_IRQ1,
-D(11) KM_TYPE_NR
+D(11) KM_CRYPTO_USER,
+D(12) KM_CRYPTO_SOFTIRQ,
+D(13) KM_TYPE_NR
 };
 #undef D
......
@@ -14,6 +14,8 @@ enum km_type {
 KM_PTE1,
 KM_IRQ0,
 KM_IRQ1,
+KM_CRYPTO_USER,
+KM_CRYPTO_SOFTIRQ,
 KM_TYPE_NR
 };
......
@@ -14,6 +14,8 @@ enum km_type {
 KM_PTE1,
 KM_IRQ0,
 KM_IRQ1,
+KM_CRYPTO_USER,
+KM_CRYPTO_SOFTIRQ,
 KM_TYPE_NR
 };
......
@@ -14,6 +14,8 @@ enum km_type {
 KM_PTE1,
 KM_IRQ0,
 KM_IRQ1,
+KM_CRYPTO_USER,
+KM_CRYPTO_SOFTIRQ,
 KM_TYPE_NR
 };
......
@@ -13,6 +13,8 @@ enum km_type {
 KM_PTE1,
 KM_IRQ0,
 KM_IRQ1,
+KM_CRYPTO_USER,
+KM_CRYPTO_SOFTIRQ,
 KM_TYPE_NR
 };
......
@@ -17,6 +17,8 @@ enum km_type {
 KM_PTE1,
 KM_IRQ0,
 KM_IRQ1,
+KM_CRYPTO_USER,
+KM_CRYPTO_SOFTIRQ,
 KM_TYPE_NR
 };
......
@@ -11,6 +11,8 @@ enum km_type {
 KM_BIO_DST_IRQ,
 KM_IRQ0,
 KM_IRQ1,
+KM_CRYPTO_USER,
+KM_CRYPTO_SOFTIRQ,
 KM_TYPE_NR
 };
......