author | Linus Torvalds <torvalds@linux-foundation.org> | 2018-01-31 17:22:45 -0500 |
---|---|---|
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2018-01-31 17:22:45 -0500 |
commit | a103950e0dd2058df5e8a8d4a915707bdcf205f0 | |
tree | af5d091f768db4ed7a12fc3c5484d3e20ad9d514 /crypto | |
parent | 2cfa1cd3da14814a1e9ec6a4fce8612637d3ee3d | |
parent | 2d55807b7f7bf62bb05a8b91247c5eb7cd19ac04 | |
Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto updates from Herbert Xu:
"API:
- Enforce the setting of keys for keyed aead/hash/skcipher
algorithms (see the illustrative sketch after this log message).
- Add multibuf speed tests in tcrypt.
Algorithms:
- Improve performance of sha3-generic.
- Add native sha512 support on arm64.
- Add v8.2 Crypto Extensions version of sha3/sm3 on arm64.
- Avoid hmac nesting by requiring underlying algorithm to be unkeyed.
- Add cryptd_max_cpu_qlen module parameter to cryptd.
Drivers:
- Add support for EIP97 engine in inside-secure.
- Add inline IPsec support to chelsio.
- Add RevB core support to crypto4xx.
- Fix AEAD ICV check in crypto4xx.
- Add stm32 crypto driver.
- Add support for BCM63xx platforms in bcm2835 and remove bcm63xx.
- Add Derived Key Protocol (DKP) support in caam.
- Add Samsung Exynos True RNG driver.
- Add support for Exynos5250+ SoCs in exynos PRNG driver"
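For illustration only — not part of the pulled series — here is a minimal kernel-side sketch of what the key-enforcement change means for a caller of a keyed hash. The function name, the choice of "hmac(sha256)", and the error handling are assumptions made for this example; the point is that a transform whose algorithm is keyed now starts life with CRYPTO_TFM_NEED_KEY set, a digest issued before setkey fails with -ENOKEY, and a successful crypto_ahash_setkey() clears the flag (see the crypto/ahash.c hunks below).

```c
/*
 * Illustrative sketch only: demo_keyed_hash() and the "hmac(sha256)"
 * choice are assumptions, not code from this series.
 */
#include <crypto/hash.h>
#include <linux/bug.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

static int demo_keyed_hash(const u8 *key, unsigned int keylen,
			   struct scatterlist *sg, unsigned int len, u8 *out)
{
	struct crypto_ahash *tfm;
	struct ahash_request *req;
	int err;

	/* Ask for a synchronous implementation to keep the sketch simple. */
	tfm = crypto_alloc_ahash("hmac(sha256)", 0, CRYPTO_ALG_ASYNC);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	req = ahash_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		err = -ENOMEM;
		goto out_free_tfm;
	}
	ahash_request_set_crypt(req, sg, out, len);

	/* CRYPTO_TFM_NEED_KEY is still set: the digest is refused. */
	err = crypto_ahash_digest(req);
	WARN_ON(err != -ENOKEY);

	/* A successful setkey clears CRYPTO_TFM_NEED_KEY ... */
	err = crypto_ahash_setkey(tfm, key, keylen);
	if (err)
		goto out_free_req;

	/* ... and the same request now runs normally. */
	err = crypto_ahash_digest(req);

out_free_req:
	ahash_request_free(req);
out_free_tfm:
	crypto_free_ahash(tfm);
	return err;
}
```

The aead and skcipher setkey paths gain the same flag handling, which is what lets algif_aead, algif_hash and algif_skcipher drop their private has_key bookkeeping further down in this diff.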
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (166 commits)
crypto: picoxcell - Fix error handling in spacc_probe()
crypto: arm64/sha512 - fix/improve new v8.2 Crypto Extensions code
crypto: arm64/sm3 - new v8.2 Crypto Extensions implementation
crypto: arm64/sha3 - new v8.2 Crypto Extensions implementation
crypto: testmgr - add new testcases for sha3
crypto: sha3-generic - export init/update/final routines
crypto: sha3-generic - simplify code
crypto: sha3-generic - rewrite KECCAK transform to help the compiler optimize
crypto: sha3-generic - fixes for alignment and big endian operation
crypto: aesni - handle zero length dst buffer
crypto: artpec6 - remove select on non-existing CRYPTO_SHA384
hwrng: bcm2835 - Remove redundant dev_err call in bcm2835_rng_probe()
crypto: stm32 - remove redundant dev_err call in stm32_cryp_probe()
crypto: axis - remove unnecessary platform_get_resource() error check
crypto: testmgr - test misuse of result in ahash
crypto: inside-secure - make function safexcel_try_push_requests static
crypto: aes-generic - fix aes-generic regression on powerpc
crypto: chelsio - Fix indentation warning
crypto: arm64/sha1-ce - get rid of literal pool
crypto: arm64/sha2-ce - move the round constant table to .rodata section
...
Diffstat (limited to 'crypto')
44 files changed, 2032 insertions, 653 deletions
diff --git a/crypto/Kconfig b/crypto/Kconfig index 20360e040425..b75264b09a46 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig | |||
@@ -131,7 +131,7 @@ config CRYPTO_DH | |||
131 | 131 | ||
132 | config CRYPTO_ECDH | 132 | config CRYPTO_ECDH |
133 | tristate "ECDH algorithm" | 133 | tristate "ECDH algorithm" |
134 | select CRYTPO_KPP | 134 | select CRYPTO_KPP |
135 | select CRYPTO_RNG_DEFAULT | 135 | select CRYPTO_RNG_DEFAULT |
136 | help | 136 | help |
137 | Generic implementation of the ECDH algorithm | 137 | Generic implementation of the ECDH algorithm |
@@ -1340,6 +1340,7 @@ config CRYPTO_SALSA20_586 | |||
1340 | tristate "Salsa20 stream cipher algorithm (i586)" | 1340 | tristate "Salsa20 stream cipher algorithm (i586)" |
1341 | depends on (X86 || UML_X86) && !64BIT | 1341 | depends on (X86 || UML_X86) && !64BIT |
1342 | select CRYPTO_BLKCIPHER | 1342 | select CRYPTO_BLKCIPHER |
1343 | select CRYPTO_SALSA20 | ||
1343 | help | 1344 | help |
1344 | Salsa20 stream cipher algorithm. | 1345 | Salsa20 stream cipher algorithm. |
1345 | 1346 | ||
@@ -1353,6 +1354,7 @@ config CRYPTO_SALSA20_X86_64 | |||
1353 | tristate "Salsa20 stream cipher algorithm (x86_64)" | 1354 | tristate "Salsa20 stream cipher algorithm (x86_64)" |
1354 | depends on (X86 || UML_X86) && 64BIT | 1355 | depends on (X86 || UML_X86) && 64BIT |
1355 | select CRYPTO_BLKCIPHER | 1356 | select CRYPTO_BLKCIPHER |
1357 | select CRYPTO_SALSA20 | ||
1356 | help | 1358 | help |
1357 | Salsa20 stream cipher algorithm. | 1359 | Salsa20 stream cipher algorithm. |
1358 | 1360 | ||
diff --git a/crypto/Makefile b/crypto/Makefile index d674884b2d51..cdbc03b35510 100644 --- a/crypto/Makefile +++ b/crypto/Makefile | |||
@@ -99,6 +99,7 @@ obj-$(CONFIG_CRYPTO_TWOFISH_COMMON) += twofish_common.o | |||
99 | obj-$(CONFIG_CRYPTO_SERPENT) += serpent_generic.o | 99 | obj-$(CONFIG_CRYPTO_SERPENT) += serpent_generic.o |
100 | CFLAGS_serpent_generic.o := $(call cc-option,-fsched-pressure) # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=79149 | 100 | CFLAGS_serpent_generic.o := $(call cc-option,-fsched-pressure) # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=79149 |
101 | obj-$(CONFIG_CRYPTO_AES) += aes_generic.o | 101 | obj-$(CONFIG_CRYPTO_AES) += aes_generic.o |
102 | CFLAGS_aes_generic.o := $(call cc-option,-fno-code-hoisting) # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=83356 | ||
102 | obj-$(CONFIG_CRYPTO_AES_TI) += aes_ti.o | 103 | obj-$(CONFIG_CRYPTO_AES_TI) += aes_ti.o |
103 | obj-$(CONFIG_CRYPTO_CAMELLIA) += camellia_generic.o | 104 | obj-$(CONFIG_CRYPTO_CAMELLIA) += camellia_generic.o |
104 | obj-$(CONFIG_CRYPTO_CAST_COMMON) += cast_common.o | 105 | obj-$(CONFIG_CRYPTO_CAST_COMMON) += cast_common.o |
diff --git a/crypto/ablk_helper.c b/crypto/ablk_helper.c index 1441f07d0a19..09776bb1360e 100644 --- a/crypto/ablk_helper.c +++ b/crypto/ablk_helper.c | |||
@@ -18,9 +18,7 @@ | |||
18 | * GNU General Public License for more details. | 18 | * GNU General Public License for more details. |
19 | * | 19 | * |
20 | * You should have received a copy of the GNU General Public License | 20 | * You should have received a copy of the GNU General Public License |
21 | * along with this program; if not, write to the Free Software | 21 | * along with this program. If not, see <http://www.gnu.org/licenses/>. |
22 | * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 | ||
23 | * USA | ||
24 | * | 22 | * |
25 | */ | 23 | */ |
26 | 24 | ||
@@ -28,7 +26,6 @@ | |||
28 | #include <linux/crypto.h> | 26 | #include <linux/crypto.h> |
29 | #include <linux/init.h> | 27 | #include <linux/init.h> |
30 | #include <linux/module.h> | 28 | #include <linux/module.h> |
31 | #include <linux/hardirq.h> | ||
32 | #include <crypto/algapi.h> | 29 | #include <crypto/algapi.h> |
33 | #include <crypto/cryptd.h> | 30 | #include <crypto/cryptd.h> |
34 | #include <crypto/ablk_helper.h> | 31 | #include <crypto/ablk_helper.h> |
diff --git a/crypto/aead.c b/crypto/aead.c index f794b30a9407..60b3bbe973e7 100644 --- a/crypto/aead.c +++ b/crypto/aead.c | |||
@@ -54,11 +54,18 @@ int crypto_aead_setkey(struct crypto_aead *tfm, | |||
54 | const u8 *key, unsigned int keylen) | 54 | const u8 *key, unsigned int keylen) |
55 | { | 55 | { |
56 | unsigned long alignmask = crypto_aead_alignmask(tfm); | 56 | unsigned long alignmask = crypto_aead_alignmask(tfm); |
57 | int err; | ||
57 | 58 | ||
58 | if ((unsigned long)key & alignmask) | 59 | if ((unsigned long)key & alignmask) |
59 | return setkey_unaligned(tfm, key, keylen); | 60 | err = setkey_unaligned(tfm, key, keylen); |
61 | else | ||
62 | err = crypto_aead_alg(tfm)->setkey(tfm, key, keylen); | ||
63 | |||
64 | if (err) | ||
65 | return err; | ||
60 | 66 | ||
61 | return crypto_aead_alg(tfm)->setkey(tfm, key, keylen); | 67 | crypto_aead_clear_flags(tfm, CRYPTO_TFM_NEED_KEY); |
68 | return 0; | ||
62 | } | 69 | } |
63 | EXPORT_SYMBOL_GPL(crypto_aead_setkey); | 70 | EXPORT_SYMBOL_GPL(crypto_aead_setkey); |
64 | 71 | ||
@@ -93,6 +100,8 @@ static int crypto_aead_init_tfm(struct crypto_tfm *tfm) | |||
93 | struct crypto_aead *aead = __crypto_aead_cast(tfm); | 100 | struct crypto_aead *aead = __crypto_aead_cast(tfm); |
94 | struct aead_alg *alg = crypto_aead_alg(aead); | 101 | struct aead_alg *alg = crypto_aead_alg(aead); |
95 | 102 | ||
103 | crypto_aead_set_flags(aead, CRYPTO_TFM_NEED_KEY); | ||
104 | |||
96 | aead->authsize = alg->maxauthsize; | 105 | aead->authsize = alg->maxauthsize; |
97 | 106 | ||
98 | if (alg->exit) | 107 | if (alg->exit) |
@@ -295,7 +304,7 @@ int aead_init_geniv(struct crypto_aead *aead) | |||
295 | if (err) | 304 | if (err) |
296 | goto out; | 305 | goto out; |
297 | 306 | ||
298 | ctx->sknull = crypto_get_default_null_skcipher2(); | 307 | ctx->sknull = crypto_get_default_null_skcipher(); |
299 | err = PTR_ERR(ctx->sknull); | 308 | err = PTR_ERR(ctx->sknull); |
300 | if (IS_ERR(ctx->sknull)) | 309 | if (IS_ERR(ctx->sknull)) |
301 | goto out; | 310 | goto out; |
@@ -315,7 +324,7 @@ out: | |||
315 | return err; | 324 | return err; |
316 | 325 | ||
317 | drop_null: | 326 | drop_null: |
318 | crypto_put_default_null_skcipher2(); | 327 | crypto_put_default_null_skcipher(); |
319 | goto out; | 328 | goto out; |
320 | } | 329 | } |
321 | EXPORT_SYMBOL_GPL(aead_init_geniv); | 330 | EXPORT_SYMBOL_GPL(aead_init_geniv); |
@@ -325,7 +334,7 @@ void aead_exit_geniv(struct crypto_aead *tfm) | |||
325 | struct aead_geniv_ctx *ctx = crypto_aead_ctx(tfm); | 334 | struct aead_geniv_ctx *ctx = crypto_aead_ctx(tfm); |
326 | 335 | ||
327 | crypto_free_aead(ctx->child); | 336 | crypto_free_aead(ctx->child); |
328 | crypto_put_default_null_skcipher2(); | 337 | crypto_put_default_null_skcipher(); |
329 | } | 338 | } |
330 | EXPORT_SYMBOL_GPL(aead_exit_geniv); | 339 | EXPORT_SYMBOL_GPL(aead_exit_geniv); |
331 | 340 | ||
diff --git a/crypto/af_alg.c b/crypto/af_alg.c index f41047ab60f5..0f8d8d5523c3 100644 --- a/crypto/af_alg.c +++ b/crypto/af_alg.c | |||
@@ -150,7 +150,7 @@ EXPORT_SYMBOL_GPL(af_alg_release_parent); | |||
150 | 150 | ||
151 | static int alg_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len) | 151 | static int alg_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len) |
152 | { | 152 | { |
153 | const u32 forbidden = CRYPTO_ALG_INTERNAL; | 153 | const u32 allowed = CRYPTO_ALG_KERN_DRIVER_ONLY; |
154 | struct sock *sk = sock->sk; | 154 | struct sock *sk = sock->sk; |
155 | struct alg_sock *ask = alg_sk(sk); | 155 | struct alg_sock *ask = alg_sk(sk); |
156 | struct sockaddr_alg *sa = (void *)uaddr; | 156 | struct sockaddr_alg *sa = (void *)uaddr; |
@@ -158,6 +158,10 @@ static int alg_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len) | |||
158 | void *private; | 158 | void *private; |
159 | int err; | 159 | int err; |
160 | 160 | ||
161 | /* If caller uses non-allowed flag, return error. */ | ||
162 | if ((sa->salg_feat & ~allowed) || (sa->salg_mask & ~allowed)) | ||
163 | return -EINVAL; | ||
164 | |||
161 | if (sock->state == SS_CONNECTED) | 165 | if (sock->state == SS_CONNECTED) |
162 | return -EINVAL; | 166 | return -EINVAL; |
163 | 167 | ||
@@ -176,9 +180,7 @@ static int alg_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len) | |||
176 | if (IS_ERR(type)) | 180 | if (IS_ERR(type)) |
177 | return PTR_ERR(type); | 181 | return PTR_ERR(type); |
178 | 182 | ||
179 | private = type->bind(sa->salg_name, | 183 | private = type->bind(sa->salg_name, sa->salg_feat, sa->salg_mask); |
180 | sa->salg_feat & ~forbidden, | ||
181 | sa->salg_mask & ~forbidden); | ||
182 | if (IS_ERR(private)) { | 184 | if (IS_ERR(private)) { |
183 | module_put(type->owner); | 185 | module_put(type->owner); |
184 | return PTR_ERR(private); | 186 | return PTR_ERR(private); |
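A hedged userspace-side view of the alg_bind() change above (this snippet is not part of the diff; the helper name is made up): salg_feat and salg_mask bits other than CRYPTO_ALG_KERN_DRIVER_ONLY are now rejected with EINVAL instead of being silently masked off, while the common case of passing zeroes keeps working:

```c
/* Illustrative only: bind_hash_socket() is an assumed helper name. */
#include <linux/if_alg.h>
#include <sys/socket.h>
#include <unistd.h>

static int bind_hash_socket(void)
{
	struct sockaddr_alg sa = {
		.salg_family = AF_ALG,
		.salg_type   = "hash",
		.salg_name   = "sha256",
		.salg_feat   = 0,	/* any disallowed bit now fails with EINVAL */
		.salg_mask   = 0,
	};
	int fd = socket(AF_ALG, SOCK_SEQPACKET, 0);

	if (fd < 0)
		return -1;
	if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}
```

Callers that previously relied on unsupported bits being quietly ignored will now see bind() fail, which appears to be the intended tightening.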
diff --git a/crypto/ahash.c b/crypto/ahash.c index 3a35d67de7d9..266fc1d64f61 100644 --- a/crypto/ahash.c +++ b/crypto/ahash.c | |||
@@ -193,11 +193,18 @@ int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key, | |||
193 | unsigned int keylen) | 193 | unsigned int keylen) |
194 | { | 194 | { |
195 | unsigned long alignmask = crypto_ahash_alignmask(tfm); | 195 | unsigned long alignmask = crypto_ahash_alignmask(tfm); |
196 | int err; | ||
196 | 197 | ||
197 | if ((unsigned long)key & alignmask) | 198 | if ((unsigned long)key & alignmask) |
198 | return ahash_setkey_unaligned(tfm, key, keylen); | 199 | err = ahash_setkey_unaligned(tfm, key, keylen); |
200 | else | ||
201 | err = tfm->setkey(tfm, key, keylen); | ||
202 | |||
203 | if (err) | ||
204 | return err; | ||
199 | 205 | ||
200 | return tfm->setkey(tfm, key, keylen); | 206 | crypto_ahash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY); |
207 | return 0; | ||
201 | } | 208 | } |
202 | EXPORT_SYMBOL_GPL(crypto_ahash_setkey); | 209 | EXPORT_SYMBOL_GPL(crypto_ahash_setkey); |
203 | 210 | ||
@@ -368,7 +375,12 @@ EXPORT_SYMBOL_GPL(crypto_ahash_finup); | |||
368 | 375 | ||
369 | int crypto_ahash_digest(struct ahash_request *req) | 376 | int crypto_ahash_digest(struct ahash_request *req) |
370 | { | 377 | { |
371 | return crypto_ahash_op(req, crypto_ahash_reqtfm(req)->digest); | 378 | struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); |
379 | |||
380 | if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY) | ||
381 | return -ENOKEY; | ||
382 | |||
383 | return crypto_ahash_op(req, tfm->digest); | ||
372 | } | 384 | } |
373 | EXPORT_SYMBOL_GPL(crypto_ahash_digest); | 385 | EXPORT_SYMBOL_GPL(crypto_ahash_digest); |
374 | 386 | ||
@@ -450,7 +462,6 @@ static int crypto_ahash_init_tfm(struct crypto_tfm *tfm) | |||
450 | struct ahash_alg *alg = crypto_ahash_alg(hash); | 462 | struct ahash_alg *alg = crypto_ahash_alg(hash); |
451 | 463 | ||
452 | hash->setkey = ahash_nosetkey; | 464 | hash->setkey = ahash_nosetkey; |
453 | hash->has_setkey = false; | ||
454 | hash->export = ahash_no_export; | 465 | hash->export = ahash_no_export; |
455 | hash->import = ahash_no_import; | 466 | hash->import = ahash_no_import; |
456 | 467 | ||
@@ -465,7 +476,8 @@ static int crypto_ahash_init_tfm(struct crypto_tfm *tfm) | |||
465 | 476 | ||
466 | if (alg->setkey) { | 477 | if (alg->setkey) { |
467 | hash->setkey = alg->setkey; | 478 | hash->setkey = alg->setkey; |
468 | hash->has_setkey = true; | 479 | if (!(alg->halg.base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY)) |
480 | crypto_ahash_set_flags(hash, CRYPTO_TFM_NEED_KEY); | ||
469 | } | 481 | } |
470 | if (alg->export) | 482 | if (alg->export) |
471 | hash->export = alg->export; | 483 | hash->export = alg->export; |
@@ -649,5 +661,16 @@ struct hash_alg_common *ahash_attr_alg(struct rtattr *rta, u32 type, u32 mask) | |||
649 | } | 661 | } |
650 | EXPORT_SYMBOL_GPL(ahash_attr_alg); | 662 | EXPORT_SYMBOL_GPL(ahash_attr_alg); |
651 | 663 | ||
664 | bool crypto_hash_alg_has_setkey(struct hash_alg_common *halg) | ||
665 | { | ||
666 | struct crypto_alg *alg = &halg->base; | ||
667 | |||
668 | if (alg->cra_type != &crypto_ahash_type) | ||
669 | return crypto_shash_alg_has_setkey(__crypto_shash_alg(alg)); | ||
670 | |||
671 | return __crypto_ahash_alg(alg)->setkey != NULL; | ||
672 | } | ||
673 | EXPORT_SYMBOL_GPL(crypto_hash_alg_has_setkey); | ||
674 | |||
652 | MODULE_LICENSE("GPL"); | 675 | MODULE_LICENSE("GPL"); |
653 | MODULE_DESCRIPTION("Asynchronous cryptographic hash type"); | 676 | MODULE_DESCRIPTION("Asynchronous cryptographic hash type"); |
diff --git a/crypto/algapi.c b/crypto/algapi.c index 9a636f961572..395b082d03a9 100644 --- a/crypto/algapi.c +++ b/crypto/algapi.c | |||
@@ -62,7 +62,7 @@ static int crypto_check_alg(struct crypto_alg *alg) | |||
62 | if (alg->cra_priority < 0) | 62 | if (alg->cra_priority < 0) |
63 | return -EINVAL; | 63 | return -EINVAL; |
64 | 64 | ||
65 | atomic_set(&alg->cra_refcnt, 1); | 65 | refcount_set(&alg->cra_refcnt, 1); |
66 | 66 | ||
67 | return crypto_set_driver_name(alg); | 67 | return crypto_set_driver_name(alg); |
68 | } | 68 | } |
@@ -123,7 +123,6 @@ static void crypto_remove_instance(struct crypto_instance *inst, | |||
123 | if (!tmpl || !crypto_tmpl_get(tmpl)) | 123 | if (!tmpl || !crypto_tmpl_get(tmpl)) |
124 | return; | 124 | return; |
125 | 125 | ||
126 | crypto_notify(CRYPTO_MSG_ALG_UNREGISTER, &inst->alg); | ||
127 | list_move(&inst->alg.cra_list, list); | 126 | list_move(&inst->alg.cra_list, list); |
128 | hlist_del(&inst->list); | 127 | hlist_del(&inst->list); |
129 | inst->alg.cra_destroy = crypto_destroy_instance; | 128 | inst->alg.cra_destroy = crypto_destroy_instance; |
@@ -236,7 +235,7 @@ static struct crypto_larval *__crypto_register_alg(struct crypto_alg *alg) | |||
236 | if (!larval->adult) | 235 | if (!larval->adult) |
237 | goto free_larval; | 236 | goto free_larval; |
238 | 237 | ||
239 | atomic_set(&larval->alg.cra_refcnt, 1); | 238 | refcount_set(&larval->alg.cra_refcnt, 1); |
240 | memcpy(larval->alg.cra_driver_name, alg->cra_driver_name, | 239 | memcpy(larval->alg.cra_driver_name, alg->cra_driver_name, |
241 | CRYPTO_MAX_ALG_NAME); | 240 | CRYPTO_MAX_ALG_NAME); |
242 | larval->alg.cra_priority = alg->cra_priority; | 241 | larval->alg.cra_priority = alg->cra_priority; |
@@ -392,7 +391,6 @@ static int crypto_remove_alg(struct crypto_alg *alg, struct list_head *list) | |||
392 | 391 | ||
393 | alg->cra_flags |= CRYPTO_ALG_DEAD; | 392 | alg->cra_flags |= CRYPTO_ALG_DEAD; |
394 | 393 | ||
395 | crypto_notify(CRYPTO_MSG_ALG_UNREGISTER, alg); | ||
396 | list_del_init(&alg->cra_list); | 394 | list_del_init(&alg->cra_list); |
397 | crypto_remove_spawns(alg, list, NULL); | 395 | crypto_remove_spawns(alg, list, NULL); |
398 | 396 | ||
@@ -411,7 +409,7 @@ int crypto_unregister_alg(struct crypto_alg *alg) | |||
411 | if (ret) | 409 | if (ret) |
412 | return ret; | 410 | return ret; |
413 | 411 | ||
414 | BUG_ON(atomic_read(&alg->cra_refcnt) != 1); | 412 | BUG_ON(refcount_read(&alg->cra_refcnt) != 1); |
415 | if (alg->cra_destroy) | 413 | if (alg->cra_destroy) |
416 | alg->cra_destroy(alg); | 414 | alg->cra_destroy(alg); |
417 | 415 | ||
@@ -470,7 +468,6 @@ int crypto_register_template(struct crypto_template *tmpl) | |||
470 | } | 468 | } |
471 | 469 | ||
472 | list_add(&tmpl->list, &crypto_template_list); | 470 | list_add(&tmpl->list, &crypto_template_list); |
473 | crypto_notify(CRYPTO_MSG_TMPL_REGISTER, tmpl); | ||
474 | err = 0; | 471 | err = 0; |
475 | out: | 472 | out: |
476 | up_write(&crypto_alg_sem); | 473 | up_write(&crypto_alg_sem); |
@@ -497,12 +494,10 @@ void crypto_unregister_template(struct crypto_template *tmpl) | |||
497 | BUG_ON(err); | 494 | BUG_ON(err); |
498 | } | 495 | } |
499 | 496 | ||
500 | crypto_notify(CRYPTO_MSG_TMPL_UNREGISTER, tmpl); | ||
501 | |||
502 | up_write(&crypto_alg_sem); | 497 | up_write(&crypto_alg_sem); |
503 | 498 | ||
504 | hlist_for_each_entry_safe(inst, n, list, list) { | 499 | hlist_for_each_entry_safe(inst, n, list, list) { |
505 | BUG_ON(atomic_read(&inst->alg.cra_refcnt) != 1); | 500 | BUG_ON(refcount_read(&inst->alg.cra_refcnt) != 1); |
506 | crypto_free_instance(inst); | 501 | crypto_free_instance(inst); |
507 | } | 502 | } |
508 | crypto_remove_final(&users); | 503 | crypto_remove_final(&users); |
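The atomic_t to refcount_t switch for cra_refcnt above follows the kernel's usual conversion pattern; the sketch below is illustrative only (demo_obj and its helpers are made up) and assumes the usual motivation that refcount_t saturates and warns on overflow or underflow rather than silently wrapping:

```c
/* Illustrative pattern only; demo_obj is not a real kernel structure. */
#include <linux/refcount.h>
#include <linux/slab.h>

struct demo_obj {
	refcount_t refcnt;
	/* ... payload ... */
};

static struct demo_obj *demo_obj_alloc(void)
{
	struct demo_obj *obj = kzalloc(sizeof(*obj), GFP_KERNEL);

	if (obj)
		refcount_set(&obj->refcnt, 1);	/* mirrors refcount_set(..., 1) above */
	return obj;
}

static struct demo_obj *demo_obj_get(struct demo_obj *obj)
{
	refcount_inc(&obj->refcnt);	/* saturates and warns instead of wrapping */
	return obj;
}

static void demo_obj_put(struct demo_obj *obj)
{
	if (refcount_dec_and_test(&obj->refcnt))
		kfree(obj);
}
```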
diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c index e9885a35ef6e..4b07edd5a9ff 100644 --- a/crypto/algif_aead.c +++ b/crypto/algif_aead.c | |||
@@ -42,7 +42,6 @@ | |||
42 | 42 | ||
43 | struct aead_tfm { | 43 | struct aead_tfm { |
44 | struct crypto_aead *aead; | 44 | struct crypto_aead *aead; |
45 | bool has_key; | ||
46 | struct crypto_skcipher *null_tfm; | 45 | struct crypto_skcipher *null_tfm; |
47 | }; | 46 | }; |
48 | 47 | ||
@@ -398,7 +397,7 @@ static int aead_check_key(struct socket *sock) | |||
398 | 397 | ||
399 | err = -ENOKEY; | 398 | err = -ENOKEY; |
400 | lock_sock_nested(psk, SINGLE_DEPTH_NESTING); | 399 | lock_sock_nested(psk, SINGLE_DEPTH_NESTING); |
401 | if (!tfm->has_key) | 400 | if (crypto_aead_get_flags(tfm->aead) & CRYPTO_TFM_NEED_KEY) |
402 | goto unlock; | 401 | goto unlock; |
403 | 402 | ||
404 | if (!pask->refcnt++) | 403 | if (!pask->refcnt++) |
@@ -491,7 +490,7 @@ static void *aead_bind(const char *name, u32 type, u32 mask) | |||
491 | return ERR_CAST(aead); | 490 | return ERR_CAST(aead); |
492 | } | 491 | } |
493 | 492 | ||
494 | null_tfm = crypto_get_default_null_skcipher2(); | 493 | null_tfm = crypto_get_default_null_skcipher(); |
495 | if (IS_ERR(null_tfm)) { | 494 | if (IS_ERR(null_tfm)) { |
496 | crypto_free_aead(aead); | 495 | crypto_free_aead(aead); |
497 | kfree(tfm); | 496 | kfree(tfm); |
@@ -509,7 +508,7 @@ static void aead_release(void *private) | |||
509 | struct aead_tfm *tfm = private; | 508 | struct aead_tfm *tfm = private; |
510 | 509 | ||
511 | crypto_free_aead(tfm->aead); | 510 | crypto_free_aead(tfm->aead); |
512 | crypto_put_default_null_skcipher2(); | 511 | crypto_put_default_null_skcipher(); |
513 | kfree(tfm); | 512 | kfree(tfm); |
514 | } | 513 | } |
515 | 514 | ||
@@ -523,12 +522,8 @@ static int aead_setauthsize(void *private, unsigned int authsize) | |||
523 | static int aead_setkey(void *private, const u8 *key, unsigned int keylen) | 522 | static int aead_setkey(void *private, const u8 *key, unsigned int keylen) |
524 | { | 523 | { |
525 | struct aead_tfm *tfm = private; | 524 | struct aead_tfm *tfm = private; |
526 | int err; | ||
527 | |||
528 | err = crypto_aead_setkey(tfm->aead, key, keylen); | ||
529 | tfm->has_key = !err; | ||
530 | 525 | ||
531 | return err; | 526 | return crypto_aead_setkey(tfm->aead, key, keylen); |
532 | } | 527 | } |
533 | 528 | ||
534 | static void aead_sock_destruct(struct sock *sk) | 529 | static void aead_sock_destruct(struct sock *sk) |
@@ -589,7 +584,7 @@ static int aead_accept_parent(void *private, struct sock *sk) | |||
589 | { | 584 | { |
590 | struct aead_tfm *tfm = private; | 585 | struct aead_tfm *tfm = private; |
591 | 586 | ||
592 | if (!tfm->has_key) | 587 | if (crypto_aead_get_flags(tfm->aead) & CRYPTO_TFM_NEED_KEY) |
593 | return -ENOKEY; | 588 | return -ENOKEY; |
594 | 589 | ||
595 | return aead_accept_parent_nokey(private, sk); | 590 | return aead_accept_parent_nokey(private, sk); |
diff --git a/crypto/algif_hash.c b/crypto/algif_hash.c index 76d2e716c792..6c9b1927a520 100644 --- a/crypto/algif_hash.c +++ b/crypto/algif_hash.c | |||
@@ -34,11 +34,6 @@ struct hash_ctx { | |||
34 | struct ahash_request req; | 34 | struct ahash_request req; |
35 | }; | 35 | }; |
36 | 36 | ||
37 | struct algif_hash_tfm { | ||
38 | struct crypto_ahash *hash; | ||
39 | bool has_key; | ||
40 | }; | ||
41 | |||
42 | static int hash_alloc_result(struct sock *sk, struct hash_ctx *ctx) | 37 | static int hash_alloc_result(struct sock *sk, struct hash_ctx *ctx) |
43 | { | 38 | { |
44 | unsigned ds; | 39 | unsigned ds; |
@@ -307,7 +302,7 @@ static int hash_check_key(struct socket *sock) | |||
307 | int err = 0; | 302 | int err = 0; |
308 | struct sock *psk; | 303 | struct sock *psk; |
309 | struct alg_sock *pask; | 304 | struct alg_sock *pask; |
310 | struct algif_hash_tfm *tfm; | 305 | struct crypto_ahash *tfm; |
311 | struct sock *sk = sock->sk; | 306 | struct sock *sk = sock->sk; |
312 | struct alg_sock *ask = alg_sk(sk); | 307 | struct alg_sock *ask = alg_sk(sk); |
313 | 308 | ||
@@ -321,7 +316,7 @@ static int hash_check_key(struct socket *sock) | |||
321 | 316 | ||
322 | err = -ENOKEY; | 317 | err = -ENOKEY; |
323 | lock_sock_nested(psk, SINGLE_DEPTH_NESTING); | 318 | lock_sock_nested(psk, SINGLE_DEPTH_NESTING); |
324 | if (!tfm->has_key) | 319 | if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY) |
325 | goto unlock; | 320 | goto unlock; |
326 | 321 | ||
327 | if (!pask->refcnt++) | 322 | if (!pask->refcnt++) |
@@ -412,41 +407,17 @@ static struct proto_ops algif_hash_ops_nokey = { | |||
412 | 407 | ||
413 | static void *hash_bind(const char *name, u32 type, u32 mask) | 408 | static void *hash_bind(const char *name, u32 type, u32 mask) |
414 | { | 409 | { |
415 | struct algif_hash_tfm *tfm; | 410 | return crypto_alloc_ahash(name, type, mask); |
416 | struct crypto_ahash *hash; | ||
417 | |||
418 | tfm = kzalloc(sizeof(*tfm), GFP_KERNEL); | ||
419 | if (!tfm) | ||
420 | return ERR_PTR(-ENOMEM); | ||
421 | |||
422 | hash = crypto_alloc_ahash(name, type, mask); | ||
423 | if (IS_ERR(hash)) { | ||
424 | kfree(tfm); | ||
425 | return ERR_CAST(hash); | ||
426 | } | ||
427 | |||
428 | tfm->hash = hash; | ||
429 | |||
430 | return tfm; | ||
431 | } | 411 | } |
432 | 412 | ||
433 | static void hash_release(void *private) | 413 | static void hash_release(void *private) |
434 | { | 414 | { |
435 | struct algif_hash_tfm *tfm = private; | 415 | crypto_free_ahash(private); |
436 | |||
437 | crypto_free_ahash(tfm->hash); | ||
438 | kfree(tfm); | ||
439 | } | 416 | } |
440 | 417 | ||
441 | static int hash_setkey(void *private, const u8 *key, unsigned int keylen) | 418 | static int hash_setkey(void *private, const u8 *key, unsigned int keylen) |
442 | { | 419 | { |
443 | struct algif_hash_tfm *tfm = private; | 420 | return crypto_ahash_setkey(private, key, keylen); |
444 | int err; | ||
445 | |||
446 | err = crypto_ahash_setkey(tfm->hash, key, keylen); | ||
447 | tfm->has_key = !err; | ||
448 | |||
449 | return err; | ||
450 | } | 421 | } |
451 | 422 | ||
452 | static void hash_sock_destruct(struct sock *sk) | 423 | static void hash_sock_destruct(struct sock *sk) |
@@ -461,11 +432,10 @@ static void hash_sock_destruct(struct sock *sk) | |||
461 | 432 | ||
462 | static int hash_accept_parent_nokey(void *private, struct sock *sk) | 433 | static int hash_accept_parent_nokey(void *private, struct sock *sk) |
463 | { | 434 | { |
464 | struct hash_ctx *ctx; | 435 | struct crypto_ahash *tfm = private; |
465 | struct alg_sock *ask = alg_sk(sk); | 436 | struct alg_sock *ask = alg_sk(sk); |
466 | struct algif_hash_tfm *tfm = private; | 437 | struct hash_ctx *ctx; |
467 | struct crypto_ahash *hash = tfm->hash; | 438 | unsigned int len = sizeof(*ctx) + crypto_ahash_reqsize(tfm); |
468 | unsigned len = sizeof(*ctx) + crypto_ahash_reqsize(hash); | ||
469 | 439 | ||
470 | ctx = sock_kmalloc(sk, len, GFP_KERNEL); | 440 | ctx = sock_kmalloc(sk, len, GFP_KERNEL); |
471 | if (!ctx) | 441 | if (!ctx) |
@@ -478,7 +448,7 @@ static int hash_accept_parent_nokey(void *private, struct sock *sk) | |||
478 | 448 | ||
479 | ask->private = ctx; | 449 | ask->private = ctx; |
480 | 450 | ||
481 | ahash_request_set_tfm(&ctx->req, hash); | 451 | ahash_request_set_tfm(&ctx->req, tfm); |
482 | ahash_request_set_callback(&ctx->req, CRYPTO_TFM_REQ_MAY_BACKLOG, | 452 | ahash_request_set_callback(&ctx->req, CRYPTO_TFM_REQ_MAY_BACKLOG, |
483 | crypto_req_done, &ctx->wait); | 453 | crypto_req_done, &ctx->wait); |
484 | 454 | ||
@@ -489,9 +459,9 @@ static int hash_accept_parent_nokey(void *private, struct sock *sk) | |||
489 | 459 | ||
490 | static int hash_accept_parent(void *private, struct sock *sk) | 460 | static int hash_accept_parent(void *private, struct sock *sk) |
491 | { | 461 | { |
492 | struct algif_hash_tfm *tfm = private; | 462 | struct crypto_ahash *tfm = private; |
493 | 463 | ||
494 | if (!tfm->has_key && crypto_ahash_has_setkey(tfm->hash)) | 464 | if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY) |
495 | return -ENOKEY; | 465 | return -ENOKEY; |
496 | 466 | ||
497 | return hash_accept_parent_nokey(private, sk); | 467 | return hash_accept_parent_nokey(private, sk); |
diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c index f50907430c92..c4e885df4564 100644 --- a/crypto/algif_skcipher.c +++ b/crypto/algif_skcipher.c | |||
@@ -38,11 +38,6 @@ | |||
38 | #include <linux/net.h> | 38 | #include <linux/net.h> |
39 | #include <net/sock.h> | 39 | #include <net/sock.h> |
40 | 40 | ||
41 | struct skcipher_tfm { | ||
42 | struct crypto_skcipher *skcipher; | ||
43 | bool has_key; | ||
44 | }; | ||
45 | |||
46 | static int skcipher_sendmsg(struct socket *sock, struct msghdr *msg, | 41 | static int skcipher_sendmsg(struct socket *sock, struct msghdr *msg, |
47 | size_t size) | 42 | size_t size) |
48 | { | 43 | { |
@@ -50,8 +45,7 @@ static int skcipher_sendmsg(struct socket *sock, struct msghdr *msg, | |||
50 | struct alg_sock *ask = alg_sk(sk); | 45 | struct alg_sock *ask = alg_sk(sk); |
51 | struct sock *psk = ask->parent; | 46 | struct sock *psk = ask->parent; |
52 | struct alg_sock *pask = alg_sk(psk); | 47 | struct alg_sock *pask = alg_sk(psk); |
53 | struct skcipher_tfm *skc = pask->private; | 48 | struct crypto_skcipher *tfm = pask->private; |
54 | struct crypto_skcipher *tfm = skc->skcipher; | ||
55 | unsigned ivsize = crypto_skcipher_ivsize(tfm); | 49 | unsigned ivsize = crypto_skcipher_ivsize(tfm); |
56 | 50 | ||
57 | return af_alg_sendmsg(sock, msg, size, ivsize); | 51 | return af_alg_sendmsg(sock, msg, size, ivsize); |
@@ -65,8 +59,7 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg, | |||
65 | struct sock *psk = ask->parent; | 59 | struct sock *psk = ask->parent; |
66 | struct alg_sock *pask = alg_sk(psk); | 60 | struct alg_sock *pask = alg_sk(psk); |
67 | struct af_alg_ctx *ctx = ask->private; | 61 | struct af_alg_ctx *ctx = ask->private; |
68 | struct skcipher_tfm *skc = pask->private; | 62 | struct crypto_skcipher *tfm = pask->private; |
69 | struct crypto_skcipher *tfm = skc->skcipher; | ||
70 | unsigned int bs = crypto_skcipher_blocksize(tfm); | 63 | unsigned int bs = crypto_skcipher_blocksize(tfm); |
71 | struct af_alg_async_req *areq; | 64 | struct af_alg_async_req *areq; |
72 | int err = 0; | 65 | int err = 0; |
@@ -220,7 +213,7 @@ static int skcipher_check_key(struct socket *sock) | |||
220 | int err = 0; | 213 | int err = 0; |
221 | struct sock *psk; | 214 | struct sock *psk; |
222 | struct alg_sock *pask; | 215 | struct alg_sock *pask; |
223 | struct skcipher_tfm *tfm; | 216 | struct crypto_skcipher *tfm; |
224 | struct sock *sk = sock->sk; | 217 | struct sock *sk = sock->sk; |
225 | struct alg_sock *ask = alg_sk(sk); | 218 | struct alg_sock *ask = alg_sk(sk); |
226 | 219 | ||
@@ -234,7 +227,7 @@ static int skcipher_check_key(struct socket *sock) | |||
234 | 227 | ||
235 | err = -ENOKEY; | 228 | err = -ENOKEY; |
236 | lock_sock_nested(psk, SINGLE_DEPTH_NESTING); | 229 | lock_sock_nested(psk, SINGLE_DEPTH_NESTING); |
237 | if (!tfm->has_key) | 230 | if (crypto_skcipher_get_flags(tfm) & CRYPTO_TFM_NEED_KEY) |
238 | goto unlock; | 231 | goto unlock; |
239 | 232 | ||
240 | if (!pask->refcnt++) | 233 | if (!pask->refcnt++) |
@@ -313,41 +306,17 @@ static struct proto_ops algif_skcipher_ops_nokey = { | |||
313 | 306 | ||
314 | static void *skcipher_bind(const char *name, u32 type, u32 mask) | 307 | static void *skcipher_bind(const char *name, u32 type, u32 mask) |
315 | { | 308 | { |
316 | struct skcipher_tfm *tfm; | 309 | return crypto_alloc_skcipher(name, type, mask); |
317 | struct crypto_skcipher *skcipher; | ||
318 | |||
319 | tfm = kzalloc(sizeof(*tfm), GFP_KERNEL); | ||
320 | if (!tfm) | ||
321 | return ERR_PTR(-ENOMEM); | ||
322 | |||
323 | skcipher = crypto_alloc_skcipher(name, type, mask); | ||
324 | if (IS_ERR(skcipher)) { | ||
325 | kfree(tfm); | ||
326 | return ERR_CAST(skcipher); | ||
327 | } | ||
328 | |||
329 | tfm->skcipher = skcipher; | ||
330 | |||
331 | return tfm; | ||
332 | } | 310 | } |
333 | 311 | ||
334 | static void skcipher_release(void *private) | 312 | static void skcipher_release(void *private) |
335 | { | 313 | { |
336 | struct skcipher_tfm *tfm = private; | 314 | crypto_free_skcipher(private); |
337 | |||
338 | crypto_free_skcipher(tfm->skcipher); | ||
339 | kfree(tfm); | ||
340 | } | 315 | } |
341 | 316 | ||
342 | static int skcipher_setkey(void *private, const u8 *key, unsigned int keylen) | 317 | static int skcipher_setkey(void *private, const u8 *key, unsigned int keylen) |
343 | { | 318 | { |
344 | struct skcipher_tfm *tfm = private; | 319 | return crypto_skcipher_setkey(private, key, keylen); |
345 | int err; | ||
346 | |||
347 | err = crypto_skcipher_setkey(tfm->skcipher, key, keylen); | ||
348 | tfm->has_key = !err; | ||
349 | |||
350 | return err; | ||
351 | } | 320 | } |
352 | 321 | ||
353 | static void skcipher_sock_destruct(struct sock *sk) | 322 | static void skcipher_sock_destruct(struct sock *sk) |
@@ -356,8 +325,7 @@ static void skcipher_sock_destruct(struct sock *sk) | |||
356 | struct af_alg_ctx *ctx = ask->private; | 325 | struct af_alg_ctx *ctx = ask->private; |
357 | struct sock *psk = ask->parent; | 326 | struct sock *psk = ask->parent; |
358 | struct alg_sock *pask = alg_sk(psk); | 327 | struct alg_sock *pask = alg_sk(psk); |
359 | struct skcipher_tfm *skc = pask->private; | 328 | struct crypto_skcipher *tfm = pask->private; |
360 | struct crypto_skcipher *tfm = skc->skcipher; | ||
361 | 329 | ||
362 | af_alg_pull_tsgl(sk, ctx->used, NULL, 0); | 330 | af_alg_pull_tsgl(sk, ctx->used, NULL, 0); |
363 | sock_kzfree_s(sk, ctx->iv, crypto_skcipher_ivsize(tfm)); | 331 | sock_kzfree_s(sk, ctx->iv, crypto_skcipher_ivsize(tfm)); |
@@ -369,22 +337,21 @@ static int skcipher_accept_parent_nokey(void *private, struct sock *sk) | |||
369 | { | 337 | { |
370 | struct af_alg_ctx *ctx; | 338 | struct af_alg_ctx *ctx; |
371 | struct alg_sock *ask = alg_sk(sk); | 339 | struct alg_sock *ask = alg_sk(sk); |
372 | struct skcipher_tfm *tfm = private; | 340 | struct crypto_skcipher *tfm = private; |
373 | struct crypto_skcipher *skcipher = tfm->skcipher; | ||
374 | unsigned int len = sizeof(*ctx); | 341 | unsigned int len = sizeof(*ctx); |
375 | 342 | ||
376 | ctx = sock_kmalloc(sk, len, GFP_KERNEL); | 343 | ctx = sock_kmalloc(sk, len, GFP_KERNEL); |
377 | if (!ctx) | 344 | if (!ctx) |
378 | return -ENOMEM; | 345 | return -ENOMEM; |
379 | 346 | ||
380 | ctx->iv = sock_kmalloc(sk, crypto_skcipher_ivsize(skcipher), | 347 | ctx->iv = sock_kmalloc(sk, crypto_skcipher_ivsize(tfm), |
381 | GFP_KERNEL); | 348 | GFP_KERNEL); |
382 | if (!ctx->iv) { | 349 | if (!ctx->iv) { |
383 | sock_kfree_s(sk, ctx, len); | 350 | sock_kfree_s(sk, ctx, len); |
384 | return -ENOMEM; | 351 | return -ENOMEM; |
385 | } | 352 | } |
386 | 353 | ||
387 | memset(ctx->iv, 0, crypto_skcipher_ivsize(skcipher)); | 354 | memset(ctx->iv, 0, crypto_skcipher_ivsize(tfm)); |
388 | 355 | ||
389 | INIT_LIST_HEAD(&ctx->tsgl_list); | 356 | INIT_LIST_HEAD(&ctx->tsgl_list); |
390 | ctx->len = len; | 357 | ctx->len = len; |
@@ -404,9 +371,9 @@ static int skcipher_accept_parent_nokey(void *private, struct sock *sk) | |||
404 | 371 | ||
405 | static int skcipher_accept_parent(void *private, struct sock *sk) | 372 | static int skcipher_accept_parent(void *private, struct sock *sk) |
406 | { | 373 | { |
407 | struct skcipher_tfm *tfm = private; | 374 | struct crypto_skcipher *tfm = private; |
408 | 375 | ||
409 | if (!tfm->has_key && crypto_skcipher_has_setkey(tfm->skcipher)) | 376 | if (crypto_skcipher_get_flags(tfm) & CRYPTO_TFM_NEED_KEY) |
410 | return -ENOKEY; | 377 | return -ENOKEY; |
411 | 378 | ||
412 | return skcipher_accept_parent_nokey(private, sk); | 379 | return skcipher_accept_parent_nokey(private, sk); |
diff --git a/crypto/api.c b/crypto/api.c index 2a2479d168aa..70a894e52ff3 100644 --- a/crypto/api.c +++ b/crypto/api.c | |||
@@ -137,7 +137,7 @@ static struct crypto_alg *crypto_larval_add(const char *name, u32 type, | |||
137 | if (IS_ERR(larval)) | 137 | if (IS_ERR(larval)) |
138 | return ERR_CAST(larval); | 138 | return ERR_CAST(larval); |
139 | 139 | ||
140 | atomic_set(&larval->alg.cra_refcnt, 2); | 140 | refcount_set(&larval->alg.cra_refcnt, 2); |
141 | 141 | ||
142 | down_write(&crypto_alg_sem); | 142 | down_write(&crypto_alg_sem); |
143 | alg = __crypto_alg_lookup(name, type, mask); | 143 | alg = __crypto_alg_lookup(name, type, mask); |
@@ -205,7 +205,8 @@ struct crypto_alg *crypto_alg_lookup(const char *name, u32 type, u32 mask) | |||
205 | } | 205 | } |
206 | EXPORT_SYMBOL_GPL(crypto_alg_lookup); | 206 | EXPORT_SYMBOL_GPL(crypto_alg_lookup); |
207 | 207 | ||
208 | struct crypto_alg *crypto_larval_lookup(const char *name, u32 type, u32 mask) | 208 | static struct crypto_alg *crypto_larval_lookup(const char *name, u32 type, |
209 | u32 mask) | ||
209 | { | 210 | { |
210 | struct crypto_alg *alg; | 211 | struct crypto_alg *alg; |
211 | 212 | ||
@@ -231,7 +232,6 @@ struct crypto_alg *crypto_larval_lookup(const char *name, u32 type, u32 mask) | |||
231 | 232 | ||
232 | return crypto_larval_add(name, type, mask); | 233 | return crypto_larval_add(name, type, mask); |
233 | } | 234 | } |
234 | EXPORT_SYMBOL_GPL(crypto_larval_lookup); | ||
235 | 235 | ||
236 | int crypto_probing_notify(unsigned long val, void *v) | 236 | int crypto_probing_notify(unsigned long val, void *v) |
237 | { | 237 | { |
diff --git a/crypto/authenc.c b/crypto/authenc.c index 875470b0e026..d3d6d72fe649 100644 --- a/crypto/authenc.c +++ b/crypto/authenc.c | |||
@@ -329,7 +329,7 @@ static int crypto_authenc_init_tfm(struct crypto_aead *tfm) | |||
329 | if (IS_ERR(enc)) | 329 | if (IS_ERR(enc)) |
330 | goto err_free_ahash; | 330 | goto err_free_ahash; |
331 | 331 | ||
332 | null = crypto_get_default_null_skcipher2(); | 332 | null = crypto_get_default_null_skcipher(); |
333 | err = PTR_ERR(null); | 333 | err = PTR_ERR(null); |
334 | if (IS_ERR(null)) | 334 | if (IS_ERR(null)) |
335 | goto err_free_skcipher; | 335 | goto err_free_skcipher; |
@@ -363,7 +363,7 @@ static void crypto_authenc_exit_tfm(struct crypto_aead *tfm) | |||
363 | 363 | ||
364 | crypto_free_ahash(ctx->auth); | 364 | crypto_free_ahash(ctx->auth); |
365 | crypto_free_skcipher(ctx->enc); | 365 | crypto_free_skcipher(ctx->enc); |
366 | crypto_put_default_null_skcipher2(); | 366 | crypto_put_default_null_skcipher(); |
367 | } | 367 | } |
368 | 368 | ||
369 | static void crypto_authenc_free(struct aead_instance *inst) | 369 | static void crypto_authenc_free(struct aead_instance *inst) |
diff --git a/crypto/authencesn.c b/crypto/authencesn.c index 0cf5fefdb859..15f91ddd7f0e 100644 --- a/crypto/authencesn.c +++ b/crypto/authencesn.c | |||
@@ -352,7 +352,7 @@ static int crypto_authenc_esn_init_tfm(struct crypto_aead *tfm) | |||
352 | if (IS_ERR(enc)) | 352 | if (IS_ERR(enc)) |
353 | goto err_free_ahash; | 353 | goto err_free_ahash; |
354 | 354 | ||
355 | null = crypto_get_default_null_skcipher2(); | 355 | null = crypto_get_default_null_skcipher(); |
356 | err = PTR_ERR(null); | 356 | err = PTR_ERR(null); |
357 | if (IS_ERR(null)) | 357 | if (IS_ERR(null)) |
358 | goto err_free_skcipher; | 358 | goto err_free_skcipher; |
@@ -389,7 +389,7 @@ static void crypto_authenc_esn_exit_tfm(struct crypto_aead *tfm) | |||
389 | 389 | ||
390 | crypto_free_ahash(ctx->auth); | 390 | crypto_free_ahash(ctx->auth); |
391 | crypto_free_skcipher(ctx->enc); | 391 | crypto_free_skcipher(ctx->enc); |
392 | crypto_put_default_null_skcipher2(); | 392 | crypto_put_default_null_skcipher(); |
393 | } | 393 | } |
394 | 394 | ||
395 | static void crypto_authenc_esn_free(struct aead_instance *inst) | 395 | static void crypto_authenc_esn_free(struct aead_instance *inst) |
diff --git a/crypto/blkcipher.c b/crypto/blkcipher.c index 6c43a0a17a55..01c0d4aa2563 100644 --- a/crypto/blkcipher.c +++ b/crypto/blkcipher.c | |||
@@ -18,7 +18,6 @@ | |||
18 | #include <crypto/internal/skcipher.h> | 18 | #include <crypto/internal/skcipher.h> |
19 | #include <crypto/scatterwalk.h> | 19 | #include <crypto/scatterwalk.h> |
20 | #include <linux/errno.h> | 20 | #include <linux/errno.h> |
21 | #include <linux/hardirq.h> | ||
22 | #include <linux/kernel.h> | 21 | #include <linux/kernel.h> |
23 | #include <linux/module.h> | 22 | #include <linux/module.h> |
24 | #include <linux/seq_file.h> | 23 | #include <linux/seq_file.h> |
diff --git a/crypto/camellia_generic.c b/crypto/camellia_generic.c index a02286bf319e..32ddd4836ff5 100644 --- a/crypto/camellia_generic.c +++ b/crypto/camellia_generic.c | |||
@@ -13,8 +13,7 @@ | |||
13 | * GNU General Public License for more details. | 13 | * GNU General Public License for more details. |
14 | * | 14 | * |
15 | * You should have received a copy of the GNU General Public License | 15 | * You should have received a copy of the GNU General Public License |
16 | * along with this program; if not, write to the Free Software | 16 | * along with this program. If not, see <http://www.gnu.org/licenses/>. |
17 | * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. | ||
18 | */ | 17 | */ |
19 | 18 | ||
20 | /* | 19 | /* |
diff --git a/crypto/cast5_generic.c b/crypto/cast5_generic.c index df5c72629383..66169c178314 100644 --- a/crypto/cast5_generic.c +++ b/crypto/cast5_generic.c | |||
@@ -16,8 +16,7 @@ | |||
16 | * any later version. | 16 | * any later version. |
17 | * | 17 | * |
18 | * You should have received a copy of the GNU General Public License | 18 | * You should have received a copy of the GNU General Public License |
19 | * along with this program; if not, write to the Free Software | 19 | * along with this program. If not, see <http://www.gnu.org/licenses/>. |
20 | * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA | ||
21 | */ | 20 | */ |
22 | 21 | ||
23 | 22 | ||
diff --git a/crypto/cast6_generic.c b/crypto/cast6_generic.c index 058c8d755d03..c8e5ec69790e 100644 --- a/crypto/cast6_generic.c +++ b/crypto/cast6_generic.c | |||
@@ -13,8 +13,7 @@ | |||
13 | * any later version. | 13 | * any later version. |
14 | * | 14 | * |
15 | * You should have received a copy of the GNU General Public License | 15 | * You should have received a copy of the GNU General Public License |
16 | * along with this program; if not, write to the Free Software | 16 | * along with this program. If not, see <http://www.gnu.org/licenses/>. |
17 | * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA | ||
18 | */ | 17 | */ |
19 | 18 | ||
20 | 19 | ||
diff --git a/crypto/chacha20_generic.c b/crypto/chacha20_generic.c index 4a45fa4890c0..e451c3cb6a56 100644 --- a/crypto/chacha20_generic.c +++ b/crypto/chacha20_generic.c | |||
@@ -9,44 +9,38 @@ | |||
9 | * (at your option) any later version. | 9 | * (at your option) any later version. |
10 | */ | 10 | */ |
11 | 11 | ||
12 | #include <asm/unaligned.h> | ||
12 | #include <crypto/algapi.h> | 13 | #include <crypto/algapi.h> |
13 | #include <crypto/chacha20.h> | 14 | #include <crypto/chacha20.h> |
14 | #include <crypto/internal/skcipher.h> | 15 | #include <crypto/internal/skcipher.h> |
15 | #include <linux/module.h> | 16 | #include <linux/module.h> |
16 | 17 | ||
17 | static inline u32 le32_to_cpuvp(const void *p) | ||
18 | { | ||
19 | return le32_to_cpup(p); | ||
20 | } | ||
21 | |||
22 | static void chacha20_docrypt(u32 *state, u8 *dst, const u8 *src, | 18 | static void chacha20_docrypt(u32 *state, u8 *dst, const u8 *src, |
23 | unsigned int bytes) | 19 | unsigned int bytes) |
24 | { | 20 | { |
25 | u8 stream[CHACHA20_BLOCK_SIZE]; | 21 | u32 stream[CHACHA20_BLOCK_WORDS]; |
26 | 22 | ||
27 | if (dst != src) | 23 | if (dst != src) |
28 | memcpy(dst, src, bytes); | 24 | memcpy(dst, src, bytes); |
29 | 25 | ||
30 | while (bytes >= CHACHA20_BLOCK_SIZE) { | 26 | while (bytes >= CHACHA20_BLOCK_SIZE) { |
31 | chacha20_block(state, stream); | 27 | chacha20_block(state, stream); |
32 | crypto_xor(dst, stream, CHACHA20_BLOCK_SIZE); | 28 | crypto_xor(dst, (const u8 *)stream, CHACHA20_BLOCK_SIZE); |
33 | bytes -= CHACHA20_BLOCK_SIZE; | 29 | bytes -= CHACHA20_BLOCK_SIZE; |
34 | dst += CHACHA20_BLOCK_SIZE; | 30 | dst += CHACHA20_BLOCK_SIZE; |
35 | } | 31 | } |
36 | if (bytes) { | 32 | if (bytes) { |
37 | chacha20_block(state, stream); | 33 | chacha20_block(state, stream); |
38 | crypto_xor(dst, stream, bytes); | 34 | crypto_xor(dst, (const u8 *)stream, bytes); |
39 | } | 35 | } |
40 | } | 36 | } |
41 | 37 | ||
42 | void crypto_chacha20_init(u32 *state, struct chacha20_ctx *ctx, u8 *iv) | 38 | void crypto_chacha20_init(u32 *state, struct chacha20_ctx *ctx, u8 *iv) |
43 | { | 39 | { |
44 | static const char constant[16] = "expand 32-byte k"; | 40 | state[0] = 0x61707865; /* "expa" */ |
45 | 41 | state[1] = 0x3320646e; /* "nd 3" */ | |
46 | state[0] = le32_to_cpuvp(constant + 0); | 42 | state[2] = 0x79622d32; /* "2-by" */ |
47 | state[1] = le32_to_cpuvp(constant + 4); | 43 | state[3] = 0x6b206574; /* "te k" */ |
48 | state[2] = le32_to_cpuvp(constant + 8); | ||
49 | state[3] = le32_to_cpuvp(constant + 12); | ||
50 | state[4] = ctx->key[0]; | 44 | state[4] = ctx->key[0]; |
51 | state[5] = ctx->key[1]; | 45 | state[5] = ctx->key[1]; |
52 | state[6] = ctx->key[2]; | 46 | state[6] = ctx->key[2]; |
@@ -55,10 +49,10 @@ void crypto_chacha20_init(u32 *state, struct chacha20_ctx *ctx, u8 *iv) | |||
55 | state[9] = ctx->key[5]; | 49 | state[9] = ctx->key[5]; |
56 | state[10] = ctx->key[6]; | 50 | state[10] = ctx->key[6]; |
57 | state[11] = ctx->key[7]; | 51 | state[11] = ctx->key[7]; |
58 | state[12] = le32_to_cpuvp(iv + 0); | 52 | state[12] = get_unaligned_le32(iv + 0); |
59 | state[13] = le32_to_cpuvp(iv + 4); | 53 | state[13] = get_unaligned_le32(iv + 4); |
60 | state[14] = le32_to_cpuvp(iv + 8); | 54 | state[14] = get_unaligned_le32(iv + 8); |
61 | state[15] = le32_to_cpuvp(iv + 12); | 55 | state[15] = get_unaligned_le32(iv + 12); |
62 | } | 56 | } |
63 | EXPORT_SYMBOL_GPL(crypto_chacha20_init); | 57 | EXPORT_SYMBOL_GPL(crypto_chacha20_init); |
64 | 58 | ||
@@ -72,7 +66,7 @@ int crypto_chacha20_setkey(struct crypto_skcipher *tfm, const u8 *key, | |||
72 | return -EINVAL; | 66 | return -EINVAL; |
73 | 67 | ||
74 | for (i = 0; i < ARRAY_SIZE(ctx->key); i++) | 68 | for (i = 0; i < ARRAY_SIZE(ctx->key); i++) |
75 | ctx->key[i] = le32_to_cpuvp(key + i * sizeof(u32)); | 69 | ctx->key[i] = get_unaligned_le32(key + i * sizeof(u32)); |
76 | 70 | ||
77 | return 0; | 71 | return 0; |
78 | } | 72 | } |
@@ -111,7 +105,6 @@ static struct skcipher_alg alg = { | |||
111 | .base.cra_priority = 100, | 105 | .base.cra_priority = 100, |
112 | .base.cra_blocksize = 1, | 106 | .base.cra_blocksize = 1, |
113 | .base.cra_ctxsize = sizeof(struct chacha20_ctx), | 107 | .base.cra_ctxsize = sizeof(struct chacha20_ctx), |
114 | .base.cra_alignmask = sizeof(u32) - 1, | ||
115 | .base.cra_module = THIS_MODULE, | 108 | .base.cra_module = THIS_MODULE, |
116 | 109 | ||
117 | .min_keysize = CHACHA20_KEY_SIZE, | 110 | .min_keysize = CHACHA20_KEY_SIZE, |
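A purely illustrative check, not part of the patch, of why the hard-coded words in crypto_chacha20_init() above match the removed le32_to_cpuvp() reads of the "expand 32-byte k" constant — each word is simply four ASCII bytes interpreted as a little-endian 32-bit value:

```c
/* Illustrative self-check only; not code from this series. */
#include <asm/unaligned.h>
#include <linux/bug.h>

static void chacha20_constant_selftest(void)
{
	static const char constant[16] = "expand 32-byte k";

	WARN_ON(get_unaligned_le32(constant +  0) != 0x61707865); /* "expa" */
	WARN_ON(get_unaligned_le32(constant +  4) != 0x3320646e); /* "nd 3" */
	WARN_ON(get_unaligned_le32(constant +  8) != 0x79622d32); /* "2-by" */
	WARN_ON(get_unaligned_le32(constant + 12) != 0x6b206574); /* "te k" */
}
```

Since the words are now compile-time constants, the get_unaligned_le32() helpers are only needed for the caller-supplied key and IV, which is presumably also why the cra_alignmask could be dropped.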
diff --git a/crypto/crc32_generic.c b/crypto/crc32_generic.c index aa2a25fc7482..718cbce8d169 100644 --- a/crypto/crc32_generic.c +++ b/crypto/crc32_generic.c | |||
@@ -133,6 +133,7 @@ static struct shash_alg alg = { | |||
133 | .cra_name = "crc32", | 133 | .cra_name = "crc32", |
134 | .cra_driver_name = "crc32-generic", | 134 | .cra_driver_name = "crc32-generic", |
135 | .cra_priority = 100, | 135 | .cra_priority = 100, |
136 | .cra_flags = CRYPTO_ALG_OPTIONAL_KEY, | ||
136 | .cra_blocksize = CHKSUM_BLOCK_SIZE, | 137 | .cra_blocksize = CHKSUM_BLOCK_SIZE, |
137 | .cra_ctxsize = sizeof(u32), | 138 | .cra_ctxsize = sizeof(u32), |
138 | .cra_module = THIS_MODULE, | 139 | .cra_module = THIS_MODULE, |
diff --git a/crypto/crc32c_generic.c b/crypto/crc32c_generic.c index 4c0a0e271876..372320399622 100644 --- a/crypto/crc32c_generic.c +++ b/crypto/crc32c_generic.c | |||
@@ -146,6 +146,7 @@ static struct shash_alg alg = { | |||
146 | .cra_name = "crc32c", | 146 | .cra_name = "crc32c", |
147 | .cra_driver_name = "crc32c-generic", | 147 | .cra_driver_name = "crc32c-generic", |
148 | .cra_priority = 100, | 148 | .cra_priority = 100, |
149 | .cra_flags = CRYPTO_ALG_OPTIONAL_KEY, | ||
149 | .cra_blocksize = CHKSUM_BLOCK_SIZE, | 150 | .cra_blocksize = CHKSUM_BLOCK_SIZE, |
150 | .cra_alignmask = 3, | 151 | .cra_alignmask = 3, |
151 | .cra_ctxsize = sizeof(struct chksum_ctx), | 152 | .cra_ctxsize = sizeof(struct chksum_ctx), |
diff --git a/crypto/cryptd.c b/crypto/cryptd.c index bd43cf5be14c..addca7bae33f 100644 --- a/crypto/cryptd.c +++ b/crypto/cryptd.c | |||
@@ -32,7 +32,9 @@ | |||
32 | #include <linux/sched.h> | 32 | #include <linux/sched.h> |
33 | #include <linux/slab.h> | 33 | #include <linux/slab.h> |
34 | 34 | ||
35 | #define CRYPTD_MAX_CPU_QLEN 1000 | 35 | static unsigned int cryptd_max_cpu_qlen = 1000; |
36 | module_param(cryptd_max_cpu_qlen, uint, 0); | ||
37 | MODULE_PARM_DESC(cryptd_max_cpu_qlen, "Set cryptd Max queue depth"); | ||
36 | 38 | ||
37 | struct cryptd_cpu_queue { | 39 | struct cryptd_cpu_queue { |
38 | struct crypto_queue queue; | 40 | struct crypto_queue queue; |
@@ -116,6 +118,7 @@ static int cryptd_init_queue(struct cryptd_queue *queue, | |||
116 | crypto_init_queue(&cpu_queue->queue, max_cpu_qlen); | 118 | crypto_init_queue(&cpu_queue->queue, max_cpu_qlen); |
117 | INIT_WORK(&cpu_queue->work, cryptd_queue_worker); | 119 | INIT_WORK(&cpu_queue->work, cryptd_queue_worker); |
118 | } | 120 | } |
121 | pr_info("cryptd: max_cpu_qlen set to %d\n", max_cpu_qlen); | ||
119 | return 0; | 122 | return 0; |
120 | } | 123 | } |
121 | 124 | ||
@@ -893,10 +896,9 @@ static int cryptd_create_hash(struct crypto_template *tmpl, struct rtattr **tb, | |||
893 | if (err) | 896 | if (err) |
894 | goto out_free_inst; | 897 | goto out_free_inst; |
895 | 898 | ||
896 | type = CRYPTO_ALG_ASYNC; | 899 | inst->alg.halg.base.cra_flags = CRYPTO_ALG_ASYNC | |
897 | if (alg->cra_flags & CRYPTO_ALG_INTERNAL) | 900 | (alg->cra_flags & (CRYPTO_ALG_INTERNAL | |
898 | type |= CRYPTO_ALG_INTERNAL; | 901 | CRYPTO_ALG_OPTIONAL_KEY)); |
899 | inst->alg.halg.base.cra_flags = type; | ||
900 | 902 | ||
901 | inst->alg.halg.digestsize = salg->digestsize; | 903 | inst->alg.halg.digestsize = salg->digestsize; |
902 | inst->alg.halg.statesize = salg->statesize; | 904 | inst->alg.halg.statesize = salg->statesize; |
@@ -911,7 +913,8 @@ static int cryptd_create_hash(struct crypto_template *tmpl, struct rtattr **tb, | |||
911 | inst->alg.finup = cryptd_hash_finup_enqueue; | 913 | inst->alg.finup = cryptd_hash_finup_enqueue; |
912 | inst->alg.export = cryptd_hash_export; | 914 | inst->alg.export = cryptd_hash_export; |
913 | inst->alg.import = cryptd_hash_import; | 915 | inst->alg.import = cryptd_hash_import; |
914 | inst->alg.setkey = cryptd_hash_setkey; | 916 | if (crypto_shash_alg_has_setkey(salg)) |
917 | inst->alg.setkey = cryptd_hash_setkey; | ||
915 | inst->alg.digest = cryptd_hash_digest_enqueue; | 918 | inst->alg.digest = cryptd_hash_digest_enqueue; |
916 | 919 | ||
917 | err = ahash_register_instance(tmpl, inst); | 920 | err = ahash_register_instance(tmpl, inst); |
@@ -1372,7 +1375,7 @@ static int __init cryptd_init(void) | |||
1372 | { | 1375 | { |
1373 | int err; | 1376 | int err; |
1374 | 1377 | ||
1375 | err = cryptd_init_queue(&queue, CRYPTD_MAX_CPU_QLEN); | 1378 | err = cryptd_init_queue(&queue, cryptd_max_cpu_qlen); |
1376 | if (err) | 1379 | if (err) |
1377 | return err; | 1380 | return err; |
1378 | 1381 | ||
diff --git a/crypto/crypto_user.c b/crypto/crypto_user.c index 0dbe2be7f783..5c291eedaa70 100644 --- a/crypto/crypto_user.c +++ b/crypto/crypto_user.c | |||
@@ -169,7 +169,7 @@ static int crypto_report_one(struct crypto_alg *alg, | |||
169 | ualg->cru_type = 0; | 169 | ualg->cru_type = 0; |
170 | ualg->cru_mask = 0; | 170 | ualg->cru_mask = 0; |
171 | ualg->cru_flags = alg->cra_flags; | 171 | ualg->cru_flags = alg->cra_flags; |
172 | ualg->cru_refcnt = atomic_read(&alg->cra_refcnt); | 172 | ualg->cru_refcnt = refcount_read(&alg->cra_refcnt); |
173 | 173 | ||
174 | if (nla_put_u32(skb, CRYPTOCFGA_PRIORITY_VAL, alg->cra_priority)) | 174 | if (nla_put_u32(skb, CRYPTOCFGA_PRIORITY_VAL, alg->cra_priority)) |
175 | goto nla_put_failure; | 175 | goto nla_put_failure; |
@@ -387,7 +387,7 @@ static int crypto_del_alg(struct sk_buff *skb, struct nlmsghdr *nlh, | |||
387 | goto drop_alg; | 387 | goto drop_alg; |
388 | 388 | ||
389 | err = -EBUSY; | 389 | err = -EBUSY; |
390 | if (atomic_read(&alg->cra_refcnt) > 2) | 390 | if (refcount_read(&alg->cra_refcnt) > 2) |
391 | goto drop_alg; | 391 | goto drop_alg; |
392 | 392 | ||
393 | err = crypto_unregister_instance((struct crypto_instance *)alg); | 393 | err = crypto_unregister_instance((struct crypto_instance *)alg); |
diff --git a/crypto/ecc.c b/crypto/ecc.c index 633a9bcdc574..18f32f2a5e1c 100644 --- a/crypto/ecc.c +++ b/crypto/ecc.c | |||
@@ -964,7 +964,7 @@ int ecc_gen_privkey(unsigned int curve_id, unsigned int ndigits, u64 *privkey) | |||
964 | * DRBG with a security strength of 256. | 964 | * DRBG with a security strength of 256. |
965 | */ | 965 | */ |
966 | if (crypto_get_default_rng()) | 966 | if (crypto_get_default_rng()) |
967 | err = -EFAULT; | 967 | return -EFAULT; |
968 | 968 | ||
969 | err = crypto_rng_get_bytes(crypto_default_rng, (u8 *)priv, nbytes); | 969 | err = crypto_rng_get_bytes(crypto_default_rng, (u8 *)priv, nbytes); |
970 | crypto_put_default_rng(); | 970 | crypto_put_default_rng(); |
diff --git a/crypto/echainiv.c b/crypto/echainiv.c index e3d889b122e0..45819e6015bf 100644 --- a/crypto/echainiv.c +++ b/crypto/echainiv.c | |||
@@ -118,8 +118,6 @@ static int echainiv_aead_create(struct crypto_template *tmpl, | |||
118 | struct rtattr **tb) | 118 | struct rtattr **tb) |
119 | { | 119 | { |
120 | struct aead_instance *inst; | 120 | struct aead_instance *inst; |
121 | struct crypto_aead_spawn *spawn; | ||
122 | struct aead_alg *alg; | ||
123 | int err; | 121 | int err; |
124 | 122 | ||
125 | inst = aead_geniv_alloc(tmpl, tb, 0, 0); | 123 | inst = aead_geniv_alloc(tmpl, tb, 0, 0); |
@@ -127,9 +125,6 @@ static int echainiv_aead_create(struct crypto_template *tmpl, | |||
127 | if (IS_ERR(inst)) | 125 | if (IS_ERR(inst)) |
128 | return PTR_ERR(inst); | 126 | return PTR_ERR(inst); |
129 | 127 | ||
130 | spawn = aead_instance_ctx(inst); | ||
131 | alg = crypto_spawn_aead_alg(spawn); | ||
132 | |||
133 | err = -EINVAL; | 128 | err = -EINVAL; |
134 | if (inst->alg.ivsize & (sizeof(u64) - 1) || !inst->alg.ivsize) | 129 | if (inst->alg.ivsize & (sizeof(u64) - 1) || !inst->alg.ivsize) |
135 | goto free_inst; | 130 | goto free_inst; |
diff --git a/crypto/gcm.c b/crypto/gcm.c index 8589681fb9f6..0ad879e1f9b2 100644 --- a/crypto/gcm.c +++ b/crypto/gcm.c | |||
@@ -1101,7 +1101,7 @@ static int crypto_rfc4543_init_tfm(struct crypto_aead *tfm) | |||
1101 | if (IS_ERR(aead)) | 1101 | if (IS_ERR(aead)) |
1102 | return PTR_ERR(aead); | 1102 | return PTR_ERR(aead); |
1103 | 1103 | ||
1104 | null = crypto_get_default_null_skcipher2(); | 1104 | null = crypto_get_default_null_skcipher(); |
1105 | err = PTR_ERR(null); | 1105 | err = PTR_ERR(null); |
1106 | if (IS_ERR(null)) | 1106 | if (IS_ERR(null)) |
1107 | goto err_free_aead; | 1107 | goto err_free_aead; |
@@ -1129,7 +1129,7 @@ static void crypto_rfc4543_exit_tfm(struct crypto_aead *tfm) | |||
1129 | struct crypto_rfc4543_ctx *ctx = crypto_aead_ctx(tfm); | 1129 | struct crypto_rfc4543_ctx *ctx = crypto_aead_ctx(tfm); |
1130 | 1130 | ||
1131 | crypto_free_aead(ctx->child); | 1131 | crypto_free_aead(ctx->child); |
1132 | crypto_put_default_null_skcipher2(); | 1132 | crypto_put_default_null_skcipher(); |
1133 | } | 1133 | } |
1134 | 1134 | ||
1135 | static void crypto_rfc4543_free(struct aead_instance *inst) | 1135 | static void crypto_rfc4543_free(struct aead_instance *inst) |
diff --git a/crypto/gf128mul.c b/crypto/gf128mul.c index 24e601954c7a..a4b1c026aaee 100644 --- a/crypto/gf128mul.c +++ b/crypto/gf128mul.c | |||
@@ -160,8 +160,6 @@ void gf128mul_x8_ble(le128 *r, const le128 *x) | |||
160 | { | 160 | { |
161 | u64 a = le64_to_cpu(x->a); | 161 | u64 a = le64_to_cpu(x->a); |
162 | u64 b = le64_to_cpu(x->b); | 162 | u64 b = le64_to_cpu(x->b); |
163 | |||
164 | /* equivalent to gf128mul_table_be[b >> 63] (see crypto/gf128mul.c): */ | ||
165 | u64 _tt = gf128mul_table_be[a >> 56]; | 163 | u64 _tt = gf128mul_table_be[a >> 56]; |
166 | 164 | ||
167 | r->a = cpu_to_le64((a << 8) | (b >> 56)); | 165 | r->a = cpu_to_le64((a << 8) | (b >> 56)); |
diff --git a/crypto/ghash-generic.c b/crypto/ghash-generic.c index 12ad3e3a84e3..1bffb3f712dd 100644 --- a/crypto/ghash-generic.c +++ b/crypto/ghash-generic.c | |||
@@ -56,9 +56,6 @@ static int ghash_update(struct shash_desc *desc, | |||
56 | struct ghash_ctx *ctx = crypto_shash_ctx(desc->tfm); | 56 | struct ghash_ctx *ctx = crypto_shash_ctx(desc->tfm); |
57 | u8 *dst = dctx->buffer; | 57 | u8 *dst = dctx->buffer; |
58 | 58 | ||
59 | if (!ctx->gf128) | ||
60 | return -ENOKEY; | ||
61 | |||
62 | if (dctx->bytes) { | 59 | if (dctx->bytes) { |
63 | int n = min(srclen, dctx->bytes); | 60 | int n = min(srclen, dctx->bytes); |
64 | u8 *pos = dst + (GHASH_BLOCK_SIZE - dctx->bytes); | 61 | u8 *pos = dst + (GHASH_BLOCK_SIZE - dctx->bytes); |
@@ -111,9 +108,6 @@ static int ghash_final(struct shash_desc *desc, u8 *dst) | |||
111 | struct ghash_ctx *ctx = crypto_shash_ctx(desc->tfm); | 108 | struct ghash_ctx *ctx = crypto_shash_ctx(desc->tfm); |
112 | u8 *buf = dctx->buffer; | 109 | u8 *buf = dctx->buffer; |
113 | 110 | ||
114 | if (!ctx->gf128) | ||
115 | return -ENOKEY; | ||
116 | |||
117 | ghash_flush(ctx, dctx); | 111 | ghash_flush(ctx, dctx); |
118 | memcpy(dst, buf, GHASH_BLOCK_SIZE); | 112 | memcpy(dst, buf, GHASH_BLOCK_SIZE); |
119 | 113 | ||
diff --git a/crypto/internal.h b/crypto/internal.h index f07320423191..5ac27fba10e8 100644 --- a/crypto/internal.h +++ b/crypto/internal.h | |||
@@ -30,9 +30,6 @@ | |||
30 | enum { | 30 | enum { |
31 | CRYPTO_MSG_ALG_REQUEST, | 31 | CRYPTO_MSG_ALG_REQUEST, |
32 | CRYPTO_MSG_ALG_REGISTER, | 32 | CRYPTO_MSG_ALG_REGISTER, |
33 | CRYPTO_MSG_ALG_UNREGISTER, | ||
34 | CRYPTO_MSG_TMPL_REGISTER, | ||
35 | CRYPTO_MSG_TMPL_UNREGISTER, | ||
36 | }; | 33 | }; |
37 | 34 | ||
38 | struct crypto_instance; | 35 | struct crypto_instance; |
@@ -78,7 +75,6 @@ int crypto_init_compress_ops(struct crypto_tfm *tfm); | |||
78 | 75 | ||
79 | struct crypto_larval *crypto_larval_alloc(const char *name, u32 type, u32 mask); | 76 | struct crypto_larval *crypto_larval_alloc(const char *name, u32 type, u32 mask); |
80 | void crypto_larval_kill(struct crypto_alg *alg); | 77 | void crypto_larval_kill(struct crypto_alg *alg); |
81 | struct crypto_alg *crypto_larval_lookup(const char *name, u32 type, u32 mask); | ||
82 | void crypto_alg_tested(const char *name, int err); | 78 | void crypto_alg_tested(const char *name, int err); |
83 | 79 | ||
84 | void crypto_remove_spawns(struct crypto_alg *alg, struct list_head *list, | 80 | void crypto_remove_spawns(struct crypto_alg *alg, struct list_head *list, |
@@ -106,13 +102,13 @@ int crypto_type_has_alg(const char *name, const struct crypto_type *frontend, | |||
106 | 102 | ||
107 | static inline struct crypto_alg *crypto_alg_get(struct crypto_alg *alg) | 103 | static inline struct crypto_alg *crypto_alg_get(struct crypto_alg *alg) |
108 | { | 104 | { |
109 | atomic_inc(&alg->cra_refcnt); | 105 | refcount_inc(&alg->cra_refcnt); |
110 | return alg; | 106 | return alg; |
111 | } | 107 | } |
112 | 108 | ||
113 | static inline void crypto_alg_put(struct crypto_alg *alg) | 109 | static inline void crypto_alg_put(struct crypto_alg *alg) |
114 | { | 110 | { |
115 | if (atomic_dec_and_test(&alg->cra_refcnt) && alg->cra_destroy) | 111 | if (refcount_dec_and_test(&alg->cra_refcnt) && alg->cra_destroy) |
116 | alg->cra_destroy(alg); | 112 | alg->cra_destroy(alg); |
117 | } | 113 | } |
118 | 114 | ||
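The crypto_alg reference count moves from atomic_t to refcount_t here (and the /proc/crypto reader below follows suit). A minimal illustrative sketch of why this matters, assuming the standard refcount_t semantics; the struct and helper names are made up for illustration, only the get/put pattern mirrors crypto_alg_get()/crypto_alg_put():

#include <linux/refcount.h>
#include <linux/slab.h>

/* Sketch only: refcount_t saturates and WARNs on overflow/underflow, so a
 * leaked get or a double put can no longer wrap the counter and trigger a
 * premature destructor call, which is the hazard with a plain atomic_t.
 */
struct obj {
	refcount_t ref;
};

static struct obj *obj_get(struct obj *o)
{
	refcount_inc(&o->ref);			/* WARNs if ref was already 0 */
	return o;
}

static void obj_put(struct obj *o)
{
	if (refcount_dec_and_test(&o->ref))	/* true only on the final put */
		kfree(o);
}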
diff --git a/crypto/keywrap.c b/crypto/keywrap.c index 744e35134c45..ec5c6a087c90 100644 --- a/crypto/keywrap.c +++ b/crypto/keywrap.c | |||
@@ -188,7 +188,7 @@ static int crypto_kw_decrypt(struct blkcipher_desc *desc, | |||
188 | } | 188 | } |
189 | 189 | ||
190 | /* Perform authentication check */ | 190 | /* Perform authentication check */ |
191 | if (block.A != cpu_to_be64(0xa6a6a6a6a6a6a6a6)) | 191 | if (block.A != cpu_to_be64(0xa6a6a6a6a6a6a6a6ULL)) |
192 | ret = -EBADMSG; | 192 | ret = -EBADMSG; |
193 | 193 | ||
194 | memzero_explicit(&block, sizeof(struct crypto_kw_block)); | 194 | memzero_explicit(&block, sizeof(struct crypto_kw_block)); |
@@ -221,7 +221,7 @@ static int crypto_kw_encrypt(struct blkcipher_desc *desc, | |||
221 | * Place the predefined IV into block A -- for encrypt, the caller | 221 | * Place the predefined IV into block A -- for encrypt, the caller |
222 | * does not need to provide an IV, but he needs to fetch the final IV. | 222 | * does not need to provide an IV, but he needs to fetch the final IV. |
223 | */ | 223 | */ |
224 | block.A = cpu_to_be64(0xa6a6a6a6a6a6a6a6); | 224 | block.A = cpu_to_be64(0xa6a6a6a6a6a6a6a6ULL); |
225 | 225 | ||
226 | /* | 226 | /* |
227 | * src scatterlist is read-only. dst scatterlist is r/w. During the | 227 | * src scatterlist is read-only. dst scatterlist is r/w. During the |
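The only change to keywrap is the explicit ULL suffix on the RFC 3394 default IV constant. A small sketch of the idea, with the caveat that the motivation (silencing "integer constant is too large" style warnings on 32-bit builds) is an assumption, not stated in this hunk:

#include <linux/types.h>
#include <asm/byteorder.h>

/* Sketch only: RFC 3394's default IV is the byte 0xA6 repeated eight times.
 * The ULL suffix makes the 64-bit type of the constant explicit, presumably
 * so 32-bit toolchains do not warn that it is too large for 'long'.
 */
static bool kw_iv_is_default(__be64 a)
{
	return a == cpu_to_be64(0xa6a6a6a6a6a6a6a6ULL);
}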
diff --git a/crypto/mcryptd.c b/crypto/mcryptd.c index eca04d3729b3..fe5129d6ff4e 100644 --- a/crypto/mcryptd.c +++ b/crypto/mcryptd.c | |||
@@ -26,7 +26,6 @@ | |||
26 | #include <linux/sched.h> | 26 | #include <linux/sched.h> |
27 | #include <linux/sched/stat.h> | 27 | #include <linux/sched/stat.h> |
28 | #include <linux/slab.h> | 28 | #include <linux/slab.h> |
29 | #include <linux/hardirq.h> | ||
30 | 29 | ||
31 | #define MCRYPTD_MAX_CPU_QLEN 100 | 30 | #define MCRYPTD_MAX_CPU_QLEN 100 |
32 | #define MCRYPTD_BATCH 9 | 31 | #define MCRYPTD_BATCH 9 |
@@ -517,10 +516,9 @@ static int mcryptd_create_hash(struct crypto_template *tmpl, struct rtattr **tb, | |||
517 | if (err) | 516 | if (err) |
518 | goto out_free_inst; | 517 | goto out_free_inst; |
519 | 518 | ||
520 | type = CRYPTO_ALG_ASYNC; | 519 | inst->alg.halg.base.cra_flags = CRYPTO_ALG_ASYNC | |
521 | if (alg->cra_flags & CRYPTO_ALG_INTERNAL) | 520 | (alg->cra_flags & (CRYPTO_ALG_INTERNAL | |
522 | type |= CRYPTO_ALG_INTERNAL; | 521 | CRYPTO_ALG_OPTIONAL_KEY)); |
523 | inst->alg.halg.base.cra_flags = type; | ||
524 | 522 | ||
525 | inst->alg.halg.digestsize = halg->digestsize; | 523 | inst->alg.halg.digestsize = halg->digestsize; |
526 | inst->alg.halg.statesize = halg->statesize; | 524 | inst->alg.halg.statesize = halg->statesize; |
@@ -535,7 +533,8 @@ static int mcryptd_create_hash(struct crypto_template *tmpl, struct rtattr **tb, | |||
535 | inst->alg.finup = mcryptd_hash_finup_enqueue; | 533 | inst->alg.finup = mcryptd_hash_finup_enqueue; |
536 | inst->alg.export = mcryptd_hash_export; | 534 | inst->alg.export = mcryptd_hash_export; |
537 | inst->alg.import = mcryptd_hash_import; | 535 | inst->alg.import = mcryptd_hash_import; |
538 | inst->alg.setkey = mcryptd_hash_setkey; | 536 | if (crypto_hash_alg_has_setkey(halg)) |
537 | inst->alg.setkey = mcryptd_hash_setkey; | ||
539 | inst->alg.digest = mcryptd_hash_digest_enqueue; | 538 | inst->alg.digest = mcryptd_hash_digest_enqueue; |
540 | 539 | ||
541 | err = ahash_register_instance(tmpl, inst); | 540 | err = ahash_register_instance(tmpl, inst); |
diff --git a/crypto/poly1305_generic.c b/crypto/poly1305_generic.c index b1c2d57dc734..b7a3a0613a30 100644 --- a/crypto/poly1305_generic.c +++ b/crypto/poly1305_generic.c | |||
@@ -47,17 +47,6 @@ int crypto_poly1305_init(struct shash_desc *desc) | |||
47 | } | 47 | } |
48 | EXPORT_SYMBOL_GPL(crypto_poly1305_init); | 48 | EXPORT_SYMBOL_GPL(crypto_poly1305_init); |
49 | 49 | ||
50 | int crypto_poly1305_setkey(struct crypto_shash *tfm, | ||
51 | const u8 *key, unsigned int keylen) | ||
52 | { | ||
53 | /* Poly1305 requires a unique key for each tag, which implies that | ||
54 | * we can't set it on the tfm that gets accessed by multiple users | ||
55 | * simultaneously. Instead we expect the key as the first 32 bytes in | ||
56 | * the update() call. */ | ||
57 | return -ENOTSUPP; | ||
58 | } | ||
59 | EXPORT_SYMBOL_GPL(crypto_poly1305_setkey); | ||
60 | |||
61 | static void poly1305_setrkey(struct poly1305_desc_ctx *dctx, const u8 *key) | 50 | static void poly1305_setrkey(struct poly1305_desc_ctx *dctx, const u8 *key) |
62 | { | 51 | { |
63 | /* r &= 0xffffffc0ffffffc0ffffffc0fffffff */ | 52 | /* r &= 0xffffffc0ffffffc0ffffffc0fffffff */ |
@@ -76,6 +65,11 @@ static void poly1305_setskey(struct poly1305_desc_ctx *dctx, const u8 *key) | |||
76 | dctx->s[3] = get_unaligned_le32(key + 12); | 65 | dctx->s[3] = get_unaligned_le32(key + 12); |
77 | } | 66 | } |
78 | 67 | ||
68 | /* | ||
69 | * Poly1305 requires a unique key for each tag, which implies that we can't set | ||
70 | * it on the tfm that gets accessed by multiple users simultaneously. Instead we | ||
71 | * expect the key as the first 32 bytes in the update() call. | ||
72 | */ | ||
79 | unsigned int crypto_poly1305_setdesckey(struct poly1305_desc_ctx *dctx, | 73 | unsigned int crypto_poly1305_setdesckey(struct poly1305_desc_ctx *dctx, |
80 | const u8 *src, unsigned int srclen) | 74 | const u8 *src, unsigned int srclen) |
81 | { | 75 | { |
@@ -210,7 +204,6 @@ EXPORT_SYMBOL_GPL(crypto_poly1305_update); | |||
210 | int crypto_poly1305_final(struct shash_desc *desc, u8 *dst) | 204 | int crypto_poly1305_final(struct shash_desc *desc, u8 *dst) |
211 | { | 205 | { |
212 | struct poly1305_desc_ctx *dctx = shash_desc_ctx(desc); | 206 | struct poly1305_desc_ctx *dctx = shash_desc_ctx(desc); |
213 | __le32 *mac = (__le32 *)dst; | ||
214 | u32 h0, h1, h2, h3, h4; | 207 | u32 h0, h1, h2, h3, h4; |
215 | u32 g0, g1, g2, g3, g4; | 208 | u32 g0, g1, g2, g3, g4; |
216 | u32 mask; | 209 | u32 mask; |
@@ -267,10 +260,10 @@ int crypto_poly1305_final(struct shash_desc *desc, u8 *dst) | |||
267 | h3 = (h3 >> 18) | (h4 << 8); | 260 | h3 = (h3 >> 18) | (h4 << 8); |
268 | 261 | ||
269 | /* mac = (h + s) % (2^128) */ | 262 | /* mac = (h + s) % (2^128) */ |
270 | f = (f >> 32) + h0 + dctx->s[0]; mac[0] = cpu_to_le32(f); | 263 | f = (f >> 32) + h0 + dctx->s[0]; put_unaligned_le32(f, dst + 0); |
271 | f = (f >> 32) + h1 + dctx->s[1]; mac[1] = cpu_to_le32(f); | 264 | f = (f >> 32) + h1 + dctx->s[1]; put_unaligned_le32(f, dst + 4); |
272 | f = (f >> 32) + h2 + dctx->s[2]; mac[2] = cpu_to_le32(f); | 265 | f = (f >> 32) + h2 + dctx->s[2]; put_unaligned_le32(f, dst + 8); |
273 | f = (f >> 32) + h3 + dctx->s[3]; mac[3] = cpu_to_le32(f); | 266 | f = (f >> 32) + h3 + dctx->s[3]; put_unaligned_le32(f, dst + 12); |
274 | 267 | ||
275 | return 0; | 268 | return 0; |
276 | } | 269 | } |
@@ -281,14 +274,12 @@ static struct shash_alg poly1305_alg = { | |||
281 | .init = crypto_poly1305_init, | 274 | .init = crypto_poly1305_init, |
282 | .update = crypto_poly1305_update, | 275 | .update = crypto_poly1305_update, |
283 | .final = crypto_poly1305_final, | 276 | .final = crypto_poly1305_final, |
284 | .setkey = crypto_poly1305_setkey, | ||
285 | .descsize = sizeof(struct poly1305_desc_ctx), | 277 | .descsize = sizeof(struct poly1305_desc_ctx), |
286 | .base = { | 278 | .base = { |
287 | .cra_name = "poly1305", | 279 | .cra_name = "poly1305", |
288 | .cra_driver_name = "poly1305-generic", | 280 | .cra_driver_name = "poly1305-generic", |
289 | .cra_priority = 100, | 281 | .cra_priority = 100, |
290 | .cra_flags = CRYPTO_ALG_TYPE_SHASH, | 282 | .cra_flags = CRYPTO_ALG_TYPE_SHASH, |
291 | .cra_alignmask = sizeof(u32) - 1, | ||
292 | .cra_blocksize = POLY1305_BLOCK_SIZE, | 283 | .cra_blocksize = POLY1305_BLOCK_SIZE, |
293 | .cra_module = THIS_MODULE, | 284 | .cra_module = THIS_MODULE, |
294 | }, | 285 | }, |
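Two related things happen in poly1305: the stub ->setkey() goes away (the one-time key is still consumed from the first 32 bytes passed to update(), as the relocated comment explains), and cra_alignmask is dropped, which is why final() now writes the tag with put_unaligned_le32() instead of casting dst to __le32 *. A minimal sketch of that output pattern, with an illustrative function name:

#include <linux/types.h>
#include <asm/unaligned.h>

/* Sketch only: write a 128-bit MAC to a destination buffer that may no
 * longer be assumed 4-byte aligned once the alignmask is gone.
 */
static void example_write_tag(u8 *dst, const u32 h[4])
{
	int i;

	for (i = 0; i < 4; i++)
		put_unaligned_le32(h[i], dst + 4 * i);	/* safe for any alignment */
}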
diff --git a/crypto/proc.c b/crypto/proc.c index 2cc10c96d753..822fcef6d91c 100644 --- a/crypto/proc.c +++ b/crypto/proc.c | |||
@@ -46,7 +46,7 @@ static int c_show(struct seq_file *m, void *p) | |||
46 | seq_printf(m, "driver : %s\n", alg->cra_driver_name); | 46 | seq_printf(m, "driver : %s\n", alg->cra_driver_name); |
47 | seq_printf(m, "module : %s\n", module_name(alg->cra_module)); | 47 | seq_printf(m, "module : %s\n", module_name(alg->cra_module)); |
48 | seq_printf(m, "priority : %d\n", alg->cra_priority); | 48 | seq_printf(m, "priority : %d\n", alg->cra_priority); |
49 | seq_printf(m, "refcnt : %d\n", atomic_read(&alg->cra_refcnt)); | 49 | seq_printf(m, "refcnt : %u\n", refcount_read(&alg->cra_refcnt)); |
50 | seq_printf(m, "selftest : %s\n", | 50 | seq_printf(m, "selftest : %s\n", |
51 | (alg->cra_flags & CRYPTO_ALG_TESTED) ? | 51 | (alg->cra_flags & CRYPTO_ALG_TESTED) ? |
52 | "passed" : "unknown"); | 52 | "passed" : "unknown"); |
diff --git a/crypto/salsa20_generic.c b/crypto/salsa20_generic.c index d7da0eea5622..5074006a56c3 100644 --- a/crypto/salsa20_generic.c +++ b/crypto/salsa20_generic.c | |||
@@ -19,49 +19,19 @@ | |||
19 | * | 19 | * |
20 | */ | 20 | */ |
21 | 21 | ||
22 | #include <linux/init.h> | 22 | #include <asm/unaligned.h> |
23 | #include <crypto/internal/skcipher.h> | ||
24 | #include <crypto/salsa20.h> | ||
23 | #include <linux/module.h> | 25 | #include <linux/module.h> |
24 | #include <linux/errno.h> | ||
25 | #include <linux/crypto.h> | ||
26 | #include <linux/types.h> | ||
27 | #include <linux/bitops.h> | ||
28 | #include <crypto/algapi.h> | ||
29 | #include <asm/byteorder.h> | ||
30 | 26 | ||
31 | #define SALSA20_IV_SIZE 8U | 27 | static void salsa20_block(u32 *state, __le32 *stream) |
32 | #define SALSA20_MIN_KEY_SIZE 16U | ||
33 | #define SALSA20_MAX_KEY_SIZE 32U | ||
34 | |||
35 | /* | ||
36 | * Start of code taken from D. J. Bernstein's reference implementation. | ||
37 | * With some modifications and optimizations made to suit our needs. | ||
38 | */ | ||
39 | |||
40 | /* | ||
41 | salsa20-ref.c version 20051118 | ||
42 | D. J. Bernstein | ||
43 | Public domain. | ||
44 | */ | ||
45 | |||
46 | #define U32TO8_LITTLE(p, v) \ | ||
47 | { (p)[0] = (v >> 0) & 0xff; (p)[1] = (v >> 8) & 0xff; \ | ||
48 | (p)[2] = (v >> 16) & 0xff; (p)[3] = (v >> 24) & 0xff; } | ||
49 | #define U8TO32_LITTLE(p) \ | ||
50 | (((u32)((p)[0]) ) | ((u32)((p)[1]) << 8) | \ | ||
51 | ((u32)((p)[2]) << 16) | ((u32)((p)[3]) << 24) ) | ||
52 | |||
53 | struct salsa20_ctx | ||
54 | { | ||
55 | u32 input[16]; | ||
56 | }; | ||
57 | |||
58 | static void salsa20_wordtobyte(u8 output[64], const u32 input[16]) | ||
59 | { | 28 | { |
60 | u32 x[16]; | 29 | u32 x[16]; |
61 | int i; | 30 | int i; |
62 | 31 | ||
63 | memcpy(x, input, sizeof(x)); | 32 | memcpy(x, state, sizeof(x)); |
64 | for (i = 20; i > 0; i -= 2) { | 33 | |
34 | for (i = 0; i < 20; i += 2) { | ||
65 | x[ 4] ^= rol32((x[ 0] + x[12]), 7); | 35 | x[ 4] ^= rol32((x[ 0] + x[12]), 7); |
66 | x[ 8] ^= rol32((x[ 4] + x[ 0]), 9); | 36 | x[ 8] ^= rol32((x[ 4] + x[ 0]), 9); |
67 | x[12] ^= rol32((x[ 8] + x[ 4]), 13); | 37 | x[12] ^= rol32((x[ 8] + x[ 4]), 13); |
@@ -95,145 +65,137 @@ static void salsa20_wordtobyte(u8 output[64], const u32 input[16]) | |||
95 | x[14] ^= rol32((x[13] + x[12]), 13); | 65 | x[14] ^= rol32((x[13] + x[12]), 13); |
96 | x[15] ^= rol32((x[14] + x[13]), 18); | 66 | x[15] ^= rol32((x[14] + x[13]), 18); |
97 | } | 67 | } |
98 | for (i = 0; i < 16; ++i) | ||
99 | x[i] += input[i]; | ||
100 | for (i = 0; i < 16; ++i) | ||
101 | U32TO8_LITTLE(output + 4 * i,x[i]); | ||
102 | } | ||
103 | 68 | ||
104 | static const char sigma[16] = "expand 32-byte k"; | 69 | for (i = 0; i < 16; i++) |
105 | static const char tau[16] = "expand 16-byte k"; | 70 | stream[i] = cpu_to_le32(x[i] + state[i]); |
71 | |||
72 | if (++state[8] == 0) | ||
73 | state[9]++; | ||
74 | } | ||
106 | 75 | ||
107 | static void salsa20_keysetup(struct salsa20_ctx *ctx, const u8 *k, u32 kbytes) | 76 | static void salsa20_docrypt(u32 *state, u8 *dst, const u8 *src, |
77 | unsigned int bytes) | ||
108 | { | 78 | { |
109 | const char *constants; | 79 | __le32 stream[SALSA20_BLOCK_SIZE / sizeof(__le32)]; |
110 | 80 | ||
111 | ctx->input[1] = U8TO32_LITTLE(k + 0); | 81 | if (dst != src) |
112 | ctx->input[2] = U8TO32_LITTLE(k + 4); | 82 | memcpy(dst, src, bytes); |
113 | ctx->input[3] = U8TO32_LITTLE(k + 8); | 83 | |
114 | ctx->input[4] = U8TO32_LITTLE(k + 12); | 84 | while (bytes >= SALSA20_BLOCK_SIZE) { |
115 | if (kbytes == 32) { /* recommended */ | 85 | salsa20_block(state, stream); |
116 | k += 16; | 86 | crypto_xor(dst, (const u8 *)stream, SALSA20_BLOCK_SIZE); |
117 | constants = sigma; | 87 | bytes -= SALSA20_BLOCK_SIZE; |
118 | } else { /* kbytes == 16 */ | 88 | dst += SALSA20_BLOCK_SIZE; |
119 | constants = tau; | 89 | } |
90 | if (bytes) { | ||
91 | salsa20_block(state, stream); | ||
92 | crypto_xor(dst, (const u8 *)stream, bytes); | ||
120 | } | 93 | } |
121 | ctx->input[11] = U8TO32_LITTLE(k + 0); | ||
122 | ctx->input[12] = U8TO32_LITTLE(k + 4); | ||
123 | ctx->input[13] = U8TO32_LITTLE(k + 8); | ||
124 | ctx->input[14] = U8TO32_LITTLE(k + 12); | ||
125 | ctx->input[0] = U8TO32_LITTLE(constants + 0); | ||
126 | ctx->input[5] = U8TO32_LITTLE(constants + 4); | ||
127 | ctx->input[10] = U8TO32_LITTLE(constants + 8); | ||
128 | ctx->input[15] = U8TO32_LITTLE(constants + 12); | ||
129 | } | 94 | } |
130 | 95 | ||
131 | static void salsa20_ivsetup(struct salsa20_ctx *ctx, const u8 *iv) | 96 | void crypto_salsa20_init(u32 *state, const struct salsa20_ctx *ctx, |
97 | const u8 *iv) | ||
132 | { | 98 | { |
133 | ctx->input[6] = U8TO32_LITTLE(iv + 0); | 99 | memcpy(state, ctx->initial_state, sizeof(ctx->initial_state)); |
134 | ctx->input[7] = U8TO32_LITTLE(iv + 4); | 100 | state[6] = get_unaligned_le32(iv + 0); |
135 | ctx->input[8] = 0; | 101 | state[7] = get_unaligned_le32(iv + 4); |
136 | ctx->input[9] = 0; | ||
137 | } | 102 | } |
103 | EXPORT_SYMBOL_GPL(crypto_salsa20_init); | ||
138 | 104 | ||
139 | static void salsa20_encrypt_bytes(struct salsa20_ctx *ctx, u8 *dst, | 105 | int crypto_salsa20_setkey(struct crypto_skcipher *tfm, const u8 *key, |
140 | const u8 *src, unsigned int bytes) | 106 | unsigned int keysize) |
141 | { | 107 | { |
142 | u8 buf[64]; | 108 | static const char sigma[16] = "expand 32-byte k"; |
143 | 109 | static const char tau[16] = "expand 16-byte k"; | |
144 | if (dst != src) | 110 | struct salsa20_ctx *ctx = crypto_skcipher_ctx(tfm); |
145 | memcpy(dst, src, bytes); | 111 | const char *constants; |
146 | |||
147 | while (bytes) { | ||
148 | salsa20_wordtobyte(buf, ctx->input); | ||
149 | |||
150 | ctx->input[8]++; | ||
151 | if (!ctx->input[8]) | ||
152 | ctx->input[9]++; | ||
153 | 112 | ||
154 | if (bytes <= 64) { | 113 | if (keysize != SALSA20_MIN_KEY_SIZE && |
155 | crypto_xor(dst, buf, bytes); | 114 | keysize != SALSA20_MAX_KEY_SIZE) |
156 | return; | 115 | return -EINVAL; |
157 | } | ||
158 | 116 | ||
159 | crypto_xor(dst, buf, 64); | 117 | ctx->initial_state[1] = get_unaligned_le32(key + 0); |
160 | bytes -= 64; | 118 | ctx->initial_state[2] = get_unaligned_le32(key + 4); |
161 | dst += 64; | 119 | ctx->initial_state[3] = get_unaligned_le32(key + 8); |
120 | ctx->initial_state[4] = get_unaligned_le32(key + 12); | ||
121 | if (keysize == 32) { /* recommended */ | ||
122 | key += 16; | ||
123 | constants = sigma; | ||
124 | } else { /* keysize == 16 */ | ||
125 | constants = tau; | ||
162 | } | 126 | } |
163 | } | 127 | ctx->initial_state[11] = get_unaligned_le32(key + 0); |
164 | 128 | ctx->initial_state[12] = get_unaligned_le32(key + 4); | |
165 | /* | 129 | ctx->initial_state[13] = get_unaligned_le32(key + 8); |
166 | * End of code taken from D. J. Bernstein's reference implementation. | 130 | ctx->initial_state[14] = get_unaligned_le32(key + 12); |
167 | */ | 131 | ctx->initial_state[0] = get_unaligned_le32(constants + 0); |
132 | ctx->initial_state[5] = get_unaligned_le32(constants + 4); | ||
133 | ctx->initial_state[10] = get_unaligned_le32(constants + 8); | ||
134 | ctx->initial_state[15] = get_unaligned_le32(constants + 12); | ||
135 | |||
136 | /* space for the nonce; it will be overridden for each request */ | ||
137 | ctx->initial_state[6] = 0; | ||
138 | ctx->initial_state[7] = 0; | ||
139 | |||
140 | /* initial block number */ | ||
141 | ctx->initial_state[8] = 0; | ||
142 | ctx->initial_state[9] = 0; | ||
168 | 143 | ||
169 | static int setkey(struct crypto_tfm *tfm, const u8 *key, | ||
170 | unsigned int keysize) | ||
171 | { | ||
172 | struct salsa20_ctx *ctx = crypto_tfm_ctx(tfm); | ||
173 | salsa20_keysetup(ctx, key, keysize); | ||
174 | return 0; | 144 | return 0; |
175 | } | 145 | } |
146 | EXPORT_SYMBOL_GPL(crypto_salsa20_setkey); | ||
176 | 147 | ||
177 | static int encrypt(struct blkcipher_desc *desc, | 148 | static int salsa20_crypt(struct skcipher_request *req) |
178 | struct scatterlist *dst, struct scatterlist *src, | ||
179 | unsigned int nbytes) | ||
180 | { | 149 | { |
181 | struct blkcipher_walk walk; | 150 | struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); |
182 | struct crypto_blkcipher *tfm = desc->tfm; | 151 | const struct salsa20_ctx *ctx = crypto_skcipher_ctx(tfm); |
183 | struct salsa20_ctx *ctx = crypto_blkcipher_ctx(tfm); | 152 | struct skcipher_walk walk; |
153 | u32 state[16]; | ||
184 | int err; | 154 | int err; |
185 | 155 | ||
186 | blkcipher_walk_init(&walk, dst, src, nbytes); | 156 | err = skcipher_walk_virt(&walk, req, true); |
187 | err = blkcipher_walk_virt_block(desc, &walk, 64); | ||
188 | 157 | ||
189 | salsa20_ivsetup(ctx, walk.iv); | 158 | crypto_salsa20_init(state, ctx, walk.iv); |
190 | 159 | ||
191 | while (walk.nbytes >= 64) { | 160 | while (walk.nbytes > 0) { |
192 | salsa20_encrypt_bytes(ctx, walk.dst.virt.addr, | 161 | unsigned int nbytes = walk.nbytes; |
193 | walk.src.virt.addr, | ||
194 | walk.nbytes - (walk.nbytes % 64)); | ||
195 | err = blkcipher_walk_done(desc, &walk, walk.nbytes % 64); | ||
196 | } | ||
197 | 162 | ||
198 | if (walk.nbytes) { | 163 | if (nbytes < walk.total) |
199 | salsa20_encrypt_bytes(ctx, walk.dst.virt.addr, | 164 | nbytes = round_down(nbytes, walk.stride); |
200 | walk.src.virt.addr, walk.nbytes); | 165 | |
201 | err = blkcipher_walk_done(desc, &walk, 0); | 166 | salsa20_docrypt(state, walk.dst.virt.addr, walk.src.virt.addr, |
167 | nbytes); | ||
168 | err = skcipher_walk_done(&walk, walk.nbytes - nbytes); | ||
202 | } | 169 | } |
203 | 170 | ||
204 | return err; | 171 | return err; |
205 | } | 172 | } |
206 | 173 | ||
207 | static struct crypto_alg alg = { | 174 | static struct skcipher_alg alg = { |
208 | .cra_name = "salsa20", | 175 | .base.cra_name = "salsa20", |
209 | .cra_driver_name = "salsa20-generic", | 176 | .base.cra_driver_name = "salsa20-generic", |
210 | .cra_priority = 100, | 177 | .base.cra_priority = 100, |
211 | .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER, | 178 | .base.cra_blocksize = 1, |
212 | .cra_type = &crypto_blkcipher_type, | 179 | .base.cra_ctxsize = sizeof(struct salsa20_ctx), |
213 | .cra_blocksize = 1, | 180 | .base.cra_module = THIS_MODULE, |
214 | .cra_ctxsize = sizeof(struct salsa20_ctx), | 181 | |
215 | .cra_alignmask = 3, | 182 | .min_keysize = SALSA20_MIN_KEY_SIZE, |
216 | .cra_module = THIS_MODULE, | 183 | .max_keysize = SALSA20_MAX_KEY_SIZE, |
217 | .cra_u = { | 184 | .ivsize = SALSA20_IV_SIZE, |
218 | .blkcipher = { | 185 | .chunksize = SALSA20_BLOCK_SIZE, |
219 | .setkey = setkey, | 186 | .setkey = crypto_salsa20_setkey, |
220 | .encrypt = encrypt, | 187 | .encrypt = salsa20_crypt, |
221 | .decrypt = encrypt, | 188 | .decrypt = salsa20_crypt, |
222 | .min_keysize = SALSA20_MIN_KEY_SIZE, | ||
223 | .max_keysize = SALSA20_MAX_KEY_SIZE, | ||
224 | .ivsize = SALSA20_IV_SIZE, | ||
225 | } | ||
226 | } | ||
227 | }; | 189 | }; |
228 | 190 | ||
229 | static int __init salsa20_generic_mod_init(void) | 191 | static int __init salsa20_generic_mod_init(void) |
230 | { | 192 | { |
231 | return crypto_register_alg(&alg); | 193 | return crypto_register_skcipher(&alg); |
232 | } | 194 | } |
233 | 195 | ||
234 | static void __exit salsa20_generic_mod_fini(void) | 196 | static void __exit salsa20_generic_mod_fini(void) |
235 | { | 197 | { |
236 | crypto_unregister_alg(&alg); | 198 | crypto_unregister_skcipher(&alg); |
237 | } | 199 | } |
238 | 200 | ||
239 | module_init(salsa20_generic_mod_init); | 201 | module_init(salsa20_generic_mod_init); |
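The salsa20-generic rewrite also changes how callers drive the algorithm, since the legacy blkcipher interface is replaced by skcipher. A minimal usage sketch under the standard synchronous skcipher request flow; the function name and buffer handling are illustrative, not taken from this patch:

#include <crypto/skcipher.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

/* Sketch only: encrypt one contiguous buffer in place with "salsa20". */
static int example_salsa20_encrypt(const u8 *key, unsigned int keylen,
				   u8 iv[8], void *buf, unsigned int len)
{
	struct crypto_skcipher *tfm;
	struct skcipher_request *req;
	struct scatterlist sg;
	DECLARE_CRYPTO_WAIT(wait);
	int err;

	tfm = crypto_alloc_skcipher("salsa20", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_skcipher_setkey(tfm, key, keylen);	/* 16 or 32 bytes */
	if (err)
		goto out_free_tfm;

	req = skcipher_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		err = -ENOMEM;
		goto out_free_tfm;
	}

	sg_init_one(&sg, buf, len);
	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				      crypto_req_done, &wait);
	skcipher_request_set_crypt(req, &sg, &sg, len, iv);	/* 8-byte IV */

	err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

	skcipher_request_free(req);
out_free_tfm:
	crypto_free_skcipher(tfm);
	return err;
}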
diff --git a/crypto/seqiv.c b/crypto/seqiv.c index 570b7d1aa0ca..39dbf2f7e5f5 100644 --- a/crypto/seqiv.c +++ b/crypto/seqiv.c | |||
@@ -144,8 +144,6 @@ static int seqiv_aead_decrypt(struct aead_request *req) | |||
144 | static int seqiv_aead_create(struct crypto_template *tmpl, struct rtattr **tb) | 144 | static int seqiv_aead_create(struct crypto_template *tmpl, struct rtattr **tb) |
145 | { | 145 | { |
146 | struct aead_instance *inst; | 146 | struct aead_instance *inst; |
147 | struct crypto_aead_spawn *spawn; | ||
148 | struct aead_alg *alg; | ||
149 | int err; | 147 | int err; |
150 | 148 | ||
151 | inst = aead_geniv_alloc(tmpl, tb, 0, 0); | 149 | inst = aead_geniv_alloc(tmpl, tb, 0, 0); |
@@ -153,9 +151,6 @@ static int seqiv_aead_create(struct crypto_template *tmpl, struct rtattr **tb) | |||
153 | if (IS_ERR(inst)) | 151 | if (IS_ERR(inst)) |
154 | return PTR_ERR(inst); | 152 | return PTR_ERR(inst); |
155 | 153 | ||
156 | spawn = aead_instance_ctx(inst); | ||
157 | alg = crypto_spawn_aead_alg(spawn); | ||
158 | |||
159 | err = -EINVAL; | 154 | err = -EINVAL; |
160 | if (inst->alg.ivsize != sizeof(u64)) | 155 | if (inst->alg.ivsize != sizeof(u64)) |
161 | goto free_inst; | 156 | goto free_inst; |
diff --git a/crypto/sha3_generic.c b/crypto/sha3_generic.c index 7e8ed96236ce..a965b9d80559 100644 --- a/crypto/sha3_generic.c +++ b/crypto/sha3_generic.c | |||
@@ -5,6 +5,7 @@ | |||
5 | * http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.202.pdf | 5 | * http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.202.pdf |
6 | * | 6 | * |
7 | * SHA-3 code by Jeff Garzik <jeff@garzik.org> | 7 | * SHA-3 code by Jeff Garzik <jeff@garzik.org> |
8 | * Ard Biesheuvel <ard.biesheuvel@linaro.org> | ||
8 | * | 9 | * |
9 | * This program is free software; you can redistribute it and/or modify it | 10 | * This program is free software; you can redistribute it and/or modify it |
10 | * under the terms of the GNU General Public License as published by the Free | 11 | * under the terms of the GNU General Public License as published by the Free |
@@ -17,12 +18,10 @@ | |||
17 | #include <linux/module.h> | 18 | #include <linux/module.h> |
18 | #include <linux/types.h> | 19 | #include <linux/types.h> |
19 | #include <crypto/sha3.h> | 20 | #include <crypto/sha3.h> |
20 | #include <asm/byteorder.h> | 21 | #include <asm/unaligned.h> |
21 | 22 | ||
22 | #define KECCAK_ROUNDS 24 | 23 | #define KECCAK_ROUNDS 24 |
23 | 24 | ||
24 | #define ROTL64(x, y) (((x) << (y)) | ((x) >> (64 - (y)))) | ||
25 | |||
26 | static const u64 keccakf_rndc[24] = { | 25 | static const u64 keccakf_rndc[24] = { |
27 | 0x0000000000000001ULL, 0x0000000000008082ULL, 0x800000000000808aULL, | 26 | 0x0000000000000001ULL, 0x0000000000008082ULL, 0x800000000000808aULL, |
28 | 0x8000000080008000ULL, 0x000000000000808bULL, 0x0000000080000001ULL, | 27 | 0x8000000080008000ULL, 0x000000000000808bULL, 0x0000000080000001ULL, |
@@ -34,100 +33,133 @@ static const u64 keccakf_rndc[24] = { | |||
34 | 0x8000000000008080ULL, 0x0000000080000001ULL, 0x8000000080008008ULL | 33 | 0x8000000000008080ULL, 0x0000000080000001ULL, 0x8000000080008008ULL |
35 | }; | 34 | }; |
36 | 35 | ||
37 | static const int keccakf_rotc[24] = { | ||
38 | 1, 3, 6, 10, 15, 21, 28, 36, 45, 55, 2, 14, | ||
39 | 27, 41, 56, 8, 25, 43, 62, 18, 39, 61, 20, 44 | ||
40 | }; | ||
41 | |||
42 | static const int keccakf_piln[24] = { | ||
43 | 10, 7, 11, 17, 18, 3, 5, 16, 8, 21, 24, 4, | ||
44 | 15, 23, 19, 13, 12, 2, 20, 14, 22, 9, 6, 1 | ||
45 | }; | ||
46 | |||
47 | /* update the state with given number of rounds */ | 36 | /* update the state with given number of rounds */ |
48 | 37 | ||
49 | static void keccakf(u64 st[25]) | 38 | static void __attribute__((__optimize__("O3"))) keccakf(u64 st[25]) |
50 | { | 39 | { |
51 | int i, j, round; | 40 | u64 t[5], tt, bc[5]; |
52 | u64 t, bc[5]; | 41 | int round; |
53 | 42 | ||
54 | for (round = 0; round < KECCAK_ROUNDS; round++) { | 43 | for (round = 0; round < KECCAK_ROUNDS; round++) { |
55 | 44 | ||
56 | /* Theta */ | 45 | /* Theta */ |
57 | for (i = 0; i < 5; i++) | 46 | bc[0] = st[0] ^ st[5] ^ st[10] ^ st[15] ^ st[20]; |
58 | bc[i] = st[i] ^ st[i + 5] ^ st[i + 10] ^ st[i + 15] | 47 | bc[1] = st[1] ^ st[6] ^ st[11] ^ st[16] ^ st[21]; |
59 | ^ st[i + 20]; | 48 | bc[2] = st[2] ^ st[7] ^ st[12] ^ st[17] ^ st[22]; |
60 | 49 | bc[3] = st[3] ^ st[8] ^ st[13] ^ st[18] ^ st[23]; | |
61 | for (i = 0; i < 5; i++) { | 50 | bc[4] = st[4] ^ st[9] ^ st[14] ^ st[19] ^ st[24]; |
62 | t = bc[(i + 4) % 5] ^ ROTL64(bc[(i + 1) % 5], 1); | 51 | |
63 | for (j = 0; j < 25; j += 5) | 52 | t[0] = bc[4] ^ rol64(bc[1], 1); |
64 | st[j + i] ^= t; | 53 | t[1] = bc[0] ^ rol64(bc[2], 1); |
65 | } | 54 | t[2] = bc[1] ^ rol64(bc[3], 1); |
55 | t[3] = bc[2] ^ rol64(bc[4], 1); | ||
56 | t[4] = bc[3] ^ rol64(bc[0], 1); | ||
57 | |||
58 | st[0] ^= t[0]; | ||
66 | 59 | ||
67 | /* Rho Pi */ | 60 | /* Rho Pi */ |
68 | t = st[1]; | 61 | tt = st[1]; |
69 | for (i = 0; i < 24; i++) { | 62 | st[ 1] = rol64(st[ 6] ^ t[1], 44); |
70 | j = keccakf_piln[i]; | 63 | st[ 6] = rol64(st[ 9] ^ t[4], 20); |
71 | bc[0] = st[j]; | 64 | st[ 9] = rol64(st[22] ^ t[2], 61); |
72 | st[j] = ROTL64(t, keccakf_rotc[i]); | 65 | st[22] = rol64(st[14] ^ t[4], 39); |
73 | t = bc[0]; | 66 | st[14] = rol64(st[20] ^ t[0], 18); |
74 | } | 67 | st[20] = rol64(st[ 2] ^ t[2], 62); |
68 | st[ 2] = rol64(st[12] ^ t[2], 43); | ||
69 | st[12] = rol64(st[13] ^ t[3], 25); | ||
70 | st[13] = rol64(st[19] ^ t[4], 8); | ||
71 | st[19] = rol64(st[23] ^ t[3], 56); | ||
72 | st[23] = rol64(st[15] ^ t[0], 41); | ||
73 | st[15] = rol64(st[ 4] ^ t[4], 27); | ||
74 | st[ 4] = rol64(st[24] ^ t[4], 14); | ||
75 | st[24] = rol64(st[21] ^ t[1], 2); | ||
76 | st[21] = rol64(st[ 8] ^ t[3], 55); | ||
77 | st[ 8] = rol64(st[16] ^ t[1], 45); | ||
78 | st[16] = rol64(st[ 5] ^ t[0], 36); | ||
79 | st[ 5] = rol64(st[ 3] ^ t[3], 28); | ||
80 | st[ 3] = rol64(st[18] ^ t[3], 21); | ||
81 | st[18] = rol64(st[17] ^ t[2], 15); | ||
82 | st[17] = rol64(st[11] ^ t[1], 10); | ||
83 | st[11] = rol64(st[ 7] ^ t[2], 6); | ||
84 | st[ 7] = rol64(st[10] ^ t[0], 3); | ||
85 | st[10] = rol64( tt ^ t[1], 1); | ||
75 | 86 | ||
76 | /* Chi */ | 87 | /* Chi */ |
77 | for (j = 0; j < 25; j += 5) { | 88 | bc[ 0] = ~st[ 1] & st[ 2]; |
78 | for (i = 0; i < 5; i++) | 89 | bc[ 1] = ~st[ 2] & st[ 3]; |
79 | bc[i] = st[j + i]; | 90 | bc[ 2] = ~st[ 3] & st[ 4]; |
80 | for (i = 0; i < 5; i++) | 91 | bc[ 3] = ~st[ 4] & st[ 0]; |
81 | st[j + i] ^= (~bc[(i + 1) % 5]) & | 92 | bc[ 4] = ~st[ 0] & st[ 1]; |
82 | bc[(i + 2) % 5]; | 93 | st[ 0] ^= bc[ 0]; |
83 | } | 94 | st[ 1] ^= bc[ 1]; |
95 | st[ 2] ^= bc[ 2]; | ||
96 | st[ 3] ^= bc[ 3]; | ||
97 | st[ 4] ^= bc[ 4]; | ||
98 | |||
99 | bc[ 0] = ~st[ 6] & st[ 7]; | ||
100 | bc[ 1] = ~st[ 7] & st[ 8]; | ||
101 | bc[ 2] = ~st[ 8] & st[ 9]; | ||
102 | bc[ 3] = ~st[ 9] & st[ 5]; | ||
103 | bc[ 4] = ~st[ 5] & st[ 6]; | ||
104 | st[ 5] ^= bc[ 0]; | ||
105 | st[ 6] ^= bc[ 1]; | ||
106 | st[ 7] ^= bc[ 2]; | ||
107 | st[ 8] ^= bc[ 3]; | ||
108 | st[ 9] ^= bc[ 4]; | ||
109 | |||
110 | bc[ 0] = ~st[11] & st[12]; | ||
111 | bc[ 1] = ~st[12] & st[13]; | ||
112 | bc[ 2] = ~st[13] & st[14]; | ||
113 | bc[ 3] = ~st[14] & st[10]; | ||
114 | bc[ 4] = ~st[10] & st[11]; | ||
115 | st[10] ^= bc[ 0]; | ||
116 | st[11] ^= bc[ 1]; | ||
117 | st[12] ^= bc[ 2]; | ||
118 | st[13] ^= bc[ 3]; | ||
119 | st[14] ^= bc[ 4]; | ||
120 | |||
121 | bc[ 0] = ~st[16] & st[17]; | ||
122 | bc[ 1] = ~st[17] & st[18]; | ||
123 | bc[ 2] = ~st[18] & st[19]; | ||
124 | bc[ 3] = ~st[19] & st[15]; | ||
125 | bc[ 4] = ~st[15] & st[16]; | ||
126 | st[15] ^= bc[ 0]; | ||
127 | st[16] ^= bc[ 1]; | ||
128 | st[17] ^= bc[ 2]; | ||
129 | st[18] ^= bc[ 3]; | ||
130 | st[19] ^= bc[ 4]; | ||
131 | |||
132 | bc[ 0] = ~st[21] & st[22]; | ||
133 | bc[ 1] = ~st[22] & st[23]; | ||
134 | bc[ 2] = ~st[23] & st[24]; | ||
135 | bc[ 3] = ~st[24] & st[20]; | ||
136 | bc[ 4] = ~st[20] & st[21]; | ||
137 | st[20] ^= bc[ 0]; | ||
138 | st[21] ^= bc[ 1]; | ||
139 | st[22] ^= bc[ 2]; | ||
140 | st[23] ^= bc[ 3]; | ||
141 | st[24] ^= bc[ 4]; | ||
84 | 142 | ||
85 | /* Iota */ | 143 | /* Iota */ |
86 | st[0] ^= keccakf_rndc[round]; | 144 | st[0] ^= keccakf_rndc[round]; |
87 | } | 145 | } |
88 | } | 146 | } |
89 | 147 | ||
90 | static void sha3_init(struct sha3_state *sctx, unsigned int digest_sz) | 148 | int crypto_sha3_init(struct shash_desc *desc) |
91 | { | ||
92 | memset(sctx, 0, sizeof(*sctx)); | ||
93 | sctx->md_len = digest_sz; | ||
94 | sctx->rsiz = 200 - 2 * digest_sz; | ||
95 | sctx->rsizw = sctx->rsiz / 8; | ||
96 | } | ||
97 | |||
98 | static int sha3_224_init(struct shash_desc *desc) | ||
99 | { | ||
100 | struct sha3_state *sctx = shash_desc_ctx(desc); | ||
101 | |||
102 | sha3_init(sctx, SHA3_224_DIGEST_SIZE); | ||
103 | return 0; | ||
104 | } | ||
105 | |||
106 | static int sha3_256_init(struct shash_desc *desc) | ||
107 | { | 149 | { |
108 | struct sha3_state *sctx = shash_desc_ctx(desc); | 150 | struct sha3_state *sctx = shash_desc_ctx(desc); |
151 | unsigned int digest_size = crypto_shash_digestsize(desc->tfm); | ||
109 | 152 | ||
110 | sha3_init(sctx, SHA3_256_DIGEST_SIZE); | 153 | sctx->rsiz = 200 - 2 * digest_size; |
111 | return 0; | 154 | sctx->rsizw = sctx->rsiz / 8; |
112 | } | 155 | sctx->partial = 0; |
113 | |||
114 | static int sha3_384_init(struct shash_desc *desc) | ||
115 | { | ||
116 | struct sha3_state *sctx = shash_desc_ctx(desc); | ||
117 | |||
118 | sha3_init(sctx, SHA3_384_DIGEST_SIZE); | ||
119 | return 0; | ||
120 | } | ||
121 | |||
122 | static int sha3_512_init(struct shash_desc *desc) | ||
123 | { | ||
124 | struct sha3_state *sctx = shash_desc_ctx(desc); | ||
125 | 156 | ||
126 | sha3_init(sctx, SHA3_512_DIGEST_SIZE); | 157 | memset(sctx->st, 0, sizeof(sctx->st)); |
127 | return 0; | 158 | return 0; |
128 | } | 159 | } |
160 | EXPORT_SYMBOL(crypto_sha3_init); | ||
129 | 161 | ||
130 | static int sha3_update(struct shash_desc *desc, const u8 *data, | 162 | int crypto_sha3_update(struct shash_desc *desc, const u8 *data, |
131 | unsigned int len) | 163 | unsigned int len) |
132 | { | 164 | { |
133 | struct sha3_state *sctx = shash_desc_ctx(desc); | 165 | struct sha3_state *sctx = shash_desc_ctx(desc); |
@@ -149,7 +181,7 @@ static int sha3_update(struct shash_desc *desc, const u8 *data, | |||
149 | unsigned int i; | 181 | unsigned int i; |
150 | 182 | ||
151 | for (i = 0; i < sctx->rsizw; i++) | 183 | for (i = 0; i < sctx->rsizw; i++) |
152 | sctx->st[i] ^= ((u64 *) src)[i]; | 184 | sctx->st[i] ^= get_unaligned_le64(src + 8 * i); |
153 | keccakf(sctx->st); | 185 | keccakf(sctx->st); |
154 | 186 | ||
155 | done += sctx->rsiz; | 187 | done += sctx->rsiz; |
@@ -163,125 +195,89 @@ static int sha3_update(struct shash_desc *desc, const u8 *data, | |||
163 | 195 | ||
164 | return 0; | 196 | return 0; |
165 | } | 197 | } |
198 | EXPORT_SYMBOL(crypto_sha3_update); | ||
166 | 199 | ||
167 | static int sha3_final(struct shash_desc *desc, u8 *out) | 200 | int crypto_sha3_final(struct shash_desc *desc, u8 *out) |
168 | { | 201 | { |
169 | struct sha3_state *sctx = shash_desc_ctx(desc); | 202 | struct sha3_state *sctx = shash_desc_ctx(desc); |
170 | unsigned int i, inlen = sctx->partial; | 203 | unsigned int i, inlen = sctx->partial; |
204 | unsigned int digest_size = crypto_shash_digestsize(desc->tfm); | ||
205 | __le64 *digest = (__le64 *)out; | ||
171 | 206 | ||
172 | sctx->buf[inlen++] = 0x06; | 207 | sctx->buf[inlen++] = 0x06; |
173 | memset(sctx->buf + inlen, 0, sctx->rsiz - inlen); | 208 | memset(sctx->buf + inlen, 0, sctx->rsiz - inlen); |
174 | sctx->buf[sctx->rsiz - 1] |= 0x80; | 209 | sctx->buf[sctx->rsiz - 1] |= 0x80; |
175 | 210 | ||
176 | for (i = 0; i < sctx->rsizw; i++) | 211 | for (i = 0; i < sctx->rsizw; i++) |
177 | sctx->st[i] ^= ((u64 *) sctx->buf)[i]; | 212 | sctx->st[i] ^= get_unaligned_le64(sctx->buf + 8 * i); |
178 | 213 | ||
179 | keccakf(sctx->st); | 214 | keccakf(sctx->st); |
180 | 215 | ||
181 | for (i = 0; i < sctx->rsizw; i++) | 216 | for (i = 0; i < digest_size / 8; i++) |
182 | sctx->st[i] = cpu_to_le64(sctx->st[i]); | 217 | put_unaligned_le64(sctx->st[i], digest++); |
183 | 218 | ||
184 | memcpy(out, sctx->st, sctx->md_len); | 219 | if (digest_size & 4) |
220 | put_unaligned_le32(sctx->st[i], (__le32 *)digest); | ||
185 | 221 | ||
186 | memset(sctx, 0, sizeof(*sctx)); | 222 | memset(sctx, 0, sizeof(*sctx)); |
187 | return 0; | 223 | return 0; |
188 | } | 224 | } |
189 | 225 | EXPORT_SYMBOL(crypto_sha3_final); | |
190 | static struct shash_alg sha3_224 = { | 226 | |
191 | .digestsize = SHA3_224_DIGEST_SIZE, | 227 | static struct shash_alg algs[] = { { |
192 | .init = sha3_224_init, | 228 | .digestsize = SHA3_224_DIGEST_SIZE, |
193 | .update = sha3_update, | 229 | .init = crypto_sha3_init, |
194 | .final = sha3_final, | 230 | .update = crypto_sha3_update, |
195 | .descsize = sizeof(struct sha3_state), | 231 | .final = crypto_sha3_final, |
196 | .base = { | 232 | .descsize = sizeof(struct sha3_state), |
197 | .cra_name = "sha3-224", | 233 | .base.cra_name = "sha3-224", |
198 | .cra_driver_name = "sha3-224-generic", | 234 | .base.cra_driver_name = "sha3-224-generic", |
199 | .cra_flags = CRYPTO_ALG_TYPE_SHASH, | 235 | .base.cra_flags = CRYPTO_ALG_TYPE_SHASH, |
200 | .cra_blocksize = SHA3_224_BLOCK_SIZE, | 236 | .base.cra_blocksize = SHA3_224_BLOCK_SIZE, |
201 | .cra_module = THIS_MODULE, | 237 | .base.cra_module = THIS_MODULE, |
202 | } | 238 | }, { |
203 | }; | 239 | .digestsize = SHA3_256_DIGEST_SIZE, |
204 | 240 | .init = crypto_sha3_init, | |
205 | static struct shash_alg sha3_256 = { | 241 | .update = crypto_sha3_update, |
206 | .digestsize = SHA3_256_DIGEST_SIZE, | 242 | .final = crypto_sha3_final, |
207 | .init = sha3_256_init, | 243 | .descsize = sizeof(struct sha3_state), |
208 | .update = sha3_update, | 244 | .base.cra_name = "sha3-256", |
209 | .final = sha3_final, | 245 | .base.cra_driver_name = "sha3-256-generic", |
210 | .descsize = sizeof(struct sha3_state), | 246 | .base.cra_flags = CRYPTO_ALG_TYPE_SHASH, |
211 | .base = { | 247 | .base.cra_blocksize = SHA3_256_BLOCK_SIZE, |
212 | .cra_name = "sha3-256", | 248 | .base.cra_module = THIS_MODULE, |
213 | .cra_driver_name = "sha3-256-generic", | 249 | }, { |
214 | .cra_flags = CRYPTO_ALG_TYPE_SHASH, | 250 | .digestsize = SHA3_384_DIGEST_SIZE, |
215 | .cra_blocksize = SHA3_256_BLOCK_SIZE, | 251 | .init = crypto_sha3_init, |
216 | .cra_module = THIS_MODULE, | 252 | .update = crypto_sha3_update, |
217 | } | 253 | .final = crypto_sha3_final, |
218 | }; | 254 | .descsize = sizeof(struct sha3_state), |
219 | 255 | .base.cra_name = "sha3-384", | |
220 | static struct shash_alg sha3_384 = { | 256 | .base.cra_driver_name = "sha3-384-generic", |
221 | .digestsize = SHA3_384_DIGEST_SIZE, | 257 | .base.cra_flags = CRYPTO_ALG_TYPE_SHASH, |
222 | .init = sha3_384_init, | 258 | .base.cra_blocksize = SHA3_384_BLOCK_SIZE, |
223 | .update = sha3_update, | 259 | .base.cra_module = THIS_MODULE, |
224 | .final = sha3_final, | 260 | }, { |
225 | .descsize = sizeof(struct sha3_state), | 261 | .digestsize = SHA3_512_DIGEST_SIZE, |
226 | .base = { | 262 | .init = crypto_sha3_init, |
227 | .cra_name = "sha3-384", | 263 | .update = crypto_sha3_update, |
228 | .cra_driver_name = "sha3-384-generic", | 264 | .final = crypto_sha3_final, |
229 | .cra_flags = CRYPTO_ALG_TYPE_SHASH, | 265 | .descsize = sizeof(struct sha3_state), |
230 | .cra_blocksize = SHA3_384_BLOCK_SIZE, | 266 | .base.cra_name = "sha3-512", |
231 | .cra_module = THIS_MODULE, | 267 | .base.cra_driver_name = "sha3-512-generic", |
232 | } | 268 | .base.cra_flags = CRYPTO_ALG_TYPE_SHASH, |
233 | }; | 269 | .base.cra_blocksize = SHA3_512_BLOCK_SIZE, |
234 | 270 | .base.cra_module = THIS_MODULE, | |
235 | static struct shash_alg sha3_512 = { | 271 | } }; |
236 | .digestsize = SHA3_512_DIGEST_SIZE, | ||
237 | .init = sha3_512_init, | ||
238 | .update = sha3_update, | ||
239 | .final = sha3_final, | ||
240 | .descsize = sizeof(struct sha3_state), | ||
241 | .base = { | ||
242 | .cra_name = "sha3-512", | ||
243 | .cra_driver_name = "sha3-512-generic", | ||
244 | .cra_flags = CRYPTO_ALG_TYPE_SHASH, | ||
245 | .cra_blocksize = SHA3_512_BLOCK_SIZE, | ||
246 | .cra_module = THIS_MODULE, | ||
247 | } | ||
248 | }; | ||
249 | 272 | ||
250 | static int __init sha3_generic_mod_init(void) | 273 | static int __init sha3_generic_mod_init(void) |
251 | { | 274 | { |
252 | int ret; | 275 | return crypto_register_shashes(algs, ARRAY_SIZE(algs)); |
253 | |||
254 | ret = crypto_register_shash(&sha3_224); | ||
255 | if (ret < 0) | ||
256 | goto err_out; | ||
257 | ret = crypto_register_shash(&sha3_256); | ||
258 | if (ret < 0) | ||
259 | goto err_out_224; | ||
260 | ret = crypto_register_shash(&sha3_384); | ||
261 | if (ret < 0) | ||
262 | goto err_out_256; | ||
263 | ret = crypto_register_shash(&sha3_512); | ||
264 | if (ret < 0) | ||
265 | goto err_out_384; | ||
266 | |||
267 | return 0; | ||
268 | |||
269 | err_out_384: | ||
270 | crypto_unregister_shash(&sha3_384); | ||
271 | err_out_256: | ||
272 | crypto_unregister_shash(&sha3_256); | ||
273 | err_out_224: | ||
274 | crypto_unregister_shash(&sha3_224); | ||
275 | err_out: | ||
276 | return ret; | ||
277 | } | 276 | } |
278 | 277 | ||
279 | static void __exit sha3_generic_mod_fini(void) | 278 | static void __exit sha3_generic_mod_fini(void) |
280 | { | 279 | { |
281 | crypto_unregister_shash(&sha3_224); | 280 | crypto_unregister_shashes(algs, ARRAY_SIZE(algs)); |
282 | crypto_unregister_shash(&sha3_256); | ||
283 | crypto_unregister_shash(&sha3_384); | ||
284 | crypto_unregister_shash(&sha3_512); | ||
285 | } | 281 | } |
286 | 282 | ||
287 | module_init(sha3_generic_mod_init); | 283 | module_init(sha3_generic_mod_init); |
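The four per-variant init functions collapse into one crypto_sha3_init() that derives the sponge rate from the digest size at run time, and the state is now absorbed with unaligned little-endian loads so the code is correct on big-endian and alignment-sensitive machines. For reference, the rates this computes (a worked illustration of the formula in the hunk, not new code):

/* rsiz = 200 - 2 * digest_size (bytes), rsizw = rsiz / 8 (64-bit words):
 *   sha3-224: digest 28 -> rate 144 bytes (18 words)
 *   sha3-256: digest 32 -> rate 136 bytes (17 words)
 *   sha3-384: digest 48 -> rate 104 bytes (13 words)
 *   sha3-512: digest 64 -> rate  72 bytes ( 9 words)
 */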
diff --git a/crypto/shash.c b/crypto/shash.c index e849d3ee2e27..5d732c6bb4b2 100644 --- a/crypto/shash.c +++ b/crypto/shash.c | |||
@@ -58,11 +58,18 @@ int crypto_shash_setkey(struct crypto_shash *tfm, const u8 *key, | |||
58 | { | 58 | { |
59 | struct shash_alg *shash = crypto_shash_alg(tfm); | 59 | struct shash_alg *shash = crypto_shash_alg(tfm); |
60 | unsigned long alignmask = crypto_shash_alignmask(tfm); | 60 | unsigned long alignmask = crypto_shash_alignmask(tfm); |
61 | int err; | ||
61 | 62 | ||
62 | if ((unsigned long)key & alignmask) | 63 | if ((unsigned long)key & alignmask) |
63 | return shash_setkey_unaligned(tfm, key, keylen); | 64 | err = shash_setkey_unaligned(tfm, key, keylen); |
65 | else | ||
66 | err = shash->setkey(tfm, key, keylen); | ||
67 | |||
68 | if (err) | ||
69 | return err; | ||
64 | 70 | ||
65 | return shash->setkey(tfm, key, keylen); | 71 | crypto_shash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY); |
72 | return 0; | ||
66 | } | 73 | } |
67 | EXPORT_SYMBOL_GPL(crypto_shash_setkey); | 74 | EXPORT_SYMBOL_GPL(crypto_shash_setkey); |
68 | 75 | ||
@@ -181,6 +188,9 @@ int crypto_shash_digest(struct shash_desc *desc, const u8 *data, | |||
181 | struct shash_alg *shash = crypto_shash_alg(tfm); | 188 | struct shash_alg *shash = crypto_shash_alg(tfm); |
182 | unsigned long alignmask = crypto_shash_alignmask(tfm); | 189 | unsigned long alignmask = crypto_shash_alignmask(tfm); |
183 | 190 | ||
191 | if (crypto_shash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY) | ||
192 | return -ENOKEY; | ||
193 | |||
184 | if (((unsigned long)data | (unsigned long)out) & alignmask) | 194 | if (((unsigned long)data | (unsigned long)out) & alignmask) |
185 | return shash_digest_unaligned(desc, data, len, out); | 195 | return shash_digest_unaligned(desc, data, len, out); |
186 | 196 | ||
@@ -360,7 +370,8 @@ int crypto_init_shash_ops_async(struct crypto_tfm *tfm) | |||
360 | crt->digest = shash_async_digest; | 370 | crt->digest = shash_async_digest; |
361 | crt->setkey = shash_async_setkey; | 371 | crt->setkey = shash_async_setkey; |
362 | 372 | ||
363 | crt->has_setkey = alg->setkey != shash_no_setkey; | 373 | crypto_ahash_set_flags(crt, crypto_shash_get_flags(shash) & |
374 | CRYPTO_TFM_NEED_KEY); | ||
364 | 375 | ||
365 | if (alg->export) | 376 | if (alg->export) |
366 | crt->export = shash_async_export; | 377 | crt->export = shash_async_export; |
@@ -375,8 +386,14 @@ int crypto_init_shash_ops_async(struct crypto_tfm *tfm) | |||
375 | static int crypto_shash_init_tfm(struct crypto_tfm *tfm) | 386 | static int crypto_shash_init_tfm(struct crypto_tfm *tfm) |
376 | { | 387 | { |
377 | struct crypto_shash *hash = __crypto_shash_cast(tfm); | 388 | struct crypto_shash *hash = __crypto_shash_cast(tfm); |
389 | struct shash_alg *alg = crypto_shash_alg(hash); | ||
390 | |||
391 | hash->descsize = alg->descsize; | ||
392 | |||
393 | if (crypto_shash_alg_has_setkey(alg) && | ||
394 | !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY)) | ||
395 | crypto_shash_set_flags(hash, CRYPTO_TFM_NEED_KEY); | ||
378 | 396 | ||
379 | hash->descsize = crypto_shash_alg(hash)->descsize; | ||
380 | return 0; | 397 | return 0; |
381 | } | 398 | } |
382 | 399 | ||
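Taken together, the shash changes implement the keyed-hash enforcement mentioned in the pull request: an algorithm with a setkey() that is not marked CRYPTO_ALG_OPTIONAL_KEY starts life with CRYPTO_TFM_NEED_KEY set, a successful setkey() clears it, and digest() refuses to run while it is still set (which is also why the open-coded -ENOKEY checks could be dropped from ghash-generic earlier in this diff). A minimal caller-side sketch of the new behaviour, assuming "hmac(sha256)" as the keyed hash; the function name is illustrative:

#include <crypto/hash.h>
#include <linux/slab.h>

/* Sketch only: with this series, using a keyed hash before setkey()
 * fails with -ENOKEY instead of quietly hashing with an unset key.
 */
static int example_unkeyed_digest(const u8 *data, unsigned int len, u8 *out)
{
	struct crypto_shash *tfm;
	struct shash_desc *desc;
	int err;

	tfm = crypto_alloc_shash("hmac(sha256)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(tfm), GFP_KERNEL);
	if (!desc) {
		crypto_free_shash(tfm);
		return -ENOMEM;
	}
	desc->tfm = tfm;
	desc->flags = 0;

	/* No crypto_shash_setkey(): CRYPTO_TFM_NEED_KEY is still set. */
	err = crypto_shash_digest(desc, data, len, out);	/* -ENOKEY */

	kfree(desc);
	crypto_free_shash(tfm);
	return err;
}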
diff --git a/crypto/simd.c b/crypto/simd.c index 88203370a62f..208226d7f908 100644 --- a/crypto/simd.c +++ b/crypto/simd.c | |||
@@ -19,9 +19,7 @@ | |||
19 | * GNU General Public License for more details. | 19 | * GNU General Public License for more details. |
20 | * | 20 | * |
21 | * You should have received a copy of the GNU General Public License | 21 | * You should have received a copy of the GNU General Public License |
22 | * along with this program; if not, write to the Free Software | 22 | * along with this program. If not, see <http://www.gnu.org/licenses/>. |
23 | * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 | ||
24 | * USA | ||
25 | * | 23 | * |
26 | */ | 24 | */ |
27 | 25 | ||
diff --git a/crypto/skcipher.c b/crypto/skcipher.c index 11af5fd6a443..0fe2a2923ad0 100644 --- a/crypto/skcipher.c +++ b/crypto/skcipher.c | |||
@@ -598,8 +598,11 @@ static int skcipher_setkey_blkcipher(struct crypto_skcipher *tfm, | |||
598 | err = crypto_blkcipher_setkey(blkcipher, key, keylen); | 598 | err = crypto_blkcipher_setkey(blkcipher, key, keylen); |
599 | crypto_skcipher_set_flags(tfm, crypto_blkcipher_get_flags(blkcipher) & | 599 | crypto_skcipher_set_flags(tfm, crypto_blkcipher_get_flags(blkcipher) & |
600 | CRYPTO_TFM_RES_MASK); | 600 | CRYPTO_TFM_RES_MASK); |
601 | if (err) | ||
602 | return err; | ||
601 | 603 | ||
602 | return err; | 604 | crypto_skcipher_clear_flags(tfm, CRYPTO_TFM_NEED_KEY); |
605 | return 0; | ||
603 | } | 606 | } |
604 | 607 | ||
605 | static int skcipher_crypt_blkcipher(struct skcipher_request *req, | 608 | static int skcipher_crypt_blkcipher(struct skcipher_request *req, |
@@ -674,6 +677,9 @@ static int crypto_init_skcipher_ops_blkcipher(struct crypto_tfm *tfm) | |||
674 | skcipher->ivsize = crypto_blkcipher_ivsize(blkcipher); | 677 | skcipher->ivsize = crypto_blkcipher_ivsize(blkcipher); |
675 | skcipher->keysize = calg->cra_blkcipher.max_keysize; | 678 | skcipher->keysize = calg->cra_blkcipher.max_keysize; |
676 | 679 | ||
680 | if (skcipher->keysize) | ||
681 | crypto_skcipher_set_flags(skcipher, CRYPTO_TFM_NEED_KEY); | ||
682 | |||
677 | return 0; | 683 | return 0; |
678 | } | 684 | } |
679 | 685 | ||
@@ -692,8 +698,11 @@ static int skcipher_setkey_ablkcipher(struct crypto_skcipher *tfm, | |||
692 | crypto_skcipher_set_flags(tfm, | 698 | crypto_skcipher_set_flags(tfm, |
693 | crypto_ablkcipher_get_flags(ablkcipher) & | 699 | crypto_ablkcipher_get_flags(ablkcipher) & |
694 | CRYPTO_TFM_RES_MASK); | 700 | CRYPTO_TFM_RES_MASK); |
701 | if (err) | ||
702 | return err; | ||
695 | 703 | ||
696 | return err; | 704 | crypto_skcipher_clear_flags(tfm, CRYPTO_TFM_NEED_KEY); |
705 | return 0; | ||
697 | } | 706 | } |
698 | 707 | ||
699 | static int skcipher_crypt_ablkcipher(struct skcipher_request *req, | 708 | static int skcipher_crypt_ablkcipher(struct skcipher_request *req, |
@@ -767,6 +776,9 @@ static int crypto_init_skcipher_ops_ablkcipher(struct crypto_tfm *tfm) | |||
767 | sizeof(struct ablkcipher_request); | 776 | sizeof(struct ablkcipher_request); |
768 | skcipher->keysize = calg->cra_ablkcipher.max_keysize; | 777 | skcipher->keysize = calg->cra_ablkcipher.max_keysize; |
769 | 778 | ||
779 | if (skcipher->keysize) | ||
780 | crypto_skcipher_set_flags(skcipher, CRYPTO_TFM_NEED_KEY); | ||
781 | |||
770 | return 0; | 782 | return 0; |
771 | } | 783 | } |
772 | 784 | ||
@@ -796,6 +808,7 @@ static int skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key, | |||
796 | { | 808 | { |
797 | struct skcipher_alg *cipher = crypto_skcipher_alg(tfm); | 809 | struct skcipher_alg *cipher = crypto_skcipher_alg(tfm); |
798 | unsigned long alignmask = crypto_skcipher_alignmask(tfm); | 810 | unsigned long alignmask = crypto_skcipher_alignmask(tfm); |
811 | int err; | ||
799 | 812 | ||
800 | if (keylen < cipher->min_keysize || keylen > cipher->max_keysize) { | 813 | if (keylen < cipher->min_keysize || keylen > cipher->max_keysize) { |
801 | crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN); | 814 | crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN); |
@@ -803,9 +816,15 @@ static int skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key, | |||
803 | } | 816 | } |
804 | 817 | ||
805 | if ((unsigned long)key & alignmask) | 818 | if ((unsigned long)key & alignmask) |
806 | return skcipher_setkey_unaligned(tfm, key, keylen); | 819 | err = skcipher_setkey_unaligned(tfm, key, keylen); |
820 | else | ||
821 | err = cipher->setkey(tfm, key, keylen); | ||
822 | |||
823 | if (err) | ||
824 | return err; | ||
807 | 825 | ||
808 | return cipher->setkey(tfm, key, keylen); | 826 | crypto_skcipher_clear_flags(tfm, CRYPTO_TFM_NEED_KEY); |
827 | return 0; | ||
809 | } | 828 | } |
810 | 829 | ||
811 | static void crypto_skcipher_exit_tfm(struct crypto_tfm *tfm) | 830 | static void crypto_skcipher_exit_tfm(struct crypto_tfm *tfm) |
@@ -834,6 +853,9 @@ static int crypto_skcipher_init_tfm(struct crypto_tfm *tfm) | |||
834 | skcipher->ivsize = alg->ivsize; | 853 | skcipher->ivsize = alg->ivsize; |
835 | skcipher->keysize = alg->max_keysize; | 854 | skcipher->keysize = alg->max_keysize; |
836 | 855 | ||
856 | if (skcipher->keysize) | ||
857 | crypto_skcipher_set_flags(skcipher, CRYPTO_TFM_NEED_KEY); | ||
858 | |||
837 | if (alg->exit) | 859 | if (alg->exit) |
838 | skcipher->base.exit = crypto_skcipher_exit_tfm; | 860 | skcipher->base.exit = crypto_skcipher_exit_tfm; |
839 | 861 | ||
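The skcipher core gets the same treatment: any transform with a nonzero key size starts out with CRYPTO_TFM_NEED_KEY, whichever backend (blkcipher, ablkcipher, or native skcipher) is in use, and each setkey wrapper now clears the flag only when the underlying setkey() actually succeeded. A condensed sketch of that pattern, ignoring the alignmask handling shown above; the wrapper name is illustrative:

#include <crypto/internal/skcipher.h>

/* Sketch only: leave CRYPTO_TFM_NEED_KEY set unless setkey() succeeded. */
static int example_setkey(struct crypto_skcipher *tfm,
			  const u8 *key, unsigned int keylen)
{
	int err = crypto_skcipher_alg(tfm)->setkey(tfm, key, keylen);

	if (err)
		return err;		/* key is still considered unset */

	crypto_skcipher_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
	return 0;
}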
diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c index 9267cbdb14d2..14213a096fd2 100644 --- a/crypto/tcrypt.c +++ b/crypto/tcrypt.c | |||
@@ -67,6 +67,7 @@ static char *alg = NULL; | |||
67 | static u32 type; | 67 | static u32 type; |
68 | static u32 mask; | 68 | static u32 mask; |
69 | static int mode; | 69 | static int mode; |
70 | static u32 num_mb = 8; | ||
70 | static char *tvmem[TVMEMSIZE]; | 71 | static char *tvmem[TVMEMSIZE]; |
71 | 72 | ||
72 | static char *check[] = { | 73 | static char *check[] = { |
@@ -79,6 +80,66 @@ static char *check[] = { | |||
79 | NULL | 80 | NULL |
80 | }; | 81 | }; |
81 | 82 | ||
83 | static u32 block_sizes[] = { 16, 64, 256, 1024, 8192, 0 }; | ||
84 | static u32 aead_sizes[] = { 16, 64, 256, 512, 1024, 2048, 4096, 8192, 0 }; | ||
85 | |||
86 | #define XBUFSIZE 8 | ||
87 | #define MAX_IVLEN 32 | ||
88 | |||
89 | static int testmgr_alloc_buf(char *buf[XBUFSIZE]) | ||
90 | { | ||
91 | int i; | ||
92 | |||
93 | for (i = 0; i < XBUFSIZE; i++) { | ||
94 | buf[i] = (void *)__get_free_page(GFP_KERNEL); | ||
95 | if (!buf[i]) | ||
96 | goto err_free_buf; | ||
97 | } | ||
98 | |||
99 | return 0; | ||
100 | |||
101 | err_free_buf: | ||
102 | while (i-- > 0) | ||
103 | free_page((unsigned long)buf[i]); | ||
104 | |||
105 | return -ENOMEM; | ||
106 | } | ||
107 | |||
108 | static void testmgr_free_buf(char *buf[XBUFSIZE]) | ||
109 | { | ||
110 | int i; | ||
111 | |||
112 | for (i = 0; i < XBUFSIZE; i++) | ||
113 | free_page((unsigned long)buf[i]); | ||
114 | } | ||
115 | |||
116 | static void sg_init_aead(struct scatterlist *sg, char *xbuf[XBUFSIZE], | ||
117 | unsigned int buflen, const void *assoc, | ||
118 | unsigned int aad_size) | ||
119 | { | ||
120 | int np = (buflen + PAGE_SIZE - 1)/PAGE_SIZE; | ||
121 | int k, rem; | ||
122 | |||
123 | if (np > XBUFSIZE) { | ||
124 | rem = PAGE_SIZE; | ||
125 | np = XBUFSIZE; | ||
126 | } else { | ||
127 | rem = buflen % PAGE_SIZE; | ||
128 | } | ||
129 | |||
130 | sg_init_table(sg, np + 1); | ||
131 | |||
132 | sg_set_buf(&sg[0], assoc, aad_size); | ||
133 | |||
134 | if (rem) | ||
135 | np--; | ||
136 | for (k = 0; k < np; k++) | ||
137 | sg_set_buf(&sg[k + 1], xbuf[k], PAGE_SIZE); | ||
138 | |||
139 | if (rem) | ||
140 | sg_set_buf(&sg[k + 1], xbuf[k], rem); | ||
141 | } | ||
142 | |||
82 | static inline int do_one_aead_op(struct aead_request *req, int ret) | 143 | static inline int do_one_aead_op(struct aead_request *req, int ret) |
83 | { | 144 | { |
84 | struct crypto_wait *wait = req->base.data; | 145 | struct crypto_wait *wait = req->base.data; |
@@ -86,6 +147,298 @@ static inline int do_one_aead_op(struct aead_request *req, int ret) | |||
86 | return crypto_wait_req(ret, wait); | 147 | return crypto_wait_req(ret, wait); |
87 | } | 148 | } |
88 | 149 | ||
150 | struct test_mb_aead_data { | ||
151 | struct scatterlist sg[XBUFSIZE]; | ||
152 | struct scatterlist sgout[XBUFSIZE]; | ||
153 | struct aead_request *req; | ||
154 | struct crypto_wait wait; | ||
155 | char *xbuf[XBUFSIZE]; | ||
156 | char *xoutbuf[XBUFSIZE]; | ||
157 | char *axbuf[XBUFSIZE]; | ||
158 | }; | ||
159 | |||
160 | static int do_mult_aead_op(struct test_mb_aead_data *data, int enc, | ||
161 | u32 num_mb) | ||
162 | { | ||
163 | int i, rc[num_mb], err = 0; | ||
164 | |||
165 | /* Fire up a bunch of concurrent requests */ | ||
166 | for (i = 0; i < num_mb; i++) { | ||
167 | if (enc == ENCRYPT) | ||
168 | rc[i] = crypto_aead_encrypt(data[i].req); | ||
169 | else | ||
170 | rc[i] = crypto_aead_decrypt(data[i].req); | ||
171 | } | ||
172 | |||
173 | /* Wait for all requests to finish */ | ||
174 | for (i = 0; i < num_mb; i++) { | ||
175 | rc[i] = crypto_wait_req(rc[i], &data[i].wait); | ||
176 | |||
177 | if (rc[i]) { | ||
178 | pr_info("concurrent request %d error %d\n", i, rc[i]); | ||
179 | err = rc[i]; | ||
180 | } | ||
181 | } | ||
182 | |||
183 | return err; | ||
184 | } | ||
185 | |||
186 | static int test_mb_aead_jiffies(struct test_mb_aead_data *data, int enc, | ||
187 | int blen, int secs, u32 num_mb) | ||
188 | { | ||
189 | unsigned long start, end; | ||
190 | int bcount; | ||
191 | int ret; | ||
192 | |||
193 | for (start = jiffies, end = start + secs * HZ, bcount = 0; | ||
194 | time_before(jiffies, end); bcount++) { | ||
195 | ret = do_mult_aead_op(data, enc, num_mb); | ||
196 | if (ret) | ||
197 | return ret; | ||
198 | } | ||
199 | |||
200 | pr_cont("%d operations in %d seconds (%ld bytes)\n", | ||
201 | bcount * num_mb, secs, (long)bcount * blen * num_mb); | ||
202 | return 0; | ||
203 | } | ||
204 | |||
205 | static int test_mb_aead_cycles(struct test_mb_aead_data *data, int enc, | ||
206 | int blen, u32 num_mb) | ||
207 | { | ||
208 | unsigned long cycles = 0; | ||
209 | int ret = 0; | ||
210 | int i; | ||
211 | |||
212 | /* Warm-up run. */ | ||
213 | for (i = 0; i < 4; i++) { | ||
214 | ret = do_mult_aead_op(data, enc, num_mb); | ||
215 | if (ret) | ||
216 | goto out; | ||
217 | } | ||
218 | |||
219 | /* The real thing. */ | ||
220 | for (i = 0; i < 8; i++) { | ||
221 | cycles_t start, end; | ||
222 | |||
223 | start = get_cycles(); | ||
224 | ret = do_mult_aead_op(data, enc, num_mb); | ||
225 | end = get_cycles(); | ||
226 | |||
227 | if (ret) | ||
228 | goto out; | ||
229 | |||
230 | cycles += end - start; | ||
231 | } | ||
232 | |||
233 | out: | ||
234 | if (ret == 0) | ||
235 | pr_cont("1 operation in %lu cycles (%d bytes)\n", | ||
236 | (cycles + 4) / (8 * num_mb), blen); | ||
237 | |||
238 | return ret; | ||
239 | } | ||
240 | |||
241 | static void test_mb_aead_speed(const char *algo, int enc, int secs, | ||
242 | struct aead_speed_template *template, | ||
243 | unsigned int tcount, u8 authsize, | ||
244 | unsigned int aad_size, u8 *keysize, u32 num_mb) | ||
245 | { | ||
246 | struct test_mb_aead_data *data; | ||
247 | struct crypto_aead *tfm; | ||
248 | unsigned int i, j, iv_len; | ||
249 | const char *key; | ||
250 | const char *e; | ||
251 | void *assoc; | ||
252 | u32 *b_size; | ||
253 | char *iv; | ||
254 | int ret; | ||
255 | |||
256 | |||
257 | if (aad_size >= PAGE_SIZE) { | ||
258 | pr_err("associate data length (%u) too big\n", aad_size); | ||
259 | return; | ||
260 | } | ||
261 | |||
262 | iv = kzalloc(MAX_IVLEN, GFP_KERNEL); | ||
263 | if (!iv) | ||
264 | return; | ||
265 | |||
266 | if (enc == ENCRYPT) | ||
267 | e = "encryption"; | ||
268 | else | ||
269 | e = "decryption"; | ||
270 | |||
271 | data = kcalloc(num_mb, sizeof(*data), GFP_KERNEL); | ||
272 | if (!data) | ||
273 | goto out_free_iv; | ||
274 | |||
275 | tfm = crypto_alloc_aead(algo, 0, 0); | ||
276 | if (IS_ERR(tfm)) { | ||
277 | pr_err("failed to load transform for %s: %ld\n", | ||
278 | algo, PTR_ERR(tfm)); | ||
279 | goto out_free_data; | ||
280 | } | ||
281 | |||
282 | ret = crypto_aead_setauthsize(tfm, authsize); | ||
283 | |||
284 | for (i = 0; i < num_mb; ++i) | ||
285 | if (testmgr_alloc_buf(data[i].xbuf)) { | ||
286 | while (i--) | ||
287 | testmgr_free_buf(data[i].xbuf); | ||
288 | goto out_free_tfm; | ||
289 | } | ||
290 | |||
291 | for (i = 0; i < num_mb; ++i) | ||
292 | if (testmgr_alloc_buf(data[i].axbuf)) { | ||
293 | while (i--) | ||
294 | testmgr_free_buf(data[i].axbuf); | ||
295 | goto out_free_xbuf; | ||
296 | } | ||
297 | |||
298 | for (i = 0; i < num_mb; ++i) | ||
299 | if (testmgr_alloc_buf(data[i].xoutbuf)) { | ||
300 | while (i--) | ||
301 | testmgr_free_buf(data[i].xoutbuf); | ||
302 | goto out_free_axbuf; | ||
303 | } | ||
304 | |||
305 | for (i = 0; i < num_mb; ++i) { | ||
306 | data[i].req = aead_request_alloc(tfm, GFP_KERNEL); | ||
307 | if (!data[i].req) { | ||
308 | pr_err("alg: skcipher: Failed to allocate request for %s\n", | ||
309 | algo); | ||
310 | while (i--) | ||
311 | aead_request_free(data[i].req); | ||
312 | goto out_free_xoutbuf; | ||
313 | } | ||
314 | } | ||
315 | |||
316 | for (i = 0; i < num_mb; ++i) { | ||
317 | crypto_init_wait(&data[i].wait); | ||
318 | aead_request_set_callback(data[i].req, | ||
319 | CRYPTO_TFM_REQ_MAY_BACKLOG, | ||
320 | crypto_req_done, &data[i].wait); | ||
321 | } | ||
322 | |||
323 | pr_info("\ntesting speed of multibuffer %s (%s) %s\n", algo, | ||
324 | get_driver_name(crypto_aead, tfm), e); | ||
325 | |||
326 | i = 0; | ||
327 | do { | ||
328 | b_size = aead_sizes; | ||
329 | do { | ||
330 | if (*b_size + authsize > XBUFSIZE * PAGE_SIZE) { | ||
331 | pr_err("template (%u) too big for buffer (%lu)\n", | ||
332 | authsize + *b_size, | ||
333 | XBUFSIZE * PAGE_SIZE); | ||
334 | goto out; | ||
335 | } | ||
336 | |||
337 | pr_info("test %u (%d bit key, %d byte blocks): ", i, | ||
338 | *keysize * 8, *b_size); | ||
339 | |||
340 | /* Set up tfm global state, i.e. the key */ | ||
341 | |||
342 | memset(tvmem[0], 0xff, PAGE_SIZE); | ||
343 | key = tvmem[0]; | ||
344 | for (j = 0; j < tcount; j++) { | ||
345 | if (template[j].klen == *keysize) { | ||
346 | key = template[j].key; | ||
347 | break; | ||
348 | } | ||
349 | } | ||
350 | |||
351 | crypto_aead_clear_flags(tfm, ~0); | ||
352 | |||
353 | ret = crypto_aead_setkey(tfm, key, *keysize); | ||
354 | if (ret) { | ||
355 | pr_err("setkey() failed flags=%x\n", | ||
356 | crypto_aead_get_flags(tfm)); | ||
357 | goto out; | ||
358 | } | ||
359 | |||
360 | iv_len = crypto_aead_ivsize(tfm); | ||
361 | if (iv_len) | ||
362 | memset(iv, 0xff, iv_len); | ||
363 | |||
364 | /* Now set up the per-request state, i.e. the buffers */ | ||
365 | |||
366 | for (j = 0; j < num_mb; ++j) { | ||
367 | struct test_mb_aead_data *cur = &data[j]; | ||
368 | |||
369 | assoc = cur->axbuf[0]; | ||
370 | memset(assoc, 0xff, aad_size); | ||
371 | |||
372 | sg_init_aead(cur->sg, cur->xbuf, | ||
373 | *b_size + (enc ? 0 : authsize), | ||
374 | assoc, aad_size); | ||
375 | |||
376 | sg_init_aead(cur->sgout, cur->xoutbuf, | ||
377 | *b_size + (enc ? authsize : 0), | ||
378 | assoc, aad_size); | ||
379 | |||
380 | aead_request_set_ad(cur->req, aad_size); | ||
381 | |||
382 | if (!enc) { | ||
383 | |||
384 | aead_request_set_crypt(cur->req, | ||
385 | cur->sgout, | ||
386 | cur->sg, | ||
387 | *b_size, iv); | ||
388 | ret = crypto_aead_encrypt(cur->req); | ||
389 | ret = do_one_aead_op(cur->req, ret); | ||
390 | |||
391 | if (ret) { | ||
392 | pr_err("calculating auth failed failed (%d)\n", | ||
393 | ret); | ||
394 | break; | ||
395 | } | ||
396 | } | ||
397 | |||
398 | aead_request_set_crypt(cur->req, cur->sg, | ||
399 | cur->sgout, *b_size + | ||
400 | (enc ? 0 : authsize), | ||
401 | iv); | ||
402 | |||
403 | } | ||
404 | |||
405 | if (secs) | ||
406 | ret = test_mb_aead_jiffies(data, enc, *b_size, | ||
407 | secs, num_mb); | ||
408 | else | ||
409 | ret = test_mb_aead_cycles(data, enc, *b_size, | ||
410 | num_mb); | ||
411 | |||
412 | if (ret) { | ||
413 | pr_err("%s() failed return code=%d\n", e, ret); | ||
414 | break; | ||
415 | } | ||
416 | b_size++; | ||
417 | i++; | ||
418 | } while (*b_size); | ||
419 | keysize++; | ||
420 | } while (*keysize); | ||
421 | |||
422 | out: | ||
423 | for (i = 0; i < num_mb; ++i) | ||
424 | aead_request_free(data[i].req); | ||
425 | out_free_xoutbuf: | ||
426 | for (i = 0; i < num_mb; ++i) | ||
427 | testmgr_free_buf(data[i].xoutbuf); | ||
428 | out_free_axbuf: | ||
429 | for (i = 0; i < num_mb; ++i) | ||
430 | testmgr_free_buf(data[i].axbuf); | ||
431 | out_free_xbuf: | ||
432 | for (i = 0; i < num_mb; ++i) | ||
433 | testmgr_free_buf(data[i].xbuf); | ||
434 | out_free_tfm: | ||
435 | crypto_free_aead(tfm); | ||
436 | out_free_data: | ||
437 | kfree(data); | ||
438 | out_free_iv: | ||
439 | kfree(iv); | ||
440 | } | ||
441 | |||
89 | static int test_aead_jiffies(struct aead_request *req, int enc, | 442 | static int test_aead_jiffies(struct aead_request *req, int enc, |
90 | int blen, int secs) | 443 | int blen, int secs) |
91 | { | 444 | { |
@@ -151,60 +504,6 @@ out: | |||
151 | return ret; | 504 | return ret; |
152 | } | 505 | } |
153 | 506 | ||
154 | static u32 block_sizes[] = { 16, 64, 256, 1024, 8192, 0 }; | ||
155 | static u32 aead_sizes[] = { 16, 64, 256, 512, 1024, 2048, 4096, 8192, 0 }; | ||
156 | |||
157 | #define XBUFSIZE 8 | ||
158 | #define MAX_IVLEN 32 | ||
159 | |||
160 | static int testmgr_alloc_buf(char *buf[XBUFSIZE]) | ||
161 | { | ||
162 | int i; | ||
163 | |||
164 | for (i = 0; i < XBUFSIZE; i++) { | ||
165 | buf[i] = (void *)__get_free_page(GFP_KERNEL); | ||
166 | if (!buf[i]) | ||
167 | goto err_free_buf; | ||
168 | } | ||
169 | |||
170 | return 0; | ||
171 | |||
172 | err_free_buf: | ||
173 | while (i-- > 0) | ||
174 | free_page((unsigned long)buf[i]); | ||
175 | |||
176 | return -ENOMEM; | ||
177 | } | ||
178 | |||
179 | static void testmgr_free_buf(char *buf[XBUFSIZE]) | ||
180 | { | ||
181 | int i; | ||
182 | |||
183 | for (i = 0; i < XBUFSIZE; i++) | ||
184 | free_page((unsigned long)buf[i]); | ||
185 | } | ||
186 | |||
187 | static void sg_init_aead(struct scatterlist *sg, char *xbuf[XBUFSIZE], | ||
188 | unsigned int buflen) | ||
189 | { | ||
190 | int np = (buflen + PAGE_SIZE - 1)/PAGE_SIZE; | ||
191 | int k, rem; | ||
192 | |||
193 | if (np > XBUFSIZE) { | ||
194 | rem = PAGE_SIZE; | ||
195 | np = XBUFSIZE; | ||
196 | } else { | ||
197 | rem = buflen % PAGE_SIZE; | ||
198 | } | ||
199 | |||
200 | sg_init_table(sg, np + 1); | ||
201 | np--; | ||
202 | for (k = 0; k < np; k++) | ||
203 | sg_set_buf(&sg[k + 1], xbuf[k], PAGE_SIZE); | ||
204 | |||
205 | sg_set_buf(&sg[k + 1], xbuf[k], rem); | ||
206 | } | ||
207 | |||
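The removed helper above packed only the payload pages and left the associated data for each caller to prepend; the new call sites in this hunk instead pass assoc and aad_size straight into sg_init_aead(). The reworked definition lives earlier in the patch and is not part of this excerpt, so the following is only a sketch reconstructed from the old body and the new callers, assuming the AAD now occupies the first scatterlist entry:

static void sg_init_aead(struct scatterlist *sg, char *xbuf[XBUFSIZE],
			 unsigned int buflen, const void *assoc,
			 unsigned int aad_size)
{
	int np = (buflen + PAGE_SIZE - 1) / PAGE_SIZE;
	int k, rem;

	if (np > XBUFSIZE) {
		rem = PAGE_SIZE;
		np = XBUFSIZE;
	} else {
		rem = buflen % PAGE_SIZE;
		if (!rem)
			rem = PAGE_SIZE;
	}

	/* Entry 0 now carries the associated data; payload pages follow. */
	sg_init_table(sg, np + 1);
	sg_set_buf(&sg[0], assoc, aad_size);

	for (k = 0; k < np - 1; k++)
		sg_set_buf(&sg[k + 1], xbuf[k], PAGE_SIZE);

	sg_set_buf(&sg[k + 1], xbuf[k], rem);
}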
208 | static void test_aead_speed(const char *algo, int enc, unsigned int secs, | 507 | static void test_aead_speed(const char *algo, int enc, unsigned int secs, |
209 | struct aead_speed_template *template, | 508 | struct aead_speed_template *template, |
210 | unsigned int tcount, u8 authsize, | 509 | unsigned int tcount, u8 authsize, |
@@ -316,19 +615,37 @@ static void test_aead_speed(const char *algo, int enc, unsigned int secs, | |||
316 | goto out; | 615 | goto out; |
317 | } | 616 | } |
318 | 617 | ||
319 | sg_init_aead(sg, xbuf, | 618 | sg_init_aead(sg, xbuf, *b_size + (enc ? 0 : authsize), |
320 | *b_size + (enc ? 0 : authsize)); | 619 | assoc, aad_size); |
321 | 620 | ||
322 | sg_init_aead(sgout, xoutbuf, | 621 | sg_init_aead(sgout, xoutbuf, |
323 | *b_size + (enc ? authsize : 0)); | 622 | *b_size + (enc ? authsize : 0), assoc, |
623 | aad_size); | ||
324 | 624 | ||
325 | sg_set_buf(&sg[0], assoc, aad_size); | 625 | aead_request_set_ad(req, aad_size); |
326 | sg_set_buf(&sgout[0], assoc, aad_size); | 626 | |
627 | if (!enc) { | ||
628 | |||
629 | /* | ||
630 | * For decryption we need a valid auth tag, so | ||
631 | * we run the encryption path once with the buffers | ||
632 | * reversed (input <-> output) to compute it. | ||
633 | */ | ||
634 | aead_request_set_crypt(req, sgout, sg, | ||
635 | *b_size, iv); | ||
636 | ret = do_one_aead_op(req, | ||
637 | crypto_aead_encrypt(req)); | ||
638 | |||
639 | if (ret) { | ||
640 | pr_err("calculating auth failed failed (%d)\n", | ||
641 | ret); | ||
642 | break; | ||
643 | } | ||
644 | } | ||
327 | 645 | ||
328 | aead_request_set_crypt(req, sg, sgout, | 646 | aead_request_set_crypt(req, sg, sgout, |
329 | *b_size + (enc ? 0 : authsize), | 647 | *b_size + (enc ? 0 : authsize), |
330 | iv); | 648 | iv); |
331 | aead_request_set_ad(req, aad_size); | ||
332 | 649 | ||
333 | if (secs) | 650 | if (secs) |
334 | ret = test_aead_jiffies(req, enc, *b_size, | 651 | ret = test_aead_jiffies(req, enc, *b_size, |
@@ -381,24 +698,98 @@ static inline int do_one_ahash_op(struct ahash_request *req, int ret) | |||
381 | } | 698 | } |
382 | 699 | ||
383 | struct test_mb_ahash_data { | 700 | struct test_mb_ahash_data { |
384 | struct scatterlist sg[TVMEMSIZE]; | 701 | struct scatterlist sg[XBUFSIZE]; |
385 | char result[64]; | 702 | char result[64]; |
386 | struct ahash_request *req; | 703 | struct ahash_request *req; |
387 | struct crypto_wait wait; | 704 | struct crypto_wait wait; |
388 | char *xbuf[XBUFSIZE]; | 705 | char *xbuf[XBUFSIZE]; |
389 | }; | 706 | }; |
390 | 707 | ||
391 | static void test_mb_ahash_speed(const char *algo, unsigned int sec, | 708 | static inline int do_mult_ahash_op(struct test_mb_ahash_data *data, u32 num_mb) |
392 | struct hash_speed *speed) | 709 | { |
710 | int i, rc[num_mb], err = 0; | ||
711 | |||
712 | /* Fire up a bunch of concurrent requests */ | ||
713 | for (i = 0; i < num_mb; i++) | ||
714 | rc[i] = crypto_ahash_digest(data[i].req); | ||
715 | |||
716 | /* Wait for all requests to finish */ | ||
717 | for (i = 0; i < num_mb; i++) { | ||
718 | rc[i] = crypto_wait_req(rc[i], &data[i].wait); | ||
719 | |||
720 | if (rc[i]) { | ||
721 | pr_info("concurrent request %d error %d\n", i, rc[i]); | ||
722 | err = rc[i]; | ||
723 | } | ||
724 | } | ||
725 | |||
726 | return err; | ||
727 | } | ||
728 | |||
729 | static int test_mb_ahash_jiffies(struct test_mb_ahash_data *data, int blen, | ||
730 | int secs, u32 num_mb) | ||
731 | { | ||
732 | unsigned long start, end; | ||
733 | int bcount; | ||
734 | int ret; | ||
735 | |||
736 | for (start = jiffies, end = start + secs * HZ, bcount = 0; | ||
737 | time_before(jiffies, end); bcount++) { | ||
738 | ret = do_mult_ahash_op(data, num_mb); | ||
739 | if (ret) | ||
740 | return ret; | ||
741 | } | ||
742 | |||
743 | pr_cont("%d operations in %d seconds (%ld bytes)\n", | ||
744 | bcount * num_mb, secs, (long)bcount * blen * num_mb); | ||
745 | return 0; | ||
746 | } | ||
747 | |||
748 | static int test_mb_ahash_cycles(struct test_mb_ahash_data *data, int blen, | ||
749 | u32 num_mb) | ||
750 | { | ||
751 | unsigned long cycles = 0; | ||
752 | int ret = 0; | ||
753 | int i; | ||
754 | |||
755 | /* Warm-up run. */ | ||
756 | for (i = 0; i < 4; i++) { | ||
757 | ret = do_mult_ahash_op(data, num_mb); | ||
758 | if (ret) | ||
759 | goto out; | ||
760 | } | ||
761 | |||
762 | /* The real thing. */ | ||
763 | for (i = 0; i < 8; i++) { | ||
764 | cycles_t start, end; | ||
765 | |||
766 | start = get_cycles(); | ||
767 | ret = do_mult_ahash_op(data, num_mb); | ||
768 | end = get_cycles(); | ||
769 | |||
770 | if (ret) | ||
771 | goto out; | ||
772 | |||
773 | cycles += end - start; | ||
774 | } | ||
775 | |||
776 | out: | ||
777 | if (ret == 0) | ||
778 | pr_cont("1 operation in %lu cycles (%d bytes)\n", | ||
779 | (cycles + 4) / (8 * num_mb), blen); | ||
780 | |||
781 | return ret; | ||
782 | } | ||
783 | |||
784 | static void test_mb_ahash_speed(const char *algo, unsigned int secs, | ||
785 | struct hash_speed *speed, u32 num_mb) | ||
393 | { | 786 | { |
394 | struct test_mb_ahash_data *data; | 787 | struct test_mb_ahash_data *data; |
395 | struct crypto_ahash *tfm; | 788 | struct crypto_ahash *tfm; |
396 | unsigned long start, end; | ||
397 | unsigned long cycles; | ||
398 | unsigned int i, j, k; | 789 | unsigned int i, j, k; |
399 | int ret; | 790 | int ret; |
400 | 791 | ||
401 | data = kzalloc(sizeof(*data) * 8, GFP_KERNEL); | 792 | data = kcalloc(num_mb, sizeof(*data), GFP_KERNEL); |
402 | if (!data) | 793 | if (!data) |
403 | return; | 794 | return; |
404 | 795 | ||
@@ -409,7 +800,7 @@ static void test_mb_ahash_speed(const char *algo, unsigned int sec, | |||
409 | goto free_data; | 800 | goto free_data; |
410 | } | 801 | } |
411 | 802 | ||
412 | for (i = 0; i < 8; ++i) { | 803 | for (i = 0; i < num_mb; ++i) { |
413 | if (testmgr_alloc_buf(data[i].xbuf)) | 804 | if (testmgr_alloc_buf(data[i].xbuf)) |
414 | goto out; | 805 | goto out; |
415 | 806 | ||
@@ -424,7 +815,12 @@ static void test_mb_ahash_speed(const char *algo, unsigned int sec, | |||
424 | 815 | ||
425 | ahash_request_set_callback(data[i].req, 0, crypto_req_done, | 816 | ahash_request_set_callback(data[i].req, 0, crypto_req_done, |
426 | &data[i].wait); | 817 | &data[i].wait); |
427 | test_hash_sg_init(data[i].sg); | 818 | |
819 | sg_init_table(data[i].sg, XBUFSIZE); | ||
820 | for (j = 0; j < XBUFSIZE; j++) { | ||
821 | sg_set_buf(data[i].sg + j, data[i].xbuf[j], PAGE_SIZE); | ||
822 | memset(data[i].xbuf[j], 0xff, PAGE_SIZE); | ||
823 | } | ||
428 | } | 824 | } |
429 | 825 | ||
430 | pr_info("\ntesting speed of multibuffer %s (%s)\n", algo, | 826 | pr_info("\ntesting speed of multibuffer %s (%s)\n", algo, |
@@ -435,16 +831,16 @@ static void test_mb_ahash_speed(const char *algo, unsigned int sec, | |||
435 | if (speed[i].blen != speed[i].plen) | 831 | if (speed[i].blen != speed[i].plen) |
436 | continue; | 832 | continue; |
437 | 833 | ||
438 | if (speed[i].blen > TVMEMSIZE * PAGE_SIZE) { | 834 | if (speed[i].blen > XBUFSIZE * PAGE_SIZE) { |
439 | pr_err("template (%u) too big for tvmem (%lu)\n", | 835 | pr_err("template (%u) too big for tvmem (%lu)\n", |
440 | speed[i].blen, TVMEMSIZE * PAGE_SIZE); | 836 | speed[i].blen, XBUFSIZE * PAGE_SIZE); |
441 | goto out; | 837 | goto out; |
442 | } | 838 | } |
443 | 839 | ||
444 | if (speed[i].klen) | 840 | if (speed[i].klen) |
445 | crypto_ahash_setkey(tfm, tvmem[0], speed[i].klen); | 841 | crypto_ahash_setkey(tfm, tvmem[0], speed[i].klen); |
446 | 842 | ||
447 | for (k = 0; k < 8; k++) | 843 | for (k = 0; k < num_mb; k++) |
448 | ahash_request_set_crypt(data[k].req, data[k].sg, | 844 | ahash_request_set_crypt(data[k].req, data[k].sg, |
449 | data[k].result, speed[i].blen); | 845 | data[k].result, speed[i].blen); |
450 | 846 | ||
@@ -453,34 +849,12 @@ static void test_mb_ahash_speed(const char *algo, unsigned int sec, | |||
453 | i, speed[i].blen, speed[i].plen, | 849 | i, speed[i].blen, speed[i].plen, |
454 | speed[i].blen / speed[i].plen); | 850 | speed[i].blen / speed[i].plen); |
455 | 851 | ||
456 | start = get_cycles(); | 852 | if (secs) |
457 | 853 | ret = test_mb_ahash_jiffies(data, speed[i].blen, secs, | |
458 | for (k = 0; k < 8; k++) { | 854 | num_mb); |
459 | ret = crypto_ahash_digest(data[k].req); | 855 | else |
460 | if (ret == -EINPROGRESS) { | 856 | ret = test_mb_ahash_cycles(data, speed[i].blen, num_mb); |
461 | ret = 0; | ||
462 | continue; | ||
463 | } | ||
464 | |||
465 | if (ret) | ||
466 | break; | ||
467 | |||
468 | crypto_req_done(&data[k].req->base, 0); | ||
469 | } | ||
470 | |||
471 | for (j = 0; j < k; j++) { | ||
472 | struct crypto_wait *wait = &data[j].wait; | ||
473 | int wait_ret; | ||
474 | |||
475 | wait_ret = crypto_wait_req(-EINPROGRESS, wait); | ||
476 | if (wait_ret) | ||
477 | ret = wait_ret; | ||
478 | } | ||
479 | 857 | ||
480 | end = get_cycles(); | ||
481 | cycles = end - start; | ||
482 | pr_cont("%6lu cycles/operation, %4lu cycles/byte\n", | ||
483 | cycles, cycles / (8 * speed[i].blen)); | ||
484 | 858 | ||
485 | if (ret) { | 859 | if (ret) { |
486 | pr_err("At least one hashing failed ret=%d\n", ret); | 860 | pr_err("At least one hashing failed ret=%d\n", ret); |
@@ -489,10 +863,10 @@ static void test_mb_ahash_speed(const char *algo, unsigned int sec, | |||
489 | } | 863 | } |
490 | 864 | ||
491 | out: | 865 | out: |
492 | for (k = 0; k < 8; ++k) | 866 | for (k = 0; k < num_mb; ++k) |
493 | ahash_request_free(data[k].req); | 867 | ahash_request_free(data[k].req); |
494 | 868 | ||
495 | for (k = 0; k < 8; ++k) | 869 | for (k = 0; k < num_mb; ++k) |
496 | testmgr_free_buf(data[k].xbuf); | 870 | testmgr_free_buf(data[k].xbuf); |
497 | 871 | ||
498 | crypto_free_ahash(tfm); | 872 | crypto_free_ahash(tfm); |
@@ -736,6 +1110,254 @@ static void test_hash_speed(const char *algo, unsigned int secs, | |||
736 | return test_ahash_speed_common(algo, secs, speed, CRYPTO_ALG_ASYNC); | 1110 | return test_ahash_speed_common(algo, secs, speed, CRYPTO_ALG_ASYNC); |
737 | } | 1111 | } |
738 | 1112 | ||
1113 | struct test_mb_skcipher_data { | ||
1114 | struct scatterlist sg[XBUFSIZE]; | ||
1115 | struct skcipher_request *req; | ||
1116 | struct crypto_wait wait; | ||
1117 | char *xbuf[XBUFSIZE]; | ||
1118 | }; | ||
1119 | |||
1120 | static int do_mult_acipher_op(struct test_mb_skcipher_data *data, int enc, | ||
1121 | u32 num_mb) | ||
1122 | { | ||
1123 | int i, rc[num_mb], err = 0; | ||
1124 | |||
1125 | /* Fire up a bunch of concurrent requests */ | ||
1126 | for (i = 0; i < num_mb; i++) { | ||
1127 | if (enc == ENCRYPT) | ||
1128 | rc[i] = crypto_skcipher_encrypt(data[i].req); | ||
1129 | else | ||
1130 | rc[i] = crypto_skcipher_decrypt(data[i].req); | ||
1131 | } | ||
1132 | |||
1133 | /* Wait for all requests to finish */ | ||
1134 | for (i = 0; i < num_mb; i++) { | ||
1135 | rc[i] = crypto_wait_req(rc[i], &data[i].wait); | ||
1136 | |||
1137 | if (rc[i]) { | ||
1138 | pr_info("concurrent request %d error %d\n", i, rc[i]); | ||
1139 | err = rc[i]; | ||
1140 | } | ||
1141 | } | ||
1142 | |||
1143 | return err; | ||
1144 | } | ||
1145 | |||
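All three multibuffer helpers follow the same fire-then-reap pattern built on the crypto wait API: every request is submitted first, and crypto_wait_req() later turns the -EINPROGRESS/-EBUSY "request queued" return codes into each request's final status, delivered through the crypto_req_done() callback. A standalone sketch of that lifecycle for a single request, shown here only for orientation and not part of the patch:

static int run_one_skcipher(struct skcipher_request *req,
			    struct crypto_wait *wait, bool enc)
{
	/* The completion callback signals the wait when the request finishes. */
	crypto_init_wait(wait);
	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				      crypto_req_done, wait);

	/* Sleeps only if the driver queued or backlogged the request. */
	return crypto_wait_req(enc ? crypto_skcipher_encrypt(req) :
				     crypto_skcipher_decrypt(req), wait);
}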
1146 | static int test_mb_acipher_jiffies(struct test_mb_skcipher_data *data, int enc, | ||
1147 | int blen, int secs, u32 num_mb) | ||
1148 | { | ||
1149 | unsigned long start, end; | ||
1150 | int bcount; | ||
1151 | int ret; | ||
1152 | |||
1153 | for (start = jiffies, end = start + secs * HZ, bcount = 0; | ||
1154 | time_before(jiffies, end); bcount++) { | ||
1155 | ret = do_mult_acipher_op(data, enc, num_mb); | ||
1156 | if (ret) | ||
1157 | return ret; | ||
1158 | } | ||
1159 | |||
1160 | pr_cont("%d operations in %d seconds (%ld bytes)\n", | ||
1161 | bcount * num_mb, secs, (long)bcount * blen * num_mb); | ||
1162 | return 0; | ||
1163 | } | ||
1164 | |||
1165 | static int test_mb_acipher_cycles(struct test_mb_skcipher_data *data, int enc, | ||
1166 | int blen, u32 num_mb) | ||
1167 | { | ||
1168 | unsigned long cycles = 0; | ||
1169 | int ret = 0; | ||
1170 | int i; | ||
1171 | |||
1172 | /* Warm-up run. */ | ||
1173 | for (i = 0; i < 4; i++) { | ||
1174 | ret = do_mult_acipher_op(data, enc, num_mb); | ||
1175 | if (ret) | ||
1176 | goto out; | ||
1177 | } | ||
1178 | |||
1179 | /* The real thing. */ | ||
1180 | for (i = 0; i < 8; i++) { | ||
1181 | cycles_t start, end; | ||
1182 | |||
1183 | start = get_cycles(); | ||
1184 | ret = do_mult_acipher_op(data, enc, num_mb); | ||
1185 | end = get_cycles(); | ||
1186 | |||
1187 | if (ret) | ||
1188 | goto out; | ||
1189 | |||
1190 | cycles += end - start; | ||
1191 | } | ||
1192 | |||
1193 | out: | ||
1194 | if (ret == 0) | ||
1195 | pr_cont("1 operation in %lu cycles (%d bytes)\n", | ||
1196 | (cycles + 4) / (8 * num_mb), blen); | ||
1197 | |||
1198 | return ret; | ||
1199 | } | ||
1200 | |||
1201 | static void test_mb_skcipher_speed(const char *algo, int enc, int secs, | ||
1202 | struct cipher_speed_template *template, | ||
1203 | unsigned int tcount, u8 *keysize, u32 num_mb) | ||
1204 | { | ||
1205 | struct test_mb_skcipher_data *data; | ||
1206 | struct crypto_skcipher *tfm; | ||
1207 | unsigned int i, j, iv_len; | ||
1208 | const char *key; | ||
1209 | const char *e; | ||
1210 | u32 *b_size; | ||
1211 | char iv[128]; | ||
1212 | int ret; | ||
1213 | |||
1214 | if (enc == ENCRYPT) | ||
1215 | e = "encryption"; | ||
1216 | else | ||
1217 | e = "decryption"; | ||
1218 | |||
1219 | data = kcalloc(num_mb, sizeof(*data), GFP_KERNEL); | ||
1220 | if (!data) | ||
1221 | return; | ||
1222 | |||
1223 | tfm = crypto_alloc_skcipher(algo, 0, 0); | ||
1224 | if (IS_ERR(tfm)) { | ||
1225 | pr_err("failed to load transform for %s: %ld\n", | ||
1226 | algo, PTR_ERR(tfm)); | ||
1227 | goto out_free_data; | ||
1228 | } | ||
1229 | |||
1230 | for (i = 0; i < num_mb; ++i) | ||
1231 | if (testmgr_alloc_buf(data[i].xbuf)) { | ||
1232 | while (i--) | ||
1233 | testmgr_free_buf(data[i].xbuf); | ||
1234 | goto out_free_tfm; | ||
1235 | } | ||
1236 | |||
1246 | for (i = 0; i < num_mb; ++i) { | ||
1247 | data[i].req = skcipher_request_alloc(tfm, GFP_KERNEL); | ||
1248 | if (!data[i].req) { | ||
1249 | pr_err("alg: skcipher: Failed to allocate request for %s\n", | ||
1250 | algo); | ||
1251 | while (i--) | ||
1252 | skcipher_request_free(data[i].req); | ||
1253 | goto out_free_xbuf; | ||
1254 | } | ||
1255 | } | ||
1256 | |||
1257 | for (i = 0; i < num_mb; ++i) { | ||
1258 | skcipher_request_set_callback(data[i].req, | ||
1259 | CRYPTO_TFM_REQ_MAY_BACKLOG, | ||
1260 | crypto_req_done, &data[i].wait); | ||
1261 | crypto_init_wait(&data[i].wait); | ||
1262 | } | ||
1263 | |||
1264 | pr_info("\ntesting speed of multibuffer %s (%s) %s\n", algo, | ||
1265 | get_driver_name(crypto_skcipher, tfm), e); | ||
1266 | |||
1267 | i = 0; | ||
1268 | do { | ||
1269 | b_size = block_sizes; | ||
1270 | do { | ||
1271 | if (*b_size > XBUFSIZE * PAGE_SIZE) { | ||
1272 | pr_err("template (%u) too big for buffer (%lu)\n", | ||
1273 | *b_size, XBUFSIZE * PAGE_SIZE); | ||
1274 | goto out; | ||
1275 | } | ||
1276 | |||
1277 | pr_info("test %u (%d bit key, %d byte blocks): ", i, | ||
1278 | *keysize * 8, *b_size); | ||
1279 | |||
1280 | /* Set up tfm global state, i.e. the key */ | ||
1281 | |||
1282 | memset(tvmem[0], 0xff, PAGE_SIZE); | ||
1283 | key = tvmem[0]; | ||
1284 | for (j = 0; j < tcount; j++) { | ||
1285 | if (template[j].klen == *keysize) { | ||
1286 | key = template[j].key; | ||
1287 | break; | ||
1288 | } | ||
1289 | } | ||
1290 | |||
1291 | crypto_skcipher_clear_flags(tfm, ~0); | ||
1292 | |||
1293 | ret = crypto_skcipher_setkey(tfm, key, *keysize); | ||
1294 | if (ret) { | ||
1295 | pr_err("setkey() failed flags=%x\n", | ||
1296 | crypto_skcipher_get_flags(tfm)); | ||
1297 | goto out; | ||
1298 | } | ||
1299 | |||
1300 | iv_len = crypto_skcipher_ivsize(tfm); | ||
1301 | if (iv_len) | ||
1302 | memset(&iv, 0xff, iv_len); | ||
1303 | |||
1304 | /* Now set up the per-request state, i.e. the buffers */ | ||
1305 | |||
1306 | for (j = 0; j < num_mb; ++j) { | ||
1307 | struct test_mb_skcipher_data *cur = &data[j]; | ||
1308 | unsigned int k = *b_size; | ||
1309 | unsigned int pages = DIV_ROUND_UP(k, PAGE_SIZE); | ||
1310 | unsigned int p = 0; | ||
1311 | |||
1312 | sg_init_table(cur->sg, pages); | ||
1313 | |||
1314 | while (k > PAGE_SIZE) { | ||
1315 | sg_set_buf(cur->sg + p, cur->xbuf[p], | ||
1316 | PAGE_SIZE); | ||
1317 | memset(cur->xbuf[p], 0xff, PAGE_SIZE); | ||
1318 | p++; | ||
1319 | k -= PAGE_SIZE; | ||
1320 | } | ||
1321 | |||
1322 | sg_set_buf(cur->sg + p, cur->xbuf[p], k); | ||
1323 | memset(cur->xbuf[p], 0xff, k); | ||
1324 | |||
1325 | skcipher_request_set_crypt(cur->req, cur->sg, | ||
1326 | cur->sg, *b_size, | ||
1327 | iv); | ||
1328 | } | ||
1329 | |||
1330 | if (secs) | ||
1331 | ret = test_mb_acipher_jiffies(data, enc, | ||
1332 | *b_size, secs, | ||
1333 | num_mb); | ||
1334 | else | ||
1335 | ret = test_mb_acipher_cycles(data, enc, | ||
1336 | *b_size, num_mb); | ||
1337 | |||
1338 | if (ret) { | ||
1339 | pr_err("%s() failed flags=%x\n", e, | ||
1340 | crypto_skcipher_get_flags(tfm)); | ||
1341 | break; | ||
1342 | } | ||
1343 | b_size++; | ||
1344 | i++; | ||
1345 | } while (*b_size); | ||
1346 | keysize++; | ||
1347 | } while (*keysize); | ||
1348 | |||
1349 | out: | ||
1350 | for (i = 0; i < num_mb; ++i) | ||
1351 | skcipher_request_free(data[i].req); | ||
1352 | out_free_xbuf: | ||
1353 | for (i = 0; i < num_mb; ++i) | ||
1354 | testmgr_free_buf(data[i].xbuf); | ||
1355 | out_free_tfm: | ||
1356 | crypto_free_skcipher(tfm); | ||
1357 | out_free_data: | ||
1358 | kfree(data); | ||
1359 | } | ||
1360 | |||
739 | static inline int do_one_acipher_op(struct skcipher_request *req, int ret) | 1361 | static inline int do_one_acipher_op(struct skcipher_request *req, int ret) |
740 | { | 1362 | { |
741 | struct crypto_wait *wait = req->base.data; | 1363 | struct crypto_wait *wait = req->base.data; |
@@ -1557,16 +2179,24 @@ static int do_test(const char *alg, u32 type, u32 mask, int m) | |||
1557 | NULL, 0, 16, 16, aead_speed_template_20); | 2179 | NULL, 0, 16, 16, aead_speed_template_20); |
1558 | test_aead_speed("gcm(aes)", ENCRYPT, sec, | 2180 | test_aead_speed("gcm(aes)", ENCRYPT, sec, |
1559 | NULL, 0, 16, 8, speed_template_16_24_32); | 2181 | NULL, 0, 16, 8, speed_template_16_24_32); |
2182 | test_aead_speed("rfc4106(gcm(aes))", DECRYPT, sec, | ||
2183 | NULL, 0, 16, 16, aead_speed_template_20); | ||
2184 | test_aead_speed("gcm(aes)", DECRYPT, sec, | ||
2185 | NULL, 0, 16, 8, speed_template_16_24_32); | ||
1560 | break; | 2186 | break; |
1561 | 2187 | ||
1562 | case 212: | 2188 | case 212: |
1563 | test_aead_speed("rfc4309(ccm(aes))", ENCRYPT, sec, | 2189 | test_aead_speed("rfc4309(ccm(aes))", ENCRYPT, sec, |
1564 | NULL, 0, 16, 16, aead_speed_template_19); | 2190 | NULL, 0, 16, 16, aead_speed_template_19); |
2191 | test_aead_speed("rfc4309(ccm(aes))", DECRYPT, sec, | ||
2192 | NULL, 0, 16, 16, aead_speed_template_19); | ||
1565 | break; | 2193 | break; |
1566 | 2194 | ||
1567 | case 213: | 2195 | case 213: |
1568 | test_aead_speed("rfc7539esp(chacha20,poly1305)", ENCRYPT, sec, | 2196 | test_aead_speed("rfc7539esp(chacha20,poly1305)", ENCRYPT, sec, |
1569 | NULL, 0, 16, 8, aead_speed_template_36); | 2197 | NULL, 0, 16, 8, aead_speed_template_36); |
2198 | test_aead_speed("rfc7539esp(chacha20,poly1305)", DECRYPT, sec, | ||
2199 | NULL, 0, 16, 8, aead_speed_template_36); | ||
1570 | break; | 2200 | break; |
1571 | 2201 | ||
1572 | case 214: | 2202 | case 214: |
@@ -1574,6 +2204,33 @@ static int do_test(const char *alg, u32 type, u32 mask, int m) | |||
1574 | speed_template_32); | 2204 | speed_template_32); |
1575 | break; | 2205 | break; |
1576 | 2206 | ||
2207 | case 215: | ||
2208 | test_mb_aead_speed("rfc4106(gcm(aes))", ENCRYPT, sec, NULL, | ||
2209 | 0, 16, 16, aead_speed_template_20, num_mb); | ||
2210 | test_mb_aead_speed("gcm(aes)", ENCRYPT, sec, NULL, 0, 16, 8, | ||
2211 | speed_template_16_24_32, num_mb); | ||
2212 | test_mb_aead_speed("rfc4106(gcm(aes))", DECRYPT, sec, NULL, | ||
2213 | 0, 16, 16, aead_speed_template_20, num_mb); | ||
2214 | test_mb_aead_speed("gcm(aes)", DECRYPT, sec, NULL, 0, 16, 8, | ||
2215 | speed_template_16_24_32, num_mb); | ||
2216 | break; | ||
2217 | |||
2218 | case 216: | ||
2219 | test_mb_aead_speed("rfc4309(ccm(aes))", ENCRYPT, sec, NULL, 0, | ||
2220 | 16, 16, aead_speed_template_19, num_mb); | ||
2221 | test_mb_aead_speed("rfc4309(ccm(aes))", DECRYPT, sec, NULL, 0, | ||
2222 | 16, 16, aead_speed_template_19, num_mb); | ||
2223 | break; | ||
2224 | |||
2225 | case 217: | ||
2226 | test_mb_aead_speed("rfc7539esp(chacha20,poly1305)", ENCRYPT, | ||
2227 | sec, NULL, 0, 16, 8, aead_speed_template_36, | ||
2228 | num_mb); | ||
2229 | test_mb_aead_speed("rfc7539esp(chacha20,poly1305)", DECRYPT, | ||
2230 | sec, NULL, 0, 16, 8, aead_speed_template_36, | ||
2231 | num_mb); | ||
2232 | break; | ||
2233 | |||
1577 | case 300: | 2234 | case 300: |
1578 | if (alg) { | 2235 | if (alg) { |
1579 | test_hash_speed(alg, sec, generic_hash_speed_template); | 2236 | test_hash_speed(alg, sec, generic_hash_speed_template); |
@@ -1778,19 +2435,23 @@ static int do_test(const char *alg, u32 type, u32 mask, int m) | |||
1778 | if (mode > 400 && mode < 500) break; | 2435 | if (mode > 400 && mode < 500) break; |
1779 | /* fall through */ | 2436 | /* fall through */ |
1780 | case 422: | 2437 | case 422: |
1781 | test_mb_ahash_speed("sha1", sec, generic_hash_speed_template); | 2438 | test_mb_ahash_speed("sha1", sec, generic_hash_speed_template, |
2439 | num_mb); | ||
1782 | if (mode > 400 && mode < 500) break; | 2440 | if (mode > 400 && mode < 500) break; |
1783 | /* fall through */ | 2441 | /* fall through */ |
1784 | case 423: | 2442 | case 423: |
1785 | test_mb_ahash_speed("sha256", sec, generic_hash_speed_template); | 2443 | test_mb_ahash_speed("sha256", sec, generic_hash_speed_template, |
2444 | num_mb); | ||
1786 | if (mode > 400 && mode < 500) break; | 2445 | if (mode > 400 && mode < 500) break; |
1787 | /* fall through */ | 2446 | /* fall through */ |
1788 | case 424: | 2447 | case 424: |
1789 | test_mb_ahash_speed("sha512", sec, generic_hash_speed_template); | 2448 | test_mb_ahash_speed("sha512", sec, generic_hash_speed_template, |
2449 | num_mb); | ||
1790 | if (mode > 400 && mode < 500) break; | 2450 | if (mode > 400 && mode < 500) break; |
1791 | /* fall through */ | 2451 | /* fall through */ |
1792 | case 425: | 2452 | case 425: |
1793 | test_mb_ahash_speed("sm3", sec, generic_hash_speed_template); | 2453 | test_mb_ahash_speed("sm3", sec, generic_hash_speed_template, |
2454 | num_mb); | ||
1794 | if (mode > 400 && mode < 500) break; | 2455 | if (mode > 400 && mode < 500) break; |
1795 | /* fall through */ | 2456 | /* fall through */ |
1796 | case 499: | 2457 | case 499: |
@@ -2008,6 +2669,218 @@ static int do_test(const char *alg, u32 type, u32 mask, int m) | |||
2008 | speed_template_8_32); | 2669 | speed_template_8_32); |
2009 | break; | 2670 | break; |
2010 | 2671 | ||
2672 | case 600: | ||
2673 | test_mb_skcipher_speed("ecb(aes)", ENCRYPT, sec, NULL, 0, | ||
2674 | speed_template_16_24_32, num_mb); | ||
2675 | test_mb_skcipher_speed("ecb(aes)", DECRYPT, sec, NULL, 0, | ||
2676 | speed_template_16_24_32, num_mb); | ||
2677 | test_mb_skcipher_speed("cbc(aes)", ENCRYPT, sec, NULL, 0, | ||
2678 | speed_template_16_24_32, num_mb); | ||
2679 | test_mb_skcipher_speed("cbc(aes)", DECRYPT, sec, NULL, 0, | ||
2680 | speed_template_16_24_32, num_mb); | ||
2681 | test_mb_skcipher_speed("lrw(aes)", ENCRYPT, sec, NULL, 0, | ||
2682 | speed_template_32_40_48, num_mb); | ||
2683 | test_mb_skcipher_speed("lrw(aes)", DECRYPT, sec, NULL, 0, | ||
2684 | speed_template_32_40_48, num_mb); | ||
2685 | test_mb_skcipher_speed("xts(aes)", ENCRYPT, sec, NULL, 0, | ||
2686 | speed_template_32_64, num_mb); | ||
2687 | test_mb_skcipher_speed("xts(aes)", DECRYPT, sec, NULL, 0, | ||
2688 | speed_template_32_64, num_mb); | ||
2689 | test_mb_skcipher_speed("cts(cbc(aes))", ENCRYPT, sec, NULL, 0, | ||
2690 | speed_template_16_24_32, num_mb); | ||
2691 | test_mb_skcipher_speed("cts(cbc(aes))", DECRYPT, sec, NULL, 0, | ||
2692 | speed_template_16_24_32, num_mb); | ||
2693 | test_mb_skcipher_speed("ctr(aes)", ENCRYPT, sec, NULL, 0, | ||
2694 | speed_template_16_24_32, num_mb); | ||
2695 | test_mb_skcipher_speed("ctr(aes)", DECRYPT, sec, NULL, 0, | ||
2696 | speed_template_16_24_32, num_mb); | ||
2697 | test_mb_skcipher_speed("cfb(aes)", ENCRYPT, sec, NULL, 0, | ||
2698 | speed_template_16_24_32, num_mb); | ||
2699 | test_mb_skcipher_speed("cfb(aes)", DECRYPT, sec, NULL, 0, | ||
2700 | speed_template_16_24_32, num_mb); | ||
2701 | test_mb_skcipher_speed("ofb(aes)", ENCRYPT, sec, NULL, 0, | ||
2702 | speed_template_16_24_32, num_mb); | ||
2703 | test_mb_skcipher_speed("ofb(aes)", DECRYPT, sec, NULL, 0, | ||
2704 | speed_template_16_24_32, num_mb); | ||
2705 | test_mb_skcipher_speed("rfc3686(ctr(aes))", ENCRYPT, sec, NULL, | ||
2706 | 0, speed_template_20_28_36, num_mb); | ||
2707 | test_mb_skcipher_speed("rfc3686(ctr(aes))", DECRYPT, sec, NULL, | ||
2708 | 0, speed_template_20_28_36, num_mb); | ||
2709 | break; | ||
2710 | |||
2711 | case 601: | ||
2712 | test_mb_skcipher_speed("ecb(des3_ede)", ENCRYPT, sec, | ||
2713 | des3_speed_template, DES3_SPEED_VECTORS, | ||
2714 | speed_template_24, num_mb); | ||
2715 | test_mb_skcipher_speed("ecb(des3_ede)", DECRYPT, sec, | ||
2716 | des3_speed_template, DES3_SPEED_VECTORS, | ||
2717 | speed_template_24, num_mb); | ||
2718 | test_mb_skcipher_speed("cbc(des3_ede)", ENCRYPT, sec, | ||
2719 | des3_speed_template, DES3_SPEED_VECTORS, | ||
2720 | speed_template_24, num_mb); | ||
2721 | test_mb_skcipher_speed("cbc(des3_ede)", DECRYPT, sec, | ||
2722 | des3_speed_template, DES3_SPEED_VECTORS, | ||
2723 | speed_template_24, num_mb); | ||
2724 | test_mb_skcipher_speed("cfb(des3_ede)", ENCRYPT, sec, | ||
2725 | des3_speed_template, DES3_SPEED_VECTORS, | ||
2726 | speed_template_24, num_mb); | ||
2727 | test_mb_skcipher_speed("cfb(des3_ede)", DECRYPT, sec, | ||
2728 | des3_speed_template, DES3_SPEED_VECTORS, | ||
2729 | speed_template_24, num_mb); | ||
2730 | test_mb_skcipher_speed("ofb(des3_ede)", ENCRYPT, sec, | ||
2731 | des3_speed_template, DES3_SPEED_VECTORS, | ||
2732 | speed_template_24, num_mb); | ||
2733 | test_mb_skcipher_speed("ofb(des3_ede)", DECRYPT, sec, | ||
2734 | des3_speed_template, DES3_SPEED_VECTORS, | ||
2735 | speed_template_24, num_mb); | ||
2736 | break; | ||
2737 | |||
2738 | case 602: | ||
2739 | test_mb_skcipher_speed("ecb(des)", ENCRYPT, sec, NULL, 0, | ||
2740 | speed_template_8, num_mb); | ||
2741 | test_mb_skcipher_speed("ecb(des)", DECRYPT, sec, NULL, 0, | ||
2742 | speed_template_8, num_mb); | ||
2743 | test_mb_skcipher_speed("cbc(des)", ENCRYPT, sec, NULL, 0, | ||
2744 | speed_template_8, num_mb); | ||
2745 | test_mb_skcipher_speed("cbc(des)", DECRYPT, sec, NULL, 0, | ||
2746 | speed_template_8, num_mb); | ||
2747 | test_mb_skcipher_speed("cfb(des)", ENCRYPT, sec, NULL, 0, | ||
2748 | speed_template_8, num_mb); | ||
2749 | test_mb_skcipher_speed("cfb(des)", DECRYPT, sec, NULL, 0, | ||
2750 | speed_template_8, num_mb); | ||
2751 | test_mb_skcipher_speed("ofb(des)", ENCRYPT, sec, NULL, 0, | ||
2752 | speed_template_8, num_mb); | ||
2753 | test_mb_skcipher_speed("ofb(des)", DECRYPT, sec, NULL, 0, | ||
2754 | speed_template_8, num_mb); | ||
2755 | break; | ||
2756 | |||
2757 | case 603: | ||
2758 | test_mb_skcipher_speed("ecb(serpent)", ENCRYPT, sec, NULL, 0, | ||
2759 | speed_template_16_32, num_mb); | ||
2760 | test_mb_skcipher_speed("ecb(serpent)", DECRYPT, sec, NULL, 0, | ||
2761 | speed_template_16_32, num_mb); | ||
2762 | test_mb_skcipher_speed("cbc(serpent)", ENCRYPT, sec, NULL, 0, | ||
2763 | speed_template_16_32, num_mb); | ||
2764 | test_mb_skcipher_speed("cbc(serpent)", DECRYPT, sec, NULL, 0, | ||
2765 | speed_template_16_32, num_mb); | ||
2766 | test_mb_skcipher_speed("ctr(serpent)", ENCRYPT, sec, NULL, 0, | ||
2767 | speed_template_16_32, num_mb); | ||
2768 | test_mb_skcipher_speed("ctr(serpent)", DECRYPT, sec, NULL, 0, | ||
2769 | speed_template_16_32, num_mb); | ||
2770 | test_mb_skcipher_speed("lrw(serpent)", ENCRYPT, sec, NULL, 0, | ||
2771 | speed_template_32_48, num_mb); | ||
2772 | test_mb_skcipher_speed("lrw(serpent)", DECRYPT, sec, NULL, 0, | ||
2773 | speed_template_32_48, num_mb); | ||
2774 | test_mb_skcipher_speed("xts(serpent)", ENCRYPT, sec, NULL, 0, | ||
2775 | speed_template_32_64, num_mb); | ||
2776 | test_mb_skcipher_speed("xts(serpent)", DECRYPT, sec, NULL, 0, | ||
2777 | speed_template_32_64, num_mb); | ||
2778 | break; | ||
2779 | |||
2780 | case 604: | ||
2781 | test_mb_skcipher_speed("ecb(twofish)", ENCRYPT, sec, NULL, 0, | ||
2782 | speed_template_16_24_32, num_mb); | ||
2783 | test_mb_skcipher_speed("ecb(twofish)", DECRYPT, sec, NULL, 0, | ||
2784 | speed_template_16_24_32, num_mb); | ||
2785 | test_mb_skcipher_speed("cbc(twofish)", ENCRYPT, sec, NULL, 0, | ||
2786 | speed_template_16_24_32, num_mb); | ||
2787 | test_mb_skcipher_speed("cbc(twofish)", DECRYPT, sec, NULL, 0, | ||
2788 | speed_template_16_24_32, num_mb); | ||
2789 | test_mb_skcipher_speed("ctr(twofish)", ENCRYPT, sec, NULL, 0, | ||
2790 | speed_template_16_24_32, num_mb); | ||
2791 | test_mb_skcipher_speed("ctr(twofish)", DECRYPT, sec, NULL, 0, | ||
2792 | speed_template_16_24_32, num_mb); | ||
2793 | test_mb_skcipher_speed("lrw(twofish)", ENCRYPT, sec, NULL, 0, | ||
2794 | speed_template_32_40_48, num_mb); | ||
2795 | test_mb_skcipher_speed("lrw(twofish)", DECRYPT, sec, NULL, 0, | ||
2796 | speed_template_32_40_48, num_mb); | ||
2797 | test_mb_skcipher_speed("xts(twofish)", ENCRYPT, sec, NULL, 0, | ||
2798 | speed_template_32_48_64, num_mb); | ||
2799 | test_mb_skcipher_speed("xts(twofish)", DECRYPT, sec, NULL, 0, | ||
2800 | speed_template_32_48_64, num_mb); | ||
2801 | break; | ||
2802 | |||
2803 | case 605: | ||
2804 | test_mb_skcipher_speed("ecb(arc4)", ENCRYPT, sec, NULL, 0, | ||
2805 | speed_template_8, num_mb); | ||
2806 | break; | ||
2807 | |||
2808 | case 606: | ||
2809 | test_mb_skcipher_speed("ecb(cast5)", ENCRYPT, sec, NULL, 0, | ||
2810 | speed_template_8_16, num_mb); | ||
2811 | test_mb_skcipher_speed("ecb(cast5)", DECRYPT, sec, NULL, 0, | ||
2812 | speed_template_8_16, num_mb); | ||
2813 | test_mb_skcipher_speed("cbc(cast5)", ENCRYPT, sec, NULL, 0, | ||
2814 | speed_template_8_16, num_mb); | ||
2815 | test_mb_skcipher_speed("cbc(cast5)", DECRYPT, sec, NULL, 0, | ||
2816 | speed_template_8_16, num_mb); | ||
2817 | test_mb_skcipher_speed("ctr(cast5)", ENCRYPT, sec, NULL, 0, | ||
2818 | speed_template_8_16, num_mb); | ||
2819 | test_mb_skcipher_speed("ctr(cast5)", DECRYPT, sec, NULL, 0, | ||
2820 | speed_template_8_16, num_mb); | ||
2821 | break; | ||
2822 | |||
2823 | case 607: | ||
2824 | test_mb_skcipher_speed("ecb(cast6)", ENCRYPT, sec, NULL, 0, | ||
2825 | speed_template_16_32, num_mb); | ||
2826 | test_mb_skcipher_speed("ecb(cast6)", DECRYPT, sec, NULL, 0, | ||
2827 | speed_template_16_32, num_mb); | ||
2828 | test_mb_skcipher_speed("cbc(cast6)", ENCRYPT, sec, NULL, 0, | ||
2829 | speed_template_16_32, num_mb); | ||
2830 | test_mb_skcipher_speed("cbc(cast6)", DECRYPT, sec, NULL, 0, | ||
2831 | speed_template_16_32, num_mb); | ||
2832 | test_mb_skcipher_speed("ctr(cast6)", ENCRYPT, sec, NULL, 0, | ||
2833 | speed_template_16_32, num_mb); | ||
2834 | test_mb_skcipher_speed("ctr(cast6)", DECRYPT, sec, NULL, 0, | ||
2835 | speed_template_16_32, num_mb); | ||
2836 | test_mb_skcipher_speed("lrw(cast6)", ENCRYPT, sec, NULL, 0, | ||
2837 | speed_template_32_48, num_mb); | ||
2838 | test_mb_skcipher_speed("lrw(cast6)", DECRYPT, sec, NULL, 0, | ||
2839 | speed_template_32_48, num_mb); | ||
2840 | test_mb_skcipher_speed("xts(cast6)", ENCRYPT, sec, NULL, 0, | ||
2841 | speed_template_32_64, num_mb); | ||
2842 | test_mb_skcipher_speed("xts(cast6)", DECRYPT, sec, NULL, 0, | ||
2843 | speed_template_32_64, num_mb); | ||
2844 | break; | ||
2845 | |||
2846 | case 608: | ||
2847 | test_mb_skcipher_speed("ecb(camellia)", ENCRYPT, sec, NULL, 0, | ||
2848 | speed_template_16_32, num_mb); | ||
2849 | test_mb_skcipher_speed("ecb(camellia)", DECRYPT, sec, NULL, 0, | ||
2850 | speed_template_16_32, num_mb); | ||
2851 | test_mb_skcipher_speed("cbc(camellia)", ENCRYPT, sec, NULL, 0, | ||
2852 | speed_template_16_32, num_mb); | ||
2853 | test_mb_skcipher_speed("cbc(camellia)", DECRYPT, sec, NULL, 0, | ||
2854 | speed_template_16_32, num_mb); | ||
2855 | test_mb_skcipher_speed("ctr(camellia)", ENCRYPT, sec, NULL, 0, | ||
2856 | speed_template_16_32, num_mb); | ||
2857 | test_mb_skcipher_speed("ctr(camellia)", DECRYPT, sec, NULL, 0, | ||
2858 | speed_template_16_32, num_mb); | ||
2859 | test_mb_skcipher_speed("lrw(camellia)", ENCRYPT, sec, NULL, 0, | ||
2860 | speed_template_32_48, num_mb); | ||
2861 | test_mb_skcipher_speed("lrw(camellia)", DECRYPT, sec, NULL, 0, | ||
2862 | speed_template_32_48, num_mb); | ||
2863 | test_mb_skcipher_speed("xts(camellia)", ENCRYPT, sec, NULL, 0, | ||
2864 | speed_template_32_64, num_mb); | ||
2865 | test_mb_skcipher_speed("xts(camellia)", DECRYPT, sec, NULL, 0, | ||
2866 | speed_template_32_64, num_mb); | ||
2867 | break; | ||
2868 | |||
2869 | case 609: | ||
2870 | test_mb_skcipher_speed("ecb(blowfish)", ENCRYPT, sec, NULL, 0, | ||
2871 | speed_template_8_32, num_mb); | ||
2872 | test_mb_skcipher_speed("ecb(blowfish)", DECRYPT, sec, NULL, 0, | ||
2873 | speed_template_8_32, num_mb); | ||
2874 | test_mb_skcipher_speed("cbc(blowfish)", ENCRYPT, sec, NULL, 0, | ||
2875 | speed_template_8_32, num_mb); | ||
2876 | test_mb_skcipher_speed("cbc(blowfish)", DECRYPT, sec, NULL, 0, | ||
2877 | speed_template_8_32, num_mb); | ||
2878 | test_mb_skcipher_speed("ctr(blowfish)", ENCRYPT, sec, NULL, 0, | ||
2879 | speed_template_8_32, num_mb); | ||
2880 | test_mb_skcipher_speed("ctr(blowfish)", DECRYPT, sec, NULL, 0, | ||
2881 | speed_template_8_32, num_mb); | ||
2882 | break; | ||
2883 | |||
2011 | case 1000: | 2884 | case 1000: |
2012 | test_available(); | 2885 | test_available(); |
2013 | break; | 2886 | break; |
@@ -2069,6 +2942,8 @@ module_param(mode, int, 0); | |||
2069 | module_param(sec, uint, 0); | 2942 | module_param(sec, uint, 0); |
2070 | MODULE_PARM_DESC(sec, "Length in seconds of speed tests " | 2943 | MODULE_PARM_DESC(sec, "Length in seconds of speed tests " |
2071 | "(defaults to zero which uses CPU cycles instead)"); | 2944 | "(defaults to zero which uses CPU cycles instead)"); |
2945 | module_param(num_mb, uint, 0000); | ||
2946 | MODULE_PARM_DESC(num_mb, "Number of concurrent requests to be used in mb speed tests (defaults to 8)"); | ||
2072 | 2947 | ||
2073 | MODULE_LICENSE("GPL"); | 2948 | MODULE_LICENSE("GPL"); |
2074 | MODULE_DESCRIPTION("Quick & dirty crypto testing module"); | 2949 | MODULE_DESCRIPTION("Quick & dirty crypto testing module"); |
diff --git a/crypto/testmgr.c b/crypto/testmgr.c index 29d7020b8826..d5e23a142a04 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c | |||
@@ -177,6 +177,18 @@ static void testmgr_free_buf(char *buf[XBUFSIZE]) | |||
177 | free_page((unsigned long)buf[i]); | 177 | free_page((unsigned long)buf[i]); |
178 | } | 178 | } |
179 | 179 | ||
180 | static int ahash_guard_result(char *result, char c, int size) | ||
181 | { | ||
182 | int i; | ||
183 | |||
184 | for (i = 0; i < size; i++) { | ||
185 | if (result[i] != c) | ||
186 | return -EINVAL; | ||
187 | } | ||
188 | |||
189 | return 0; | ||
190 | } | ||
191 | |||
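The new guard is used together with a sentinel fill of req->result: the hunks below memset() the buffer to 1 before init/update/export/import and then call ahash_guard_result() to confirm the operation did not misuse req->result as scratch space. Condensed, the pattern looks roughly like this (illustrative fragment, not part of the patch):

	memset(result, 1, digest_size);

	ret = crypto_wait_req(crypto_ahash_init(req), &wait);
	if (ret)
		goto out;

	/* Returns -EINVAL if init() wrote anything into req->result. */
	ret = ahash_guard_result(result, 1, digest_size);
	if (ret) {
		pr_err("alg: hash: init used req->result\n");
		goto out;
	}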
180 | static int ahash_partial_update(struct ahash_request **preq, | 192 | static int ahash_partial_update(struct ahash_request **preq, |
181 | struct crypto_ahash *tfm, const struct hash_testvec *template, | 193 | struct crypto_ahash *tfm, const struct hash_testvec *template, |
182 | void *hash_buff, int k, int temp, struct scatterlist *sg, | 194 | void *hash_buff, int k, int temp, struct scatterlist *sg, |
@@ -185,7 +197,8 @@ static int ahash_partial_update(struct ahash_request **preq, | |||
185 | char *state; | 197 | char *state; |
186 | struct ahash_request *req; | 198 | struct ahash_request *req; |
187 | int statesize, ret = -EINVAL; | 199 | int statesize, ret = -EINVAL; |
188 | const char guard[] = { 0x00, 0xba, 0xad, 0x00 }; | 200 | static const unsigned char guard[] = { 0x00, 0xba, 0xad, 0x00 }; |
201 | int digestsize = crypto_ahash_digestsize(tfm); | ||
189 | 202 | ||
190 | req = *preq; | 203 | req = *preq; |
191 | statesize = crypto_ahash_statesize( | 204 | statesize = crypto_ahash_statesize( |
@@ -196,12 +209,19 @@ static int ahash_partial_update(struct ahash_request **preq, | |||
196 | goto out_nostate; | 209 | goto out_nostate; |
197 | } | 210 | } |
198 | memcpy(state + statesize, guard, sizeof(guard)); | 211 | memcpy(state + statesize, guard, sizeof(guard)); |
212 | memset(result, 1, digestsize); | ||
199 | ret = crypto_ahash_export(req, state); | 213 | ret = crypto_ahash_export(req, state); |
200 | WARN_ON(memcmp(state + statesize, guard, sizeof(guard))); | 214 | WARN_ON(memcmp(state + statesize, guard, sizeof(guard))); |
201 | if (ret) { | 215 | if (ret) { |
202 | pr_err("alg: hash: Failed to export() for %s\n", algo); | 216 | pr_err("alg: hash: Failed to export() for %s\n", algo); |
203 | goto out; | 217 | goto out; |
204 | } | 218 | } |
219 | ret = ahash_guard_result(result, 1, digestsize); | ||
220 | if (ret) { | ||
221 | pr_err("alg: hash: Failed, export used req->result for %s\n", | ||
222 | algo); | ||
223 | goto out; | ||
224 | } | ||
205 | ahash_request_free(req); | 225 | ahash_request_free(req); |
206 | req = ahash_request_alloc(tfm, GFP_KERNEL); | 226 | req = ahash_request_alloc(tfm, GFP_KERNEL); |
207 | if (!req) { | 227 | if (!req) { |
@@ -221,6 +241,12 @@ static int ahash_partial_update(struct ahash_request **preq, | |||
221 | pr_err("alg: hash: Failed to import() for %s\n", algo); | 241 | pr_err("alg: hash: Failed to import() for %s\n", algo); |
222 | goto out; | 242 | goto out; |
223 | } | 243 | } |
244 | ret = ahash_guard_result(result, 1, digestsize); | ||
245 | if (ret) { | ||
246 | pr_err("alg: hash: Failed, import used req->result for %s\n", | ||
247 | algo); | ||
248 | goto out; | ||
249 | } | ||
224 | ret = crypto_wait_req(crypto_ahash_update(req), wait); | 250 | ret = crypto_wait_req(crypto_ahash_update(req), wait); |
225 | if (ret) | 251 | if (ret) |
226 | goto out; | 252 | goto out; |
@@ -316,18 +342,31 @@ static int __test_hash(struct crypto_ahash *tfm, | |||
316 | goto out; | 342 | goto out; |
317 | } | 343 | } |
318 | } else { | 344 | } else { |
345 | memset(result, 1, digest_size); | ||
319 | ret = crypto_wait_req(crypto_ahash_init(req), &wait); | 346 | ret = crypto_wait_req(crypto_ahash_init(req), &wait); |
320 | if (ret) { | 347 | if (ret) { |
321 | pr_err("alg: hash: init failed on test %d " | 348 | pr_err("alg: hash: init failed on test %d " |
322 | "for %s: ret=%d\n", j, algo, -ret); | 349 | "for %s: ret=%d\n", j, algo, -ret); |
323 | goto out; | 350 | goto out; |
324 | } | 351 | } |
352 | ret = ahash_guard_result(result, 1, digest_size); | ||
353 | if (ret) { | ||
354 | pr_err("alg: hash: init failed on test %d " | ||
355 | "for %s: used req->result\n", j, algo); | ||
356 | goto out; | ||
357 | } | ||
325 | ret = crypto_wait_req(crypto_ahash_update(req), &wait); | 358 | ret = crypto_wait_req(crypto_ahash_update(req), &wait); |
326 | if (ret) { | 359 | if (ret) { |
327 | pr_err("alg: hash: update failed on test %d " | 360 | pr_err("alg: hash: update failed on test %d " |
328 | "for %s: ret=%d\n", j, algo, -ret); | 361 | "for %s: ret=%d\n", j, algo, -ret); |
329 | goto out; | 362 | goto out; |
330 | } | 363 | } |
364 | ret = ahash_guard_result(result, 1, digest_size); | ||
365 | if (ret) { | ||
366 | pr_err("alg: hash: update failed on test %d " | ||
367 | "for %s: used req->result\n", j, algo); | ||
368 | goto out; | ||
369 | } | ||
331 | ret = crypto_wait_req(crypto_ahash_final(req), &wait); | 370 | ret = crypto_wait_req(crypto_ahash_final(req), &wait); |
332 | if (ret) { | 371 | if (ret) { |
333 | pr_err("alg: hash: final failed on test %d " | 372 | pr_err("alg: hash: final failed on test %d " |
diff --git a/crypto/testmgr.h b/crypto/testmgr.h index a714b6293959..6044f6906bd6 100644 --- a/crypto/testmgr.h +++ b/crypto/testmgr.h | |||
@@ -1052,6 +1052,142 @@ static const struct hash_testvec sha3_224_tv_template[] = { | |||
1052 | "\xc9\xfd\x55\x74\x49\x44\x79\xba" | 1052 | "\xc9\xfd\x55\x74\x49\x44\x79\xba" |
1053 | "\x5c\x7e\x7a\xb7\x6e\xf2\x64\xea" | 1053 | "\x5c\x7e\x7a\xb7\x6e\xf2\x64\xea" |
1054 | "\xd0\xfc\xce\x33", | 1054 | "\xd0\xfc\xce\x33", |
1055 | .np = 2, | ||
1056 | .tap = { 28, 28 }, | ||
1057 | }, { | ||
1058 | .plaintext = "\x08\x9f\x13\xaa\x41\xd8\x4c\xe3" | ||
1059 | "\x7a\x11\x85\x1c\xb3\x27\xbe\x55" | ||
1060 | "\xec\x60\xf7\x8e\x02\x99\x30\xc7" | ||
1061 | "\x3b\xd2\x69\x00\x74\x0b\xa2\x16" | ||
1062 | "\xad\x44\xdb\x4f\xe6\x7d\x14\x88" | ||
1063 | "\x1f\xb6\x2a\xc1\x58\xef\x63\xfa" | ||
1064 | "\x91\x05\x9c\x33\xca\x3e\xd5\x6c" | ||
1065 | "\x03\x77\x0e\xa5\x19\xb0\x47\xde" | ||
1066 | "\x52\xe9\x80\x17\x8b\x22\xb9\x2d" | ||
1067 | "\xc4\x5b\xf2\x66\xfd\x94\x08\x9f" | ||
1068 | "\x36\xcd\x41\xd8\x6f\x06\x7a\x11" | ||
1069 | "\xa8\x1c\xb3\x4a\xe1\x55\xec\x83" | ||
1070 | "\x1a\x8e\x25\xbc\x30\xc7\x5e\xf5" | ||
1071 | "\x69\x00\x97\x0b\xa2\x39\xd0\x44" | ||
1072 | "\xdb\x72\x09\x7d\x14\xab\x1f\xb6" | ||
1073 | "\x4d\xe4\x58\xef\x86\x1d\x91\x28" | ||
1074 | "\xbf\x33\xca\x61\xf8\x6c\x03\x9a" | ||
1075 | "\x0e\xa5\x3c\xd3\x47\xde\x75\x0c" | ||
1076 | "\x80\x17\xae\x22\xb9\x50\xe7\x5b" | ||
1077 | "\xf2\x89\x20\x94\x2b\xc2\x36\xcd" | ||
1078 | "\x64\xfb\x6f\x06\x9d\x11\xa8\x3f" | ||
1079 | "\xd6\x4a\xe1\x78\x0f\x83\x1a\xb1" | ||
1080 | "\x25\xbc\x53\xea\x5e\xf5\x8c\x00" | ||
1081 | "\x97\x2e\xc5\x39\xd0\x67\xfe\x72" | ||
1082 | "\x09\xa0\x14\xab\x42\xd9\x4d\xe4" | ||
1083 | "\x7b\x12\x86\x1d\xb4\x28\xbf\x56" | ||
1084 | "\xed\x61\xf8\x8f\x03\x9a\x31\xc8" | ||
1085 | "\x3c\xd3\x6a\x01\x75\x0c\xa3\x17" | ||
1086 | "\xae\x45\xdc\x50\xe7\x7e\x15\x89" | ||
1087 | "\x20\xb7\x2b\xc2\x59\xf0\x64\xfb" | ||
1088 | "\x92\x06\x9d\x34\xcb\x3f\xd6\x6d" | ||
1089 | "\x04\x78\x0f\xa6\x1a\xb1\x48\xdf" | ||
1090 | "\x53\xea\x81\x18\x8c\x23\xba\x2e" | ||
1091 | "\xc5\x5c\xf3\x67\xfe\x95\x09\xa0" | ||
1092 | "\x37\xce\x42\xd9\x70\x07\x7b\x12" | ||
1093 | "\xa9\x1d\xb4\x4b\xe2\x56\xed\x84" | ||
1094 | "\x1b\x8f\x26\xbd\x31\xc8\x5f\xf6" | ||
1095 | "\x6a\x01\x98\x0c\xa3\x3a\xd1\x45" | ||
1096 | "\xdc\x73\x0a\x7e\x15\xac\x20\xb7" | ||
1097 | "\x4e\xe5\x59\xf0\x87\x1e\x92\x29" | ||
1098 | "\xc0\x34\xcb\x62\xf9\x6d\x04\x9b" | ||
1099 | "\x0f\xa6\x3d\xd4\x48\xdf\x76\x0d" | ||
1100 | "\x81\x18\xaf\x23\xba\x51\xe8\x5c" | ||
1101 | "\xf3\x8a\x21\x95\x2c\xc3\x37\xce" | ||
1102 | "\x65\xfc\x70\x07\x9e\x12\xa9\x40" | ||
1103 | "\xd7\x4b\xe2\x79\x10\x84\x1b\xb2" | ||
1104 | "\x26\xbd\x54\xeb\x5f\xf6\x8d\x01" | ||
1105 | "\x98\x2f\xc6\x3a\xd1\x68\xff\x73" | ||
1106 | "\x0a\xa1\x15\xac\x43\xda\x4e\xe5" | ||
1107 | "\x7c\x13\x87\x1e\xb5\x29\xc0\x57" | ||
1108 | "\xee\x62\xf9\x90\x04\x9b\x32\xc9" | ||
1109 | "\x3d\xd4\x6b\x02\x76\x0d\xa4\x18" | ||
1110 | "\xaf\x46\xdd\x51\xe8\x7f\x16\x8a" | ||
1111 | "\x21\xb8\x2c\xc3\x5a\xf1\x65\xfc" | ||
1112 | "\x93\x07\x9e\x35\xcc\x40\xd7\x6e" | ||
1113 | "\x05\x79\x10\xa7\x1b\xb2\x49\xe0" | ||
1114 | "\x54\xeb\x82\x19\x8d\x24\xbb\x2f" | ||
1115 | "\xc6\x5d\xf4\x68\xff\x96\x0a\xa1" | ||
1116 | "\x38\xcf\x43\xda\x71\x08\x7c\x13" | ||
1117 | "\xaa\x1e\xb5\x4c\xe3\x57\xee\x85" | ||
1118 | "\x1c\x90\x27\xbe\x32\xc9\x60\xf7" | ||
1119 | "\x6b\x02\x99\x0d\xa4\x3b\xd2\x46" | ||
1120 | "\xdd\x74\x0b\x7f\x16\xad\x21\xb8" | ||
1121 | "\x4f\xe6\x5a\xf1\x88\x1f\x93\x2a" | ||
1122 | "\xc1\x35\xcc\x63\xfa\x6e\x05\x9c" | ||
1123 | "\x10\xa7\x3e\xd5\x49\xe0\x77\x0e" | ||
1124 | "\x82\x19\xb0\x24\xbb\x52\xe9\x5d" | ||
1125 | "\xf4\x8b\x22\x96\x2d\xc4\x38\xcf" | ||
1126 | "\x66\xfd\x71\x08\x9f\x13\xaa\x41" | ||
1127 | "\xd8\x4c\xe3\x7a\x11\x85\x1c\xb3" | ||
1128 | "\x27\xbe\x55\xec\x60\xf7\x8e\x02" | ||
1129 | "\x99\x30\xc7\x3b\xd2\x69\x00\x74" | ||
1130 | "\x0b\xa2\x16\xad\x44\xdb\x4f\xe6" | ||
1131 | "\x7d\x14\x88\x1f\xb6\x2a\xc1\x58" | ||
1132 | "\xef\x63\xfa\x91\x05\x9c\x33\xca" | ||
1133 | "\x3e\xd5\x6c\x03\x77\x0e\xa5\x19" | ||
1134 | "\xb0\x47\xde\x52\xe9\x80\x17\x8b" | ||
1135 | "\x22\xb9\x2d\xc4\x5b\xf2\x66\xfd" | ||
1136 | "\x94\x08\x9f\x36\xcd\x41\xd8\x6f" | ||
1137 | "\x06\x7a\x11\xa8\x1c\xb3\x4a\xe1" | ||
1138 | "\x55\xec\x83\x1a\x8e\x25\xbc\x30" | ||
1139 | "\xc7\x5e\xf5\x69\x00\x97\x0b\xa2" | ||
1140 | "\x39\xd0\x44\xdb\x72\x09\x7d\x14" | ||
1141 | "\xab\x1f\xb6\x4d\xe4\x58\xef\x86" | ||
1142 | "\x1d\x91\x28\xbf\x33\xca\x61\xf8" | ||
1143 | "\x6c\x03\x9a\x0e\xa5\x3c\xd3\x47" | ||
1144 | "\xde\x75\x0c\x80\x17\xae\x22\xb9" | ||
1145 | "\x50\xe7\x5b\xf2\x89\x20\x94\x2b" | ||
1146 | "\xc2\x36\xcd\x64\xfb\x6f\x06\x9d" | ||
1147 | "\x11\xa8\x3f\xd6\x4a\xe1\x78\x0f" | ||
1148 | "\x83\x1a\xb1\x25\xbc\x53\xea\x5e" | ||
1149 | "\xf5\x8c\x00\x97\x2e\xc5\x39\xd0" | ||
1150 | "\x67\xfe\x72\x09\xa0\x14\xab\x42" | ||
1151 | "\xd9\x4d\xe4\x7b\x12\x86\x1d\xb4" | ||
1152 | "\x28\xbf\x56\xed\x61\xf8\x8f\x03" | ||
1153 | "\x9a\x31\xc8\x3c\xd3\x6a\x01\x75" | ||
1154 | "\x0c\xa3\x17\xae\x45\xdc\x50\xe7" | ||
1155 | "\x7e\x15\x89\x20\xb7\x2b\xc2\x59" | ||
1156 | "\xf0\x64\xfb\x92\x06\x9d\x34\xcb" | ||
1157 | "\x3f\xd6\x6d\x04\x78\x0f\xa6\x1a" | ||
1158 | "\xb1\x48\xdf\x53\xea\x81\x18\x8c" | ||
1159 | "\x23\xba\x2e\xc5\x5c\xf3\x67\xfe" | ||
1160 | "\x95\x09\xa0\x37\xce\x42\xd9\x70" | ||
1161 | "\x07\x7b\x12\xa9\x1d\xb4\x4b\xe2" | ||
1162 | "\x56\xed\x84\x1b\x8f\x26\xbd\x31" | ||
1163 | "\xc8\x5f\xf6\x6a\x01\x98\x0c\xa3" | ||
1164 | "\x3a\xd1\x45\xdc\x73\x0a\x7e\x15" | ||
1165 | "\xac\x20\xb7\x4e\xe5\x59\xf0\x87" | ||
1166 | "\x1e\x92\x29\xc0\x34\xcb\x62\xf9" | ||
1167 | "\x6d\x04\x9b\x0f\xa6\x3d\xd4\x48" | ||
1168 | "\xdf\x76\x0d\x81\x18\xaf\x23\xba" | ||
1169 | "\x51\xe8\x5c\xf3\x8a\x21\x95\x2c" | ||
1170 | "\xc3\x37\xce\x65\xfc\x70\x07\x9e" | ||
1171 | "\x12\xa9\x40\xd7\x4b\xe2\x79\x10" | ||
1172 | "\x84\x1b\xb2\x26\xbd\x54\xeb\x5f" | ||
1173 | "\xf6\x8d\x01\x98\x2f\xc6\x3a\xd1" | ||
1174 | "\x68\xff\x73\x0a\xa1\x15\xac\x43" | ||
1175 | "\xda\x4e\xe5\x7c\x13\x87\x1e\xb5" | ||
1176 | "\x29\xc0\x57\xee\x62\xf9\x90\x04" | ||
1177 | "\x9b\x32\xc9\x3d\xd4\x6b\x02\x76" | ||
1178 | "\x0d\xa4\x18\xaf\x46\xdd\x51\xe8" | ||
1179 | "\x7f\x16\x8a\x21\xb8\x2c\xc3\x5a" | ||
1180 | "\xf1\x65\xfc\x93\x07\x9e\x35\xcc" | ||
1181 | "\x40\xd7\x6e\x05\x79\x10\xa7\x1b" | ||
1182 | "\xb2\x49\xe0\x54\xeb\x82\x19\x8d" | ||
1183 | "\x24\xbb\x2f\xc6\x5d\xf4\x68\xff" | ||
1184 | "\x96\x0a\xa1\x38\xcf\x43\xda\x71" | ||
1185 | "\x08\x7c\x13\xaa\x1e\xb5\x4c", | ||
1186 | .psize = 1023, | ||
1187 | .digest = "\x7d\x0f\x2f\xb7\x65\x3b\xa7\x26" | ||
1188 | "\xc3\x88\x20\x71\x15\x06\xe8\x2d" | ||
1189 | "\xa3\x92\x44\xab\x3e\xe7\xff\x86" | ||
1190 | "\xb6\x79\x10\x72", | ||
1055 | }, | 1191 | }, |
1056 | }; | 1192 | }; |
1057 | 1193 | ||
@@ -1077,6 +1213,142 @@ static const struct hash_testvec sha3_256_tv_template[] = { | |||
1077 | "\x49\x10\x03\x76\xa8\x23\x5e\x2c" | 1213 | "\x49\x10\x03\x76\xa8\x23\x5e\x2c" |
1078 | "\x82\xe1\xb9\x99\x8a\x99\x9e\x21" | 1214 | "\x82\xe1\xb9\x99\x8a\x99\x9e\x21" |
1079 | "\xdb\x32\xdd\x97\x49\x6d\x33\x76", | 1215 | "\xdb\x32\xdd\x97\x49\x6d\x33\x76", |
1216 | .np = 2, | ||
1217 | .tap = { 28, 28 }, | ||
1218 | }, { | ||
1219 | .plaintext = "\x08\x9f\x13\xaa\x41\xd8\x4c\xe3" | ||
1220 | "\x7a\x11\x85\x1c\xb3\x27\xbe\x55" | ||
1221 | "\xec\x60\xf7\x8e\x02\x99\x30\xc7" | ||
1222 | "\x3b\xd2\x69\x00\x74\x0b\xa2\x16" | ||
1223 | "\xad\x44\xdb\x4f\xe6\x7d\x14\x88" | ||
1224 | "\x1f\xb6\x2a\xc1\x58\xef\x63\xfa" | ||
1225 | "\x91\x05\x9c\x33\xca\x3e\xd5\x6c" | ||
1226 | "\x03\x77\x0e\xa5\x19\xb0\x47\xde" | ||
1227 | "\x52\xe9\x80\x17\x8b\x22\xb9\x2d" | ||
1228 | "\xc4\x5b\xf2\x66\xfd\x94\x08\x9f" | ||
1229 | "\x36\xcd\x41\xd8\x6f\x06\x7a\x11" | ||
1230 | "\xa8\x1c\xb3\x4a\xe1\x55\xec\x83" | ||
1231 | "\x1a\x8e\x25\xbc\x30\xc7\x5e\xf5" | ||
1232 | "\x69\x00\x97\x0b\xa2\x39\xd0\x44" | ||
1233 | "\xdb\x72\x09\x7d\x14\xab\x1f\xb6" | ||
1234 | "\x4d\xe4\x58\xef\x86\x1d\x91\x28" | ||
1235 | "\xbf\x33\xca\x61\xf8\x6c\x03\x9a" | ||
1236 | "\x0e\xa5\x3c\xd3\x47\xde\x75\x0c" | ||
1237 | "\x80\x17\xae\x22\xb9\x50\xe7\x5b" | ||
1238 | "\xf2\x89\x20\x94\x2b\xc2\x36\xcd" | ||
1239 | "\x64\xfb\x6f\x06\x9d\x11\xa8\x3f" | ||
1240 | "\xd6\x4a\xe1\x78\x0f\x83\x1a\xb1" | ||
1241 | "\x25\xbc\x53\xea\x5e\xf5\x8c\x00" | ||
1242 | "\x97\x2e\xc5\x39\xd0\x67\xfe\x72" | ||
1243 | "\x09\xa0\x14\xab\x42\xd9\x4d\xe4" | ||
1244 | "\x7b\x12\x86\x1d\xb4\x28\xbf\x56" | ||
1245 | "\xed\x61\xf8\x8f\x03\x9a\x31\xc8" | ||
1246 | "\x3c\xd3\x6a\x01\x75\x0c\xa3\x17" | ||
1247 | "\xae\x45\xdc\x50\xe7\x7e\x15\x89" | ||
1248 | "\x20\xb7\x2b\xc2\x59\xf0\x64\xfb" | ||
1249 | "\x92\x06\x9d\x34\xcb\x3f\xd6\x6d" | ||
1250 | "\x04\x78\x0f\xa6\x1a\xb1\x48\xdf" | ||
1251 | "\x53\xea\x81\x18\x8c\x23\xba\x2e" | ||
1252 | "\xc5\x5c\xf3\x67\xfe\x95\x09\xa0" | ||
1253 | "\x37\xce\x42\xd9\x70\x07\x7b\x12" | ||
1254 | "\xa9\x1d\xb4\x4b\xe2\x56\xed\x84" | ||
1255 | "\x1b\x8f\x26\xbd\x31\xc8\x5f\xf6" | ||
1256 | "\x6a\x01\x98\x0c\xa3\x3a\xd1\x45" | ||
1257 | "\xdc\x73\x0a\x7e\x15\xac\x20\xb7" | ||
1258 | "\x4e\xe5\x59\xf0\x87\x1e\x92\x29" | ||
1259 | "\xc0\x34\xcb\x62\xf9\x6d\x04\x9b" | ||
1260 | "\x0f\xa6\x3d\xd4\x48\xdf\x76\x0d" | ||
1261 | "\x81\x18\xaf\x23\xba\x51\xe8\x5c" | ||
1262 | "\xf3\x8a\x21\x95\x2c\xc3\x37\xce" | ||
1263 | "\x65\xfc\x70\x07\x9e\x12\xa9\x40" | ||
1264 | "\xd7\x4b\xe2\x79\x10\x84\x1b\xb2" | ||
1265 | "\x26\xbd\x54\xeb\x5f\xf6\x8d\x01" | ||
1266 | "\x98\x2f\xc6\x3a\xd1\x68\xff\x73" | ||
1267 | "\x0a\xa1\x15\xac\x43\xda\x4e\xe5" | ||
1268 | "\x7c\x13\x87\x1e\xb5\x29\xc0\x57" | ||
1269 | "\xee\x62\xf9\x90\x04\x9b\x32\xc9" | ||
1270 | "\x3d\xd4\x6b\x02\x76\x0d\xa4\x18" | ||
1271 | "\xaf\x46\xdd\x51\xe8\x7f\x16\x8a" | ||
1272 | "\x21\xb8\x2c\xc3\x5a\xf1\x65\xfc" | ||
1273 | "\x93\x07\x9e\x35\xcc\x40\xd7\x6e" | ||
1274 | "\x05\x79\x10\xa7\x1b\xb2\x49\xe0" | ||
1275 | "\x54\xeb\x82\x19\x8d\x24\xbb\x2f" | ||
1276 | "\xc6\x5d\xf4\x68\xff\x96\x0a\xa1" | ||
1277 | "\x38\xcf\x43\xda\x71\x08\x7c\x13" | ||
1278 | "\xaa\x1e\xb5\x4c\xe3\x57\xee\x85" | ||
1279 | "\x1c\x90\x27\xbe\x32\xc9\x60\xf7" | ||
1280 | "\x6b\x02\x99\x0d\xa4\x3b\xd2\x46" | ||
1281 | "\xdd\x74\x0b\x7f\x16\xad\x21\xb8" | ||
1282 | "\x4f\xe6\x5a\xf1\x88\x1f\x93\x2a" | ||
1283 | "\xc1\x35\xcc\x63\xfa\x6e\x05\x9c" | ||
1284 | "\x10\xa7\x3e\xd5\x49\xe0\x77\x0e" | ||
1285 | "\x82\x19\xb0\x24\xbb\x52\xe9\x5d" | ||
1286 | "\xf4\x8b\x22\x96\x2d\xc4\x38\xcf" | ||
1287 | "\x66\xfd\x71\x08\x9f\x13\xaa\x41" | ||
1288 | "\xd8\x4c\xe3\x7a\x11\x85\x1c\xb3" | ||
1289 | "\x27\xbe\x55\xec\x60\xf7\x8e\x02" | ||
1290 | "\x99\x30\xc7\x3b\xd2\x69\x00\x74" | ||
1291 | "\x0b\xa2\x16\xad\x44\xdb\x4f\xe6" | ||
1292 | "\x7d\x14\x88\x1f\xb6\x2a\xc1\x58" | ||
1293 | "\xef\x63\xfa\x91\x05\x9c\x33\xca" | ||
1294 | "\x3e\xd5\x6c\x03\x77\x0e\xa5\x19" | ||
1295 | "\xb0\x47\xde\x52\xe9\x80\x17\x8b" | ||
1296 | "\x22\xb9\x2d\xc4\x5b\xf2\x66\xfd" | ||
1297 | "\x94\x08\x9f\x36\xcd\x41\xd8\x6f" | ||
1298 | "\x06\x7a\x11\xa8\x1c\xb3\x4a\xe1" | ||
1299 | "\x55\xec\x83\x1a\x8e\x25\xbc\x30" | ||
1300 | "\xc7\x5e\xf5\x69\x00\x97\x0b\xa2" | ||
1301 | "\x39\xd0\x44\xdb\x72\x09\x7d\x14" | ||
1302 | "\xab\x1f\xb6\x4d\xe4\x58\xef\x86" | ||
1303 | "\x1d\x91\x28\xbf\x33\xca\x61\xf8" | ||
1304 | "\x6c\x03\x9a\x0e\xa5\x3c\xd3\x47" | ||
1305 | "\xde\x75\x0c\x80\x17\xae\x22\xb9" | ||
1306 | "\x50\xe7\x5b\xf2\x89\x20\x94\x2b" | ||
1307 | "\xc2\x36\xcd\x64\xfb\x6f\x06\x9d" | ||
1308 | "\x11\xa8\x3f\xd6\x4a\xe1\x78\x0f" | ||
1309 | "\x83\x1a\xb1\x25\xbc\x53\xea\x5e" | ||
1310 | "\xf5\x8c\x00\x97\x2e\xc5\x39\xd0" | ||
1311 | "\x67\xfe\x72\x09\xa0\x14\xab\x42" | ||
1312 | "\xd9\x4d\xe4\x7b\x12\x86\x1d\xb4" | ||
1313 | "\x28\xbf\x56\xed\x61\xf8\x8f\x03" | ||
1314 | "\x9a\x31\xc8\x3c\xd3\x6a\x01\x75" | ||
1315 | "\x0c\xa3\x17\xae\x45\xdc\x50\xe7" | ||
1316 | "\x7e\x15\x89\x20\xb7\x2b\xc2\x59" | ||
1317 | "\xf0\x64\xfb\x92\x06\x9d\x34\xcb" | ||
1318 | "\x3f\xd6\x6d\x04\x78\x0f\xa6\x1a" | ||
1319 | "\xb1\x48\xdf\x53\xea\x81\x18\x8c" | ||
1320 | "\x23\xba\x2e\xc5\x5c\xf3\x67\xfe" | ||
1321 | "\x95\x09\xa0\x37\xce\x42\xd9\x70" | ||
1322 | "\x07\x7b\x12\xa9\x1d\xb4\x4b\xe2" | ||
1323 | "\x56\xed\x84\x1b\x8f\x26\xbd\x31" | ||
1324 | "\xc8\x5f\xf6\x6a\x01\x98\x0c\xa3" | ||
1325 | "\x3a\xd1\x45\xdc\x73\x0a\x7e\x15" | ||
1326 | "\xac\x20\xb7\x4e\xe5\x59\xf0\x87" | ||
1327 | "\x1e\x92\x29\xc0\x34\xcb\x62\xf9" | ||
1328 | "\x6d\x04\x9b\x0f\xa6\x3d\xd4\x48" | ||
1329 | "\xdf\x76\x0d\x81\x18\xaf\x23\xba" | ||
1330 | "\x51\xe8\x5c\xf3\x8a\x21\x95\x2c" | ||
1331 | "\xc3\x37\xce\x65\xfc\x70\x07\x9e" | ||
1332 | "\x12\xa9\x40\xd7\x4b\xe2\x79\x10" | ||
1333 | "\x84\x1b\xb2\x26\xbd\x54\xeb\x5f" | ||
1334 | "\xf6\x8d\x01\x98\x2f\xc6\x3a\xd1" | ||
1335 | "\x68\xff\x73\x0a\xa1\x15\xac\x43" | ||
1336 | "\xda\x4e\xe5\x7c\x13\x87\x1e\xb5" | ||
1337 | "\x29\xc0\x57\xee\x62\xf9\x90\x04" | ||
1338 | "\x9b\x32\xc9\x3d\xd4\x6b\x02\x76" | ||
1339 | "\x0d\xa4\x18\xaf\x46\xdd\x51\xe8" | ||
1340 | "\x7f\x16\x8a\x21\xb8\x2c\xc3\x5a" | ||
1341 | "\xf1\x65\xfc\x93\x07\x9e\x35\xcc" | ||
1342 | "\x40\xd7\x6e\x05\x79\x10\xa7\x1b" | ||
1343 | "\xb2\x49\xe0\x54\xeb\x82\x19\x8d" | ||
1344 | "\x24\xbb\x2f\xc6\x5d\xf4\x68\xff" | ||
1345 | "\x96\x0a\xa1\x38\xcf\x43\xda\x71" | ||
1346 | "\x08\x7c\x13\xaa\x1e\xb5\x4c", | ||
1347 | .psize = 1023, | ||
1348 | .digest = "\xde\x41\x04\xbd\xda\xda\xd9\x71" | ||
1349 | "\xf7\xfa\x80\xf5\xea\x11\x03\xb1" | ||
1350 | "\x3b\x6a\xbc\x5f\xb9\x66\x26\xf7" | ||
1351 | "\x8a\x97\xbb\xf2\x07\x08\x38\x30", | ||
1080 | }, | 1352 | }, |
1081 | }; | 1353 | }; |
1082 | 1354 | ||
@@ -1109,6 +1381,144 @@ static const struct hash_testvec sha3_384_tv_template[] = { | |||
1109 | "\x9b\xfd\xbc\x32\xb9\xd4\xad\x5a" | 1381 | "\x9b\xfd\xbc\x32\xb9\xd4\xad\x5a" |
1110 | "\xa0\x4a\x1f\x07\x6e\x62\xfe\xa1" | 1382 | "\xa0\x4a\x1f\x07\x6e\x62\xfe\xa1" |
1111 | "\x9e\xef\x51\xac\xd0\x65\x7c\x22", | 1383 | "\x9e\xef\x51\xac\xd0\x65\x7c\x22", |
1384 | .np = 2, | ||
1385 | .tap = { 28, 28 }, | ||
1386 | }, { | ||
1387 | .plaintext = "\x08\x9f\x13\xaa\x41\xd8\x4c\xe3" | ||
1388 | "\x7a\x11\x85\x1c\xb3\x27\xbe\x55" | ||
1389 | "\xec\x60\xf7\x8e\x02\x99\x30\xc7" | ||
1390 | "\x3b\xd2\x69\x00\x74\x0b\xa2\x16" | ||
1391 | "\xad\x44\xdb\x4f\xe6\x7d\x14\x88" | ||
1392 | "\x1f\xb6\x2a\xc1\x58\xef\x63\xfa" | ||
1393 | "\x91\x05\x9c\x33\xca\x3e\xd5\x6c" | ||
1394 | "\x03\x77\x0e\xa5\x19\xb0\x47\xde" | ||
1395 | "\x52\xe9\x80\x17\x8b\x22\xb9\x2d" | ||
1396 | "\xc4\x5b\xf2\x66\xfd\x94\x08\x9f" | ||
1397 | "\x36\xcd\x41\xd8\x6f\x06\x7a\x11" | ||
1398 | "\xa8\x1c\xb3\x4a\xe1\x55\xec\x83" | ||
1399 | "\x1a\x8e\x25\xbc\x30\xc7\x5e\xf5" | ||
1400 | "\x69\x00\x97\x0b\xa2\x39\xd0\x44" | ||
1401 | "\xdb\x72\x09\x7d\x14\xab\x1f\xb6" | ||
1402 | "\x4d\xe4\x58\xef\x86\x1d\x91\x28" | ||
1403 | "\xbf\x33\xca\x61\xf8\x6c\x03\x9a" | ||
1404 | "\x0e\xa5\x3c\xd3\x47\xde\x75\x0c" | ||
1405 | "\x80\x17\xae\x22\xb9\x50\xe7\x5b" | ||
1406 | "\xf2\x89\x20\x94\x2b\xc2\x36\xcd" | ||
1407 | "\x64\xfb\x6f\x06\x9d\x11\xa8\x3f" | ||
1408 | "\xd6\x4a\xe1\x78\x0f\x83\x1a\xb1" | ||
1409 | "\x25\xbc\x53\xea\x5e\xf5\x8c\x00" | ||
1410 | "\x97\x2e\xc5\x39\xd0\x67\xfe\x72" | ||
1411 | "\x09\xa0\x14\xab\x42\xd9\x4d\xe4" | ||
1412 | "\x7b\x12\x86\x1d\xb4\x28\xbf\x56" | ||
1413 | "\xed\x61\xf8\x8f\x03\x9a\x31\xc8" | ||
1414 | "\x3c\xd3\x6a\x01\x75\x0c\xa3\x17" | ||
1415 | "\xae\x45\xdc\x50\xe7\x7e\x15\x89" | ||
1416 | "\x20\xb7\x2b\xc2\x59\xf0\x64\xfb" | ||
1417 | "\x92\x06\x9d\x34\xcb\x3f\xd6\x6d" | ||
1418 | "\x04\x78\x0f\xa6\x1a\xb1\x48\xdf" | ||
1419 | "\x53\xea\x81\x18\x8c\x23\xba\x2e" | ||
1420 | "\xc5\x5c\xf3\x67\xfe\x95\x09\xa0" | ||
1421 | "\x37\xce\x42\xd9\x70\x07\x7b\x12" | ||
1422 | "\xa9\x1d\xb4\x4b\xe2\x56\xed\x84" | ||
1423 | "\x1b\x8f\x26\xbd\x31\xc8\x5f\xf6" | ||
1424 | "\x6a\x01\x98\x0c\xa3\x3a\xd1\x45" | ||
1425 | "\xdc\x73\x0a\x7e\x15\xac\x20\xb7" | ||
1426 | "\x4e\xe5\x59\xf0\x87\x1e\x92\x29" | ||
1427 | "\xc0\x34\xcb\x62\xf9\x6d\x04\x9b" | ||
1428 | "\x0f\xa6\x3d\xd4\x48\xdf\x76\x0d" | ||
1429 | "\x81\x18\xaf\x23\xba\x51\xe8\x5c" | ||
1430 | "\xf3\x8a\x21\x95\x2c\xc3\x37\xce" | ||
1431 | "\x65\xfc\x70\x07\x9e\x12\xa9\x40" | ||
1432 | "\xd7\x4b\xe2\x79\x10\x84\x1b\xb2" | ||
1433 | "\x26\xbd\x54\xeb\x5f\xf6\x8d\x01" | ||
1434 | "\x98\x2f\xc6\x3a\xd1\x68\xff\x73" | ||
1435 | "\x0a\xa1\x15\xac\x43\xda\x4e\xe5" | ||
1436 | "\x7c\x13\x87\x1e\xb5\x29\xc0\x57" | ||
1437 | "\xee\x62\xf9\x90\x04\x9b\x32\xc9" | ||
1438 | "\x3d\xd4\x6b\x02\x76\x0d\xa4\x18" | ||
1439 | "\xaf\x46\xdd\x51\xe8\x7f\x16\x8a" | ||
1440 | "\x21\xb8\x2c\xc3\x5a\xf1\x65\xfc" | ||
1441 | "\x93\x07\x9e\x35\xcc\x40\xd7\x6e" | ||
1442 | "\x05\x79\x10\xa7\x1b\xb2\x49\xe0" | ||
1443 | "\x54\xeb\x82\x19\x8d\x24\xbb\x2f" | ||
1444 | "\xc6\x5d\xf4\x68\xff\x96\x0a\xa1" | ||
1445 | "\x38\xcf\x43\xda\x71\x08\x7c\x13" | ||
1446 | "\xaa\x1e\xb5\x4c\xe3\x57\xee\x85" | ||
1447 | "\x1c\x90\x27\xbe\x32\xc9\x60\xf7" | ||
1448 | "\x6b\x02\x99\x0d\xa4\x3b\xd2\x46" | ||
1449 | "\xdd\x74\x0b\x7f\x16\xad\x21\xb8" | ||
1450 | "\x4f\xe6\x5a\xf1\x88\x1f\x93\x2a" | ||
1451 | "\xc1\x35\xcc\x63\xfa\x6e\x05\x9c" | ||
1452 | "\x10\xa7\x3e\xd5\x49\xe0\x77\x0e" | ||
1453 | "\x82\x19\xb0\x24\xbb\x52\xe9\x5d" | ||
1454 | "\xf4\x8b\x22\x96\x2d\xc4\x38\xcf" | ||
1455 | "\x66\xfd\x71\x08\x9f\x13\xaa\x41" | ||
1456 | "\xd8\x4c\xe3\x7a\x11\x85\x1c\xb3" | ||
1457 | "\x27\xbe\x55\xec\x60\xf7\x8e\x02" | ||
1458 | "\x99\x30\xc7\x3b\xd2\x69\x00\x74" | ||
1459 | "\x0b\xa2\x16\xad\x44\xdb\x4f\xe6" | ||
1460 | "\x7d\x14\x88\x1f\xb6\x2a\xc1\x58" | ||
1461 | "\xef\x63\xfa\x91\x05\x9c\x33\xca" | ||
1462 | "\x3e\xd5\x6c\x03\x77\x0e\xa5\x19" | ||
1463 | "\xb0\x47\xde\x52\xe9\x80\x17\x8b" | ||
1464 | "\x22\xb9\x2d\xc4\x5b\xf2\x66\xfd" | ||
1465 | "\x94\x08\x9f\x36\xcd\x41\xd8\x6f" | ||
1466 | "\x06\x7a\x11\xa8\x1c\xb3\x4a\xe1" | ||
1467 | "\x55\xec\x83\x1a\x8e\x25\xbc\x30" | ||
1468 | "\xc7\x5e\xf5\x69\x00\x97\x0b\xa2" | ||
1469 | "\x39\xd0\x44\xdb\x72\x09\x7d\x14" | ||
1470 | "\xab\x1f\xb6\x4d\xe4\x58\xef\x86" | ||
1471 | "\x1d\x91\x28\xbf\x33\xca\x61\xf8" | ||
1472 | "\x6c\x03\x9a\x0e\xa5\x3c\xd3\x47" | ||
1473 | "\xde\x75\x0c\x80\x17\xae\x22\xb9" | ||
1474 | "\x50\xe7\x5b\xf2\x89\x20\x94\x2b" | ||
1475 | "\xc2\x36\xcd\x64\xfb\x6f\x06\x9d" | ||
1476 | "\x11\xa8\x3f\xd6\x4a\xe1\x78\x0f" | ||
1477 | "\x83\x1a\xb1\x25\xbc\x53\xea\x5e" | ||
1478 | "\xf5\x8c\x00\x97\x2e\xc5\x39\xd0" | ||
1479 | "\x67\xfe\x72\x09\xa0\x14\xab\x42" | ||
1480 | "\xd9\x4d\xe4\x7b\x12\x86\x1d\xb4" | ||
1481 | "\x28\xbf\x56\xed\x61\xf8\x8f\x03" | ||
1482 | "\x9a\x31\xc8\x3c\xd3\x6a\x01\x75" | ||
1483 | "\x0c\xa3\x17\xae\x45\xdc\x50\xe7" | ||
1484 | "\x7e\x15\x89\x20\xb7\x2b\xc2\x59" | ||
1485 | "\xf0\x64\xfb\x92\x06\x9d\x34\xcb" | ||
1486 | "\x3f\xd6\x6d\x04\x78\x0f\xa6\x1a" | ||
1487 | "\xb1\x48\xdf\x53\xea\x81\x18\x8c" | ||
1488 | "\x23\xba\x2e\xc5\x5c\xf3\x67\xfe" | ||
1489 | "\x95\x09\xa0\x37\xce\x42\xd9\x70" | ||
1490 | "\x07\x7b\x12\xa9\x1d\xb4\x4b\xe2" | ||
1491 | "\x56\xed\x84\x1b\x8f\x26\xbd\x31" | ||
1492 | "\xc8\x5f\xf6\x6a\x01\x98\x0c\xa3" | ||
1493 | "\x3a\xd1\x45\xdc\x73\x0a\x7e\x15" | ||
1494 | "\xac\x20\xb7\x4e\xe5\x59\xf0\x87" | ||
1495 | "\x1e\x92\x29\xc0\x34\xcb\x62\xf9" | ||
1496 | "\x6d\x04\x9b\x0f\xa6\x3d\xd4\x48" | ||
1497 | "\xdf\x76\x0d\x81\x18\xaf\x23\xba" | ||
1498 | "\x51\xe8\x5c\xf3\x8a\x21\x95\x2c" | ||
1499 | "\xc3\x37\xce\x65\xfc\x70\x07\x9e" | ||
1500 | "\x12\xa9\x40\xd7\x4b\xe2\x79\x10" | ||
1501 | "\x84\x1b\xb2\x26\xbd\x54\xeb\x5f" | ||
1502 | "\xf6\x8d\x01\x98\x2f\xc6\x3a\xd1" | ||
1503 | "\x68\xff\x73\x0a\xa1\x15\xac\x43" | ||
1504 | "\xda\x4e\xe5\x7c\x13\x87\x1e\xb5" | ||
1505 | "\x29\xc0\x57\xee\x62\xf9\x90\x04" | ||
1506 | "\x9b\x32\xc9\x3d\xd4\x6b\x02\x76" | ||
1507 | "\x0d\xa4\x18\xaf\x46\xdd\x51\xe8" | ||
1508 | "\x7f\x16\x8a\x21\xb8\x2c\xc3\x5a" | ||
1509 | "\xf1\x65\xfc\x93\x07\x9e\x35\xcc" | ||
1510 | "\x40\xd7\x6e\x05\x79\x10\xa7\x1b" | ||
1511 | "\xb2\x49\xe0\x54\xeb\x82\x19\x8d" | ||
1512 | "\x24\xbb\x2f\xc6\x5d\xf4\x68\xff" | ||
1513 | "\x96\x0a\xa1\x38\xcf\x43\xda\x71" | ||
1514 | "\x08\x7c\x13\xaa\x1e\xb5\x4c", | ||
1515 | .psize = 1023, | ||
1516 | .digest = "\x1b\x19\x4d\x8f\xd5\x36\x87\x71" | ||
1517 | "\xcf\xca\x30\x85\x9b\xc1\x25\xc7" | ||
1518 | "\x00\xcb\x73\x8a\x8e\xd4\xfe\x2b" | ||
1519 | "\x1a\xa2\xdc\x2e\x41\xfd\x52\x51" | ||
1520 | "\xd2\x21\xae\x2d\xc7\xae\x8c\x40" | ||
1521 | "\xb9\xe6\x56\x48\x03\xcd\x88\x6b", | ||
1112 | }, | 1522 | }, |
1113 | }; | 1523 | }; |
1114 | 1524 | ||
@@ -1147,6 +1557,146 @@ static const struct hash_testvec sha3_512_tv_template[] = { | |||
1147 | "\xba\x1b\x0d\x8d\xc7\x8c\x08\x63" | 1557 | "\xba\x1b\x0d\x8d\xc7\x8c\x08\x63" |
1148 | "\x46\xb5\x33\xb4\x9c\x03\x0d\x99" | 1558 | "\x46\xb5\x33\xb4\x9c\x03\x0d\x99" |
1149 | "\xa2\x7d\xaf\x11\x39\xd6\xe7\x5e", | 1559 | "\xa2\x7d\xaf\x11\x39\xd6\xe7\x5e", |
1560 | .np = 2, | ||
1561 | .tap = { 28, 28 }, | ||
1562 | }, { | ||
1563 | .plaintext = "\x08\x9f\x13\xaa\x41\xd8\x4c\xe3" | ||
1564 | "\x7a\x11\x85\x1c\xb3\x27\xbe\x55" | ||
1565 | "\xec\x60\xf7\x8e\x02\x99\x30\xc7" | ||
1566 | "\x3b\xd2\x69\x00\x74\x0b\xa2\x16" | ||
1567 | "\xad\x44\xdb\x4f\xe6\x7d\x14\x88" | ||
1568 | "\x1f\xb6\x2a\xc1\x58\xef\x63\xfa" | ||
1569 | "\x91\x05\x9c\x33\xca\x3e\xd5\x6c" | ||
1570 | "\x03\x77\x0e\xa5\x19\xb0\x47\xde" | ||
1571 | "\x52\xe9\x80\x17\x8b\x22\xb9\x2d" | ||
1572 | "\xc4\x5b\xf2\x66\xfd\x94\x08\x9f" | ||
1573 | "\x36\xcd\x41\xd8\x6f\x06\x7a\x11" | ||
1574 | "\xa8\x1c\xb3\x4a\xe1\x55\xec\x83" | ||
1575 | "\x1a\x8e\x25\xbc\x30\xc7\x5e\xf5" | ||
1576 | "\x69\x00\x97\x0b\xa2\x39\xd0\x44" | ||
1577 | "\xdb\x72\x09\x7d\x14\xab\x1f\xb6" | ||
1578 | "\x4d\xe4\x58\xef\x86\x1d\x91\x28" | ||
1579 | "\xbf\x33\xca\x61\xf8\x6c\x03\x9a" | ||
1580 | "\x0e\xa5\x3c\xd3\x47\xde\x75\x0c" | ||
1581 | "\x80\x17\xae\x22\xb9\x50\xe7\x5b" | ||
1582 | "\xf2\x89\x20\x94\x2b\xc2\x36\xcd" | ||
1583 | "\x64\xfb\x6f\x06\x9d\x11\xa8\x3f" | ||
1584 | "\xd6\x4a\xe1\x78\x0f\x83\x1a\xb1" | ||
1585 | "\x25\xbc\x53\xea\x5e\xf5\x8c\x00" | ||
1586 | "\x97\x2e\xc5\x39\xd0\x67\xfe\x72" | ||
1587 | "\x09\xa0\x14\xab\x42\xd9\x4d\xe4" | ||
1588 | "\x7b\x12\x86\x1d\xb4\x28\xbf\x56" | ||
1589 | "\xed\x61\xf8\x8f\x03\x9a\x31\xc8" | ||
1590 | "\x3c\xd3\x6a\x01\x75\x0c\xa3\x17" | ||
1591 | "\xae\x45\xdc\x50\xe7\x7e\x15\x89" | ||
1592 | "\x20\xb7\x2b\xc2\x59\xf0\x64\xfb" | ||
1593 | "\x92\x06\x9d\x34\xcb\x3f\xd6\x6d" | ||
1594 | "\x04\x78\x0f\xa6\x1a\xb1\x48\xdf" | ||
1595 | "\x53\xea\x81\x18\x8c\x23\xba\x2e" | ||
1596 | "\xc5\x5c\xf3\x67\xfe\x95\x09\xa0" | ||
1597 | "\x37\xce\x42\xd9\x70\x07\x7b\x12" | ||
1598 | "\xa9\x1d\xb4\x4b\xe2\x56\xed\x84" | ||
1599 | "\x1b\x8f\x26\xbd\x31\xc8\x5f\xf6" | ||
1600 | "\x6a\x01\x98\x0c\xa3\x3a\xd1\x45" | ||
1601 | "\xdc\x73\x0a\x7e\x15\xac\x20\xb7" | ||
1602 | "\x4e\xe5\x59\xf0\x87\x1e\x92\x29" | ||
1603 | "\xc0\x34\xcb\x62\xf9\x6d\x04\x9b" | ||
1604 | "\x0f\xa6\x3d\xd4\x48\xdf\x76\x0d" | ||
1605 | "\x81\x18\xaf\x23\xba\x51\xe8\x5c" | ||
1606 | "\xf3\x8a\x21\x95\x2c\xc3\x37\xce" | ||
1607 | "\x65\xfc\x70\x07\x9e\x12\xa9\x40" | ||
1608 | "\xd7\x4b\xe2\x79\x10\x84\x1b\xb2" | ||
1609 | "\x26\xbd\x54\xeb\x5f\xf6\x8d\x01" | ||
1610 | "\x98\x2f\xc6\x3a\xd1\x68\xff\x73" | ||
1611 | "\x0a\xa1\x15\xac\x43\xda\x4e\xe5" | ||
1612 | "\x7c\x13\x87\x1e\xb5\x29\xc0\x57" | ||
1613 | "\xee\x62\xf9\x90\x04\x9b\x32\xc9" | ||
1614 | "\x3d\xd4\x6b\x02\x76\x0d\xa4\x18" | ||
1615 | "\xaf\x46\xdd\x51\xe8\x7f\x16\x8a" | ||
1616 | "\x21\xb8\x2c\xc3\x5a\xf1\x65\xfc" | ||
1617 | "\x93\x07\x9e\x35\xcc\x40\xd7\x6e" | ||
1618 | "\x05\x79\x10\xa7\x1b\xb2\x49\xe0" | ||
1619 | "\x54\xeb\x82\x19\x8d\x24\xbb\x2f" | ||
1620 | "\xc6\x5d\xf4\x68\xff\x96\x0a\xa1" | ||
1621 | "\x38\xcf\x43\xda\x71\x08\x7c\x13" | ||
1622 | "\xaa\x1e\xb5\x4c\xe3\x57\xee\x85" | ||
1623 | "\x1c\x90\x27\xbe\x32\xc9\x60\xf7" | ||
1624 | "\x6b\x02\x99\x0d\xa4\x3b\xd2\x46" | ||
1625 | "\xdd\x74\x0b\x7f\x16\xad\x21\xb8" | ||
1626 | "\x4f\xe6\x5a\xf1\x88\x1f\x93\x2a" | ||
1627 | "\xc1\x35\xcc\x63\xfa\x6e\x05\x9c" | ||
1628 | "\x10\xa7\x3e\xd5\x49\xe0\x77\x0e" | ||
1629 | "\x82\x19\xb0\x24\xbb\x52\xe9\x5d" | ||
1630 | "\xf4\x8b\x22\x96\x2d\xc4\x38\xcf" | ||
1631 | "\x66\xfd\x71\x08\x9f\x13\xaa\x41" | ||
1632 | "\xd8\x4c\xe3\x7a\x11\x85\x1c\xb3" | ||
1633 | "\x27\xbe\x55\xec\x60\xf7\x8e\x02" | ||
1634 | "\x99\x30\xc7\x3b\xd2\x69\x00\x74" | ||
1635 | "\x0b\xa2\x16\xad\x44\xdb\x4f\xe6" | ||
1636 | "\x7d\x14\x88\x1f\xb6\x2a\xc1\x58" | ||
1637 | "\xef\x63\xfa\x91\x05\x9c\x33\xca" | ||
1638 | "\x3e\xd5\x6c\x03\x77\x0e\xa5\x19" | ||
1639 | "\xb0\x47\xde\x52\xe9\x80\x17\x8b" | ||
1640 | "\x22\xb9\x2d\xc4\x5b\xf2\x66\xfd" | ||
1641 | "\x94\x08\x9f\x36\xcd\x41\xd8\x6f" | ||
1642 | "\x06\x7a\x11\xa8\x1c\xb3\x4a\xe1" | ||
1643 | "\x55\xec\x83\x1a\x8e\x25\xbc\x30" | ||
1644 | "\xc7\x5e\xf5\x69\x00\x97\x0b\xa2" | ||
1645 | "\x39\xd0\x44\xdb\x72\x09\x7d\x14" | ||
1646 | "\xab\x1f\xb6\x4d\xe4\x58\xef\x86" | ||
1647 | "\x1d\x91\x28\xbf\x33\xca\x61\xf8" | ||
1648 | "\x6c\x03\x9a\x0e\xa5\x3c\xd3\x47" | ||
1649 | "\xde\x75\x0c\x80\x17\xae\x22\xb9" | ||
1650 | "\x50\xe7\x5b\xf2\x89\x20\x94\x2b" | ||
1651 | "\xc2\x36\xcd\x64\xfb\x6f\x06\x9d" | ||
1652 | "\x11\xa8\x3f\xd6\x4a\xe1\x78\x0f" | ||
1653 | "\x83\x1a\xb1\x25\xbc\x53\xea\x5e" | ||
1654 | "\xf5\x8c\x00\x97\x2e\xc5\x39\xd0" | ||
1655 | "\x67\xfe\x72\x09\xa0\x14\xab\x42" | ||
1656 | "\xd9\x4d\xe4\x7b\x12\x86\x1d\xb4" | ||
1657 | "\x28\xbf\x56\xed\x61\xf8\x8f\x03" | ||
1658 | "\x9a\x31\xc8\x3c\xd3\x6a\x01\x75" | ||
1659 | "\x0c\xa3\x17\xae\x45\xdc\x50\xe7" | ||
1660 | "\x7e\x15\x89\x20\xb7\x2b\xc2\x59" | ||
1661 | "\xf0\x64\xfb\x92\x06\x9d\x34\xcb" | ||
1662 | "\x3f\xd6\x6d\x04\x78\x0f\xa6\x1a" | ||
1663 | "\xb1\x48\xdf\x53\xea\x81\x18\x8c" | ||
1664 | "\x23\xba\x2e\xc5\x5c\xf3\x67\xfe" | ||
1665 | "\x95\x09\xa0\x37\xce\x42\xd9\x70" | ||
1666 | "\x07\x7b\x12\xa9\x1d\xb4\x4b\xe2" | ||
1667 | "\x56\xed\x84\x1b\x8f\x26\xbd\x31" | ||
1668 | "\xc8\x5f\xf6\x6a\x01\x98\x0c\xa3" | ||
1669 | "\x3a\xd1\x45\xdc\x73\x0a\x7e\x15" | ||
1670 | "\xac\x20\xb7\x4e\xe5\x59\xf0\x87" | ||
1671 | "\x1e\x92\x29\xc0\x34\xcb\x62\xf9" | ||
1672 | "\x6d\x04\x9b\x0f\xa6\x3d\xd4\x48" | ||
1673 | "\xdf\x76\x0d\x81\x18\xaf\x23\xba" | ||
1674 | "\x51\xe8\x5c\xf3\x8a\x21\x95\x2c" | ||
1675 | "\xc3\x37\xce\x65\xfc\x70\x07\x9e" | ||
1676 | "\x12\xa9\x40\xd7\x4b\xe2\x79\x10" | ||
1677 | "\x84\x1b\xb2\x26\xbd\x54\xeb\x5f" | ||
1678 | "\xf6\x8d\x01\x98\x2f\xc6\x3a\xd1" | ||
1679 | "\x68\xff\x73\x0a\xa1\x15\xac\x43" | ||
1680 | "\xda\x4e\xe5\x7c\x13\x87\x1e\xb5" | ||
1681 | "\x29\xc0\x57\xee\x62\xf9\x90\x04" | ||
1682 | "\x9b\x32\xc9\x3d\xd4\x6b\x02\x76" | ||
1683 | "\x0d\xa4\x18\xaf\x46\xdd\x51\xe8" | ||
1684 | "\x7f\x16\x8a\x21\xb8\x2c\xc3\x5a" | ||
1685 | "\xf1\x65\xfc\x93\x07\x9e\x35\xcc" | ||
1686 | "\x40\xd7\x6e\x05\x79\x10\xa7\x1b" | ||
1687 | "\xb2\x49\xe0\x54\xeb\x82\x19\x8d" | ||
1688 | "\x24\xbb\x2f\xc6\x5d\xf4\x68\xff" | ||
1689 | "\x96\x0a\xa1\x38\xcf\x43\xda\x71" | ||
1690 | "\x08\x7c\x13\xaa\x1e\xb5\x4c", | ||
1691 | .psize = 1023, | ||
1692 | .digest = "\x59\xda\x30\xe3\x90\xe4\x3d\xde" | ||
1693 | "\xf0\xc6\x42\x17\xd7\xb2\x26\x47" | ||
1694 | "\x90\x28\xa6\x84\xe8\x49\x7a\x86" | ||
1695 | "\xd6\xb8\x9e\xf8\x07\x59\x21\x03" | ||
1696 | "\xad\xd2\xed\x48\xa3\xb9\xa5\xf0" | ||
1697 | "\xb3\xae\x02\x2b\xb8\xaf\xc3\x3b" | ||
1698 | "\xd6\xb0\x8f\xcb\x76\x8b\xa7\x41" | ||
1699 | "\x32\xc2\x8e\x50\x91\x86\x90\xfb", | ||
1150 | }, | 1700 | }, |
1151 | }; | 1701 | }; |
1152 | 1702 | ||
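For readers unfamiliar with the testmgr vector layout touched above: each entry carries the message in .plaintext/.psize and the expected output in .digest, while the optional .np/.tap pair asks the test manager to also replay the same message as .np partial updates of .tap[i] bytes each (the { 28, 28 } added above splits the existing 56-byte vector into two updates). The sketch below is a hedged userspace illustration of that chunked-update walk, not the kernel's actual testmgr code; the struct name hash_testvec_demo, the fold_update() placeholder digest and main() are invented for the example.

```c
#include <stdio.h>
#include <stdint.h>

/* Simplified stand-in for the kernel's struct hash_testvec fields used here. */
struct hash_testvec_demo {
	const char *plaintext;
	unsigned int psize;	/* total message length in bytes           */
	unsigned char np;	/* number of partial updates, 0 = one-shot */
	unsigned char tap[8];	/* length of each partial update           */
};

/*
 * Placeholder "digest": XOR-fold the input into one byte.  In the real
 * test manager this would be crypto_ahash update/final calls; a trivial
 * fold keeps the example self-contained and runnable.
 */
static void fold_update(uint8_t *state, const char *buf, unsigned int len)
{
	while (len--)
		*state ^= (uint8_t)*buf++;
}

static uint8_t run_vector(const struct hash_testvec_demo *v)
{
	uint8_t state = 0;
	unsigned int off = 0, i;

	if (!v->np) {
		/* One-shot: hash all .psize bytes in a single update. */
		fold_update(&state, v->plaintext, v->psize);
		return state;
	}

	/* Chunked: replay the same message as .np updates of .tap[i] bytes. */
	for (i = 0; i < v->np; i++) {
		fold_update(&state, v->plaintext + off, v->tap[i]);
		off += v->tap[i];
	}
	return state;
}

int main(void)
{
	static const struct hash_testvec_demo v = {
		.plaintext = "abcdefghbcdefghicdefghijdefghijk"
			     "efghijklfghijklmghijklmn",
		.psize = 56,
		.np = 2,
		.tap = { 28, 28 },	/* same split as the vectors above */
	};
	struct hash_testvec_demo oneshot = v;

	oneshot.np = 0;
	printf("chunked:  %02x\n", run_vector(&v));
	printf("one-shot: %02x\n", run_vector(&oneshot));
	/* Both paths must agree, which is exactly what testmgr checks. */
	return run_vector(&v) != run_vector(&oneshot);
}
```

Under this reading, the new 1023-byte vectors without .np exercise the one-shot path with an input larger than one SHA3 rate block, while the { 28, 28 } taps added to the existing 56-byte vectors exercise the partial-update path.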
diff --git a/crypto/twofish_common.c b/crypto/twofish_common.c index 5f62c4f9f6e0..f3a0dd25f871 100644 --- a/crypto/twofish_common.c +++ b/crypto/twofish_common.c | |||
@@ -24,9 +24,8 @@ | |||
24 | * GNU General Public License for more details. | 24 | * GNU General Public License for more details. |
25 | * | 25 | * |
26 | * You should have received a copy of the GNU General Public License | 26 | * You should have received a copy of the GNU General Public License |
27 | * along with this program; if not, write to the Free Software | 27 | * along with this program. If not, see <http://www.gnu.org/licenses/>. |
28 | * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 | 28 | * |
29 | * USA | ||
30 | * | 29 | * |
31 | * This code is a "clean room" implementation, written from the paper | 30 | * This code is a "clean room" implementation, written from the paper |
32 | * _Twofish: A 128-Bit Block Cipher_ by Bruce Schneier, John Kelsey, | 31 | * _Twofish: A 128-Bit Block Cipher_ by Bruce Schneier, John Kelsey, |
diff --git a/crypto/twofish_generic.c b/crypto/twofish_generic.c index ebf7a3efb572..07e62433fbfb 100644 --- a/crypto/twofish_generic.c +++ b/crypto/twofish_generic.c | |||
@@ -23,9 +23,8 @@ | |||
23 | * GNU General Public License for more details. | 23 | * GNU General Public License for more details. |
24 | * | 24 | * |
25 | * You should have received a copy of the GNU General Public License | 25 | * You should have received a copy of the GNU General Public License |
26 | * along with this program; if not, write to the Free Software | 26 | * along with this program. If not, see <http://www.gnu.org/licenses/>. |
27 | * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 | 27 | * |
28 | * USA | ||
29 | * | 28 | * |
30 | * This code is a "clean room" implementation, written from the paper | 29 | * This code is a "clean room" implementation, written from the paper |
31 | * _Twofish: A 128-Bit Block Cipher_ by Bruce Schneier, John Kelsey, | 30 | * _Twofish: A 128-Bit Block Cipher_ by Bruce Schneier, John Kelsey, |
diff --git a/crypto/xcbc.c b/crypto/xcbc.c index df90b332554c..25c75af50d3f 100644 --- a/crypto/xcbc.c +++ b/crypto/xcbc.c | |||
@@ -12,8 +12,7 @@ | |||
12 | * GNU General Public License for more details. | 12 | * GNU General Public License for more details. |
13 | * | 13 | * |
14 | * You should have received a copy of the GNU General Public License | 14 | * You should have received a copy of the GNU General Public License |
15 | * along with this program; if not, write to the Free Software | 15 | * along with this program. If not, see <http://www.gnu.org/licenses/>. |
16 | * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA | ||
17 | * | 16 | * |
18 | * Author: | 17 | * Author: |
19 | * Kazunori Miyazawa <miyazawa@linux-ipv6.org> | 18 | * Kazunori Miyazawa <miyazawa@linux-ipv6.org> |