path: root/crypto/lrw.c
Age    Commit message    Author
2019-05-06  Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds)

Pull crypto update from Herbert Xu:

 "API:
   - Add support for AEAD in simd
   - Add fuzz testing to testmgr
   - Add panic_on_fail module parameter to testmgr
   - Use per-CPU struct instead of multiple variables in scompress
   - Change verify API for akcipher

  Algorithms:
   - Convert x86 AEAD algorithms over to simd
   - Forbid 2-key 3DES in FIPS mode
   - Add EC-RDSA (GOST 34.10) algorithm

  Drivers:
   - Set output IV with ctr-aes in crypto4xx
   - Set output IV in rockchip
   - Fix potential length overflow with hashing in sun4i-ss
   - Fix computation error with ctr in vmx
   - Add SM4 protected keys support in ccree
   - Remove long-broken mxc-scc driver
   - Add rfc4106(gcm(aes)) cipher support in cavium/nitrox"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (179 commits)
  crypto: ccree - use a proper le32 type for le32 val
  crypto: ccree - remove set but not used variable 'du_size'
  crypto: ccree - Make cc_sec_disable static
  crypto: ccree - fix spelling mistake "protedcted" -> "protected"
  crypto: caam/qi2 - generate hash keys in-place
  crypto: caam/qi2 - fix DMA mapping of stack memory
  crypto: caam/qi2 - fix zero-length buffer DMA mapping
  crypto: stm32/cryp - update to return iv_out
  crypto: stm32/cryp - remove request mutex protection
  crypto: stm32/cryp - add weak key check for DES
  crypto: atmel - remove set but not used variable 'alg_name'
  crypto: picoxcell - Use dev_get_drvdata()
  crypto: crypto4xx - get rid of redundant using_sd variable
  crypto: crypto4xx - use sync skcipher for fallback
  crypto: crypto4xx - fix cfb and ofb "overran dst buffer" issues
  crypto: crypto4xx - fix ctr-aes missing output IV
  crypto: ecrdsa - select ASN1 and OID_REGISTRY for EC-RDSA
  crypto: ux500 - use ccflags-y instead of CFLAGS_<basename>.o
  crypto: ccree - handle tee fips error during power management resume
  crypto: ccree - add function to handle cryptocell tee fips error
  ...
2019-04-18  crypto: run initcalls for generic implementations earlier  (Eric Biggers)

Use subsys_initcall for registration of all templates and generic algorithm implementations, rather than module_init. Then change cryptomgr to use arch_initcall, to place it before the subsys_initcalls.

This is needed so that when both a generic and optimized implementation of an algorithm are built into the kernel (not loadable modules), the generic implementation is registered before the optimized one. Otherwise, the self-tests for the optimized implementation are unable to allocate the generic implementation for the new comparison fuzz tests.

Note that on arm, a side effect of this change is that self-tests for generic implementations may run before the unaligned access handler has been installed. So, unaligned accesses will crash the kernel. This is arguably a good thing as it makes it easier to detect that type of bug.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
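For the LRW template itself, the change boils down to switching the initcall level at which the template is registered. The snippet below is a hedged sketch, not the exact diff; the identifiers (lrw_module_init, lrw_tmpl) follow recent crypto/lrw.c but should be treated as approximate.

    #include <linux/module.h>
    #include <crypto/algapi.h>

    /* Illustrative: register the generic template at subsys_initcall time
     * instead of module_init time, so a built-in generic implementation
     * comes up before the optimized one; cryptomgr itself moves to
     * arch_initcall so it is ready before any of these run. */
    static int __init lrw_module_init(void)
    {
            return crypto_register_template(&lrw_tmpl);  /* template defined elsewhere in lrw.c */
    }

    /* was: module_init(lrw_module_init); */
    subsys_initcall(lrw_module_init);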
2019-04-18  crypto: lrw - don't access already-freed walk.iv  (Eric Biggers)

If the user-provided IV needs to be aligned to the algorithm's alignmask, then skcipher_walk_virt() copies the IV into a new aligned buffer walk.iv. But skcipher_walk_virt() can fail afterwards, and then if the caller unconditionally accesses walk.iv, it's a use-after-free.

Fix this in the LRW template by checking the return value of skcipher_walk_virt().

This bug was detected by my patches that improve testmgr to fuzz algorithms against their generic implementation. When the extra self-tests were run on a KASAN-enabled kernel, a KASAN use-after-free splat occurred during lrw(aes) testing.

Fixes: c778f96bf347 ("crypto: lrw - Optimize tweak computation")
Cc: <stable@vger.kernel.org> # v4.20+
Cc: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
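A hedged sketch of the corrected pattern (the function body is illustrative; the real helper in lrw.c is xor_tweak(), and error handling is abbreviated):

    #include <crypto/internal/skcipher.h>

    /* Sketch only: check skcipher_walk_virt()'s return value before
     * touching walk.iv, since on failure the aligned copy of the IV may
     * already have been freed. */
    static int lrw_xor_tweak_sketch(struct skcipher_request *req)
    {
            struct skcipher_walk w;
            int err;

            err = skcipher_walk_virt(&w, req, false);
            if (err)
                    return err;     /* never dereference w.iv on this path */

            /* only now is it safe to read the initial tweak from w.iv */

            return err;
    }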
2019-04-18  crypto: lrw - Fix atomic sleep when walking skcipher  (Herbert Xu)

When we perform a walk in the completion function, we need to ensure that it is atomic.

Fixes: ac3c8f36c31d ("crypto: lrw - Do not use auxiliary buffer")
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
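The flag at issue is skcipher_walk_virt()'s "atomic" argument. Below is a hedged sketch of the completion path; names are approximate and lrw_xor_tweak_atomic_sketch() is a hypothetical stand-in for a tweak-XOR pass that walks with atomic == true.

    #include <crypto/internal/skcipher.h>

    /* Sketch: the completion callback may run in softirq context, so any
     * walk performed here must be atomic (no sleeping allocations). */
    static void lrw_crypt_done_sketch(struct crypto_async_request *areq, int err)
    {
            struct skcipher_request *req = areq->data;

            if (!err)
                    err = lrw_xor_tweak_atomic_sketch(req); /* hypothetical helper */

            skcipher_request_complete(req, err);
    }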
2018-10-05  crypto: lrw - fix rebase error after out of bounds fix  (Ard Biesheuvel)

Due to an unfortunate interaction between commit fbe1a850b3b1 ("crypto: lrw - Fix out-of bounds access on counter overflow") and commit c778f96bf347 ("crypto: lrw - Optimize tweak computation"), we ended up with a version of next_index() that always returns 127.

Fixes: c778f96bf347 ("crypto: lrw - Optimize tweak computation")
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
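For reference, a sketch of what next_index() works out to once both commits are applied (this is close to the upstream shape, but treat it as illustrative): it returns the position of the lowest zero bit of the 128-bit block counter, clamped to 127 when the whole counter wraps.

    #include <linux/bitops.h>

    /* Sketch: advance the 128-bit counter and return the index of the
     * precomputed multiplication table entry to fold into the tweak.
     * 127 is the last valid index of the 128-entry table and is reused
     * when all 128 bits overflow from ones to zeros. */
    static int next_index(u32 *counter)         /* counter[4] */
    {
            int i, res = 0;

            for (i = 0; i < 4; i++) {
                    if (counter[i] + 1 != 0)
                            return res + ffz(counter[i]++);
                    counter[i] = 0;
                    res += 32;
            }

            return 127;
    }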
2018-09-21  crypto: lrw - Do not use auxiliary buffer  (Ondrej Mosnacek)

This patch simplifies the LRW template to recompute the LRW tweaks from scratch in the second pass and thus also removes the need to allocate a dynamic buffer using kmalloc(). As discussed at [1], the use of kmalloc causes deadlocks with dm-crypt.

PERFORMANCE MEASUREMENTS (x86_64)
Performed using: https://gitlab.com/omos/linux-crypto-bench
Crypto driver used: lrw(ecb-aes-aesni)

The results show that the new code has about the same performance as the old code. For 512-byte messages it seems to be even slightly faster, but that might just be noise.

Before:
       ALGORITHM  KEY (b)  DATA (B)  TIME ENC (ns)  TIME DEC (ns)
        lrw(aes)      256        64            200            203
        lrw(aes)      320        64            202            204
        lrw(aes)      384        64            204            205
        lrw(aes)      256       512            415            415
        lrw(aes)      320       512            432            440
        lrw(aes)      384       512            449            451
        lrw(aes)      256      4096           1838           1995
        lrw(aes)      320      4096           2123           1980
        lrw(aes)      384      4096           2100           2119
        lrw(aes)      256     16384           7183           6954
        lrw(aes)      320     16384           7844           7631
        lrw(aes)      384     16384           8256           8126
        lrw(aes)      256     32768          14772          14484
        lrw(aes)      320     32768          15281          15431
        lrw(aes)      384     32768          16469          16293

After:
       ALGORITHM  KEY (b)  DATA (B)  TIME ENC (ns)  TIME DEC (ns)
        lrw(aes)      256        64            197            196
        lrw(aes)      320        64            200            197
        lrw(aes)      384        64            203            199
        lrw(aes)      256       512            385            380
        lrw(aes)      320       512            401            395
        lrw(aes)      384       512            415            415
        lrw(aes)      256      4096           1869           1846
        lrw(aes)      320      4096           2080           1981
        lrw(aes)      384      4096           2160           2109
        lrw(aes)      256     16384           7077           7127
        lrw(aes)      320     16384           7807           7766
        lrw(aes)      384     16384           8108           8357
        lrw(aes)      256     32768          14111          14454
        lrw(aes)      320     32768          15268          15082
        lrw(aes)      384     32768          16581          16250

[1] https://lkml.org/lkml/2018/8/23/1315

Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
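A hedged sketch of the resulting two-pass flow (the helper names xor_tweak_pre/xor_tweak_post follow lrw.c, while do_ecb_encrypt is a hypothetical stand-in for the ECB sub-request): the tweak stream is simply recomputed in the second pass instead of being cached in a kmalloc()ed buffer.

    #include <crypto/internal/skcipher.h>

    /* Sketch only: XOR tweaks in, run one ECB pass over the request,
     * then XOR the same (recomputed) tweaks back out. */
    static int lrw_encrypt_sketch(struct skcipher_request *req)
    {
            int err;

            err = xor_tweak_pre(req);       /* pass 1: P_i ^= T_i */
            if (err)
                    return err;

            err = do_ecb_encrypt(req);      /* hypothetical ECB sub-request helper */
            if (err)
                    return err;

            return xor_tweak_post(req);     /* pass 2: C_i ^= T_i, tweaks recomputed */
    }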
2018-09-21  crypto: lrw - Optimize tweak computation  (Ondrej Mosnacek)

This patch rewrites the tweak computation to a slightly simpler method that performs fewer bswaps. Based on performance measurements, the new code seems to provide slightly better performance than the old one.

PERFORMANCE MEASUREMENTS (x86_64)
Performed using: https://gitlab.com/omos/linux-crypto-bench
Crypto driver used: lrw(ecb-aes-aesni)

Before:
       ALGORITHM  KEY (b)  DATA (B)  TIME ENC (ns)  TIME DEC (ns)
        lrw(aes)      256        64            204            286
        lrw(aes)      320        64            227            203
        lrw(aes)      384        64            208            204
        lrw(aes)      256       512            441            439
        lrw(aes)      320       512            456            455
        lrw(aes)      384       512            469            483
        lrw(aes)      256      4096           2136           2190
        lrw(aes)      320      4096           2161           2213
        lrw(aes)      384      4096           2295           2369
        lrw(aes)      256     16384           7692           7868
        lrw(aes)      320     16384           8230           8691
        lrw(aes)      384     16384           8971           8813
        lrw(aes)      256     32768          15336          15560
        lrw(aes)      320     32768          16410          16346
        lrw(aes)      384     32768          18023          17465

After:
       ALGORITHM  KEY (b)  DATA (B)  TIME ENC (ns)  TIME DEC (ns)
        lrw(aes)      256        64            200            203
        lrw(aes)      320        64            202            204
        lrw(aes)      384        64            204            205
        lrw(aes)      256       512            415            415
        lrw(aes)      320       512            432            440
        lrw(aes)      384       512            449            451
        lrw(aes)      256      4096           1838           1995
        lrw(aes)      320      4096           2123           1980
        lrw(aes)      384      4096           2100           2119
        lrw(aes)      256     16384           7183           6954
        lrw(aes)      320     16384           7844           7631
        lrw(aes)      384     16384           8256           8126
        lrw(aes)      256     32768          14772          14484
        lrw(aes)      320     32768          15281          15431
        lrw(aes)      384     32768          16469          16293

Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-21  crypto: lrw - Fix out-of bounds access on counter overflow  (Ondrej Mosnacek)

When the LRW block counter overflows, the current implementation returns 128 as the index to the precomputed multiplication table, which has 128 entries. This patch fixes it to return the correct value (127).

Fixes: 64470f1b8510 ("[CRYPTO] lrw: Liskov Rivest Wagner, a tweakable narrow block cipher mode")
Cc: <stable@vger.kernel.org> # 2.6.20+
Reported-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-08-03  crypto: scatterwalk - remove 'chain' argument from scatterwalk_crypto_chain()  (Eric Biggers)

All callers pass chain=0 to scatterwalk_crypto_chain(). Remove this unneeded parameter.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-31  crypto: lrw - Free rctx->ext with kzfree  (Herbert Xu)

The buffer rctx->ext contains potentially sensitive data and should be freed with kzfree.

Cc: <stable@vger.kernel.org>
Fixes: 700cb3f5fe75 ("crypto: lrw - Convert to skcipher")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
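The change amounts to swapping the free primitive; a minimal illustration (rctx->ext is from the commit, while the surrounding helper and struct name are hypothetical):

    /* Illustrative: rctx->ext may hold tweak material derived from the
     * key, so zero it before handing it back to the allocator. */
    static void lrw_free_ext_sketch(struct rctx *rctx)
    {
            kzfree(rctx->ext);      /* was: kfree(rctx->ext); */
            rctx->ext = NULL;
    }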
2018-03-03  crypto: lrw - remove lrw_crypt()  (Eric Biggers)

Now that all users of lrw_crypt() have been removed in favor of the LRW template wrapping an ECB mode algorithm, remove lrw_crypt().

Also remove crypto/lrw.h as that is no longer needed either; and fold 'struct lrw_table_ctx' into 'struct priv', lrw_init_table() into setkey(), and lrw_free_table() into exit_tfm().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-11-03  crypto: remove redundant backlog checks on EBUSY  (Gilad Ben-Yossef)

Now that the -EBUSY return code only indicates backlog queueing, we can safely remove the now-redundant check for the CRYPTO_TFM_REQ_MAY_BACKLOG flag when -EBUSY is returned.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-10-12  crypto: lrw - Check for incorrect cipher name  (Christophe Jaillet)

If the cipher name does not start with 'ecb(' we should bail out, as done in the 'create()' function in 'crypto/xts.c'.

Fixes: 700cb3f5fe75 ("crypto: lrw - Convert to skcipher")
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
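A hedged sketch of the added sanity check (the wrapper function is hypothetical and error handling is abbreviated; upstream bails out through its cleanup labels in create()):

    #include <linux/errno.h>
    #include <linux/string.h>

    /* Sketch: the LRW template is built on top of an ECB instance, so
     * reject an inner algorithm whose name does not start with "ecb(",
     * mirroring the equivalent check in crypto/xts.c. */
    static int lrw_check_cipher_name_sketch(const char *cipher_name)
    {
            if (strncmp(cipher_name, "ecb(", 4) != 0)
                    return -EINVAL;
            return 0;
    }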
2017-10-12  crypto: lrw - Fix an error handling path in 'create()'  (Christophe Jaillet)

All error handling paths go to 'err_drop_spawn' except this one. In order to avoid a resource leak, we should do the same here.

Fixes: 700cb3f5fe75 ("crypto: lrw - Convert to skcipher")
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-04-10  crypto: lrw - Fix use-after-free on EINPROGRESS  (Herbert Xu)

When we get an EINPROGRESS completion in lrw, we will end up marking the request as done and freeing it. This then blows up when the request is really completed as we've already freed the memory.

Fixes: 700cb3f5fe75 ("crypto: lrw - Convert to skcipher")
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-03-24  crypto: xts,lrw - fix out-of-bounds write after kmalloc failure  (Eric Biggers)

In the generic XTS and LRW algorithms, for input data > 128 bytes, a temporary buffer is allocated to hold the values to be XOR'ed with the data before and after encryption or decryption. If the allocation fails, the fixed-size buffer embedded in the request buffer is meant to be used as a fallback --- resulting in more calls to the ECB algorithm, but still producing the correct result.

However, we weren't correctly limiting subreq->cryptlen in this case, resulting in pre_crypt() overrunning the embedded buffer. Fix this by setting subreq->cryptlen correctly.

Fixes: f1c131b45410 ("crypto: xts - Convert to skcipher")
Fixes: 700cb3f5fe75 ("crypto: lrw - Convert to skcipher")
Cc: stable@vger.kernel.org # v4.10+
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
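The shape of the fix, sketched (field and constant names such as LRW_BUFFER_SIZE, rctx->ext and gfp follow the lrw.c of that era, but treat the fragment as approximate): subreq->cryptlen only grows beyond the embedded buffer when the larger scratch buffer was actually allocated.

    /* Sketch: default to the fixed-size buffer embedded in the request;
     * only a successful kmalloc() may raise the per-subrequest length. */
    subreq->cryptlen = LRW_BUFFER_SIZE;
    if (req->cryptlen > LRW_BUFFER_SIZE) {
            unsigned int n = min(req->cryptlen, (unsigned int)PAGE_SIZE);

            rctx->ext = kmalloc(n, gfp);
            if (rctx->ext)
                    subreq->cryptlen = n;   /* else stay within the embedded buffer */
    }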
2016-11-28  crypto: lrw - Convert to skcipher  (Herbert Xu)

This patch converts lrw over to the skcipher interface. It also optimises the implementation to be based on ECB instead of the underlying cipher.

For compatibility the existing naming scheme of lrw(aes) is maintained as opposed to the more obvious one of lrw(ecb(aes)).

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-11-26  crypto: include crypto- module prefix in template  (Kees Cook)

This adds the module loading prefix "crypto-" to the template lookup as well.

For example, attempting to load 'vfat(blowfish)' via AF_ALG now correctly includes the "crypto-" prefix at every level, correctly rejecting "vfat":

  net-pf-38
  algif-hash
  crypto-vfat(blowfish)
  crypto-vfat(blowfish)-all
  crypto-vfat

Reported-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
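For a template like lrw, the prefixed alias comes from MODULE_ALIAS_CRYPTO(); the line below is as used by crypto/lrw.c, with the effect summarized (not quoted) in the comment:

    /* Declares both a plain "lrw" alias and a "crypto-lrw" alias, so
     * template auto-loading can request "crypto-lrw" and never pull in
     * an unrelated module that merely happens to be called "lrw". */
    MODULE_ALIAS_CRYPTO("lrw");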
2011-11-09  crypto: lrw - add interface for parallelized cipher implementations  (Jussi Kivilinna)

Export the gf128mul table initialization routines and add an lrw_crypt() function that can be used by cipher implementations that can benefit from parallelized cipher operations.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2011-11-09  crypto: lrw - split gf128mul table initialization from setkey  (Jussi Kivilinna)

Split the gf128mul table initialization from setkey so that it can be used outside the lrw module.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2011-11-09  crypto: lrw - use blocksize constant  (Jussi Kivilinna)

LRW has a fixed block size of 16. Define LRW_BLOCK_SIZE and use it in place of crypto_cipher_blocksize().

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2011-11-09  crypto: lrw - fix memleak  (Jussi Kivilinna)

The LRW module leaks the child cipher's memory when init_tfm() fails because the child's block size is not 16.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
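A sketch of the fix in init_tfm() (surrounding code abbreviated; treat the fragment as illustrative): the freshly allocated child cipher must be released before returning the error.

    /* Sketch: on the block-size sanity check, free the child cipher that
     * was just allocated instead of leaking it. */
    if (crypto_cipher_blocksize(cipher) != LRW_BLOCK_SIZE) {
            crypto_free_cipher(cipher);     /* this release was missing */
            return -EINVAL;
    }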
2009-02-17  crypto: lrw - Fix big endian support  (Herbert Xu)

It turns out that LRW has never worked properly on big endian. This was never discussed because nobody actually used it that way. In fact, it was only discovered when Geert Uytterhoeven loaded it through tcrypt, which failed the test on it.

The fix is straightforward: on big endian, to find the nth bit we should be grouping the bits by words instead of bytes. So setbit128_bbe should xor with 128 - BITS_PER_LONG instead of 128 - BITS_PER_BYTE == 0x78.

Tested-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
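A hedged sketch of the helper in question, mirroring the historical setbit128_bbe(); verify against the tree before relying on the exact form:

    #include <linux/bitops.h>

    /* Set bit n of a 128-bit value stored in bbe (big-byte, big-bit
     * endian) order. As described above, the XOR that maps the bbe bit
     * number onto __set_bit()'s numbering must group bits by long words
     * on big endian (128 - BITS_PER_LONG) rather than by bytes
     * (128 - BITS_PER_BYTE == 0x78). */
    static inline void setbit128_bbe(void *b, int bit)
    {
            __set_bit(bit ^ (0x80 -
    #ifdef __BIG_ENDIAN
                             BITS_PER_LONG
    #else
                             BITS_PER_BYTE
    #endif
                            ), b);
    }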
2008-04-21  [CRYPTO] lrw: Replace all adds to big endian variables with be*_add_cpu  (Marcin Slusarz)

Replace all instances of:

  big_endian_variable = cpu_to_beX(beX_to_cpu(big_endian_variable) + expression_in_cpu_byteorder);

with:

  beX_add_cpu(&big_endian_variable, expression_in_cpu_byteorder);

Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Roel Kluin <12o3l@tiscali.nl>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
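Applied to a hypothetical 64-bit big-endian counter field (ctx->counter and inc are illustrative names), the transformation looks like this:

    /* before: round-trip the value through CPU byte order just to add */
    ctx->counter = cpu_to_be64(be64_to_cpu(ctx->counter) + inc);

    /* after: the helper performs the conversion and addition in one step */
    be64_add_cpu(&ctx->counter, inc);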
2008-02-07  Convert ERR_PTR(PTR_ERR(p)) instances to ERR_CAST(p)  (David Howells)

Convert instances of ERR_PTR(PTR_ERR(p)) to ERR_CAST(p) using:

  perl -spi -e 's/ERR_PTR[(]PTR_ERR[(](.*)[)][)]/ERR_CAST(\1)/' `grep -rl 'ERR_PTR[(]*PTR_ERR' fs crypto net security`

Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
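An illustrative instance of the conversion (the variable name alg is hypothetical):

    /* before: unwrap the error value and immediately re-wrap it */
    if (IS_ERR(alg))
            return ERR_PTR(PTR_ERR(alg));

    /* after: ERR_CAST() states the intent -- propagate this error pointer */
    if (IS_ERR(alg))
            return ERR_CAST(alg);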
2007-05-02  [CRYPTO] templates: Pass type/mask when creating instances  (Herbert Xu)

This patch passes the type/mask along when constructing instances of templates. This is in preparation for templates that may support multiple types of instances depending on what is requested. For example, the planned software async crypto driver will use this construct.

For the moment this allows us to check whether the instance constructed is of the correct type and avoid returning success if the type does not match.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-02-07  [CRYPTO] api: Add type-safe spawns  (Herbert Xu)

This patch allows spawns of specific types (e.g., cipher) to be allocated.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-12-06  [CRYPTO] lrw: round --> lrw_round  (David S. Miller)

Fixes:

  crypto/lrw.c:99: warning: conflicting types for built-in function ‘round’

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-12-06  [CRYPTO] lrw: Liskov Rivest Wagner, a tweakable narrow block cipher mode  (Rik Snel)

Main module: this implements the Liskov Rivest Wagner block cipher mode in the new blockcipher API. The implementation is based on ecb.c.

The LRW-32-AES specification I used can be found at:
http://grouper.ieee.org/groups/1619/email/pdf00017.pdf

It implements the optimization specified as optional in the specification, and in addition it uses optimized multiplication routines from gf128mul.c.

Since gf128mul.[ch] is not tested on big endian, this cipher mode may currently fail badly on big endian machines.

Signed-off-by: Rik Snel <rsnel@cube.dyndns.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
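For orientation, the LRW construction this file implements can be summarized in the usual textbook form (not quoted from the commit). Each 16-byte block is processed as

    T_i = K_2 \otimes i
    C_i = E_{K_1}(P_i \oplus T_i) \oplus T_i

where \otimes is multiplication in GF(2^{128}), i is the logical block index (the template derives the starting index from the 16-byte IV), K_1 is the block cipher key and K_2 is the tweak key. The per-block GF(2^{128}) multiplications are what gf128mul.c accelerates, and the optional optimization mentioned above is the precomputed multiplication table that avoids recomputing them for every block.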