path: root/crypto
Age  Commit message  Author
2019-05-07  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next  (Linus Torvalds)
Pull networking updates from David Miller:
 "Highlights:

   1) Support AES128-CCM ciphers in kTLS, from Vakul Garg.

   2) Add fib_sync_mem to control the amount of dirty memory we allow
      to queue up between synchronize RCU calls, from David Ahern.

   3) Make flow classifier more lockless, from Vlad Buslov.

   4) Add PHY downshift support to aquantia driver, from Heiner
      Kallweit.

   5) Add SKB cache for TCP rx and tx, from Eric Dumazet. This reduces
      contention on SLAB spinlocks in heavy RPC workloads.

   6) Partial GSO offload support in XFRM, from Boris Pismenny.

   7) Add fast link down support to ethtool, from Heiner Kallweit.

   8) Use siphash for IP ID generator, from Eric Dumazet.

   9) Pull nexthops even further out from ipv4/ipv6 routes and FIB
      entries, from David Ahern.

  10) Move skb->xmit_more into a per-cpu variable, from Florian
      Westphal.

  11) Improve eBPF verifier speed and increase maximum program size,
      from Alexei Starovoitov.

  12) Eliminate per-bucket spinlocks in rhashtable, and instead use bit
      spinlocks. From Neil Brown.

  13) Allow tunneling with GUE encap in ipvs, from Jacky Hu.

  14) Improve link partner cap detection in generic PHY code, from
      Heiner Kallweit.

  15) Add layer 2 encap support to bpf_skb_adjust_room(), from Alan
      Maguire.

  16) Remove SKB list implementation assumptions in SCTP, from yours
      truly.

  17) Various cleanups, optimizations, and simplifications in r8169
      driver. From Heiner Kallweit.

  18) Add memory accounting on TX and RX path of SCTP, from Xin Long.

  19) Switch PHY drivers over to use dynamic feature detection, from
      Heiner Kallweit.

  20) Support flow steering without masking in dpaa2-eth, from Ioana
      Ciocoi.

  21) Implement ndo_get_devlink_port in netdevsim driver, from Jiri
      Pirko.

  22) Increase the strictness of parsing for current and future netlink
      attributes, and also export such policies to userspace. From
      Johannes Berg.

  23) Allow DSA tag drivers to be modular, from Andrew Lunn.

  24) Remove legacy DSA probing support, also from Andrew Lunn.

  25) Allow ll_temac driver to be used on non-x86 platforms, from Esben
      Haabendal.

  26) Add a generic tracepoint for TX queue timeouts to ease debugging,
      from Cong Wang.

  27) More indirect call optimizations, from Paolo Abeni"

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1763 commits)
  cxgb4: Fix error path in cxgb4_init_module
  net: phy: improve pause mode reporting in phy_print_status
  dt-bindings: net: Fix a typo in the phy-mode list for ethernet bindings
  net: macb: Change interrupt and napi enable order in open
  net: ll_temac: Improve error message on error IRQ
  net/sched: remove block pointer from common offload structure
  net: ethernet: support of_get_mac_address new ERR_PTR error
  net: usb: smsc: fix warning reported by kbuild test robot
  staging: octeon-ethernet: Fix of_get_mac_address ERR_PTR check
  net: dsa: support of_get_mac_address new ERR_PTR error
  net: dsa: sja1105: Fix status initialization in sja1105_get_ethtool_stats
  vrf: sit mtu should not be updated when vrf netdev is the link
  net: dsa: Fix error cleanup path in dsa_init_module
  l2tp: Fix possible NULL pointer dereference
  taprio: add null check on sched_nest to avoid potential null pointer dereference
  net: mvpp2: cls: fix less than zero check on a u32 variable
  net_sched: sch_fq: handle non connected flows
  net_sched: sch_fq: do not assume EDT packets are ordered
  net: hns3: use devm_kcalloc when allocating desc_cb
  net: hns3: some cleanup for struct hns3_enet_ring
  ...
2019-05-06  Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds)
Pull crypto update from Herbert Xu:
 "API:
   - Add support for AEAD in simd
   - Add fuzz testing to testmgr
   - Add panic_on_fail module parameter to testmgr
   - Use a per-CPU struct instead of multiple variables in scompress
   - Change the verify API for akcipher

  Algorithms:
   - Convert x86 AEAD algorithms over to simd
   - Forbid 2-key 3DES in FIPS mode
   - Add EC-RDSA (GOST 34.10) algorithm

  Drivers:
   - Set output IV with ctr-aes in crypto4xx
   - Set output IV in rockchip
   - Fix potential length overflow with hashing in sun4i-ss
   - Fix computation error with ctr in vmx
   - Add SM4 protected keys support in ccree
   - Remove long-broken mxc-scc driver
   - Add rfc4106(gcm(aes)) cipher support in cavium/nitrox"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (179 commits)
  crypto: ccree - use a proper le32 type for le32 val
  crypto: ccree - remove set but not used variable 'du_size'
  crypto: ccree - Make cc_sec_disable static
  crypto: ccree - fix spelling mistake "protedcted" -> "protected"
  crypto: caam/qi2 - generate hash keys in-place
  crypto: caam/qi2 - fix DMA mapping of stack memory
  crypto: caam/qi2 - fix zero-length buffer DMA mapping
  crypto: stm32/cryp - update to return iv_out
  crypto: stm32/cryp - remove request mutex protection
  crypto: stm32/cryp - add weak key check for DES
  crypto: atmel - remove set but not used variable 'alg_name'
  crypto: picoxcell - Use dev_get_drvdata()
  crypto: crypto4xx - get rid of redundant using_sd variable
  crypto: crypto4xx - use sync skcipher for fallback
  crypto: crypto4xx - fix cfb and ofb "overran dst buffer" issues
  crypto: crypto4xx - fix ctr-aes missing output IV
  crypto: ecrdsa - select ASN1 and OID_REGISTRY for EC-RDSA
  crypto: ux500 - use ccflags-y instead of CFLAGS_<basename>.o
  crypto: ccree - handle tee fips error during power management resume
  crypto: ccree - add function to handle cryptocell tee fips error
  ...
2019-05-02  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net  (David S. Miller)
Three trivial overlapping conflicts. Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-27  netlink: make validation more configurable for future strictness  (Johannes Berg)
We currently have two levels of strict validation:

 1) liberal (default)
     - undefined (type >= max) & NLA_UNSPEC attributes accepted
     - attribute length >= expected accepted
     - garbage at end of message accepted
 2) strict (opt-in)
     - NLA_UNSPEC attributes accepted
     - attribute length >= expected accepted

Split out parsing strictness into four different options:
 * TRAILING     - check that there's no trailing data after parsing
                  attributes (in message or nested)
 * MAXTYPE      - reject attrs > max known type
 * UNSPEC       - reject attributes with NLA_UNSPEC policy entries
 * STRICT_ATTRS - strictly validate attribute size

The default for future things should be *everything*. The current *_strict() is a combination of TRAILING and MAXTYPE, and is renamed to _deprecated_strict(). The current regular parsing has none of this, and is renamed to *_parse_deprecated().

Additionally, this allows us to selectively set one of the new flags even on old policies. Notably, the UNSPEC flag could be useful here, since it can be arranged (by filling in the policy) not to be an incompatible userspace ABI change, and would then, going forward, prevent forgetting attribute entries. The same applies to the POLICY flag.

We end up with the following renames:
 * nla_parse           -> nla_parse_deprecated
 * nla_parse_strict    -> nla_parse_deprecated_strict
 * nlmsg_parse         -> nlmsg_parse_deprecated
 * nlmsg_parse_strict  -> nlmsg_parse_deprecated_strict
 * nla_parse_nested    -> nla_parse_nested_deprecated
 * nla_validate_nested -> nla_validate_nested_deprecated

Using spatch, of course:

    @@
    expression TB, MAX, HEAD, LEN, POL, EXT;
    @@
    -nla_parse(TB, MAX, HEAD, LEN, POL, EXT)
    +nla_parse_deprecated(TB, MAX, HEAD, LEN, POL, EXT)

    @@
    expression NLH, HDRLEN, TB, MAX, POL, EXT;
    @@
    -nlmsg_parse(NLH, HDRLEN, TB, MAX, POL, EXT)
    +nlmsg_parse_deprecated(NLH, HDRLEN, TB, MAX, POL, EXT)

    @@
    expression NLH, HDRLEN, TB, MAX, POL, EXT;
    @@
    -nlmsg_parse_strict(NLH, HDRLEN, TB, MAX, POL, EXT)
    +nlmsg_parse_deprecated_strict(NLH, HDRLEN, TB, MAX, POL, EXT)

    @@
    expression TB, MAX, NLA, POL, EXT;
    @@
    -nla_parse_nested(TB, MAX, NLA, POL, EXT)
    +nla_parse_nested_deprecated(TB, MAX, NLA, POL, EXT)

    @@
    expression START, MAX, POL, EXT;
    @@
    -nla_validate_nested(START, MAX, POL, EXT)
    +nla_validate_nested_deprecated(START, MAX, POL, EXT)

    @@
    expression NLH, HDRLEN, MAX, POL, EXT;
    @@
    -nlmsg_validate(NLH, HDRLEN, MAX, POL, EXT)
    +nlmsg_validate_deprecated(NLH, HDRLEN, MAX, POL, EXT)

For this patch, don't actually add the strict, non-renamed versions yet so that it breaks compile if I get it wrong.

Also, while at it, make nla_validate and nla_parse go down to a common __nla_validate_parse() function to avoid code duplication.

Ultimately, this allows us to have very strict validation for every new caller of nla_parse()/nlmsg_parse() etc as re-introduced in the next patch, while existing things will continue to work as is. In effect then, this adds fully strict validation for any new command.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-25  crypto: ecrdsa - select ASN1 and OID_REGISTRY for EC-RDSA  (Vitaly Chikunov)
Fix an undefined symbol issue in the ecrdsa_generic module when ASN1 or OID_REGISTRY aren't enabled in the config, by selecting these options for CRYPTO_ECRDSA:

  ERROR: "asn1_ber_decoder" [crypto/ecrdsa_generic.ko] undefined!
  ERROR: "look_up_OID" [crypto/ecrdsa_generic.ko] undefined!

Reported-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Acked-by: Randy Dunlap <rdunlap@infradead.org> # build-tested
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-25  crypto: testmgr - add missing self test entries for protected keys  (Gilad Ben-Yossef)
Mark the SM4 and missing AES algorithms using protected keys, which are identical to the same algorithms with no HW-protected keys, as tested.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-25  crypto: shash - remove shash_desc::flags  (Eric Biggers)
The flags field in 'struct shash_desc' never actually does anything. The only ostensibly supported flag is CRYPTO_TFM_REQ_MAY_SLEEP. However, no shash algorithm ever sleeps, making this flag a no-op. With this being the case, inevitably some users who can't sleep wrongly pass MAY_SLEEP. These would all need to be fixed if any shash algorithm actually started sleeping. For example, the shash_ahash_*() functions, which wrap a shash algorithm with the ahash API, pass through MAY_SLEEP from the ahash API to the shash API. However, the shash functions are called under kmap_atomic(), so actually they're assumed to never sleep. Even if it turns out that some users do need preemption points while hashing large buffers, we could easily provide a helper function crypto_shash_update_large() which divides the data into smaller chunks and calls crypto_shash_update() and cond_resched() for each chunk. It's not necessary to have a flag in 'struct shash_desc', nor is it necessary to make individual shash algorithms aware of this at all. Therefore, remove shash_desc::flags, and document that the crypto_shash_*() functions can be called from any context. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
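For illustration, a minimal sketch of the chunking helper suggested above, assuming the standard shash API; crypto_shash_update_large() and the 4 KiB chunk size are hypothetical:

    #include <crypto/hash.h>
    #include <linux/sched.h>

    /* Hash a large buffer with a preemption point after each chunk. */
    static int crypto_shash_update_large(struct shash_desc *desc,
                                         const u8 *data, unsigned int len)
    {
            while (len) {
                    unsigned int n = min(len, 4096U);   /* arbitrary chunk */
                    int err = crypto_shash_update(desc, data, n);

                    if (err)
                            return err;
                    data += n;
                    len -= n;
                    cond_resched();
            }
            return 0;
    }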
2019-04-25  crypto: shash - remove useless crypto_yield() in shash_ahash_digest()  (Eric Biggers)
The crypto_yield() in shash_ahash_digest() occurs after the entire digest operation already happened, so there's no real point. Remove it. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-19  crypto: ccm - fix incompatibility between "ccm" and "ccm_base"  (Eric Biggers)
CCM instances can be created by either the "ccm" template, which only allows choosing the block cipher, e.g. "ccm(aes)"; or by "ccm_base", which allows choosing the ctr and cbcmac implementations, e.g. "ccm_base(ctr(aes-generic),cbcmac(aes-generic))". However, a "ccm_base" instance prevents a "ccm" instance from being registered using the same implementations. Nor will the instance be found by lookups of "ccm". This can be used as a denial of service. Moreover, "ccm_base" instances are never tested by the crypto self-tests, even if there are compatible "ccm" tests. The root cause of these problems is that instances of the two templates use different cra_names. Therefore, fix these problems by making "ccm_base" instances set the same cra_name as "ccm" instances, e.g. "ccm(aes)" instead of "ccm_base(ctr(aes-generic),cbcmac(aes-generic))". This requires extracting the block cipher name from the name of the ctr and cbcmac algorithms. It also requires starting to verify that the algorithms are really ctr and cbcmac using the same block cipher, not something else entirely. But it would be bizarre if anyone were actually using non-ccm-compatible algorithms with ccm_base, so this shouldn't break anyone in practice. Fixes: 4a49b499dfa0 ("[CRYPTO] ccm: Added CCM mode") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-19  crypto: gcm - fix incompatibility between "gcm" and "gcm_base"  (Eric Biggers)
GCM instances can be created by either the "gcm" template, which only allows choosing the block cipher, e.g. "gcm(aes)"; or by "gcm_base", which allows choosing the ctr and ghash implementations, e.g. "gcm_base(ctr(aes-generic),ghash-generic)". However, a "gcm_base" instance prevents a "gcm" instance from being registered using the same implementations. Nor will the instance be found by lookups of "gcm". This can be used as a denial of service. Moreover, "gcm_base" instances are never tested by the crypto self-tests, even if there are compatible "gcm" tests. The root cause of these problems is that instances of the two templates use different cra_names. Therefore, fix these problems by making "gcm_base" instances set the same cra_name as "gcm" instances, e.g. "gcm(aes)" instead of "gcm_base(ctr(aes-generic),ghash-generic)". This requires extracting the block cipher name from the name of the ctr algorithm. It also requires starting to verify that the algorithms are really ctr and ghash, not something else entirely. But it would be bizarre if anyone were actually using non-gcm-compatible algorithms with gcm_base, so this shouldn't break anyone in practice. Fixes: d00aa19b507b ("[CRYPTO] gcm: Allow block cipher parameter") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
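For context, lookups are by cra_name; a minimal sketch, assuming the standard AEAD API, of the allocation that must now find either kind of instance:

    #include <crypto/aead.h>
    #include <linux/err.h>

    static int gcm_alloc_example(void)
    {
            /* After this fix, a "gcm_base" instance also registers as
             * "gcm(aes)", so this lookup can find it. */
            struct crypto_aead *tfm = crypto_alloc_aead("gcm(aes)", 0, 0);

            if (IS_ERR(tfm))
                    return PTR_ERR(tfm);
            crypto_free_aead(tfm);
            return 0;
    }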
2019-04-18  crypto: shash - fix missed optimization in shash_ahash_digest()  (Eric Biggers)
shash_ahash_digest(), which is the ->digest() method for ahash tfms that use an shash algorithm, has an optimization where crypto_shash_digest() is called if the data is in a single page. But an off-by-one error prevented this path from being taken unless the user happened to provide extra data in the scatterlist. Fix it. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
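A hedged sketch of the corrected fast-path condition; the helper and names mirror the description above, not the exact code:

    #include <crypto/hash.h>
    #include <linux/scatterlist.h>

    /* The single-page fast path must also trigger when nbytes exactly
     * fills the scatterlist entry or the rest of the page: '<=', not '<'. */
    static bool single_page_fastpath_ok(struct ahash_request *req)
    {
            struct scatterlist *sg = req->src;

            return req->nbytes <= min(sg->length,
                                      (unsigned int)PAGE_SIZE - sg->offset);
    }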
2019-04-18  crypto: cryptd - remove ability to instantiate ablkciphers  (Eric Biggers)
Remove cryptd_alloc_ablkcipher() and the ability of cryptd to create algorithms with the deprecated "ablkcipher" type. This has been unused since commit 0e145b477dea ("crypto: ablk_helper - remove ablk_helper"). Instead, cryptd_alloc_skcipher() is used. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-18  crypto: scompress - initialize per-CPU variables on each CPU  (Sebastian Andrzej Siewior)
In commit 71052dcf4be70 ("crypto: scompress - Use per-CPU struct instead multiple variables") I accidentally initialized the memory multiple times on one random CPU. I should have initialized the memory on every CPU, as had been done earlier. I didn't notice this because the scheduler didn't move the task to another CPU. Guenter managed to do that, and the code crashed as expected.

Allocate / free the per-CPU memory on each CPU.

Fixes: 71052dcf4be70 ("crypto: scompress - Use per-CPU struct instead multiple variables")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
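A hedged sketch of the corrected pattern, with illustrative names (see the per-CPU scomp_scratch sketch further down for the struct):

    /* Initialize every CPU's copy, not just the one we happen to run on. */
    static int scomp_alloc_scratches_sketch(size_t size)
    {
            int i;

            for_each_possible_cpu(i) {
                    struct scomp_scratch *scratch =
                            per_cpu_ptr(&scomp_scratch, i);

                    scratch->src = vmalloc_node(size, cpu_to_node(i));
                    scratch->dst = vmalloc_node(size, cpu_to_node(i));
                    if (!scratch->src || !scratch->dst)
                            return -ENOMEM;
            }
            return 0;
    }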
2019-04-18  crypto: run initcalls for generic implementations earlier  (Eric Biggers)
Use subsys_initcall for registration of all templates and generic algorithm implementations, rather than module_init. Then change cryptomgr to use arch_initcall, to place it before the subsys_initcalls. This is needed so that when both a generic and optimized implementation of an algorithm are built into the kernel (not loadable modules), the generic implementation is registered before the optimized one. Otherwise, the self-tests for the optimized implementation are unable to allocate the generic implementation for the new comparison fuzz tests. Note that on arm, a side effect of this change is that self-tests for generic implementations may run before the unaligned access handler has been installed. So, unaligned accesses will crash the kernel. This is arguably a good thing as it makes it easier to detect that type of bug. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
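The mechanical change is a one-liner per algorithm; a sketch assuming a generic implementation module such as sha256_generic:

    /* before: generic implementation registered at module_init() time,
     * i.e. at device_initcall level for built-in code */
    module_init(sha256_generic_mod_init);

    /* after: registered at subsys_initcall level, so it already exists
     * when an optimized implementation's self-tests need a generic
     * reference (cryptomgr itself moves to arch_initcall) */
    subsys_initcall(sha256_generic_mod_init);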
2019-04-18  crypto: testmgr - fuzz AEADs against their generic implementation  (Eric Biggers)
When the extra crypto self-tests are enabled, test each AEAD algorithm against its generic implementation when one is available. This involves: checking the algorithm properties for consistency, then randomly generating test vectors using the generic implementation and running them against the implementation under test. Both good and bad inputs are tested. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-18  crypto: testmgr - fuzz skciphers against their generic implementation  (Eric Biggers)
When the extra crypto self-tests are enabled, test each skcipher algorithm against its generic implementation when one is available. This involves: checking the algorithm properties for consistency, then randomly generating test vectors using the generic implementation and running them against the implementation under test. Both good and bad inputs are tested. This has already detected a bug in the skcipher_walk API, a bug in the LRW template, and an inconsistency in the cts implementations. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-18  crypto: testmgr - fuzz hashes against their generic implementation  (Eric Biggers)
When the extra crypto self-tests are enabled, test each hash algorithm against its generic implementation when one is available. This involves: checking the algorithm properties for consistency, then randomly generating test vectors using the generic implementation and running them against the implementation under test. Both good and bad inputs are tested. This has already detected a bug in the x86 implementation of poly1305, bugs in crct10dif, and an inconsistency in cbcmac. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-18  crypto: testmgr - add helpers for fuzzing against generic implementation  (Eric Biggers)
Add some helper functions in preparation for fuzz testing algorithms against their generic implementation. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-18  crypto: testmgr - identify test vectors by name rather than number  (Eric Biggers)
In preparation for fuzz testing algorithms against their generic implementation, make error messages in testmgr identify test vectors by name rather than index. Built-in test vectors are simply "named" by their index in testmgr.h, as before. But (in later patches) generated test vectors will be given more descriptive names to help developers debug problems detected with them. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-18  crypto: testmgr - expand ability to test for errors  (Eric Biggers)
Update testmgr to support testing for specific errors from setkey() and digest() for hashes; setkey() and encrypt()/decrypt() for skciphers and ciphers; and setkey(), setauthsize(), and encrypt()/decrypt() for AEADs. This is useful because algorithms usually restrict the lengths or format of the message, key, and/or authentication tag in some way. And bad inputs should be tested too, not just good inputs. As part of this change, remove the ambiguously-named 'fail' flag and replace it with 'setkey_error = -EINVAL' for the only test vector that used it -- the DES weak key test vector. Note that this tightens the test to require -EINVAL rather than any error code, but AFAICS this won't cause any test failure. Other than that, these new fields aren't set on any test vectors yet. Later patches will do so. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
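A hedged sketch of what such a vector looks like, using the DES weak-key case mentioned above; the field layout follows the description, not necessarily testmgr.h verbatim:

    static const struct cipher_testvec des_weak_key_tv = {
            .key    = "\x01\x01\x01\x01\x01\x01\x01\x01", /* DES weak key */
            .klen   = 8,
            .setkey_error = -EINVAL,  /* replaces the old 'fail' flag */
    };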
2019-04-18  crypto: ecrdsa - add EC-RDSA test vectors to testmgr  (Vitaly Chikunov)
Add testmgr test vectors for the EC-RDSA algorithm for each of the five supported parameter sets (curves). Because there are no officially published test vectors for the curves, the vectors are generated by gost-engine.

Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-18  crypto: ecrdsa - add EC-RDSA (GOST 34.10) algorithm  (Vitaly Chikunov)
Add the Elliptic Curve Russian Digital Signature Algorithm (GOST R 34.10-2012, RFC 7091, ISO/IEC 14888-3), one of the Russian (and, since 2018, CIS countries') cryptographic standard algorithms (called GOST algorithms). Only signature verification is supported, with the intent that it be used in IMA.

Summary of the changes:

* crypto/Kconfig:
  - EC-RDSA is added into the Public-key cryptography section.
* crypto/Makefile:
  - ecrdsa objects are added.
* crypto/asymmetric_keys/x509_cert_parser.c:
  - Recognize EC-RDSA and Streebog OIDs.
* include/linux/oid_registry.h:
  - EC-RDSA OIDs are added to the enum. Also, two currently unimplemented
    curve OIDs are added for possible later extension (to not change
    numbering and grouping).
* crypto/ecc.c:
  - Kenneth MacKay's copyright date is updated to 2014, because
    vli_mmod_slow, ecc_point_add, and ecc_point_mult_shamir are based on
    his code from micro-ecc.
  - Functions needed for ecrdsa are EXPORT_SYMBOL'ed.
  - New functions:
      vli_is_negative - helper to determine the sign of a vli;
      vli_from_be64 - unpack a big-endian array into a vli (used for a
        signature);
      vli_from_le64 - unpack a little-endian array into a vli (used for a
        public key);
      vli_uadd, vli_usub - add/subtract a u64 value to/from a vli (used
        for increment/decrement);
      mul_64_64 - optimized to use __int128 where appropriate; this speeds
        up point multiplication (and, as a consequence, signature
        verification) by a factor of 1.5-2;
      vli_umult - multiply a vli by a small value (speeds up point
        multiplication by another factor of 1.5-2, depending on vli sizes);
      vli_mmod_special - modular reduction for one form of Pseudo-Mersenne
        primes (used for the A curves);
      vli_mmod_special2 - modular reduction for another form of
        Pseudo-Mersenne primes (used for the B curves);
      vli_mmod_barrett - modular reduction using a pre-computed value
        (used for the C curve);
      vli_mmod_slow - more general modular reduction, which is much slower
        (used when the modulus is the subgroup order);
      vli_mod_mult_slow - modular multiplication;
      ecc_point_add - add two points;
      ecc_point_mult_shamir - add two points multiplied by scalars in one
        combined multiplication (this gives a speed-up by another factor
        of 2 compared to two separate multiplications);
      ecc_is_pubkey_valid_partial - an additional sanity check is added.
  - Updated vli_mmod_fast with a non-strict heuristic to call the optimal
    modular reduction function depending on the prime value.
  - All computations for the previously defined (two NIST) curves should
    be unaffected.
* crypto/ecc.h:
  - Newly exported functions are documented.
* crypto/ecrdsa_defs.h:
  - Five curves are defined.
* crypto/ecrdsa.c:
  - Signature verification is implemented.
* crypto/ecrdsa_params.asn1, crypto/ecrdsa_pub_key.asn1:
  - Templates for the BER decoder for EC-RDSA parameters and public key.

Cc: linux-integrity@vger.kernel.org
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
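As an illustration of the combined multiplication mentioned for ecc_point_mult_shamir, a generic sketch of Shamir's trick; the point type and the point_add()/point_double() helpers are hypothetical stand-ins for the vli/ecc_point machinery:

    /* Compute k1*P + k2*Q in a single double-and-add pass. */
    static struct point shamir_mult(u64 k1, struct point p,
                                    u64 k2, struct point q)
    {
            struct point sum = point_add(p, q);     /* precomputed P+Q */
            struct point r = POINT_AT_INFINITY;
            int i;

            for (i = 63; i >= 0; i--) {
                    int b1 = (k1 >> i) & 1;
                    int b2 = (k2 >> i) & 1;

                    r = point_double(r);
                    if (b1 && b2)
                            r = point_add(r, sum);
                    else if (b1)
                            r = point_add(r, p);
                    else if (b2)
                            r = point_add(r, q);
            }
            return r;   /* one pass instead of two full multiplications */
    }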
2019-04-18  crypto: ecc - make ecc into separate module  (Vitaly Chikunov)
ecc.c has algorithms that could be used by both ecdh and ecrdsa, so make it a separate module. Add CRYPTO_ECC to Kconfig. EXPORT_SYMBOL and document what seems appropriate. Move the structs ecc_point and ecc_curve from ecc_curve_defs.h into ecc.h. No code changes.

Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-18  crypto: Kconfig - create Public-key cryptography section  (Vitaly Chikunov)
Group RSA, DH, and ECDH into the Public-key cryptography config section.

Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-18  X.509: parse public key parameters from x509 for akcipher  (Vitaly Chikunov)
Some public key algorithms (like EC-DSA) keep important data, such as the digest and curve OIDs, in the parameters field (possibly more for different EC-DSA variants). Thus, just setting a public key (as for RSA) is not enough. Append the parameters to the key stream for akcipher_set_{pub,priv}_key. The appended data is: (u32) algo OID, (u32) parameters length, parameters data.

This affects neither the current akcipher API nor RSA ciphers (they can ignore it).

The idea of appending parameters to the key stream is by Herbert Xu.

Cc: David Howells <dhowells@redhat.com>
Cc: Denis Kenzior <denkenz@gmail.com>
Cc: keyrings@vger.kernel.org
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Reviewed-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
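A hedged sketch of the appended layout described above; the struct is purely illustrative, the real code packs the fields into the byte stream:

    /* Appended to the key material passed to akcipher_set_{pub,priv}_key: */
    struct key_params_sketch {
            u32 algo;        /* algorithm OID (enum OID value) */
            u32 paramlen;    /* length of the parameters that follow */
            /* paramlen bytes of BER-encoded parameters follow here */
    };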
2019-04-18  KEYS: do not kmemdup digest in {public,tpm}_key_verify_signature  (Vitaly Chikunov)
Treat (struct public_key_signature)'s digest the same as its signature (s). Since the digest should already be in kmalloc'd memory, do not kmemdup the digest value before calling {public,tpm}_key_verify_signature.

This patch is split from the previous one, as suggested by Herbert Xu.

Suggested-by: David Howells <dhowells@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: keyrings@vger.kernel.org
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Reviewed-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-18  crypto: akcipher - new verify API for public key algorithms  (Vitaly Chikunov)
The previous akcipher .verify() just `decrypts' the signature (using RSA encrypt, which uses the public key) to uncover the message hash, which was then compared in the upper-level public_key_verify_signature() with the expected hash value, which itself was never passed into .verify().

This approach was incompatible with the EC-DSA family of algorithms, because, to verify a signature, an EC-DSA algorithm also needs the hash value as input; it is then used (together with the signature divided into halves `r||s') to produce a witness value, which is compared with `r' to determine whether the signature is correct. Thus, for EC-DSA, neither the requirements of .verify() itself nor its output expectations in public_key_verify_signature() were sufficient.

Introduce an improved .verify() call which takes the hash value as input and performs a complete signature check, with no output besides the status. Now, for top-level verification, only crypto_akcipher_verify() needs to be called and its return value inspected.

Make sure that `digest' is in kmalloc'd memory (in place of `output`) in {public,tpm}_key_verify_signature(), as insisted by Herbert Xu; this will be changed in the following commit.

Cc: David Howells <dhowells@redhat.com>
Cc: keyrings@vger.kernel.org
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Reviewed-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
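A minimal sketch of the new top-level flow, assuming the usual async-request plumbing; the function and its parameters are illustrative:

    #include <crypto/akcipher.h>
    #include <linux/crypto.h>

    /* src_sg holds the signature followed by the digest; there is no dst. */
    static int verify_sig_sketch(struct crypto_akcipher *tfm,
                                 struct scatterlist *src_sg,
                                 unsigned int sig_len, unsigned int digest_len)
    {
            struct akcipher_request *req;
            DECLARE_CRYPTO_WAIT(cwait);
            int err;

            req = akcipher_request_alloc(tfm, GFP_KERNEL);
            if (!req)
                    return -ENOMEM;
            akcipher_request_set_crypt(req, src_sg, NULL, sig_len, digest_len);
            akcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
                                          crypto_req_done, &cwait);
            err = crypto_wait_req(crypto_akcipher_verify(req), &cwait);
            akcipher_request_free(req);
            return err;     /* 0 iff the signature verified */
    }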
2019-04-18  crypto: rsa - unimplement sign/verify for raw RSA backends  (Vitaly Chikunov)
In preparation for the new akcipher verify call, remove the sign/verify callbacks from the RSA backends and make the PKCS1 driver call encrypt/decrypt instead.

This also complies with the well-known principle that raw RSA should never be used for sign/verify; it should only be used with a proper padding scheme, such as the PKCS1 driver provides.

Cc: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Cc: qat-linux@intel.com
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Gary Hook <gary.hook@amd.com>
Cc: Horia Geantă <horia.geanta@nxp.com>
Cc: Aymen Sghaier <aymen.sghaier@nxp.com>
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-18  crypto: akcipher - default implementations for request callbacks  (Vitaly Chikunov)
With the introduction of EC-RDSA and the change in how RSA handles sign/verify, an akcipher might not have all callbacks defined. Check for the presence of the callbacks in crypto_register_akcipher() and provide a default implementation when a callback is not implemented. This was suggested by Herbert Xu instead of checking the presence of the callback on every request.

Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-18  crypto: des_generic - Forbid 2-key in 3DES and add helpers  (Herbert Xu)
This patch adds a requirement to the generic 3DES implementation such that 2-key 3DES (K1 == K3) is no longer allowed in FIPS mode. We will also provide helpers that may be used by drivers that implement 3DES to make the same check. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
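A hedged sketch of the kind of check described; the helper name is illustrative:

    #include <linux/fips.h>
    #include <linux/string.h>

    /* In FIPS mode, reject any repeated DES key, which covers the
     * 2-key 3DES case (K1 == K3). */
    static int des3_key_check_sketch(const u8 *key)
    {
            if (fips_enabled &&
                (!memcmp(key, key + 8, 8) ||        /* K1 == K2 */
                 !memcmp(key + 8, key + 16, 8) ||   /* K2 == K3 */
                 !memcmp(key, key + 16, 8)))        /* K1 == K3 */
                    return -EINVAL;
            return 0;
    }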
2019-04-18  crypto: salsa20 - don't access already-freed walk.iv  (Eric Biggers)
If the user-provided IV needs to be aligned to the algorithm's alignmask, then skcipher_walk_virt() copies the IV into a new aligned buffer walk.iv. But skcipher_walk_virt() can fail afterwards, and then if the caller unconditionally accesses walk.iv, it's a use-after-free. salsa20-generic doesn't set an alignmask, so currently it isn't affected by this despite unconditionally accessing walk.iv. However this is more subtle than desired, and it was actually broken prior to the alignmask being removed by commit b62b3db76f73 ("crypto: salsa20-generic - cleanup and convert to skcipher API"). Since salsa20-generic does not update the IV and does not need any IV alignment, update it to use req->iv instead of walk.iv. Fixes: 2407d60872dd ("[CRYPTO] salsa20: Salsa20 stream cipher") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-18  crypto: lrw - don't access already-freed walk.iv  (Eric Biggers)
If the user-provided IV needs to be aligned to the algorithm's alignmask, then skcipher_walk_virt() copies the IV into a new aligned buffer walk.iv. But skcipher_walk_virt() can fail afterwards, and then if the caller unconditionally accesses walk.iv, it's a use-after-free. Fix this in the LRW template by checking the return value of skcipher_walk_virt().

This bug was detected by my patches that improve testmgr to fuzz algorithms against their generic implementation. When the extra self-tests were run on a KASAN-enabled kernel, a KASAN use-after-free splat occurred during lrw(aes) testing.

Fixes: c778f96bf347 ("crypto: lrw - Optimize tweak computation")
Cc: <stable@vger.kernel.org> # v4.20+
Cc: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
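A minimal sketch of the fix's shape; the function and the tweak use are illustrative:

    /* walk.iv is only guaranteed valid after skcipher_walk_virt()
     * succeeds; on failure it may point at freed memory. */
    static int init_crypt_sketch(struct skcipher_request *req, u8 *tweak)
    {
            struct skcipher_walk walk;
            int err;

            err = skcipher_walk_virt(&walk, req, false);
            if (err)
                    return err;     /* do NOT touch walk.iv here */

            memcpy(tweak, walk.iv, 16);
            return 0;
    }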
2019-04-18  crypto: lrw - Fix atomic sleep when walking skcipher  (Herbert Xu)
When we perform a walk in the completion function, we need to ensure that it is atomic. Fixes: ac3c8f36c31d ("crypto: lrw - Do not use auxiliary buffer") Cc: <stable@vger.kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Acked-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-18  crypto: xts - Fix atomic sleep when walking skcipher  (Herbert Xu)
When we perform a walk in the completion function, we need to ensure that it is atomic. Reported-by: syzbot+6f72c20560060c98b566@syzkaller.appspotmail.com Fixes: 78105c7e769b ("crypto: xts - Drop use of auxiliary buffer") Cc: <stable@vger.kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Acked-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
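The shape of the fix in both the lrw and xts templates, sketched; the from_callback flag is illustrative:

    /* A completion callback runs in atomic (softirq) context, so any
     * walk started from it must not sleep. */
    static int xor_tweak_sketch(struct skcipher_request *req,
                                bool from_callback)
    {
            struct skcipher_walk walk;

            return skcipher_walk_virt(&walk, req,
                                      from_callback /* atomic walk */);
    }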
2019-04-08  crypto: x86/poly1305 - fix overflow during partial reduction  (Eric Biggers)
The x86_64 implementation of Poly1305 produces the wrong result on some inputs because poly1305_4block_avx2() incorrectly assumes that when partially reducing the accumulator, the bits carried from limb 'd4' to limb 'h0' fit in a 32-bit integer. This is true for poly1305-generic which processes only one block at a time. However, it's not true for the AVX2 implementation, which processes 4 blocks at a time and therefore can produce intermediate limbs about 4x larger. Fix it by making the relevant calculations use 64-bit arithmetic rather than 32-bit. Note that most of the carries already used 64-bit arithmetic, but the d4 -> h0 carry was different for some reason. To be safe I also made the same change to the corresponding SSE2 code, though that only operates on 1 or 2 blocks at a time. I don't think it's really needed for poly1305_block_sse2(), but it doesn't hurt because it's already x86_64 code. It *might* be needed for poly1305_2block_sse2(), but overflows aren't easy to reproduce there. This bug was originally detected by my patches that improve testmgr to fuzz algorithms against their generic implementation. But also add a test vector which reproduces it directly (in the AVX2 case). Fixes: b1ccc8f4b631 ("crypto: poly1305 - Add a four block AVX2 variant for x86_64") Fixes: c70f4abef07a ("crypto: poly1305 - Add a SSE2 SIMD variant for x86_64") Cc: <stable@vger.kernel.org> # v4.3+ Cc: Martin Willi <martin@strongswan.org> Cc: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Martin Willi <martin@strongswan.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-08  crypto: testmgr - add panic_on_fail module parameter  (Eric Biggers)
Add a module parameter cryptomgr.panic_on_fail which causes the kernel to panic if any crypto self-tests fail. Use cases: - More easily detect crypto self-test failures by boot testing, e.g. on KernelCI. - Get a bug report if syzkaller manages to use the template system to instantiate an algorithm that fails its self-tests. The command-line option "fips=1" already does this, but it also makes other changes not wanted for general testing, such as disabling "unapproved" algorithms. panic_on_fail just does what it says. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-08  crypto: cts - don't support empty messages  (Eric Biggers)
My patches to make testmgr fuzz algorithms against their generic implementation detected that the arm64 implementations of "cts(cbc(aes))" handle empty messages differently from the cts template. Namely, the arm64 implementations forbids (with -EINVAL) all messages shorter than the block size, including the empty message; but the cts template permits empty messages as a special case. No user should be CTS-encrypting/decrypting empty messages, but we need to keep the behavior consistent. Unfortunately, as noted in the source of OpenSSL's CTS implementation [1], there's no common specification for CTS. This makes it somewhat debatable what the behavior should be. However, all CTS specifications seem to agree that messages shorter than the block size are not allowed, and OpenSSL follows this in both CTS conventions it implements. It would also simplify the user-visible semantics to have empty messages no longer be a special case. Therefore, make the cts template return -EINVAL on *all* messages shorter than the block size, including the empty message. [1] https://github.com/openssl/openssl/blob/master/crypto/modes/cts128.c Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
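A hedged sketch of the tightened length check in the cts template:

    static int cts_check_len_sketch(struct skcipher_request *req,
                                    unsigned int bsize)
    {
            /* All messages shorter than one block, including the empty
             * message, are now rejected instead of special-cased. */
            if (req->cryptlen < bsize)
                    return -EINVAL;
            return 0;
    }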
2019-04-08  crypto: streebog - fix unaligned memory accesses  (Eric Biggers)
Don't cast the data buffer directly to streebog_uint512, as this violates alignment rules. Fixes: fe18957e8e87 ("crypto: streebog - add Streebog hash function") Cc: Vitaly Chikunov <vt@altlinux.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Vitaly Chikunov <vt@altlinux.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
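A hedged sketch of the alignment-safe pattern; the helper is illustrative:

    /* Copy instead of casting: (struct streebog_uint512 *)data is
     * undefined behaviour when 'data' isn't suitably aligned. */
    static void streebog_load_block_sketch(struct streebog_uint512 *dst,
                                           const u8 *data)
    {
            memcpy(dst, data, sizeof(*dst));
    }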
2019-04-08  crypto: chacha20poly1305 - set cra_name correctly  (Eric Biggers)
If the rfc7539 template is instantiated with specific implementations, e.g. "rfc7539(chacha20-generic,poly1305-generic)" rather than "rfc7539(chacha20,poly1305)", then the implementation names end up included in the instance's cra_name. This is incorrect because it then prevents all users from allocating "rfc7539(chacha20,poly1305)", if the highest priority implementations of chacha20 and poly1305 were selected. Also, the self-tests aren't run on an instance allocated in this way. Fix it by setting the instance's cra_name from the underlying algorithms' actual cra_names, rather than from the requested names. This matches what other templates do. Fixes: 71ebc4d1b27d ("crypto: chacha20poly1305 - Add a ChaCha20-Poly1305 AEAD construction, RFC7539") Cc: <stable@vger.kernel.org> # v4.2+ Cc: Martin Willi <martin@strongswan.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Martin Willi <martin@strongswan.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
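The shape of the fix, sketched with illustrative variable names:

    /* Build cra_name from the children's cra_name fields, so the
     * instance is always "rfc7539(chacha20,poly1305)" regardless of
     * which implementations back it. */
    static int set_instance_name_sketch(struct aead_instance *inst,
                                        struct crypto_alg *chacha_alg,
                                        struct crypto_alg *poly_alg)
    {
            if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
                         "rfc7539(%s,%s)", chacha_alg->cra_name,
                         poly_alg->cra_name) >= CRYPTO_MAX_ALG_NAME)
                    return -ENAMETOOLONG;
            return 0;
    }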
2019-04-08  crypto: skcipher - don't WARN on unprocessed data after slow walk step  (Eric Biggers)
skcipher_walk_done() assumes it's a bug if, after the "slow" path is executed where the next chunk of data is processed via a bounce buffer, the algorithm says it didn't process all bytes. Thus it WARNs on this. However, this can happen legitimately when the message needs to be evenly divisible into "blocks" but isn't, and the algorithm has a 'walksize' greater than the block size. For example, ecb-aes-neonbs sets 'walksize' to 128 bytes and only supports messages evenly divisible into 16-byte blocks. If, say, 17 message bytes remain but they straddle scatterlist elements, the skcipher_walk code will take the "slow" path and pass the algorithm all 17 bytes in the bounce buffer. But the algorithm will only be able to process 16 bytes, triggering the WARN. Fix this by just removing the WARN_ON(). Returning -EINVAL, as the code already does, is the right behavior. This bug was detected by my patches that improve testmgr to fuzz algorithms against their generic implementation. Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk interface") Cc: <stable@vger.kernel.org> # v4.10+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-08  crypto: crct10dif-generic - fix use via crypto_shash_digest()  (Eric Biggers)
The ->digest() method of crct10dif-generic reads the current CRC value from the shash_desc context. But this value is uninitialized, causing crypto_shash_digest() to compute the wrong result. Fix it. Probably this wasn't noticed before because lib/crc-t10dif.c only uses crypto_shash_update(), not crypto_shash_digest(). Likewise, crypto_shash_digest() is not yet tested by the crypto self-tests because those only test the ahash API which only uses shash init/update/final. This bug was detected by my patches that improve testmgr to fuzz algorithms against their generic implementation. Fixes: 2d31e518a428 ("crypto: crct10dif - Wrap crc_t10dif function all to use crypto transform framework") Cc: <stable@vger.kernel.org> # v3.11+ Cc: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
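A sketch of the corrected ->digest(), following the description above:

    /* ->digest() must start from CRC 0 rather than reading the CRC out
     * of the desc context, which init() was never called on. */
    static int chksum_digest_sketch(struct shash_desc *desc, const u8 *data,
                                    unsigned int length, u8 *out)
    {
            *(__u16 *)out = crc_t10dif_generic(0, data, length);
            return 0;
    }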
2019-04-08  crypto: aes - Use ____cacheline_aligned for aes data  (Andi Kleen)
__cacheline_aligned places data in a special section, so it cannot be const at the same time: the section is not read-only and gives no MMU protection. Mark the AES tables ____cacheline_aligned instead, which does not place them in a special section but just aligns them in .rodata.

Cc: herbert@gondor.apana.org.au
Suggested-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-08  crypto: scompress - Use per-CPU struct instead multiple variables  (Sebastian Andrzej Siewior)
Two per-CPU variables are allocated as pointers to per-CPU memory, which are then used as scratch buffers. We could be smart about this and instead use a per-CPU struct which already contains the pointers; then we need to allocate just the scratch buffers.

Add a lock to the struct. By doing so we can avoid the get_cpu() statement and gain lockdep coverage (if enabled) to ensure that the lock is always acquired in the right context. On non-preemptible kernels the lock vanishes.

It is okay to use raw_cpu_ptr() in order to get a pointer to the struct, since it is protected by the spinlock.

The diffstat of this is negative, per size on scompress.o:

   text    data     bss     dec     hex  filename
   1847     160      24    2031     7ef  dbg_before.o
   1754     232       4    1990     7c6  dbg_after.o
   1799      64      24    1887     75f  no_dbg-before.o
   1703      88       4    1795     703  no_dbg-after.o

The overall size difference is also negative; the increase in the data section is only four bytes without lockdep.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
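A sketch of the per-CPU layout described above:

    struct scomp_scratch {
            spinlock_t lock;
            void *src;
            void *dst;
    };

    static DEFINE_PER_CPU(struct scomp_scratch, scomp_scratch) = {
            .lock = __SPIN_LOCK_UNLOCKED(scomp_scratch.lock),
    };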
2019-04-08  crypto: scompress - return proper error code for allocation failure  (Sebastian Andrzej Siewior)
If scomp_acomp_comp_decomp() fails to allocate memory for the destination, then we never copy back the data we compressed. It is probably best to return an error code instead of 0 in case of failure. I haven't found any user that uses acomp_request_set_params() without the `dst' buffer, so there is probably no harm.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-28  crypto: fips - Grammar s/options/option/, s/to/the/  (Geert Uytterhoeven)
Fixes: ccb778e1841ce04b ("crypto: api - Add fips_enable flag") Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: Mukesh Ojha <mojha@codeaurora.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22  crypto: Kconfig - fix typos AEGSI -> AEGIS  (Ondrej Mosnacek)
Spotted while reviewing patches from Eric Biggers.

Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22  crypto: salsa20-generic - use crypto_xor_cpy()  (Eric Biggers)
In salsa20_docrypt(), use crypto_xor_cpy() instead of crypto_xor(). This avoids having to memcpy() the src buffer to the dst buffer. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22  crypto: chacha-generic - use crypto_xor_cpy()  (Eric Biggers)
In chacha_docrypt(), use crypto_xor_cpy() instead of crypto_xor(). This avoids having to memcpy() the src buffer to the dst buffer. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
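The pattern in both conversions, sketched as a before/after fragment:

    /* before: copy the source, then XOR the keystream in place */
    memcpy(dst, src, len);
    crypto_xor(dst, stream, len);

    /* after: one pass over the data */
    crypto_xor_cpy(dst, src, stream, len);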
2019-03-22  crypto: testmgr - test the !may_use_simd() fallback code  (Eric Biggers)
All crypto API algorithms are supposed to support the case where they are called in a context where SIMD instructions are unusable, e.g. IRQ context on some architectures. However, this isn't tested for by the self-tests, causing bugs to go undetected. Now that all algorithms have been converted to use crypto_simd_usable(), update the self-tests to test the no-SIMD case. First, a bool testvec_config::nosimd is added. When set, the crypto operation is executed with preemption disabled and with crypto_simd_usable() mocked out to return false on the current CPU. A bool test_sg_division::nosimd is also added. For hash algorithms it's honored by the corresponding ->update(). By setting just a subset of these bools, the case where some ->update()s are done in SIMD context and some are done in no-SIMD context is also tested. These bools are then randomly set by generate_random_testvec_config(). For now, all no-SIMD testing is limited to the extra crypto self-tests, because it might be a bit too invasive for the regular self-tests. But this could be changed later. This has already found bugs in the arm64 AES-GCM and ChaCha algorithms. This would have found some past bugs as well. Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22  crypto: simd - convert to use crypto_simd_usable()  (Eric Biggers)
Replace all calls to may_use_simd() in the shared SIMD helpers with crypto_simd_usable(), in order to allow testing the no-SIMD code paths. Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
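A hedged sketch of how the helper enables the no-SIMD testing described above; close to, but not necessarily identical with, the real definition:

    /* A per-CPU test override layered on the architecture's check
     * (may_use_simd() comes from <asm/simd.h>). */
    static DEFINE_PER_CPU(bool, crypto_simd_disabled_for_test);

    static inline bool crypto_simd_usable(void)
    {
            return may_use_simd() &&
                   !this_cpu_read(crypto_simd_disabled_for_test);
    }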