path: root/crypto
2019-10-23  crypto: sparc/aes - convert to skcipher API  (Eric Biggers)
Convert the glue code for the SPARC64 AES opcodes implementations of AES-ECB, AES-CBC, and AES-CTR from the deprecated "blkcipher" API to the "skcipher" API. This is needed in order for the blkcipher API to be removed. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
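For illustration, a minimal sketch of the skcipher walk pattern such a conversion typically ends up with; sparc_aes_ecb_crypt() is a hypothetical stand-in for the real SPARC64 opcode helpers, and error handling is abbreviated.

#include <crypto/aes.h>
#include <crypto/internal/skcipher.h>

/* Hypothetical helper standing in for the SPARC64 AES opcode routines. */
void sparc_aes_ecb_crypt(const u8 *src, u8 *dst, unsigned int len);

static int ecb_encrypt(struct skcipher_request *req)
{
        struct skcipher_walk walk;
        unsigned int nbytes;
        int err;

        err = skcipher_walk_virt(&walk, req, true);

        while ((nbytes = walk.nbytes) != 0) {
                /* process whole blocks, hand the remainder back to the walk */
                sparc_aes_ecb_crypt(walk.src.virt.addr, walk.dst.virt.addr,
                                    nbytes & ~(AES_BLOCK_SIZE - 1));
                err = skcipher_walk_done(&walk, nbytes & (AES_BLOCK_SIZE - 1));
        }
        return err;
}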
2019-10-18  crypto: jitter - add header to fix buildwarnings  (Ben Dooks)
Fix the following build warnings by adding a header for the definitions shared between jitterentropy.c and jitterentropy-kcapi.c. Fixes the following: crypto/jitterentropy.c:445:5: warning: symbol 'jent_read_entropy' was not declared. Should it be static? crypto/jitterentropy.c:475:18: warning: symbol 'jent_entropy_collector_alloc' was not declared. Should it be static? crypto/jitterentropy.c:509:6: warning: symbol 'jent_entropy_collector_free' was not declared. Should it be static? crypto/jitterentropy.c:516:5: warning: symbol 'jent_entropy_init' was not declared. Should it be static? crypto/jitterentropy-kcapi.c:59:6: warning: symbol 'jent_zalloc' was not declared. Should it be static? crypto/jitterentropy-kcapi.c:64:6: warning: symbol 'jent_zfree' was not declared. Should it be static? crypto/jitterentropy-kcapi.c:69:5: warning: symbol 'jent_fips_enabled' was not declared. Should it be static? crypto/jitterentropy-kcapi.c:74:6: warning: symbol 'jent_panic' was not declared. Should it be static? crypto/jitterentropy-kcapi.c:79:6: warning: symbol 'jent_memcpy' was not declared. Should it be static? crypto/jitterentropy-kcapi.c:93:6: warning: symbol 'jent_get_nstime' was not declared. Should it be static? Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk> Reviewed-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-10-10  crypto: user - fix memory leak in crypto_reportstat  (Navid Emamdoost)
In crypto_reportstat, a new skb is created by nlmsg_new(). This skb is leaked if crypto_reportstat_alg() fails. Required release for skb is added. Fixes: cac5818c25d0 ("crypto: user - Implement a generic crypto statistics") Cc: <stable@vger.kernel.org> Signed-off-by: Navid Emamdoost <navid.emamdoost@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
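A minimal sketch of the leak-fix pattern described here (simplified, not the literal upstream diff; the crypto_dump_info plumbing and function names follow the commit text and are assumptions):

#include <linux/crypto.h>
#include <net/netlink.h>

static int report_one_stat(struct sk_buff *in_skb, struct net *net,
                           struct crypto_alg *alg,
                           struct crypto_dump_info *info)
{
        struct sk_buff *skb;
        int err;

        skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_ATOMIC);
        if (!skb)
                return -ENOMEM;

        info->in_skb = in_skb;
        info->out_skb = skb;

        err = crypto_reportstat_alg(alg, info);
        if (err) {
                nlmsg_free(skb);        /* this free was missing before the fix */
                return err;
        }

        return nlmsg_unicast(net, skb, NETLINK_CB(in_skb).portid);
}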
2019-10-10  crypto: user - fix memory leak in crypto_report  (Navid Emamdoost)
In crypto_report, a new skb is created via nlmsg_new(). This skb should be released if crypto_report_alg() fails. Fixes: a38f7907b926 ("crypto: Add userspace configuration API") Cc: <stable@vger.kernel.org> Signed-off-by: Navid Emamdoost <navid.emamdoost@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-10-10  crypto: af_alg - cast ki_complete ternary op to int  (Ayush Sawal)
When the libkcapi test is executed using a HW accelerator, the cipher operation returns -74. Since af_alg_async_cb->ki_complete treats err as an unsigned int, libkcapi receives 4294967222 even though it expects a negative value. Hence it is required to cast resultlen to int so that the proper error is returned to libkcapi. AEAD one shot non-aligned test 2 (libkcapi test): ./../bin/kcapi -x 10 -c "gcm(aes)" -i 7815d4b06ae50c9c56e87bd7 -k ea38ac0c9b9998c80e28fb496a2b88d9 -a "853f98a750098bec1aa7497e979e78098155c877879556bb51ddeb6374cbaefc" -t "c4ce58985b7203094be1d134c1b8ab0b" -q "b03692f86d1b8b39baf2abb255197c98" Fixes: d887c52d6ae4 ("crypto: algif_aead - overhaul memory management") Cc: <stable@vger.kernel.org> Signed-off-by: Ayush Sawal <ayush.sawal@chelsio.com> Signed-off-by: Atul Gupta <atul.gupta@chelsio.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
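A standalone user-space demonstration of the signedness problem (the callback name is illustrative; only the int/unsigned behaviour is the point):

#include <stdio.h>

static void complete_cb(long res)
{
        printf("completion result: %ld\n", res);
}

int main(void)
{
        int err = -74;                 /* -EBADMSG, the error from the test */
        unsigned int resultlen = err;  /* what the callback effectively saw */

        complete_cb(resultlen);        /* prints 4294967222 */
        complete_cb((int)resultlen);   /* prints -74, the intended value */
        return 0;
}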
2019-10-10  crypto: aegis128/simd - build 32-bit ARM for v8 architecture explicitly  (Ard Biesheuvel)
Now that the Clang compiler has taken it upon itself to police the compiler command line, and reject combinations of arguments it views as incompatible, the AEGIS128 code no longer builds correctly, and errors out like this: clang-10: warning: ignoring extension 'crypto' because the 'armv7-a' architecture does not support it [-Winvalid-command-line-argument] So let's switch to armv8-a instead, which matches the crypto-neon-fp-armv8 FPU profile we specify. Since neither was actually supported by GCC versions before 4.8, let's tighten the Kconfig dependencies as well so we won't run into errors when building with an ancient compiler. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reviewed-by: Nathan Chancellor <natechancellor@gmail.com> Tested-by: Nathan Chancellor <natechancellor@gmail.com> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Reported-by: <ci_notify@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-10-05  crypto: jitter - fix comments  (Alexander E. Patrakov)
One should not say "ec can be NULL" and then dereference it. One cannot talk about the return value if the function returns void. Signed-off-by: Alexander E. Patrakov <patrakov@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-10-05  crypto: aegis128-neon - use Clang compatible cflags for ARM  (Ard Biesheuvel)
The next version of Clang will start policing compiler command line options, and will reject combinations of -march and -mfpu that it thinks are incompatible. This results in errors like clang-10: warning: ignoring extension 'crypto' because the 'armv7-a' architecture does not support it [-Winvalid-command-line-argument] /tmp/aegis128-neon-inner-5ee428.s: Assembler messages: /tmp/aegis128-neon-inner-5ee428.s:73: Error: selected processor does not support `aese.8 q2,q14' in ARM mode when building the SIMD aegis128 code for 32-bit ARM, given that the 'armv7-a' -march argument is not considered to be compatible with the ARM crypto extensions. Instead, we should use armv8-a, which does allow the crypto extensions to be enabled. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-10-05  crypto: testmgr - Added testvectors for the rfc3686(ctr(sm4)) skcipher  (Pascal van Leeuwen)
Added testvectors for the rfc3686(ctr(sm4)) skcipher algorithm. Changes since v1: nothing. Signed-off-by: Pascal van Leeuwen <pvanleeuwen@verimatrix.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-10-05  crypto: testmgr - Added testvectors for the ofb(sm4) & cfb(sm4) skciphers  (Pascal van Leeuwen)
Added testvectors for the ofb(sm4) and cfb(sm4) skcipher algorithms. Changes since v1: nothing. Signed-off-by: Pascal van Leeuwen <pvanleeuwen@verimatrix.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-10-05  crypto: testmgr - Added testvectors for the hmac(sm3) ahash  (Pascal van Leeuwen)
Added testvectors for the hmac(sm3) ahash authentication algorithm. Changes since v1 & v2: nothing. Signed-off-by: Pascal van Leeuwen <pvanleeuwen@verimatrix.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-10-05  crypto: testmgr - add another gcm(aes) testcase  (Ard Biesheuvel)
Add an additional gcm(aes) test case that triggers the code path in the new arm64 driver that deals with tail blocks whose size is not a multiple of the block size, and where the size of the preceding input is a multiple of 64 bytes. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-10-05  crypto: algif_skcipher - Use chunksize instead of blocksize  (Herbert Xu)
When algif_skcipher does a partial operation it always processes data that is a multiple of blocksize. However, for algorithms such as CTR this is wrong because even though it can process any number of bytes overall, the partial block must come at the very end and not in the middle. This is exactly what chunksize is meant to describe, so this patch changes blocksize to chunksize. Fixes: 8ff590903d5f ("crypto: algif_skcipher - User-space...") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
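A hedged sketch of the core change (helper name and call site are illustrative); crypto_skcipher_chunksize() and crypto_skcipher_blocksize() are the accessors the commit refers to:

#include <crypto/skcipher.h>

/* Truncate a partial send to a multiple of the chunk size rather than the
 * block size.  For ctr(aes) the block size is 1 but the chunk size is 16,
 * which is what keeps the partial block at the very end of the stream. */
static unsigned int bound_partial_len(struct crypto_skcipher *tfm,
                                      unsigned int len)
{
        unsigned int chunk = crypto_skcipher_chunksize(tfm);

        /* the old code rounded down to crypto_skcipher_blocksize(tfm) */
        return len - (len % chunk);
}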
2019-09-28  Merge branch 'next-lockdown' of ...  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security Pull kernel lockdown mode from James Morris: "This is the latest iteration of the kernel lockdown patchset, from Matthew Garrett, David Howells and others. From the original description: This patchset introduces an optional kernel lockdown feature, intended to strengthen the boundary between UID 0 and the kernel. When enabled, various pieces of kernel functionality are restricted. Applications that rely on low-level access to either hardware or the kernel may cease working as a result - therefore this should not be enabled without appropriate evaluation beforehand. The majority of mainstream distributions have been carrying variants of this patchset for many years now, so there's value in providing a mainline implementation that doesn't meet every distribution requirement, but gets us much closer to not requiring external patches. There are two major changes since this was last proposed for mainline: - Separating lockdown from EFI secure boot. Background discussion is covered here: https://lwn.net/Articles/751061/ - Implementation as an LSM, with a default stackable lockdown LSM module. This allows the lockdown feature to be policy-driven, rather than encoding an implicit policy within the mechanism. The new locked_down LSM hook is provided to allow LSMs to make a policy decision around whether kernel functionality that would allow tampering with or examining the runtime state of the kernel should be permitted. The included lockdown LSM provides an implementation with a simple policy intended for general purpose use. This policy provides a coarse level of granularity, controllable via the kernel command line: lockdown={integrity|confidentiality} Enable the kernel lockdown feature. If set to integrity, kernel features that allow userland to modify the running kernel are disabled. If set to confidentiality, kernel features that allow userland to extract confidential information from the kernel are also disabled. This may also be controlled via /sys/kernel/security/lockdown and overridden by kernel configuration. New or existing LSMs may implement finer-grained controls of the lockdown features. Refer to the lockdown_reason documentation in include/linux/security.h for details. The lockdown feature has had significant design feedback and review across many subsystems. This code has been in linux-next for some weeks, with a few fixes applied along the way. Stephen Rothwell noted that commit 9d1f8be5cf42 ("bpf: Restrict bpf when kernel lockdown is in confidentiality mode") is missing a Signed-off-by from its author. Matthew responded that he is providing this under category (c) of the DCO" * 'next-lockdown' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security: (31 commits) kexec: Fix file verification on S390 security: constify some arrays in lockdown LSM lockdown: Print current->comm in restriction messages efi: Restrict efivar_ssdt_load when the kernel is locked down tracefs: Restrict tracefs when the kernel is locked down debugfs: Restrict debugfs when the kernel is locked down kexec: Allow kexec_file() with appropriate IMA policy when locked down lockdown: Lock down perf when in confidentiality mode bpf: Restrict bpf when kernel lockdown is in confidentiality mode lockdown: Lock down tracing and perf kprobes when in confidentiality mode lockdown: Lock down /proc/kcore x86/mmiotrace: Lock down the testmmiotrace module lockdown: Lock down module params that specify hardware parameters (eg. ioport) lockdown: Lock down TIOCSSERIAL lockdown: Prohibit PCMCIA CIS storage when the kernel is locked down acpi: Disable ACPI table override if the kernel is locked down acpi: Ignore acpi_rsdp kernel param when the kernel has been locked down ACPI: Limit access to custom_method when the kernel is locked down x86/msr: Restrict MSR access when the kernel is locked down x86: Lock down IO port access when the kernel is locked down ...
2019-09-27  Merge branch 'next-integrity' of ...  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/zohar/linux-integrity Pull integrity updates from Mimi Zohar: "The major feature in this time is IMA support for measuring and appraising appended file signatures. In addition are a couple of bug fixes and code cleanup to use struct_size(). In addition to the PE/COFF and IMA xattr signatures, the kexec kernel image may be signed with an appended signature, using the same scripts/sign-file tool that is used to sign kernel modules. Similarly, the initramfs may contain an appended signature. This contained a lot of refactoring of the existing appended signature verification code, so that IMA could retain the existing framework of calculating the file hash once, storing it in the IMA measurement list and extending the TPM, verifying the file's integrity based on a file hash or signature (eg. xattrs), and adding an audit record containing the file hash, all based on policy. (The IMA support for appended signatures patch set was posted and reviewed 11 times.) The support for appended signature paves the way for adding other signature verification methods, such as fs-verity, based on a single system-wide policy. The file hash used for verifying the signature and the signature, itself, can be included in the IMA measurement list" * 'next-integrity' of git://git.kernel.org/pub/scm/linux/kernel/git/zohar/linux-integrity: ima: ima_api: Use struct_size() in kzalloc() ima: use struct_size() in kzalloc() sefltest/ima: support appended signatures (modsig) ima: Fix use after free in ima_read_modsig() MODSIGN: make new include file self contained ima: fix freeing ongoing ahash_request ima: always return negative code for error ima: Store the measurement again when appraising a modsig ima: Define ima-modsig template ima: Collect modsig ima: Implement support for module-style appended signatures ima: Factor xattr_verify() out of ima_appraise_measurement() ima: Add modsig appraise_type option for module-style appended signatures integrity: Select CONFIG_KEYS instead of depending on it PKCS#7: Introduce pkcs7_get_digest() PKCS#7: Refactor verify_pkcs7_signature() MODSIGN: Export module signature definitions ima: initialize the "template" field with the default template
2019-09-21  Merge tag 'for-5.4/dm-changes' of ...  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm Pull device mapper updates from Mike Snitzer: - crypto and DM crypt advances that allow the crypto API to reclaim implementation details that do not belong in DM crypt. The wrapper template for ESSIV generation that was factored out will also be used by fscrypt in the future. - Add root hash pkcs#7 signature verification to the DM verity target. - Add a new "clone" DM target that allows for efficient remote replication of a device. - Enhance DM bufio's cache to be tailored to each client based on use. Clients that make heavy use of the cache get more of it, and those that use less have reduced cache usage. - Add a new DM_GET_TARGET_VERSION ioctl to allow userspace to query the version number of a DM target (even if the associated module isn't yet loaded). - Fix invalid memory access in DM zoned target. - Fix the max_discard_sectors limit advertised by the DM raid target; it was mistakenly storing the limit in bytes rather than sectors. - Small optimizations and cleanups in DM writecache target. - Various fixes and cleanups in DM core, DM raid1 and space map portion of DM persistent data library. * tag 'for-5.4/dm-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm: (22 commits) dm: introduce DM_GET_TARGET_VERSION dm bufio: introduce a global cache replacement dm bufio: remove old-style buffer cleanup dm bufio: introduce a global queue dm bufio: refactor adjust_total_allocated dm bufio: call adjust_total_allocated from __link_buffer and __unlink_buffer dm: add clone target dm raid: fix updating of max_discard_sectors limit dm writecache: skip writecache_wait for pmem mode dm stats: use struct_size() helper dm crypt: omit parsing of the encapsulated cipher dm crypt: switch to ESSIV crypto API template crypto: essiv - create wrapper template for ESSIV generation dm space map common: remove check for impossible sm_find_free() return value dm raid1: use struct_size() with kzalloc() dm writecache: optimize performance by sorting the blocks for writeback_all dm writecache: add unlikely for getting two block with same LBA dm writecache: remove unused member pointer in writeback_struct dm zoned: fix invalid memory access dm verity: add root hash pkcs#7 signature verification ...
2019-09-13  padata, pcrypt: take CPU hotplug lock internally in padata_alloc_possible  (Daniel Jordan)
With pcrypt's cpumask no longer used, take the CPU hotplug lock inside padata_alloc_possible. Useful later in the series for avoiding nested acquisition of the CPU hotplug lock in padata when padata_alloc_possible is allocating an unbound workqueue. Without this patch, this nested acquisition would happen later in the series: pcrypt_init_padata get_online_cpus alloc_padata_possible alloc_padata alloc_workqueue(WQ_UNBOUND) // later in the series alloc_and_link_pwqs apply_wqattrs_lock get_online_cpus // recursive rwsem acquisition Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com> Acked-by: Steffen Klassert <steffen.klassert@secunet.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Lai Jiangshan <jiangshanlai@gmail.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Tejun Heo <tj@kernel.org> Cc: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-09-13  crypto: pcrypt - remove padata cpumask notifier  (Daniel Jordan)
Now that padata_do_parallel takes care of finding an alternate callback CPU, there's no need for pcrypt's callback cpumask, so remove it and the notifier callback that keeps it in sync. Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com> Acked-by: Steffen Klassert <steffen.klassert@secunet.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Lai Jiangshan <jiangshanlai@gmail.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Tejun Heo <tj@kernel.org> Cc: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-09-13  padata: make padata_do_parallel find alternate callback CPU  (Daniel Jordan)
padata_do_parallel currently returns -EINVAL if the callback CPU isn't in the callback cpumask. pcrypt tries to prevent this situation by keeping its own callback cpumask in sync with padata's and checks that the callback CPU it passes to padata is valid. Make padata handle this instead. padata_do_parallel now takes a pointer to the callback CPU and updates it for the caller if an alternate CPU is used. Overall behavior in terms of which callback CPUs are chosen stays the same. Prepares for removal of the padata cpumask notifier in pcrypt, which will fix a lockdep complaint about nested acquisition of the CPU hotplug lock later in the series. Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com> Acked-by: Steffen Klassert <steffen.klassert@secunet.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Lai Jiangshan <jiangshanlai@gmail.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Tejun Heo <tj@kernel.org> Cc: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
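A caller-side sketch of the new contract (function and variable names are illustrative): the callback CPU is now passed by pointer and may be rewritten by padata if the requested CPU is not in the callback cpumask:

#include <linux/cpumask.h>
#include <linux/padata.h>
#include <linux/printk.h>

static int queue_parallel_job(struct padata_instance *pinst,
                              struct padata_priv *padata)
{
        int cb_cpu = cpumask_first(cpu_online_mask);    /* preferred CPU */
        int err;

        err = padata_do_parallel(pinst, padata, &cb_cpu);
        if (!err)
                pr_debug("queued; completion will run on CPU %d\n", cb_cpu);
        return err;
}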
2019-09-13  padata: allocate workqueue internally  (Daniel Jordan)
Move workqueue allocation inside of padata to prepare for further changes to how padata uses workqueues. Guarantees the workqueue is created with max_active=1, which padata relies on to work correctly. No functional change. Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com> Acked-by: Steffen Klassert <steffen.klassert@secunet.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Lai Jiangshan <jiangshanlai@gmail.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Tejun Heo <tj@kernel.org> Cc: linux-crypto@vger.kernel.org Cc: linux-doc@vger.kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-09-09  crypto: skcipher - Unmap pages after an external error  (Herbert Xu)
skcipher_walk_done may be called with an error by internal or external callers. For those internal callers we shouldn't unmap pages but for external callers we must unmap any pages that are in use. This patch distinguishes between the two cases by checking whether walk->nbytes is zero or not. For internal callers, we now set walk->nbytes to zero prior to the call. For external callers, walk->nbytes has always been non-zero (as zero is used to indicate the termination of a walk). Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Fixes: 5cde0af2a982 ("[CRYPTO] cipher: Added block cipher type") Cc: <stable@vger.kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-09-05  crypto: sha256 - Merge crypto/sha256.h into crypto/sha.h  (Hans de Goede)
The generic sha256 implementation from lib/crypto/sha256.c uses data structs defined in crypto/sha.h, so let's move the function prototypes there too. Signed-off-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-09-03  crypto: essiv - create wrapper template for ESSIV generation  (Ard Biesheuvel)
Implement a template that wraps a (skcipher,shash) or (aead,shash) tuple so that we can consolidate the ESSIV handling in fscrypt and dm-crypt and move it into the crypto API. This will result in better test coverage, and will allow future changes to make the bare cipher interface internal to the crypto subsystem, in order to increase robustness of the API against misuse. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Herbert Xu <herbert@gondor.apana.org.au> Tested-by: Milan Broz <gmazyland@gmail.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
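A usage sketch of the resulting template (the "essiv(cbc(aes),sha256)" instance name follows the tests added elsewhere in this log; function name and error handling are illustrative):

#include <crypto/skcipher.h>
#include <linux/err.h>

static struct crypto_skcipher *essiv_cbc_aes_alloc(const u8 *key,
                                                   unsigned int keylen)
{
        struct crypto_skcipher *tfm;
        int err;

        tfm = crypto_alloc_skcipher("essiv(cbc(aes),sha256)", 0, 0);
        if (IS_ERR(tfm))
                return tfm;

        /* the template derives the ESSIV tweak key by hashing this key */
        err = crypto_skcipher_setkey(tfm, key, keylen);
        if (err) {
                crypto_free_skcipher(tfm);
                return ERR_PTR(err);
        }
        return tfm;
}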
2019-08-30  crypto: aegis128 - Fix -Wunused-const-variable warning  (YueHaibing)
crypto/aegis.h:27:32: warning: crypto_aegis_const defined but not used [-Wunused-const-variable=] crypto_aegis_const is only used in aegis128-core.c, just move the definition over there. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-08-30  crypto: essiv - add tests for essiv in cbc(aes)+sha256 mode  (Ard Biesheuvel)
Add a test vector for the ESSIV mode that is the most widely used, i.e., using cbc(aes) and sha256, in both skcipher and AEAD modes (the latter is used by tcrypt to encapsulate the authenc template or h/w instantiations of the same) Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-08-30  crypto: arm64/aegis128 - use explicit vector load for permute vectors  (Ard Biesheuvel)
When building the new aegis128 NEON code in big endian mode, Clang complains about the const uint8x16_t permute vectors in the following way: crypto/aegis128-neon-inner.c:58:40: warning: vector initializers are not compatible with NEON intrinsics in big endian mode [-Wnonportable-vector-initialization] static const uint8x16_t shift_rows = { ^ crypto/aegis128-neon-inner.c:58:40: note: consider using vld1q_u8() to initialize a vector from memory, or vcombine_u8(vcreate_u8(), vcreate_u8()) to initialize from integer constants Since the same issue applies to the uint8x16x4_t loads of the AES Sbox, update those references as well. However, since GCC does not implement the vld1q_u8_x4() intrinsic, switch from IS_ENABLED() to a preprocessor conditional to conditionally include this code. Reported-by: Nathan Chancellor <natechancellor@gmail.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Nathan Chancellor <natechancellor@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
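For illustration, the kind of change involved, shown with the standard AES ShiftRows byte permutation (the actual aegis128 table layout and symbol names may differ slightly):

#include <arm_neon.h>

static const uint8_t shift_rows_bytes[16] = {
        0x0, 0x5, 0xa, 0xf, 0x4, 0x9, 0xe, 0x3,
        0x8, 0xd, 0x2, 0x7, 0xc, 0x1, 0x6, 0xb,
};

static uint8x16_t load_shift_rows(void)
{
        /* was a static const uint8x16_t initialized with a vector
         * initializer, which Clang warns about in big endian NEON builds */
        return vld1q_u8(shift_rows_bytes);
}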
2019-08-22  crypto: sha256_generic - Switch to the generic lib/crypto/sha256.c lib code  (Hans de Goede)
Drop the duplicate generic sha256 (and sha224) implementation from crypto/sha256_generic.c and use the implementation from lib/crypto/sha256.c instead. "diff -u lib/crypto/sha256.c sha256_generic.c" shows that the core sha256_transform function from both implementations is identical and the other code is functionally identical too. Suggested-by: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-08-22  crypto: sha256 - Make lib/crypto/sha256.c suitable for generic use  (Hans de Goede)
Before this commit lib/crypto/sha256.c has only been used in the s390 and x86 purgatory code, make it suitable for generic use: * Export interesting symbols * Add -D__DISABLE_EXPORTS to CFLAGS_sha256.o for purgatory builds to avoid the exports for the purgatory builds * Add to lib/crypto/Makefile and crypto/Kconfig Signed-off-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
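A minimal usage sketch of the now-exported library interface (the prototypes land in crypto/sha256.h at this point and are folded into crypto/sha.h by a later commit in this log; the wrapper name is illustrative):

#include <crypto/sha.h>

static void sha256_digest_buf(const u8 *data, unsigned int len,
                              u8 out[SHA256_DIGEST_SIZE])
{
        struct sha256_state sctx;

        /* no crypto API tfm allocation needed, just a state struct on the stack */
        sha256_init(&sctx);
        sha256_update(&sctx, data, len);
        sha256_final(&sctx, out);
}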
2019-08-22  crypto: sha256_generic - Fix some coding style issues  (Hans de Goede)
Add a bunch of missing spaces after commas and around operators. Note the main goal of this is to make sha256_transform and its helpers identical in formatting to the duplicate implementation in lib/sha256.c, so that "diff -u" can be used to compare them to prove that no functional changes are made when further patches in this series consolidate the 2 implementations into 1. Signed-off-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-08-22  crypto: des - remove now unused __des3_ede_setkey()  (Ard Biesheuvel)
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-08-22  crypto: des - split off DES library from generic DES cipher driver  (Ard Biesheuvel)
Another one for the cipher museum: split off DES core processing into a separate module so other drivers (mostly for crypto accelerators) can reuse the code without pulling in the generic DES cipher itself. This will also permit the cipher interface to be made private to the crypto API itself once we move the only user in the kernel (CIFS) to this library interface. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-08-22  crypto: 3des - move verification out of exported routine  (Ard Biesheuvel)
In preparation of moving the shared key expansion routine into the DES library, move the verification done by __des3_ede_setkey() into its callers. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-08-22  crypto: des/3des_ede - add new helpers to verify keys  (Ard Biesheuvel)
The recently added helper routine to perform key strength validation of triple DES keys is slightly inadequate, since it comes in two versions, neither of which are highly useful for anything other than skciphers (and many drivers still use the older blkcipher interfaces). So let's add a new helper and, considering that this is a helper function that is only intended to be used by crypto code itself, put it in a new des.h header under crypto/internal. While at it, implement a similar helper for single DES, so that we can start replacing the pattern of calling des_ekey() into a temp buffer that occurs in many drivers in drivers/crypto. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-08-19  kexec_file: split KEXEC_VERIFY_SIG into KEXEC_SIG and KEXEC_SIG_FORCE  (Jiri Bohac)
This is a preparatory patch for kexec_file_load() lockdown. A locked down kernel needs to prevent unsigned kernel images from being loaded with kexec_file_load(). Currently, the only way to force the signature verification is compiling with KEXEC_VERIFY_SIG. This prevents loading unsigned images even when the kernel is not locked down at runtime. This patch splits KEXEC_VERIFY_SIG into KEXEC_SIG and KEXEC_SIG_FORCE. Analogous to the MODULE_SIG and MODULE_SIG_FORCE for modules, KEXEC_SIG turns on the signature verification but allows unsigned images to be loaded. KEXEC_SIG_FORCE disallows images without a valid signature. Signed-off-by: Jiri Bohac <jbohac@suse.cz> Signed-off-by: David Howells <dhowells@redhat.com> Signed-off-by: Matthew Garrett <mjg59@google.com> cc: kexec@lists.infradead.org Signed-off-by: James Morris <jmorris@namei.org>
2019-08-15  crypto: arm64/aegis128 - implement plain NEON version  (Ard Biesheuvel)
Provide a version of the core AES transform to the aegis128 SIMD code that does not rely on the special AES instructions, but uses plain NEON instructions instead. This allows the SIMD version of the aegis128 driver to be used on arm64 systems that do not implement those instructions (which are not mandatory in the architecture), such as the Raspberry Pi 3. Since GCC makes a mess of this when using the tbl/tbx intrinsics to perform the sbox substitution, preload the Sbox into v16..v31 in this case and use inline asm to emit the tbl/tbx instructions. Clang does not support this approach, nor does it require it, since it does a much better job at code generation, so there we use the intrinsics as usual. Cc: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-08-15  crypto: aegis128 - provide a SIMD implementation based on NEON intrinsics  (Ard Biesheuvel)
Provide an accelerated implementation of aegis128 by wiring up the SIMD hooks in the generic driver to an implementation based on NEON intrinsics, which can be compiled to both ARM and arm64 code. This results in a performance of 2.2 cycles per byte on Cortex-A53, which is a performance increase of ~11x compared to the generic code. Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-08-15  crypto: aegis128 - add support for SIMD acceleration  (Ard Biesheuvel)
Add some plumbing to allow the AEGIS128 code to be built with SIMD routines for acceleration. Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-08-15  crypto: xts - add support for ciphertext stealing  (Ard Biesheuvel)
Add support for the missing ciphertext stealing part of the XTS-AES specification, which permits inputs of any size >= the block size. Cc: Pascal van Leeuwen <pvanleeuwen@verimatrix.com> Cc: Ondrej Mosnacek <omosnace@redhat.com> Tested-by: Milan Broz <gmazyland@gmail.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-08-15  crypto: aead - Do not allow authsize=0 if auth. alg has digestsize>0  (Pascal van Leeuwen)
Return -EINVAL on an attempt to set the authsize to 0 with an auth. algorithm with a non-zero digestsize (i.e. anything but digest_null) as authenticating the data and then throwing away the result does not make any sense at all. The digestsize zero exception is for use with digest_null for testing purposes only. Signed-off-by: Pascal van Leeuwen <pvanleeuwen@verimatrix.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
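A schematic form of the check (the helper name is illustrative; the actual change lives in the aead setauthsize path):

#include <crypto/internal/aead.h>

static int aead_authsize_ok(struct crypto_aead *tfm, unsigned int authsize)
{
        unsigned int max = crypto_aead_alg(tfm)->maxauthsize;

        /* authsize 0 is only acceptable when the digest itself is empty,
         * i.e. digest_null, which exists for testing purposes only */
        if ((!authsize && max) || authsize > max)
                return -EINVAL;
        return 0;
}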
2019-08-15  crypto: streebog - remove two unused variables  (YueHaibing)
crypto/streebog_generic.c:162:17: warning: Pi defined but not used [-Wunused-const-variable=] crypto/streebog_generic.c:151:17: warning: Tau defined but not used [-Wunused-const-variable=] They are never used, so can be removed. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: YueHaibing <yuehaibing@huawei.com> Reviewed-by: Vitaly Chikunov <vt@altlinux.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-08-15  crypto: aes-generic - remove unused variable 'rco_tab'  (YueHaibing)
crypto/aes_generic.c:64:18: warning: rco_tab defined but not used [-Wunused-const-variable=] It is never used, so can be removed. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: YueHaibing <yuehaibing@huawei.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-08-15  crypto: cryptd - Use refcount_t for refcount  (Chuhong Yuan)
Reference counters should use refcount_t instead of atomic_t, because the refcount_t implementation can prevent overflows and detect possible use-after-free. So convert the atomic_t ref counters to refcount_t. Signed-off-by: Chuhong Yuan <hslester96@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
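The generic shape of such a conversion, as a sketch (illustrative struct and helper names, not the actual cryptd context):

#include <linux/refcount.h>
#include <linux/slab.h>

struct queued_ctx {
        refcount_t refcnt;                      /* was: atomic_t refcnt; */
};

static void ctx_init(struct queued_ctx *ctx)
{
        refcount_set(&ctx->refcnt, 1);          /* was: atomic_set(..., 1) */
}

static void ctx_get(struct queued_ctx *ctx)
{
        refcount_inc(&ctx->refcnt);             /* was: atomic_inc(...) */
}

static void ctx_put(struct queued_ctx *ctx)
{
        /* refcount_t saturates on overflow and warns on misuse */
        if (refcount_dec_and_test(&ctx->refcnt))
                kfree(ctx);
}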
2019-08-09  crypto: gcm - restrict assoclen for rfc4543  (Iuliana Prodan)
Based on seqiv, IPsec ESP and rfc4543/rfc4106 the assoclen can be 16 or 20 bytes. From esp4/esp6, assoclen is sizeof IP Header. This includes spi, seq_no and extended seq_no, that is 8 or 12 bytes. In seqiv, the IV size (8 bytes) is added to the assoclen. Therefore, the assoclen, for rfc4543, should be restricted to 16 or 20 bytes, as for rfc4106. Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com> Reviewed-by: Horia Geanta <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
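A schematic of the new restriction (helper name is illustrative): rfc4543, like rfc4106, only accepts the IPsec-shaped AAD of 16 or 20 bytes (SPI plus 32/64-bit sequence number plus the 8-byte IV):

#include <crypto/aead.h>
#include <linux/errno.h>

static int rfc4543_check_assoclen(struct aead_request *req)
{
        if (req->assoclen != 16 && req->assoclen != 20)
                return -EINVAL;
        return 0;
}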
2019-08-09  crypto: engine - Reduce default RT priority  (Peter Zijlstra)
The crypto engine initializes its kworker thread to FIFO-99 (when requesting RT priority); reduce this to FIFO-50. FIFO-99 is the very highest priority available to SCHED_FIFO and it is not a suitable default; it would indicate the crypto work is the most important work on the machine. Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: "David S. Miller" <davem@davemloft.net> Cc: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-08-09  crypto: gcm - helper functions for assoclen/authsize check  (Iuliana Prodan)
Added inline helper functions to check authsize and assoclen for gcm, rfc4106 and rfc4543. These are used in the generic implementation of gcm, rfc4106 and rfc4543. Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
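A hedged sketch of what such inline helpers look like (names modeled on the commit subject; the exact upstream names in include/crypto/gcm.h may differ slightly):

#include <linux/errno.h>

/* GCM allows tag lengths of 4, 8 and 12..16 bytes. */
static inline int gcm_check_authsize(unsigned int authsize)
{
        switch (authsize) {
        case 4:
        case 8:
        case 12 ... 16:
                return 0;
        default:
                return -EINVAL;
        }
}

/* rfc4106 (GCM for IPsec ESP) further restricts the tag to 8, 12 or 16. */
static inline int rfc4106_check_authsize(unsigned int authsize)
{
        switch (authsize) {
        case 8:
        case 12:
        case 16:
                return 0;
        default:
                return -EINVAL;
        }
}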
2019-08-05  PKCS#7: Introduce pkcs7_get_digest()  (Thiago Jung Bauermann)
IMA will need to access the digest of the PKCS7 message (as calculated by the kernel) before the signature is verified, so introduce pkcs7_get_digest() for that purpose. Also, modify pkcs7_digest() to detect when the digest was already calculated so that it doesn't have to do redundant work. Verifying that sinfo->sig->digest isn't NULL is sufficient because both places which allocate sinfo->sig (pkcs7_parse_message() and pkcs7_note_signed_info()) use kzalloc() so sig->digest is always initialized to zero. Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com> Reviewed-by: Mimi Zohar <zohar@linux.ibm.com> Cc: David Howells <dhowells@redhat.com> Cc: David Woodhouse <dwmw2@infradead.org> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>
2019-08-02  crypto: jitterentropy - build without sanitizer  (Arnd Bergmann)
Recent clang-9 snapshots double the kernel stack usage when building this file with -O0 -fsanitize=kernel-hwaddress, compared to clang-8 and older snapshots, this changed between commits svn364966 and svn366056: crypto/jitterentropy.c:516:5: error: stack frame size of 2640 bytes in function 'jent_entropy_init' [-Werror,-Wframe-larger-than=] int jent_entropy_init(void) ^ crypto/jitterentropy.c:185:14: error: stack frame size of 2224 bytes in function 'jent_lfsr_time' [-Werror,-Wframe-larger-than=] static __u64 jent_lfsr_time(struct rand_data *ec, __u64 time, __u64 loop_cnt) ^ I prepared a reduced test case in case any clang developers want to take a closer look, but from looking at the earlier output it seems that even with clang-8, something was very wrong here. Turn off any KASAN and UBSAN sanitizing for this file, as that likely clashes with -O0 anyway. Turning off just KASAN avoids the warning already, but I suspect both of these have undesired side-effects for jitterentropy. Link: https://godbolt.org/z/fDcwZ5 Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-08-02  Revert "crypto: aegis128 - add support for SIMD acceleration"  (Herbert Xu)
This reverts commit ecc8bc81f2fb3976737ef312f824ba6053aa3590 ("crypto: aegis128 - provide a SIMD implementation based on NEON intrinsics") and commit 7cdc0ddbf74a19cecb2f0e9efa2cae9d3c665189 ("crypto: aegis128 - add support for SIMD acceleration"). They cause compile errors on platforms other than ARM because the mechanism to selectively compile the SIMD code is broken. Reported-by: Heiko Carstens <heiko.carstens@de.ibm.com> Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-07-27  crypto: ghash - add comment and improve help text  (Eric Biggers)
To help avoid confusion, add a comment to ghash-generic.c which explains the convention that the kernel's implementation of GHASH uses. Also update the Kconfig help text and module descriptions to call GHASH a "hash function" rather than a "message digest", since the latter normally means a real cryptographic hash function, which GHASH is not. Cc: Pascal Van Leeuwen <pvanleeuwen@verimatrix.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Pascal Van Leeuwen <pvanleeuwen@verimatrix.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-07-27  crypto: aegis - fix badly optimized clang output  (Arnd Bergmann)
Clang sometimes makes very different inlining decisions from gcc. In case of the aegis crypto algorithms, it decides to turn the innermost primitives (and, xor, ...) into separate functions but inline most of the rest. This results in a huge amount of variables spilled on the stack, leading to rather slow execution as well as kernel stack usage beyond the 32-bit warning limit when CONFIG_KASAN is enabled: crypto/aegis256.c:123:13: warning: stack frame size of 648 bytes in function 'crypto_aegis256_encrypt_chunk' [-Wframe-larger-than=] crypto/aegis256.c:366:13: warning: stack frame size of 1264 bytes in function 'crypto_aegis256_crypt' [-Wframe-larger-than=] crypto/aegis256.c:187:13: warning: stack frame size of 656 bytes in function 'crypto_aegis256_decrypt_chunk' [-Wframe-larger-than=] crypto/aegis128l.c:135:13: warning: stack frame size of 832 bytes in function 'crypto_aegis128l_encrypt_chunk' [-Wframe-larger-than=] crypto/aegis128l.c:415:13: warning: stack frame size of 1480 bytes in function 'crypto_aegis128l_crypt' [-Wframe-larger-than=] crypto/aegis128l.c:218:13: warning: stack frame size of 848 bytes in function 'crypto_aegis128l_decrypt_chunk' [-Wframe-larger-than=] crypto/aegis128.c:116:13: warning: stack frame size of 584 bytes in function 'crypto_aegis128_encrypt_chunk' [-Wframe-larger-than=] crypto/aegis128.c:351:13: warning: stack frame size of 1064 bytes in function 'crypto_aegis128_crypt' [-Wframe-larger-than=] crypto/aegis128.c:177:13: warning: stack frame size of 592 bytes in function 'crypto_aegis128_decrypt_chunk' [-Wframe-larger-than=] Forcing the primitives to all get inlined avoids the issue and the resulting code is similar to what gcc produces. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
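An illustration of the fix (the union layout mirrors crypto/aegis.h; the helper shown is schematic): force the tiny primitives inline so the compiler cannot outline them and spill their operands to the stack.

#include <linux/compiler.h>
#include <linux/types.h>

union aegis_block {
        __le64 words64[2];
        u8 bytes[16];
};

static __always_inline void
aegis_block_xor(union aegis_block *dst, const union aegis_block *src)
{
        /* was a plain 'static' helper, which clang chose not to inline */
        dst->words64[0] ^= src->words64[0];
        dst->words64[1] ^= src->words64[1];
}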