2025-01-04  dt-bindings: crypto: qcom,prng: document ipq9574, ipq5424 and ipq5322  (Md Sadre Alam)

Document the ipq9574, ipq5424 and ipq5322 compatibles for the True
Random Number Generator.

Signed-off-by: Md Sadre Alam <quic_mdalam@quicinc.com>
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-01-04  crypto: fips - Use str_enabled_disabled() helper in fips_enable()  (Thorsten Blum)

Remove hard-coded strings by using the str_enabled_disabled() helper
function. Use pr_info() instead of printk(KERN_INFO) to silence a
checkpatch warning.

Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
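As a sketch of the pattern this patch applies (the function below is a
simplified stand-in, not the actual fips_enable() body):

```c
#include <linux/printk.h>
#include <linux/string_choices.h>

/* Hypothetical stand-in for the fips_enable() reporting line. */
static void report_fips_state(bool enabled)
{
	/* Before:
	 * printk(KERN_INFO "fips mode: %s\n",
	 *        enabled ? "enabled" : "disabled");
	 */
	pr_info("fips mode: %s\n", str_enabled_disabled(enabled));
}
```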
2024-12-28  crypto: iaa - Fix IAA disabling that occurs when sync_mode is set to 'async'  (Kanchana P Sridhar)

With the latest mm-unstable, setting the iaa_crypto sync_mode to 'async'
causes the crypto testmgr.c test_acomp() failure and dmesg call traces,
and leaves zswap unable to use 'deflate-iaa' as a compressor:

  echo async > /sys/bus/dsa/drivers/crypto/sync_mode

  [ 255.271030] zswap: compressor deflate-iaa not available
  [ 369.960673] INFO: task cryptomgr_test:4889 blocked for more than 122 seconds.
  [ 369.970127] Not tainted 6.13.0-rc1-mm-unstable-12-16-2024+ #324
  [ 369.977411] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [ 369.986246] task:cryptomgr_test state:D stack:0 pid:4889 tgid:4889 ppid:2 flags:0x00004000
  [ 369.986253] Call Trace:
  [ 369.986256] <TASK>
  [ 369.986260] __schedule+0x45c/0xfa0
  [ 369.986273] schedule+0x2e/0xb0
  [ 369.986277] schedule_timeout+0xe7/0x100
  [ 369.986284] ? __prepare_to_swait+0x4e/0x70
  [ 369.986290] wait_for_completion+0x8d/0x120
  [ 369.986293] test_acomp+0x284/0x670
  [ 369.986305] ? __pfx_cryptomgr_test+0x10/0x10
  [ 369.986312] alg_test_comp+0x263/0x440
  [ 369.986315] ? sched_balance_newidle+0x259/0x430
  [ 369.986320] ? __pfx_cryptomgr_test+0x10/0x10
  [ 369.986323] alg_test.part.27+0x103/0x410
  [ 369.986326] ? __schedule+0x464/0xfa0
  [ 369.986330] ? __pfx_cryptomgr_test+0x10/0x10
  [ 369.986333] cryptomgr_test+0x20/0x40
  [ 369.986336] kthread+0xda/0x110
  [ 369.986344] ? __pfx_kthread+0x10/0x10
  [ 369.986346] ret_from_fork+0x2d/0x40
  [ 369.986355] ? __pfx_kthread+0x10/0x10
  [ 369.986358] ret_from_fork_asm+0x1a/0x30
  [ 369.986365] </TASK>

This happens because the only async polling without interrupts that
iaa_crypto currently implements is in the 'sync' mode. With 'async',
iaa_crypto calls to compress/decompress submit the descriptor and
return -EINPROGRESS, without any mechanism in the driver to poll for
completions. Hence callers such as test_acomp() in crypto/testmgr.c or
zswap, which wrap the calls to crypto_acomp_compress() and
crypto_acomp_decompress() in synchronous wrappers, will block
indefinitely.

Even before zswap can notice this problem, crypto testmgr.c's
test_acomp() will fail and prevent registration of "deflate-iaa" as a
valid crypto acomp algorithm, thereby disallowing the use of
"deflate-iaa" as a zswap compressor (zswap will fall back to the
default compressor in this case).

To fix this issue, this patch modifies the iaa_crypto sync_mode set
function to treat 'async' as equivalent to 'sync', so that the correct
and only supported driver async-polling-without-interrupts
implementation is enabled, and zswap can use 'deflate-iaa' as the
compressor. Hence, with this patch, this is what will happen:

  echo async > /sys/bus/dsa/drivers/crypto/sync_mode
  cat /sys/bus/dsa/drivers/crypto/sync_mode
  sync

There are no crypto/testmgr.c test_acomp() errors and no call traces,
and zswap can use 'deflate-iaa' without any errors. The iaa_crypto
documentation has also been updated to mention this caveat with 'async'
and what to expect with this fix.

True iaa_crypto async polling without interrupts is enabled in patch
"crypto: iaa - Implement batch_compress(), batch_decompress() API in
iaa_crypto." [1], which is under review as part of the "zswap IAA
compress batching" patch series [2]. Until that is merged, we would
appreciate it if this current patch could be considered for a hotfix.

[1]: https://patchwork.kernel.org/project/linux-mm/patch/20241221063119.29140-5-kanchana.p.sridhar@intel.com/
[2]: https://patchwork.kernel.org/project/linux-mm/list/?series=920084

Fixes: 09646c98d ("crypto: iaa - Add irq support for the crypto async interface")
Signed-off-by: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
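For illustration, a minimal sketch of a sync_mode store helper that
treats 'async' as 'sync', as the fix describes; the flag and helper
names here are hypothetical stand-ins for the driver's internals:

```c
#include <linux/errno.h>
#include <linux/string.h>
#include <linux/types.h>

/* Hypothetical flags standing in for the driver's internal state. */
static bool use_irq;
static bool async_mode;

/* Sketch of the fix: 'async' now selects the same polling
 * configuration as 'sync', since that is the only
 * async-without-interrupts completion path the driver implements. */
static int set_sync_mode(const char *name)
{
	if (sysfs_streq(name, "irq")) {
		use_irq = true;
		async_mode = true;
	} else if (sysfs_streq(name, "sync") || sysfs_streq(name, "async")) {
		/* Treat 'async' as 'sync': poll for completion in-driver. */
		use_irq = false;
		async_mode = false;
	} else {
		return -EINVAL;
	}
	return 0;
}
```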
2024-12-28  crypto: lib/aesgcm - Reduce stack usage in libaesgcm_init  (Herbert Xu)

The stack frame in libaesgcm_init triggers a size warning on x86-64.
Reduce it by making buf static.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
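A sketch of the technique, assuming a run-once __init context where a
static buffer is safe (the size and function name are illustrative, not
the actual libaesgcm_init() code):

```c
#include <linux/init.h>
#include <linux/string.h>
#include <linux/types.h>

#define BUF_SIZE 1024	/* stand-in for the self-test buffer size */

static int __init buf_demo_init(void)
{
	/* Before: u8 buf[BUF_SIZE]; -- large enough to trip the
	 * stack-frame size warning on x86-64. A static buffer lives
	 * in .bss instead, which is fine for a run-once, single-
	 * threaded __init function. */
	static u8 buf[BUF_SIZE];

	memset(buf, 0, sizeof(buf));
	/* ... run self-tests using buf ... */
	return 0;
}
```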
2024-12-21  crypto: qce - revert "use __free() for a buffer that's always freed"  (Nathan Chancellor)

Commit ce8fd0500b74 ("crypto: qce - use __free() for a buffer that's
always freed") introduced a buggy use of __free(), which clang
rightfully points out:

  drivers/crypto/qce/sha.c:365:3: error: cannot jump from this goto statement to its label
    365 |                 goto err_free_ahash;
        |                 ^
  drivers/crypto/qce/sha.c:373:6: note: jump bypasses initialization of variable with __attribute__((cleanup))
    373 |         u8 *buf __free(kfree) = kzalloc(keylen + QCE_MAX_ALIGN_SIZE,
        |            ^

Jumping over a variable declared with the cleanup attribute does not
prevent the cleanup function from running; instead, the cleanup
function is called with an uninitialized value. Moving the declaration
back to the top of the function with __free() and a NULL initialization
would resolve the bug, but that is really not much different from the
original code. Since the function is so simple and there is no
functional reason to use __free() here, just revert the original change
to resolve the issue.

Fixes: ce8fd0500b74 ("crypto: qce - use __free() for a buffer that's always freed")
Reported-by: Linux Kernel Functional Testing <lkft@linaro.org>
Closes: https://lore.kernel.org/CA+G9fYtpAwXa5mUQ5O7vDLK2xN4t-kJoxgUe1ZFRT=AGqmLSRA@mail.gmail.com/
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Acked-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
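To make the hazard concrete, here is a hypothetical pair of functions
sketching the rejected pattern and the NULL-initialized variant the
commit mentions (hazard_sketch() intentionally does not compile with
clang; it exists only to show the shape of the diagnosed code):

```c
#include <linux/cleanup.h>
#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/types.h>

/* The rejected pattern: the goto jumps over the initialization of a
 * __free() variable, yet kfree() would still run at scope exit, on a
 * garbage pointer. clang refuses to compile this. */
int hazard_sketch(unsigned int keylen)
{
	if (keylen > 512)
		goto err;	/* clang: jump bypasses initialization of
				 * variable with __attribute__((cleanup)) */

	u8 *buf __free(kfree) = kzalloc(keylen, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;
	/* ... use buf ... */
	return 0;
err:
	return -EINVAL;
}

/* The safe variant: initialize to NULL at the top so every path sees
 * a defined value (kfree(NULL) is a no-op). */
int safe_sketch(unsigned int keylen)
{
	u8 *buf __free(kfree) = NULL;

	if (keylen > 512)
		return -EINVAL;

	buf = kzalloc(keylen, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;
	/* ... use buf ... */
	return 0;
}
```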
2024-12-21  crypto: ixp4xx - fix OF node reference leaks in init_ixp_crypto()  (Joe Hattori)

init_ixp_crypto() calls of_parse_phandle_with_fixed_args() multiple
times, but does not release all the obtained refcounts. Fix it by
adding of_node_put() calls.

This bug was found by an experimental static analysis tool that I am
developing.

Fixes: 76f24b4f46b8 ("crypto: ixp4xx - Add device tree support")
Signed-off-by: Joe Hattori <joe@pf.is.s.u-tokyo.ac.jp>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
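A sketch of the leak pattern and its fix, with an illustrative property
name; each successful parse takes a reference on the returned node,
which must be dropped with of_node_put():

```c
#include <linux/of.h>

static int parse_queue(struct device_node *np)
{
	struct of_phandle_args queue_spec;
	int ret;

	ret = of_parse_phandle_with_fixed_args(np, "queue-rx", 1, 0,
					       &queue_spec);
	if (ret)
		return ret;

	/* ... read queue_spec.args[0] ... */

	of_node_put(queue_spec.np);	/* the previously missing put */
	return 0;
}
```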
2024-12-21  crypto: hisilicon/sec2 - fix for aead invalid authsize  (Wenkai Lin)

When the digest algorithm is HMAC-SHAx or similar, the authsize may be
less than 4 bytes, in which case mac_len of the BD is set to zero. The
hardware considers this a BD configuration error and reports a RAS
error, so the sec driver needs to fall back to software calculation in
this case. This patch adds a check for it and removes an unnecessary
check that is already done by the crypto layer.

Fixes: 2f072d75d1ab ("crypto: hisilicon - Add aead support on SEC2")
Signed-off-by: Wenkai Lin <linwenkai6@hisilicon.com>
Signed-off-by: Chenghai Huang <huangchenghai2@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-21  crypto: hisilicon/sec2 - fix for aead icv error  (Wenkai Lin)

When an AEAD algorithm is used for encryption or decryption, the input
authentication length varies, and the hardware needs the actual input
length to pass the integrity check. Currently the driver uses a fixed
authentication length, which causes decryption failures, so the length
configuration is modified. In addition, setting the auth length in the
setkey function is unnecessary, so that step was removed.

Fixes: 2f072d75d1ab ("crypto: hisilicon - Add aead support on SEC2")
Signed-off-by: Wenkai Lin <linwenkai6@hisilicon.com>
Signed-off-by: Chenghai Huang <huangchenghai2@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-21  crypto: x86/aes-xts - additional optimizations  (Eric Biggers)

Reduce latency by taking advantage of the property
vaesenclast(key, a) ^ b == vaesenclast(key ^ b, a), like I did in the
AES-GCM code. Also replace a vpand and vpxor with a vpternlogd.

On AMD Zen 5 this improves performance by about 3%. Intel performance
remains about the same, with a 0.1% improvement being seen on Icelake.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
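The identity holds because the last AES round computes
ShiftRows(SubBytes(a)) XOR key, so a trailing XOR with b can be folded
into the round key ahead of time, off the data's critical path. A small
userspace check of the identity using AES-NI intrinsics (not the kernel
assembly itself; note intrinsics use Intel operand order):

```c
/* Verifies: aesenclast(a, key) ^ b == aesenclast(a, key ^ b).
 * Build with: gcc -maes -O2 check.c */
#include <immintrin.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	__m128i a   = _mm_set1_epi32(0x01234567);
	__m128i key = _mm_set1_epi32((int)0x89abcdef);
	__m128i b   = _mm_set1_epi32(0x0f1e2d3c);

	__m128i lhs = _mm_xor_si128(_mm_aesenclast_si128(a, key), b);
	__m128i rhs = _mm_aesenclast_si128(a, _mm_xor_si128(key, b));

	printf("identity %s\n",
	       memcmp(&lhs, &rhs, sizeof(lhs)) ? "fails" : "holds");
	return 0;
}
```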
2024-12-21  crypto: x86/aes-xts - more code size optimizations  (Eric Biggers)

Prefer immediates of -128 to 128, since the former fits in a signed
byte, saving 3 bytes per instruction. Also prefer VEX-coded
instructions to EVEX where this is easy to do.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-21  crypto: x86/aes-xts - change len parameter to int  (Eric Biggers)

The AES-XTS assembly code currently treats the length as signed, since
this saves a few instructions in the loop compared to treating it as
unsigned. Therefore update the type to make this clear. (It is not
actually passed any values larger than PAGE_SIZE.)

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-21  crypto: x86/aes-xts - improve some comments  (Eric Biggers)

Improve some of the comments in aes-xts-avx-x86_64.S.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-21  crypto: x86/aes-xts - make the register aliases per-function  (Eric Biggers)

Since aes-xts-avx-x86_64.S contains multiple functions, move the
register aliases for the parameters and local variables of the XTS
update function into the macro that generates that function. Then add
register aliases to aes_xts_encrypt_iv() to improve readability there.
This makes aes-xts-avx-x86_64.S consistent with the GCM assembly files.

No change in the generated code.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-21  crypto: x86/aes-xts - use .irp when useful  (Eric Biggers)

Use .irp instead of repeating code.

No change in the generated code.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-21  crypto: x86/aes-gcm - tune better for AMD CPUs  (Eric Biggers)

Reorganize the main loop to free up the RNDKEYLAST[0-3] registers and
use them for more cached round keys. This improves performance by about
2% on AMD Zen 4 and Zen 5. Intel performance remains about the same.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-21  crypto: x86/aes-gcm - code size optimization  (Eric Biggers)

Prefer immediates of -128 to 128, since the former fits in a signed
byte, saving 3 bytes per instruction. Also replace a vpand and vpxor
with a vpternlogd.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-21  crypto: lib/gf128mul - Remove some bbe deadcode  (Dr. David Alan Gilbert)

gf128mul_4k_bbe(), gf128mul_bbe() and gf128mul_init_4k_bbe() are part
of the library originally added in 2006 by commit c494e0705d67
("[CRYPTO] lib: table driven multiplications in GF(2^128)") but have
never been used. Remove them.

(BBE is big-endian byte, big-endian bit ordering. Note that the 64k
table version is used, and I've left that in.)

Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-21  rhashtable: Fix potential deadlock by moving schedule_work outside lock  (Breno Leitao)

Move the hash table growth check and work scheduling outside the rht
lock to prevent a possible circular locking dependency.

The original implementation could trigger a lockdep warning due to a
potential deadlock scenario involving nested locks between the
rhashtable bucket lock, the rq lock, and the dsq lock. By relocating
the growth check and work scheduling after releasing the rht lock, we
break this potential deadlock chain.

This change expands the flexibility of rhashtable by removing the
restrictive locking that previously limited its use in scheduler and
workqueue contexts.

It is important to note that this calls rht_grow_above_75(), which
reads from struct rhashtable without holding the lock. If this is a
problem, we can move the check back under the lock and schedule the
workqueue after the lock is released.

Fixes: f0e1a0643a59 ("sched_ext: Implement BPF extensible scheduler class")
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Breno Leitao <leitao@debian.org>
[Modified so that atomic_inc is also moved outside of the bucket lock,
 along with the grow-above-75% check.]
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
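A sketch of the reordering with a hypothetical toy table (the real
rhashtable internals differ); the point is that schedule_work() runs
only after the bucket lock is dropped:

```c
#include <linux/atomic.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

struct toy_table {
	spinlock_t		bucket_lock;
	atomic_t		nelems;
	int			grow_watermark;
	struct work_struct	rehash_work;
};

static void toy_insert(struct toy_table *tbl /* , element ... */)
{
	spin_lock(&tbl->bucket_lock);
	/* ... link the element into its bucket ... */
	spin_unlock(&tbl->bucket_lock);

	/* Moved out from under the lock, mirroring the fix: the count
	 * bump and watermark read are racy but benign; at worst the
	 * rehash is scheduled one insertion late. */
	if (atomic_inc_return(&tbl->nelems) > tbl->grow_watermark)
		schedule_work(&tbl->rehash_work);
}
```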
2024-12-14  crypto: keywrap - remove assignment of 0 to cra_alignmask  (Eric Biggers)

Since this code is zero-initializing the algorithm struct, the
assignment of 0 to cra_alignmask is redundant. Remove it to reduce the
number of matches that are found when grepping for cra_alignmask.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
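For illustration, the shape of the redundant line being removed, with a
placeholder algorithm definition:

```c
#include <linux/crypto.h>

/* With a designated-initializer (or otherwise zeroed) struct, every
 * unnamed field is already zero, so spelling out .cra_alignmask = 0
 * adds nothing. */
static struct crypto_alg example_alg = {
	.cra_name	= "example",
	.cra_blocksize	= 16,
	/* .cra_alignmask = 0,  <-- redundant line the patch removes */
};
```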
2024-12-14  crypto: aegis - remove assignments of 0 to cra_alignmask  (Eric Biggers)

Struct fields are zero by default, so these lines of code have no
effect. Remove them to reduce the number of matches that are found when
grepping for cra_alignmask.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-14  crypto: x86 - remove assignments of 0 to cra_alignmask  (Eric Biggers)

Struct fields are zero by default, so these lines of code have no
effect. Remove them to reduce the number of matches that are found when
grepping for cra_alignmask.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-14  crypto: seed - stop using cra_alignmask  (Eric Biggers)

Instead of specifying a nonzero alignmask, use the unaligned access
helpers. This eliminates unnecessary alignment operations on most CPUs,
which can handle unaligned accesses efficiently, and brings us a step
closer to eventually removing support for the alignmask field.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
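A sketch of the conversion this series applies, using big-endian words
as an example: instead of relying on cra_alignmask to make the API
align the buffers, read and write words explicitly with the unaligned
helpers, which compile to plain loads/stores on CPUs with efficient
unaligned access:

```c
#include <linux/types.h>
#include <linux/unaligned.h>

/* Load a 16-byte block into four big-endian words, regardless of the
 * alignment of 'in'. */
static void load_block(u32 x[4], const u8 *in)
{
	x[0] = get_unaligned_be32(in);
	x[1] = get_unaligned_be32(in + 4);
	x[2] = get_unaligned_be32(in + 8);
	x[3] = get_unaligned_be32(in + 12);
}

/* Store four big-endian words back to a possibly unaligned buffer. */
static void store_block(const u32 x[4], u8 *out)
{
	put_unaligned_be32(x[0], out);
	put_unaligned_be32(x[1], out + 4);
	put_unaligned_be32(x[2], out + 8);
	put_unaligned_be32(x[3], out + 12);
}
```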
2024-12-14  crypto: khazad - stop using cra_alignmask  (Eric Biggers)

Instead of specifying a nonzero alignmask, use the unaligned access
helpers. This eliminates unnecessary alignment operations on most CPUs,
which can handle unaligned accesses efficiently, and brings us a step
closer to eventually removing support for the alignmask field.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-14  crypto: tea - stop using cra_alignmask  (Eric Biggers)

Instead of specifying a nonzero alignmask, use the unaligned access
helpers. This eliminates unnecessary alignment operations on most CPUs,
which can handle unaligned accesses efficiently, and brings us a step
closer to eventually removing support for the alignmask field.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-14  crypto: aria - stop using cra_alignmask  (Eric Biggers)

Instead of specifying a nonzero alignmask, use the unaligned access
helpers. This eliminates unnecessary alignment operations on most CPUs,
which can handle unaligned accesses efficiently, and brings us a step
closer to eventually removing support for the alignmask field.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-14  crypto: anubis - stop using cra_alignmask  (Eric Biggers)

Instead of specifying a nonzero alignmask, use the unaligned access
helpers. This eliminates unnecessary alignment operations on most CPUs,
which can handle unaligned accesses efficiently, and brings us a step
closer to eventually removing support for the alignmask field.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-14  crypto: skcipher - remove support for physical address walks  (Eric Biggers)

Since the physical address support in skcipher_walk is not used
anymore, remove all the code associated with it. This includes:

- The skcipher_walk_async() and skcipher_walk_complete() functions;
- The SKCIPHER_WALK_PHYS flag and everything conditional on it;
- The buffers, phys, and virt.page fields in struct skcipher_walk;
- struct skcipher_walk_buffer.

As a result, skcipher_walk now just supports virtual addresses.
Physical address support in skcipher_walk is unneeded because drivers
that need physical addresses just use the scatterlists directly.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-14  crypto: n2 - remove Niagara2 SPU driver  (Eric Biggers)

Remove the driver for the Stream Processing Unit (SPU) on the
Niagara 2. Removing this driver allows removing the support for
physical address walks in skcipher_walk. That is a misfeature that is
used only by this driver and increases the overhead of the crypto API
for everyone else.

There is little evidence that anyone cares about this driver. The
Niagara 2, a.k.a. the UltraSPARC T2, is a server CPU released in 2007.
The SPU is also present on the SPARC T3, released in 2010. However, the
SPU went away in SPARC T4, released in 2012, which replaced it with
proper cryptographic instructions instead. These newer instructions are
supported by the kernel in arch/sparc/crypto/.

This driver was completely broken from (at least) 2015 to 2022, from
commit 8996eafdcbad ("crypto: ahash - ensure statesize is non-zero") to
commit 76a4e8745935 ("crypto: n2 - add missing hash statesize"), since
its probe function always returned an error before registering any
algorithms. Though, even with that obvious issue fixed, it is unclear
whether the driver now works correctly. E.g., there are no indications
that anyone has run the self-tests recently.

One bug report for this driver in 2017
(https://lore.kernel.org/r/nycvar.YFH.7.76.1712110214220.28416@n3.vanv.qr)
complained that it crashed the kernel while being loaded. The reporter
didn't seem to care about the functionality of the driver, but rather
just the fact that loading it crashed the kernel. In fact, not until
2022 was the driver fixed to maybe actually register its algorithms
with the crypto API. The 2022 fix does have a Reported-by and
Tested-by, but that may similarly have been just about making the error
messages go away, as opposed to someone actually wanting to use the
driver.

As such, it seems appropriate to retire this driver in mainline.

Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-14  crypto: qce - fix priority to be less than ARMv8 CE  (Eric Biggers)

As QCE is an order of magnitude slower than the ARMv8 Crypto Extensions
on the CPU, and is also less well tested, give it a lower priority.
Previously the QCE SHA algorithms had higher priority than the ARMv8 CE
equivalents, and ciphers such as AES-XTS had the same priority, which
meant the QCE versions were chosen if they happened to be loaded later.

Fixes: ec8f5d8f6f76 ("crypto: qce - Qualcomm crypto engine driver")
Cc: stable@vger.kernel.org
Cc: Bartosz Golaszewski <brgl@bgdev.pl>
Cc: Neil Armstrong <neil.armstrong@linaro.org>
Cc: Thara Gopinath <thara.gopinath@gmail.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-14  crypto: ccp - Use scoped guard for mutex  (Mario Limonciello)

Use a scoped guard to simplify the cleanup handling.

Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
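A minimal sketch of the scoped-guard pattern, using an illustrative
lock and function rather than the actual ccp code:

```c
#include <linux/cleanup.h>
#include <linux/errno.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(demo_lock);

/* guard(mutex) takes the lock here and releases it automatically on
 * every return path, so error exits no longer need an
 * unlock-and-goto dance. */
static int demo_op(int arg)
{
	guard(mutex)(&demo_lock);

	if (arg < 0)
		return -EINVAL;	/* unlocked automatically */

	/* ... do work under the lock ... */
	return 0;		/* unlocked automatically */
}
```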
2024-12-14  crypto: qce - switch to using a mutex  (Bartosz Golaszewski)

Having switched from a tasklet to a workqueue, we are no longer limited
to atomic APIs and can now convert the spinlock to a mutex. This, along
with the conversion from tasklet to workqueue, grants us a ~15%
improvement in cryptsetup benchmarks for AES encryption.

While at it: use guards to simplify the locking code.

Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
Reviewed-by: Neil Armstrong <neil.armstrong@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-14  crypto: qce - convert tasklet to workqueue  (Bartosz Golaszewski)

There's nothing about the qce driver that requires running from a
tasklet. Switch to using the system workqueue.

Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
Reviewed-by: Neil Armstrong <neil.armstrong@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
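A sketch of the tasklet-to-workqueue conversion with illustrative names
(not the actual qce structures); the handler now runs in process
context, which is what enables the spinlock-to-mutex change in the
entry above:

```c
#include <linux/workqueue.h>

struct demo_device {
	struct work_struct done_work;
};

/* Completion handler, now a work item in process context. */
static void demo_done_work(struct work_struct *work)
{
	struct demo_device *d = container_of(work, struct demo_device,
					     done_work);
	/* ... complete the finished request on d ... */
	(void)d;
}

static void demo_setup(struct demo_device *d)
{
	/* Before: tasklet_setup(&d->done_tasklet, demo_done_tasklet); */
	INIT_WORK(&d->done_work, demo_done_work);
}

static void demo_irq_done(struct demo_device *d)
{
	/* Before: tasklet_schedule(&d->done_tasklet); */
	schedule_work(&d->done_work);
}
```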
2024-12-14  crypto: qce - use __free() for a buffer that's always freed  (Bartosz Golaszewski)

The buffer allocated in qce_ahash_hmac_setkey is always freed before
returning, so use __free() to automate it.

Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
Reviewed-by: Neil Armstrong <neil.armstrong@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-14  crypto: qce - make qce_register_algs() a managed interface  (Bartosz Golaszewski)

Make qce_register_algs() a managed interface. This allows us to further
simplify the remove() callback.

Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
Reviewed-by: Neil Armstrong <neil.armstrong@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-14  crypto: qce - convert qce_dma_request() to use devres  (Bartosz Golaszewski)

Make qce_dma_request() into a managed interface. With this we can
simplify the error path in probe() and drop another operation from
remove().

Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
Reviewed-by: Neil Armstrong <neil.armstrong@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-14  crypto: qce - shrink code with devres clk helpers  (Bartosz Golaszewski)

Use devm_clk_get_optional_enabled() to avoid having to enable the
clocks separately, as well as having to put the clocks in the error
path and the remove() callback.

Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
Reviewed-by: Neil Armstrong <neil.armstrong@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-14  crypto: qce - remove unneeded call to icc_set_bw() in error path  (Bartosz Golaszewski)

There's no need to call icc_set_bw(qce->mem_path, 0, 0) in the error
path, as this is already done in the release path of devm_of_icc_get().

Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
Reviewed-by: Neil Armstrong <neil.armstrong@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-14  crypto: qce - unregister previously registered algos in error path  (Bartosz Golaszewski)

If we encounter an error when registering algorithms with the crypto
framework, we just bail out and don't unregister the ones we
successfully registered in prior iterations of the loop. Add code that
goes back over the algos and unregisters them before returning an error
from qce_register_algs().

Cc: stable@vger.kernel.org
Fixes: ec8f5d8f6f76 ("crypto: qce - Qualcomm crypto engine driver")
Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
Reviewed-by: Neil Armstrong <neil.armstrong@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
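A sketch of the unwind pattern such a fix adds, using the generic
crypto_register_alg() for illustration (qce registers several algorithm
types, so the real loop differs):

```c
#include <linux/crypto.h>

/* On a mid-loop failure, walk back over the algorithms already
 * registered and unregister them before propagating the error. */
static int register_all(struct crypto_alg *algs, int count)
{
	int i, ret;

	for (i = 0; i < count; i++) {
		ret = crypto_register_alg(&algs[i]);
		if (ret)
			goto err_unregister;
	}
	return 0;

err_unregister:
	while (--i >= 0)
		crypto_unregister_alg(&algs[i]);
	return ret;
}
```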
2024-12-14  crypto: qce - fix goto jump in error path  (Bartosz Golaszewski)

If qce_check_version() fails, we should jump to err_dma, as we already
called qce_dma_request() a couple of lines before.

Cc: stable@vger.kernel.org
Fixes: ec8f5d8f6f76 ("crypto: qce - Qualcomm crypto engine driver")
Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
Reviewed-by: Neil Armstrong <neil.armstrong@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-10  crypto: sig - Set maskset to CRYPTO_ALG_TYPE_MASK  (Herbert Xu)

As sig is now a standalone type, it no longer needs to have a wide mask
that includes akcipher.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-10  MAINTAINERS: Move rhashtable over to linux-crypto  (Herbert Xu)

This patch moves the rhashtable mailing list over to linux-crypto. This
would allow rhashtable patches to go through my tree instead of the
networking tree. More uses are popping up outside of the network stack,
and having it under the networking tree no longer makes sense.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-10  crypto: caam - use JobR's space to access page 0 regs  (Gaurav Jain)

On iMX8DXL/QM/QXP (SECO) and iMX8ULP (ELE) SoCs, access to the
controller region (CAAM page 0) is not permitted from the non-secure
world. Use JobR's register space to access page 0 registers instead.

Fixes: 6a83830f649a ("crypto: caam - warn if blob_gen key is insecure")
Signed-off-by: Gaurav Jain <gaurav.jain@nxp.com>
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-10  dt-bindings: crypto: qcom-qce: document the QCS8300 crypto engine  (Yuvaraj Ranganathan)

Document the crypto engine on the QCS8300 Platform.

Acked-by: Rob Herring (Arm) <robh@kernel.org>
Signed-off-by: Yuvaraj Ranganathan <quic_yrangana@quicinc.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-10  dt-bindings: crypto: ice: document the qcs8300 inline crypto engine  (Yuvaraj Ranganathan)

Add the compatible string for QCom ICE on qcs8300 SoCs.

Signed-off-by: Yuvaraj Ranganathan <quic_yrangana@quicinc.com>
Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-10  dt-bindings: crypto: qcom,prng: document QCS8300  (Yuvaraj Ranganathan)

Document the QCS8300 compatible for the True Random Number Generator.

Acked-by: Conor Dooley <conor.dooley@microchip.com>
Signed-off-by: Yuvaraj Ranganathan <quic_yrangana@quicinc.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-10  crypto: hisilicon/zip - support new error report  (Weili Qian)

The error detection of the data aggregation feature is separate from
that of the compression/decompression feature. This patch enables error
detection and reporting for the data aggregation feature. When an
unrecoverable error occurs in the algorithm core, the device reports
the error to the driver, and the driver resets the device.

Signed-off-by: Weili Qian <qianweili@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-10  crypto: hisilicon/zip - add data aggregation feature  (Weili Qian)

The zip device adds a data aggregation feature: data with the same key
can be combined. This patch enables the device's data aggregation
feature. The new feature is called "hashagg" and is registered with the
uacce subsystem to allow applications to submit data aggregation
operations from user space.

Signed-off-by: Weili Qian <qianweili@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-10  crypto: api - Call crypto_schedule_test outside of mutex  (Herbert Xu)

There is no need to hold the crypto mutex when scheduling a self-test.
In fact, prior to the patch introducing asynchronous testing, this was
done outside of the locked area. Move the crypto_schedule_test call
back out of the locked area.

Also move crypto_remove_final to the else branch under the
schedule-test call, as the list of algorithms to be removed is
non-empty only when the test larval is NULL (i.e., testing is
disabled).

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-10  crypto: api - Fix boot-up self-test race  (Herbert Xu)

During the boot process self-tests are postponed so that all algorithms
are registered when the test starts. In the event that algorithms are
still being registered during these tests, which can occur either
because the algorithm is registered at late_initcall, or because a
self-test itself triggers the creation of an instance, some self-tests
may never start at all.

Fix this by setting the flag at the start of crypto_start_tests.

Note that this race is theoretical and has never been observed in
practice.

Fixes: adad556efcdd ("crypto: api - Fix built-in testing dependency failures")
Signed-off-by: Herbert Xu <herbert.xu@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-10  crypto: tegra - do not transfer req when tegra init fails  (Chen Ridong)

The tegra_cmac_init or tegra_sha_init function may return an error when
memory is exhausted. The driver should not transfer the request to the
engine when they return an error.

Fixes: 0880bb3b00c8 ("crypto: tegra - Add Tegra Security Engine driver")
Signed-off-by: Chen Ridong <chenridong@huawei.com>
Acked-by: Akhil R <akhilrajeev@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
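A sketch of the shape of the fix, with hypothetical stand-ins for the
driver's context and helpers (the real tegra code paths differ):

```c
#include <linux/errno.h>
#include <linux/slab.h>

struct demo_ctx { void *buf; };

/* Stand-in for tegra_cmac_init()/tegra_sha_init(): an allocation that
 * can fail when memory is exhausted. */
static int demo_hash_init(struct demo_ctx *ctx)
{
	ctx->buf = kzalloc(64, GFP_KERNEL);
	return ctx->buf ? 0 : -ENOMEM;
}

/* Stand-in for handing the request to the crypto engine. */
static int demo_transfer_to_engine(struct demo_ctx *ctx)
{
	return 0;
}

/* The fix in miniature: check the init result before transferring,
 * instead of queueing the request unconditionally. */
static int demo_do_req(struct demo_ctx *ctx)
{
	int ret = demo_hash_init(ctx);

	if (ret)
		return ret;	/* previously fell through to transfer */

	return demo_transfer_to_engine(ctx);
}
```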