path: root/lib
Age  Commit message  Author
2025-05-27  Merge tag 'sound-6.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound  (Linus Torvalds)
Pull sound updates from Takashi Iwai:
 "We've received a lot of activities in this cycle, mostly about leaf driver codes rather than the core part, but with a good mixture of code cleanups and new driver additions. Below are some highlights:

  ASoC:
   - Support for automatically enumerating DAIs from standards conforming SoundWire SDCA devices; not much used as of this writing, rather for future implementations
   - Conversion of quite a few drivers to newer GPIO APIs
   - Continued cleanups and helper usages in allover places
   - Support for a wider range of Intel AVS platforms
   - Support for AMD ACP 7.x platforms, Cirrus Logic CS35L63 and CS48L32, Everest Semiconductor ES8375 and ES8389, Longsoon-1 AC'97 controllers, nVidia Tegra264, Richtek ALC203 and RT9123 and Rockchip SAI controllers

  HD-audio:
   - Lots of cleanups of TAS2781 codec drivers
   - A new HD-audio control bound via ACPI for Nvidia
   - Support for Tegra264, Intel WCL, usual new codec quirks

  USB-audio:
   - Fix a race at removal of MIDI device
   - Pioneer DJM-V10 support, Scarlett2 driver cleanups

  Misc:
   - Cleanups of deprecated PCI functions
   - Removal of unused / dead function codes"

* tag 'sound-6.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound: (364 commits)
  firmware: cs_dsp: Fix OOB memory read access in KUnit test
  ASoC: codecs: add support for ES8375
  ASoC: dt-bindings: Add Everest ES8375 audio CODEC
  ALSA: hda: acpi: Make driver's match data const static
  ALSA: hda: acpi: Use SYSTEM_SLEEP_PM_OPS()
  ALSA: atmel: Replace deprecated strcpy() with strscpy()
  ALSA: core: fix up bus match const issues.
  ASoC: wm_adsp: Make cirrus_dir const
  ASoC: tegra: Tegra264 support in isomgr_bw
  ASoC: tegra: AHUB: Add Tegra264 support
  ASoC: tegra: ADX: Add Tegra264 support
  ASoC: tegra: AMX: Add Tegra264 support
  ASoC: tegra: I2S: Add Tegra264 support
  ASoC: tegra: Update PLL rate for Tegra264
  ASoC: tegra: ASRC: Update ARAM address
  ASoC: tegra: ADMAIF: Add Tegra264 support
  ASoC: tegra: CIF: Add Tegra264 support
  dt-bindings: ASoC: Document Tegra264 APE support
  dt-bindings: ASoC: admaif: Add missing properties
  ASoC: dt-bindings: audio-graph-card2: reference audio-graph routing property
  ...
2025-05-27  Merge tag 'ratelimit.2025.05.25a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu  (Linus Torvalds)
Pull rate-limit updates from Paul McKenney:
 "lib/ratelimit: Reduce false-positive and silent misses:
   - Reduce open-coded use of ratelimit_state structure fields.
   - Convert the ->missed field to atomic_t.
   - Count misses that are due to lock contention.
   - Eliminate jiffies=0 special case.
   - Reduce ___ratelimit() false-positive rate limiting (Petr Mladek).
   - Allow zero ->burst to hard-disable rate limiting.
   - Optimize away atomic operations when a miss is guaranteed.
   - Warn if ->interval or ->burst are negative (Petr Mladek).
   - Simplify the resulting code.

  A smoke test and stress test have been created, but they are not yet ready for mainline. With luck, we will offer them for the v6.17 merge window"

* tag 'ratelimit.2025.05.25a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu:
  ratelimit: Drop redundant accesses to burst
  ratelimit: Use nolock_ret restructuring to collapse common case code
  ratelimit: Use nolock_ret label to collapse lock-failure code
  ratelimit: Use nolock_ret label to save a couple of lines of code
  ratelimit: Simplify common-case exit path
  ratelimit: Warn if ->interval or ->burst are negative
  ratelimit: Avoid atomic decrement under lock if already rate-limited
  ratelimit: Avoid atomic decrement if already rate-limited
  ratelimit: Don't flush misses counter if RATELIMIT_MSG_ON_RELEASE
  ratelimit: Force re-initialization when rate-limiting re-enabled
  ratelimit: Allow zero ->burst to disable ratelimiting
  ratelimit: Reduce ___ratelimit() false-positive rate limiting
  ratelimit: Avoid jiffies=0 special case
  ratelimit: Count misses due to lock contention
  ratelimit: Convert the ->missed field to atomic_t
  drm/amd/pm: Avoid open-coded use of ratelimit_state structure's internals
  drm/i915: Avoid open-coded use of ratelimit_state structure's ->missed field
  random: Avoid open-coded use of ratelimit_state structure's ->missed field
  ratelimit: Create functions to handle ratelimit_state internals
2025-05-27  Merge tag 'x86_cache_for_v6.16_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull x86 resource control updates from Borislav Petkov:
 "Carve out the resctrl filesystem-related code into fs/resctrl/ so that multiple architectures can share the fs API for manipulating their respective hw resource control implementation. This is the second step in the work towards sharing the resctrl filesystem interface, the next one being plugging ARM's MPAM into the aforementioned fs API"

* tag 'x86_cache_for_v6.16_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (25 commits)
  MAINTAINERS: Add reviewers for fs/resctrl
  x86,fs/resctrl: Move the resctrl filesystem code to live in /fs/resctrl
  x86/resctrl: Always initialise rid field in rdt_resources_all[]
  x86/resctrl: Relax some asm #includes
  x86/resctrl: Prefer alloc(sizeof(*foo)) idiom in rdt_init_fs_context()
  x86/resctrl: Squelch whitespace anomalies in resctrl core code
  x86/resctrl: Move pseudo lock prototypes to include/linux/resctrl.h
  x86/resctrl: Fix types in resctrl_arch_mon_ctx_{alloc,free}() stubs
  x86/resctrl: Move enum resctrl_event_id to resctrl.h
  x86/resctrl: Move the filesystem bits to headers visible to fs/resctrl
  fs/resctrl: Add boiler plate for external resctrl code
  x86/resctrl: Add 'resctrl' to the title of the resctrl documentation
  x86/resctrl: Split trace.h
  x86/resctrl: Expand the width of domid by replacing mon_data_bits
  x86/resctrl: Add end-marker to the resctrl_event_id enum
  x86/resctrl: Move is_mba_sc() out of core.c
  x86/resctrl: Drop __init/__exit on assorted symbols
  x86/resctrl: Resctrl_exit() teardown resctrl but leave the mount point
  x86/resctrl: Check all domains are offline in resctrl_exit()
  x86/resctrl: Rename resctrl_sched_in() to begin with "resctrl_arch_"
  ...
2025-05-26  Merge tag 'linux_kselftest-kunit-6.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest  (Linus Torvalds)
Pull kunit updates from Shuah Khan:
 - Enable qemu_config for riscv32, sparc 64-bit, PowerPC 32-bit BE and 64-bit LE
 - Enable CONFIG_SPARC32 to clearly differentiate between sparc 32-bit and 64-bit configurations
 - Enable CONFIG_CPU_BIG_ENDIAN to clearly differentiate between powerpc LE and BE configurations
 - Add feature to list available architectures to kunit tool
 - Fixes to bugs and changes to documentation

* tag 'linux_kselftest-kunit-6.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest:
  kunit: Fix wrong parameter to kunit_deactivate_static_stub()
  kunit: tool: add test counts to JSON output
  Documentation: kunit: improve example on testing static functions
  kunit: executor: Remove const from kunit_filter_suites() allocation type
  kunit: qemu_configs: Disable faulting tests on 32-bit SPARC
  kunit: qemu_configs: Add 64-bit SPARC configuration
  kunit: qemu_configs: sparc: Explicitly enable CONFIG_SPARC32=y
  kunit: qemu_configs: Add PowerPC 32-bit BE and 64-bit LE
  kunit: qemu_configs: powerpc: Explicitly enable CONFIG_CPU_BIG_ENDIAN=y
  kunit: tool: Implement listing of available architectures
  kunit: qemu_configs: Add riscv32 config
  kunit: configs: Enable CONFIG_INIT_STACK_ALL_PATTERN in all_tests
2025-05-26  Merge tag 'v6.16-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds)
Pull crypto updates from Herbert Xu:
 "API:
   - Fix memcpy_sglist to handle partially overlapping SG lists
   - Use memcpy_sglist to replace null skcipher
   - Rename CRYPTO_TESTS to CRYPTO_BENCHMARK
   - Flip CRYPTO_MANAGER_DISABLE_TEST into CRYPTO_SELFTESTS
   - Hide CRYPTO_MANAGER
   - Add delayed freeing of driver crypto_alg structures

  Compression:
   - Allocate large buffers on first use instead of initialisation in scomp
   - Drop destination linearisation buffer in scomp
   - Move scomp stream allocation into acomp
   - Add acomp scatter-gather walker
   - Remove request chaining
   - Add optional async request allocation

  Hashing:
   - Remove request chaining
   - Add optional async request allocation
   - Move partial block handling into API
   - Add ahash support to hmac
   - Fix shash documentation to disallow usage in hard IRQs

  Algorithms:
   - Remove unnecessary SIMD fallback code on x86 and arm/arm64
   - Drop avx10_256 xts(aes)/ctr(aes) on x86
   - Improve avx-512 optimisations for xts(aes)
   - Move chacha arch implementations into lib/crypto
   - Move poly1305 into lib/crypto and drop unused Crypto API algorithm
   - Disable powerpc/poly1305 as it has no SIMD fallback
   - Move sha256 arch implementations into lib/crypto
   - Convert deflate to acomp
   - Set block size correctly in cbcmac

  Drivers:
   - Do not use sg_dma_len before mapping in sun8i-ss
   - Fix warm-reboot failure by making shutdown do more work in qat
   - Add locking in zynqmp-sha
   - Remove cavium/zip
   - Add support for PCI device 0x17D8 to ccp
   - Add qat_6xxx support in qat
   - Add support for RK3576 in rockchip-rng
   - Add support for i.MX8QM in caam

  Others:
   - Fix irq_fpu_usable/kernel_fpu_begin inconsistency during CPU bring-up
   - Add new SEV/SNP platform shutdown API in ccp"

* tag 'v6.16-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (382 commits)
  x86/fpu: Fix irq_fpu_usable() to return false during CPU onlining
  crypto: qat - add missing header inclusion
  crypto: api - Redo lookup on EEXIST
  Revert "crypto: testmgr - Add hash export format testing"
  crypto: marvell/cesa - Do not chain submitted requests
  crypto: powerpc/poly1305 - add depends on BROKEN for now
  Revert "crypto: powerpc/poly1305 - Add SIMD fallback"
  crypto: ccp - Add missing tee info reg for teev2
  crypto: ccp - Add missing bootloader info reg for pspv5
  crypto: sun8i-ce - move fallback ahash_request to the end of the struct
  crypto: octeontx2 - Use dynamic allocated memory region for lmtst
  crypto: octeontx2 - Initialize cptlfs device info once
  crypto: xts - Only add ecb if it is not already there
  crypto: lrw - Only add ecb if it is not already there
  crypto: testmgr - Add hash export format testing
  crypto: testmgr - Use ahash for generic tfm
  crypto: hmac - Add ahash support
  crypto: testmgr - Ignore EEXIST on shash allocation
  crypto: algapi - Add driver template support to crypto_inst_setname
  crypto: shash - Set reqsize in shash_alg
  ...
2025-05-26  Merge tag 'crc-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux  (Linus Torvalds)
Pull CRC updates from Eric Biggers:
 "Cleanups for the kernel's CRC (cyclic redundancy check) code:
   - Use __ro_after_init where appropriate
   - Remove unnecessary static_key on s390
   - Rename some source code files
   - Rename the crc32 and crc32c crypto API modules
   - Use subsys_initcall instead of arch_initcall
   - Restore maintainers for crc_kunit.c
   - Fold crc16_byte() into crc16.c
   - Add some SPDX license identifiers"

* tag 'crc-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux:
  lib/crc32: add SPDX license identifier
  lib/crc16: unexport crc16_table and crc16_byte()
  w1: ds2406: use crc16() instead of crc16_byte() loop
  MAINTAINERS: add crc_kunit.c back to CRC LIBRARY
  lib/crc: make arch-optimized code use subsys_initcall
  crypto: crc32 - remove "generic" from file and module names
  x86/crc: drop "glue" from filenames
  sparc/crc: drop "glue" from filenames
  s390/crc: drop "glue" from filenames
  powerpc/crc: rename crc32-vpmsum_core.S to crc-vpmsum-template.S
  powerpc/crc: drop "glue" from filenames
  arm64/crc: drop "glue" from filenames
  arm/crc: drop "glue" from filenames
  s390/crc32: Remove no-op module init and exit functions
  s390/crc32: Remove have_vxrs static key
  lib/crc: make the CPU feature static keys __ro_after_init
2025-05-25  alloc_tag: allocate percpu counters for module tags dynamically  (Suren Baghdasaryan)
When a module gets unloaded, it checks whether any of its tags are still in use and, if so, the memory containing the module's allocation tags is kept alive until all tags are unused. However, the percpu counters referenced by the tags are freed by free_module(). This leads to a use-after-free if memory allocated by the module is accessed after the module has been unloaded. To fix this, allocate percpu counters for module allocation tags dynamically and keep them alive for tags which are still in use after module unloading. This also removes the requirement for a larger PERCPU_MODULE_RESERVE when memory allocation profiling is enabled, because percpu memory for counters no longer needs to be reserved. Link: https://lkml.kernel.org/r/20250517000739.5930-1-surenb@google.com Fixes: 0db6f8d7820a ("alloc_tag: load module tags into separate contiguous memory") Signed-off-by: Suren Baghdasaryan <surenb@google.com> Reported-by: David Wang <00107082@163.com> Closes: https://lore.kernel.org/all/20250516131246.6244-1-00107082@163.com/ Tested-by: David Wang <00107082@163.com> Cc: Christoph Lameter (Ampere) <cl@gentwo.org> Cc: Dennis Zhou <dennis@kernel.org> Cc: Kent Overstreet <kent.overstreet@linux.dev> Cc: Pasha Tatashin <pasha.tatashin@soleen.com> Cc: Tejun Heo <tj@kernel.org> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
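A minimal sketch of the dynamic-allocation idea, assuming hypothetical helper names; only the use of alloc_percpu()/free_percpu() for per-tag counters is taken from the description above, the exact mainline patch differs:

    /* Hypothetical helpers: give each module tag its own dynamically
     * allocated percpu counters so they can outlive free_module()
     * while the tag is still referenced.
     */
    static int alloc_tag_counters_alloc(struct alloc_tag *tag)
    {
            tag->counters = alloc_percpu(struct alloc_tag_counters);
            return tag->counters ? 0 : -ENOMEM;
    }

    static void alloc_tag_counters_free(struct alloc_tag *tag)
    {
            free_percpu(tag->counters);
            tag->counters = NULL;
    }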
2025-05-21  kunit: Fix wrong parameter to kunit_deactivate_static_stub()  (Tzung-Bi Shih)
kunit_deactivate_static_stub() accepts real_fn_addr rather than replacement_addr; as the code stood, it always passed NULL to kunit_deactivate_static_stub(). Fix it. Link: https://lore.kernel.org/r/20250520082050.2254875-1-tzungbi@kernel.org Fixes: e047c5eaa763 ("kunit: Expose 'static stub' API to redirect functions") Signed-off-by: Tzung-Bi Shih <tzungbi@kernel.org> Reviewed-by: David Gow <davidgow@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
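A short sketch of how the stub API pairs up, with a hypothetical send_cmd() that is assumed to contain KUNIT_STATIC_STUB_REDIRECT(); the point is that deactivation names the real function, not the replacement:

    static int fake_send_cmd(struct device *dev, int cmd)
    {
            return 0;       /* pretend the hardware accepted the command */
    }

    static void send_cmd_stub_example(struct kunit *test)
    {
            struct device *dev = NULL;      /* unused by the fake */

            /* Redirect the real send_cmd() to the fake for this test. */
            kunit_activate_static_stub(test, send_cmd, fake_send_cmd);
            KUNIT_EXPECT_EQ(test, send_cmd(dev, 42), 0);

            /* Correct usage: pass the real function, not fake_send_cmd. */
            kunit_deactivate_static_stub(test, send_cmd);
    }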
2025-05-15  find: Add find_first_andnot_bit()  (Yury Norov [NVIDIA])
The function helps to implement cpumask_andnot() APIs. Signed-off-by: Yury Norov [NVIDIA] <yury.norov@gmail.com> Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: James Morse <james.morse@arm.com> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Reviewed-by: Fenghua Yu <fenghuay@nvidia.com> Tested-by: James Morse <james.morse@arm.com> Tested-by: Tony Luck <tony.luck@intel.com> Tested-by: Fenghua Yu <fenghuay@nvidia.com> Link: https://lore.kernel.org/20250515165855.31452-3-james.morse@arm.com
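As a sketch of the intended semantics (not the exact mainline wrapper), the helper returns the first bit set in the first bitmap and clear in the second, which is what a cpumask "first CPU in A but not in B" operation needs:

    /* Index of the first CPU present in @a but not in @b, or >= nr_cpu_ids
     * if there is none (illustrative wrapper, name made up).
     */
    static unsigned int first_cpu_andnot_example(const struct cpumask *a,
                                                 const struct cpumask *b)
    {
            return find_first_andnot_bit(cpumask_bits(a), cpumask_bits(b),
                                         nr_cpu_ids);
    }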
2025-05-14  lib/crc32: add SPDX license identifier  (Eric Biggers)
lib/crc32.c and include/linux/crc32.h got missed by the bulk SPDX conversion because of the nonstandard explanation of the license. However, crc32.c clearly states that it's licensed under the GNU General Public License, Version 2. And the comment in crc32.h clearly indicates that it's meant to have the same license as crc32.c. Therefore, apply SPDX-License-Identifier: GPL-2.0-only to both files. Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Link: https://lore.kernel.org/r/20250514052409.194822-1-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com>
2025-05-13  lib/crc16: unexport crc16_table and crc16_byte()  (Eric Biggers)
Now that neither crc16_table nor crc16_byte() is used outside lib/crc16.c, fold them into lib/crc16.c. Acked-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20250513022115.39109-3-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com>
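The w1/ds2406 change mentioned in the merge shortlog above is the usual shape of this cleanup; a hedged before/after sketch with invented function and buffer names:

    #include <linux/crc16.h>

    /* Before: open-coded byte loop over the (formerly exported) helper. */
    static u16 checksum_loop(const u8 *buf, size_t len)
    {
            u16 crc = 0;
            size_t i;

            for (i = 0; i < len; i++)
                    crc = crc16_byte(crc, buf[i]);
            return crc;
    }

    /* After: one library call, so crc16_byte() can stay inside lib/crc16.c. */
    static u16 checksum_call(const u8 *buf, size_t len)
    {
            return crc16(0, buf, len);
    }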
2025-05-12  crypto: testmgr - make it easier to enable the full set of tests  (Eric Biggers)
Currently the full set of crypto self-tests requires CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y. This is problematic in two ways. First, developers regularly overlook this option. Second, the description of the tests as "extra" sometimes gives the impression that it is not required that all algorithms pass these tests. Given that the main use case for the crypto self-tests is for developers, make enabling CONFIG_CRYPTO_SELFTESTS=y just enable the full set of crypto self-tests by default. The slow tests can still be disabled by adding the command-line parameter cryptomgr.noextratests=1, soon to be renamed to cryptomgr.noslowtests=1. The only known use case for doing this is for people trying to use the crypto self-tests to satisfy the FIPS 140-3 pre-operational self-testing requirements when the kernel is being validated as a FIPS 140-3 cryptographic module. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-05-12  crypto: testmgr - replace CRYPTO_MANAGER_DISABLE_TESTS with CRYPTO_SELFTESTS  (Eric Biggers)
The negative-sense of CRYPTO_MANAGER_DISABLE_TESTS is a longstanding mistake that regularly causes confusion. Especially bad is that you can have CRYPTO=n && CRYPTO_MANAGER_DISABLE_TESTS=n, which is ambiguous. Replace CRYPTO_MANAGER_DISABLE_TESTS with CRYPTO_SELFTESTS which has the expected behavior. The tests continue to be disabled by default. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-05-12  crypto: lib/chacha - add array bounds to function prototypes  (Eric Biggers)
Add explicit array bounds to the function prototypes for the parameters that didn't already get handled by the conversion to use chacha_state: - chacha_block_*(): Change 'u8 *out' or 'u8 *stream' to u8 out[CHACHA_BLOCK_SIZE]. - hchacha_block_*(): Change 'u32 *out' or 'u32 *stream' to u32 out[HCHACHA_OUT_WORDS]. - chacha_init(): Change 'const u32 *key' to 'const u32 key[CHACHA_KEY_WORDS]'. Change 'const u8 *iv' to 'const u8 iv[CHACHA_IV_SIZE]'. No functional changes. This just makes it clear when fixed-size arrays are expected. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
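Taken together with the struct chacha_state conversion below, the prototypes described above end up shaped roughly like this (a sketch; the authoritative declarations live in include/crypto/chacha.h):

    void chacha_init(struct chacha_state *state,
                     const u32 key[CHACHA_KEY_WORDS],
                     const u8 iv[CHACHA_IV_SIZE]);
    void chacha_block_generic(struct chacha_state *state,
                              u8 out[CHACHA_BLOCK_SIZE], int nrounds);
    void hchacha_block_generic(const struct chacha_state *state,
                               u32 out[HCHACHA_OUT_WORDS], int nrounds);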
2025-05-12  crypto: lib/chacha - add strongly-typed state zeroization  (Eric Biggers)
Now that the ChaCha state matrix is strongly-typed, add a helper function chacha_zeroize_state() which zeroizes it. Then convert all applicable callers to use it instead of direct memzero_explicit. No functional changes. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
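Given the description above, the helper is most likely a thin typed wrapper around memzero_explicit(); a sketch:

    static inline void chacha_zeroize_state(struct chacha_state *state)
    {
            memzero_explicit(state, sizeof(*state));
    }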
2025-05-12  crypto: lib/chacha - use struct assignment to copy state  (Eric Biggers)
Use struct assignment instead of memcpy() in lib/crypto/chacha.c where appropriate. No functional change. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-05-12  crypto: lib/chacha - strongly type the ChaCha state  (Eric Biggers)
The ChaCha state matrix is 16 32-bit words. Currently it is represented in the code as a raw u32 array, or even just a pointer to u32. This weak typing is error-prone. Instead, introduce struct chacha_state: struct chacha_state { u32 x[16]; }; Convert all ChaCha and HChaCha functions to use struct chacha_state. No functional changes. Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Kent Overstreet <kent.overstreet@linux.dev> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-05-08  ratelimit: Drop redundant accesses to burst  (Paul E. McKenney)
Now that there is the "burst <= 0" fastpath, for all later code, burst must be strictly greater than zero. Therefore, drop the redundant checks of this local variable. Link: https://lore.kernel.org/all/fbe93a52-365e-47fe-93a4-44a44547d601@paulmck-laptop/ Link: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/ Signed-off-by: Paul E. McKenney <paulmck@kernel.org> Reviewed-by: Petr Mladek <pmladek@suse.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Kuniyuki Iwashima <kuniyu@amazon.com> Cc: Mateusz Guzik <mjguzik@gmail.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: John Ogness <john.ogness@linutronix.de> Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
2025-05-08  ratelimit: Use nolock_ret restructuring to collapse common case code  (Paul E. McKenney)
Now that unlock_ret releases the lock, then falls into nolock_ret, which handles ->missed based on the value of ret, the common-case lock-held code can be collapsed into a single "if" statement with a single-statement "then" clause. Yes, we could go further and just assign the "if" condition to ret, but in the immortal words of MSDOS, "Are you sure?". Link: https://lore.kernel.org/all/fbe93a52-365e-47fe-93a4-44a44547d601@paulmck-laptop/ Link: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/ Signed-off-by: Paul E. McKenney <paulmck@kernel.org> Reviewed-by: Petr Mladek <pmladek@suse.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Kuniyuki Iwashima <kuniyu@amazon.com> Cc: Mateusz Guzik <mjguzik@gmail.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: John Ogness <john.ogness@linutronix.de> Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
2025-05-08  ratelimit: Use nolock_ret label to collapse lock-failure code  (Paul E. McKenney)
Now that we have a nolock_ret label that handles ->missed correctly based on the value of ret, we can eliminate a local variable and collapse several "if" statements on the lock-acquisition-failure code path. Link: https://lore.kernel.org/all/fbe93a52-365e-47fe-93a4-44a44547d601@paulmck-laptop/ Link: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/ Signed-off-by: Paul E. McKenney <paulmck@kernel.org> Reviewed-by: Petr Mladek <pmladek@suse.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Kuniyuki Iwashima <kuniyu@amazon.com> Cc: Mateusz Guzik <mjguzik@gmail.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: John Ogness <john.ogness@linutronix.de> Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
2025-05-08  ratelimit: Use nolock_ret label to save a couple of lines of code  (Paul E. McKenney)
Create a nolock_ret label in order to start consolidating the unlocked return paths that conditionally invoke ratelimit_state_inc_miss(). Link: https://lore.kernel.org/all/fbe93a52-365e-47fe-93a4-44a44547d601@paulmck-laptop/ Link: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/ Signed-off-by: Paul E. McKenney <paulmck@kernel.org> Reviewed-by: Petr Mladek <pmladek@suse.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Kuniyuki Iwashima <kuniyu@amazon.com> Cc: Mateusz Guzik <mjguzik@gmail.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: John Ogness <john.ogness@linutronix.de> Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
2025-05-08  ratelimit: Simplify common-case exit path  (Paul E. McKenney)
By making "ret" always be initialized, and moving the final call to ratelimit_state_inc_miss() out from under the lock, we save a goto and a couple lines of code. This also saves a couple of lines of code from the unconditional enable/disable slowpath. Link: https://lore.kernel.org/all/fbe93a52-365e-47fe-93a4-44a44547d601@paulmck-laptop/ Link: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/ Signed-off-by: Paul E. McKenney <paulmck@kernel.org> Reviewed-by: Petr Mladek <pmladek@suse.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Kuniyuki Iwashima <kuniyu@amazon.com> Cc: Mateusz Guzik <mjguzik@gmail.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: John Ogness <john.ogness@linutronix.de> Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
2025-05-08  ratelimit: Warn if ->interval or ->burst are negative  (Petr Mladek)
Currently, ___ratelimit() treats a negative ->interval or ->burst as if it was zero, but this is an accident of the current implementation. Therefore, splat in this case, which might have the benefit of detecting use of uninitialized ratelimit_state structures on the one hand or easing addition of new features on the other. Link: https://lore.kernel.org/all/fbe93a52-365e-47fe-93a4-44a44547d601@paulmck-laptop/ Link: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/ Signed-off-by: Petr Mladek <pmladek@suse.com> Signed-off-by: Paul E. McKenney <paulmck@kernel.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Kuniyuki Iwashima <kuniyu@amazon.com> Cc: Mateusz Guzik <mjguzik@gmail.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: John Ogness <john.ogness@linutronix.de> Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
2025-05-08  ratelimit: Avoid atomic decrement under lock if already rate-limited  (Paul E. McKenney)
Currently, if the lock is acquired, the code unconditionally does an atomic decrement on ->rs_n_left, even if that atomic operation is guaranteed to return a limit-rate verdict. A limit-rate verdict will in fact be the common case when something is spewing into a rate limit. This unconditional atomic operation incurs needless overhead and also raises the spectre of counter wrap. Therefore, do the atomic decrement only if there is some chance that rates won't be limited. Link: https://lore.kernel.org/all/fbe93a52-365e-47fe-93a4-44a44547d601@paulmck-laptop/ Link: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/ Signed-off-by: Paul E. McKenney <paulmck@kernel.org> Reviewed-by: Petr Mladek <pmladek@suse.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Kuniyuki Iwashima <kuniyu@amazon.com> Cc: Mateusz Guzik <mjguzik@gmail.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: John Ogness <john.ogness@linutronix.de> Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
2025-05-08  ratelimit: Avoid atomic decrement if already rate-limited  (Paul E. McKenney)
Currently, if the lock could not be acquired, the code unconditionally does an atomic decrement on ->rs_n_left, even if that atomic operation is guaranteed to return a limit-rate verdict. This incurs needless overhead and also raises the spectre of counter wrap. Therefore, do the atomic decrement only if there is some chance that rates won't be limited. Link: https://lore.kernel.org/all/fbe93a52-365e-47fe-93a4-44a44547d601@paulmck-laptop/ Link: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/ Signed-off-by: Paul E. McKenney <paulmck@kernel.org> Reviewed-by: Petr Mladek <pmladek@suse.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Kuniyuki Iwashima <kuniyu@amazon.com> Cc: Mateusz Guzik <mjguzik@gmail.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: John Ogness <john.ogness@linutronix.de> Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
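One way to express the pattern described in this commit and the one above it, hedged since the real field handling in the series may differ; the idea is to skip the atomic read-modify-write whenever the counter already guarantees a rate-limited verdict:

    /* Sketch only: ->rs_n_left is treated as the remaining burst budget. */
    static int ratelimit_try_take_token(struct ratelimit_state *rs)
    {
            if (atomic_read(&rs->rs_n_left) <= 0)
                    return 0;       /* already rate-limited, no atomic RMW */
            if (atomic_dec_if_positive(&rs->rs_n_left) < 0)
                    return 0;       /* lost the race, rate-limited after all */
            return 1;               /* token taken, not rate-limited */
    }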
2025-05-08  ratelimit: Don't flush misses counter if RATELIMIT_MSG_ON_RELEASE  (Paul E. McKenney)
Restore the previous semantics where the misses counter is unchanged if the RATELIMIT_MSG_ON_RELEASE flag is set. Link: https://lore.kernel.org/all/fbe93a52-365e-47fe-93a4-44a44547d601@paulmck-laptop/ Link: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/ Signed-off-by: Paul E. McKenney <paulmck@kernel.org> Reviewed-by: Petr Mladek <pmladek@suse.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Kuniyuki Iwashima <kuniyu@amazon.com> Cc: Mateusz Guzik <mjguzik@gmail.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: John Ogness <john.ogness@linutronix.de> Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
2025-05-08  ratelimit: Force re-initialization when rate-limiting re-enabled  (Paul E. McKenney)
Currently, if rate limiting is disabled, ___ratelimit() does an immediate early return with no state changes. This can result in false-positive drops when re-enabling rate limiting. Therefore, mark the ratelimit_state structure "uninitialized" when rate limiting is disabled. [ paulmck: Apply Petr Mladek feedback. ] Link: https://lore.kernel.org/all/fbe93a52-365e-47fe-93a4-44a44547d601@paulmck-laptop/ Link: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/ Signed-off-by: Paul E. McKenney <paulmck@kernel.org> Reviewed-by: Petr Mladek <pmladek@suse.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Kuniyuki Iwashima <kuniyu@amazon.com> Cc: Mateusz Guzik <mjguzik@gmail.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: John Ogness <john.ogness@linutronix.de> Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
2025-05-08  ratelimit: Allow zero ->burst to disable ratelimiting  (Paul E. McKenney)
If ->interval is zero, then rate-limiting will be disabled. Alternatively, if interval is greater than zero and ->burst is zero, then rate-limiting will be applied unconditionally. The point of this distinction is to handle current users that pass zero-initialized ratelimit_state structures to ___ratelimit(), and in such cases the ->lock field will be uninitialized. Acquiring ->lock in this case is clearly not a strategy to win. Therefore, make this classification lockless. Note that although negative ->interval and ->burst happen to be treated as if they were zero, this is an accident of the current implementation. The semantics of negative values for these fields are subject to change without notice. Especially given that Bert Karwatzki determined that no current calls to ___ratelimit() ever have negative values for these fields. This commit replaces earlier buggy versions. Link: https://lore.kernel.org/all/fbe93a52-365e-47fe-93a4-44a44547d601@paulmck-laptop/ Link: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/ Reported-by: Bert Karwatzki <spasswolf@web.de> Reported-by: "Aithal, Srikanth" <sraithal@amd.com> Closes: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/ Reported-by: Mark Brown <broonie@kernel.org> Closes: https://lore.kernel.org/all/257c3b91-e30f-48be-9788-d27a4445a416@sirena.org.uk/ Signed-off-by: Paul E. McKenney <paulmck@kernel.org> Tested-by: "Aithal, Srikanth" <sraithal@amd.com> Reviewed-by: Petr Mladek <pmladek@suse.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Kuniyuki Iwashima <kuniyu@amazon.com> Cc: Mateusz Guzik <mjguzik@gmail.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: John Ogness <john.ogness@linutronix.de> Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
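A hedged sketch of the lockless classification described above (the real ___ratelimit() also handles the WARN and re-initialization paths; function name is made up):

    static bool ratelimit_disabled_fastpath(struct ratelimit_state *rs, int *ret)
    {
            int interval = READ_ONCE(rs->interval);
            int burst = READ_ONCE(rs->burst);

            if (interval > 0 && burst > 0)
                    return false;   /* normal case: take the locked slow path */

            /*
             * interval == 0 disables rate limiting (callers pass through);
             * interval > 0 with burst == 0 rate-limits unconditionally.
             * ->lock is never touched, so zero-initialized structures are safe.
             */
            *ret = interval <= 0;
            return true;
    }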
2025-05-08  ratelimit: Reduce ___ratelimit() false-positive rate limiting  (Petr Mladek)
Retain the locked design, but check rate-limiting even when the lock could not be acquired. Link: https://lore.kernel.org/all/Z_VRo63o2UsVoxLG@pathway.suse.cz/ Link: https://lore.kernel.org/all/fbe93a52-365e-47fe-93a4-44a44547d601@paulmck-laptop/ Link: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/ Signed-off-by: Petr Mladek <pmladek@suse.com> Signed-off-by: Paul E. McKenney <paulmck@kernel.org> Cc: Petr Mladek <pmladek@suse.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Kuniyuki Iwashima <kuniyu@amazon.com> Cc: Mateusz Guzik <mjguzik@gmail.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: John Ogness <john.ogness@linutronix.de> Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
2025-05-08  ratelimit: Avoid jiffies=0 special case  (Paul E. McKenney)
The ___ratelimit() function special-cases the jiffies-counter value of zero as "uninitialized". This works well on 64-bit systems, where the jiffies counter is not going to return to zero for more than half a billion years on systems with HZ=1000, but similar 32-bit systems take less than 50 days to wrap the jiffies counter. And although the consequences of wrapping the jiffies counter seem to be limited to minor confusion on the duration of the rate-limiting interval that happens to end at time zero, it is almost no work to avoid this confusion. Therefore, introduce a RATELIMIT_INITIALIZED bit to the ratelimit_state structure's ->flags field so that a ->begin value of zero is no longer special. Link: https://lore.kernel.org/all/fbe93a52-365e-47fe-93a4-44a44547d601@paulmck-laptop/ Link: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/ Signed-off-by: Paul E. McKenney <paulmck@kernel.org> Reviewed-by: Petr Mladek <pmladek@suse.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Kuniyuki Iwashima <kuniyu@amazon.com> Cc: Mateusz Guzik <mjguzik@gmail.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: John Ogness <john.ogness@linutronix.de> Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
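A sketch of how the new flag removes the special case, assuming it is checked under ->lock just as the old ->begin == 0 test was (helper name made up):

    /* Sketch: runs with rs->lock held. */
    static void ratelimit_maybe_init(struct ratelimit_state *rs)
    {
            if (!(rs->flags & RATELIMIT_INITIALIZED)) {
                    rs->begin = jiffies;
                    rs->flags |= RATELIMIT_INITIALIZED;
            }
    }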
2025-05-08  ratelimit: Count misses due to lock contention  (Paul E. McKenney)
The ___ratelimit() function simply returns zero ("do ratelimiting") if the trylock fails, but does not adjust the ->missed field. This means that the resulting dropped printk()s are dropped silently, which could seriously confuse people trying to do console-log-based debugging. Therefore, increment the ->missed field upon trylock failure. Link: https://lore.kernel.org/all/fbe93a52-365e-47fe-93a4-44a44547d601@paulmck-laptop/ Link: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/ Signed-off-by: Paul E. McKenney <paulmck@kernel.org> Reviewed-by: Petr Mladek <pmladek@suse.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Kuniyuki Iwashima <kuniyu@amazon.com> Cc: Mateusz Guzik <mjguzik@gmail.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: John Ogness <john.ogness@linutronix.de> Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
2025-05-08  ratelimit: Convert the ->missed field to atomic_t  (Paul E. McKenney)
The ratelimit_state structure's ->missed field is sometimes incremented locklessly, and it would be good to avoid lost counts. This is also needed to count the number of misses due to trylock failure. Therefore, convert the ratelimit_state structure's ->missed field to atomic_t. Link: https://lore.kernel.org/all/fbe93a52-365e-47fe-93a4-44a44547d601@paulmck-laptop/ Link: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/ Signed-off-by: Paul E. McKenney <paulmck@kernel.org> Reviewed-by: Petr Mladek <pmladek@suse.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Kuniyuki Iwashima <kuniyu@amazon.com> Cc: Mateusz Guzik <mjguzik@gmail.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: John Ogness <john.ogness@linutronix.de> Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
2025-05-08  ratelimit: Create functions to handle ratelimit_state internals  (Paul E. McKenney)
A number of ratelimit use cases do open-coded access to the ratelimit_state structure's ->missed field. This works, but is a bit messy and makes it more annoying to make changes to this field. Therefore, provide a ratelimit_state_inc_miss() function that increments the ->missed field, a ratelimit_state_get_miss() function that reads out the ->missed field, and a ratelimit_state_reset_miss() function that reads out that field, but that also resets its value to zero. These functions will replace client-code open-coded uses of ->missed. In addition, a new ratelimit_state_reset_interval() function encapsulates what was previously open-coded lock acquisition and direct field updates. [ paulmck: Apply kernel test robot feedback. ] Link: https://lore.kernel.org/all/fbe93a52-365e-47fe-93a4-44a44547d601@paulmck-laptop/ Link: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/ Signed-off-by: Paul E. McKenney <paulmck@kernel.org> Reviewed-by: Petr Mladek <pmladek@suse.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Kuniyuki Iwashima <kuniyu@amazon.com> Cc: Mateusz Guzik <mjguzik@gmail.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: John Ogness <john.ogness@linutronix.de> Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
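Once ->missed is atomic_t, the description pins these accessors down fairly precisely; hedged sketches (the in-tree versions live in include/linux/ratelimit.h and may differ in detail, and ratelimit_state_reset_interval() is omitted here):

    static inline void ratelimit_state_inc_miss(struct ratelimit_state *rs)
    {
            atomic_inc(&rs->missed);
    }

    static inline int ratelimit_state_get_miss(struct ratelimit_state *rs)
    {
            return atomic_read(&rs->missed);
    }

    static inline int ratelimit_state_reset_miss(struct ratelimit_state *rs)
    {
            return atomic_xchg(&rs->missed, 0);     /* read and clear */
    }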
2025-05-06  crypto: lib/poly1305 - Build main library on LIB_POLY1305 and split generic code out  (Herbert Xu)
Split the lib poly1305 code just as was done with sha256. Make the main library code conditional on LIB_POLY1305 instead of LIB_POLY1305_GENERIC. Reported-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Fixes: 10a6d72ea355 ("crypto: lib/poly1305 - Use block-only interface") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-05-05  crypto: lib/sha256 - Use generic block helper  (Herbert Xu)
Use the BLOCK_HASH_UPDATE_BLOCKS helper instead of duplicating partial block handling. Also remove the unused lib/sha256 force-generic interface. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-05-05  crypto: lib/sha256 - Add helpers for block-based shash  (Herbert Xu)
Add an internal sha256_finup helper and move the finalisation code from __sha256_final into it. Also add sha256_choose_blocks and CRYPTO_ARCH_HAVE_LIB_SHA256_SIMD so that the Crypto API can use the SIMD block function unconditionally. The Crypto API must not be used in hard IRQs and there is no reason to have a fallback path for hardirqs. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-05-05  crypto: lib/sha256 - improve function prototypes  (Eric Biggers)
Follow best practices by changing the length parameters to size_t and explicitly specifying the length of the output digest arrays. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
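With those changes, the library entry points take roughly the following shape (a sketch; the exact declarations are in include/crypto/sha2.h):

    void sha256_update(struct sha256_state *sctx, const u8 *data, size_t len);
    void sha256_final(struct sha256_state *sctx, u8 out[SHA256_DIGEST_SIZE]);
    void sha256(const u8 *data, size_t len, u8 out[SHA256_DIGEST_SIZE]);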
2025-05-05  crypto: sparc/sha256 - implement library instead of shash  (Eric Biggers)
Instead of providing crypto_shash algorithms for the arch-optimized SHA-256 code, instead implement the SHA-256 library. This is much simpler, it makes the SHA-256 library functions be arch-optimized, and it fixes the longstanding issue where the arch-optimized SHA-256 was disabled by default. SHA-256 still remains available through crypto_shash, but individual architectures no longer need to handle it. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-05-05  crypto: sha256 - support arch-optimized lib and expose through shash  (Eric Biggers)
As has been done for various other algorithms, rework the design of the SHA-256 library to support arch-optimized implementations, and make crypto/sha256.c expose both generic and arch-optimized shash algorithms that wrap the library functions. This allows users of the SHA-256 library functions to take advantage of the arch-optimized code, and this makes it much simpler to integrate SHA-256 for each architecture. Note that sha256_base.h is not used in the new design. It will be removed once all the architecture-specific code has been updated. Move the generic block function into its own module to avoid a circular dependency from libsha256.ko => sha256-$ARCH.ko => libsha256.ko. Signed-off-by: Eric Biggers <ebiggers@google.com> Add export and import functions to maintain existing export format. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-05-05  crypto: lib/poly1305 - Use block-only interface  (Herbert Xu)
Now that every architecture provides a block function, use that to implement the lib/poly1305 and remove the old per-arch code. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-05-05  crypto: lib/poly1305 - Add block-only interface  (Herbert Xu)
Add a block-only interface for poly1305. Implement the generic code first. Also use the generic partial block helper. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-05-05  Merge git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux v6.15-rc5  (Herbert Xu)
Merge mainline to pick up bcachefs poly1305 patch 4bf4b5046de0 ("bcachefs: use library APIs for ChaCha20 and Poly1305"). This is a prerequisite for removing the poly1305 shash algorithm.
2025-05-01  ASoC: codec: twl4030: Convert to GPIO descriptors  (Mark Brown)
Merge series from "Peng Fan (OSS)" <peng.fan@oss.nxp.com>: This is separated from [1]. With an update that sorting the headers in a separate patch. No other changes, so I still keep Linus' R-b for Patch 2. [1] https://lore.kernel.org/all/20250408-asoc-gpio-v1-3-c0db9d3fd6e9@nxp.com/
2025-04-29  kunit: executor: Remove const from kunit_filter_suites() allocation type  (Kees Cook)
In preparation for making the kmalloc family of allocators type aware, we need to make sure that the returned type from the allocation matches the type of the variable being assigned. (Before, the allocator would always return "void *", which can be implicitly cast to any pointer type.) The assigned type is "struct kunit_suite **" but the returned type will be "struct kunit_suite * const *". Since it isn't generally possible to remove the const qualifier, adjust the allocation type to match the assignment. Link: https://lore.kernel.org/r/20250426062433.work.124-kees@kernel.org Signed-off-by: Kees Cook <kees@kernel.org> Reviewed-by: David Gow <davidgow@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
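The general pattern, with made-up variable names rather than the actual kunit_filter_suites() code, is to allocate through the non-const type that the result is assigned to, so a type-aware allocator sees no mismatch:

    /* Illustration only: the allocation's element type ("struct kunit_suite *")
     * now matches the variable the result is assigned to.
     */
    struct kunit_suite **copy;

    copy = kcalloc(max, sizeof(*copy), GFP_KERNEL);
    if (!copy)
            return ERR_PTR(-ENOMEM);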
2025-04-28  crypto: lib/poly1305 - remove INTERNAL symbol and selection of CRYPTO  (Eric Biggers)
Now that the architecture-optimized Poly1305 kconfig symbols are defined regardless of CRYPTO, there is no need for CRYPTO_LIB_POLY1305 to select CRYPTO. So, remove that. This makes the indirection through the CRYPTO_LIB_POLY1305_INTERNAL symbol unnecessary, so get rid of that and just use CRYPTO_LIB_POLY1305 directly. Finally, make the fallback to the generic implementation use a default value instead of a select; this makes it consistent with how the arch-optimized code gets enabled and also with how CRYPTO_LIB_BLAKE2S_GENERIC gets enabled. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-28  crypto: lib/chacha - remove INTERNAL symbol and selection of CRYPTO  (Eric Biggers)
Now that the architecture-optimized ChaCha kconfig symbols are defined regardless of CRYPTO, there is no need for CRYPTO_LIB_CHACHA to select CRYPTO. So, remove that. This makes the indirection through the CRYPTO_LIB_CHACHA_INTERNAL symbol unnecessary, so get rid of that and just use CRYPTO_LIB_CHACHA directly. Finally, make the fallback to the generic implementation use a default value instead of a select; this makes it consistent with how the arch-optimized code gets enabled and also with how CRYPTO_LIB_BLAKE2S_GENERIC gets enabled. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-28  crypto: x86 - move library functions to arch/x86/lib/crypto/  (Eric Biggers)
Continue disentangling the crypto library functions from the generic crypto infrastructure by moving the x86 BLAKE2s, ChaCha, and Poly1305 library functions into a new directory arch/x86/lib/crypto/ that does not depend on CRYPTO. This mirrors the distinction between crypto/ and lib/crypto/. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-28  crypto: s390 - move library functions to arch/s390/lib/crypto/  (Eric Biggers)
Continue disentangling the crypto library functions from the generic crypto infrastructure by moving the s390 ChaCha library functions into a new directory arch/s390/lib/crypto/ that does not depend on CRYPTO. This mirrors the distinction between crypto/ and lib/crypto/. Acked-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-28  crypto: riscv - move library functions to arch/riscv/lib/crypto/  (Eric Biggers)
Continue disentangling the crypto library functions from the generic crypto infrastructure by moving the riscv ChaCha library functions into a new directory arch/riscv/lib/crypto/ that does not depend on CRYPTO. This mirrors the distinction between crypto/ and lib/crypto/. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-28  crypto: powerpc - move library functions to arch/powerpc/lib/crypto/  (Eric Biggers)
Continue disentangling the crypto library functions from the generic crypto infrastructure by moving the powerpc ChaCha and Poly1305 library functions into a new directory arch/powerpc/lib/crypto/ that does not depend on CRYPTO. This mirrors the distinction between crypto/ and lib/crypto/. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>