path: root/include
Age  Commit message  Author
2025-03-03  platform: arm64: add Huawei Matebook E Go EC driver  (Pengyu Luo)
There are three variants, of which Huawei released the first two simultaneously:
  - Huawei Matebook E Go LTE (sc8180x), codename seems to be gaokun2.
  - Huawei Matebook E Go (sc8280xp@3.0GHz), codename must be gaokun3 (see [1]).
  - Huawei Matebook E Go 2023 (sc8280xp@2.69GHz), codename should also be gaokun3.
Add support for the latter two variants for now. This driver should also work for the sc8180x variant according to its ACPI tables, but I don't have that device to test with yet. Unlike other Qualcomm Snapdragon sc8280xp based machines, which use a system called PMIC GLink, the Huawei Matebook E Go uses an embedded controller. This embedded controller can be used to perform a set of various functions, including, but not limited to:
  - Battery and charger monitoring;
  - Charge control and smart charge;
  - Fn_lock settings;
  - Tablet lid status;
  - Temperature sensors;
  - USB Type-C notifications (port orientation, DP alt mode HPD);
  - USB Type-C PD (according to observation, up to 48 W).
Add a driver for the EC which creates UCSI and power supply devices. This driver is inspired by the following drivers:
  drivers/platform/arm64/acer-aspire1-ec.c
  drivers/platform/arm64/lenovo-yoga-c630.c
  drivers/platform/x86/huawei-wmi.c
Thanks also to the reviewers, whose feedback improved this patch a lot.
[1] https://bugzilla.kernel.org/show_bug.cgi?id=219645
Signed-off-by: Pengyu Luo <mitltlatltl@gmail.com> Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com> Link: https://lore.kernel.org/r/20250214180656.28599-3-mitltlatltl@gmail.com Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
2025-03-03  gpiolib: deprecate gpio_chip::set and gpio_chip::set_multiple  (Bartosz Golaszewski)
We now have setter callbacks that allow us to indicate success or failure using the integer return value. Deprecate the older callbacks so that no new code is tempted to use them. Link: https://lore.kernel.org/r/20250227083748.22400-1-brgl@bgdev.pl Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
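A minimal driver-side sketch of what the conversion looks like. The struct foo_gpio, FOO_OUT_REG and the regmap plumbing are made up for illustration, and the names of the int-returning callbacks (set_rv()/set_multiple_rv()) are an assumption based on the gpiolib conversion series, not something stated in this commit:

    #include <linux/bits.h>
    #include <linux/gpio/driver.h>
    #include <linux/regmap.h>

    struct foo_gpio {
            struct gpio_chip gc;
            struct regmap *map;
    };

    #define FOO_OUT_REG 0x04

    /* New-style setter: the int return lets I2C/SPI expanders report bus errors. */
    static int foo_gpio_set(struct gpio_chip *gc, unsigned int offset, int value)
    {
            struct foo_gpio *priv = gpiochip_get_data(gc);

            return regmap_update_bits(priv->map, FOO_OUT_REG,
                                      BIT(offset), value ? BIT(offset) : 0);
    }

    /* At registration time: gc->set_rv = foo_gpio_set; instead of the deprecated gc->set. */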
2025-03-03  Merge tag 'v6.14-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux into gpio/for-next  (Bartosz Golaszewski)
Linux 6.14-rc5
2025-03-02  crypto/krb5: Implement the Camellia enctypes from rfc6803  (David Howells)
Implement the camellia128-cts-cmac and camellia256-cts-cmac enctypes from rfc6803. Note that the test vectors in rfc6803 for encryption are incomplete, lacking the key usage number needed to derive Ke and Ki, and there are errata for this: https://www.rfc-editor.org/errata_search.php?rfc=6803 Signed-off-by: David Howells <dhowells@redhat.com> cc: Herbert Xu <herbert@gondor.apana.org.au> cc: "David S. Miller" <davem@davemloft.net> cc: Chuck Lever <chuck.lever@oracle.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: Eric Dumazet <edumazet@google.com> cc: Jakub Kicinski <kuba@kernel.org> cc: Paolo Abeni <pabeni@redhat.com> cc: Simon Horman <horms@kernel.org> cc: linux-afs@lists.infradead.org cc: linux-nfs@vger.kernel.org cc: linux-crypto@vger.kernel.org cc: netdev@vger.kernel.org
2025-03-02  crypto/krb5: Implement the AES enctypes from rfc8009  (David Howells)
Implement the aes128-cts-hmac-sha256-128 and aes256-cts-hmac-sha384-192 enctypes from rfc8009, overriding the rfc3961 kerberos 5 simplified crypto scheme. Signed-off-by: David Howells <dhowells@redhat.com> cc: Herbert Xu <herbert@gondor.apana.org.au> cc: "David S. Miller" <davem@davemloft.net> cc: Chuck Lever <chuck.lever@oracle.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: Eric Dumazet <edumazet@google.com> cc: Jakub Kicinski <kuba@kernel.org> cc: Paolo Abeni <pabeni@redhat.com> cc: Simon Horman <horms@kernel.org> cc: linux-afs@lists.infradead.org cc: linux-nfs@vger.kernel.org cc: linux-crypto@vger.kernel.org cc: netdev@vger.kernel.org
2025-03-02  crypto/krb5: Provide infrastructure and key derivation  (David Howells)
Provide key derivation interface functions and a helper to implement the PRF+ function from rfc4402. Signed-off-by: David Howells <dhowells@redhat.com> cc: Herbert Xu <herbert@gondor.apana.org.au> cc: "David S. Miller" <davem@davemloft.net> cc: Chuck Lever <chuck.lever@oracle.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: Eric Dumazet <edumazet@google.com> cc: Jakub Kicinski <kuba@kernel.org> cc: Paolo Abeni <pabeni@redhat.com> cc: Simon Horman <horms@kernel.org> cc: linux-afs@lists.infradead.org cc: linux-nfs@vger.kernel.org cc: linux-crypto@vger.kernel.org cc: netdev@vger.kernel.org
2025-03-02  crypto/krb5: Add an API to perform requests  (David Howells)
Add an API by which users of the krb5 crypto library can perform crypto requests, such as encrypt, decrypt, get_mic and verify_mic. These functions take the previously prepared crypto objects to work on. Signed-off-by: David Howells <dhowells@redhat.com> cc: Herbert Xu <herbert@gondor.apana.org.au> cc: "David S. Miller" <davem@davemloft.net> cc: Chuck Lever <chuck.lever@oracle.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: Eric Dumazet <edumazet@google.com> cc: Jakub Kicinski <kuba@kernel.org> cc: Paolo Abeni <pabeni@redhat.com> cc: Simon Horman <horms@kernel.org> cc: linux-afs@lists.infradead.org cc: linux-nfs@vger.kernel.org cc: linux-crypto@vger.kernel.org cc: netdev@vger.kernel.org
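A hedged sketch of how a user of the library might chain the prepare and request steps. Every identifier here (crypto_krb5_find_enctype(), crypto_krb5_prepare_encryption(), crypto_krb5_encrypt(), the enctype constant and the argument lists) is an assumption inferred from this series and should be checked against <crypto/krb5.h>:

    #include <crypto/krb5.h>
    #include <linux/err.h>
    #include <linux/scatterlist.h>

    static int foo_seal(struct scatterlist *sg, unsigned int nr_sg, size_t buf_len,
                        size_t data_offset, size_t data_len,
                        const struct krb5_buffer *transport_key, u32 usage)
    {
            const struct krb5_enctype *krb5;
            struct crypto_aead *aead;
            ssize_t ret;

            /* Assumed: lookup by IANA enctype number. */
            krb5 = crypto_krb5_find_enctype(KRB5_ENCTYPE_AES256_CTS_HMAC_SHA384_192);
            if (!krb5)
                    return -ENOPKG;

            /* Assumed: returns an allocated, keyed AEAD for encryption-mode use. */
            aead = crypto_krb5_prepare_encryption(krb5, transport_key, usage, GFP_KERNEL);
            if (IS_ERR(aead))
                    return PTR_ERR(aead);

            /* Assumed: encrypts the data region in place within the buffer described by sg[]. */
            ret = crypto_krb5_encrypt(krb5, aead, sg, nr_sg, buf_len,
                                      data_offset, data_len, false);
            crypto_free_aead(aead);
            return ret < 0 ? ret : 0;
    }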
2025-03-02  crypto/krb5: Add an API to alloc and prepare a crypto object  (David Howells)
Add an API by which users of the krb5 crypto library can get an allocated and keyed crypto object. For encryption-mode operation, an AEAD object is returned; for checksum-mode operation, a synchronous hash object is returned. Signed-off-by: David Howells <dhowells@redhat.com> cc: Herbert Xu <herbert@gondor.apana.org.au> cc: "David S. Miller" <davem@davemloft.net> cc: Chuck Lever <chuck.lever@oracle.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: Eric Dumazet <edumazet@google.com> cc: Jakub Kicinski <kuba@kernel.org> cc: Paolo Abeni <pabeni@redhat.com> cc: Simon Horman <horms@kernel.org> cc: linux-afs@lists.infradead.org cc: linux-nfs@vger.kernel.org cc: linux-crypto@vger.kernel.org cc: netdev@vger.kernel.org
2025-03-02  crypto/krb5: Add an API to query the layout of the crypto section  (David Howells)
Provide some functions to allow the caller to find out about the layout of the crypto section: (1) Calculate, for a given size of data, how big a buffer will be required to hold it and where the data will be within it. (2) Calculate, for an amount of buffer, what's the maximum size of data that will fit therein, and where it will start. (3) Determine where the data will be in a received message. Signed-off-by: David Howells <dhowells@redhat.com> cc: Herbert Xu <herbert@gondor.apana.org.au> cc: "David S. Miller" <davem@davemloft.net> cc: Chuck Lever <chuck.lever@oracle.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: Eric Dumazet <edumazet@google.com> cc: Jakub Kicinski <kuba@kernel.org> cc: Paolo Abeni <pabeni@redhat.com> cc: Simon Horman <horms@kernel.org> cc: linux-afs@lists.infradead.org cc: linux-nfs@vger.kernel.org cc: linux-crypto@vger.kernel.org cc: netdev@vger.kernel.org
2025-03-02  crypto/krb5: Implement Kerberos crypto core  (David Howells)
Provide core structures, an encoding-type registry and basic module and config bits for a generic Kerberos crypto library. Signed-off-by: David Howells <dhowells@redhat.com> cc: Herbert Xu <herbert@gondor.apana.org.au> cc: "David S. Miller" <davem@davemloft.net> cc: Chuck Lever <chuck.lever@oracle.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: Eric Dumazet <edumazet@google.com> cc: Jakub Kicinski <kuba@kernel.org> cc: Paolo Abeni <pabeni@redhat.com> cc: Simon Horman <horms@kernel.org> cc: linux-afs@lists.infradead.org cc: linux-nfs@vger.kernel.org cc: linux-crypto@vger.kernel.org cc: netdev@vger.kernel.org
2025-03-02  crypto: Add 'krb5enc' hash and cipher AEAD algorithm  (David Howells)
Add an AEAD template that does hash-then-cipher (unlike authenc that does cipher-then-hash). This is required for a number of Kerberos 5 encoding types. [!] Note that the net/sunrpc/auth_gss/ implementation gets a pair of ciphers, one non-CTS and one CTS, using the former to do all the aligned blocks and the latter to do the last two blocks if they aren't also aligned. It may be necessary to do this here too for performance reasons - but there are considerations both ways: (1) firstly, there is an optimised assembly version of cts(cbc(aes)) on x86_64 that should be used instead of having two ciphers; (2) secondly, none of the hardware offload drivers seem to offer CTS support (Intel QAT does not, for instance). However, I don't know if it's possible to query the crypto API to find out whether there's an optimised CTS algorithm available. Signed-off-by: David Howells <dhowells@redhat.com> cc: Herbert Xu <herbert@gondor.apana.org.au> cc: "David S. Miller" <davem@davemloft.net> cc: Chuck Lever <chuck.lever@oracle.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: Eric Dumazet <edumazet@google.com> cc: Jakub Kicinski <kuba@kernel.org> cc: Paolo Abeni <pabeni@redhat.com> cc: Simon Horman <horms@kernel.org> cc: linux-afs@lists.infradead.org cc: linux-nfs@vger.kernel.org cc: linux-crypto@vger.kernel.org cc: netdev@vger.kernel.org
2025-03-02  crypto/krb5: Add some constants out of sunrpc headers  (David Howells)
Add some constants from the sunrpc headers. Signed-off-by: David Howells <dhowells@redhat.com> cc: Herbert Xu <herbert@gondor.apana.org.au> cc: "David S. Miller" <davem@davemloft.net> cc: Chuck Lever <chuck.lever@oracle.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: Eric Dumazet <edumazet@google.com> cc: Jakub Kicinski <kuba@kernel.org> cc: Paolo Abeni <pabeni@redhat.com> cc: Simon Horman <horms@kernel.org> cc: linux-afs@lists.infradead.org cc: linux-nfs@vger.kernel.org cc: linux-crypto@vger.kernel.org cc: netdev@vger.kernel.org
2025-03-02  hwmon: (k10temp) add support for cyan skillfish  (Mikhail Paulyshka)
Add support for Cyan Skillfish (AMD Family 17h Model 47h), which appears to be a Zen 2 based APU. The patch was tested with an AMD BC-250 board. Signed-off-by: Mikhail Paulyshka <me@mixaill.net> Link: https://lore.kernel.org/r/20250302155009.49951-1-me@mixaill.net Signed-off-by: Guenter Roeck <linux@roeck-us.net>
2025-03-02  cred: Fix RCU warnings in override/revert_creds  (Herbert Xu)
Fix RCU warnings in override_creds and revert_creds by turning the RCU pointer into a normal pointer using rcu_replace_pointer. These warnings were previously private to the cred code, but due to the move into the header file they are now polluting unrelated subsystems. Fixes: 49dffdfde462 ("cred: Add a light version of override/revert_creds()") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Link: https://lore.kernel.org/r/Z8QGQGW0IaSklKG7@gondor.apana.org.au Signed-off-by: Christian Brauner <brauner@kernel.org>
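As a hedged sketch (not a verbatim copy of the header change), the light-weight helpers end up shaped roughly like this, with rcu_replace_pointer() doing the __rcu-to-plain-pointer conversion that the open-coded assignment tripped over:

    static inline const struct cred *override_creds(const struct cred *override_cred)
    {
            /* The task only ever updates its own cred pointer here, hence the "1". */
            return rcu_replace_pointer(current->cred, override_cred, 1);
    }

    static inline const struct cred *revert_creds(const struct cred *revert_cred)
    {
            return rcu_replace_pointer(current->cred, revert_cred, 1);
    }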
2025-03-02  crypto: skcipher - Use restrict rather than hand-rolling accesses  (Herbert Xu)
Rather than accessing 'alg' directly to avoid the aliasing issue which leads to unnecessary reloads, use the __restrict keyword to explicitly tell the compiler that there is no aliasing. This generates equivalent if not superior code on x86 with gcc 12. Note that in skcipher_walk_virt the alg assignment is moved after might_sleep_if because that function is a compiler barrier and forces a reload. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
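A generic illustration of the effect (not the skcipher code itself): without the qualifier the compiler must assume the store through walk may alias *alg and reload alg->ivsize; with __restrict it may load both fields once.

    struct alg_params { unsigned int blocksize; unsigned int ivsize; };
    struct walk_state { unsigned int stride;    unsigned int ivlen;  };

    static void setup(struct walk_state *__restrict walk,
                      const struct alg_params *__restrict alg)
    {
            walk->stride = alg->blocksize;  /* this store cannot alias *alg ...          */
            walk->ivlen  = alg->ivsize;     /* ... so no reload of alg->ivsize is forced */
    }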
2025-03-02  crypto: scatterwalk - don't split at page boundaries when !HIGHMEM  (Eric Biggers)
When !HIGHMEM, the kmap_local_page() in the scatterlist walker does not actually map anything, and the address it returns is just the address from the kernel's direct map, where each sg entry's data is virtually contiguous. To improve performance, stop unnecessarily clamping data segments to page boundaries in this case. For now, still limit segments to PAGE_SIZE. This is needed to prevent preemption from being disabled for too long when SIMD is used, and to support the alignmask case which still uses a page-sized bounce buffer. Even so, this change still helps a lot in cases where messages cross a page boundary. For example, testing IPsec with AES-GCM on x86_64, the messages are 1424 bytes which is less than PAGE_SIZE, but on the Rx side over a third cross a page boundary. These ended up being processed in three parts, with the middle part going through skcipher_next_slow which uses a 16-byte bounce buffer. That was causing a significant amount of overhead which unnecessarily reduced the performance benefit of the new x86_64 AES-GCM assembly code. This change solves the problem; all these messages now get passed to the assembly code in one part. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-02  crypto: scatterwalk - remove obsolete functions  (Eric Biggers)
Remove various functions that are no longer used. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-02  crypto: scatterwalk - add scatterwalk_get_sglist()  (Eric Biggers)
Add a function that creates a scatterlist that represents the remaining data in a walk. This will be used to replace chain_to_walk() in net/tls/tls_device_fallback.c so that it will no longer need to reach into the internals of struct scatter_walk. Cc: Boris Pismenny <borisp@nvidia.com> Cc: Jakub Kicinski <kuba@kernel.org> Cc: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-02  crypto: scatterwalk - add new functions for copying data  (Eric Biggers)
Add memcpy_from_sglist() and memcpy_to_sglist() which are more readable versions of scatterwalk_map_and_copy() with the 'out' argument 0 and 1 respectively. They follow the same argument order as memcpy_from_page() and memcpy_to_page() from <linux/highmem.h>. Note that in the case of memcpy_from_sglist(), this also happens to be the same argument order that scatterwalk_map_and_copy() uses. The new code is also faster, mainly because it builds the scatter_walk directly without creating a temporary scatterlist. E.g., a 20% performance improvement is seen for copying the AES-GCM auth tag. Make scatterwalk_map_and_copy() be a wrapper around memcpy_from_sglist() and memcpy_to_sglist(). Callers of scatterwalk_map_and_copy() should be updated to call memcpy_from_sglist() or memcpy_to_sglist() directly, but there are a lot of them so they aren't all being updated right away. Also add functions memcpy_from_scatterwalk() and memcpy_to_scatterwalk() which are similar but operate on a scatter_walk instead of a scatterlist. These will replace scatterwalk_copychunks() with the 'out' argument 0 and 1 respectively. Their behavior differs slightly from scatterwalk_copychunks() in that they automatically take care of flushing the dcache when needed, making them easier to use. scatterwalk_copychunks() itself is left unchanged for now. It will be removed after its callers are updated to use other functions instead. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
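A hedged before/after sketch of a typical AEAD tag copy, relying only on the argument orders described above (memcpy_from_page()/memcpy_to_page() style); verify against <crypto/scatterwalk.h> before relying on it:

    /* Before: the final 'out' flag selects the direction. */
    scatterwalk_map_and_copy(tag, req->dst, req->assoclen + req->cryptlen, 16, 0);
    scatterwalk_map_and_copy(tag, req->dst, req->assoclen + req->cryptlen, 16, 1);

    /* After: the direction is in the function name. */
    memcpy_from_sglist(tag, req->dst, req->assoclen + req->cryptlen, 16);
    memcpy_to_sglist(req->dst, req->assoclen + req->cryptlen, tag, 16);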
2025-03-02  crypto: scatterwalk - add new functions for iterating through data  (Eric Biggers)
Add scatterwalk_next() which consolidates scatterwalk_clamp() and scatterwalk_map(). Also add scatterwalk_done_src() and scatterwalk_done_dst() which consolidate scatterwalk_unmap(), scatterwalk_advance(), and scatterwalk_done() or scatterwalk_pagedone(). A later patch will remove scatterwalk_done() and scatterwalk_pagedone(). The new code eliminates the error-prone 'more' parameter. Advancing to the next sg entry now only happens just-in-time in scatterwalk_next(). The new code also pairs the dcache flush more closely with the actual write, similar to memcpy_to_page(). Previously it was paired with advancing to the next page. This is currently causing bugs where the dcache flush is incorrectly being skipped, usually due to scatterwalk_copychunks() being called without a following scatterwalk_done(). The dcache flush may have been placed where it was in order to not call flush_dcache_page() redundantly when visiting a page more than once. However, that case is rare in practice, and most architectures either do not implement flush_dcache_page() anyway or implement it lazily where it just clears a page flag. Another limitation of the old code was that by the time the flush happened, there was no way to tell if more than one page needed to be flushed. That has been sufficient because the code goes page by page, but I would like to optimize that on !HIGHMEM platforms. The new code makes this possible, and a later patch will implement this optimization. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-02  crypto: scatterwalk - add new functions for skipping data  (Eric Biggers)
Add scatterwalk_skip() to skip the given number of bytes in a scatter_walk. Previously support for skipping was provided through scatterwalk_copychunks(..., 2) followed by scatterwalk_done(), which was confusing and less efficient. Also add scatterwalk_start_at_pos() which starts a scatter_walk at the given position, equivalent to scatterwalk_start() + scatterwalk_skip(). This addresses another common need in a more streamlined way. Later patches will convert various users to use these functions. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
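A hedged sketch of the intended call pattern; the exact signatures are assumptions inferred from the description above:

    struct scatter_walk walk;

    /* Start walking the scatterlist directly at byte offset 'pos' ... */
    scatterwalk_start_at_pos(&walk, req->src, pos);

    /* ... or skip over an already-processed region from the current position. */
    scatterwalk_skip(&walk, hdr_len);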
2025-03-02  crypto: scatterwalk - move to next sg entry just in time  (Eric Biggers)
The scatterwalk_* functions are designed to advance to the next sg entry only when there is more data from the request to process. Compared to the alternative of advancing after each step if !sg_is_last(sg), this has the advantage that it doesn't cause problems if users accidentally don't terminate their scatterlist with the end marker (which is an easy mistake to make, and there are examples of this). Currently, the advance to the next sg entry happens in scatterwalk_done(), which is called after each "step" of the walk. It requires the caller to pass in a boolean 'more' that indicates whether there is more data. This works when the caller immediately knows whether there is more data, though it adds some complexity. However in the case of scatterwalk_copychunks() it's not immediately known whether there is more data, so the call to scatterwalk_done() has to happen higher up the stack. This is error-prone, and indeed the needed call to scatterwalk_done() is not always made, e.g. scatterwalk_copychunks() is sometimes called multiple times in a row. This causes a zero-length step to get added in some cases, which is unexpected and seems to work only by accident. This patch begins the switch to a less error-prone approach where the advance to the next sg entry happens just in time instead. For now, that means just doing the advance in scatterwalk_clamp() if it's needed there. Initially this is redundant, but it's needed to keep the tree in a working state as later patches change things to the final state. Later patches will similarly move the dcache flushing logic out of scatterwalk_done() and then remove scatterwalk_done() entirely. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-01  Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds)
Pull arm64 fixes from Will Deacon:
 "Ryan's been hard at work finding and fixing mm bugs in the arm64 code, so here's a small crop of fixes for -rc5.
  The main changes are to fix our zapping of non-present PTEs for hugetlb entries created using the contiguous bit in the page-table rather than a block entry at the level above. Prior to these fixes, we were pulling the contiguous bit back out of the PTE in order to determine the size of the hugetlb page but this is clearly bogus if the thing isn't present and consequently both the clearing of the PTE(s) and the TLB invalidation were unreliable.
  Although the problem was found by code inspection, we really don't want this sitting around waiting to trigger and the changes are CC'd to stable accordingly.
  Note that the diffstat looks a lot worse than it really is; huge_ptep_get_and_clear() now takes a size argument from the core code and so all the arch implementations of that have been updated in a pretty mechanical fashion.
  - Fix a sporadic boot failure due to incorrect randomization of the linear map on systems that support it
  - Fix the zapping (both clearing the entries *and* invalidating the TLB) of hugetlb PTEs constructed using the contiguous bit"

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: hugetlb: Fix flush_hugetlb_tlb_range() invalidation level
  arm64: hugetlb: Fix huge_ptep_get_and_clear() for non-present ptes
  mm: hugetlb: Add huge page size param to huge_ptep_get_and_clear()
  arm64/mm: Fix Boot panic on Ampere Altra
2025-03-01  io.h: drop unused headers  (Raag Jadav)
Drop unused headers and type declaration from io.h. Signed-off-by: Raag Jadav <raag.jadav@intel.com> Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2025-03-01  asm-generic/io.h: rework split ioread64/iowrite64 helpers  (Arnd Bergmann)
There are two incompatible sets of definitions of these eight functions: on 64-bit architectures setting CONFIG_HAS_IOPORT, they turn into either a pair of 32-bit PIO (inl/outl) accesses or a single 64-bit MMIO access (readq/writeq). On other 64-bit architectures, they are always split into 32-bit accesses. Depending on which header gets included in a driver, there are additionally definitions for ioread64()/iowrite64() that are expected to produce a 64-bit MMIO register access on all 64-bit architectures. To separate the conflicting definitions, make the version in include/linux/io-64-nonatomic-*.h visible on all architectures, but pick the one from lib/iomap.c on architectures that set CONFIG_GENERIC_IOMAP, in place of the default fallback. Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
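A usage sketch of the driver-visible side, assuming a driver that can tolerate a non-atomic 64-bit MMIO access and picks the word ordering explicitly; the register offset is illustrative:

    #include <linux/io-64-nonatomic-lo-hi.h>    /* low 32 bits first, then high */

    static u64 foo_read_counter(void __iomem *base)
    {
            /*
             * One 64-bit access where the architecture provides it natively,
             * otherwise emitted as two 32-bit reads, low word first.
             */
            return ioread64(base + 0x10);
    }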
2025-03-01  Merge branch 'perf/urgent' into perf/core, to pick up dependent patches and fixes  (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2025-03-01  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds)
Pull kvm fixes from Paolo Bonzini:
 "ARM:
  - Fix TCR_EL2 configuration to not use the ASID in TTBR1_EL2 and not mess-up T1SZ/PS by using the HCR_EL2.E2H==0 layout.
  - Bring back the VMID allocation to the vcpu_load phase, ensuring that we only setup VTTBR_EL2 once on VHE. This cures an ugly race that would lead to running with an unallocated VMID.

  RISC-V:
  - Fix hart status check in SBI HSM extension
  - Fix hart suspend_type usage in SBI HSM extension
  - Fix error returned by SBI IPI and TIME extensions for unsupported function IDs
  - Fix suspend_type usage in SBI SUSP extension
  - Remove unnecessary vcpu kick after injecting interrupt via IMSIC guest file

  x86:
  - Fix an nVMX bug where KVM fails to detect that, after nested VM-Exit, L1 has a pending IRQ (or NMI).
  - To avoid freeing the PIC while vCPUs are still around, which would cause a NULL pointer access with the previous patch, destroy vCPUs before any VM-level destruction.
  - Handle failures to create vhost_tasks"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  kvm: retry nx_huge_page_recovery_thread creation
  vhost: return task creation error instead of NULL
  KVM: nVMX: Process events on nested VM-Exit if injectable IRQ or NMI is pending
  KVM: x86: Free vCPUs before freeing VM state
  riscv: KVM: Remove unnecessary vcpu kick
  KVM: arm64: Ensure a VMID is allocated before programming VTTBR_EL2
  KVM: arm64: Fix tcr_el2 initialisation in hVHE mode
  riscv: KVM: Fix SBI sleep_type use
  riscv: KVM: Fix SBI TIME error generation
  riscv: KVM: Fix SBI IPI error generation
  riscv: KVM: Fix hart suspend_type use
  riscv: KVM: Fix hart suspend status check
2025-03-01  dt-bindings: clock: add clock definitions and documentation for exynos7870 CMU  (Kaustabh Chakraborty)
Add unique identifiers for exynos7870 clocks for every bank. It adds all clocks of CMU_MIF, CMU_DISPAUD, CMU_G3D, CMU_ISP, CMU_MFCMSCL, and CMU_PERI. Document the devicetree bindings as well. Signed-off-by: Kaustabh Chakraborty <kauschluss@disroot.org> Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Link: https://lore.kernel.org/r/20250301-exynos7870-pmu-clocks-v5-1-715b646d5206@disroot.org Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
2025-03-01  dt-bindings: clock: add Exynos2200 SoC  (Ivaylo Ivanov)
Provide dt-schema documentation for Exynos2200 SoC clock controller. Add device tree clock binding definitions for the following CMU blocks:
  - CMU_ALIVE
  - CMU_CMGP
  - CMU_HSI0
  - CMU_PERIC0/1/2
  - CMU_PERIS
  - CMU_TOP
  - CMU_UFS
  - CMU_VTS
Signed-off-by: Ivaylo Ivanov <ivo.ivanov.ivanov1@gmail.com> Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Link: https://lore.kernel.org/r/20250223115601.723886-2-ivo.ivanov.ivanov1@gmail.com Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
2025-03-01  fs: place f_ref to 3rd cache line in struct file to resolve false sharing  (Pan Deng)
When running syscall pread in a high core count system, f_ref contends with the reading of f_mode, f_op, f_mapping, f_inode, f_flags in the same cache line. This change places f_ref in the 3rd cache line, where fields are not updated as frequently as in the 1st cache line, and the contention is greatly reduced according to tests. In addition, the size of the file object is kept within 3 cache lines. This change has been tested with the rocksdb benchmark readwhilewriting case on a 1-socket, 64-physical-core, 128-logical-core bare-metal machine, with build config CONFIG_RANDSTRUCT_NONE=y. Command: ./db_bench --benchmarks="readwhilewriting" --threads $cnt --duration 60 The throughput (ops/s) is improved by up to ~21%:
  threads   baseline   compare
  16        100%       +1.3%
  32        100%       +2.2%
  64        100%       +7.2%
  128       100%       +20.9%
It was also tested with the UnixBench syscall, fsbuffer, fstime, and fsdisk cases that have been used for file struct layout tuning; no regression was observed. Signed-off-by: Pan Deng <pan.deng@intel.com> Link: https://lore.kernel.org/r/20250228020059.3023375-1-pan.deng@intel.com Tested-by: Lipeng Zhu <lipeng.zhu@intel.com> Reviewed-by: Tianyou Li <tianyou.li@intel.com> Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Christian Brauner <brauner@kernel.org>
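One way to pin a layout expectation like this (a hedged illustration, not part of the patch; it assumes 64-byte cache lines):

    #include <linux/fs.h>
    #include <linux/build_bug.h>

    /* f_ref should stay out of the hot first cache line ... */
    static_assert(offsetof(struct file, f_ref) >= 2 * 64);
    /* ... and struct file should still fit in three cache lines. */
    static_assert(sizeof(struct file) <= 3 * 64);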
2025-03-01  gpu: ipu-v3 ipu-cpmem: Remove unused functions  (Dr. David Alan Gilbert)
ipu_cpmem_set_yuv_interleaved() was added in 2012 by commit 0125f21b2baf ("staging: drm/imx: Add ipu_cpmem_set_yuv_interleaved()") but has remained unused. ipu_cpmem_get_burstsize() was added in 2016 by commit 03085911d7bb ("gpu: ipu-cpmem: Add ipu_cpmem_get_burstsize()") but has remained unused. Remove them. Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org> Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Link: https://patchwork.freedesktop.org/patch/msgid/20241226022752.219399-8-linux@treblig.org Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
2025-03-01  gpu: ipu-v3: ipu-csi: Remove unused functions  (Dr. David Alan Gilbert)
ipu_csi_get_window(), ipu_csi_is_interlaced() and ipu_csi_set_test_generator() were added in 2014 by commit 2ffd48f2e7ae ("gpu: ipu-v3: Add Camera Sensor Interface unit") but have remained unused. Remove them. ipu_csi_set_testgen_mclk() is now unused. Remove it. Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org> Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Link: https://patchwork.freedesktop.org/patch/msgid/20241226022752.219399-7-linux@treblig.org Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
2025-03-01  gpu: ipu-v3: Remove unused ipu_vdi_unsetup  (Dr. David Alan Gilbert)
ipu_vdi_unsetup() was added in 2016 by commit 2d2ead453077 ("gpu: ipu-v3: Add Video Deinterlacer unit") but has remained unused. Remove it. Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org> Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Link: https://patchwork.freedesktop.org/patch/msgid/20241226022752.219399-6-linux@treblig.org Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
2025-03-01  gpu: ipu-v3: Remove unused ipu_image_convert_* functions  (Dr. David Alan Gilbert)
ipu_image_convert_enum_format() and ipu_image_convert_sync() were both added in 2016 by commit cd98e85a6b78 ("gpu: ipu-v3: Add queued image conversion support") but have remained unused. Remove them. ipu_image_convert_sync() was the last user of image_convert_sync_complete(). Remove it. Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org> Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Link: https://patchwork.freedesktop.org/patch/msgid/20241226022752.219399-5-linux@treblig.org Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
2025-03-01  gpu: ipu-v3: Remove unused ipu_rot_mode_to_degrees  (Dr. David Alan Gilbert)
ipu_rot_mode_to_degrees() was added in 2014 by commit f835f386a119 ("gpu: ipu-v3: Add rotation mode conversion utilities") but has remained unused. Remove it. Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org> Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Link: https://patchwork.freedesktop.org/patch/msgid/20241226022752.219399-3-linux@treblig.org Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
2025-03-01  gpu: ipu-v3: ipu-ic: Remove unused ipu_ic_task_graphics_init  (Dr. David Alan Gilbert)
ipu_ic_task_graphics_init() was added in 2014 by commit 1aa8ea0d2bd5 ("gpu: ipu-v3: Add Image Converter unit") but has been unused. Remove it. Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org> Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Link: https://patchwork.freedesktop.org/patch/msgid/20241226022752.219399-2-linux@treblig.org Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
2025-03-01  kvm: retry nx_huge_page_recovery_thread creation  (Keith Busch)
A VMM may send a non-fatal signal to its threads, including vCPU tasks, at any time, and thus may signal vCPU tasks during KVM_RUN. If a vCPU task receives the signal while it's trying to spawn the huge page recovery vhost task, then KVM_RUN will fail due to copy_process() returning -ERESTARTNOINTR. Rework call_once() to mark the call complete if and only if the called function succeeds, and plumb the function's true error code back to the call_once() invoker. This provides userspace with the correct, non-fatal error code so that the VMM doesn't terminate the VM on -ENOMEM, and allows a subsequent KVM_RUN to succeed by virtue of retrying creation of the NX huge page task. Co-developed-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Sean Christopherson <seanjc@google.com> [implemented the kvm user side] Signed-off-by: Keith Busch <kbusch@kernel.org> Message-ID: <20250227230631.303431-3-kbusch@meta.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
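A hedged sketch of the reworked helper's shape; the real struct once, state names and locking live in include/linux/call_once.h and may differ in detail:

    static inline int call_once(struct once *once, int (*cb)(struct once *))
    {
            int r;

            /* Fast path: an earlier caller already completed successfully. */
            if (atomic_read(&once->state) == ONCE_COMPLETED)
                    return 0;

            guard(mutex)(&once->lock);
            if (atomic_read(&once->state) == ONCE_COMPLETED)
                    return 0;

            r = cb(once);
            if (r)
                    return r;   /* not marked complete: KVM_RUN can retry, e.g. after -ERESTARTNOINTR */

            atomic_set(&once->state, ONCE_COMPLETED);
            return 0;
    }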
2025-03-01  perf: arm_pmu: Move PMUv3-specific data  (Mark Rutland)
A few fields in struct arm_pmu are only used with PMUv3, and soon we will need to add more for BRBE. Group the fields together so that we have a logical place to add more data in future. At the same time, remove the comment for reg_pmmir as it doesn't convey anything useful. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Rob Herring (Arm) <robh@kernel.org> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Tested-by: James Clark <james.clark@linaro.org> Link: https://lore.kernel.org/r/20250218-arm-brbe-v19-v20-7-4e9922fc2e8e@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2025-02-28  io_uring/ublk: report error when unregister operation fails  (Caleb Sander Mateos)
Indicate to userspace applications if a UBLK_IO_UNREGISTER_IO_BUF command specifies an invalid buffer index by returning an error code. Return -EINVAL if no buffer is registered with the given index, and -EBUSY if the registered buffer is not a kernel bvec. Signed-off-by: Caleb Sander Mateos <csander@purestorage.com> Link: https://lore.kernel.org/r/20250228231432.642417-1-csander@purestorage.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-02-28  io_uring: convert cmd_to_io_kiocb() macro to function  (Caleb Sander Mateos)
The cmd_to_io_kiocb() macro applies a pointer cast to its input without parenthesizing it. Currently all inputs are variable names, so this has the intended effect. But since casts have relatively high precedence, the macro would apply the cast to the wrong value if the input was a pointer addition, for example. Turn the macro into a static inline function to ensure the pointer cast is applied to the full input value. Signed-off-by: Caleb Sander Mateos <csander@purestorage.com> Link: https://lore.kernel.org/r/20250228230305.630885-1-csander@purestorage.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
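A generic illustration of the hazard with made-up names (the real io_uring macro and its users differ in detail):

    struct req;     /* stand-in for struct io_kiocb */

    #define TO_REQ(ptr)  (struct req *)ptr          /* unparenthesized input */

    /*
     * TO_REQ(base + off) expands to ((struct req *)base) + off, i.e. the cast
     * binds to 'base' alone and the '+' becomes pointer arithmetic in units
     * of struct req -- not a cast of the whole sum.
     */

    static inline struct req *to_req(void *ptr)
    {
            return ptr;     /* the argument is fully evaluated before conversion */
    }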
2025-02-28  io_uring/uring_cmd: specify io_uring_cmd_import_fixed() pointer type  (Caleb Sander Mateos)
io_uring_cmd_import_fixed() takes a struct io_uring_cmd *, but the type of the ioucmd parameter is void *. Make the pointer type explicit so the compiler can type check it. Signed-off-by: Caleb Sander Mateos <csander@purestorage.com> Link: https://lore.kernel.org/r/20250228221514.604350-1-csander@purestorage.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-02-28  Merge tag 'objtool-urgent-2025-02-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull objtool fixes from Ingo Molnar:
 "Fix an objtool false positive, and objtool-related build warnings that happen on PIE-enabled architectures such as LoongArch"

* tag 'objtool-urgent-2025-02-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  objtool: Add bch2_trans_unlocked_or_in_restart_error() to bcachefs noreturns
  objtool: Fix C jump table annotations for Clang
  vmlinux.lds: Ensure that const vars with relocations are mapped R/O
2025-02-28  Merge tag 'locking-urgent-2025-02-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull locking fix from Ingo Molnar:
 "Fix an rcuref_put() slowpath race"

* tag 'locking-urgent-2025-02-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  rcuref: Plug slowpath race in rcuref_put()
2025-02-28  pwm: Check for CONFIG_PWM using IS_REACHABLE() in main header  (Uwe Kleine-König)
In preparation for CONFIG_PWM becoming tristate, the right way to check for the availability of the pwm functions is IS_REACHABLE(), not IS_ENABLED(). The latter gives the wrong result for built-in code with CONFIG_PWM=m. Signed-off-by: Uwe Kleine-König <u.kleine-koenig@baylibre.com> Link: https://lore.kernel.org/r/20250217102504.687916-2-u.kleine-koenig@baylibre.com Signed-off-by: Uwe Kleine-König <ukleinek@kernel.org>
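A sketch of the pattern the header moves to (simplified; the real prototypes and stubs in <linux/pwm.h> are more numerous): IS_ENABLED(CONFIG_PWM) is true for both =y and =m, so built-in callers would try to call functions that live in a module, while IS_REACHABLE(CONFIG_PWM) is only true when the caller can actually link against them.

    #if IS_REACHABLE(CONFIG_PWM)
    struct pwm_device *pwm_get(struct device *dev, const char *con_id);
    #else
    static inline struct pwm_device *pwm_get(struct device *dev, const char *con_id)
    {
            /* Built-in caller with CONFIG_PWM=m (or =n) gets the stub. */
            return ERR_PTR(-ENODEV);
    }
    #endif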
2025-02-28  Merge branch 'v6.15-shared/clkids' into v6.15-clk/next  (Heiko Stuebner)
2025-02-28  dt-bindings: clock: Add RK3562 cru  (Kever Yang)
Document the device tree bindings of the rockchip rk3562 SoC clock and reset unit. Signed-off-by: Kever Yang <kever.yang@rock-chips.com> Reviewed-by: "Rob Herring (Arm)" <robh@kernel.org> Link: https://lore.kernel.org/r/20250227105916.2340856-2-kever.yang@rock-chips.com Signed-off-by: Heiko Stuebner <heiko@sntech.de>
2025-02-28  driver core: Introduce device_{add,remove}_of_node()  (Herve Codina)
An of_node can be set on a device using device_set_node(), which does not prevent overwriting an existing of_node and/or fwnode. When adding an of_node to an already present device, the following operations need to be done:
  - Attach the of_node only if no of_node is already attached;
  - Attach the of_node as a fwnode if no fwnode was already attached.
This is the purpose of device_add_of_node(). device_remove_of_node() reverts the operations done by device_add_of_node(). Link: https://lore.kernel.org/r/20250224141356.36325-2-herve.codina@bootlin.com Signed-off-by: Herve Codina <herve.codina@bootlin.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Rob Herring (Arm) <robh@kernel.org> Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
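A hedged usage sketch; the signatures (int return for add, void return for remove) are assumptions based on the description above:

    static int foo_attach_node(struct device *dev, struct device_node *np)
    {
            /* Fails rather than overwriting an of_node/fwnode that is already set. */
            return device_add_of_node(dev, np);
    }

    static void foo_detach_node(struct device *dev)
    {
            device_remove_of_node(dev);
    }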
2025-02-28  nilfs2: Mark on-disk strings as nonstring  (Kees Cook)
In preparation for memtostr*() checking that its source is marked as nonstring, annotate the device strings accordingly using the new UAPI alias for the "nonstring" attribute. Signed-off-by: Kees Cook <kees@kernel.org>
2025-02-28  uapi: stddef.h: Introduce __kernel_nonstring  (Kees Cook)
In order to annotate byte arrays in UAPI that are not C strings (i.e. they may not be NUL terminated), the "nonstring" attribute is needed. However, we can't expose this to userspace as it is compiler version specific. Signed-off-by: Kees Cook <kees@kernel.org>
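A hedged sketch of how a UAPI struct would use the annotation; the struct and field names are illustrative, not taken from the nilfs2 patch:

    #include <linux/stddef.h>   /* UAPI header providing __kernel_nonstring */
    #include <linux/types.h>

    struct foo_disk_label {
            __u8 magic[4];
            /* Not NUL-terminated on disk; says so to the compiler and to memtostr*() checks. */
            char volume_name[16] __kernel_nonstring;
    };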
2025-02-28  mm: security: Check early if HARDENED_USERCOPY is enabled  (Mel Gorman)
HARDENED_USERCOPY is checked within a function so even if disabled, the function overhead still exists. Move the static check inline. This is at best a micro-optimisation and any difference in performance was within noise but it is relatively consistent with the init_on_* implementations. Suggested-by: Kees Cook <kees@kernel.org> Signed-off-by: Mel Gorman <mgorman@techsingularity.net> Link: https://lore.kernel.org/r/20250123221115.19722-4-mgorman@techsingularity.net Signed-off-by: Kees Cook <kees@kernel.org>
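A hedged sketch of what moving the gate inline can look like (the actual patch may also test the runtime static key inline; this only shows the compile-time part):

    static __always_inline void check_object_size(const void *ptr, unsigned long n,
                                                  bool to_user)
    {
            /* Compiles to nothing when the hardening is configured out. */
            if (IS_ENABLED(CONFIG_HARDENED_USERCOPY) && !__builtin_constant_p(n))
                    __check_object_size(ptr, n, to_user);
    }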