path: root/arch/arm64
Age  Commit message  Author
2025-07-08  arm64/sysreg: Add ICC_PPI_{C/S}ACTIVER<n>_EL1  (Lorenzo Pieralisi)
Add ICC_PPI_{C/S}ACTIVER<n>_EL1 registers description. Signed-off-by: Lorenzo Pieralisi <lpieralisi@kernel.org> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Cc: Will Deacon <will@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <maz@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20250703-gicv5-host-v7-7-12e71f1b3528@kernel.org Signed-off-by: Marc Zyngier <maz@kernel.org>
2025-07-08  arm64/sysreg: Add ICC_PPI_ENABLER<n>_EL1  (Lorenzo Pieralisi)
Add ICC_PPI_ENABLER<n>_EL1 registers sysreg description. Signed-off-by: Lorenzo Pieralisi <lpieralisi@kernel.org> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Cc: Will Deacon <will@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <maz@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20250703-gicv5-host-v7-6-12e71f1b3528@kernel.org Signed-off-by: Marc Zyngier <maz@kernel.org>
2025-07-08  arm64/sysreg: Add ICC_PPI_HMR<n>_EL1  (Lorenzo Pieralisi)
Add ICC_PPI_HMR<n>_EL1 registers sysreg description. Signed-off-by: Lorenzo Pieralisi <lpieralisi@kernel.org> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Cc: Will Deacon <will@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <maz@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20250703-gicv5-host-v7-5-12e71f1b3528@kernel.org Signed-off-by: Marc Zyngier <maz@kernel.org>
2025-07-08  arm64/sysreg: Add ICC_ICSR_EL1  (Lorenzo Pieralisi)
Add ICC_ICSR_EL1 register sysreg description. Signed-off-by: Lorenzo Pieralisi <lpieralisi@kernel.org> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Cc: Will Deacon <will@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <maz@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20250703-gicv5-host-v7-4-12e71f1b3528@kernel.org Signed-off-by: Marc Zyngier <maz@kernel.org>
2025-07-08  arm64/sysreg: Add ICC_PPI_PRIORITY<n>_EL1  (Lorenzo Pieralisi)
Add ICC_PPI_PRIORITY<n>_EL1 sysreg description. Signed-off-by: Lorenzo Pieralisi <lpieralisi@kernel.org> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Cc: Will Deacon <will@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <maz@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20250703-gicv5-host-v7-3-12e71f1b3528@kernel.org Signed-off-by: Marc Zyngier <maz@kernel.org>
2025-07-08  arm64/sysreg: Add GCIE field to ID_AA64PFR2_EL1  (Lorenzo Pieralisi)
Add field reporting the GCIE feature to ID_AA64PFR2_EL1 sysreg. Signed-off-by: Lorenzo Pieralisi <lpieralisi@kernel.org> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Cc: Will Deacon <will@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <maz@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20250703-gicv5-host-v7-2-12e71f1b3528@kernel.org Signed-off-by: Marc Zyngier <maz@kernel.org>
2025-07-09  arm64: dts: allwinner: a523: Rename emac0 to gmac0  (Chen-Yu Tsai)
The datasheets refer to the first Ethernet controller as GMAC0, not EMAC0. Fix the compatible string, node label and pinmux function name to match what the datasheets use. A change to the device tree binding is sent separately. Fixes: 56766ca6c4f6 ("arm64: dts: allwinner: a523: Add EMAC0 ethernet MAC") Fixes: acca163f3f51 ("arm64: dts: allwinner: a527: add EMAC0 to Radxa A5E board") Fixes: c6800f15998b ("arm64: dts: allwinner: t527: add EMAC0 to Avaota-A1 board") Link: https://patch.msgid.link/20250628054438.2864220-3-wens@kernel.org Signed-off-by: Chen-Yu Tsai <wens@csie.org>
2025-07-08  KVM: arm64: nvhe: Disable branch generation in nVHE guests  (Anshuman Khandual)
While BRBE can record branches within guests, the host recording branches in guests is not supported by perf (though events are). Support for BRBE in guests will be supported by providing direct access to BRBE within the guests. That is how x86 LBR works for guests. Therefore, BRBE needs to be disabled on guest entry and restored on exit. For nVHE, this requires explicit handling for guests. Before entering a guest, save the BRBE state and disable it. When returning to the host, restore the state. For VHE, it is not necessary. We initialize BRBCR_EL1.{E1BRE,E0BRE}=={0,0} at boot time, and HCR_EL2.TGE==1 while running in the host. We configure BRBCR_EL2.{E2BRE,E0HBRE} to enable branch recording in the host. When entering the guest, we set HCR_EL2.TGE==0 which means BRBCR_EL1 is used instead of BRBCR_EL2. Consequently for VHE, BRBE recording is disabled at EL1 and EL0 when running a guest. Should recording in guests (by the host) ever be desired, the perf ABI will need to be extended to distinguish guest addresses (struct perf_branch_entry.priv) for starters. BRBE records would also need to be invalidated on guest entry/exit as guest/host EL1 and EL0 records can't be distinguished. Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Co-developed-by: Rob Herring (Arm) <robh@kernel.org> Signed-off-by: Rob Herring (Arm) <robh@kernel.org> Tested-by: James Clark <james.clark@linaro.org> Reviewed-by: Leo Yan <leo.yan@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20250611-arm-brbe-v19-v23-3-e7775563036e@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
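The nVHE handling described above amounts to saving BRBCR_EL1 and zeroing it before switching to the guest, then restoring it on return to the host. A minimal sketch of that idea is shown below; it is not the verbatim patch, and it assumes the generated SYS_BRBCR_EL1 encoding plus the read_sysreg_s()/write_sysreg_s() accessors, with feature checks omitted.

    /* Hypothetical sketch of nVHE BRBE handling around a guest switch. */
    static void __debug_save_brbe(u64 *brbcr_el1)
    {
            /* Save the host state and stop BRBE from recording guest branches. */
            *brbcr_el1 = read_sysreg_s(SYS_BRBCR_EL1);
            write_sysreg_s(0, SYS_BRBCR_EL1);
            isb();
    }

    static void __debug_restore_brbe(u64 brbcr_el1)
    {
            /* Re-enable host branch recording on return from the guest. */
            write_sysreg_s(brbcr_el1, SYS_BRBCR_EL1);
            isb();
    }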
2025-07-08  arm64: Handle BRBE booting requirements  (Anshuman Khandual)
To use the Branch Record Buffer Extension (BRBE), some configuration is necessary at EL3 and EL2. This patch documents the requirements and adds the initial EL2 setup code, which largely consists of configuring the fine-grained traps and initializing a couple of BRBE control registers. Before this patch, __init_el2_fgt() would initialize HDFGRTR_EL2 and HDFGWTR_EL2 with the same value, relying on the read/write trap controls for a register occupying the same bit position in either register. The 'nBRBIDR' trap control only exists in bit 59 of HDFGRTR_EL2, while bit 59 of HDFGWTR_EL2 is RES0, and so this assumption no longer holds. To handle HDFGRTR_EL2 and HDFGWTR_EL2 having (slightly) different bit layouts, __init_el2_fgt() is changed to accumulate the HDFGRTR_EL2 and HDFGWTR_EL2 control bits separately. While making this change the open-coded value (1 << 62) is replaced with HDFG{R,W}TR_EL2_nPMSNEVFR_EL1_MASK. The BRBCR_EL1 and BRBCR_EL2 registers are unusual and require special initialisation: even though they are subject to E2H renaming, both have an effect regardless of HCR_EL2.TGE, even when running at EL2. So we must initialize BRBCR_EL2 in case we run in nVHE mode. This is handled in __init_el2_brbe() with a comment to explain the situation. Cc: Marc Zyngier <maz@kernel.org> Cc: Oliver Upton <oliver.upton@linux.dev> Reviewed-by: Leo Yan <leo.yan@arm.com> Tested-by: James Clark <james.clark@linaro.org> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> [Mark: rewrite commit message, fix typo in comment] Signed-off-by: Mark Rutland <mark.rutland@arm.com> Co-developed-by: Rob Herring (Arm) <robh@kernel.org> Signed-off-by: Rob Herring (Arm) <robh@kernel.org> tested-by: Adam Young <admiyo@os.amperecomputing.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20250611-arm-brbe-v19-v23-2-e7775563036e@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2025-07-08  arm64/sysreg: Add BRBE registers and fields  (Anshuman Khandual)
This patch adds definitions related to the Branch Record Buffer Extension (BRBE) as per ARM DDI 0487K.a. These will be used by KVM and a BRBE driver in subsequent patches. Some existing BRBE definitions in asm/sysreg.h are replaced with equivalent generated definitions. Cc: Marc Zyngier <maz@kernel.org> Reviewed-by: Mark Brown <broonie@kernel.org> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: James Clark <james.clark@linaro.org> Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Rob Herring (Arm) <robh@kernel.org> tested-by: Adam Young <admiyo@os.amperecomputing.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20250611-arm-brbe-v19-v23-1-e7775563036e@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2025-07-08  arm64: fix unnecessary rebuilding when CONFIG_DEBUG_EFI=y  (Masahiro Yamada)
When CONFIG_DEBUG_EFI is enabled, some objects are needlessly rebuilt.

[Steps to reproduce]
Enable CONFIG_DEBUG_EFI and run 'make' twice in a clean source tree. On the second run, arch/arm64/kernel/head.o is rebuilt even though no files have changed.

  $ make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- clean
  $ make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu-
  [ snip ]
  $ make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu-
    CALL    scripts/checksyscalls.sh
    AS      arch/arm64/kernel/head.o
    AR      arch/arm64/kernel/built-in.a
    AR      arch/arm64/built-in.a
    AR      built-in.a
  [ snip ]

The issue is caused by the use of the $(realpath ...) function. At the time arch/arm64/kernel/Makefile is parsed on the first run, $(objtree)/vmlinux does not exist. As a result, $(realpath $(objtree)/vmlinux) expands to an empty string. On the second run of Make, $(objtree)/vmlinux already exists, so $(realpath $(objtree)/vmlinux) expands to the absolute path of vmlinux. However, this change in the command line causes arch/arm64/kernel/head.o to be rebuilt. To address this issue, use $(abspath ...) instead, which does not require the file to exist. While $(abspath ...) does not resolve symlinks, this should be fine from a debugging perspective. The GNU Make manual [1] clearly explains the difference between the two:

  $(realpath names...)
    For each file name in names return the canonical absolute name. A canonical name does not contain any . or .. components, nor any repeated path separators (/) or symlinks. In case of a failure the empty string is returned. Consult the realpath(3) documentation for a list of possible failure causes.

  $(abspath names...)
    For each file name in names return an absolute name that does not contain any . or .. components, nor any repeated path separators (/). Note that, in contrast to realpath function, abspath does not resolve symlinks and does not require the file names to refer to an existing file or directory. Use the wildcard function to test for existence.

The same problem exists in drivers/firmware/efi/libstub/Makefile.zboot. On the first run of Make, $(obj)/vmlinuz.efi.elf does not exist when the Makefile is parsed, so -DZBOOT_EFI_PATH is set to an empty string. Replace $(realpath ...) with $(abspath ...) there as well.

[1]: https://www.gnu.org/software/make/manual/make.html#File-Name-Functions

Fixes: 757b435aaabe ("efi: arm64: Add vmlinux debug link to the Image binary") Fixes: a050910972bb ("efi/libstub: implement generic EFI zboot") Signed-off-by: Masahiro Yamada <masahiroy@kernel.org> Acked-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20250625125555.2504734-1-masahiroy@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2025-07-08  arm64: remove CONFIG_VMAP_STACK checks from entry code  (Breno Leitao)
With VMAP_STACK now always enabled on arm64, remove all CONFIG_VMAP_STACK conditionals from entry handling in arch/arm64/kernel/entry-common.c and arch/arm64/kernel/entry.S. This change unconditionally includes the bad stack handling and overflow detection logic, simplifying the code and reflecting the mandatory use of VMAP_STACK for all arm64 kernel builds. Signed-off-by: Breno Leitao <leitao@debian.org> Acked-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20250707-arm64_vmap-v1-8-8de98ca0f91c@debian.org Signed-off-by: Will Deacon <will@kernel.org>
2025-07-08  arm64: remove CONFIG_VMAP_STACK checks from SDEI stack handling  (Breno Leitao)
With VMAP_STACK now always enabled on arm64, remove all CONFIG_VMAP_STACK conditionals from SDEI stack allocation and initialization in arch/arm64/kernel/sdei.c. This change unconditionally defines the SDEI stack pointers and replaces runtime checks with BUILD_BUG_ON() assertions, ensuring that the code is only built when VMAP_STACK is enabled. This simplifies the logic and reflects the mandatory use of VMAP_STACK for all arm64 kernel builds. Signed-off-by: Breno Leitao <leitao@debian.org> Acked-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20250707-arm64_vmap-v1-7-8de98ca0f91c@debian.org Signed-off-by: Will Deacon <will@kernel.org>
2025-07-08  arm64: remove CONFIG_VMAP_STACK checks from stacktrace overflow logic  (Breno Leitao)
With VMAP_STACK now always enabled on arm64, remove all CONFIG_VMAP_STACK conditionals from overflow stack handling in stacktrace code. This change unconditionally defines the per-CPU overflow_stack and stackinfo_get_overflow() helper in arch/arm64/include/asm/stacktrace.h, and always includes the overflow stack in the stack_info array in arch/arm64/kernel/stacktrace.c. Also, drop redundant CONFIG_VMAP_STACK checks from SDEI stack declarations. Signed-off-by: Breno Leitao <leitao@debian.org> Acked-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20250707-arm64_vmap-v1-6-8de98ca0f91c@debian.org Signed-off-by: Will Deacon <will@kernel.org>
2025-07-08  arm64: remove CONFIG_VMAP_STACK conditionals from traps overflow stack  (Breno Leitao)
With VMAP_STACK now always enabled on arm64, remove the CONFIG_VMAP_STACK checks from overflow stack definitions and related code in arch/arm64/kernel/traps.c. The overflow_stack and panic_bad_stack() logic are now unconditionally included, simplifying the source and matching the mandatory stack model. Signed-off-by: Breno Leitao <leitao@debian.org> Acked-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20250707-arm64_vmap-v1-5-8de98ca0f91c@debian.org Signed-off-by: Will Deacon <will@kernel.org>
2025-07-08  arm64: remove CONFIG_VMAP_STACK conditionals from irq stack setup  (Breno Leitao)
With VMAP_STACK always enabled on arm64, drop the CONFIG_VMAP_STACK checks and legacy irq stack allocation from arch/arm64/kernel/irq.c. The code now unconditionally uses the VMAP_STACK path for irq stack initialization, simplifying the logic. Signed-off-by: Breno Leitao <leitao@debian.org> Acked-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20250707-arm64_vmap-v1-4-8de98ca0f91c@debian.org Signed-off-by: Will Deacon <will@kernel.org>
2025-07-08  arm64: Remove CONFIG_VMAP_STACK conditionals from THREAD_SHIFT and THREAD_ALIGN  (Breno Leitao)
Now that VMAP_STACK is always enabled on arm64, remove the CONFIG_VMAP_STACK conditional logic from the definitions of THREAD_SHIFT and THREAD_ALIGN in arch/arm64/include/asm/memory.h. This simplifies the code by unconditionally setting THREAD_ALIGN to (2 * THREAD_SIZE) and adjusting the THREAD_SHIFT definition to only depend on MIN_THREAD_SHIFT and PAGE_SHIFT. This change reflects the updated arm64 stack model, where all kernel threads use virtually mapped stacks with guard pages, and ensures alignment and stack sizing are consistently handled. Signed-off-by: Breno Leitao <leitao@debian.org> Acked-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20250707-arm64_vmap-v1-3-8de98ca0f91c@debian.org Signed-off-by: Will Deacon <will@kernel.org>
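As a rough illustration (a sketch of the simplified definitions described above, not necessarily the exact hunk), the CONFIG_VMAP_STACK conditional disappears and only the MIN_THREAD_SHIFT/PAGE_SHIFT comparison remains:

    /* Sketch of the simplified definitions in asm/memory.h (illustrative). */
    #if MIN_THREAD_SHIFT < PAGE_SHIFT
    #define THREAD_SHIFT    PAGE_SHIFT
    #else
    #define THREAD_SHIFT    MIN_THREAD_SHIFT
    #endif

    #define THREAD_SIZE     (UL(1) << THREAD_SHIFT)

    /* Align to twice the stack size so entry code can spot overflow cheaply. */
    #define THREAD_ALIGN    (2 * THREAD_SIZE)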
2025-07-08  arm64: efi: Remove CONFIG_VMAP_STACK check  (Breno Leitao)
Remove the CONFIG_VMAP_STACK check in arm64_efi_rt_init() since VMAP_STACK is now always enabled on arm64. The arch_alloc_vmap_stack() call will fail to build if VMAP_STACK is not set, providing sufficient protection without the explicit runtime check. Signed-off-by: Breno Leitao <leitao@debian.org> Acked-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20250707-arm64_vmap-v1-2-8de98ca0f91c@debian.org Signed-off-by: Will Deacon <will@kernel.org>
2025-07-08  arm64: Mandate VMAP_STACK  (Breno Leitao)
On arm64, VMAP_STACK has been enabled by default for a while now, and the only reason to disable it was a historical lack of support for KASAN_VMALLOC. Today there's no good reason to disable VMAP_STACK. Mandate VMAP_STACK, which will allow code to be simplified in subsequent patches. Suggested-by: Will Deacon <will@kernel.org> Signed-off-by: Breno Leitao <leitao@debian.org> Acked-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20250707-arm64_vmap-v1-1-8de98ca0f91c@debian.org Signed-off-by: Will Deacon <will@kernel.org>
2025-07-08  arm64: debug: remove debug exception registration infrastructure  (Ada Couprie Diaz)
Now that debug exceptions are handled individually and without the need for dynamic registration, remove the unused registration infrastructure. This removes the external caller for `debug_exception_enter()` and `debug_exception_exit()`. Make them static again and remove them from the header. Remove `early_brk64()` as it has been made redundant by (arm64: debug: split brk64 exception entry) and is not used anymore. Note : in `early_brk64()` `bug_brk_handler()` is called unconditionally as a fall-through, but now `call_break_hook()` only calls it if the immediate matches. This does not change the behaviour in early boot, as if `bug_brk_handler()` was called on a non-BUG immediate it would return DBG_HOOK_ERROR anyway, which `call_break_hook()` will do if no immediate matches. Remove `trap_init()`, as it would be empty and a weak definition already exists in `init/main.c`. Signed-off-by: Ada Couprie Diaz <ada.coupriediaz@arm.com> Tested-by: Luis Claudio R. Goncalves <lgoncalv@redhat.com> Reviewed-by: Will Deacon <will@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20250707114109.35672-14-ada.coupriediaz@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2025-07-08  arm64: debug: split bkpt32 exception entry  (Ada Couprie Diaz)
Currently all debug exceptions share common entry code and are routed to `do_debug_exception()`, which calls dynamically-registered handlers for each specific debug exception. This is unfortunate as different debug exceptions have different entry handling requirements, and it would be better to handle these distinct requirements earlier. The BKPT32 exception can only be triggered by a BKPT instruction. Thus, we know that the PC is a legitimate address and isn't being used to train a branch predictor with a bogus address: we don't need to call `arm64_apply_bp_hardening()`. The handler for this exception only pends a signal and doesn't depend on any per-CPU state: we don't need to inhibit preemption, nor do we need to keep the DAIF exceptions masked, so we can unmask them earlier. Split the BKPT32 exception entry and adjust the function signatures and behaviour to match its relaxed constraints compared to other debug exceptions. We can also remove `NOKPROBE_SYMBOL`, as this cannot lead to a kprobe recursion. This replaces the last usage of `el0_dbg()`, so remove it. Signed-off-by: Ada Couprie Diaz <ada.coupriediaz@arm.com> Tested-by: Luis Claudio R. Goncalves <lgoncalv@redhat.com> Reviewed-by: Will Deacon <will@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20250707114109.35672-13-ada.coupriediaz@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2025-07-08  arm64: debug: split brk64 exception entry  (Ada Couprie Diaz)
Currently all debug exceptions share common entry code and are routed to `do_debug_exception()`, which calls dynamically-registered handlers for each specific debug exception. This is unfortunate as different debug exceptions have different entry handling requirements, and it would be better to handle these distinct requirements earlier. The BRK64 exception can only be triggered by a BRK instruction. Thus, we know that the PC is a legitimate address and isn't being used to train a branch predictor with a bogus address: we don't need to call `arm64_apply_bp_hardening()`. We do not need to handle the Cortex-A76 erratum #1463225 either, as it is only relevant for single stepping at EL1. BRK64 does not write FAR_EL1 either, as only hardware watchpoints do so. Split the BRK64 exception entry and adjust the function signature and behaviour to match the lack of needed mitigations. Further, as the EL0 and EL1 code paths are cleanly separated, we can split `do_brk64()` into `do_el0_brk64()` and `do_el1_brk64()`, and call them directly from the relevant entry paths. Use `die()` directly for the EL1 error path, as in `do_el1_bti()` and `do_el1_undef()`. We can also remove `NOKPROBE_SYMBOL` for the EL0 path, as it cannot lead to a kprobe recursion. When taking a BRK64 exception from EL0, the exception handling is safely preemptible: the only possible handler is `uprobe_brk_handler()`. It only operates on task-local data and properly checks its validity, then raises a Thread Information Flag, processed before returning to userspace in `do_notify_resume()`, which is already preemptible. Thus we can safely unmask interrupts and enable preemption before handling the break itself, fixing a PREEMPT_RT issue where the handler could call a sleeping function with preemption disabled. Given that the break hook registration is handled statically in `call_break_hook` since (arm64: debug: call software breakpoint handlers statically) and that we now bypass the exception handler registration, this change renders `early_brk64` redundant: its functionality is now handled through the post-init path. This also removes the last usage of `el1_dbg()`, as well as the last usage of `el0_dbg()` when `CONFIG_COMPAT` is not set. Mark `el0_dbg()` as `__maybe_unused` to prevent a warning when building this patch without `CONFIG_COMPAT`, as the following patch removes `el0_dbg()`. Signed-off-by: Ada Couprie Diaz <ada.coupriediaz@arm.com> Tested-by: Luis Claudio R. Goncalves <lgoncalv@redhat.com> Reviewed-by: Will Deacon <will@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20250707114109.35672-12-ada.coupriediaz@arm.com Signed-off-by: Will Deacon <will@kernel.org>
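On the EL0 side, the relaxed constraints described above translate into an entry path that can unmask DAIF and allow preemption before invoking the handler. The snippet below is a simplified sketch of that shape, using the naming convention from the message rather than the verbatim patch:

    /*
     * Simplified sketch of the split EL0 BRK64 entry in entry-common.c.
     * The PC is known-good (a BRK instruction), so no bp-hardening call is
     * needed, and the only handler (uprobes) is safe to run preemptibly.
     */
    static void noinstr el0_brk64(struct pt_regs *regs, unsigned long esr)
    {
            enter_from_user_mode(regs);
            local_daif_restore(DAIF_PROCCTX);    /* unmask DAIF, allow preemption */
            do_el0_brk64(esr, regs);
            exit_to_user_mode(regs);
    }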
2025-07-08  arm64: debug: split hardware watchpoint exception entry  (Ada Couprie Diaz)
Currently all debug exceptions share common entry code and are routed to `do_debug_exception()`, which calls dynamically-registered handlers for each specific debug exception. This is unfortunate as different debug exceptions have different entry handling requirements, and it would be better to handle these distinct requirements earlier. Hardware watchpoints are the only debug exceptions that will write FAR_EL1, so we need to preserve it and pass it down. However, they cannot be used to maliciously train branch predictors, so we can omit calling `arm64_apply_bp_hardening()`; nor do they need to handle the Cortex-A76 erratum #1463225, as it only applies to single stepping exceptions. As the hardware watchpoint handler only returns 0 and never triggers the call to `arm64_notify_die()`, we can call it directly from `entry-common.c`. Split the hardware watchpoint exception entry and adjust the behaviour to match the lack of needed mitigations. Signed-off-by: Ada Couprie Diaz <ada.coupriediaz@arm.com> Tested-by: Luis Claudio R. Goncalves <lgoncalv@redhat.com> Reviewed-by: Will Deacon <will@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20250707114109.35672-11-ada.coupriediaz@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2025-07-08  arm64: debug: split single stepping exception entry  (Ada Couprie Diaz)
Currently all debug exceptions share common entry code and are routed to `do_debug_exception()`, which calls dynamically-registered handlers for each specific debug exception. This is unfortunate as different debug exceptions have different entry handling requirements, and it would be better to handle these distinct requirements earlier. The single stepping exception has the most constraints: it can be exploited to train branch predictors and it needs special handling at EL1 for the Cortex-A76 erratum #1463225. We need to preserve all those mitigations. However, it does not write an address to FAR_EL1, as only hardware watchpoints do so. The single-step handler does its own signaling if it needs to and only returns 0, so we can call it directly from `entry-common.c`. Split the single stepping exception entry, adjust the function signature, and keep the security mitigation and erratum handling. Further, as the EL0 and EL1 code paths are cleanly separated, we can split `do_softstep()` into `do_el0_softstep()` and `do_el1_softstep()` and call them directly from the relevant entry paths. We can also remove `NOKPROBE_SYMBOL` for the EL0 path, as it cannot lead to a kprobe recursion. Move the call to `arm64_apply_bp_hardening()` to `entry-common.c` so that we can do it as early as possible, and only for the exceptions coming from EL0, where it is needed. This is safe to do as it is `noinstr`, as are all the functions it may call. `el0_ia()` and `el0_pc()` already call it this way. When taking a soft-step exception from EL0, most of the single stepping handling is safely preemptible: the only possible handler is `uprobe_single_step_handler()`. It only operates on task-local data and properly checks its validity, then raises a Thread Information Flag, processed before returning to userspace in `do_notify_resume()`, which is already preemptible. However, the soft-step handler first calls `reinstall_suspended_bps()` to check if there is any hardware breakpoint or watchpoint pending or already stepped through. This cannot be preempted as it manipulates the hardware breakpoint and watchpoint registers. Move the call to `try_step_suspended_breakpoints()` to `entry-common.c` and adjust the relevant comments. We can now safely unmask interrupts before handling the step itself, fixing a PREEMPT_RT issue where the handler could call a sleeping function with preemption disabled. Signed-off-by: Ada Couprie Diaz <ada.coupriediaz@arm.com> Closes: https://lore.kernel.org/linux-arm-kernel/Z6YW_Kx4S2tmj2BP@uudg.org/ Tested-by: Luis Claudio R. Goncalves <lgoncalv@redhat.com> Reviewed-by: Will Deacon <will@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20250707114109.35672-10-ada.coupriediaz@arm.com Signed-off-by: Will Deacon <will@kernel.org>
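Putting the pieces above together, the EL0 single-step entry applies the branch-predictor hardening and the non-preemptible suspended-breakpoint check first, and only then unmasks DAIF for the preemptible uprobe handler. A rough sketch of that ordering follows (the names and the boolean polarity of try_step_suspended_breakpoints() are assumptions, not the verbatim patch):

    /*
     * Rough sketch of the split EL0 single-step entry in entry-common.c.
     * Assumes try_step_suspended_breakpoints() returns true when a suspended
     * hardware breakpoint/watchpoint was handled and no further stepping is
     * needed.
     */
    static void noinstr el0_softstep(struct pt_regs *regs, unsigned long esr)
    {
            /* Single-step can be abused to train branch predictors from EL0. */
            arm64_apply_bp_hardening();

            enter_from_user_mode(regs);
            if (!try_step_suspended_breakpoints(regs)) {
                    /* Only the preemptible uprobe handler remains: unmask DAIF. */
                    local_daif_restore(DAIF_PROCCTX);
                    do_el0_softstep(esr, regs);
            }
            exit_to_user_mode(regs);
    }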
2025-07-08  arm64: debug: refactor reinstall_suspended_bps()  (Ada Couprie Diaz)
`reinstall_suspended_bps()` plays a key part in the stepping process when we have hardware breakpoints and watchpoints enabled. It checks whether one of them needs to be stepped, re-enables it once it has been handled, and returns whether or not we need to proceed with a single-step. However, the current naming and return values make it harder to understand the logic and goal of the function. Rename it to `try_step_suspended_breakpoints()` and change the return value to a boolean, aligning it with similar functions used in `do_el0_undef()` like `try_emulate_mrs()`, and making its behaviour more obvious. Signed-off-by: Ada Couprie Diaz <ada.coupriediaz@arm.com> Tested-by: Luis Claudio R. Goncalves <lgoncalv@redhat.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by: Will Deacon <will@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20250707114109.35672-9-ada.coupriediaz@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2025-07-08  arm64: debug: split hardware breakpoint exception entry  (Ada Couprie Diaz)
Currently all debug exceptions share common entry code and are routed to `do_debug_exception()`, which calls dynamically-registered handlers for each specific debug exception. This is unfortunate as different debug exceptions have different entry handling requirements, and it would be better to handle these distinct requirements earlier. Hardware breakpoint exceptions are generated by the hardware after user configuration. As such, they can be exploited when training branch predictors outside of the userspace VA range: we still need to call `arm64_apply_bp_hardening()` if needed to mitigate against this attack. However, they do not need to handle the Cortex-A76 erratum #1463225 as it only applies to single stepping exceptions. They do not set an address in FAR_EL1 either; only hardware watchpoints do. As the hardware breakpoint handler only returns 0 and never triggers the call to `arm64_notify_die()`, we can call it directly from `entry-common.c`. Split the hardware breakpoint exception entry, and adjust the function signature and the handling of the Cortex-A76 erratum to fit the behaviour of the exception. Move the call to `arm64_apply_bp_hardening()` to `entry-common.c` so that we can do it as early as possible, and only for the exceptions coming from EL0, where it is needed. This is safe to do as it is `noinstr`, as are all the functions it may call. `el0_ia()` and `el0_pc()` already call it this way. Signed-off-by: Ada Couprie Diaz <ada.coupriediaz@arm.com> Tested-by: Luis Claudio R. Goncalves <lgoncalv@redhat.com> Reviewed-by: Will Deacon <will@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20250707114109.35672-8-ada.coupriediaz@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2025-07-08  arm64: entry: Add entry and exit functions for debug exceptions  (Ada Couprie Diaz)
Move the `debug_exception_enter()` and `debug_exception_exit()` functions from mm/fault.c, as they are needed to split the debug exceptions entry paths from the current unified one. Make them externally visible in include/asm/exception.h until the caller in mm/fault.c is cleaned up. Signed-off-by: Ada Couprie Diaz <ada.coupriediaz@arm.com> Tested-by: Luis Claudio R. Goncalves <lgoncalv@redhat.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by: Will Deacon <will@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20250707114109.35672-7-ada.coupriediaz@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2025-07-08  arm64: debug: remove break/step handler registration infrastructure  (Ada Couprie Diaz)
Remove all infrastructure for the dynamic registration previously used by software breakpoints and stepping handlers. Signed-off-by: Ada Couprie Diaz <ada.coupriediaz@arm.com> Tested-by: Luis Claudio R. Goncalves <lgoncalv@redhat.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by: Will Deacon <will@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20250707114109.35672-6-ada.coupriediaz@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2025-07-08  arm64: debug: call step handlers statically  (Ada Couprie Diaz)
Software stepping checks for the correct handler by iterating over a list of dynamically registered handlers and calling all of them until one handles the exception. This is the only generic way to handle software stepping handlers in arm64 as the exception does not provide an immediate that could be checked, contrary to software breakpoints. However, the registration mechanism is not exported and has only two current users : the KGDB stepping handler, and the uprobe single step handler. Given that one comes from user mode and the other from kernel mode, call the appropriate one by checking the source EL of the exception. Add a stand-in that returns DBG_HOOK_ERROR when the configuration options are not enabled. Remove `arch_init_uprobes()` as it is not useful anymore and is specific to arm64. Unify the naming of the handler to XXX_single_step_handler(), making it clear they are related. Signed-off-by: Ada Couprie Diaz <ada.coupriediaz@arm.com> Tested-by: Luis Claudio R. Goncalves <lgoncalv@redhat.com> Reviewed-by: Will Deacon <will@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20250707114109.35672-5-ada.coupriediaz@arm.com Signed-off-by: Will Deacon <will@kernel.org>
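The dispatch described above boils down to a small branch on the source EL in place of the old handler list, with build-time stand-ins when a user is not configured. A sketch of the idea (function names follow the XXX_single_step_handler() convention from the message and are otherwise illustrative; a similar stand-in would exist for KGDB):

    /* Sketch of the static single-step dispatch (illustrative, not verbatim). */
    #ifndef CONFIG_UPROBES
    static int uprobe_single_step_handler(struct pt_regs *regs, unsigned long esr)
    {
            return DBG_HOOK_ERROR;    /* stand-in when uprobes are disabled */
    }
    #endif

    static int call_step_hook(struct pt_regs *regs, unsigned long esr)
    {
            /* User-mode steps can only come from uprobes, kernel-mode from KGDB. */
            if (user_mode(regs))
                    return uprobe_single_step_handler(regs, esr);

            return kgdb_single_step_handler(regs, esr);
    }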
2025-07-08  arm64: debug: call software breakpoint handlers statically  (Ada Couprie Diaz)
Software breakpoints pass an immediate value in ESR ("comment") that can be used to call a specialized handler (KGDB, KASAN...). We do so in two different ways:
  - During early boot, `early_brk64` statically checks against known immediates and calls the corresponding handler.
  - During init, handlers are dynamically registered into a list. When called, the generic software breakpoint handler will iterate over the list to find the appropriate handler.
The dynamic registration does not provide any benefit here as it is not exported and all its uses are within the arm64 tree. It also depends on an RCU list, whose safe access currently relies on the non-preemptible state of `do_debug_exception`. Replace the list iteration logic in `call_break_hook` to call the breakpoint handlers statically if they are enabled, like in `early_brk64`. Expose the handlers in their respective headers to be reachable from `arch/arm64/kernel/debug-monitors.c` at link time. Unify the naming of the software breakpoint handlers to XXX_brk_handler(), making it clear they are related and to differentiate them from the hardware breakpoints. Signed-off-by: Ada Couprie Diaz <ada.coupriediaz@arm.com> Tested-by: Luis Claudio R. Goncalves <lgoncalv@redhat.com> Reviewed-by: Will Deacon <will@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20250707114109.35672-4-ada.coupriediaz@arm.com Signed-off-by: Will Deacon <will@kernel.org>
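In other words, the generic handler becomes a static dispatch on the BRK immediate rather than a walk of an RCU-protected list. A condensed sketch of that shape follows; the handler set shown is illustrative, and the real function covers more immediates (kprobes, kretprobes, the KASAN range, ...):

    /* Condensed sketch of a static call_break_hook() (illustrative). */
    static int call_break_hook(struct pt_regs *regs, unsigned long esr)
    {
            unsigned long comment = esr & ESR_ELx_BRK64_ISS_COMMENT_MASK;

            if (user_mode(regs))
                    return uprobe_brk_handler(regs, esr);

            switch (comment) {
            case BUG_BRK_IMM:
                    return bug_brk_handler(regs, esr);
            case KGDB_COMPILED_DBG_BRK_IMM:
                    return kgdb_compiled_brk_handler(regs, esr);
            /* ... other known immediates ... */
            }

            return DBG_HOOK_ERROR;
    }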
2025-07-08  arm64: refactor aarch32_break_handler()  (Ada Couprie Diaz)
`aarch32_break_handler()` is called in `do_el0_undef()` when we are trying to handle an exception whose Exception Syndrome is unknown. It checks if the instruction hit might be a 32-bit Arm break (be it A32 or T2), and sends a SIGTRAP to userspace if it is, so that it can be handled. However, this is badly represented in the naming of the function, and is not consistent with the other functions called with the same logic in `do_el0_undef()`. Rename it to `try_handle_aarch32_break()` and change the return value to a boolean to align with the logic of the other tentative handlers in `do_el0_undef()`, the previous error code being ignored anyway. Signed-off-by: Ada Couprie Diaz <ada.coupriediaz@arm.com> Tested-by: Luis Claudio R. Goncalves <lgoncalv@redhat.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20250707114109.35672-3-ada.coupriediaz@arm.com Signed-off-by: Will Deacon <will@kernel.org>
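For context, this makes the call site in `do_el0_undef()` read as a uniform chain of boolean 'try' helpers, roughly as sketched below (surrounding helpers elided, names illustrative):

    /* Sketch of the tentative-handler chain in do_el0_undef() (illustrative). */
    void do_el0_undef(struct pt_regs *regs, unsigned long esr)
    {
            u32 insn;

            /* A 32-bit break reported as "unknown"? Let it raise SIGTRAP. */
            if (try_handle_aarch32_break(regs))
                    return;

            if (!try_read_insn(regs, &insn))
                    return;

            if (try_emulate_mrs(regs, insn))
                    return;

            /* ... other try_emulate_*() helpers ... */

            force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0);
    }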
2025-07-08  arm64: debug: clean up single_step_handler logic  (Ada Couprie Diaz)
Remove the unnecessary boolean which always checks if the handler was found and return early instead. Signed-off-by: Ada Couprie Diaz <ada.coupriediaz@arm.com> Tested-by: Luis Claudio R. Goncalves <lgoncalv@redhat.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20250707114109.35672-2-ada.coupriediaz@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2025-07-08  arm64: dts: renesas: r9a09g057h44-rzv2h-evk: Enable serial NOR FLASH  (Lad Prabhakar)
Enable MT25QU512ABB8E12 FLASH connected to XSPI. Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Link: https://lore.kernel.org/20250704140823.163572-5-prabhakar.mahadev-lad.rj@bp.renesas.com Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
2025-07-08  arm64: dts: renesas: r9a09g056n48-rzv2n-evk: Enable serial NOR FLASH  (Lad Prabhakar)
Enable MT25QU512ABB8E12 FLASH connected to XSPI. Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Link: https://lore.kernel.org/20250704140823.163572-4-prabhakar.mahadev-lad.rj@bp.renesas.com Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
2025-07-08  arm64: dts: renesas: r9a09g057: Add XSPI node  (Lad Prabhakar)
Add XSPI node to RZ/V2H(P) ("R9A09G057") SoC DTSI. Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Link: https://lore.kernel.org/20250704140823.163572-3-prabhakar.mahadev-lad.rj@bp.renesas.com Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
2025-07-08  arm64: dts: renesas: r9a09g056: Add XSPI node  (Lad Prabhakar)
Add XSPI node to RZ/V2N ("R9A09G056") SoC DTSI. Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Link: https://lore.kernel.org/20250704140823.163572-2-prabhakar.mahadev-lad.rj@bp.renesas.com Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
2025-07-08  arm64: dts: renesas: r9a09g056n48-rzv2n-evk: Fix pinctrl node name for GBETH1  (Lad Prabhakar)
Rename the GBETH1 pinctrl node from "eth0" to "eth1" to avoid duplicate node names in the DT and correctly reflect the label "eth1_pins". Fixes: f111192baa80 ("arm64: dts: renesas: r9a09g056n48-rzv2n-evk: Enable GBETH") Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Link: https://lore.kernel.org/20250703235544.715433-3-prabhakar.mahadev-lad.rj@bp.renesas.com Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
2025-07-08  arm64: dts: renesas: r9a09g057h44-rzv2h-evk: Fix pinctrl node name for GBETH1  (Lad Prabhakar)
Rename the GBETH1 pinctrl node from "eth0" to "eth1" to avoid duplicate node names in the DT and correctly reflect the label "eth1_pins". Fixes: 802292ee27a7 ("arm64: dts: renesas: r9a09g057h44-rzv2h-evk: Enable GBETH") Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Link: https://lore.kernel.org/20250703235544.715433-2-prabhakar.mahadev-lad.rj@bp.renesas.com Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
2025-07-08  arm64: dts: renesas: r8a779g3-sparrow-hawk-fan-pwm: Add missing install target  (Niklas Söderlund)
The target to consider the dtbo file for installation is missing, add it. Fixes: a719915e76f2 ("arm64: dts: renesas: r8a779g3: Add Retronix R-Car V4H Sparrow Hawk board support") Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se> Reviewed-by: Marek Vasut <marek.vasut+renesas@mailbox.org> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com> Link: https://lore.kernel.org/20250701112612.3957799-2-niklas.soderlund+renesas@ragnatech.se Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
2025-07-08  arm64: dts: renesas: rzg3e-smarc-som: Enable eth{0-1} (GBETH) interfaces  (John Madieu)
Enable the Gigabit Ethernet interfaces (GBETH) populated on the RZ/G3E SMARC EVK. Signed-off-by: John Madieu <john.madieu.xa@bp.renesas.com> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Link: https://lore.kernel.org/20250702005706.1200059-5-john.madieu.xa@bp.renesas.com Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
2025-07-08  arm64: dts: renesas: r9a09g047e57-smarc: Add gpio keys  (Biju Das)
The RZ/G3E SMARC EVK has 3 user buttons called USER_SW1, USER_SW2 and USER_SW3, and a SLEEP button with NMI support. Add a node to the device tree to instantiate the gpio-keys driver for these buttons. The system can enter the STR state by pressing the SLEEP button, and wakeup from STR is done by pressing the power button. USER_SW{1,2,3} are configured as wakeup sources, so they can wake up the system during s2idle. Signed-off-by: Biju Das <biju.das.jz@bp.renesas.com> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Link: https://lore.kernel.org/20250702092755.70847-1-biju.das.jz@bp.renesas.com Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
2025-07-07  KVM: arm64: Expose new KVM cap for cacheable PFNMAP  (Ankit Agrawal)
Introduce a new KVM capability to expose to the userspace whether cacheable mapping of PFNMAP is supported. The ability to safely do the cacheable mapping of PFNMAP is contingent on S2FWB and ARM64_HAS_CACHE_DIC. S2FWB allows KVM to avoid flushing the D cache, ARM64_HAS_CACHE_DIC allows KVM to avoid flushing the icache and turns icache_inval_pou() into a NOP. The cap would be false if those requirements are missing and is checked by making use of kvm_arch_supports_cacheable_pfnmap. This capability would allow userspace to discover the support. It could for instance be used by userspace to prevent live-migration across FWB and non-FWB hosts. CC: Catalin Marinas <catalin.marinas@arm.com> CC: Jason Gunthorpe <jgg@nvidia.com> CC: Oliver Upton <oliver.upton@linux.dev> CC: David Hildenbrand <david@redhat.com> Suggested-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Tested-by: Donald Dutile <ddutile@redhat.com> Signed-off-by: Ankit Agrawal <ankita@nvidia.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20250705071717.5062-7-ankita@nvidia.com Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
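The wiring for such a capability is typically a one-line entry in the extension check; a hypothetical sketch is shown below (the capability constant is an assumed name, while kvm_arch_supports_cacheable_pfnmap() is the helper named in the message):

    /* Hypothetical sketch of the capability wiring in kvm_vm_ioctl_check_extension(). */
    int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
    {
            int r;

            switch (ext) {
            case KVM_CAP_ARM_CACHEABLE_PFNMAP_SUPPORTED:    /* assumed cap name */
                    r = kvm_arch_supports_cacheable_pfnmap();
                    break;
            /* ... existing capabilities elided ... */
            default:
                    r = 0;
                    break;
            }

            return r;
    }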
2025-07-07  KVM: arm64: Allow cacheable stage 2 mapping using VMA flags  (Ankit Agrawal)
KVM currently forces non-cacheable memory attributes (either Normal-NC or Device-nGnRE) for a region based on pfn_is_map_memory(), i.e. whether or not the kernel has a cacheable alias for it. This is necessary in situations where KVM needs to perform CMOs on the region but is unnecessarily restrictive when hardware obviates the need for CMOs. KVM doesn't need to perform any CMOs on hardware with FEAT_S2FWB and CTR_EL0.DIC. As luck would have it, there are implementations in the wild that need to map regions of a device with cacheable attributes to function properly. An example of this is Nvidia's Grace Hopper/Blackwell systems where GPU memory is interchangeable with DDR and retains properties such as cacheability, unaligned accesses, atomics and handling of executable faults. Of course, for this to work in a VM the GPU memory needs to have a cacheable mapping at stage-2. Allow cacheable stage-2 mappings to be created on supporting hardware when the VMA has cacheable memory attributes. Check these preconditions during memslot creation (in addition to fault handling) to potentially 'fail-fast' as a courtesy to userspace. CC: Oliver Upton <oliver.upton@linux.dev> CC: Sean Christopherson <seanjc@google.com> Suggested-by: Jason Gunthorpe <jgg@nvidia.com> Suggested-by: Catalin Marinas <catalin.marinas@arm.com> Suggested-by: David Hildenbrand <david@redhat.com> Tested-by: Donald Dutile <ddutile@redhat.com> Signed-off-by: Ankit Agrawal <ankita@nvidia.com> Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20250705071717.5062-6-ankita@nvidia.com [ Oliver: refine changelog, squash kvm_supports_cacheable_pfnmap() patch ] Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-07-07  KVM: arm64: Block cacheable PFNMAP mapping  (Ankit Agrawal)
This fixes a security bug due to mismatched attributes between the S1 and S2 mappings. Currently, it is possible for a region to be cacheable in the userspace VMA, but mapped non-cached in S2. This creates a potential issue where the VMM may sanitize cacheable memory across VMs using cacheable stores, ensuring it is zeroed. However, if KVM subsequently assigns this memory to a VM as uncached, the VM could end up accessing stale, non-zeroed data from a previous VM, leading to unintended data exposure. This is a security risk. Block such mismatched-attribute cases by returning -EINVAL when userspace tries to map a PFNMAP region as cacheable. Only allow NORMAL_NC and DEVICE_*. CC: Oliver Upton <oliver.upton@linux.dev> CC: Catalin Marinas <catalin.marinas@arm.com> CC: Sean Christopherson <seanjc@google.com> Suggested-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: David Hildenbrand <david@redhat.com> Tested-by: Donald Dutile <ddutile@redhat.com> Signed-off-by: Ankit Agrawal <ankita@nvidia.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20250705071717.5062-4-ankita@nvidia.com Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
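Conceptually, the check rejects PFNMAP VMAs whose userspace mapping is cacheable, because their stage-2 mapping would have to be non-cacheable; a hypothetical sketch of that test follows (vma_pgprot_is_cacheable() is an illustrative stand-in, not an existing kernel helper):

    /*
     * Hypothetical sketch of the mismatch check. vma_pgprot_is_cacheable()
     * stands in for however the userspace memory type is derived from
     * vma->vm_page_prot.
     */
    static int check_pfnmap_attributes(struct vm_area_struct *vma)
    {
            if (!(vma->vm_flags & VM_PFNMAP))
                    return 0;

            /* Only Normal-NC and Device-* userspace aliases are acceptable. */
            if (vma_pgprot_is_cacheable(vma->vm_page_prot))
                    return -EINVAL;

            return 0;
    }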
2025-07-07  KVM: arm64: Assume non-PFNMAP/MIXEDMAP VMAs can be mapped cacheable  (Ankit Agrawal)
Despite its name, kvm_is_device_pfn() is actually used to determine if a given PFN has a kernel mapping that can be used to perform cache maintenance, as it calls pfn_is_map_memory() internally. Expand the helper into its single callsite and further condition the check on the VMA having either VM_PFNMAP or VM_MIXEDMAP set. VMAs that set neither of these flags must always contain Normal, struct page backed memory with valid aliases in the kernel address space. Suggested-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: David Hildenbrand <david@redhat.com> Tested-by: Donald Dutile <ddutile@redhat.com> Signed-off-by: Ankit Agrawal <ankita@nvidia.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20250705071717.5062-3-ankita@nvidia.com [ Oliver: fixed typos, refined changelog ] Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
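The expanded check described above can be pictured as a small helper (a sketch using the s2_force_noncacheable naming from the related patch, not the verbatim hunk):

    /*
     * Sketch: only VM_PFNMAP/VM_MIXEDMAP VMAs may contain PFNs without a
     * cacheable kernel alias; everything else is struct-page-backed memory.
     */
    static bool s2_force_noncacheable_for(struct vm_area_struct *vma, kvm_pfn_t pfn)
    {
            if (!(vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP)))
                    return false;

            return !pfn_is_map_memory(pfn);
    }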
2025-07-07  KVM: arm64: Rename the device variable to s2_force_noncacheable  (Ankit Agrawal)
To perform cache maintenance on a region of memory, KVM/arm64 relies on that region having a cacheable alias in the kernel's address space which can be used with CMO instructions. The 'device' variable is somewhat of a misnomer, as it actually indicates whether or not the stage-2 alias is allowed to have cacheable memory attributes. The resulting stage-2 memory attributes are further modified by VM_ALLOW_ANY_UNCACHED, selecting between Normal-NC or Device-nGnRE depending on what the endpoint supports. Rename it to s2_force_noncacheable such that its purpose is a bit more obvious. CC: Catalin Marinas <catalin.marinas@arm.com> Suggested-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: David Hildenbrand <david@redhat.com> Tested-by: Donald Dutile <ddutile@redhat.com> Signed-off-by: Ankit Agrawal <ankita@nvidia.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20250705071717.5062-2-ankita@nvidia.com [ Oliver: addressed typos, wound up rewriting changelog ] Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-07-07  arm64: defconfig: Enable Tegra HSP and BPMP  (Thierry Reding)
Selecting the IVC, HSP and BPMP drivers via Kconfig is problematic because it can create conflicting configurations. Instead, enable them in the default configuration. Link: https://lore.kernel.org/r/20250506133118.1011777-12-thierry.reding@gmail.com Signed-off-by: Thierry Reding <treding@nvidia.com>
2025-07-07  arm64: dts: mediatek: mt8395-genio-1200-evk: Add MT6359 PMIC key support  (Louis-Alexis Eyraud)
In the mt8395-genio-1200-evk devicetree file, add a sub-node to the PMIC node for the mt6359-keys compatible, to add support for the Power and Home MT6359 PMIC keys. Signed-off-by: Louis-Alexis Eyraud <louisalexis.eyraud@collabora.com> Link: https://lore.kernel.org/r/20250703-add-mt6359-pmic-keys-support-v1-3-21a4d2774e34@collabora.com Signed-off-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
2025-07-07  arm64: dts: mediatek: mt8390-genio-common: Add Home MT6359 PMIC key support  (Louis-Alexis Eyraud)
Add support for the Home MT6359 PMIC key in the mt8390-genio-common dtsi file. Signed-off-by: Louis-Alexis Eyraud <louisalexis.eyraud@collabora.com> Link: https://lore.kernel.org/r/20250703-add-mt6359-pmic-keys-support-v1-2-21a4d2774e34@collabora.com Signed-off-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
2025-07-07  arm64: dts: mediatek: mt7988a-bpi-r4: add gpio leds  (Frank Wunderlich)
The Bananapi R4 has a green and a blue LED which can be switched by GPIO. The green LED indicates the running state, so it defaults to on. The green LED also shares a pin with the EEPROM write-protect, where switching the LED off allows writing to the EEPROM. Signed-off-by: Frank Wunderlich <frank-w@public-files.de> Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Link: https://lore.kernel.org/r/20250706132213.20412-14-linux@fw-web.de [Angelo: Fixed missing dt-bindings/leds/common.h header inclusion] Signed-off-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>