2022-11-18arm64: move on_thread_stack() to <asm/stacktrace.h>Mark Rutland
Currently on_thread_stack() is defined in <asm/processor.h>, depending upon definitions from <asm/stacktrace.h> despite this header not being included. This ends up being fragile, and any user of on_thread_stack() must include both <asm/processor.h> and <asm/stacktrace.h>. We organised things this way due to header dependencies back in commit: 0b3e336601b82c6a ("arm64: Add support for STACKLEAK gcc plugin") ... but now that we no longer use current_top_of_stack(), and given that stackleak includes <asm/stacktrace.h> via <linux/stackleak.h>, we no longer need the definition to live in <asm/processor.h>. Move on_thread_stack() to <asm/stacktrace.h>, where all its dependencies are guaranteed to be defined. This requires having arm64's irq.c explicitly include <asm/stacktrace.h>, and I've taken the opportunity to sort the includes, which were slightly out of order. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Kees Cook <keescook@chromium.org> Cc: Will Deacon <will@kernel.org> Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221117120902.3974163-3-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2022-11-18arm64: remove current_top_of_stack()Mark Rutland
We no longer use current_top_of_stack() on arm64, so it can be removed. We introduced current_top_of_stack() for STACKLEAK in commit: 0b3e336601b82c6a ("arm64: Add support for STACKLEAK gcc plugin") ... then we figured out the intended semantics were unclear, and reworked it in commit: e85094c31ddb794a ("arm64: stackleak: fix current_top_of_stack()") ... then we removed the only user in commit: 0cfa2ccd285d98ad ("stackleak: rework stack high bound handling") Given that it's no longer used, and it's very easy to misuse, this patch removes current_top_of_stack(). For the moment, on_thread_stack() is left where it is as moving it will change some header dependencies. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Kees Cook <keescook@chromium.org> Cc: Will Deacon <will@kernel.org> Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221117120902.3974163-2-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2022-11-18kselftest/arm64: Use preferred form for predicate load/storesMark Brown
The preferred form of the str/ldr for predicate registers with an immediate of zero is to omit the zero, and the clang built in assembler rejects the zero immediate. Drop the immediate. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221117114130.687261-1-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
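For illustration, a hedged sketch of the change in the selftest assembly; the operands are illustrative rather than taken from the patch:

  // Rejected by clang's integrated assembler (explicit zero immediate):
  str     p0, [x0, #0, MUL VL]
  // Preferred form, with the zero immediate omitted:
  str     p0, [x0]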
2022-11-18arm64: alternatives: make apply_alternatives_vdso() staticMark Rutland
We define and use apply_alternatives_vdso() within alternative.c, and don't provide a prototype in a header. There's no need for it to be visible outside of alternative.c, so mark it as static. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Joey Gouly <joey.gouly@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20221117131650.4056636-1-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2022-11-18arm64: kdump: Support crashkernel=X fall back to reserve region above DMA zonesZhen Lei
For crashkernel=X without '@offset', select a region within the DMA zones first, and fall back to a region above the DMA zones if that fails. This allows users to use the same configuration on multiple platforms. Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com> Acked-by: Baoquan He <bhe@redhat.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20221116121044.1690-3-thunder.leizhen@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
2022-11-18arm64: kdump: Provide default size when crashkernel=Y,low is not specifiedZhen Lei
Try to allocate at least 128 MiB of low memory automatically for the case where crashkernel=,high is explicitly specified while crashkernel=,low is omitted. This allows users to focus more on the high memory requirements of their business rather than the low memory requirements of the crash kernel booting. Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com> Acked-by: Baoquan He <bhe@redhat.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20221116121044.1690-2-thunder.leizhen@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
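A hedged usage sketch of the combined behaviour of the two patches above (the 2G figure is illustrative): booting with

  crashkernel=2G,high

now reserves the main region above the DMA zones when no suitable region exists within them, plus a default 128 MiB low-memory region, since no crashkernel=,low value was given.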
2022-11-18arm64/mm: Drop idmap_pg_end[] declarationAnshuman Khandual
idmap_pg_end[] is not used anywhere, hence just drop its declaration. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20221116084302.320685-1-anshuman.khandual@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2022-11-18arm64/mm: Drop redundant BUG_ON(!pgtable_alloc)Anshuman Khandual
__create_pgd_mapping_locked() expects a page allocator used while mapping a virtual range. This page allocator function propagates down the call chain, while building intermediate levels in the page table. Passed page allocator is a necessary ingredient required to build the page table but its presence can be asserted just once in the very beginning rather than in all the down stream functions. This consolidates BUG_ON(!pgtable_alloc) checks just in a single place i.e __create_pgd_mapping_locked(). Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20221118053102.500216-1-anshuman.khandual@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2022-11-18ftrace: arm64: move from REGS to ARGSMark Rutland
This commit replaces arm64's support for FTRACE_WITH_REGS with support for FTRACE_WITH_ARGS. This removes some overhead and complexity, and removes some latent issues with inconsistent presentation of struct pt_regs (which can only be reliably saved/restored at exception boundaries). FTRACE_WITH_REGS has been supported on arm64 since commit: 3b23e4991fb66f6d ("arm64: implement ftrace with regs") As noted in the commit message, the major reasons for implementing FTRACE_WITH_REGS were: (1) To make it possible to use the ftrace graph tracer with pointer authentication, where it's necessary to snapshot/manipulate the LR before it is signed by the instrumented function. (2) To make it possible to implement LIVEPATCH in future, where we need to hook function entry before an instrumented function manipulates the stack or argument registers. Practically speaking, we need to preserve the argument/return registers, PC, LR, and SP. Neither of these need a struct pt_regs, and only require the set of registers which are live at function call/return boundaries. Our calling convention is defined by "Procedure Call Standard for the Arm® 64-bit Architecture (AArch64)" (AKA "AAPCS64"), which can currently be found at: https://github.com/ARM-software/abi-aa/blob/main/aapcs64/aapcs64.rst Per AAPCS64, all function call argument and return values are held in the following GPRs: * X0 - X7 : parameter / result registers * X8 : indirect result location register * SP : stack pointer Additionally, at function call boundaries, the following GPRs hold context/return information: * X29 : frame pointer (AKA FP) * X30 : link register (AKA LR) ... and for ftrace we need to capture the instrumented address: * PC : program counter No other GPRs are relevant, as none of the other registers hold parameters or return values: * X9 - X17 : temporaries, may be clobbered * X18 : shadow call stack pointer (or temporary) * X19 - X28 : callee saved This patch implements FTRACE_WITH_ARGS for arm64, only saving/restoring the minimal set of registers necessary. This is always sufficient to manipulate control flow (e.g. for live-patching) or to manipulate function arguments and return values. This reduces the necessary stack usage from 336 bytes for pt_regs down to 112 bytes for ftrace_regs + 32 bytes for two frame records, freeing up 188 bytes. This could be reduced further with changes to the unwinder. As there is no longer a need to save different sets of registers for different features, we no longer need distinct `ftrace_caller` and `ftrace_regs_caller` trampolines. This allows the trampoline assembly to be simpler, and simplifies code which previously had to handle the two trampolines. I've tested this with the ftrace selftests, where there are no unexpected failures. Co-developed-by: Florent Revest <revest@chromium.org> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Florent Revest <revest@chromium.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Will Deacon <will@kernel.org> Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org> Link: https://lore.kernel.org/r/20221103170520.931305-5-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
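As an illustration of the register set described above, a minimal sketch of such an ftrace_regs layout follows; the field names and padding are assumptions for illustration rather than the exact arm64 definition, though 14 unsigned longs do total the 112 bytes quoted in the message:

  struct ftrace_regs {
      /* x0-x7: parameters/results; x8: indirect result location */
      unsigned long regs[9];
      unsigned long __unused;   /* pad to an even number of registers */
      unsigned long fp;         /* x29: frame pointer */
      unsigned long lr;         /* x30: link register */
      unsigned long sp;         /* stack pointer */
      unsigned long pc;         /* instrumented address */
  };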
2022-11-18ftrace: abstract DYNAMIC_FTRACE_WITH_ARGS accessesMark Rutland
In subsequent patches we'll arrange for architectures to have an ftrace_regs which is entirely distinct from pt_regs. In preparation for this, we need to minimize the use of pt_regs to where strictly necessary in the core ftrace code. This patch adds new ftrace_regs_{get,set}_*() helpers which can be used to manipulate ftrace_regs. When CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS=y, these can always be used on any ftrace_regs, and when CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS=n these can be used when regs are available. A new ftrace_regs_has_args(fregs) helper is added which code can use to check when these are usable. Co-developed-by: Florent Revest <revest@chromium.org> Signed-off-by: Florent Revest <revest@chromium.org> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Steven Rostedt <rostedt@goodmis.org> Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org> Link: https://lore.kernel.org/r/20221103170520.931305-4-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
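A hedged sketch of two such helpers, assuming ftrace_regs wraps a struct pt_regs as in the CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS=y case:

  static __always_inline unsigned long
  ftrace_regs_get_instruction_pointer(struct ftrace_regs *fregs)
  {
      /* defer to the existing pt_regs accessor */
      return instruction_pointer(&fregs->regs);
  }

  static __always_inline void
  ftrace_regs_set_return_value(struct ftrace_regs *fregs, unsigned long ret)
  {
      regs_set_return_value(&fregs->regs, ret);
  }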
2022-11-18ftrace: rename ftrace_instruction_pointer_set() -> ftrace_regs_set_instruction_pointer()Mark Rutland
In subsequent patches we'll add a set of ftrace_regs_{get,set}_*() helpers. In preparation, this patch renames ftrace_instruction_pointer_set() to ftrace_regs_set_instruction_pointer(). There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Florent Revest <revest@chromium.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Steven Rostedt <rostedt@goodmis.org> Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org> Link: https://lore.kernel.org/r/20221103170520.931305-3-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2022-11-18ftrace: pass fregs to arch_ftrace_set_direct_caller()Mark Rutland
In subsequent patches we'll arrange for architectures to have an ftrace_regs which is entirely distinct from pt_regs. In preparation for this, we need to minimize the use of pt_regs to where strictly necessary in the core ftrace code. This patch changes the prototype of arch_ftrace_set_direct_caller() to take ftrace_regs rather than pt_regs, and moves the extraction of the pt_regs into arch_ftrace_set_direct_caller(). On x86, arch_ftrace_set_direct_caller() can be used even when CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS=n, and <linux/ftrace.h> defines struct ftrace_regs. Due to this, it's necessary to define arch_ftrace_set_direct_caller() as a macro to avoid using an incomplete type. I've also moved the body of arch_ftrace_set_direct_caller() after the CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS=y definition of struct ftrace_regs. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Florent Revest <revest@chromium.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Steven Rostedt <rostedt@goodmis.org> Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org> Link: https://lore.kernel.org/r/20221103170520.931305-2-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2022-11-18perf: arm_cspmu: Fix module cyclic dependencyBesar Wicaksono
Build on arm64 allmodconfig failed with: | depmod: ERROR: Cycle detected: arm_cspmu -> nvidia_cspmu -> arm_cspmu | depmod: ERROR: Found 2 modules in dependency cycles! The arm_cspmu.c provides standard functions to operate the PMU and the vendor code provides vendor specific attributes. Both need to be built as single kernel module. Update the makefile to compile sources under arm_cspmu into one module. Signed-off-by: Besar Wicaksono <bwicaksono@nvidia.com> Reviewed-and-Tested-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20221116203952.34168-1-bwicaksono@nvidia.com Signed-off-by: Will Deacon <will@kernel.org>
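A hedged sketch of the kind of Makefile change described; the composite object name and Kconfig symbol are assumptions:

  # Link the framework and vendor code into a single module so neither
  # .ko needs symbols exported by the other:
  arm_cspmu_module-y := arm_cspmu.o nvidia_cspmu.o
  obj-$(CONFIG_ARM_CORESIGHT_PMU_ARCH_SYSTEM_PMU) += arm_cspmu_module.o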
2022-11-18perf: arm_cspmu: Fix build failure on x86_64Besar Wicaksono
Building on x86_64 allmodconfig failed: | drivers/perf/arm_cspmu/arm_cspmu.c:1114:29: error: implicit | declaration of function 'get_acpi_id_for_cpu' get_acpi_id_for_cpu() is an arm64-specific helper. Fix by adding an ARM64 dependency. Signed-off-by: Besar Wicaksono <bwicaksono@nvidia.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20221116190455.55651-1-bwicaksono@nvidia.com Signed-off-by: Will Deacon <will@kernel.org>
2022-11-15perf: arm_cspmu: Fix modular builds due to missing MODULE_LICENSE()sWill Deacon
Building an arm64 allmodconfig target results in the following failure from modpost: | ERROR: modpost: missing MODULE_LICENSE() in drivers/perf/arm_cspmu/arm_cspmu.o | ERROR: modpost: missing MODULE_LICENSE() in drivers/perf/arm_cspmu/nvidia_cspmu.o | make[1]: *** [scripts/Makefile.modpost:126: Module.symvers] Error 1 | make: *** [Makefile:1944: modpost] Error 2 Add the missing MODULE_LICENSE() macros, following the license of the source files and symbol exports. Signed-off-by: Will Deacon <will@kernel.org>
2022-11-15perf: arm_cspmu: Add support for NVIDIA SCF and MCF attributeBesar Wicaksono
Add support for NVIDIA System Cache Fabric (SCF) and Memory Control Fabric (MCF) PMU attributes for CoreSight PMU implementation in NVIDIA devices. Acked-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Besar Wicaksono <bwicaksono@nvidia.com> Link: https://lore.kernel.org/r/20221111222330.48602-3-bwicaksono@nvidia.com Signed-off-by: Will Deacon <will@kernel.org>
2022-11-15perf: arm_cspmu: Add support for ARM CoreSight PMU driverBesar Wicaksono
Add support for ARM CoreSight PMU driver framework and interfaces. The driver provides a generic implementation to operate uncore PMUs based on the ARM CoreSight PMU architecture. The driver also provides an interface to get vendor/implementation-specific information, for example event attributes and formatting. The specifications used in this implementation can be found below: * ACPI Arm Performance Monitoring Unit table: https://developer.arm.com/documentation/den0117/latest * ARM Coresight PMU architecture: https://developer.arm.com/documentation/ihi0091/latest Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Besar Wicaksono <bwicaksono@nvidia.com> Link: https://lore.kernel.org/r/20221111222330.48602-2-bwicaksono@nvidia.com Signed-off-by: Will Deacon <will@kernel.org>
2022-11-15perf/smmuv3: Fix hotplug callback leak in arm_smmu_pmu_init()Shang XiaoJing
arm_smmu_pmu_init() won't remove the callback added by cpuhp_setup_state_multi() when platform_driver_register() failed. Remove the callback by cpuhp_remove_multi_state() in fail path. Similar to the handling of arm_ccn_init() in commit 26242b330093 ("bus: arm-ccn: Prevent hotplug callback leak") Fixes: 7d839b4b9e00 ("perf/smmuv3: Add arm64 smmuv3 pmu driver") Signed-off-by: Shang XiaoJing <shangxiaojing@huawei.com> Reviewed-by: Punit Agrawal <punit.agrawal@bytedance.com> Link: https://lore.kernel.org/r/20221115115540.6245-3-shangxiaojing@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
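This fix and the similar ones below follow a single pattern; a hedged, simplified sketch of the init path after the fix, using this driver's symbol names:

  static int __init arm_smmu_pmu_init(void)
  {
      int ret;

      cpuhp_state_num = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN,
                                                "perf/arm/pmcg:online",
                                                NULL, smmu_pmu_offline_cpu);
      if (cpuhp_state_num < 0)
          return cpuhp_state_num;

      ret = platform_driver_register(&smmu_pmu_driver);
      if (ret)
          /* don't leak the hotplug callback on failure */
          cpuhp_remove_multi_state(cpuhp_state_num);

      return ret;
  }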
2022-11-15perf/arm_dmc620: Fix hotplug callback leak in dmc620_pmu_init()Shang XiaoJing
dmc620_pmu_init() won't remove the callback added by cpuhp_setup_state_multi() when platform_driver_register() failed. Remove the callback by cpuhp_remove_multi_state() in fail path. Similar to the handling of arm_ccn_init() in commit 26242b330093 ("bus: arm-ccn: Prevent hotplug callback leak") Fixes: 53c218da220c ("driver/perf: Add PMU driver for the ARM DMC-620 memory controller") Signed-off-by: Shang XiaoJing <shangxiaojing@huawei.com> Reviewed-by: Punit Agrawal <punit.agrawal@bytedance.com> Link: https://lore.kernel.org/r/20221115115540.6245-2-shangxiaojing@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
2022-11-15arm64: armv8_deprecated: rework deprecated instruction handlingMark Rutland
Support for deprecated instructions can be enabled or disabled at runtime. To handle this, the code in armv8_deprecated.c registers and unregisters undef_hooks, and makes cross CPU calls to configure HW support. This is rather complicated, and the synchronization required to make this safe ends up serializing the handling of instructions which have been trapped. This patch simplifies the deprecated instruction handling by removing the dynamic registration and unregistration, and changing the trap handling code to determine whether a handler should be invoked. This removes the need for dynamic list management, and simplifies the locking requirements, making it possible to handle trapped instructions entirely in parallel. Where changing the emulation state requires a cross-call, this is serialized by locally disabling interrupts, ensuring that the CPU is not left in an inconsistent state. To simplify sysctl management, each insn_emulation is given a separate sysctl table, permitting these to be registered separately. The core sysctl code will iterate over all of these when walking sysfs. I've tested this with userspace programs which use each of the deprecated instructions, and I've concurrently modified the support level for each of the features back-and-forth between HW and emulated to check that there are no spurious SIGILLs sent to userspace when the support level is changed. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Joey Gouly <joey.gouly@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20221019144123.612388-10-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2022-11-15arm64: armv8_deprecated: move aarch32 helper earlierMark Rutland
Subsequent patches will rework the logic in armv8_deprecated.c. In preparation for subsequent changes, this patch moves some shared logic earlier in the file. This will make subsequent diffs simpler and easier to read. At the same time, drop the `__kprobes` annotation from aarch32_check_condition(), as this is only used for traps from compat userspace, and has no risk of recursion within kprobes. As this is the last kprobes annotation in armve8_deprecated.c, we no longer need to include <asm/kprobes.h>. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Joey Gouly <joey.gouly@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20221019144123.612388-9-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2022-11-15arm64: armv8_deprecated: move emulation functionsMark Rutland
Subsequent patches will rework the logic in armv8_deprecated.c. In preparation for subsequent changes, this patch moves the emulation logic earlier in the file, and moves the infrastructure later in the file. This will make subsequent diffs simpler and easier to read. This is purely a move. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Joey Gouly <joey.gouly@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20221019144123.612388-8-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2022-11-15arm64: armv8_deprecated: fold ops into insn_emulationMark Rutland
The code for emulating deprecated instructions has two related structures: struct insn_emulation_ops and struct insn_emulation, where each struct insn_emulation_ops is associated 1-1 with a struct insn_emulation. It would be simpler to combine the two into a single structure, removing the need for (unconditional) dynamic allocation at boot time, and simplifying some runtime pointer chasing. This patch merges the two structures together. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Joey Gouly <joey.gouly@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20221019144123.612388-7-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2022-11-15arm64: rework EL0 MRS emulationMark Rutland
On CPUs without FEAT_IDST, ID register emulation is slower than it needs to be, as all threads contend for the same lock to perform the emulation. This patch reworks the emulation to avoid this unnecessary contention. On CPUs with FEAT_IDST (which is mandatory from ARMv8.4 onwards), EL0 accesses to ID registers result in a SYS trap, and emulation of these is handled with a sys64_hook. These hooks are statically allocated, and no locking is required to iterate through the hooks and perform the emulation, allowing emulation to occur in parallel with no contention. On CPUs without FEAT_IDST, EL0 accesses to ID registers result in an UNDEFINED exception, and emulation of these accesses is handled with an undef_hook. When an EL0 MRS instruction is trapped to EL1, the kernel finds the relevant handler by iterating through all of the undef_hooks, requiring undef_lock to be held during this lookup. This locking is only required to safely traverse the list of undef_hooks (as it can be concurrently modified), and the actual emulation of the MRS does not require any mutual exclusion. This locking is an unfortunate bottleneck, especially given that MRS emulation is enabled unconditionally and is never disabled. This patch reworks the non-FEAT_IDST MRS emulation logic so that it can be invoked directly from do_el0_undef(). This removes the bottleneck, allowing MRS traps to be handled entirely in parallel, and is a stepping stone to making all of the undef_hooks lock-free. I've tested this in a 64-vCPU VM on a 64-CPU ThunderX2 host, with a benchmark which spawns a number of threads which each try to read ID_AA64ISAR0_EL1 1000000 times. This is vastly more contention than will ever be seen in realistic usage, but clearly demonstrates the removal of the bottleneck:

  | Threads || Time (seconds)                      |
  |         || Before           || After           |
  |         || Real  | System   || Real  | System  |
  |---------++-------+----------++-------+---------|
  |       1 ||  0.29 |     0.20 ||  0.24 |    0.12 |
  |       2 ||  0.35 |     0.51 ||  0.23 |    0.27 |
  |       4 ||  1.08 |     3.87 ||  0.24 |    0.56 |
  |       8 ||  4.31 |    33.60 ||  0.24 |    1.11 |
  |      16 ||  9.47 |   149.39 ||  0.23 |    2.15 |
  |      32 || 19.07 |   605.27 ||  0.24 |    4.38 |
  |      64 || 65.40 |  3609.09 ||  0.33 |   11.27 |

Aside from the speedup, there should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Joey Gouly <joey.gouly@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20221019144123.612388-6-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
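A hedged, simplified sketch of the resulting dispatch; user_insn_read() and try_emulate_mrs() are names from this patch series, and the hooks reworked elsewhere in the series are omitted:

  static void do_el0_undef(struct pt_regs *regs, unsigned long esr)
  {
      u32 insn;

      if (user_insn_read(regs, &insn))
          goto out_err;

      /* no hook list and no lock: call the MRS emulation directly */
      if (try_emulate_mrs(regs, insn))
          return;

  out_err:
      force_signal_inject(SIGILL, ILL_ILLOPC, instruction_pointer(regs), 0);
  }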
2022-11-15arm64: factor insn read out of call_undef_hook()Mark Rutland
Subsequent patches will rework EL0 UNDEF handling, removing the need for struct undef_hook and call_undef_hook. In preparation for those changes, this patch factors the logic for reading user instructions out of call_undef_hook() and into a new user_insn_read() helper, matching the style of the existing aarch64_insn_read() helper used for reading kernel instructions. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Joey Gouly <joey.gouly@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20221019144123.612388-5-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2022-11-15arm64: factor out EL1 SSBS emulation hookMark Rutland
Currently call_undef_hook() is used to handle UNDEFINED exceptions from EL0 and EL1. As support for deprecated instructions may be enabled independently, the handlers for individual instructions are organised as a linked list of struct undef_hook which can be manipulated dynamically. As this can be manipulated dynamically, the list is protected with a raw_spinlock which must be acquired when handling UNDEFINED exceptions or when manipulating the list of handlers. This locking is unfortunate as it serialises handling of UNDEFINED exceptions, and requires RCU to be enabled for lockdep, requiring the use of RCU_NONIDLE() in the resume path of cpu_suspend() since commit: a2c42bbabbe260b7 ("arm64: spectre: Prevent lockdep splat on v4 mitigation enable path") The list of UNDEFINED handlers largely consists of handlers for exceptions taken from EL0, and the only handler for exceptions taken from EL1 handles `MSR SSBS, #imm` on CPUs which feature PSTATE.SSBS but lack the corresponding MSR (Immediate) instruction. Other than this we never expect to take an UNDEFINED exception from EL1 in normal operation. This patch reworks do_el1_undef() to invoke the EL1 SSBS handler directly, relegating call_undef_hook() to only handle EL0 UNDEFs. This removes redundant work to iterate the list for EL1 UNDEFs, and removes the need for locking, permitting EL1 UNDEFs to be handled in parallel without contention. The RCU_NONIDLE() call in cpu_suspend() will be removed in a subsequent patch, as there are other potential issues with the use of instrumentable code and RCU in the CPU suspend code. I've tested this by forcing the detection of SSBS on a CPU that doesn't have it, and verifying that the try_emulate_el1_ssbs() callback is invoked. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Joey Gouly <joey.gouly@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20221019144123.612388-4-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
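A hedged sketch of what such a direct EL1 SSBS handler can look like; instr_is_ssbs_imm() is a hypothetical encoding check for `MSR SSBS, #imm`, while the remaining helpers exist in arm64 today:

  static bool try_emulate_el1_ssbs(struct pt_regs *regs, u32 insn)
  {
      /* hypothetical: match the "MSR SSBS, #imm" encoding */
      if (!instr_is_ssbs_imm(insn))
          return false;

      if (insn & BIT(PSTATE_Imm_shift))
          regs->pstate |= PSR_SSBS_BIT;
      else
          regs->pstate &= ~PSR_SSBS_BIT;

      arm64_skip_faulting_instruction(regs, 4);
      return true;
  }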
2022-11-15arm64: split EL0/EL1 UNDEF handlersMark Rutland
In general, exceptions taken from EL1 need to be handled separately from exceptions taken from EL0, as the logic to handle the two cases can be significantly divergent, and exceptions taken from EL1 typically have more stringent requirements on locking and instrumentation. Subsequent patches will rework the way EL1 UNDEFs are handled in order to address longstanding soundness issues with instrumentation and RCU. In preparation for that rework, this patch splits the existing do_undefinstr() handler into separate do_el0_undef() and do_el1_undef() handlers. Prior to this patch, do_undefinstr() was marked with NOKPROBE_SYMBOL(), preventing instrumentation via kprobes. However, do_undefinstr() invokes other code which can be instrumented, and: * For UNDEFINED exceptions taken from EL0, there is no risk of recursion within kprobes. Therefore it is safe for do_el0_undef to be instrumented with kprobes, and it does not need to be marked with NOKPROBE_SYMBOL(). * For UNDEFINED exceptions taken from EL1, either: (a) The exception has been taken when manipulating SSBS; these cases are limited and do not occur within code that can be invoked recursively via kprobes. Hence, in these cases instrumentation with kprobes is benign. (b) The exception has been taken for an unknown reason, as other than manipulating SSBS we do not expect to take UNDEFINED exceptions from EL1. Any handling of these exceptions is best-effort. ... and in either case, marking do_el1_undef() with NOKPROBE_SYMBOL() isn't sufficient to prevent recursion via kprobes as functions it calls (including die()) are instrumentable via kprobes. Hence, it's not worthwhile to mark do_el1_undef() with NOKPROBE_SYMBOL(). The same applies to do_el1_bti() and do_el1_fpac(), so their NOKPROBE_SYMBOL() annotations are also removed. Aside from the new instrumentability, there should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Joey Gouly <joey.gouly@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20221019144123.612388-3-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2022-11-15arm64: allow kprobes on EL0 handlersMark Rutland
Currently do_sysinstr() and do_cp15instr() are marked with NOKPROBE_SYMBOL(). However, these are only called for exceptions taken from EL0, and there is no risk of recursion in kprobes, so this is not necessary. Remove the NOKPROBE_SYMBOL() annotation, and rename the two functions to more clearly indicate that these are solely for exceptions taken from EL0, better matching the names used by the lower level entry points in entry-common.c. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Joey Gouly <joey.gouly@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20221019144123.612388-2-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2022-11-15drivers: perf: marvell_cn10k: Fix hotplug callback leak in tad_pmu_init()Yuan Can
tad_pmu_init() won't remove the callback added by cpuhp_setup_state_multi() when platform_driver_register() failed. Remove the callback by cpuhp_remove_multi_state() in fail path. Similar to the handling of arm_ccn_init() in commit 26242b330093 ("bus: arm-ccn: Prevent hotplug callback leak") Fixes: 036a7584bede ("drivers: perf: Add LLC-TAD perf counter support") Signed-off-by: Yuan Can <yuancan@huawei.com> Link: https://lore.kernel.org/r/20221115070207.32634-3-yuancan@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
2022-11-15perf: arm_dsu: Fix hotplug callback leak in dsu_pmu_init()Yuan Can
dsu_pmu_init() won't remove the callback added by cpuhp_setup_state_multi() when platform_driver_register() failed. Remove the callback by cpuhp_remove_multi_state() in fail path. Similar to the handling of arm_ccn_init() in commit 26242b330093 ("bus: arm-ccn: Prevent hotplug callback leak") Fixes: 7520fa99246d ("perf: ARM DynamIQ Shared Unit PMU support") Signed-off-by: Yuan Can <yuancan@huawei.com> Acked-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20221115070207.32634-2-yuancan@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
2022-11-15arm64: mm: kfence: only handle translation faultsMark Rutland
Alexander noted that KFENCE only expects to handle faults from invalid page table entries (i.e. translation faults), but arm64's fault handling logic will call kfence_handle_page_fault() for other types of faults, including alignment faults caused by unaligned atomics. This has the unfortunate property of causing those other faults to be reported as "KFENCE: use-after-free", which is misleading and hinders debugging. Fix this by only forwarding unhandled translation faults to the KFENCE code, similar to what x86 does already. Alexander has verified that this passes all the tests in the KFENCE test suite and avoids bogus reports on misaligned atomics. Link: https://lore.kernel.org/all/20221102081620.1465154-1-zhongbaisong@huawei.com/ Fixes: 840b23986344 ("arm64, kfence: enable KFENCE for ARM64") Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Alexander Potapenko <glider@google.com> Tested-by: Alexander Potapenko <glider@google.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marco Elver <elver@google.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20221114104411.2853040-1-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
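A hedged sketch of the check described, gating the KFENCE callback on the fault status code from the ESR; the fault path would then call kfence_handle_page_fault() only when this predicate is true:

  static bool is_translation_fault(unsigned long esr)
  {
      /* FSC 0b0001xx: translation fault at some level */
      return (esr & ESR_ELx_FSC_TYPE) == ESR_ELx_FSC_FAULT;
  }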
2022-11-15ACPI: APMT: Fix kerneldoc and indentationBesar Wicaksono
Add missing kerneldoc and fix alignment on one of the arguments of apmt_add_platform_device function. Signed-off-by: Besar Wicaksono <bwicaksono@nvidia.com> Link: https://lore.kernel.org/r/20221111234323.16182-1-bwicaksono@nvidia.com [will: Fixed up additional indentation issue] Signed-off-by: Will Deacon <will@kernel.org>
2022-11-15arm64: insn: always inline hint generationMark Rutland
All users of aarch64_insn_gen_hint() (e.g. aarch64_insn_gen_nop()) pass a constant argument and generate a constant value. Some of those users are noinstr code (e.g. for alternatives patching). For noinstr code it is necessary to either inline these functions or to ensure the out-of-line versions are noinstr. Since in all cases these are generating a constant, make them __always_inline. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Joey Gouly <joey.gouly@arm.com> Cc: Will Deacon <will@kernel.org> Reviewed-by: Joey Gouly <joey.gouly@arm.com> Link: https://lore.kernel.org/r/20221114135928.3000571-5-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
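A hedged sketch of the change for the helpers named above:

  static __always_inline u32 aarch64_insn_gen_hint(enum aarch64_insn_hint_cr_op op)
  {
      /* constant-foldable: HINT opcode ORed with the CRm:op2 operand */
      return aarch64_insn_get_hint_value() | op;
  }

  static __always_inline u32 aarch64_insn_gen_nop(void)
  {
      return aarch64_insn_gen_hint(AARCH64_INSN_HINT_NOP);
  }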
2022-11-15arm64: insn: simplify insn group identificationMark Rutland
The only code which needs to check for an entire instruction group is the aarch64_insn_is_steppable() helper function used by kprobes, which must not be instrumented, and only needs to check for the "Branch, exception generation and system instructions" class. Currently we have an out-of-line helper in insn.c which must be marked as __kprobes, which indexes a table with some bits extracted from the instruction. In aarch64_insn_is_steppable() we then need to compare the result with an expected enum value. It would be simpler to have a predicate for this, as with the other aarch64_insn_is_*() helpers, which would be always inlined to prevent inadvertent instrumentation, and would permit better code generation. This patch adds a predicate function for this instruction group using the existing __AARCH64_INSN_FUNCS() helpers, and removes the existing out-of-line helper. As the only class we currently care about is the branch+exception+sys class, I have only added helpers for this, and left the other classes unimplemented for now. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Joey Gouly <joey.gouly@arm.com> Cc: Will Deacon <will@kernel.org> Reviewed-by: Joey Gouly <joey.gouly@arm.com> Link: https://lore.kernel.org/r/20221114135928.3000571-4-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
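A hedged sketch of such a predicate; the branch/exception-generation/system class occupies op0 == 101x, i.e. bits [28:26] == 0b101:

  static __always_inline bool aarch64_insn_is_class_branch_sys(u32 code)
  {
      return (code & 0x1c000000) == 0x14000000;
  }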
2022-11-15arm64: insn: always inline predicatesMark Rutland
We have a number of aarch64_insn_*() predicates which are used in code which is not instrumentation safe (e.g. alternatives patching, kprobes). Some of those are marked with __kprobes, but most are not, and are implemented out-of-line in insn.c. This patch moves the predicates to insn.h and marks them with __always_inline. This is ensures that they will respect the instrumentation requirements of their caller which they will be inlined into. At the same time, I've formatted each of the functions consistently as a list, to make them easier to read and update in future. Other than preventing unwanted instrumentation, there should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Joey Gouly <joey.gouly@arm.com> Cc: Will Deacon <will@kernel.org> Reviewed-by: Joey Gouly <joey.gouly@arm.com> Link: https://lore.kernel.org/r/20221114135928.3000571-3-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2022-11-15arm64: insn: remove aarch64_insn_gen_prefetch()Mark Rutland
There are no users of aarch64_insn_gen_prefetch(), which encodes a PRFM (immediate) with a hard-coded offset of 0. Remove it for now; we can always restore it with tests if we need it in future. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Joey Gouly <joey.gouly@arm.com> Cc: Will Deacon <will@kernel.org> Reviewed-by: Joey Gouly <joey.gouly@arm.com> Link: https://lore.kernel.org/r/20221114135928.3000571-2-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2022-11-15arm64: make is_ttbrX_addr() noinstr-safeMark Rutland
We use is_ttbr0_addr() in noinstr code, but as it's only marked as inline, it's theoretically possible for the compiler to place it out-of-line and instrument it, which would be problematic. Mark is_ttbr0_addr() as __always_inline such that it can safely be used from noinstr code. For consistency, do the same to is_ttbr1_addr(). Note that while is_ttbr1_addr() calls arch_kasan_reset_tag(), this is a macro (and its callees are either macros or __always_inline), so there is not a risk of transient instrumentation. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20221114144042.3001140-1-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
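A hedged sketch of the two predicates as described:

  static __always_inline bool is_ttbr0_addr(unsigned long addr)
  {
      /* entry assembly clears tags for TTBR0 addresses */
      return addr < TASK_SIZE;
  }

  static __always_inline bool is_ttbr1_addr(unsigned long addr)
  {
      /* TTBR1 addresses may have a tag */
      return arch_kasan_reset_tag(addr) >= PAGE_OFFSET;
  }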
2022-11-14ACPI: Enable FPDT on arm64Jeremy Linton
FPDT provides some boot timing records useful for analyzing parts of the UEFI boot stack. Given the existing code works on arm64, and allows reading the values without utilizing /dev/mem it seems like a good idea to turn it on. Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Acked-by: Sudeep Holla <sudeep.holla@arm.com> Link: https://lore.kernel.org/r/20221109174720.203723-1-jeremy.linton@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2022-11-14arm64/signal: Document our convention for choosing magic numbersMark Brown
Szabolcs Nagy has pointed out that most of our signal frame magic numbers are chosen to be meaningful ASCII when dumped to aid manual parsing. This seems sensible since it might help someone parsing things out, let's document it so people implementing new signal contexts are aware of it and are more likely to follow it. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221031192450.826159-1-broonie@kernel.org [will: Fixed typo and tweaked wording] Signed-off-by: Will Deacon <will@kernel.org>
2022-11-14arm64: atomics: lse: remove stale dependency on JUMP_LABELMark Rutland
Currently CONFIG_ARM64_USE_LSE_ATOMICS depends upon CONFIG_JUMP_LABEL, as the inline atomics were indirected with a static branch. However, since commit: 21fb26bfb01ffe0d ("arm64: alternatives: add alternative_has_feature_*()") ... we use an alternative_branch (which is always available) rather than a static branch, and hence the dependency is unnecessary. Remove the stale dependency, along with the stale include. This will allow the use of LSE atomics in kernels built with CONFIG_JUMP_LABEL=n, and reduces the risk of circular header dependencies via <asm/lse.h>. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20221114125424.2998268-1-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
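A hedged sketch of the alternative-based check that makes the JUMP_LABEL dependency unnecessary:

  static __always_inline bool system_uses_lse_atomics(void)
  {
      /* patched via alternatives, not a static branch */
      return alternative_has_feature_likely(ARM64_HAS_LSE_ATOMICS);
  }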
2022-11-14kselftest/arm64: fix array_size.cocci warningwangkailong@jari.cn
Fix the following coccicheck warnings: tools/testing/selftests/arm64/mte/check_mmap_options.c:64:24-25: WARNING: Use ARRAY_SIZE tools/testing/selftests/arm64/mte/check_mmap_options.c:66:20-21: WARNING: Use ARRAY_SIZE tools/testing/selftests/arm64/mte/check_mmap_options.c:135:25-26: WARNING: Use ARRAY_SIZE tools/testing/selftests/arm64/mte/check_mmap_options.c:96:25-26: WARNING: Use ARRAY_SIZE tools/testing/selftests/arm64/mte/check_mmap_options.c:190:24-25: WARNING: Use ARRAY_SIZE Signed-off-by: KaiLong Wang <wangkailong@jari.cn> Link: https://lore.kernel.org/r/777ce8ba.12e.184705d4211.Coremail.wangkailong@jari.cn Signed-off-by: Will Deacon <will@kernel.org>
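A hedged sketch of the pattern being fixed (variable names are illustrative):

  /* before, open-coded element count flagged by coccicheck: */
  int items = sizeof(sizes) / sizeof(int);
  /* after: */
  int items = ARRAY_SIZE(sizes);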
2022-11-09arm64/mm: Simplify and document pte_to_phys() for 52 bit addressesAnshuman Khandual
The assembly definition of pte_to_phys() performs multiple bit-field transformations to derive the physical address embedded inside a page table entry. Unlike its C counterpart, i.e. __pte_to_phys(), pte_to_phys() is not very apparent. This patch simplifies these operations via a new macro, PTE_ADDR_HIGH_SHIFT, indicating how far the pte-encoded higher address bits need to be left-shifted. While here, this also updates __pte_to_phys() and __phys_to_pte_val(). Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Mark Brown <broonie@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Suggested-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20221107141753.2938621-1-anshuman.khandual@arm.com Signed-off-by: Will Deacon <will@kernel.org>
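A hedged sketch of the updated C macros consistent with the description, for the 52-bit physical address case:

  #define __pte_to_phys(pte)                                      \
      ((pte_val(pte) & PTE_ADDR_LOW) |                            \
       ((pte_val(pte) & PTE_ADDR_HIGH) << PTE_ADDR_HIGH_SHIFT))
  #define __phys_to_pte_val(phys)                                 \
      (((phys) | ((phys) >> PTE_ADDR_HIGH_SHIFT)) & PTE_ADDR_MASK)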
2022-11-09arm64: paravirt: remove conduit check in has_pv_steal_clockUsama Arif
arm_smccc_1_1_invoke(), which is called later in the function, will return failure if there's no conduit (or the conduit predates SMCCC 1.1), hence the check is unnecessary. Suggested-by: Steven Price <steven.price@arm.com> Signed-off-by: Usama Arif <usama.arif@bytedance.com> Reviewed-by: Steven Price <steven.price@arm.com> Link: https://lore.kernel.org/r/20221104061659.4116508-1-usama.arif@bytedance.com Signed-off-by: Will Deacon <will@kernel.org>
2022-11-09arm64: implement dynamic shadow call stack for ClangArd Biesheuvel
Implement dynamic shadow call stack support on Clang, by parsing the unwind tables at init time to locate all occurrences of PACIASP/AUTIASP instructions, and replacing them with the shadow call stack push and pop instructions, respectively. This is useful because the overhead of the shadow call stack is difficult to justify on hardware that implements pointer authentication (PAC), and given that the PAC instructions are executed as NOPs on hardware that doesn't, we can just replace them without breaking anything. As PACIASP/AUTIASP are guaranteed to be paired with respect to manipulations of the return address, replacing them 1:1 with shadow call stack pushes and pops is guaranteed to result in the desired behavior. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Sami Tolvanen <samitolvanen@google.com> Tested-by: Sami Tolvanen <samitolvanen@google.com> Link: https://lore.kernel.org/r/20221027155908.1940624-4-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
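A hedged sketch of the substitution performed; the opcodes follow from the standard A64 encodings, though the patch itself may structure this differently:

  /* PACIASP -> shadow call stack push of the LR via x18 */
  #define SCS_PUSH    0xf800865e    /* str x30, [x18], #8 */
  /* AUTIASP -> shadow call stack pop */
  #define SCS_POP     0xf85f8e5e    /* ldr x30, [x18, #-8]! */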
2022-11-09scs: add support for dynamic shadow call stacksArd Biesheuvel
In order to allow arches to use code patching to conditionally emit the shadow stack pushes and pops, rather than always taking the performance hit even on CPUs that implement alternatives such as stack pointer authentication on arm64, add a Kconfig symbol that can be set by the arch to omit the SCS codegen itself, without otherwise affecting how support code for SCS and compiler options (for register reservation, for instance) are emitted. Also, add a static key and some plumbing to omit the allocation of shadow call stack for dynamic SCS configurations if SCS is disabled at runtime. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Sami Tolvanen <samitolvanen@google.com> Tested-by: Sami Tolvanen <samitolvanen@google.com> Link: https://lore.kernel.org/r/20221027155908.1940624-3-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
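A hedged sketch of the static key plumbing described:

  DECLARE_STATIC_KEY_FALSE(dynamic_scs_enabled);

  static inline bool scs_is_dynamic(void)
  {
      if (!IS_ENABLED(CONFIG_DYNAMIC_SCS))
          return false;
      /* runtime opt-out: skip SCS setup when disabled */
      return static_branch_likely(&dynamic_scs_enabled);
  }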
2022-11-09arm64: unwind: add asynchronous unwind tables to kernel and modulesArd Biesheuvel
Enable asynchronous unwind table generation for both the core kernel as well as modules, and emit the resulting .eh_frame sections as init code so we can use the unwind directives for code patching at boot or module load time. This will be used by dynamic shadow call stack support, which will rely on code patching rather than compiler codegen to emit the shadow call stack push and pop instructions. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Reviewed-by: Sami Tolvanen <samitolvanen@google.com> Tested-by: Sami Tolvanen <samitolvanen@google.com> Link: https://lore.kernel.org/r/20221027155908.1940624-2-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-11-09kselftest/arm64: Add SVE 2.1 to hwcap testMark Brown
Add coverage for FEAT_SVE2p1. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20221017152520.1039165-7-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-11-09arm64/hwcap: Add support for SVE 2.1Mark Brown
FEAT_SVE2p1 introduces a number of new SVE instructions. Since there is no new architectural state added kernel support is simply a new hwcap which lets userspace know that the feature is supported. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20221017152520.1039165-6-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
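A hedged sketch of how userspace can test for the new hwcap:

  #include <stdbool.h>
  #include <sys/auxv.h>
  #include <asm/hwcap.h>

  static bool have_sve2p1(void)
  {
      return getauxval(AT_HWCAP2) & HWCAP2_SVE2P1;
  }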
2022-11-09kselftest/arm64: Add FEAT_RPRFM to the hwcap testMark Brown
Since the newly added instruction is in the HINT space, we can't reasonably test for it actually being present. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20221017152520.1039165-5-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-11-09arm64/hwcap: Add support for FEAT_RPRFMMark Brown
FEAT_RPRFM adds a new range prefetch hint within the existing PRFM space for range prefetch hinting. Add a new hwcap to allow userspace to discover support for the new instruction. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20221017152520.1039165-4-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>