path: root/arch/arm64/include
Age  Commit message  Author
2024-10-23  arm64: rsi: Add RSI definitions  (Suzuki K Poulose)
The RMM (Realm Management Monitor) provides functionality that can be accessed by a realm guest through SMC (Realm Services Interface) calls. The SMC definitions are based on DEN0137[1] version 1.0-rel0. [1] https://developer.arm.com/documentation/den0137/1-0rel0/ Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Gavin Shan <gshan@redhat.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Steven Price <steven.price@arm.com> Link: https://lore.kernel.org/r/20241017131434.40935-2-steven.price@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
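For readers unfamiliar with how such SMC interfaces are wired up, below is a minimal sketch of composing a function ID for a standard-owner SMC64 fast call using the existing arm-smccc helpers; the function number 0x190 is a placeholder assumption, not a value taken from the patch.
| #include <linux/arm-smccc.h>
|
| /* Hypothetical RSI-style call ID: SMC64 fast call, standard owner. */
| #define SMC_RSI_EXAMPLE_VERSION					\
| 	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,			\
| 			   ARM_SMCCC_SMC_64,			\
| 			   ARM_SMCCC_OWNER_STANDARD,		\
| 			   0x190)	/* placeholder function number */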
2024-10-22  arm64: preserve pt_regs::stackframe during exec*()  (Mark Rutland)
When performing an exec*(), there's a transient period before the return to userspace where any stacktrace will result in a warning triggered by kunwind_next_frame_record_meta() encountering a struct frame_record_meta with an unknown type. This can be seen fairly reliably by enabling KASAN or KFENCE, e.g. | WARNING: CPU: 3 PID: 143 at arch/arm64/kernel/stacktrace.c:223 arch_stack_walk+0x264/0x3b0 | Modules linked in: | CPU: 3 UID: 0 PID: 143 Comm: login Not tainted 6.12.0-rc2-00010-g0f0b9a3f6a50 #1 | Hardware name: linux,dummy-virt (DT) | pstate: 814000c5 (Nzcv daIF +PAN -UAO -TCO +DIT -SSBS BTYPE=--) | pc : arch_stack_walk+0x264/0x3b0 | lr : arch_stack_walk+0x1ec/0x3b0 | sp : ffff80008060b970 | x29: ffff80008060ba10 x28: fff00000051133c0 x27: 0000000000000000 | x26: 0000000000000000 x25: 0000000000000000 x24: fff000007fe84000 | x23: ffff9d1b3c940af0 x22: 0000000000000000 x21: fff00000051133c0 | x20: ffff80008060ba50 x19: ffff9d1b3c9408e0 x18: 0000000000000014 | x17: 000000006d50da47 x16: 000000008e3f265e x15: fff0000004e8bf40 | x14: 0000ffffc5e50e48 x13: 000000000000000f x12: 0000ffffc5e50fed | x11: 000000000000001f x10: 000018007f8bffff x9 : 0000000000000000 | x8 : ffff80008060b9c0 x7 : ffff80008060bfd8 x6 : ffff80008060ba80 | x5 : ffff80008060ba00 x4 : ffff80008060c000 x3 : ffff80008060bff0 | x2 : 0000000000000018 x1 : ffff80008060bfd8 x0 : 0000000000000000 | Call trace: | arch_stack_walk+0x264/0x3b0 (P) | arch_stack_walk+0x1ec/0x3b0 (L) | stack_trace_save+0x50/0x80 | metadata_update_state+0x98/0xa0 | kfence_guarded_free+0xec/0x2c4 | __kfence_free+0x50/0x100 | kmem_cache_free+0x1a4/0x37c | putname+0x9c/0xc0 | do_execveat_common.isra.0+0xf0/0x1e4 | __arm64_sys_execve+0x40/0x60 | invoke_syscall+0x48/0x104 | el0_svc_common.constprop.0+0x40/0xe0 | do_el0_svc+0x1c/0x28 | el0_svc+0x34/0xe0 | el0t_64_sync_handler+0x120/0x12c | el0t_64_sync+0x198/0x19c This happens because start_thread_common() zeroes the entirety of current_pt_regs(), including pt_regs::stackframe::type, changing this from FRAME_META_TYPE_FINAL to 0 and making the final record invalid. The stacktrace code will reject this until the next return to userspace, where a subsequent exception entry will reinitialize the type to FRAME_META_TYPE_FINAL. This zeroing wasn't a problem prior to commit: c2c6b27b5aa14fa2 ("arm64: stacktrace: unwind exception boundaries") ... as before that commit the stacktrace code only expected the final pt_regs::stackframe to contain zeroes, which was unchanged by start_thread_common(). A stacktrace could occur at any time, either due to instrumentation or an exception, and so start_thread_common() must ensure that pt_regs::stackframe is always valid. Fix this by changing the way start_thread_common() zeroes and reinitializes the pt_regs fields: * The '{regs,pc,pstate}' fields are initialized in one go via a struct assignment to the user_regs, with start_thread() and compat_start_thread() modified to pass 'pstate' into start_thread_common(). * The 'sp' and 'compat_sp' fields are zeroed by the struct assignment in start_thread_common(), and subsequently overwritten in start_thread() and compat_start_thread respectively, matching existing behaviour. * The 'syscallno' field is implicitly preserved while the 'orig_x0' field is explicitly zeroed, maintaining existing ABI. * The 'pmr' field is explicitly initialized, as necessary for an exec*() from a kernel thread, matching existing behaviour. 
* The 'stackframe' field is implicitly preserved, with a new comment and some assertions to ensure we don't accidentally break this in future. * All other fields are implicitly preserved, and should have no functional impact: - 'sdei_ttbr1' is only used for SDEI exception entry/exit, and we never exec*() inside an SDEI handler. - 'lockdep_hardirqs' and 'exit_rcu' are only used for EL1 exception entry/exit, and we never exec*() inside an EL1 exception handler. While updating compat_start_thread() to pass 'pstate' into start_thread_common(), I've also updated the logic to remove the ifdeffery, replacing: | #ifdef __AARCH64EB__ | regs->pstate |= PSR_AA32_E_BIT; | #endif ... with: | if (IS_ENABLED(CONFIG_CPU_BIG_ENDIAN)) | pstate |= PSR_AA32_E_BIT; ... which should be functionally equivalent, and matches our preferred code style. Fixes: c2c6b27b5aa1 ("arm64: stacktrace: unwind exception boundaries") Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Mark Brown <broonie@kernel.org> Cc: Miroslav Benes <mbenes@suse.cz> Cc: Puranjay Mohan <puranjay12@gmail.com> Cc: Will Deacon <will@kernel.org> Fixes: c2c6b27b5aa1 ("arm64: stacktrace: unwind exception boundaries") Tested-by: Puranjay Mohan <puranjay12@gmail.com> Reviewed-by: Puranjay Mohan <puranjay12@gmail.com> Link: https://lore.kernel.org/r/20241021164456.2275285-1-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
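A rough sketch of the reworked initialization described above, assuming the post-rename 'pmr' field from earlier in this series; the exact field list and assertions in the real patch may differ.
| static void start_thread_common(struct pt_regs *regs, unsigned long pc,
| 				 unsigned long pstate)
| {
| 	/*
| 	 * Assign the user register state in one go instead of zeroing all
| 	 * of pt_regs, so that pt_regs::stackframe (and the other
| 	 * kernel-managed fields) are preserved across exec*().
| 	 */
| 	regs->user_regs = (struct user_pt_regs) {
| 		.pc	= pc,
| 		.pstate	= pstate,
| 	};
| 	regs->orig_x0 = 0;
|
| 	if (system_uses_irq_prio_masking())
| 		regs->pmr = GIC_PRIO_IRQON;
| }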
2024-10-21  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds)
Pull kvm fixes from Paolo Bonzini: "ARM64: - Fix the guest view of the ID registers, making the relevant fields writable from userspace (affecting ID_AA64DFR0_EL1 and ID_AA64PFR1_EL1) - Correctly expose S1PIE to guests, fixing a regression introduced in 6.12-rc1 with the S1POE support - Fix the recycling of stage-2 shadow MMUs by tracking the context (are we allowed to block or not) as well as the recycling state - Address a couple of issues with the vgic when userspace misconfigures the emulation, resulting in various splats. Headaches courtesy of our Syzkaller friends - Stop wasting space in the HYP idmap, as we are dangerously close to the 4kB limit, and this has already exploded in -next - Fix another race in vgic_init() - Fix a UBSAN error when faking the cache topology with MTE enabled RISCV: - RISCV: KVM: use raw_spinlock for critical section in imsic x86: - A bandaid for lack of XCR0 setup in selftests, which causes trouble if the compiler is configured to have x86-64-v3 (with AVX) as the default ISA. Proper XCR0 setup will come in the next merge window. - Fix an issue where KVM would not ignore low bits of the nested CR3 and potentially leak up to 31 bytes out of the guest memory's bounds - Fix a case in which an out-of-date cached value for the segments could be returned by KVM_GET_SREGS. - More cleanups for KVM_X86_QUIRK_SLOT_ZAP_ALL - Override MTRR state for KVM confidential guests, making it WB by default as is already the case for Hyper-V guests. Generic: - Remove a couple of unused functions" * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (27 commits) RISCV: KVM: use raw_spinlock for critical section in imsic KVM: selftests: Fix out-of-bounds reads in CPUID test's array lookups KVM: selftests: x86: Avoid using SSE/AVX instructions KVM: nSVM: Ignore nCR3[4:0] when loading PDPTEs from memory KVM: VMX: reset the segment cache after segment init in vmx_vcpu_reset() KVM: x86: Clean up documentation for KVM_X86_QUIRK_SLOT_ZAP_ALL KVM: x86/mmu: Add lockdep assert to enforce safe usage of kvm_unmap_gfn_range() KVM: x86/mmu: Zap only SPs that shadow gPTEs when deleting memslot x86/kvm: Override default caching mode for SEV-SNP and TDX KVM: Remove unused kvm_vcpu_gfn_to_pfn_atomic KVM: Remove unused kvm_vcpu_gfn_to_pfn KVM: arm64: Ensure vgic_ready() is ordered against MMIO registration KVM: arm64: vgic: Don't check for vgic_ready() when setting NR_IRQS KVM: arm64: Fix shift-out-of-bounds bug KVM: arm64: Shave a few bytes from the EL2 idmap code KVM: arm64: Don't eagerly teardown the vgic on init error KVM: arm64: Expose S1PIE to guests KVM: arm64: nv: Clarify safety of allowing TLBI unmaps to reschedule KVM: arm64: nv: Punt stage-2 recycling to a vCPU request KVM: arm64: nv: Do not block when unmapping stage-2 if disallowed ...
2024-10-17  arm64: Support AT_HWCAP3  (Mark Brown)
We have filled all 64 bits of AT_HWCAP2 so in order to support discovery of further features provide the framework to use the already defined AT_HWCAP3 for further CPU features. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20241004-arm64-elf-hwcap3-v2-2-799d1daad8b0@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
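From userspace the new word is read like the existing ones, via getauxval(); a small sketch follows (the fallback value of 29 for AT_HWCAP3 is an assumption for older libc headers).
| #include <stdio.h>
| #include <sys/auxv.h>
|
| #ifndef AT_HWCAP3
| #define AT_HWCAP3 29	/* assumed value; provided by recent uapi headers */
| #endif
|
| int main(void)
| {
| 	/* getauxval() returns 0 if the kernel does not provide the entry. */
| 	unsigned long hwcap3 = getauxval(AT_HWCAP3);
|
| 	printf("AT_HWCAP3: %#lx\n", hwcap3);
| 	return 0;
| }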
2024-10-17  arm64: stacktrace: unwind exception boundaries  (Mark Rutland)
When arm64's stack unwinder encounters an exception boundary, it uses the pt_regs::stackframe created by the entry code, which has a copy of the PC and FP at the time the exception was taken. The unwinder doesn't know anything about pt_regs, and reports the PC from the stackframe, but does not report the LR. The LR is only guaranteed to contain the return address at function call boundaries, and can be used as a scratch register at other times, so the LR at an exception boundary may or may not be a legitimate return address. It would be useful to report the LR value regardless, as it can be helpful when debugging, and in future it will be helpful for reliable stacktrace support. This patch changes the way we unwind across exception boundaries, allowing both the PC and LR to be reported. The entry code creates a frame_record_meta structure embedded within pt_regs, which the unwinder uses to find the pt_regs. The unwinder can then extract pt_regs::pc and pt_regs::lr as two separate unwind steps before continuing with a regular walk of frame records. When a PC is unwound from pt_regs::lr, dump_backtrace() will log this with an "L" marker so that it can be identified easily. For example, an unwind across an exception boundary will appear as follows: | el1h_64_irq+0x6c/0x70 | _raw_spin_unlock_irqrestore+0x10/0x60 (P) | __aarch64_insn_write+0x6c/0x90 (L) | aarch64_insn_patch_text_nosync+0x28/0x80 ... with a (P) entry for pt_regs::pc, and an (L) entry for pt_regs:lr. Note that the LR may be stale at the point of the exception, for example, shortly after a return: | el1h_64_irq+0x6c/0x70 | default_idle_call+0x34/0x180 (P) | default_idle_call+0x28/0x180 (L) | do_idle+0x204/0x268 ... where the LR points a few instructions before the current PC. This plays nicely with all the other unwind metadata tracking. With the ftrace_graph profiler enabled globally, and kretprobes installed on generic_handle_domain_irq() and do_interrupt_handler(), a backtrace triggered by magic-sysrq + L reports: | Call trace: | show_stack+0x20/0x40 (CF) | dump_stack_lvl+0x60/0x80 (F) | dump_stack+0x18/0x28 | nmi_cpu_backtrace+0xfc/0x140 | nmi_trigger_cpumask_backtrace+0x1c8/0x200 | arch_trigger_cpumask_backtrace+0x20/0x40 | sysrq_handle_showallcpus+0x24/0x38 (F) | __handle_sysrq+0xa8/0x1b0 (F) | handle_sysrq+0x38/0x50 (F) | pl011_int+0x460/0x5a8 (F) | __handle_irq_event_percpu+0x60/0x220 (F) | handle_irq_event+0x54/0xc0 (F) | handle_fasteoi_irq+0xa8/0x1d0 (F) | generic_handle_domain_irq+0x34/0x58 (F) | gic_handle_irq+0x54/0x140 (FK) | call_on_irq_stack+0x24/0x58 (F) | do_interrupt_handler+0x88/0xa0 | el1_interrupt+0x34/0x68 (FK) | el1h_64_irq_handler+0x18/0x28 | el1h_64_irq+0x6c/0x70 | default_idle_call+0x34/0x180 (P) | default_idle_call+0x28/0x180 (L) | do_idle+0x204/0x268 | cpu_startup_entry+0x3c/0x50 (F) | rest_init+0xe4/0xf0 | start_kernel+0x744/0x750 | __primary_switched+0x88/0x98 Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Mark Brown <broonie@kernel.org> Reviewed-by: Miroslav Benes <mbenes@suse.cz> Reviewed-by: Puranjay Mohan <puranjay12@gmail.com> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Josh Poimboeuf <jpoimboe@kernel.org> Cc: Kalesh Singh <kaleshsingh@google.com> Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20241017092538.1859841-11-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
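In outline, the metadata that the entry code embeds in pt_regs looks roughly like the sketch below; names other than FRAME_META_TYPE_FINAL (which appears elsewhere in this log) are assumptions on my part.
| /*
|  * Embedded within pt_regs so the unwinder can recognise and decode
|  * exception boundaries; builds on the common struct frame_record
|  * introduced later in this series.
|  */
| struct frame_record_meta {
| 	struct frame_record record;
| 	u64 type;	/* e.g. FRAME_META_TYPE_FINAL or a pt_regs marker */
| };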
2024-10-17  arm64: stacktrace: split unwind_consume_stack()  (Mark Rutland)
When unwinding stacks, we use unwind_consume_stack() to both find whether an object (e.g. a frame record) is on an accessible stack *and* to update the stack boundaries. This works fine today since we only care about one type of object which does not overlap other objects. In subsequent patches we'll want to check whether an object (e.g. a frame record) is on the stack and follow this up by accessing a larger object containing the first (e.g. a pt_regs with an embedded frame record). To make that pattern easier to implement, this patch reworks unwind_find_next_stack() and unwind_consume_stack() so that the former can be used to check if an object is on any accessible stack, and the latter is purely used to update the stack boundaries. As unwind_find_next_stack() is modified to also check the stack currently being unwound, it is renamed to unwind_find_stack(). There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Mark Brown <broonie@kernel.org> Reviewed-by: Miroslav Benes <mbenes@suse.cz> Reviewed-by: Puranjay Mohan <puranjay12@gmail.com> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Josh Poimboeuf <jpoimboe@kernel.org> Cc: Kalesh Singh <kaleshsingh@google.com> Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20241017092538.1859841-10-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-10-17  arm64: use a common struct frame_record  (Mark Rutland)
Currently the signal handling code has its own struct frame_record, the definition of struct pt_regs open-codes a frame record as an array, and the kernel unwinder hard-codes frame record offsets. Move to a common struct frame_record that can be used throughout the kernel. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Mark Brown <broonie@kernel.org> Reviewed-by: Miroslav Benes <mbenes@suse.cz> Reviewed-by: Puranjay Mohan <puranjay12@gmail.com> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Josh Poimboeuf <jpoimboe@kernel.org> Cc: Kalesh Singh <kaleshsingh@google.com> Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20241017092538.1859841-6-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
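The shared definition is small; a sketch of its likely shape (the two 64-bit values laid out as an AAPCS64 frame record):
| /* A frame record: saved frame pointer and link register, in stack order. */
| struct frame_record {
| 	u64 fp;
| 	u64 lr;
| };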
2024-10-17  arm64: pt_regs: swap 'unused' and 'pmr' fields  (Mark Rutland)
In subsequent patches we'll want to add an additional u64 to struct pt_regs. To make space, this patch swaps the 'unused' and 'pmr' fields, as the 'pmr' value only requires bits[7:0] and can safely fit into a u32, which frees up a 64-bit unused field. The 'lockdep_hardirqs' and 'exit_rcu' fields should eventually be moved out of pt_regs and managed locally within entry-common.c, so I've left those as-is for the moment. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Mark Brown <broonie@kernel.org> Reviewed-by: Miroslav Benes <mbenes@suse.cz> Reviewed-by: Puranjay Mohan <puranjay12@gmail.com> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Josh Poimboeuf <jpoimboe@kernel.org> Cc: Kalesh Singh <kaleshsingh@google.com> Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20241017092538.1859841-5-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-10-17  arm64: pt_regs: rename "pmr_save" -> "pmr"  (Mark Rutland)
The pt_regs::pmr_save field is weirdly named relative to all other pt_regs fields, with a '_save' suffix that doesn't make anything clearer and only leads to more typing to access the field. Remove the '_save' suffix. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Mark Brown <broonie@kernel.org> Reviewed-by: Miroslav Benes <mbenes@suse.cz> Reviewed-by: Puranjay Mohan <puranjay12@gmail.com> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Josh Poimboeuf <jpoimboe@kernel.org> Cc: Kalesh Singh <kaleshsingh@google.com> Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20241017092538.1859841-4-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-10-17  arm64: pt_regs: remove stale big-endian layout  (Mark Rutland)
For historical reasons the layout of struct pt_regs depends on the configured endianness, with the order of the 'syscallno' and 'unused2' fields varying depending on whether __AARCH64EB__ is defined. We no longer depend on the order of these two fields and can remove the ifdeffery. The current conditional layout was introduced in commit: 35d0e6fb4d219d64 ("arm64: syscallno is secretly an int, make it official") At the time, this was necessary so that the entry assembly could use a single STP instruction to save the pt_regs::{orig_x0,syscallno} fields, without logic that was conditional on the endianness of the kernel: | el0_svc_naked: | stp x0, xscno, [sp, #S_ORIG_X0] // save the original x0 and syscall number This logic was converted to C in commit: f37099b6992a0b81 ("arm64: convert syscall trace logic to C") Since that commit, we no longer manipulate pt_regs::orig_x0 from assembly, and only manipulate pt_regs::syscallno as a 32-bit quantity early in the kernel_entry assembly: | /* Not in a syscall by default (el0_svc overwrites for real syscall) */ | .if \el == 0 | mov w21, #NO_SYSCALL | str w21, [sp, #S_SYSCALLNO] | .endif Given the above, there's no longer a need for the layout of pt_regs::{syscallno,unused2} to depend on the endianness of the kernel. This patch removes the ifdeffery and places 'syscallno' before 'unused2' regardless of the endianness of the kernel. At the same time, 'unused2' is renamed to 'unused', as it is the only unused field within pt_regs. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Mark Brown <broonie@kernel.org> Reviewed-by: Miroslav Benes <mbenes@suse.cz> Reviewed-by: Puranjay Mohan <puranjay12@gmail.com> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Josh Poimboeuf <jpoimboe@kernel.org> Cc: Kalesh Singh <kaleshsingh@google.com> Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20241017092538.1859841-3-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-10-17  arm64: pt_regs: assert pt_regs is a multiple of 16 bytes  (Mark Rutland)
To ensure that the stack is correctly aligned when branching to C code, we require that struct pt_regs is a multiple of 16 bytes, as noted in a comment. Add an explicit assertion for this, so that any accidental violation of this requirement will be caught by the compiler. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Mark Brown <broonie@kernel.org> Reviewed-by: Miroslav Benes <mbenes@suse.cz> Reviewed-by: Puranjay Mohan <puranjay12@gmail.com> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Josh Poimboeuf <jpoimboe@kernel.org> Cc: Kalesh Singh <kaleshsingh@google.com> Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20241017092538.1859841-2-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
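A minimal sketch of such an assertion, assuming the usual build-bug helpers; the exact form used in the patch may differ.
| #include <linux/build_bug.h>
|
| /* Catch any accidental change that breaks the 16-byte size requirement. */
| static_assert(IS_ALIGNED(sizeof(struct pt_regs), 16),
| 	      "pt_regs must be a multiple of 16 bytes");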
2024-10-17  Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds)
Pull arm64 fixes from Will Deacon: - Disable software tag-based KASAN when compiling with GCC, as functions are incorrectly instrumented leading to a crash early during boot - Fix pkey configuration for kernel threads when POE is enabled - Fix invalid memory accesses in uprobes when targeting load-literal instructions * tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: kasan: Disable Software Tag-Based KASAN with GCC Documentation/protection-keys: add AArch64 to documentation arm64: set POR_EL0 for kernel threads arm64: probes: Fix uprobes for big-endian kernels arm64: probes: Fix simulate_ldr*_literal() arm64: probes: Remove broken LDR (literal) uprobe support
2024-10-17  arm64: mops: Handle MOPS exceptions from EL1  (Kristina Martsenko)
We will soon be using MOPS instructions in the kernel, so wire up the exception handler to handle exceptions from EL1 caused by the copy/set operation being stopped and resumed on a different type of CPU. Add a helper for advancing the single step state machine, similarly to what the EL0 exception handler does. Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com> Link: https://lore.kernel.org/r/20240930161051.3777828-3-kristina.martsenko@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-10-17  arm64: probes: Disable kprobes/uprobes on MOPS instructions  (Kristina Martsenko)
FEAT_MOPS instructions require that all three instructions (prologue, main and epilogue) appear consecutively in memory. Placing a kprobe/uprobe on one of them doesn't work as only a single instruction gets executed out-of-line or simulated. So don't allow placing a probe on a MOPS instruction. Fixes: b7564127ffcb ("arm64: mops: detect and enable FEAT_MOPS") Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com> Link: https://lore.kernel.org/r/20240930161051.3777828-2-kristina.martsenko@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-10-17  KVM: arm64: Shave a few bytes from the EL2 idmap code  (Marc Zyngier)
Our idmap is becoming too big, to the point where it doesn't fit in a 4kB page anymore. There are some low-hanging fruits though, such as the el2_init_state horror that is expanded 3 times in the kernel. Let's at least limit ourselves to two copies, which makes the kernel link again. At some point, we'll have to have a better way of doing this. Reported-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20241009204903.GA3353168@thelio-3990X
2024-10-16  arm64: head: Drop SWAPPER_TABLE_SHIFT  (Gavin Shan)
There are no users of SWAPPER_TABLE_SHIFT after commit 84b04d3e6bdb ("arm64: kernel: Create initial ID map from C code"). Just drop it. Signed-off-by: Gavin Shan <gshan@redhat.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20241014030341.995806-1-gshan@redhat.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-10-16  arm64: cpufeature: add POE to cpucap_is_possible()  (Joey Gouly)
Since de66cb37ab6 ("arm64: Add cpucap_is_possible()"), alternative_has_cap_unlikely() includes the IS_ENABLED() check. Add CONFIG_ARM64_POE to cpucap_is_possible() to avoid the explicit check. Signed-off-by: Joey Gouly <joey.gouly@arm.com> Cc: Will Deacon <will@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20241008140121.2774348-1-joey.gouly@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
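A reduced illustration of the pattern (the capability name ARM64_HAS_S1POE is an assumption on my part):
| static __always_inline bool cpucap_is_possible(const unsigned int cap)
| {
| 	switch (cap) {
| 	/* ... existing capabilities ... */
| 	case ARM64_HAS_S1POE:
| 		/* Fold the Kconfig check into the capability check. */
| 		return IS_ENABLED(CONFIG_ARM64_POE);
| 	}
|
| 	return true;
| }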
2024-10-16  hugetlb: arm64: add mte support  (Yang Shi)
Enable MTE support for hugetlb. The MTE page flags will be set on the folio only. When copying hugetlb folio (for example, CoW), the tags for all subpages will be copied when copying the first subpage. When freeing hugetlb folio, the MTE flags will be cleared. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Yang Shi <yang@os.amperecomputing.com> Link: https://lore.kernel.org/r/20241001225220.271178-1-yang@os.amperecomputing.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-10-16  arm64/mm: Change pgattr_change_is_safe() arguments as pteval_t  (Anshuman Khandual)
pgattr_change_is_safe() processes two distinct page table entries that just happen to be 64 bits for all levels. This changes both arguments to reflect the actual data type being processed in the function. This change is important when moving to FEAT_D128 based 128 bit page tables because it makes it simple to change the entry size in one place. Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Reviewed-by: Ryan Roberts <ryan.roberts@arm.com> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20241001045804.1119881-1-anshuman.khandual@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-10-16  arm64: optimize flush tlb kernel range  (Kefeng Wang)
Currently the kernel TLB is flushed page by page if the target VA range is less than MAX_DVM_OPS * PAGE_SIZE; otherwise we brutally issue a TLBI ALL. But we can optimize this when the CPU supports TLB range operations: convert to __flush_tlb_range_op() like the other TLB range flushes to improve performance. Co-developed-by: Yicong Yang <yangyicong@hisilicon.com> Signed-off-by: Yicong Yang <yangyicong@hisilicon.com> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20240923131351.713304-3-wangkefeng.wang@huawei.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-10-16  arm64: tlbflush: add __flush_tlb_range_limit_excess()  (Kefeng Wang)
The __flush_tlb_range_limit_excess() helper will shortly be used when flushing a kernel TLB range. Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20240923131351.713304-2-wangkefeng.wang@huawei.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-10-16  vdso: Introduce vdso/page.h  (Vincenzo Frascino)
The VDSO implementation includes headers from outside of the vdso/ namespace. Introduce vdso/page.h to make sure that the generic library uses only the allowed namespace. Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> # m68k Link: https://lore.kernel.org/all/20241014151340.1639555-3-vincenzo.frascino@arm.com
2024-10-15  arm64: insn: Simulate nop instruction for better uprobe performance  (Liao Chang)
v2->v1: 1. Remove the simulation of STP and the related bits. 2. Use arm64_skip_faulting_instruction for single-stepping or FEAT_BTI scenario. As Andrii pointed out, the uprobe/uretprobe selftest bench runs into a counterintuitive result that the nop and push variants are much slower than the ret variant [0]. The root cause lies in arch_probe_analyse_insn(), which excludes 'nop' and 'stp' from the emulatable instructions list. This forces the kernel to return to userspace, execute them out-of-line, and then trap back into the kernel to run the uprobe callback functions. This leads to a significant performance overhead compared to the 'ret' variant, which is already emulated. Typically, a uprobe is installed on 'nop' for USDT, and on a function entry which starts with the instruction 'stp x29, x30, [sp, #imm]!' to push lr and fp onto the stack, regardless of whether it is a kernel or userspace binary. In order to improve the performance of handling uprobes for these common use cases, this patch supports the emulation of the arm64 equivalent instructions of 'nop' and 'push'. The benchmark results below indicate that the performance gain from emulation is obvious. On Kunpeng916 (Hi1616), 4 NUMA nodes, 64 Arm64 cores@2.4GHz. xol (1 cpus) ------------ uprobe-nop: 0.916 ± 0.001M/s (0.916M/prod) uprobe-push: 0.908 ± 0.001M/s (0.908M/prod) uprobe-ret: 1.855 ± 0.000M/s (1.855M/prod) uretprobe-nop: 0.640 ± 0.000M/s (0.640M/prod) uretprobe-push: 0.633 ± 0.001M/s (0.633M/prod) uretprobe-ret: 0.978 ± 0.003M/s (0.978M/prod) emulation (1 cpus) ------------------- uprobe-nop: 1.862 ± 0.002M/s (1.862M/prod) uprobe-push: 1.743 ± 0.006M/s (1.743M/prod) uprobe-ret: 1.840 ± 0.001M/s (1.840M/prod) uretprobe-nop: 0.964 ± 0.004M/s (0.964M/prod) uretprobe-push: 0.936 ± 0.004M/s (0.936M/prod) uretprobe-ret: 0.940 ± 0.001M/s (0.940M/prod) As shown above, the performance gap between the 'nop/push' and 'ret' variants has been significantly reduced. Because the emulation of the 'push' instruction needs to access userspace memory, it spends more cycles than the others. As Mark suggested [1], it is painful to emulate the correct atomicity and ordering properties of STP, especially when it interacts with MTE, POE, etc. So this patch just focuses on the simulation of 'nop'. The simulation of STP and related changes will be addressed in a separate patch. [0] https://lore.kernel.org/all/CAEf4BzaO4eG6hr2hzXYpn+7Uer4chS0R99zLn02ezZ5YruVuQw@mail.gmail.com/ [1] https://lore.kernel.org/all/Zr3RN4zxF5XPgjEB@J2N7QTR9R3/ CC: Andrii Nakryiko <andrii.nakryiko@gmail.com> CC: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Liao Chang <liaochang1@huawei.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20240909071114.1150053-1-liaochang1@huawei.com [catalin.marinas@arm.com: small tweaks following MarkR's comments] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
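The emulation itself is tiny; below is a sketch along the lines of the existing simulate_* helpers, using the arm64_skip_faulting_instruction() helper the changelog mentions.
| void __kprobes simulate_nop(u32 opcode, long addr, struct pt_regs *regs)
| {
| 	/* A NOP has no architectural effect: just step past the probe. */
| 	arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
| }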
2024-10-15  arm64: asm-offsets: remove VMA_VM_*  (Mark Rutland)
The VMA_VM_MM definition is only used by the vma_vm_mm macro, which itself is unused. The VMA_VM_FLAGS definition isn't used anywhere. Remove them all. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20241007123921.549340-3-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-10-15  arm64: probes: Remove probe_opcode_t  (Mark Rutland)
The probe_opcode_t typedef for u32 isn't necessary, and is a source of confusion as it is easily confused with kprobe_opcode_t, which is a typedef for __le32. The typedef is only used within arch/arm64, and all of arm64's common insn code uses u32 for the endian-agnostic value of an instruction, so it'd be clearer to use u32 consistently. Remove probe_opcode_t and use u32 directly. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20241008155851.801546-7-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-10-15  arm64: probes: Cleanup kprobes endianness conversions  (Mark Rutland)
The core kprobes code uses kprobe_opcode_t for the in-memory representation of an instruction, using 'kprobe_opcode_t *' for XOL slots. As arm64 instructions are always little-endian 32-bit values, kprobe_opcode_t should be __le32, but at the moment kprobe_opcode_t is typedef'd to u32. Today there is no functional issue as we convert values via cpu_to_le32() and le32_to_cpu() where necessary, but these conversions are inconsistent with the types used, causing sparse warnings: | CHECK arch/arm64/kernel/probes/kprobes.c | arch/arm64/kernel/probes/kprobes.c:102:21: warning: cast to restricted __le32 | CHECK arch/arm64/kernel/probes/decode-insn.c | arch/arm64/kernel/probes/decode-insn.c:122:46: warning: cast to restricted __le32 | arch/arm64/kernel/probes/decode-insn.c:124:50: warning: cast to restricted __le32 | arch/arm64/kernel/probes/decode-insn.c:136:31: warning: cast to restricted __le32 Improve this by making kprobe_opcode_t a typedef for __le32 and consistently using this for pointers to executable instructions. With this change we can rely on the type system to tell us where conversions are necessary. Since kprobe::opcode is changed from u32 to __le32, the existing le32_to_cpu() conversion moves from the point this is initialized (in arch_prepare_kprobe()) to the points this is consumed when passed to a handler or text patching function. As kprobe::opcode isn't altered or consumed elsewhere, this shouldn't result in a functional change. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20241008155851.801546-6-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
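In outline the change amounts to the following (illustrative fragments, not the full patch):
| typedef __le32 kprobe_opcode_t;	/* matches the in-memory encoding */
|
| /* conversion at the point of consumption, e.g. in a handler: */
| u32 insn = le32_to_cpu(p->opcode);	/* 'p' is a struct kprobe * */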
2024-10-15  arm64: probes: Move kprobes-specific fields  (Mark Rutland)
We share struct arch_probe_insn between kprobes and uprobes, but most of its fields aren't necessary for uprobes: * The 'insn' field is only used by kprobes as a pointer to the XOL slot. * The 'restore' field is only used by kprobes as the PC to restore after stepping an instruction in the XOL slot. * The 'pstate_cc' field isn't used by kprobes or uprobes, and seems to only exist as a result of copy-pasting the 32-bit arm implementation of kprobes. As these fields live in struct arch_probe_insn they cannot use definitions that only exist when CONFIG_KPROBES=y, such as the kprobe_opcode_t typedef, which we'd like to use in subsequent patches. Clean this up by removing the 'pstate_cc' field, and moving the kprobes-specific fields into the kprobes-specific struct arch_specific_insn. To make it clear that the fields are related to stepping instructions in the XOL slot, 'insn' is renamed to 'xol_insn' and 'restore' is renamed to 'xol_restore'. At the same time, remove the misleading and useless comment above struct arch_probe_insn. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20241008155851.801546-5-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-10-15  vdso: Remove timekeeper argument of __arch_update_vsyscall()  (Thomas Weißschuh)
No implementation of this hook uses the passed in timekeeper anymore. This avoids including a non-VDSO header while building the VDSO, which can lead to compilation errors. Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/all/20241010-vdso-generic-arch_update_vsyscall-v1-1-7fe5a3ea4382@linutronix.de
2024-10-15  ftrace: Consolidate ftrace_regs accessor functions for archs using pt_regs  (Steven Rostedt)
Most architectures use pt_regs within ftrace_regs making a lot of the accessor functions just calls to the pt_regs internally. Instead of duplicating this effort, use a HAVE_ARCH_FTRACE_REGS for architectures that have their own ftrace_regs that is not based on pt_regs and will define all the accessor functions, and for the architectures that just use pt_regs, it will leave it undefined, and the default accessor functions will be used. Note, this will also make it easier to add new accessor functions to ftrace_regs as it will mean having to touch fewer architectures. Cc: <linux-arch@vger.kernel.org> Cc: "x86@kernel.org" <x86@kernel.org> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Huacai Chen <chenhuacai@kernel.org> Cc: WANG Xuerui <kernel@xen0n.name> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: Christophe Leroy <christophe.leroy@csgroup.eu> Cc: Naveen N Rao <naveen@kernel.org> Cc: Madhavan Srinivasan <maddy@linux.ibm.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Alexander Gordeev <agordeev@linux.ibm.com> Cc: Christian Borntraeger <borntraeger@linux.ibm.com> Cc: Sven Schnelle <svens@linux.ibm.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/20241010202114.2289f6fd@gandalf.local.home Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Acked-by: Heiko Carstens <hca@linux.ibm.com> # s390 Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Michael Ellerman <mpe@ellerman.id.au> # powerpc Suggested-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2024-10-10  ftrace: Make ftrace_regs abstract from direct use  (Steven Rostedt)
ftrace_regs was created to hold registers that store information to save function parameters, return value and stack. Since it is a subset of pt_regs, it should only be used by its accessor functions. But because pt_regs can easily be taken from ftrace_regs (on most archs), it is tempting to use it directly. But when running on other architectures, it may fail to build or worse, build but crash the kernel! Instead, make struct ftrace_regs an empty structure and have the architectures define __arch_ftrace_regs and all the accessor functions will typecast to it to get to the actual fields. This will help avoid usage of ftrace_regs directly. Link: https://lore.kernel.org/all/20241007171027.629bdafd@gandalf.local.home/ Cc: "linux-arch@vger.kernel.org" <linux-arch@vger.kernel.org> Cc: "x86@kernel.org" <x86@kernel.org> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Huacai Chen <chenhuacai@kernel.org> Cc: WANG Xuerui <kernel@xen0n.name> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: Christophe Leroy <christophe.leroy@csgroup.eu> Cc: Naveen N Rao <naveen@kernel.org> Cc: Madhavan Srinivasan <maddy@linux.ibm.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Alexander Gordeev <agordeev@linux.ibm.com> Cc: Christian Borntraeger <borntraeger@linux.ibm.com> Cc: Sven Schnelle <svens@linux.ibm.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/20241008230628.958778821@goodmis.org Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org> Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Acked-by: Heiko Carstens <hca@linux.ibm.com> # s390 Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2024-10-09  arm64: probes: Fix uprobes for big-endian kernels  (Mark Rutland)
The arm64 uprobes code is broken for big-endian kernels as it doesn't convert the in-memory instruction encoding (which is always little-endian) into the kernel's native endianness before analyzing and simulating instructions. This may result in a few distinct problems: * The kernel may erroneously reject probing an instruction which can safely be probed. * The kernel may erroneously permit stepping an instruction out-of-line when that instruction cannot be stepped out-of-line safely. * The kernel may simulate an instruction incorrectly due to interpreting the byte-swapped encoding. The endianness mismatch isn't caught by the compiler or sparse because: * The arch_uprobe::{insn,ixol} fields are encoded as arrays of u8, so the compiler and sparse have no idea these contain a little-endian 32-bit value. The core uprobes code populates these with a memcpy() which similarly does not handle endianness. * While the uprobe_opcode_t type is an alias for __le32, both arch_uprobe_analyze_insn() and arch_uprobe_skip_sstep() cast from u8[] to the similarly-named probe_opcode_t, which is an alias for u32. Hence there is no endianness conversion warning. Fix this by changing the arch_uprobe::{insn,ixol} fields to __le32 and adding the appropriate __le32_to_cpu() conversions prior to consuming the instruction encoding. The core uprobes code copies these fields as opaque ranges of bytes, and so is unaffected by this change. At the same time, remove MAX_UINSN_BYTES and consistently use AARCH64_INSN_SIZE for clarity. Tested with the following: | #include <stdio.h> | #include <stdbool.h> | | #define noinline __attribute__((noinline)) | | static noinline void *adrp_self(void) | { | void *addr; | | asm volatile( | " adrp %x0, adrp_self\n" | " add %x0, %x0, :lo12:adrp_self\n" | : "=r" (addr)); | } | | | int main(int argc, char *argv) | { | void *ptr = adrp_self(); | bool equal = (ptr == adrp_self); | | printf("adrp_self => %p\n" | "adrp_self() => %p\n" | "%s\n", | adrp_self, ptr, equal ? "EQUAL" : "NOT EQUAL"); | | return 0; | } .... where the adrp_self() function was compiled to: | 00000000004007e0 <adrp_self>: | 4007e0: 90000000 adrp x0, 400000 <__ehdr_start> | 4007e4: 911f8000 add x0, x0, #0x7e0 | 4007e8: d65f03c0 ret Before this patch, the ADRP is not recognized, and is assumed to be steppable, resulting in corruption of the result: | # ./adrp-self | adrp_self => 0x4007e0 | adrp_self() => 0x4007e0 | EQUAL | # echo 'p /root/adrp-self:0x007e0' > /sys/kernel/tracing/uprobe_events | # echo 1 > /sys/kernel/tracing/events/uprobes/enable | # ./adrp-self | adrp_self => 0x4007e0 | adrp_self() => 0xffffffffff7e0 | NOT EQUAL After this patch, the ADRP is correctly recognized and simulated: | # ./adrp-self | adrp_self => 0x4007e0 | adrp_self() => 0x4007e0 | EQUAL | # | # echo 'p /root/adrp-self:0x007e0' > /sys/kernel/tracing/uprobe_events | # echo 1 > /sys/kernel/tracing/events/uprobes/enable | # ./adrp-self | adrp_self => 0x4007e0 | adrp_self() => 0x4007e0 | EQUAL Fixes: 9842ceae9fa8 ("arm64: Add uprobe support") Cc: stable@vger.kernel.org Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20241008155851.801546-4-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2024-10-08  KVM: arm64: nv: Punt stage-2 recycling to a vCPU request  (Oliver Upton)
Currently, when a nested MMU is repurposed for some other MMU context, KVM unmaps everything during vcpu_load() while holding the MMU lock for write. This is quite a performance bottleneck for large nested VMs, as all vCPU scheduling will spin until the unmap completes. Start punting the MMU cleanup to a vCPU request, where it is then possible to periodically release the MMU lock and CPU in the presence of contention. Ensure that no vCPU winds up using a stale MMU by tracking the pending unmap on the S2 MMU itself and requesting an unmap on every vCPU that finds it. Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Link: https://lore.kernel.org/r/20241007233028.2236133-4-oliver.upton@linux.dev Signed-off-by: Marc Zyngier <maz@kernel.org>
2024-10-08  KVM: arm64: nv: Do not block when unmapping stage-2 if disallowed  (Oliver Upton)
Right now the nested code allows unmap operations on a shadow stage-2 to block unconditionally. This is wrong in a couple places, such as a non-blocking MMU notifier or on the back of a sched_in() notifier as part of shadow MMU recycling. Carry through whether or not blocking is allowed to kvm_pgtable_stage2_unmap(). This 'fixes' an issue where stage-2 MMU reclaim would precipitate a stack overflow from a pile of kvm_sched_in() callbacks, all trying to recycle a stage-2 MMU. Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Link: https://lore.kernel.org/r/20241007233028.2236133-3-oliver.upton@linux.dev Signed-off-by: Marc Zyngier <maz@kernel.org>
2024-10-06  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds)
Pull kvm fixes from Paolo Bonzini: "ARM64: - Fix pKVM error path on init, making sure we do not change critical system registers as we're about to fail - Make sure that the host's vector length is capped at a value common to all CPUs - Fix kvm_has_feat*() handling of "negative" features, as the current code is pretty broken - Promote Joey to the status of official reviewer, while James steps down -- hopefully only temporarily x86: - Fix compilation with KVM_INTEL=KVM_AMD=n - Fix disabling KVM_X86_QUIRK_SLOT_ZAP_ALL when shadow MMU is in use Selftests: - Fix compilation on non-x86 architectures" * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: x86/reboot: emergency callbacks are now registered by common KVM code KVM: x86: leave kvm.ko out of the build if no vendor module is requested KVM: x86/mmu: fix KVM_X86_QUIRK_SLOT_ZAP_ALL for shadow MMU KVM: arm64: Fix kvm_has_feat*() handling of negative features KVM: selftests: Fix build on architectures other than x86_64 KVM: arm64: Another reviewer reshuffle KVM: arm64: Constrain the host to the maximum shared SVE VL with pKVM KVM: arm64: Fix __pkvm_init_vcpu cptr_el2 error path
2024-10-04  arm64/ptrace: Expose GCS via ptrace and core files  (Mark Brown)
Provide a new register type NT_ARM_GCS reporting the current GCS mode and pointer for EL0. Due to the interactions with allocation and deallocation of Guarded Control Stacks we do not permit any changes to the GCS mode via ptrace, only GCSPR_EL0 may be changed. Reviewed-by: Thiago Jung Bauermann <thiago.bauermann@linaro.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20241001-arm64-gcs-v13-27-222b78d87eee@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
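The regset payload is a small structure, roughly of the shape sketched below; the field names are my reading of the series and should be treated as assumptions.
| struct user_gcs {
| 	__u64 features_enabled;
| 	__u64 features_locked;
| 	__u64 gcspr_el0;
| };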
2024-10-04  arm64/signal: Expose GCS state in signal frames  (Mark Brown)
Add a context for the GCS state and include it in the signal context when running on a system that supports GCS. We reuse the same flags that the prctl() uses to specify which GCS features are enabled and also provide the current GCS pointer. We do not support enabling GCS via signal return: there is a conflict between specifying GCSPR_EL0 and the allocation of a new GCS, and this is not an anticipated use case. We also enforce GCS configuration locking on signal return. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Thiago Jung Bauermann <thiago.bauermann@linaro.org> Acked-by: Yury Khrustalev <yury.khrustalev@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20241001-arm64-gcs-v13-26-222b78d87eee@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-10-04  arm64/signal: Set up and restore the GCS context for signal handlers  (Mark Brown)
When invoking a signal handler we use the GCS configuration and stack for the current thread. Since we implement signal return by calling the signal handler with a return address set up pointing to a trampoline in the vDSO we need to also configure any active GCS for this by pushing a frame for the trampoline onto the GCS. If we do not do this then signal return will generate a GCS protection fault. In order to guard against attempts to bypass GCS protections via signal return we only allow returning with GCSPR_EL0 pointing to an address where it was previously preempted by a signal. We do this by pushing a cap onto the GCS, this takes the form of an architectural GCS cap token with the top bit set and token type of 0 which we add on signal entry and validate and pop off on signal return. The combination of the top bit being set and the token type mean that this can't be interpreted as a valid token or address. Reviewed-by: Thiago Jung Bauermann <thiago.bauermann@linaro.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20241001-arm64-gcs-v13-25-222b78d87eee@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-10-04  arm64/gcs: Implement shadow stack prctl() interface  (Mark Brown)
Implement the architecture neutral prctl() interface for setting the shadow stack status; this supports setting and reading the current GCS configuration for the current thread. Userspace can enable basic GCS functionality and additionally also support for GCS pushes and arbitrary GCS stores. It is expected that this prctl() will be called very early in application startup, for example by the dynamic linker, and not subsequently adjusted during normal operation. Users should carefully note that after enabling GCS for a thread GCS will become active with no call stack, so it is not normally possible to return from the function that invoked the prctl(). State is stored per thread; enabling GCS for a thread causes a GCS to be allocated for that thread. Userspace may lock the current GCS configuration by specifying PR_SHADOW_STACK_ENABLE_LOCK; this prevents any further changes to the GCS configuration via any means. If GCS is not being enabled then all flags other than _LOCK are ignored; it is not possible to enable stores or pops without enabling GCS. When disabling the GCS we do not free the allocated stack; this allows for inspection of the GCS after disabling as part of fault reporting. Since it is not an expected use case, and since it presents some complications in determining what to do with previously initialised data on the GCS, attempts to re-enable GCS after this are rejected. This can be revisited if a use case arises. Reviewed-by: Thiago Jung Bauermann <thiago.bauermann@linaro.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20241001-arm64-gcs-v13-23-222b78d87eee@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
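As a userspace-facing illustration, enabling GCS early in a thread's life might look like the sketch below; the constant names assume the arch-neutral prctl() interface added by this series and recent uapi headers, and should be treated as assumptions.
| #include <sys/prctl.h>
| #include <linux/prctl.h>	/* assumed to provide the shadow stack constants */
|
| int main(void)
| {
| 	/*
| 	 * Enable basic GCS as early as possible. Note that once enabled the
| 	 * new GCS is empty, so returning from the enabling function is not
| 	 * normally possible.
| 	 */
| 	if (prctl(PR_SET_SHADOW_STACK_STATUS, PR_SHADOW_STACK_ENABLE,
| 		  0, 0, 0))
| 		return 1;	/* only reached if enabling failed */
|
| 	for (;;)
| 		;		/* never return from this frame */
| }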
2024-10-04  arm64/gcs: Ensure that new threads have a GCS  (Mark Brown)
When a new thread is created by a thread with GCS enabled the GCS needs to be specified along with the regular stack. Unfortunately plain clone() is not extensible and existing clone3() users will not specify a stack, so all existing code would be broken if we mandated specifying the stack explicitly. For compatibility with these cases, and also with x86 (which did not initially implement clone3() support for shadow stacks), if no GCS is specified we will allocate one: when a thread which has GCS enabled is created, a GCS is allocated for it. We follow the extensively discussed x86 implementation and allocate min(RLIMIT_STACK/2, 2G). Since the GCS only stores the call stack and not any variables this should be more than sufficient for most applications. GCSs allocated via this mechanism will be freed when the thread exits. Reviewed-by: Thiago Jung Bauermann <thiago.bauermann@linaro.org> Acked-by: Yury Khrustalev <yury.khrustalev@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20241001-arm64-gcs-v13-22-222b78d87eee@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-10-04  arm64/gcs: Context switch GCS state for EL0  (Mark Brown)
There are two registers controlling the GCS state of EL0, GCSPR_EL0 which is the current GCS pointer and GCSCRE0_EL1 which has enable bits for the specific GCS functionality enabled for EL0. Manage these on context switch and process lifetime events, GCS is reset on exec(). Also ensure that any changes to the GCS memory are visible to other PEs and that changes from other PEs are visible on this one by issuing a GCSB DSYNC when moving to or from a thread with GCS. Since the current GCS configuration of a thread will be visible to userspace we store the configuration in the format used with userspace and provide a helper which configures the system register as needed. On systems that support GCS we always allow access to GCSPR_EL0, this facilitates reporting of GCS faults if userspace implements disabling of GCS on error - the GCS can still be discovered and examined even if GCS has been disabled. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Thiago Jung Bauermann <thiago.bauermann@linaro.org> Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20241001-arm64-gcs-v13-21-222b78d87eee@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-10-04  arm64/traps: Handle GCS exceptions  (Mark Brown)
A new exception code is defined for GCS specific faults other than standard load/store faults, for example GCS token validation failures, add handling for this. These faults are reported to userspace as segfaults with code SEGV_CPERR (protection error), mirroring the reporting for x86 shadow stack errors. GCS faults due to memory load/store operations generate data aborts with a flag set, these will be handled separately as part of the data abort handling. Since we do not currently enable GCS for EL1 we should not get any faults there but while we're at it we wire things up there, treating any GCS fault as fatal. Reviewed-by: Thiago Jung Bauermann <thiago.bauermann@linaro.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20241001-arm64-gcs-v13-19-222b78d87eee@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-10-04  arm64/hwcap: Add hwcap for GCS  (Mark Brown)
Provide a hwcap to enable userspace to detect support for GCS. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Yury Khrustalev <yury.khrustalev@arm.com> Link: https://lore.kernel.org/r/20241001-arm64-gcs-v13-18-222b78d87eee@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-10-04  arm64/mm: Map pages for guarded control stack  (Mark Brown)
Map pages flagged as being part of a GCS as such rather than using the full set of generic VM flags. This is done using a conditional rather than extending the size of protection_map since that would make for a very sparse array. Reviewed-by: Thiago Jung Bauermann <thiago.bauermann@linaro.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20241001-arm64-gcs-v13-15-222b78d87eee@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-10-04  arm64/mm: Allocate PIE slots for EL0 guarded control stack  (Mark Brown)
Pages used for guarded control stacks need to be described to the hardware using the Permission Indirection Extension, GCS is not supported without PIE. In order to support copy on write for guarded stacks we allocate two values, one for active GCSs and one for GCS pages marked as read only prior to copy. Since the actual effect is defined using PIE the specific bit pattern used does not matter to the hardware but we choose two values which differ only in PTE_WRITE in order to help share code with non-PIE cases. Reviewed-by: Thiago Jung Bauermann <thiago.bauermann@linaro.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20241001-arm64-gcs-v13-13-222b78d87eee@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-10-04  arm64/cpufeature: Runtime detection of Guarded Control Stack (GCS)  (Mark Brown)
Add a cpufeature for GCS, allowing other code to conditionally support it at runtime. Reviewed-by: Thiago Jung Bauermann <thiago.bauermann@linaro.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20241001-arm64-gcs-v13-12-222b78d87eee@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
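The detection boils down to a new entry in the arm64 cpufeature table keyed off ID_AA64PFR1_EL1.GCS; a sketch of roughly what such an entry looks like (the capability name is an assumption):
| {
| 	.desc = "Guarded Control Stack (GCS)",
| 	.capability = ARM64_HAS_GCS,
| 	.type = ARM64_CPUCAP_SYSTEM_FEATURE,
| 	.matches = has_cpuid_feature,
| 	ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, GCS, IMP)
| },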
2024-10-04  arm64/gcs: Provide basic EL2 setup to allow GCS usage at EL0 and EL1  (Mark Brown)
There is a control HCRX_EL2.GCSEn which must be set to allow GCS features to take effect at lower ELs and also fine grained traps for GCS usage at EL0 and EL1. Configure all these to allow GCS usage by EL0 and EL1. We also initialise GCSCR_EL1 and GCSCRE0_EL1 to ensure that we can execute function call instructions without faulting regardless of the state when the kernel is started. Reviewed-by: Thiago Jung Bauermann <thiago.bauermann@linaro.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20241001-arm64-gcs-v13-11-222b78d87eee@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-10-04  arm64/gcs: Provide put_user_gcs()  (Mark Brown)
In order for EL1 to write to an EL0 GCS it must use the GCSSTTR instruction rather than a normal STTR. Provide a put_user_gcs() which does this. Reviewed-by: Thiago Jung Bauermann <thiago.bauermann@linaro.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20241001-arm64-gcs-v13-10-222b78d87eee@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-10-04  arm64/gcs: Add manual encodings of GCS instructions  (Mark Brown)
Define C callable functions for GCS instructions used by the kernel. In order to avoid ambitious toolchain requirements for GCS support these are manually encoded, this means we have fixed register numbers which will be a bit limiting for the compiler but none of these should be used in sufficiently fast paths for this to be a problem. Note that GCSSTTR is used to store to EL0. Reviewed-by: Thiago Jung Bauermann <thiago.bauermann@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20241001-arm64-gcs-v13-9-222b78d87eee@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
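Below is a sketch of what such a manual encoding looks like; the .inst value is my reading of the encoding for GCSSTTR x1, [x0] and should be treated as illustrative rather than authoritative.
| static inline void gcssttr(unsigned long __user *addr, unsigned long val)
| {
| 	register unsigned long __user *_addr __asm__ ("x0") = addr;
| 	register unsigned long _val __asm__ ("x1") = val;
|
| 	/* GCSSTTR x1, [x0] - hand-encoded, as older assemblers lack GCS */
| 	asm volatile(".inst 0xd91f1c01"
| 		     : : "rZ" (_val), "r" (_addr)
| 		     : "memory");
| }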
2024-10-04  arm64/sysreg: Add definitions for architected GCS caps  (Mark Brown)
The architecture defines a format for guarded control stack caps, used to mark the top of an unused GCS in order to limit the potential for exploitation via stack switching. Add definitions associated with these. Reviewed-by: Thiago Jung Bauermann <thiago.bauermann@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20241001-arm64-gcs-v13-8-222b78d87eee@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-10-04  arm64/mm: Restructure arch_validate_flags() for extensibility  (Mark Brown)
Currently arch_validate_flags() is written in a very non-extensible fashion, returning immediately if MTE is not supported and writing the MTE check as a direct return. Since we will want to add more checks for GCS refactor the existing code to be more extensible, no functional change intended. Reviewed-by: Thiago Jung Bauermann <thiago.bauermann@linaro.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20241001-arm64-gcs-v13-3-222b78d87eee@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
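A sketch of the restructured shape, keeping the existing MTE semantics while leaving room for further feature checks (simplified relative to the real header):
| static inline bool arch_validate_flags(unsigned long vm_flags)
| {
| 	if (system_supports_mte()) {
| 		/* Only allow VM_MTE if VM_MTE_ALLOWED was set for the VMA. */
| 		if ((vm_flags & VM_MTE) && !(vm_flags & VM_MTE_ALLOWED))
| 			return false;
| 	}
|
| 	/* Further feature-specific checks (e.g. for GCS) slot in here. */
|
| 	return true;
| }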