path: root/arch/arm64/kernel
2024-02-22  arm64: remove unneeded BUILD_BUG_ON assertion  (Dawei Li)
Since commit c02433dd6de3 ("arm64: split thread_info from task stack"), CONFIG_THREAD_INFO_IN_TASK is enabled unconditionally for arm64. So remove this always-true assertion from arch_dup_task_struct. Signed-off-by: Dawei Li <dawei.li@shingroup.cn> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20240202040211.3118918-1-dawei.li@shingroup.cn Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-02-20  arm64: vdso: Use generic union vdso_data_store  (Anna-Maria Behnsen)
There is already a generic union definition for vdso_data_store in vdso datapage header. Use this definition to prevent code duplication. Signed-off-by: Anna-Maria Behnsen <anna-maria@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20240219153939.75719-6-anna-maria@linutronix.de
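For reference, the shared definition in the vDSO datapage header looks roughly like the following (a sketch, not copied verbatim from the tree):

    /* Rough sketch of the generic definition in include/vdso/datapage.h */
    union vdso_data_store {
            struct vdso_data        data[CS_BASES];
            u8                      page[PAGE_SIZE];
    };

Using the shared union rather than an arch-local copy keeps the page-sized layout guarantee in one place.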
2024-02-20  arm64: kretprobes: acquire the regs via a BRK exception  (Mark Rutland)
On arm64, kprobes always take an exception and so create a struct pt_regs through the usual exception entry logic. Similarly, kretprobes takes an exception for function entry, but for function returns it uses a trampoline which attempts to create a struct pt_regs without taking an exception. This is problematic for a few reasons, including:

1) The kretprobes trampoline neither saves nor restores all of the portions of PSTATE. Before invoking the handler it saves a number of portions of PSTATE, and after returning from the handler it restores NZCV before returning to the original return address provided by the handler.

2) The kretprobe trampoline constructs the PSTATE value piecemeal from special purpose registers as it cannot read all of PSTATE atomically without taking an exception. This is somewhat fragile, and it's not possible to reliably recover PSTATE information which only exists on some physical CPUs (e.g. when SSBS support is mismatched). Today the kretprobes trampoline does not record:
- BTYPE
- SSBS
- ALLINT
- SS
- PAN
- UAO
- DIT
- TCO
... and this will only get worse with future architecture extensions which add more PSTATE bits.

3) The kretprobes trampoline doesn't store portions of struct pt_regs (e.g. the PMR value when using pseudo-NMIs). Due to this, helpers which operate on a struct pt_regs, such as interrupts_enabled(), may not work correctly.

4) The function entry and function exit handlers run in different contexts. The entry handler will always be run in a debug exception context (which is currently treated as an NMI), but the return will be treated as whatever context the instrumented function was executed in. The differences between these contexts are liable to cause problems (e.g. as the two can be differently interruptible or preemptible, adversely affecting synchronization between the handlers).

5) As the kretprobes trampoline runs in the same context as the code being probed, it is subject to the same single-stepping context, which may not be desirable if this is being driven by the kprobes handlers.

Overall, this is fragile, painful to maintain, and gets in the way of supporting other things (e.g. RELIABLE_STACKTRACE, FEAT_NMI). This patch addresses these issues by replacing the kretprobes trampoline with a `BRK` instruction, and using an exception boundary to acquire and restore the regs, in the same way as the regular kprobes trampoline.

I've tested this atop v6.8-rc3:

| KTAP version 1
| 1..1
| KTAP version 1
| # Subtest: kprobes_test
| # module: test_kprobes
| 1..7
| ok 1 test_kprobe
| ok 2 test_kprobes
| ok 3 test_kprobe_missed
| ok 4 test_kretprobe
| ok 5 test_kretprobes
| ok 6 test_stacktrace_on_kretprobe
| ok 7 test_stacktrace_on_nested_kretprobe
| # kprobes_test: pass:7 fail:0 skip:0 total:7
| # Totals: pass:7 fail:0 skip:0 total:7
| ok 1 kprobes_test

Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Florent Revest <revest@chromium.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Steven Rostedt <rostedt@goodmis.org> Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Link: https://lore.kernel.org/r/20240208145916.2004154-1-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
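The shape of the approach can be sketched as follows, with hypothetical names and an illustrative BRK immediate; the point is that the handler receives a struct pt_regs built by the normal exception entry path and hands control to the generic kretprobe trampoline handler:

    /* Hypothetical sketch; names and the reserved immediate are assumed. */
    static int __kprobes kretprobe_brk_handler(struct pt_regs *regs,
                                               unsigned long esr)
    {
            /* regs was populated by the regular exception entry code */
            regs->pc = kretprobe_trampoline_handler(regs, NULL);
            return DBG_HOOK_HANDLED;
    }

    static struct break_hook kretprobes_break_hook = {
            .imm = KRETPROBES_BRK_IMM,      /* assumed reserved BRK immediate */
            .fn  = kretprobe_brk_handler,
    };

    /* registered early, e.g. via register_kernel_break_hook() */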
2024-02-20  arm64: Unmask Debug + SError in do_notify_resume()  (Mark Rutland)
When returning to a user context, the arm64 entry code masks all DAIF exceptions before handling pending work in exit_to_user_mode_prepare() and do_notify_resume(), where it will transiently unmask all DAIF exceptions. This is a holdover from the old entry assembly, which conservatively masked all DAIF exceptions, and it's only necessary to mask interrupts at this point during the exception return path, so long as we subsequently mask all DAIF exceptions before the actual exception return. While most DAIF manipulation follows a save...restore sequence, the manipulation in do_notify_resume() is the other way around, unmasking all DAIF exceptions before masking them again. This is unfortunate as we unnecessarily mask Debug and SError exceptions, and it would be nice to remove this special case to make DAIF manipulation simpler and more consistent. This patch changes exit_to_user_mode_prepare() and do_notify_resume() to only mask interrupts while handling pending work, masking other DAIF exceptions after this has completed. This removes the unusual DAIF manipulation and allows Debug and SError exceptions to be taken for a slightly longer window during the exception return path. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Mark Brown <broonie@kernel.org> Cc: Will Deacon <will@kernel.org> Reviewed-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20240206123848.1696480-4-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Itaru Kitayama <itaru.kitayama@linux.dev>
2024-02-20  arm64: Move do_notify_resume() to entry-common.c  (Mark Rutland)
Currently do_notify_resume() lives in arch/arm64/kernel/signal.c, but it would make more sense for it to live in entry-common.c as it handles more than signals, and is coupled with the rest of the return-to-userspace sequence (e.g. with unusual DAIF masking that matches the exception return requirements). Move do_notify_resume() to entry-common.c. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Mark Brown <broonie@kernel.org> Cc: Will Deacon <will@kernel.org> Reviewed-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20240206123848.1696480-3-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Itaru Kitayama <itaru.kitayama@linux.dev>
2024-02-20  arm64: Simplify do_notify_resume() DAIF masking  (Mark Rutland)
In do_notify_resume, we handle _TIF_NEED_RESCHED differently from all other flags, leaving IRQ+FIQ masked when calling into schedule(). This masking is a historical artifact, and it is not currently necessary to mask IRQ+FIQ when calling into schedule (as evidenced by the generic exit_to_user_mode_loop(), which unmasks IRQs before checking _TIF_NEED_RESCHED and calling schedule()). This patch removes the special case for _TIF_NEED_RESCHED, moving this check into the main loop such that schedule() will be called from a regular process context with IRQ+FIQ unmasked. This is a minor simplification to do_notify_resume() and brings it into line with the generic exit_to_user_mode_loop() logic. This will also aid subsequent rework of DAIF management. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Mark Brown <broonie@kernel.org> Cc: Will Deacon <will@kernel.org> Reviewed-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20240206123848.1696480-2-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Itaru Kitayama <itaru.kitayama@linux.dev>
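The resulting loop has roughly the following shape (a simplified sketch using the existing arm64 helpers, not the exact diff):

    /* Simplified sketch of the reworked work loop */
    do {
            local_irq_enable();

            if (thread_flags & _TIF_NEED_RESCHED)
                    schedule();             /* now runs with IRQ+FIQ unmasked */

            if (thread_flags & _TIF_UPROBE)
                    uprobe_notify_resume(regs);

            if (thread_flags & _TIF_SIGPENDING)
                    do_signal(regs);

            if (thread_flags & _TIF_NOTIFY_RESUME)
                    resume_user_mode_work(regs);

            local_irq_disable();
            thread_flags = read_thread_flags();
    } while (thread_flags & _TIF_WORK_MASK);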
2024-02-20  arm64/sme: Restore SMCR_EL1.EZT0 on exit from suspend  (Mark Brown)
The fields in SMCR_EL1 reset to an architecturally UNKNOWN value. Since we do not otherwise manage the traps configured in this register at runtime we need to reconfigure them after a suspend in case nothing else was kind enough to preserve them for us. Do so for SMCR_EL1.EZT0. Fixes: d4913eee152d ("arm64/sme: Add basic enumeration for SME2") Reported-by: Jackson Cooper-Driver <Jackson.Cooper-Driver@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20240213-arm64-sme-resume-v3-2-17e05e493471@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2024-02-20  arm64/sme: Restore SME registers on exit from suspend  (Mark Brown)
The fields in SMCR_EL1 and SMPRI_EL1 reset to an architecturally UNKNOWN value. Since we do not otherwise manage the traps configured in these registers at runtime we need to reconfigure them after a suspend in case nothing else was kind enough to preserve them for us. The vector length will be restored as part of restoring the SME state for the next SME-using task. Fixes: a1f4ccd25cc2 ("arm64/sme: Provide Kconfig for SME") Reported-by: Jackson Cooper-Driver <Jackson.Cooper-Driver@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20240213-arm64-sme-resume-v3-1-17e05e493471@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
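The kind of reconfiguration needed on the resume path can be sketched as below; this is illustrative only, and the exact helper, constant names and placement in the suspend-exit path are assumptions:

    /* Illustrative sketch: re-establish SME trap/VL configuration on resume */
    if (system_supports_sme()) {
            u64 smcr = read_sysreg_s(SYS_SMCR_EL1);

            smcr |= SMCR_ELx_LEN_MASK;              /* request the widest VL */
            if (system_supports_sme2())
                    smcr |= SMCR_ELx_EZT0_MASK;     /* don't trap ZT0 */
            write_sysreg_s(smcr, SYS_SMCR_EL1);
    }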
2024-02-19  arm64: Use Signed/Unsigned enums for TGRAN{4,16,64} and VARange  (Marc Zyngier)
Open-coding the feature matching parameters for LVA/LVA2 leads to issues with upcoming changes to the cpufeature code. By making TGRAN{4,16,64} and VARange signed/unsigned as per the architecture, we can use the existing macros, making the feature match robust against those changes. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Ard Biesheuvel <ardb@kernel.org> Tested-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-02-16  arm64: mm: add support for WXN memory translation attribute  (Ard Biesheuvel)
The AArch64 virtual memory system supports a global WXN control, which can be enabled to make all writable mappings implicitly no-exec. This is a useful hardening feature, as it prevents mistakes in managing page table permissions from being exploited to attack the system. When enabled at EL1, the restrictions apply to both EL1 and EL0. EL1 is completely under our control, and has been cleaned up to allow WXN to be enabled from boot onwards. EL0 is not under our control, but given that widely deployed security features such as selinux or PaX already limit the ability of user space to create mappings that are writable and executable at the same time, the impact of enabling this for EL0 is expected to be limited. (For this reason, common user space libraries that have a legitimate need for manipulating executable code already carry fallbacks such as [0].) If enabled at compile time, the feature can still be disabled at boot if needed, by passing arm64.nowxn on the kernel command line. [0] https://github.com/libffi/libffi/blob/master/src/closures.c#L440 Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20240214122845.2033971-88-ardb+git@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
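Conceptually the hardening boils down to setting a single SCTLR bit once the kernel's own mappings no longer need to be writable and executable at the same time. A minimal sketch, with the helper name and config symbol assumed and the command-line override omitted:

    /* Illustrative sketch: make writable mappings implicitly non-executable */
    static inline void enable_wxn(void)             /* hypothetical helper */
    {
            if (!IS_ENABLED(CONFIG_ARM64_WXN))      /* config name assumed */
                    return;

            sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_WXN);
            isb();
    }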
2024-02-16  arm64: Enable 52-bit virtual addressing for 4k and 16k granule configs  (Ard Biesheuvel)
Update Kconfig to permit 4k and 16k granule configurations to be built with 52-bit virtual addressing, now that all the prerequisites are in place. While at it, update the feature description so it matches on the appropriate feature bits depending on the page size. For simplicity, let's just keep ARM64_HAS_VA52 as the feature name. Note that LPA2 based 52-bit virtual addressing requires 52-bit physical addressing support to be enabled as well, as programming TCR.TxSZ to values below 16 is not allowed unless TCR.DS is set, which is what activates the 52-bit physical addressing support. While supporting the converse (52-bit physical addressing without 52-bit virtual addressing) would be possible in principle, let's keep things simple, by only allowing these features to be enabled at the same time. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20240214122845.2033971-85-ardb+git@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-02-16  arm64: mm: Add support for folding PUDs at runtime  (Ard Biesheuvel)
In order to support LPA2 on 16k pages in a way that permits non-LPA2 systems to run the same kernel image, we have to be able to fall back to at most 48 bits of virtual addressing. Falling back to 48 bits would result in a level 0 with only 2 entries, which is suboptimal in terms of TLB utilization. So instead, let's fall back to 47 bits in that case. This means we need to be able to fold PUDs dynamically, similar to how we fold P4Ds for 48 bit virtual addressing on LPA2 with 4k pages. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20240214122845.2033971-81-ardb+git@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-02-16  arm64: Enable LPA2 at boot if supported by the system  (Ard Biesheuvel)
Update the early kernel mapping code to take 52-bit virtual addressing into account based on the LPA2 feature. This is a bit more involved than LVA (which is supported with 64k pages only), given that some page table descriptor bits change meaning in this case. To keep the handling in asm to a minimum, the initial ID map is still created with 48-bit virtual addressing, which implies that the kernel image must be loaded into 48-bit addressable physical memory. This is currently required by the boot protocol, even though we happen to support placement outside of that for LVA/64k based configurations. Enabling LPA2 involves more than setting TCR.T1SZ to a lower value: there is also a DS bit in TCR that needs to be set, which changes the meaning of bits [9:8] in all page table descriptors. Since we cannot enable DS and update every live page table descriptor at the same time, let's pivot through another temporary mapping. This avoids the need to reintroduce manipulations of the page tables with the MMU and caches disabled. To permit the LPA2 feature to be overridden on the kernel command line, which may be necessary to work around silicon errata, or to deal with mismatched features on heterogeneous SoC designs, test for CPU feature overrides first, and only then enable LPA2. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20240214122845.2033971-78-ardb+git@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-02-16  arm64: mm: add LPA2 and 5 level paging support to G-to-nG conversion  (Ard Biesheuvel)
Add support for 5 level paging in the G-to-nG routine that creates its own temporary page tables to traverse the swapper page tables. Also add support for running the 5 level configuration with the top level folded at runtime, to support CPUs that do not implement the LPA2 extension. While at it, wire up the level skipping logic so it will also trigger on 4 level configurations with LPA2 enabled at build time but not active at runtime, as we'll fall back to 3 level paging in that case. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20240214122845.2033971-77-ardb+git@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-02-16  arm64: mm: Add feature override support for LVA  (Ard Biesheuvel)
Add support for overriding the VARange field of the MMFR2 CPU ID register. This permits the associated LVA feature to be overridden early enough for the boot code that creates the kernel mapping to take it into account. Given that LPA2 implies LVA, disabling the latter should disable the former as well. So override the ID_AA64MMFR0.TGran field of the current page size as well if it advertises support for 52-bit addressing. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20240214122845.2033971-71-ardb+git@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-02-16  arm64: mm: Handle LVA support as a CPU feature  (Ard Biesheuvel)
Currently, we detect CPU support for 52-bit virtual addressing (LVA) extremely early, before creating the kernel page tables or enabling the MMU. We cannot override the feature this early, and so large virtual addressing is always enabled on CPUs that implement support for it if the software support for it was enabled at build time. It also means we rely on non-trivial code in asm to deal with this feature. Given that both the ID map and the TTBR1 mapping of the kernel image are guaranteed to be 48-bit addressable, it is not actually necessary to enable support this early, and instead, we can model it as a CPU feature. That way, we can rely on code patching to get the correct TCR.T1SZ values programmed on secondary boot and resume from suspend. On the primary boot path, we simply enable the MMU with 48-bit virtual addressing initially, and update TCR.T1SZ if LVA is supported from C code, right before creating the kernel mapping. Given that TTBR1 still points to reserved_pg_dir at this point, updating TCR.T1SZ should be safe without the need for explicit TLB maintenance. Since this gets rid of all accesses to the vabits_actual variable from asm code that occurred before TCR.T1SZ had been programmed, we no longer have a need for this variable, and we can replace it with a C expression that produces the correct value directly, based on the value of TCR.T1SZ. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20240214122845.2033971-70-ardb+git@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
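The replacement for the variable amounts to reading back what was programmed into TCR_EL1; a sketch of the idea, with the helper name made up:

    /* Sketch: derive the active VA size from the programmed TCR_EL1.T1SZ */
    static inline u64 arm64_vabits_actual(void)     /* hypothetical helper */
    {
            u64 t1sz = (read_sysreg(tcr_el1) & TCR_T1SZ_MASK) >> TCR_T1SZ_OFFSET;

            return 64 - t1sz;       /* e.g. T1SZ == 16 -> 48-bit VA */
    }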
2024-02-16  arm64: mm: omit redundant remap of kernel image  (Ard Biesheuvel)
Now that the early kernel mapping is created with all the right attributes and segment boundaries, there is no longer a need to recreate it and switch to it. This also means we no longer have to copy the kasan shadow or some parts of the fixmap from one set of page tables to the other. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20240214122845.2033971-68-ardb+git@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-02-16  arm64: kernel: Create initial ID map from C code  (Ard Biesheuvel)
The asm code that creates the initial ID map is rather intricate and hard to follow. This is problematic because it makes adding support for things like LPA2 or WXN more difficult than necessary. Also, it is parameterized like the rest of the MM code to run with a configurable number of levels, which is rather pointless, given that all AArch64 CPUs implement support for 48-bit virtual addressing, and that many systems exist with DRAM located outside of the 39-bit addressable range, which is the only smaller VA size that is widely used, and we need additional tricks to make things work in that combination. So let's bite the bullet, and rip out all the asm macros, and fiddly code, and replace it with a C implementation based on the newly added routines for creating the early kernel VA mappings. And while at it, create the initial ID map based on 48-bit virtual addressing as well, regardless of the number of configured levels for the kernel proper. Note that this code may execute with the MMU and caches disabled, and is therefore not permitted to make unaligned accesses. This shouldn't generally happen in any case for the algorithm as implemented, but to be sure, let's pass -mstrict-align to the compiler just in case. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20240214122845.2033971-66-ardb+git@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-02-16  arm64: mm: Use 48-bit virtual addressing for the permanent ID map  (Ard Biesheuvel)
Even though we support loading kernels anywhere in 48-bit addressable physical memory, we create the ID maps based on the number of levels that we happened to configure for the kernel VA and user VA spaces. The reason for this is that the PGD/PUD/PMD based classification of translation levels, along with the associated folding when the number of levels is less than 5, does not permit creating a page table hierarchy of a set number of levels. This means that, for instance, on 39-bit VA kernels we need to configure an additional level above PGD level on the fly, and 36-bit VA kernels still only support 47-bit virtual addressing with this trick applied. Now that we have a separate helper to populate page table hierarchies that does not define the levels in terms of PUDS/PMDS/etc at all, let's reuse it to create the permanent ID map with a fixed VA size of 48 bits. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20240214122845.2033971-64-ardb+git@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-02-16  arm64: head: Move early kernel mapping routines into C code  (Ard Biesheuvel)
The asm version of the kernel mapping code works fine for creating a coarse grained identity map, but for mapping the kernel down to its exact boundaries with the right attributes, it is not suitable. This is why we create a preliminary RWX kernel mapping first, and then rebuild it from scratch later on. So let's reimplement this in C, in a way that will make it unnecessary to create the kernel page tables yet another time in paging_init(). Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20240214122845.2033971-63-ardb+git@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-02-16  arm64: mm: Make kaslr_requires_kpti() a static inline  (Ard Biesheuvel)
In preparation for moving the first assignment of arm64_use_ng_mappings to an earlier stage in the boot, ensure that kaslr_requires_kpti() is accessible without relying on the core kernel's view on whether or not KASLR is enabled. So make it a static inline, and move the kaslr_enabled() check out of it and into the callers, one of which will disappear in a subsequent patch. Once/when support for the obsolete ThunderX 1 platform is dropped, this check reduces to an E0PD feature check on the local CPU. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20240214122845.2033971-61-ardb+git@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
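Stripped of the ThunderX special case, the inline essentially reduces to a local-CPU E0PD check along the lines of the sketch below (the kaslr_enabled() test now lives in the callers):

    /* Sketch: KASLR only needs KPTI if E0PD cannot hide the kernel from EL0 */
    static inline bool kaslr_requires_kpti(void)
    {
            if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE))
                    return false;

            if (IS_ENABLED(CONFIG_ARM64_E0PD)) {
                    u64 mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);

                    if (cpuid_feature_extract_unsigned_field(mmfr2,
                                    ID_AA64MMFR2_EL1_E0PD_SHIFT))
                            return false;
            }

            return true;
    }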
2024-02-16  arm64: head: move memstart_offset_seed handling to C code  (Ard Biesheuvel)
Now that we can set BSS variables from the early code running from the ID map, we can set memstart_offset_seed directly from the C code that derives the value instead of passing it back and forth between C and asm code. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20240214122845.2033971-60-ardb+git@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-02-16  arm64: idreg-override: Create a pseudo feature for rodata=off  (Ard Biesheuvel)
Add rodata=off to the set of kernel command line options that is parsed early using the CPU feature override detection code, so we can easily refer to it when creating the kernel mapping. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20240214122845.2033971-57-ardb+git@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-02-16  arm64: kaslr: Use feature override instead of parsing the cmdline again  (Ard Biesheuvel)
The early kaslr code open codes the detection of 'nokaslr' on the kernel command line, and this is no longer necessary now that the feature detection code, which also looks for the same string, executes before this code. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20240214122845.2033971-56-ardb+git@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-02-16  arm64: cpufeature: Add helper to test for CPU feature overrides  (Ard Biesheuvel)
Add some helpers to extract and apply feature overrides to the bare idreg values. This involves inspecting the value and mask of the specific field that we are interested in, given that an override value/mask pair might be invalid for one field but valid for another. Then, wire up the new helper for the hVHE test - note that we can drop the sysreg test here, as the override will be invalid when trying to enable hVHE on non-VHE capable hardware. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20240214122845.2033971-55-ardb+git@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
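The shape of such a helper can be sketched as follows, with a made-up name: a field-sized mask is derived from the shift and width, and the override only counts if its mask covers the whole field:

    /* Hypothetical sketch of an override query helper */
    static inline bool
    ftr_override_get(const struct arm64_ftr_override *ovr, int shift, int width,
                     u64 *val)
    {
            u64 mask = GENMASK_ULL(shift + width - 1, shift);

            if ((ovr->mask & mask) != mask)
                    return false;   /* no valid override for this field */

            *val = (ovr->val & mask) >> shift;
            return true;
    }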
2024-02-16  arm64: head: move dynamic shadow call stack patching into early C runtime  (Ard Biesheuvel)
Once we update the early kernel mapping code to only map the kernel once with the right permissions, we can no longer perform code patching via this mapping. So move this code to an earlier stage of the boot, right after applying the relocations. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20240214122845.2033971-54-ardb+git@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-02-16  arm64: head: Run feature override detection before mapping the kernel  (Ard Biesheuvel)
To permit the feature overrides to be taken into account before the KASLR init code runs and the kernel mapping is created, move the detection code to an earlier stage in the boot. In a subsequent patch, this will be taken advantage of by merging the preliminary and permanent mappings of the kernel text and data into a single one that gets created and relocated before start_kernel() is called. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20240214122845.2033971-53-ardb+git@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-02-16  arm64: Move feature overrides into the BSS section  (Ard Biesheuvel)
In order to allow the CPU feature override detection code to run even earlier, move the feature override global variables into BSS, which is the only part of the static kernel image that is mapped read-write in the initial ID map. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20240214122845.2033971-52-ardb+git@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-02-16  arm64: head: Clear BSS and the kernel page tables in one go  (Ard Biesheuvel)
We will move the CPU feature overrides into BSS in a subsequent patch, and this requires that BSS is zeroed before the feature override detection code runs. So let's map BSS read-write in the ID map, and zero it via this mapping. Since the kernel page tables are right next to it, and also zeroed via the ID map, let's drop the separate clear_page_tables() function, and just zero everything in one go. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20240214122845.2033971-51-ardb+git@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-02-16  arm64: kernel: Remove early fdt remap code  (Ard Biesheuvel)
The early FDT remap code is no longer used so let's drop it. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20240214122845.2033971-50-ardb+git@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-02-16  arm64: idreg-override: Move to early mini C runtime  (Ard Biesheuvel)
We will want to parse the ID register overrides even earlier, so that we can take them into account before creating the kernel mapping. So migrate the code and make it work in the context of the early C runtime. We will move the invocation to an earlier stage in a subsequent patch. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20240214122845.2033971-49-ardb+git@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-02-16  arm64: head: move relocation handling to C code  (Ard Biesheuvel)
Now that we have a mini C runtime before the kernel mapping is up, we can move the non-trivial relocation processing code out of head.S and reimplement it in C. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20240214122845.2033971-48-ardb+git@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-02-16  arm64: kernel: Don't rely on objcopy to make code under pi/ __init  (Ard Biesheuvel)
We will add some code under pi/ that contains global variables that should not end up in __initdata, as they will not be writable via the initial ID map. So only rely on objcopy for making the libfdt code __init, and use explicit annotations for the rest. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20240214122845.2033971-47-ardb+git@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-02-16  arm64: kernel: Manage absolute relocations in code built under pi/  (Ard Biesheuvel)
The mini C runtime runs before relocations are processed, and so it cannot rely on statically initialized pointer variables. Add a check to ensure that such code does not get introduced by accident, by going over the relocations in each object, identifying the ones that operate on data sections that are part of the executable image, and raising an error if any relocations of type R_AARCH64_ABS64 exist. Note that such relocations are permitted in other places (e.g., debug sections) and will never occur in compiler generated code sections when using the small code model, so only check sections that have SHF_ALLOC set and SHF_EXECINSTR cleared. To accommodate cases where statically initialized symbol references are unavoidable, introduce a special case for ELF input data sections that have ".rodata.prel64" in their names, and in these cases, instead of rejecting any encountered ABS64 relocations, convert them into PREL64 relocations, which don't require any runtime fixups. Note that the code in question must still be modified to deal with this, as it needs to convert the 64-bit signed offsets into absolute addresses before use. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20240214122845.2033971-46-ardb+git@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
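The consuming side of such a place-relative reference can be sketched as below; the helper name is made up, but the conversion (add the signed offset to the address of the offset itself) is the standard PREL64 idiom:

    /* Hypothetical helper: turn a 64-bit place-relative offset into a pointer */
    static inline void *prel64_to_pointer(const s64 *offset)
    {
            if (!*offset)
                    return NULL;

            return (void *)((u64)offset + *offset);
    }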
2024-02-15  arm64/sve: Lower the maximum allocation for the SVE ptrace regset  (Mark Brown)
Doug Anderson observed that ChromeOS crashes are being reported which include failing allocations of order 7 during core dumps due to ptrace allocating storage for regsets:

chrome: page allocation failure: order:7, mode:0x40dc0(GFP_KERNEL|__GFP_COMP|__GFP_ZERO), nodemask=(null),cpuset=urgent,mems_allowed=0
...
regset_get_alloc+0x1c/0x28
elf_core_dump+0x3d8/0xd8c
do_coredump+0xeb8/0x1378

with further investigation showing that this is:

[ 66.957385] DOUG: Allocating 279584 bytes

which is the maximum size of the SVE regset. As Doug observes, it is not entirely surprising that such a large allocation of contiguous memory might fail on a long running system.

The SVE regset is currently sized to hold SVE registers with a VQ of SVE_VQ_MAX, which is 512, substantially more than the architectural maximum of 16 which we might see even in a system emulating the limits of the architecture. Since we don't expose the size we tell the regset core externally, let's define ARCH_SVE_VQ_MAX with the actual architectural maximum and use that for the regset; we'll still overallocate most of the time, but much less so, which will be helpful even if the core is fixed to not require contiguous allocations. Specify ARCH_SVE_VQ_MAX in terms of the maximum value that can be written into ZCR_ELx.LEN (where this is set in the hardware). For consistency, update the maximum SME vector length to be specified in the same style while we are at it. We could also teach the ptrace core about runtime discoverable regset sizes, but that would be a more invasive change and this is being observed in practical systems. Reported-by: Doug Anderson <dianders@chromium.org> Signed-off-by: Mark Brown <broonie@kernel.org> Tested-by: Douglas Anderson <dianders@chromium.org> Link: https://lore.kernel.org/r/20240213-arm64-sve-ptrace-regset-size-v2-1-c7600ca74b9b@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
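In other words, the regset bound becomes a function of the architectural width of ZCR_ELx.LEN rather than the software constant SVE_VQ_MAX. A minimal sketch (constant names here are assumptions, not the kernel's actual definitions):

    /* Sketch: ZCR_ELx.LEN is a 4-bit field and VL = (LEN + 1) * 128 bits,
     * so the architectural maximum vector quadword count is 16. */
    #define ZCR_ELx_LEN_WIDTH       4
    #define ARCH_SVE_VQ_MAX         (1 << ZCR_ELx_LEN_WIDTH)        /* == 16 */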
2024-02-15  arm64: Subscribe Microsoft Azure Cobalt 100 to ARM Neoverse N2 errata  (Easwar Hariharan)
Add the MIDR value of Microsoft Azure Cobalt 100, which is a Microsoft implemented CPU based on r0p0 of the ARM Neoverse N2 CPU, and therefore suffers from all the same errata. CC: stable@vger.kernel.org # 5.15+ Signed-off-by: Easwar Hariharan <eahariha@linux.microsoft.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Oliver Upton <oliver.upton@linux.dev> Link: https://lore.kernel.org/r/20240214175522.2457857-1-eahariha@linux.microsoft.com Signed-off-by: Will Deacon <will@kernel.org>
2024-02-15  arm64: cpufeatures: Fix FEAT_NV check when checking for FEAT_NV1  (Marc Zyngier)
Using this_cpu_has_cap() has the potential to go wrong when used system-wide on a preemptible kernel. Instead, use the __system_matches_cap() helper when checking for FEAT_NV in the FEAT_NV1 probing helper. Fixes: 3673d01a2f55 ("arm64: cpufeatures: Only check for NV1 if NV is present") Reported-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Tested-by: Marek Szyprowski <m.szyprowski@samsung.com> Link: https://lore.kernel.org/kvmarm/86bk8k5ts3.wl-maz@kernel.org/ Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2024-02-12  arm64: cpufeatures: Only check for NV1 if NV is present  (Marc Zyngier)
We handle ID_AA64MMFR4_EL1.E2H0 being 0 as NV1 being present. However, this is only true if FEAT_NV is implemented. Add the required check to has_nv1(), avoiding spuriously advertising NV1 on HW that doesn't have NV at all. Fixes: da9af5071b25 ("arm64: cpufeature: Detect HCR_EL2.NV1 being RES0") Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20240212144736.1933112-3-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2024-02-12  arm64: cpufeatures: Add missing ID_AA64MMFR4_EL1 to __read_sysreg_by_encoding()  (Marc Zyngier)
When triggering a CPU hotplug scenario, we reparse the CPU feature with SCOPE_LOCAL_CPU, for which we use __read_sysreg_by_encoding() to get the HW value for this CPU. As it turns out, we're missing the handling for ID_AA64MMFR4_EL1, and trigger a BUG(). Funnily enough, Marek isn't completely happy about that. Add the damn register to the list. Fixes: 805bb61f8279 ("arm64: cpufeature: Add ID_AA64MMFR4_EL1 handling") Reported-by: Marek Szyprowski <m.szyprowski@samsung.com> Tested-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20240212144736.1933112-2-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
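__read_sysreg_by_encoding() is essentially a big switch over register encodings, so the fix amounts to one more entry of the form sketched below (sketched from memory, using the read_sysreg_case() pattern that function is built around):

    /* Sketch of the switch inside __read_sysreg_by_encoding() */
    switch (sys_id) {
    read_sysreg_case(SYS_ID_AA64MMFR0_EL1);
    read_sysreg_case(SYS_ID_AA64MMFR1_EL1);
    read_sysreg_case(SYS_ID_AA64MMFR2_EL1);
    read_sysreg_case(SYS_ID_AA64MMFR3_EL1);
    read_sysreg_case(SYS_ID_AA64MMFR4_EL1);        /* previously missing */
    /* ... remaining ID registers ... */
    default:
            BUG();
    }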
2024-02-09  arm64/signal: Don't assume that TIF_SVE means we saved SVE state  (Mark Brown)
When we are in a syscall we will only save the FPSIMD subset even though the task still has access to the full register set, and on context switch we will only remove TIF_SVE when loading the register state. This means that the signal handling code should not assume that TIF_SVE means that the register state is stored in SVE format, it should instead check the format that was recorded during save. Fixes: 8c845e273104 ("arm64/sve: Leave SVE enabled on syscall if we don't context switch") Signed-off-by: Mark Brown <broonie@kernel.org> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20240130-arm64-sve-signal-regs-v2-1-9fc6f9502782@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
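In code terms, the signal frame logic should key off the recorded save format rather than the TIF flag; a minimal sketch, with the wrapper name made up:

    /*
     * Sketch: the register layout is described by the recorded fp_type,
     * not by TIF_SVE, which can remain set after a syscall even though
     * only the FPSIMD subset was saved.
     */
    static bool task_saved_sve_state(struct task_struct *tsk)  /* hypothetical */
    {
            return tsk->thread.fp_type == FP_STATE_SVE;
    }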
2024-02-09  arm64: kaslr: Adjust randomization range dynamically  (Ard Biesheuvel)
Currently, we base the KASLR randomization range on a rough estimate of the available space in the upper VA region: the lower 1/4th has the module region and the upper 1/4th has the fixmap, vmemmap and PCI I/O ranges, and so we pick a random location in the remaining space in the middle. Once we enable support for 5-level paging with 4k pages, this no longer works: the vmemmap region, being dimensioned to cover a 52-bit linear region, takes up so much space in the upper VA region (the size of which is based on a 48-bit VA space for compatibility with non-LVA hardware) that the region above the vmalloc region takes up more than a quarter of the available space. So instead of a heuristic, let's derive the randomization range from the actual boundaries of the vmalloc region. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20231213084024.2367360-16-ardb@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com>
2024-02-08  KVM: arm64: Handle Apple M2 as not having HCR_EL2.NV1 implemented  (Marc Zyngier)
Although the Apple M2 family of CPUs can have HCR_EL2.NV1 being set and clear, with the change in trap behaviour being OK, they explode spectacularly on an EL2 S1 page table using the nVHE format. This is no good. Let's pretend this HW doesn't have NV1, and move along. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20240122181344.258974-11-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2024-02-08  arm64: Treat HCR_EL2.E2H as RES1 when ID_AA64MMFR4_EL1.E2H0 is negative  (Marc Zyngier)
For CPUs that have ID_AA64MMFR4_EL1.E2H0 as negative, it is important to avoid the boot path that sets HCR_EL2.E2H=0. Fortunately, we already have this path to cope with fruity CPUs. Tweak init_el2 to look at ID_AA64MMFR4_EL1.E2H0 first. Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20240122181344.258974-8-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2024-02-08  arm64: cpufeature: Detect HCR_EL2.NV1 being RES0  (Marc Zyngier)
A variant of FEAT_E2H0 not being implemented exists in the form of HCR_EL2.E2H being RES1 *and* HCR_EL2.NV1 being RES0 (indicating that only VHE is supported on the host and nested guests). Add the necessary infrastructure for this new CPU capability. Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20240122181344.258974-7-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2024-02-08  arm64: cpufeature: Add ID_AA64MMFR4_EL1 handling  (Marc Zyngier)
Add ID_AA64MMFR4_EL1 to the list of idregs the kernel knows about, and describe the E2H0 field. Reviewed-by: Oliver Upton <oliver.upton@linux.dev> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20240122181344.258974-6-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2024-02-08  arm64: cpufeature: Correctly display signed override values  (Marc Zyngier)
When a field gets overridden, the kernel indicates the result of the override in dmesg. This works well with unsigned fields, but results in a pretty ugly output when the field is signed. Truncate the field to its width before displaying it. Reviewed-by: Oliver Upton <oliver.upton@linux.dev> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20240122181344.258974-4-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2024-02-08  arm64: cpufeatures: Correctly handle signed values  (Marc Zyngier)
Although we've had signed values for some features such as PMUv3 and FP, the code that handles the comparison with some limit has a couple of annoying issues:

- the min_field_value is always unsigned, meaning that we cannot easily compare it with a negative value
- it is not possible to have a range of values, let alone a range of negative values

Fix this by:

- adding an upper limit to the comparison, defaulting to all bits being set to the maximum positive value
- ensuring that the signedness of the min and max values are taken into account

An ARM64_CPUID_FIELDS_NEG() macro is provided for signed features, but nothing is using it yet. Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20240122181344.258974-3-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
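The underlying comparison then becomes a range check that respects the field's signedness, along these lines (a simplified sketch, not the actual kernel helper):

    /* Simplified sketch of a sign-aware, range-based field match */
    static bool field_matches(u64 field, u64 min, u64 max, bool is_signed)
    {
            if (is_signed)
                    return (s64)field >= (s64)min && (s64)field <= (s64)max;

            return field >= min && field <= max;
    }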
2024-01-30  arm64: vdso32: Remove unused vdso32-offsets.h  (Kevin Brodsky)
Commit 2d071968a405 ("arm64: compat: Remove 32-bit sigreturn code from the vDSO") removed all VDSO_* symbols in the compat vDSO. As a result, vdso32-offsets.h is now empty and therefore unused. Time to remove it. Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com> Link: https://lore.kernel.org/r/20240129154748.1727759-1-kevin.brodsky@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2024-01-30  arm64: scs: Disable LTO for SCS patching code  (Ard Biesheuvel)
Full LTO takes the '-mbranch-protection=none' passed to the compiler when generating the dynamic shadow call stack patching code as a hint to stop emitting PAC instructions altogether. (Thin LTO appears unaffected by this.) Work around this by disabling LTO for the compilation unit, which appears to convince the linker that it should still use PAC in the rest of the kernel. Fixes: 3b619e22c460 ("arm64: implement dynamic shadow call stack for Clang") Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Sami Tolvanen <samitolvanen@google.com> Tested-by: Sami Tolvanen <samitolvanen@google.com> Link: https://lore.kernel.org/r/20240123133052.1417449-6-ardb+git@google.com Signed-off-by: Will Deacon <will@kernel.org>
2024-01-30  arm64: Revert "scs: Work around full LTO issue with dynamic SCS"  (Ard Biesheuvel)
This reverts commit 8c5a19cb17a71e ("arm64: scs: Work around full LTO issue with dynamic SCS"), which did not quite fix the issue as intended. Apparently, -fno-unwind-tables is ignored for the final full LTO link when it is set on any of the objects, resulting in an early boot crash due to the SCS patching code patching itself, and attempting to pop the return address from the shadow stack while the associated push was still a PACIASP instruction when it executed. Reported-by: Sami Tolvanen <samitolvanen@google.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Sami Tolvanen <samitolvanen@google.com> Tested-by: Sami Tolvanen <samitolvanen@google.com> Link: https://lore.kernel.org/r/20240123133052.1417449-5-ardb+git@google.com Signed-off-by: Will Deacon <will@kernel.org>