Age  Commit message  Author
2022-06-27  mm: ioremap: Use more sensible name in ioremap_prot()  (Kefeng Wang)
Use the more meaningful and sensible name phys_addr instead of addr in ioremap_prot(). Suggested-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Reviewed-by: Baoquan He <bhe@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20220610092255.32445-1-wangkefeng.wang@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
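For context, a minimal sketch of what the renamed parameter looks like in the generic helper; the body is elided and this is only an illustration of the naming, not the actual mm/ioremap.c implementation:

    /* sketch: the first parameter name now reflects that a physical address is expected */
    void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size, unsigned long prot)
    {
            /* ... map [phys_addr, phys_addr + size) with the given protection ... */
    }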
2022-06-27  ARM: mm: kill unused runtime hook arch_iounmap()  (Kefeng Wang)
Since the following commits, v5.4 commit 59d3ae9a5bf6 ("ARM: remove Intel iop33x and iop13xx support") and v5.11 commit 3e3f354bc383 ("ARM: remove ebsa110 platform"), the runtime hook arch_iounmap() on ARM has been useless. Kill arch_iounmap() and __iounmap(). Cc: Russell King <linux@armlinux.org.uk> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Reported-by: kernel test robot <lkp@intel.com> Link: https://lore.kernel.org/r/20220607125027.44946-2-wangkefeng.wang@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
2022-06-27  perf: hisi: Extract hisi_pmu_init  (Chen Jun)
Extract the initialization code of hisi_pmu->pmu into a function. Signed-off-by: Chen Jun <chenjun102@huawei.com> Link: https://lore.kernel.org/r/20220516131601.48383-1-chenjun102@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
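A rough sketch of what such a factored-out initializer could look like; the specific field assignments and callback names below are illustrative assumptions, not the actual hisi_uncore_pmu code:

    /* hypothetical sketch: gather the struct pmu setup in one helper */
    static void hisi_pmu_init(struct hisi_pmu *hisi_pmu, struct module *module)
    {
            struct pmu *pmu = &hisi_pmu->pmu;

            pmu->module      = module;
            pmu->task_ctx_nr = perf_invalid_context;
            pmu->event_init  = hisi_uncore_pmu_event_init;
            pmu->pmu_enable  = hisi_uncore_pmu_enable;
            pmu->pmu_disable = hisi_uncore_pmu_disable;
            /* ... remaining callbacks and capabilities ... */
    }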
2022-06-27  arm64: Copy the task argument to unwind_state  (Madhavan T. Venkataraman)
Copy the task argument passed to arch_stack_walk() to unwind_state so that it can be passed to unwind functions via unwind_state rather than as a separate argument. The task is a fundamental part of the unwind state. Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com> Reviewed-by: Mark Brown <broonie@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20220617180219.20352-3-madvenka@linux.microsoft.com Signed-off-by: Will Deacon <will@kernel.org>
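A sketch of the shape of the change, assuming the usual arm64 unwinder types (details simplified):

    struct unwind_state {
            unsigned long fp;
            unsigned long pc;
            struct task_struct *task;   /* new: task whose stack is being walked */
            /* ... stack boundaries etc. ... */
    };

    /* unwind helpers can now read state->task instead of taking a
     * separate task argument passed alongside the state */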
2022-06-27  arm64: Split unwind_init()  (Madhavan T. Venkataraman)
unwind_init() is currently a single function that initializes all of the unwind state. Split it into the following functions and call them appropriately: - unwind_init_from_regs() - initialize from regs passed by caller. - unwind_init_from_caller() - initialize for the current task from the caller of arch_stack_walk(). - unwind_init_from_task() - initialize from the saved state of a task other than the current task. In this case, the other task must not be running. This is done for two reasons: - the different ways of initializing are clear - specialized code can be added to each initializer in the future. Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com> Reviewed-by: Mark Brown <broonie@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20220617180219.20352-2-madvenka@linux.microsoft.com Signed-off-by: Will Deacon <will@kernel.org>
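A condensed sketch of the three initializers described above; the real functions also set up stack boundaries and other unwinder state, so treat this as an approximation rather than the exact patch:

    static void unwind_init_from_regs(struct unwind_state *state,
                                      struct pt_regs *regs)
    {
            state->fp = regs->regs[29];
            state->pc = regs->pc;
            state->task = current;
    }

    static __always_inline void unwind_init_from_caller(struct unwind_state *state)
    {
            state->fp = (unsigned long)__builtin_frame_address(1);
            state->pc = (unsigned long)__builtin_return_address(0);
            state->task = current;
    }

    /* the other task must not be running, so its saved context is stable */
    static void unwind_init_from_task(struct unwind_state *state,
                                      struct task_struct *task)
    {
            state->fp = thread_saved_fp(task);
            state->pc = thread_saved_pc(task);
            state->task = task;
    }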
2022-06-27  arm64/signal: Clean up SVE/SME feature checking inconsistency  (Mark Brown)
Currently when restoring signal state we check to see if SVE is supported in restore_sigframe() but check to see if SME is supported inside restore_sve_fpsimd_context(). This makes no real difference since SVE is always supported in systems with SME, but it looks a bit untidy and makes things slightly harder to follow, so move the SVE check next to the SME one in restore_sve_fpsimd_context(). Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20220624172108.555000-1-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
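An illustrative sketch of the resulting shape, with both feature checks side by side; user->sve and SVE_SIG_FLAG_SM follow the arm64 signal code, but the snippet is an approximation, not the exact patch:

    static int restore_sve_fpsimd_context(struct user_ctxs *user)
    {
            u16 flags;

            if (__get_user(flags, &user->sve->flags))
                    return -EFAULT;

            /* streaming-mode (SME) and regular SVE checks now live together */
            if (flags & SVE_SIG_FLAG_SM) {
                    if (!system_supports_sme())
                            return -EINVAL;
            } else if (!system_supports_sve()) {
                    return -EINVAL;
            }

            /* ... restore the register state ... */
            return 0;
    }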
2022-06-24  arm64: setup: drop early FDT pointer helpers  (Ard Biesheuvel)
We no longer need to call into the kernel to map the FDT before entering the kernel proper, so let's drop the helpers we added for this. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20220624150651.1358849-22-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-06-24  arm64: head: avoid relocating the kernel twice for KASLR  (Ard Biesheuvel)
Currently, when KASLR is in effect, we set up the kernel virtual address space twice: the first time, the KASLR seed is looked up in the device tree, and the kernel virtual mapping is torn down and recreated again, after which the relocations are applied a second time. The latter step means that statically initialized global pointer variables will be reset to their initial values, and to ensure that BSS variables are not set to values based on the initial translation, they are cleared again as well. All of this is needed because we need the command line (taken from the DT) to tell us whether or not to randomize the virtual address space before entering the kernel proper. However, this code has expanded little by little and now creates global state unrelated to the virtual randomization of the kernel before the mapping is torn down and set up again, and the BSS cleared for a second time. This has created some issues in the past, and it would be better to avoid this little dance if possible. So instead, let's use the temporary mapping of the device tree, and execute the bare minimum of code to decide whether or not KASLR should be enabled, and what the seed is. Only then, create the virtual kernel mapping, clear BSS, etc and proceed as normal. This avoids the issues around inconsistent global state due to BSS being cleared twice, and is generally more maintainable, as it permits us to defer all the remaining DT parsing and KASLR initialization to a later time. This means the relocation fixup code runs only a single time as well, allowing us to simplify the RELR handling code too, which is not idempotent and was therefore required to keep track of the offset that was applied the first time around. Note that this means we have to clone a pair of FDT library objects, so that we can control how they are built - we need the stack protector and other instrumentation disabled so that the code can tolerate being called this early. Note that only the kernel page tables and the temporary stack are mapped read-write at this point, which ensures that the early code does not modify any global state inadvertently. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20220624150651.1358849-21-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-06-24  arm64: kaslr: defer initialization to initcall where permitted  (Ard Biesheuvel)
The early KASLR init code runs extremely early, and anything that could be deferred until later should be. So let's defer the randomization of the module region until much later - this also simplifies the arithmetic, given that we no longer have to reason about the link time vs load time placement of the core kernel explicitly. Also get rid of the global status variable, and infer the status reported by the diagnostic print from other KASLR related context. While at it, get rid of the special case for KASAN without KASAN_VMALLOC, which never occurs in practice. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20220624150651.1358849-20-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-06-24  arm64: head: record CPU boot mode after enabling the MMU  (Ard Biesheuvel)
In order to avoid having to touch memory with the MMU and caches disabled, and therefore having to invalidate it from the caches explicitly, just defer storing the value until after the MMU has been turned on, unless we are giving up with an error. While at it, move the associated variable definitions into C code. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20220624150651.1358849-19-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-06-24  arm64: head: populate kernel page tables with MMU and caches on  (Ard Biesheuvel)
Now that we can access the entire kernel image via the ID map, we can execute the page table population code with the MMU and caches enabled. The only thing we need to ensure is that translations via TTBR1 remain disabled while we are updating the page tables the second time around, in case KASLR wants them to be randomized. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20220624150651.1358849-18-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-06-24  arm64: head: factor out TTBR1 assignment into a macro  (Ard Biesheuvel)
Create a macro load_ttbr1 to avoid having to repeat the same instruction sequence 3 times in a subsequent patch. No functional change intended. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20220624150651.1358849-17-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-06-24  arm64: idreg-override: use early FDT mapping in ID map  (Ard Biesheuvel)
Instead of calling into the kernel to map the FDT into the kernel page tables before even calling start_kernel(), let's switch to the initial, temporary mapping of the device tree that has been added to the ID map. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20220624150651.1358849-16-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-06-24  arm64: head: create a temporary FDT mapping in the initial ID map  (Ard Biesheuvel)
We need to access the DT very early to get at the command line and the KASLR seed, which currently means we rely on some hacks to call into the kernel before really calling into the kernel, which is undesirable. So instead, let's create a mapping for the FDT in the initial ID map, which is feasible now that it has been extended to cover more than a single page or block, and can be updated in place to remap other output addresses. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20220624150651.1358849-15-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-06-24  arm64: head: use relative references to the RELA and RELR tables  (Ard Biesheuvel)
Formerly, we had to access the RELA and RELR tables via the kernel mapping that was being relocated, and so deriving the start and end addresses using ADRP/ADD references was not possible, as the relocation code runs from the ID map. Now that we map the entire kernel image via the ID map, we can simplify this, and just load the entries via the ID map as well. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20220624150651.1358849-14-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-06-24  arm64: head: cover entire kernel image in initial ID map  (Ard Biesheuvel)
As a first step towards avoiding the need to create, tear down and recreate the kernel virtual mapping with MMU and caches disabled, start by expanding the ID map so it covers the page tables as well as all executable code. This will allow us to populate the page tables with the MMU and caches on, and call KASLR init code before setting up the virtual mapping. Since this ID map is only needed at boot, create it as a temporary set of page tables, and populate the permanent ID map after enabling the MMU and caches. While at it, switch to read-only attributes where possible, as writable permissions are only needed for the initial kernel page tables. Note that on 4k granule configurations, the permanent ID map will now be reduced to a single page rather than a 2M block mapping. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20220624150651.1358849-13-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-06-24  arm64: head: add helper function to remap regions in early page tables  (Ard Biesheuvel)
The asm macros used to create the initial ID map and kernel mappings don't support randomly remapping parts of the address space after it has been populated. What we can do, however, given that all block or page mappings are created at the final level, is take a subset of the mapped range and update its attributes or output address. This will permit us to make parts of these page tables read-only, or remap a part of it to cover the device tree. So add a helper that encapsulates this. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20220624150651.1358849-12-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-06-24  arm64: mm: provide idmap pointer to cpu_replace_ttbr1()  (Ard Biesheuvel)
In preparation for changing the way we initialize the permanent ID map, update cpu_replace_ttbr1() so we can use it with the initial ID map as well. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20220624150651.1358849-11-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-06-24  arm64: head: pass ID map root table address to __enable_mmu()  (Ard Biesheuvel)
We will be adding an initial ID map that covers the entire kernel image, so we will pass the actual ID map root table to use to __enable_mmu(), rather than hard code it. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20220624150651.1358849-10-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-06-24  arm64: kernel: drop unnecessary PoC cache clean+invalidate  (Ard Biesheuvel)
Some early boot code runs before the virtual placement of the kernel is finalized, and we used to go back to the very start and recreate the ID map along with the page tables describing the virtual kernel mapping, and this involved setting some global variables with the caches off. In order to ensure that global state created by the KASLR code is not corrupted by the cache invalidation that occurs in that case, we needed to clean those global variables to the PoC explicitly. This is no longer needed now that the ID map is created only once (and the associated global variable updates are no longer repeated). So drop the cache maintenance that is no longer necessary. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20220624150651.1358849-9-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-06-24  arm64: head: split off idmap creation code  (Ard Biesheuvel)
Split off the creation of the ID map page tables, so that we can avoid running it again unnecessarily when KASLR is in effect (which only randomizes the virtual placement). This will permit us to drop some explicit cache maintenance to the PoC which was necessary because the cache invalidation being performed on some global variables might otherwise clobber unrelated variables that happen to share a cacheline. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20220624150651.1358849-8-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-06-24  arm64: head: switch to map_memory macro for the extended ID map  (Ard Biesheuvel)
In a future patch, we will start using an ID map that covers the entire image, rather than a single page. This means that we need to deal with the pathological case of an extended ID map where the kernel image does not fit neatly inside a single entry at the root level, which means we will need to create additional table entries and map additional pages for page tables. The existing map_memory macro already takes care of most of that, so let's just extend it to deal with this case as well. While at it, drop the conditional branch on the value of T0SZ: we don't set the variable anymore in the entry code, and so we can just let the map_memory macro deal with the case where the output address exceeds VA_BITS. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20220624150651.1358849-7-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-06-24  arm64: head: simplify page table mapping macros (slightly)  (Ard Biesheuvel)
Simplify the macros in head.S that are used to set up the early page tables, by switching to immediates for the number of bits that are interpreted as the table index at each level. This makes it much easier to infer from the instruction stream what is going on, and reduces the number of instructions emitted substantially. Note that the extended ID map for cases where no additional level needs to be configured now uses a compile time size as well, which means that we interpret up to 10 bits as the table index at the root level (for 52-bit physical addressing), without taking into account whether or not this is supported on the current system. However, those bits can only be set if we are executing the image from an address that exceeds the 48-bit PA range, and are guaranteed to be cleared otherwise, and given that we are dealing with a mapping in the lower TTBR0 range of the address space, the result is therefore the same as if we'd mask off only 6 bits. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20220624150651.1358849-6-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-06-24  arm64: head: drop idmap_ptrs_per_pgd  (Ard Biesheuvel)
The assignment of idmap_ptrs_per_pgd lacks any cache invalidation, even though it is updated with the MMU and caches disabled. However, we never bother to read the value again except in the very next instruction, and so we can just drop the variable entirely. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20220624150651.1358849-5-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-06-24  arm64: head: move assignment of idmap_t0sz to C code  (Ard Biesheuvel)
Setting idmap_t0sz involves fiddling with the caches if done with the MMU off. Since we will be creating an initial ID map with the MMU and caches off, and the permanent ID map with the MMU and caches on, let's move this assignment of idmap_t0sz out of the startup code, and replace it with a macro that simply issues the three instructions needed to calculate the value wherever it is needed before the MMU is turned on. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20220624150651.1358849-4-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-06-24  arm64: mm: make vabits_actual a build time constant if possible  (Ard Biesheuvel)
Currently, we only support 52-bit virtual addressing on 64k pages configurations, and in all other cases, vabits_actual is guaranteed to equal VA_BITS (== VA_BITS_MIN). So get rid of the variable entirely in that case. While at it, move the assignment out of the asm entry code - it has no need to be there. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20220624150651.1358849-3-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-06-24  arm64: head: move kimage_vaddr variable into C file  (Ard Biesheuvel)
This variable definition does not need to be in head.S so move it out. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20220624150651.1358849-2-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-06-24  perf/marvell_cn10k: Fix TAD PMU register offset  (Tanmay Jagdale)
The existing offsets of the TAD_PRF and TAD_PFC registers are incorrect. Hence, fix them with the right register offsets. Also, drop the read of the TAD_PRF register in tad_pmu_event_counter_start() since we don't have to preserve any bit fields and always write an updated value. Signed-off-by: Tanmay Jagdale <tanmay@marvell.com> Link: https://lore.kernel.org/r/20220614171356.773967-1-tanmay@marvell.com Signed-off-by: Will Deacon <will@kernel.org>
2022-06-24  perf/marvell_cn10k: Remove useless license text when SPDX-License-Identifier is already used  (Christophe JAILLET)
An SPDX-License-Identifier is already in place. There is no need to duplicate part of the corresponding license. Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Link: https://lore.kernel.org/r/4a8016a6da9cc6815cfa0f97ae8d3dd862797bda.1654936653.git.christophe.jaillet@wanadoo.fr Signed-off-by: Will Deacon <will@kernel.org>
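For illustration only, a hypothetical file header showing the convention: the SPDX tag on the first line already identifies the license, so the boilerplate text is redundant.

    // SPDX-License-Identifier: GPL-2.0
    /*
     * Marvell CN10K PMU driver (example header)
     *
     * No need to repeat the GPL text here; the SPDX tag above is authoritative.
     */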
2022-06-24  arm64: entry: simplify trampoline data page  (Ard Biesheuvel)
Get rid of some clunky open coded arithmetic on section addresses, by emitting the trampoline data variables into a separate, dedicated r/o data section, and putting it at the next page boundary. This way, we can access the literals via a single LDR instruction. While at it, get rid of other, implicit literals, and use ADRP/ADD or MOVZ/MOVK sequences, as appropriate. Note that the latter are only supported for CONFIG_RELOCATABLE=n (which is usually the case if CONFIG_RANDOMIZE_BASE=n), so update the CPP conditionals to reflect this. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20220622161010.3845775-1-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-06-23  arm64: trap implementation defined functionality in userspace  (Kristina Martsenko)
The Arm v8.8 extension adds a new control FEAT_TIDCP1 that allows the kernel to disable all implementation-defined system registers and instructions in userspace. This can improve robustness against covert channels between processes, for example in cases where the firmware or hardware didn't disable that functionality by default. The kernel does not currently support any implementation-defined features, as there are no hwcaps for any such features, so disable all imp-def features unconditionally. Any use of imp-def instructions will result in a SIGILL being delivered to the process (same as for undefined instructions). Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com> Link: https://lore.kernel.org/r/20220622115424.683520-1-kristina.martsenko@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2022-06-23  Documentation/arm64: update memory layout table.  (Andre Mueller)
Commit b89ddf4cca43 ("arm64/bpf: Remove 128MB limit for BPF JIT programs") removes the bpf jit region from the memory layout of the AArch64 architecture. However, it forgets to update the documentation accordingly. - Remove the bpf jit region. - Fix the Start and End addresses of the modules region. - Fix the Start address of the vmalloc region. Signed-off-by: Andre Mueller <am@emlix.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20220621081651.61755-1-am@emlix.com Signed-off-by: Will Deacon <will@kernel.org>
2022-06-23  arm64: kcsan: Support detecting more missing memory barriers  (Kefeng Wang)
As "kcsan: Support detecting a subset of missing memory barriers"[1] introduced KCSAN_STRICT/KCSAN_WEAK_MEMORY which make kcsan detects more missing memory barrier, but arm64 don't have KCSAN instrumentation for barriers, so the new selftest test_barrier() and test cases for memory barrier instrumentation in kcsan_test module will fail, even panic on selftest. Let's prefix all barriers with __ on arm64, as asm-generic/barriers.h defined the final instrumented version of these barriers, which will fix the above issues. Note, barrier instrumentation that can be disabled via __no_kcsan with appropriate compiler-support (and not just with objtool help), see commit bd3d5bd1a0ad ("kcsan: Support WEAK_MEMORY with Clang where no objtool support exists"), it adds disable_sanitizer_instrumentation to __no_kcsan attribute which will remove all sanitizer instrumentation fully (with Clang 14.0). Meanwhile, GCC does the same thing with no_sanitize. [1] https://lore.kernel.org/linux-mm/20211130114433.2580590-1-elver@google.com/ Acked-by: Marco Elver <elver@google.com> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20220523113126.171714-3-wangkefeng.wang@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
2022-06-23  asm-generic: Add memory barrier dma_mb()  (Kefeng Wang)
The memory barrier dma_mb() is introduced by commit a76a37777f2c ("iommu/arm-smmu-v3: Ensure queue is read after updating prod pointer"), which is used to ensure that prior (both reads and writes) accesses to memory by a CPU are ordered w.r.t. a subsequent MMIO write. Reviewed-by: Arnd Bergmann <arnd@arndb.de> # for asm-generic Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Reviewed-by: Marco Elver <elver@google.com> Link: https://lore.kernel.org/r/20220523113126.171714-2-wangkefeng.wang@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
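A hedged usage sketch of the barrier; the queue structure, field names and doorbell register below are made up for illustration and are not from any real driver:

    struct fake_queue {
            u64 *ring;              /* descriptor ring in normal memory */
            u32 prod;               /* producer index */
            void __iomem *doorbell; /* MMIO doorbell register */
    };

    static void fake_queue_submit(struct fake_queue *q, u64 desc)
    {
            q->ring[q->prod++] = desc;      /* plain memory write */

            /* ensure prior reads and writes to memory are observed
             * before the MMIO write that tells the device to look */
            dma_mb();

            writel_relaxed(q->prod, q->doorbell);
    }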
2022-06-23  arm64: boot: add zstd support  (Jisheng Zhang)
Add support for building the zstd-compressed Image.zst. As with other compressed formats, the Image.zst is not self-decompressing and the bootloader still needs to handle decompression before launching the kernel image. Signed-off-by: Jisheng Zhang <jszhang@kernel.org> Link: https://lore.kernel.org/r/20220619170657.2657-1-jszhang@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-06-23  arm64: cpufeature: Allow different PMU versions in ID_DFR0_EL1  (Alexandru Elisei)
Commit b20d1ba3cf4b ("arm64: cpufeature: allow for version discrepancy in PMU implementations") made it possible to run Linux on a machine with PMUs with different versions without tainting the kernel. The patch relaxed the restriction only for the ID_AA64DFR0_EL1.PMUVer field, and missed doing the same for ID_DFR0_EL1.PerfMon, which also reports the PMU version, but for the AArch32 state. For example, with Linux running on two clusters with different PMU versions, the kernel is tainted when bringing up secondaries with the following message:
  [ 0.097027] smp: Bringing up secondary CPUs ...
  [..]
  [ 0.142805] Detected PIPT I-cache on CPU4
  [ 0.142805] CPU features: SANITY CHECK: Unexpected variation in SYS_ID_DFR0_EL1. Boot CPU: 0x00000004011088, CPU4: 0x00000005011088
  [ 0.143555] CPU features: Unsupported CPU feature variation detected.
  [ 0.143702] GICv3: CPU4: found redistributor 10000 region 0:0x000000002f180000
  [ 0.143702] GICv3: CPU4: using allocated LPI pending table @0x00000008800d0000
  [ 0.144888] CPU4: Booted secondary processor 0x0000010000 [0x410fd0f0]
The boot CPU implements FEAT_PMUv3p1 (ID_DFR0_EL1.PerfMon, bits 27:24, is 0b0100), but CPU4, part of the other cluster, implements FEAT_PMUv3p4 (ID_DFR0_EL1.PerfMon = 0b0101). Treat the PerfMon field as FTR_NONSTRICT and FTR_EXACT to pass the sanity check and to match how PMUVer is treated for the 64bit ID register. Fixes: b20d1ba3cf4b ("arm64: cpufeature: allow for version discrepancy in PMU implementations") Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Link: https://lore.kernel.org/r/20220617111332.203061-1-alexandru.elisei@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2022-06-23  arm64: mm: install KPTI nG mappings with MMU enabled  (Ard Biesheuvel)
In cases where we unmap the kernel while running in user space, we rely on ASIDs to distinguish the minimal trampoline from the full kernel mapping, and this means we must use non-global attributes for those mappings, to ensure they are scoped by ASID and will not hit in the TLB inadvertently. We only do this when needed, as this is generally more costly in terms of TLB pressure, and so we boot without these non-global attributes, and apply them to all existing kernel mappings once all CPUs are up and we know whether or not the non-global attributes are needed. At this point, we cannot simply unmap and remap the entire address space, so we have to update all existing block and page descriptors in place. Currently, we go through a lot of trouble to perform these updates with the MMU and caches off, to avoid violating break before make (BBM) rules imposed by the architecture. Since we make changes to page tables that are not covered by the ID map, we gain access to those descriptors by disabling translations altogether. This means that the stores to memory are issued with device attributes, and require extra care in terms of coherency, which is costly. We also rely on the ID map to access a shared flag, which requires the ID map to be executable and writable at the same time, which is another thing we'd prefer to avoid. So let's switch to an approach where we replace the kernel mapping with a minimal mapping of a few pages that can be used for a minimal, ad-hoc fixmap that we can use to map each page table in turn as we traverse the hierarchy. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20220609174320.4035379-3-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-06-23  arm64: kpti-ng: simplify page table traversal logic  (Ard Biesheuvel)
Simplify the KPTI G-to-nG asm helper code by: - pulling the 'table bit' test into the get/put macros so we can combine them and incorporate the entire loop; - moving the 'table bit' test after the update of bit #11 so we no longer need separate next_xxx and skip_xxx labels; - redefining the pmd/pud register aliases and the next_pmd/next_pud labels instead of branching to them if the number of configured page table levels is less than 3 or 4, respectively. No functional change intended, except for the fact that we now descend into a next level table after setting bit #11 on its descriptor but this should make no difference in practice. While at it, switch to .L prefixed local labels so they don't clutter up the symbol tables, kallsyms, etc, and clean up the indentation for legibility. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20220609174320.4035379-2-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-06-23  arm64/sme: Expose SMIDR through sysfs  (Mark Brown)
We currently expose MIDR and REVIDR to userspace through sysfs to enable it to make decisions based on the specific implementation. Since SME supports implementations where streaming mode is provided by a separate hardware unit called a SMCU, it provides a similar ID register, SMIDR. Expose it to userspace via sysfs when the system supports SME along with the other ID registers. Since we disable the SME priority mapping feature if it is supported by hardware, we currently mask out the SMPS bit which reports that it is supported. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20220607132857.1358361-1-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-06-23  arm64: compat: Move kuser32.S to .rodata section  (Chen Zhongjin)
Kuser code should be inside .rodata. Now code in kuser32.S is inside .text section and never executed. Move it to .rodata. Signed-off-by: Chen Zhongjin <chenzhongjin@huawei.com> Link: https://lore.kernel.org/r/20220531015350.233827-1-chenzhongjin@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
2022-06-23  arm64: stacktrace: use non-atomic __set_bit  (Andrey Konovalov)
Use the non-atomic version of set_bit() in arch/arm64/kernel/stacktrace.c, as there are no concurrent accesses to frame->prev_type. This speeds up stack trace collection and improves the boot time of Generic KASAN by 2-5%. Suggested-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Link: https://lore.kernel.org/r/23dfa36d1cc91e4a1059945b7834eac22fb9854d.1653317461.git.andreyknvl@google.com Signed-off-by: Will Deacon <will@kernel.org>
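The change amounts to the pattern below; nr and bitmap are placeholders, not the actual names in stacktrace.c:

    /* before: atomic read-modify-write, only needed when other CPUs or
     * threads may race on the same word */
    set_bit(nr, bitmap);

    /* after: plain load/OR/store, fine for data private to the unwinder */
    __set_bit(nr, bitmap);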
2022-06-23  arm64: kasan: do not instrument stacktrace.c  (Andrey Konovalov)
Disable KASAN instrumentation of arch/arm64/kernel/stacktrace.c. This speeds up Generic KASAN by 5-20%. As a side-effect, KASAN is now unable to detect bugs in the stack trace collection code. This is taken as an acceptable downside. Also replace READ_ONCE_NOCHECK() with READ_ONCE() in stacktrace.c. As the file is now not instrumented, there is no need to use the NOCHECK version of READ_ONCE(). Suggested-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Link: https://lore.kernel.org/r/c4c944a2a905e949760fbeb29258185087171708.1653317461.git.andreyknvl@google.com Signed-off-by: Will Deacon <will@kernel.org>
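The READ_ONCE_NOCHECK() part of the change is mechanical; a sketch with a placeholder pointer fp_ptr:

    unsigned long old_way = READ_ONCE_NOCHECK(*fp_ptr);  /* before: skip KASAN checks
                                                            while the file was instrumented */
    unsigned long new_way = READ_ONCE(*fp_ptr);          /* after: the file is no longer
                                                            instrumented, so plain READ_ONCE()
                                                            suffices */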
2022-06-23  perf/arm-cci: fix typo in comment  (Julia Lawall)
Spelling mistake (triple letters) in comment. Detected with the help of Coccinelle. Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr> Link: https://lore.kernel.org/r/20220521111145.81697-67-Julia.Lawall@inria.fr Signed-off-by: Will Deacon <will@kernel.org>
2022-06-23  drivers/perf:Directly use ida_alloc()/free()  (keliu)
Use ida_alloc()/ida_free() instead of the deprecated ida_simple_get()/ida_simple_remove(). Signed-off-by: keliu <liuke94@huawei.com> Link: https://lore.kernel.org/r/20220519080127.147030-2-liuke94@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
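The conversion is a drop-in rename with slightly simpler arguments; a minimal sketch with a hypothetical IDA:

    static DEFINE_IDA(example_ida);

    static int example_get_id(void)
    {
            /* old (deprecated): ida_simple_get(&example_ida, 0, 0, GFP_KERNEL); */
            return ida_alloc(&example_ida, GFP_KERNEL);   /* returns id >= 0 or -errno */
    }

    static void example_put_id(int id)
    {
            /* old (deprecated): ida_simple_remove(&example_ida, id); */
            ida_free(&example_ida, id);
    }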
2022-06-23  drivers/perf: Directly use ida_alloc()/free()  (keliu)
Use ida_alloc()/ida_free() instead of the deprecated ida_simple_get()/ida_simple_remove(). Signed-off-by: keliu <liuke94@huawei.com> Link: https://lore.kernel.org/r/20220519080127.147030-1-liuke94@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
2022-06-23  arm64: select TRACE_IRQFLAGS_NMI_SUPPORT  (Mark Rutland)
Due to an oversight, on arm64 lockdep IRQ state tracking doesn't work as intended in NMI context. This demonstrably results in bogus warnings from lockdep, and in theory could mask a variety of issues. On arm64, we've consistently tracked IRQ flag state for NMIs (and saved/restored the state of the interrupted context) since commit: f0cd5ac1e4c53cb6 ("arm64: entry: fix NMI {user, kernel}->kernel transitions") That commit fixed most lockdep issues with NMI by virtue of the save/restore of the lockdep state of the interrupted context. However, for lockdep IRQ state tracking to consistently take effect in NMI context it has been necessary to select TRACE_IRQFLAGS_NMI_SUPPORT since commit: ed00495333ccc80f ("locking/lockdep: Fix TRACE_IRQFLAGS vs. NMIs") As arm64 does not select TRACE_IRQFLAGS_NMI_SUPPORT, this means that the lockdep state can be stale in NMI context, and some uses of that state can consume stale data. When an NMI is taken arm64 entry code will call arm64_enter_nmi(). This will enter NMI context via __nmi_enter() before calling lockdep_hardirqs_off() to inform lockdep that IRQs have been masked. Where TRACE_IRQFLAGS_NMI_SUPPORT is not selected, lockdep_hardirqs_off() will not update lockdep state if called in NMI context. Thus if IRQs were enabled in the original context, lockdep will continue to believe that IRQs are enabled despite the call to lockdep_hardirqs_off(). However, the lockdep_assert_*() checks do take effect in NMI context, and will consume the stale lockdep state. If an NMI is taken from a context which had IRQs enabled, and during the handling of the NMI something calls lockdep_assert_irqs_disabled(), this will result in a spurious warning based upon the stale lockdep state. This can be seen when using perf with GICv3 pseudo-NMIs. 
Within the perf NMI handler we may attempt a uaccess to record the userspace callchain, and is this faults the el1_abort() call in the nested context will call exit_to_kernel_mode() when returning, which has a lockdep_assert_irqs_disabled() assertion: | # ./perf record -a -g sh | ------------[ cut here ]------------ | WARNING: CPU: 0 PID: 164 at arch/arm64/kernel/entry-common.c:73 exit_to_kernel_mode+0x118/0x1ac | Modules linked in: | CPU: 0 PID: 164 Comm: perf Not tainted 5.18.0-rc5 #1 | Hardware name: linux,dummy-virt (DT) | pstate: 004003c5 (nzcv DAIF +PAN -UAO -TCO -DIT -SSBS BTYPE=--) | pc : exit_to_kernel_mode+0x118/0x1ac | lr : el1_abort+0x80/0xbc | sp : ffff8000080039f0 | pmr_save: 000000f0 | x29: ffff8000080039f0 x28: ffff6831054e4980 x27: ffff683103adb400 | x26: 0000000000000000 x25: 0000000000000001 x24: 0000000000000001 | x23: 00000000804000c5 x22: 00000000000000c0 x21: 0000000000000001 | x20: ffffbd51e635ec44 x19: ffff800008003a60 x18: 0000000000000000 | x17: ffffaadf98d23000 x16: ffff800008004000 x15: 0000ffffd14f25c0 | x14: 0000000000000000 x13: 00000000000018eb x12: 0000000000000040 | x11: 000000000000001e x10: 000000002b820020 x9 : 0000000100110000 | x8 : 000000000045cac0 x7 : 0000ffffd14f25c0 x6 : ffffbd51e639b000 | x5 : 00000000000003e5 x4 : ffffbd51e58543b0 x3 : 0000000000000001 | x2 : ffffaadf98d23000 x1 : ffff6831054e4980 x0 : 0000000100110000 | Call trace: | exit_to_kernel_mode+0x118/0x1ac | el1_abort+0x80/0xbc | el1h_64_sync_handler+0xa4/0xd0 | el1h_64_sync+0x74/0x78 | __arch_copy_from_user+0xa4/0x230 | get_perf_callchain+0x134/0x1e4 | perf_callchain+0x7c/0xa0 | perf_prepare_sample+0x414/0x660 | perf_event_output_forward+0x80/0x180 | __perf_event_overflow+0x70/0x13c | perf_event_overflow+0x1c/0x30 | armv8pmu_handle_irq+0xe8/0x160 | armpmu_dispatch_irq+0x2c/0x70 | handle_percpu_devid_fasteoi_nmi+0x7c/0xbc | generic_handle_domain_nmi+0x3c/0x60 | gic_handle_irq+0x1dc/0x310 | call_on_irq_stack+0x2c/0x54 | do_interrupt_handler+0x80/0x94 | el1_interrupt+0xb0/0xe4 | el1h_64_irq_handler+0x18/0x24 | el1h_64_irq+0x74/0x78 | lockdep_hardirqs_off+0x50/0x120 | trace_hardirqs_off+0x38/0x214 | _raw_spin_lock_irq+0x98/0xa0 | pipe_read+0x1f8/0x404 | new_sync_read+0x140/0x150 | vfs_read+0x190/0x1dc | ksys_read+0xdc/0xfc | __arm64_sys_read+0x20/0x30 | invoke_syscall+0x48/0x114 | el0_svc_common.constprop.0+0x158/0x17c | do_el0_svc+0x28/0x90 | el0_svc+0x60/0x150 | el0t_64_sync_handler+0xa4/0x130 | el0t_64_sync+0x19c/0x1a0 | irq event stamp: 483 | hardirqs last enabled at (483): [<ffffbd51e636aa24>] _raw_spin_unlock_irqrestore+0xa4/0xb0 | hardirqs last disabled at (482): [<ffffbd51e636acd0>] _raw_spin_lock_irqsave+0xb0/0xb4 | softirqs last enabled at (468): [<ffffbd51e5216f58>] put_cpu_fpsimd_context+0x28/0x70 | softirqs last disabled at (466): [<ffffbd51e5216ed4>] get_cpu_fpsimd_context+0x0/0x5c | ---[ end trace 0000000000000000 ]--- Note that as lockdep_assert_irqs_disabled() uses WARN_ON_ONCE(), and this uses a BRK, the warning is logged with the real PSTATE at the time of the warning, which clearly has DAIF.I set, meaning IRQs (and pseudo-NMIs) were definitely masked and the warning is spurious. Fix this by selecting TRACE_IRQFLAGS_NMI_SUPPORT such that the existing entry tracking takes effect, as we had originally intended when the arm64 entry code was fixed for transitions to/from NMI. 
Arguably the lockdep_assert_*() functions should have the same NMI checks as the rest of the code to prevent spurious warnings when TRACE_IRQFLAGS_NMI_SUPPORT is not selected, but the real fix for any architecture is to explicitly handle the transitions to/from NMI in the entry code. Fixes: f0cd5ac1e4c5 ("arm64: entry: fix NMI {user, kernel}->kernel transitions") Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20220511131733.4074499-3-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2022-06-23  arch: make TRACE_IRQFLAGS_NMI_SUPPORT generic  (Mark Rutland)
On most architectures, IRQ flag tracing is disabled in NMI context, and architectures need to define and select TRACE_IRQFLAGS_NMI_SUPPORT in order to enable this. Commit: 859d069ee1ddd878 ("lockdep: Prepare for NMI IRQ state tracking") Permitted IRQ flag tracing in NMI context, allowing lockdep to work in NMI context where an architecture had suitable entry logic. At the time, most architectures did not have such suitable entry logic, and this broke lockdep on such architectures. Thus, this was partially disabled in commit: ed00495333ccc80f ("locking/lockdep: Fix TRACE_IRQFLAGS vs. NMIs") ... with architectures needing to select TRACE_IRQFLAGS_NMI_SUPPORT to enable IRQ flag tracing in NMI context. Currently TRACE_IRQFLAGS_NMI_SUPPORT is defined under arch/x86/Kconfig.debug. Move it to arch/Kconfig so architectures can select it without having to provide their own definition. Since the regular TRACE_IRQFLAGS_SUPPORT is selected by arch/x86/Kconfig, the select of TRACE_IRQFLAGS_NMI_SUPPORT is moved there too. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20220511131733.4074499-2-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2022-06-23  arm64: vdso32: enable orphan handling for VDSO  (Joey Gouly)
Like vmlinux, enable orphan-handling for the compat VDSO32. This can catch subtle errors that might arise from unexpected sections being included. Signed-off-by: Joey Gouly <joey.gouly@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Masahiro Yamada <masahiroy@kernel.org> Cc: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Kees Cook <keescook@chromium.org> Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Link: https://lore.kernel.org/r/20220510095834.32394-5-joey.gouly@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2022-06-23  arm64: vdso32: put ELF related sections in the linker script  (Joey Gouly)
Use macros from vmlinux.lds.h to explicitly name sections that are included in the compat VDSO32 output. Signed-off-by: Joey Gouly <joey.gouly@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Masahiro Yamada <masahiroy@kernel.org> Cc: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Kees Cook <keescook@chromium.org> Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Link: https://lore.kernel.org/r/20220510095834.32394-4-joey.gouly@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2022-06-23  arm64: vdso: enable orphan handling for VDSO  (Joey Gouly)
Like vmlinux, enable orphan-handling for the VDSO. This can catch subtle errors that might arise from unexpected sections being included. Signed-off-by: Joey Gouly <joey.gouly@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Masahiro Yamada <masahiroy@kernel.org> Cc: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Kees Cook <keescook@chromium.org> Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Link: https://lore.kernel.org/r/20220510095834.32394-3-joey.gouly@arm.com Signed-off-by: Will Deacon <will@kernel.org>