path: root/arch/arm/kernel/entry-armv.S
Age  Commit message  Author
2023-08-14  Merge branch 'devel-stable' into for-next  [Russell King (Oracle)]
2023-07-06  Merge tag 'asm-generic-6.5' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic  [Linus Torvalds]
Pull asm-generic updates from Arnd Bergmann:
 "These are cleanups for architecture specific header files:

  - the comments in include/linux/syscalls.h have gone out of sync and
    are really pointless, so these get removed

  - The asm/bitsperlong.h header no longer needs to be architecture
    specific on modern compilers, so use a generic version for newer
    architectures that use new enough userspace compilers

  - A cleanup for virt_to_pfn/virt_to_bus to have proper type checking,
    forcing the use of pointers"

* tag 'asm-generic-6.5' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic:
  syscalls: Remove file path comments from headers
  tools arch: Remove uapi bitsperlong.h of hexagon and microblaze
  asm-generic: Unify uapi bitsperlong.h for arm64, riscv and loongarch
  m68k/mm: Make pfn accessors static inlines
  arm64: memory: Make virt_to_pfn() a static inline
  ARM: mm: Make virt_to_pfn() a static inline
  asm-generic/page.h: Make pfn accessors static inlines
  xen/netback: Pass (void *) to virt_to_page()
  netfs: Pass a pointer to virt_to_page()
  cifs: Pass a pointer to virt_to_page() in cifsglob
  cifs: Pass a pointer to virt_to_page()
  riscv: mm: init: Pass a pointer to virt_to_page()
  ARC: init: Pass a pointer to virt_to_pfn() in init
  m68k: Pass a pointer to virt_to_pfn() virt_to_page()
  fs/proc/kcore.c: Pass a pointer to virt_addr_valid()
2023-06-12  arm: update in-source documentation references  [Jonathan Corbet]
The Arm documentation has moved to Documentation/arch/arm; update references within arch/arm to match. Cc: Russell King <linux@armlinux.org.uk> Cc: Alim Akhtar <alim.akhtar@samsung.com> Cc: Patrice Chotard <patrice.chotard@foss.st.com> Cc: linux-doc@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Cc: linux-arch@vger.kernel.org Signed-off-by: Jonathan Corbet <corbet@lwn.net>
2023-05-29  ARM: mm: Make virt_to_pfn() a static inline  [Linus Walleij]
Making virt_to_pfn() a static inline taking a strongly typed (const void *) makes the contract of passing a pointer of that type to the function explicit and exposes any misuse of the macro virt_to_pfn() acting polymorphic and accepting many types such as (void *), (uintptr_t) or (unsigned long) as arguments without warnings.

Doing this is a bit intrusive: virt_to_pfn() requires PHYS_PFN_OFFSET and PAGE_SHIFT to be defined, and this is defined in <asm/page.h>, so this must be included *before* <asm/memory.h>. The use of macros was obscuring the unclear inclusion order here, as the macros would eventually be resolved, but a static inline like this cannot be compiled with unresolved macros.

The naive solution to include <asm/page.h> at the top of <asm/memory.h> does not work, because <asm/memory.h> sometimes includes <asm/page.h> at the end of itself, which would create a confusing inclusion loop. So instead, take the approach to always unconditionally include <asm/page.h> at the end of <asm/memory.h>.

arch/arm uses <asm/memory.h> explicitly in a lot of places; however, it turns out that if we just unconditionally include <asm/memory.h> into <asm/page.h> and switch all inclusions of <asm/memory.h> to <asm/page.h> instead, we enforce the right order and <asm/memory.h> will always have access to the definitions. Put an inclusion guard in place making it impossible to include <asm/memory.h> explicitly. Link: https://lore.kernel.org/linux-mm/20220701160004.2ffff4e5ab59a55499f4c736@linux-foundation.org/ Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
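For illustration, a small standalone C sketch of the conversion described above, from a polymorphic macro to a strongly typed static inline. The constants below are example values chosen for the demo, not the kernel's definitions, and the exact arch/arm formula may differ:

    /* demo of the macro-to-static-inline conversion; example constants only */
    #include <stdio.h>

    #define PAGE_OFFSET     0xC0000000UL   /* example 3G/1G split */
    #define PAGE_SHIFT      12
    #define PHYS_PFN_OFFSET 0x60000UL      /* example: RAM starting at 0x60000000 */

    /* old style: polymorphic macro, silently accepts void *, unsigned long, ... */
    #define virt_to_pfn_macro(kaddr) \
            (((((unsigned long)(kaddr)) - PAGE_OFFSET) >> PAGE_SHIFT) + PHYS_PFN_OFFSET)

    /* new style: only pointer arguments compile without a warning */
    static inline unsigned long virt_to_pfn(const void *p)
    {
            unsigned long kaddr = (unsigned long)p;

            return ((kaddr - PAGE_OFFSET) >> PAGE_SHIFT) + PHYS_PFN_OFFSET;
    }

    int main(void)
    {
            const void *kaddr = (const void *)0xC0100000UL;

            printf("pfn = 0x%lx\n", virt_to_pfn(kaddr));
            /* virt_to_pfn(0xC0100000UL) would now draw a compiler warning */
            return 0;
    }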
2023-05-17  ARM: entry: Make asm coproc dispatch code NWFPE only  [Ard Biesheuvel]
Now that we can dispatch all VFP and iWMMXT related undef exceptions using undef hooks implemented in C code, we no longer need the asm entry code that takes care of this unless we are using FPE, so we can move it into the FPE entry code. As this means it is ARM only, we can remove the Thumb2 specific decorations as well. It also means the non-standard, asm-only calling convention where returning via LR means failure and returning via R9 means success is now only used on legacy platforms that lack any kind of function return prediction, avoiding the associated performance impact. Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2023-05-17  ARM: iwmmxt: Use undef hook to enable coprocessor for task  [Ard Biesheuvel]
Define an undef hook to deal with undef exceptions triggered by iwmmxt instructions that were issued with the coprocessor disabled. This removes the dependency on the coprocessor dispatch code in entry-armv.S, which will be made NWFPE-only in a subsequent patch. Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
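A hedged sketch of the undef_hook mechanism referred to here, using the existing arch/arm API (struct undef_hook / register_undef_hook). The instruction mask and value below are placeholders, not the real iWMMXT coprocessor encodings used by the actual patch, and the handler body is only a stub:

    #include <linux/init.h>
    #include <linux/ptrace.h>
    #include <asm/traps.h>

    static int iwmmxt_undef_handler(struct pt_regs *regs, unsigned int instr)
    {
            /* enable the coprocessor for the current task here, then return 0
             * so the faulting instruction is retried; nonzero means "not ours" */
            return 0;
    }

    static struct undef_hook iwmmxt_undef_hook = {
            .instr_mask = 0x0f000000,       /* placeholder mask  */
            .instr_val  = 0x0e000000,       /* placeholder value */
            .cpsr_mask  = MODE_MASK,
            .cpsr_val   = USR_MODE,         /* user-mode faults only */
            .fn         = iwmmxt_undef_handler,
    };

    static int __init iwmmxt_hook_init(void)
    {
            register_undef_hook(&iwmmxt_undef_hook);
            return 0;
    }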
2023-05-17  ARM: entry: Disregard Thumb undef exception in coproc dispatch  [Ard Biesheuvel]
Now that the only remaining coprocessor instructions being handled via the dispatch in entry-armv.S are ones that only exist in an ARM (A32) encoding, we can simplify the handling of Thumb undef exceptions, and send them straight to the undefined instruction handlers in C code. This also means we can drop the code that partially decodes the instruction to decide whether it is a 16-bit or 32-bit Thumb instruction: this is all taken care of by the undef hook. Acked-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2023-05-17  ARM: vfp: Use undef hook for handling VFP exceptions  [Ard Biesheuvel]
Now that the VFP support code has been reimplemented as a C function that takes a struct pt_regs pointer and an opcode, we can use the existing undef_hook framework to deal with undef exceptions triggered by VFP instructions instead of having special handling in assembler. Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2023-05-17  ARM: kernel: Get rid of thread_info::used_cp[] array  [Ard Biesheuvel]
We keep track of which coprocessor triggered a fault in the used_cp[] array in thread_info, but this data is never used anywhere. So let's remove it. Linus did some digging and found out that the last user of this field was removed in commit bb1a773d5b6b ("kill unused dump_fpu() instances"). Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2022-05-20  ARM: 9201/1: spectre-bhb: rely on linker to emit cross-section literal loads  [Ard Biesheuvel]
The assembler does not permit 'LDR PC, <sym>' when the symbol lives in a different section, which is why we have been relying on rather fragile open-coded arithmetic to load the address of the vector_swi routine into the program counter using a single LDR instruction in the SWI slot in the vector table. The literal was moved to a different section in commit 19accfd373847 ("ARM: move vector stubs") to ensure that the vector stubs page does not need to be mapped readable for user space, which is the case for the vector page itself, as it carries the kuser helpers as well.

So the cross-section literal load is open-coded, and this relies on the address of vector_swi to be at the very start of the vector stubs page, and we won't notice if we got it wrong until booting the kernel and seeing it break. Fortunately, it was guaranteed to break, so this was fragile but not problematic. Now that we have added two other variants of the vector table, we have 3 occurrences of the same trick, and so the size of our ISA/compiler/CPU validation space has tripled, in a way that may cause regressions to only be observed once booting the image in question on a CPU that exercises a particular vector table.

So let's switch to true cross section references, and let the linker fix them up like it fixes up all the other cross section references in the vector page. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-05-20  ARM: 9200/1: spectre-bhb: avoid cross-subsection jump using a numbered label  [Ard Biesheuvel]
In order to minimize potential confusion regarding numbered labels appearing in a different order in the assembler output due to the use of subsections, use a named local label to jump back into the vector handler code from the associated loop8 mitigation sequence. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-05-20  ARM: 9199/1: spectre-bhb: use local DSB and elide ISB in loop8 sequence  [Ard Biesheuvel]
The loop8 mitigation for Spectre-BHB only requires a CPU local DSB rather than a systemwide one, which is much more costly. And by the same reasoning as why it is justified to omit the ISB after BPIALL, we can also elide the ISB and rely on the exception return for the context synchronization. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-05-20  ARM: 9198/1: spectre-bhb: simplify BPIALL vector macro  [Ard Biesheuvel]
The BPIALL mitigation for Spectre-BHB adds a single instruction to the handler sequence that doesn't clobber any registers. Given that these sequences are 10 instructions long, they don't fit neatly into a cacheline anyway, so we can simply move that single instruction to the start of the unmitigated one, and rearrange the symbol names accordingly. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-05-20  ARM: 9195/1: entry: avoid explicit literal loads  [Ard Biesheuvel]
ARMv7 has MOVW/MOVT instruction pairs to load symbol addresses into registers without having to rely on literal loads that go via the D-cache. For older cores, we now support a similar arrangement, based on PC-relative group relocations. This means we can elide most literal loads entirely from the entry path, by switching to the ldr_va macro to emit the appropriate sequence depending on the target architecture revision. While at it, switch to the bl_r macro for invoking the right PABT/DABT helpers instead of setting the LR register explicitly, which does not play well with cores that speculate across function returns. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-05-18  ARM: 9197/1: spectre-bhb: fix loop8 sequence for Thumb2  [Ard Biesheuvel]
In Thumb2, 'b . + 4' produces a branch instruction that uses a narrow encoding, and so it does not jump to the following instruction as expected. So use W(b) instead. Fixes: 6c7cb60bff7a ("ARM: fix Thumb2 regression with Spectre BHB") Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-03-23  Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm  [Linus Torvalds]
Pull ARM updates from Russell King:
 "Updates for IRQ stacks and virtually mapped stack support, and ftrace:

  - Support for IRQ and vmap'ed stacks

    This covers all the work related to implementing IRQ stacks and
    vmap'ed stacks for all 32-bit ARM systems that are currently
    supported by the Linux kernel, including RiscPC and Footbridge.

    It has been submitted for review in four different waves:

      - IRQ stacks support for v7 SMP systems [0]
      - vmap'ed stacks support for v7 SMP systems [1]
      - extending support for both IRQ stacks and vmap'ed stacks for all
        remaining configurations, including v6/v7 SMP multiplatform
        kernels and uniprocessor configurations including v7-M [2]
      - fixes and updates in [3]

  - ftrace fixes and cleanups

    Make all flavors of ftrace available on all builds, regardless of
    ISA choice, unwinder choice or compiler [4]:

      - use ADD not POP where possible
      - fix a couple of Thumb2 related issues
      - enable HAVE_FUNCTION_GRAPH_FP_TEST for robustness
      - enable the graph tracer with the EABI unwinder
      - avoid clobbering frame pointer registers to make Clang happy

  - Fixes for the above"

[0] https://lore.kernel.org/linux-arm-kernel/20211115084732.3704393-1-ardb@kernel.org/
[1] https://lore.kernel.org/linux-arm-kernel/20211122092816.2865873-1-ardb@kernel.org/
[2] https://lore.kernel.org/linux-arm-kernel/20211206164659.1495084-1-ardb@kernel.org/
[3] https://lore.kernel.org/linux-arm-kernel/20220124174744.1054712-1-ardb@kernel.org/
[4] https://lore.kernel.org/linux-arm-kernel/20220203082204.1176734-1-ardb@kernel.org/

* tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm: (62 commits)
  ARM: fix building NOMMU ARMv4/v5 kernels
  ARM: unwind: only permit stack switch when unwinding call_with_stack()
  ARM: Revert "unwind: dump exception stack from calling frame"
  ARM: entry: fix unwinder problems caused by IRQ stacks
  ARM: unwind: set frame.pc correctly for current-thread unwinding
  ARM: 9184/1: return_address: disable again for CONFIG_ARM_UNWIND=y
  ARM: 9183/1: unwind: avoid spurious warnings on bogus code addresses
  Revert "ARM: 9144/1: forbid ftrace with clang and thumb2_kernel"
  ARM: mach-bcm: disable ftrace in SMC invocation routines
  ARM: cacheflush: avoid clobbering the frame pointer
  ARM: kprobes: treat R7 as the frame pointer register in Thumb2 builds
  ARM: ftrace: enable the graph tracer with the EABI unwinder
  ARM: unwind: track location of LR value in stack frame
  ARM: ftrace: enable HAVE_FUNCTION_GRAPH_FP_TEST
  ARM: ftrace: avoid unnecessary literal loads
  ARM: ftrace: avoid redundant loads or clobbering IP
  ARM: ftrace: use trampolines to keep .init.text in branching range
  ARM: ftrace: use ADD not POP to counter PUSH at entry
  ARM: ftrace: ensure that ADR takes the Thumb bit into account
  ARM: make get_current() and __my_cpu_offset() __always_inline
  ...
2022-03-11  ARM: fix Thumb2 regression with Spectre BHB  [Russell King (Oracle)]
When building for Thumb2, the vectors make use of a local label. Sadly, the Spectre BHB code also uses a local label with the same number which results in the Thumb2 reference pointing at the wrong place. Fix this by changing the number used for the Spectre BHB local label. Fixes: b9baf5c8c5c3 ("ARM: Spectre-BHB workaround") Tested-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2022-03-11  ARM: entry: fix unwinder problems caused by IRQ stacks  [Ard Biesheuvel]
The IRQ stacks series made some changes to the unwinder, to permit unwinding across different stacks. This is needed because otherwise, the call stack would terminate at the point where the stack switch between the task stack and the IRQ stack occurs, which would defeat any diagnostics that rely on timer interrupts, such as RCU stall detection.

Unfortunately, getting the unwind annotations correct turns out to be difficult, given that this now involves a frame pointer which needs to point into the right location in the task stack when unwinding from the IRQ stack. Getting this wrong for an exception handling routine results in the stack pointer being unwound from the wrong location, causing any subsequent unwind attempts to cause all kinds of issues, as reported by Naresh here [0].

So let's simplify this, by deferring the stack switch to call_with_stack(), which already has the correct unwind annotations, and removing all the complicated handling of the stack frame from the IRQ exception entrypoint itself. [0] https://lore.kernel.org/all/CA+G9fYtpy8VgK+ag6OsA9TDrwi5YGU4hu7GM8xwpO7v6LrCD4Q@mail.gmail.com/ Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-03-05  ARM: Spectre-BHB workaround  [Russell King (Oracle)]
Workaround the Spectre BHB issues for Cortex-A15, Cortex-A57, Cortex-A72, Cortex-A73 and Cortex-A75. We also include Brahma B15 as well to be safe, which is affected by Spectre V2 in the same ways as Cortex-A15. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-01-25  ARM: entry: avoid clobbering R9 in IRQ handler  [Ard Biesheuvel]
Avoid using R9 in the IRQ handler code, as the entry code uses it for tsk, and expects it to remain untouched between the IRQ entry and exit code. Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2021-12-17  ARM: 9169/1: entry: fix Thumb2 bug in iWMMXt exception handling  [Ard Biesheuvel]
The Thumb2 version of the FP exception handling entry code treats the register holding the CP number (R8) differently, resulting in the iWMMXT CP number check being incorrect. Fix this by unifying the ARM and Thumb2 code paths, and switching the order of the additions of the TI_USED_CP offset and the shifted CP index. Cc: <stable@vger.kernel.org> Fixes: b86040a59feb ("Thumb-2: Implementation of the unified start-up and exceptions code") Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2021-12-06  ARM: implement THREAD_INFO_IN_TASK for uniprocessor systems  [Ard Biesheuvel]
On UP systems, only a single task can be 'current' at the same time, which means we can use a global variable to track it. This means we can also enable THREAD_INFO_IN_TASK for those systems, as in that case, thread_info is accessed via current rather than the other way around, removing the need to store thread_info at the base of the task stack. This, in turn, permits us to enable IRQ stacks and vmap'ed stacks on UP systems as well.

To partially mitigate the performance overhead of this arrangement, use an ADD/ADD/LDR sequence with the appropriate PC-relative group relocations to load the value of current when needed. This means that accessing current will still only require a single load as before, avoiding the need for a literal to carry the address of the global variable in each function. However, accessing thread_info will now require this load as well. Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Nicolas Pitre <nico@fluxnic.net> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Tested-by: Marc Zyngier <maz@kernel.org> Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M
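A plain-C illustration of the idea above, not the actual implementation (which loads the variable through the optimized ADD/ADD/LDR sequence mentioned in the commit). The symbol name below is made up for the sketch:

    #include <linux/compiler.h>

    struct task_struct;

    #ifndef CONFIG_SMP
    /* hypothetical global: the one and only 'current' task on a UP system */
    extern struct task_struct *__current_task;

    static __always_inline struct task_struct *get_current_sketch(void)
    {
            return __current_task;
    }

    static __always_inline void set_current_sketch(struct task_struct *tsk)
    {
            __current_task = tsk;
    }
    #endif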
2021-12-06  ARM: percpu: add SMP_ON_UP support  [Ard Biesheuvel]
Permit the use of the TPIDRPRW system register for carrying the per-CPU offset in generic SMP configurations that also target non-SMP capable ARMv6 cores. This uses the SMP_ON_UP code patching framework to turn all TPIDRPRW accesses into reads/writes of entry #0 in the __per_cpu_offset array. While at it, switch over some existing direct TPIDRPRW accesses in asm code to invocations of a new helper that is patched in the same way when necessary. Note that CPU_V6+SMP without SMP_ON_UP results in a kernel that does not boot on v6 CPUs without SMP extensions, so add this dependency to Kconfig as well. Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Nicolas Pitre <nico@fluxnic.net> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Tested-by: Marc Zyngier <maz@kernel.org> Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M
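As a sketch, the per-CPU offset read described here boils down to reading TPIDRPRW, which is CP15 c13/c0/4. The SMP_ON_UP patching that rewrites this access into a load of __per_cpu_offset[0] on non-SMP-capable CPUs is not shown:

    /* illustrative helper, not the kernel's exact definition */
    static inline unsigned long my_cpu_offset_sketch(void)
    {
            unsigned long off;

            /* read the software thread ID register reserved for the kernel */
            asm("mrc p15, 0, %0, c13, c0, 4" : "=r" (off));
            return off;
    }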
2021-12-06  ARM: assembler: add optimized ldr/str macros to load variables from memory  [Ard Biesheuvel]
We will be adding variable loads to various hot paths, so it makes sense to add a helper macro that can load variables from asm code without the use of literal pool entries. On v7 or later, we can simply use MOVW/MOVT pairs, but on earlier cores, this requires a bit of hackery to emit an instruction sequence that implements this using a sequence of ADD/LDR instructions. Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Nicolas Pitre <nico@fluxnic.net> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Tested-by: Marc Zyngier <maz@kernel.org> Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M
2021-12-06  ARM: entry: preserve thread_info pointer in switch_to  [Ard Biesheuvel]
Tweak the UP stack protector handling code so that the thread info pointer is preserved in R7 until set_current is called. This is needed for a subsequent patch that implements THREAD_INFO_IN_TASK and set_current for UP as well. This also means we will prefer the per-task protector on UP systems that implement the thread ID registers, so tweak the preprocessor conditionals to reflect this. Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Nicolas Pitre <nico@fluxnic.net> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Tested-by: Marc Zyngier <maz@kernel.org> Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M
2021-12-06  ARM: remove old-style irq entry  [Arnd Bergmann]
The last user of arch_irq_handler_default is gone now, so the entry-macro-multi.S file and all references to mach/entry-macro.S can be removed, as well as the asm_do_IRQ() entrypoint into the interrupt handling routines implemented in C. Note: The ARMv7-M entry still uses its own top-level IRQ entry, calling nvic_handle_irq() from assembly. This could be changed to go through generic_handle_arch_irq() as well, but it's unclear to me if there are any benefits. Signed-off-by: Arnd Bergmann <arnd@arndb.de> [ardb: keep irq_handler macro as it carries all the IRQ stack handling] Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Tested-by: Marc Zyngier <maz@kernel.org> Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
2021-12-03  ARM: implement support for vmap'ed stacks  [Ard Biesheuvel]
Wire up the generic support for managing task stack allocations via vmalloc, and implement the entry code that detects whether we faulted because of a stack overrun (or future stack overrun caused by pushing the pt_regs array) While this adds a fair amount of tricky entry asm code, it should be noted that it only adds a TST + branch to the svc_entry path. The code implementing the non-trivial handling of the overflow stack is emitted out-of-line into the .text section. Since on ARM, we rely on do_translation_fault() to keep PMD level page table entries that cover the vmalloc region up to date, we need to ensure that we don't hit such a stale PMD entry when accessing the stack. So we do a dummy read from the new stack while still running from the old one on the context switch path, and bump the vmalloc_seq counter when PMD level entries in the vmalloc range are modified, so that the MM switch fetches the latest version of the entries. Note that we need to increase the per-mode stack by 1 word, to gain some space to stash a GPR until we know it is safe to touch the stack. However, due to the cacheline alignment of the struct, this does not actually increase the memory footprint of the struct stack array at all. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Tested-by: Keith Packard <keithpac@amazon.com> Tested-by: Marc Zyngier <maz@kernel.org> Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M
2021-12-03  ARM: entry: rework stack realignment code in svc_entry  [Ard Biesheuvel]
The original Thumb-2 enablement patches updated the stack realignment code in svc_entry to work around the lack of a STMIB instruction in Thumb-2, by subtracting 4 from the frame size, inverting the sense of the misalignment check, and changing to a STMIA instruction and a final stack push of a 4 byte quantity that results in the stack becoming aligned at the end of the sequence. It also pushes and pops R0 to the stack in order to have a temp register that Thumb-2 allows in general purpose ALU instructions, as TST using SP is not permitted.

Both are a bit problematic for vmap'ed stacks, as using the stack is only permitted after we decide that we did not overflow the stack, or have already switched to the overflow stack. As for the alignment check: the current approach creates a corner case where, if the initial SUB of SP ends up right at the start of the stack, we will end up subtracting another 8 bytes and overflowing it. This means we would need to add the overflow check *after* the SUB that deliberately misaligns the stack. However, this would require us to keep local state (i.e., whether we performed the subtract or not) across the overflow check, but without any GPRs or stack available.

So let's switch to an approach where we don't use the stack, and where the alignment check of the stack pointer occurs in the usual way, as this is guaranteed not to result in overflow. This means we will be able to do the overflow check first. While at it, switch to R1 so the mode stack pointer in R0 remains accessible. Acked-by: Nicolas Pitre <nico@fluxnic.net> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Tested-by: Marc Zyngier <maz@kernel.org> Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M
2021-12-03  ARM: switch_to: clean up Thumb2 code path  [Ard Biesheuvel]
The load-multiple instruction that essentially performs the switch_to operation in ARM mode, by loading all callee save registers as well the stack pointer and the program counter, is split into 3 separate loads for Thumb-2, with the IP register used as a temporary to capture the value of R4 before it gets overwritten. We can clean this up a bit, by sticking with a single LDMIA instruction, but one that pops SP and PC into IP and LR, respectively, and by using ordinary move register and branch instructions to get those values into SP and PC. This also allows us to move the set_current call closer to the assignment of SP, reducing the window where those are mutually out of sync. This is especially relevant for CONFIG_VMAP_STACK, which is being introduced in a subsequent patch, where we need to issue a load that might fault from the new stack while running from the old one, to ensure that stale PMD entries in the VMALLOC space are synced up. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Tested-by: Keith Packard <keithpac@amazon.com> Tested-by: Marc Zyngier <maz@kernel.org> Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M
2021-12-03  ARM: implement IRQ stacks  [Ard Biesheuvel]
Now that we no longer rely on the stack pointer to access the current task struct or thread info, we can implement support for IRQ stacks cleanly as well. Define a per-CPU IRQ stack and switch to this stack when taking an IRQ, provided that we were not already using that stack in the interrupted context. This is never the case for IRQs taken from user space, but ones taken while running in the kernel could fire while one taken from user space has not completed yet. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Linus Walleij <linus.walleij@linaro.org> Tested-by: Keith Packard <keithpac@amazon.com> Acked-by: Nick Desaulniers <ndesaulniers@google.com> Tested-by: Marc Zyngier <maz@kernel.org> Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M
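A conceptual C model of the check described here; the real logic lives in the IRQ entry assembly, and the names and stack size below are illustrative only:

    #include <stdint.h>

    #define IRQ_STACK_SIZE 8192     /* example size, not the kernel's */

    static unsigned long irq_stack[IRQ_STACK_SIZE / sizeof(unsigned long)];

    /* was the interrupted context already running on the IRQ stack? */
    static int on_irq_stack(uintptr_t sp)
    {
            uintptr_t low  = (uintptr_t)irq_stack;
            uintptr_t high = low + IRQ_STACK_SIZE;

            return sp >= low && sp < high;
    }

    static void irq_entry_sketch(void (*handler)(void), uintptr_t interrupted_sp)
    {
            if (on_irq_stack(interrupted_sp)) {
                    /* nested interrupt taken while on the IRQ stack: stay put */
                    handler();
                    return;
            }

            /* in the real entry code, SP is switched to the top of this CPU's
             * IRQ stack here, the handler runs, and the old SP is restored */
            handler();
    }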
2021-11-02  Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm  [Linus Torvalds]
Pull ARM updates from Russell King:

 - Rejig task/thread info to place thread info in task struct
 - Amba bus cleanups (removing unused functions)
 - Handle Amba device probe without IRQ domains
 - Parse linux,usable-memory-range in decompressor
 - Mark OCRAM as read-only after initialisation
 - Refactor page fault handling
 - Fix PXN handling with LPAE kernels
 - Warning and build fixes from Arnd

* tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm: (32 commits)
  ARM: 9151/1: Thumb2: avoid __builtin_thread_pointer() on Clang
  ARM: 9150/1: Fix PID_IN_CONTEXTIDR regression when THREAD_INFO_IN_TASK=y
  ARM: 9147/1: add printf format attribute to early_print()
  ARM: 9146/1: RiscPC needs older gcc version
  ARM: 9145/1: patch: fix BE32 compilation
  ARM: 9144/1: forbid ftrace with clang and thumb2_kernel
  ARM: 9143/1: add CONFIG_PHYS_OFFSET default values
  ARM: 9142/1: kasan: work around LPAE build warning
  ARM: 9140/1: allow compile-testing without machine record
  ARM: 9137/1: disallow CONFIG_THUMB with ARMv4
  ARM: 9136/1: ARMv7-M uses BE-8, not BE-32
  ARM: 9135/1: kprobes: address gcc -Wempty-body warning
  ARM: 9101/1: sa1100/assabet: convert LEDs to gpiod APIs
  ARM: 9131/1: mm: Fix PXN process with LPAE feature
  ARM: 9130/1: mm: Provide die_kernel_fault() helper
  ARM: 9126/1: mm: Kill page table base print in show_pte()
  ARM: 9127/1: mm: Cleanup access_error()
  ARM: 9129/1: mm: Kill task_struct argument for __do_page_fault()
  ARM: 9128/1: mm: Refactor the __do_page_fault()
  ARM: imx6: mark OCRAM mapping read-only
  ...
2021-10-25  irq: arm: perform irqentry in entry code  [Mark Rutland]
In preparation for removing HANDLE_DOMAIN_IRQ_IRQENTRY, have arch/arm perform all the irqentry accounting in its entry code. For configurations with CONFIG_GENERIC_IRQ_MULTI_HANDLER, we can use generic_handle_arch_irq(). Other than asm_do_IRQ(), all C calls to handle_IRQ() are from irqchip handlers which will be called from generic_handle_arch_irq(), so to avoid double accounting IRQ entry, the entry logic is moved from handle_IRQ() into asm_do_IRQ(). For ARMv7M the entry assembly is tightly coupled with the NVIC irqchip, and while the entry code should logically live under arch/arm/, moving the entry logic there makes things more convoluted. So for now, place the entry logic in the NVIC irqchip, but separated into a separate function to make the split of responsibility clear. For all other configurations without CONFIG_GENERIC_IRQ_MULTI_HANDLER, IRQ entry is already handled in arch code, and requires no changes. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M Cc: Russell King <linux@armlinux.org.uk> Cc: Thomas Gleixner <tglx@linutronix.de>
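The rough shape of the change described above, as a hedged sketch rather than the exact diff: the irqentry accounting that used to live in handle_IRQ() sits at the architecture entry point, so irqchip handlers reached through generic_handle_arch_irq() are not accounted twice:

    #include <linux/hardirq.h>
    #include <linux/irq.h>
    #include <linux/linkage.h>
    #include <asm/irq_regs.h>

    /* sketch of the entry point doing its own accounting */
    asmlinkage void asm_do_IRQ_sketch(unsigned int irq, struct pt_regs *regs)
    {
            struct pt_regs *old_regs = set_irq_regs(regs);

            irq_enter();
            generic_handle_irq(irq);
            irq_exit();
            set_irq_regs(old_regs);
    }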
2021-09-27  ARM: smp: Enable THREAD_INFO_IN_TASK  [Ard Biesheuvel]
Now that we no longer rely on thread_info living at the base of the task stack to be able to access the 'current' pointer, we can wire up the generic support for moving thread_info into the task struct itself. Note that this requires us to update the cpu field in thread_info explicitly, now that the core code no longer does so. Ideally, we would switch the percpu code to access the cpu field in task_struct instead, but this unleashes #include circular dependency hell. Co-developed-by: Keith Packard <keithpac@amazon.com> Signed-off-by: Keith Packard <keithpac@amazon.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
2021-09-27  ARM: smp: Store current pointer in TPIDRURO register if available  [Ard Biesheuvel]
Now that the user space TLS register is assigned on every return to user space, we can use it to keep the 'current' pointer while running in the kernel. This removes the need to access it via thread_info, which is located at the base of the stack, but will be moved out of there in a subsequent patch.

Use the __builtin_thread_pointer() helper when available - this will help GCC understand that reloading the value within the same function is not necessary, even when using the per-task stack protector (which also generates accesses via the TLS register). For example, the generated code below loads TPIDRURO only once, and uses it to access both the stack canary and the preempt_count fields.

  <do_one_initcall>:
    e92d 41f0   stmdb   sp!, {r4, r5, r6, r7, r8, lr}
    ee1d 4f70   mrc     15, 0, r4, cr13, cr0, {3}
    4606        mov     r6, r0
    b094        sub     sp, #80         ; 0x50
    f8d4 34e8   ldr.w   r3, [r4, #1256] ; 0x4e8   <- stack canary
    9313        str     r3, [sp, #76]   ; 0x4c
    f8d4 8004   ldr.w   r8, [r4, #4]              <- preempt count

Co-developed-by: Keith Packard <keithpac@amazon.com> Signed-off-by: Keith Packard <keithpac@amazon.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
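A sketch of the access pattern described here: __builtin_thread_pointer() reads the same CP15 register (c13/c0/3, TPIDRURO) that the fallback inline asm reads explicitly. The preprocessor guard is illustrative, not the kernel's actual feature test:

    #include <linux/compiler.h>

    struct task_struct;

    static __always_inline struct task_struct *get_current_sketch(void)
    {
    #ifdef USE_BUILTIN_THREAD_POINTER       /* illustrative guard */
            return (struct task_struct *)__builtin_thread_pointer();
    #else
            struct task_struct *cur;

            asm("mrc p15, 0, %0, c13, c0, 3" : "=r" (cur));  /* TPIDRURO */
            return cur;
    #endif
    }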
2021-08-04  ARM: ep93xx: remove MaverickCrunch support  [Arnd Bergmann]
The MaverickCrunch support for ep93xx never made it into glibc and was removed from gcc in its 4.8 release in 2012. It is now one of the last parts of arch/arm/ that fails to build with the clang integrated assembler, which is unlikely to ever want to support it. The two alternatives are to force the use of binutils/gas when building the crunch support, or to remove it entirely. According to Hartley Sweeten: "Martin Guy did a lot of work trying to get the maverick crunch working but I was never able to successfully use it for anything. It "kind" of works but depending on the EP93xx silicon revision there are still a number of hardware bugs that either give imprecise or garbage results. I have no problem with removing the kernel support for the maverick crunch." Unless someone else comes up with a good reason to keep it around, remove it now. This touches mostly the ep93xx platform, but removes a bit of code from ARM common ptrace and signal frame handling as well. If there are remaining users of MaverickCrunch, they can use LTS kernels for at least another five years before kernel support ends. Link: https://lore.kernel.org/linux-arm-kernel/20210802141245.1146772-1-arnd@kernel.org/ Link: https://lore.kernel.org/linux-arm-kernel/20210226164345.3889993-1-arnd@kernel.org/ Link: https://github.com/ClangBuiltLinux/linux/issues/1272 Link: https://gcc.gnu.org/legacy-ml/gcc/2008-03/msg01063.html Cc: "Martin Guy" <martinwguy@martinwguy@gmail.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2020-12-08  ARM: 9030/1: entry: omit FP emulation for UND exceptions taken in kernel mode  [Ard Biesheuvel]
There are a couple of problems with the exception entry code that deals with FP exceptions (which are reported as UND exceptions) when building the kernel in Thumb2 mode: - the conditional branch to vfp_kmode_exception in vfp_support_entry() may be out of range for its target, depending on how the linker decides to arrange the sections; - when the UND exception is taken in kernel mode, the emulation handling logic is entered via the 'call_fpe' label, which means we end up using the wrong value/mask pairs to match and detect the NEON opcodes. Since UND exceptions in kernel mode are unlikely to occur on a hot path (as opposed to the user mode version which is invoked for VFP support code and lazy restore), we can use the existing undef hook machinery for any kernel mode instruction emulation that is needed, including calling the existing vfp_kmode_exception() routine for unexpected cases. So drop the call to call_fpe, and instead, install an undef hook that will get called for NEON and VFP instructions that trigger an UND exception in kernel mode. While at it, make sure that the PC correction is accurate for the execution mode where the exception was taken, by checking the PSR Thumb bit. Cc: Dmitry Osipenko <digetx@gmail.com> Cc: Kees Cook <keescook@chromium.org> Fixes: eff8728fe698 ("vmlinux.lds.h: Add PGO and AutoFDO input sections") Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2020-10-27  ARM: 9015/2: Define the virtual space of KASan's shadow region  [Linus Walleij]
Define KASAN_SHADOW_OFFSET, KASAN_SHADOW_START and KASAN_SHADOW_END for the Arm kernel address sanitizer. We are "stealing" lowmem (the 4GB addressable by a 32bit architecture) out of the virtual address space to use as shadow memory for KASan as follows:

  +----+ 0xffffffff
  |    |
  |    |-> Static kernel image (vmlinux) BSS and page table
  |    |/
  +----+ PAGE_OFFSET
  |    |
  |    |-> Loadable kernel modules virtual address space area
  |    |/
  +----+ MODULES_VADDR = KASAN_SHADOW_END
  |    |
  |    |-> The shadow area of kernel virtual address.
  |    |/
  +----+-> TASK_SIZE (start of kernel space) = KASAN_SHADOW_START, the
  |    |   shadow address of MODULES_VADDR
  |    |
  |    |
  |    |-> The user space area in lowmem. The kernel address
  |    |   sanitizer does not use this space, nor does it map it.
  |    |
  |    |
  |    |/
  ------ 0

0 .. TASK_SIZE is the memory that can be used by shared userspace/kernelspace. It is used for userspace processes and for passing parameters and memory buffers in system calls etc. We do not need to shadow this area.

KASAN_SHADOW_START: This value begins with the MODULE_VADDR's shadow address. It is the start of kernel virtual space. Since we have modules to load, we need to cover also that area with shadow memory so we can find memory bugs in modules.

KASAN_SHADOW_END: This value is the 0x100000000's shadow address: the mapping that would be after the end of the kernel memory at 0xffffffff. It is the end of kernel address sanitizer shadow area. It is also the start of the module area.

KASAN_SHADOW_OFFSET: This value is used to map an address to the corresponding shadow address by the following formula: shadow_addr = (address >> 3) + KASAN_SHADOW_OFFSET; As you would expect, >> 3 is equal to dividing by 8, meaning each byte in the shadow memory covers 8 bytes of kernel memory, so one bit of shadow memory per byte of kernel memory is used.

The KASAN_SHADOW_OFFSET is provided in a Kconfig option depending on the VMSPLIT layout of the system: the kernel and userspace can split up lowmem in different ways according to needs, so we calculate the shadow offset depending on this.

When kasan is enabled, the definition of TASK_SIZE is not an 8-bit rotated constant, so we need to modify the TASK_SIZE access code in the *.s file. The kernel and modules may use different amounts of memory, according to the VMSPLIT configuration, which in turn determines the PAGE_OFFSET.

We use the following KASAN_SHADOW_OFFSETs depending on how the virtual memory is split up:

- 0x1f000000 if we have 1G userspace / 3G kernelspace split:
  - The kernel address space is 3G (0xc0000000)
  - PAGE_OFFSET is then set to 0x40000000 so the kernel static image (vmlinux) uses addresses 0x40000000 .. 0xffffffff
  - On top of that we have the MODULES_VADDR which under the worst case (using ARM instructions) is PAGE_OFFSET - 16M (0x01000000) = 0x3f000000 so the modules use addresses 0x3f000000 .. 0x3fffffff
  - So the addresses 0x3f000000 .. 0xffffffff need to be covered with shadow memory. That is 0xc1000000 bytes of memory.
  - 1/8 of that is needed for its shadow memory, so 0x18200000 bytes of shadow memory is needed. We "steal" that from the remaining lowmem.
  - The KASAN_SHADOW_START becomes 0x26e00000, to KASAN_SHADOW_END at 0x3effffff.
  - Now we can calculate the KASAN_SHADOW_OFFSET for any kernel address as 0x3f000000 needs to map to the first byte of shadow memory and 0xffffffff needs to map to the last byte of shadow memory. Since:
      SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
      0x26e00000 = (0x3f000000 >> 3) + KASAN_SHADOW_OFFSET
      KASAN_SHADOW_OFFSET = 0x26e00000 - (0x3f000000 >> 3)
      KASAN_SHADOW_OFFSET = 0x26e00000 - 0x07e00000
      KASAN_SHADOW_OFFSET = 0x1f000000

- 0x5f000000 if we have 2G userspace / 2G kernelspace split:
  - The kernel space is 2G (0x80000000)
  - PAGE_OFFSET is set to 0x80000000 so the kernel static image uses 0x80000000 .. 0xffffffff.
  - On top of that we have the MODULES_VADDR which under the worst case (using ARM instructions) is PAGE_OFFSET - 16M (0x01000000) = 0x7f000000 so the modules use addresses 0x7f000000 .. 0x7fffffff
  - So the addresses 0x7f000000 .. 0xffffffff need to be covered with shadow memory. That is 0x81000000 bytes of memory.
  - 1/8 of that is needed for its shadow memory, so 0x10200000 bytes of shadow memory is needed. We "steal" that from the remaining lowmem.
  - The KASAN_SHADOW_START becomes 0x6ee00000, to KASAN_SHADOW_END at 0x7effffff.
  - Now we can calculate the KASAN_SHADOW_OFFSET for any kernel address as 0x7f000000 needs to map to the first byte of shadow memory and 0xffffffff needs to map to the last byte of shadow memory. Since:
      SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
      0x6ee00000 = (0x7f000000 >> 3) + KASAN_SHADOW_OFFSET
      KASAN_SHADOW_OFFSET = 0x6ee00000 - (0x7f000000 >> 3)
      KASAN_SHADOW_OFFSET = 0x6ee00000 - 0x0fe00000
      KASAN_SHADOW_OFFSET = 0x5f000000

- 0x9f000000 if we have 3G userspace / 1G kernelspace split, and this is the default split for ARM:
  - The kernel address space is 1GB (0x40000000)
  - PAGE_OFFSET is set to 0xc0000000 so the kernel static image uses 0xc0000000 .. 0xffffffff.
  - On top of that we have the MODULES_VADDR which under the worst case (using ARM instructions) is PAGE_OFFSET - 16M (0x01000000) = 0xbf000000 so the modules use addresses 0xbf000000 .. 0xbfffffff
  - So the addresses 0xbf000000 .. 0xffffffff need to be covered with shadow memory. That is 0x41000000 bytes of memory.
  - 1/8 of that is needed for its shadow memory, so 0x08200000 bytes of shadow memory is needed. We "steal" that from the remaining lowmem.
  - The KASAN_SHADOW_START becomes 0xb6e00000, to KASAN_SHADOW_END at 0xbfffffff.
  - Now we can calculate the KASAN_SHADOW_OFFSET for any kernel address as 0xbf000000 needs to map to the first byte of shadow memory and 0xffffffff needs to map to the last byte of shadow memory. Since:
      SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
      0xb6e00000 = (0xbf000000 >> 3) + KASAN_SHADOW_OFFSET
      KASAN_SHADOW_OFFSET = 0xb6e00000 - (0xbf000000 >> 3)
      KASAN_SHADOW_OFFSET = 0xb6e00000 - 0x17e00000
      KASAN_SHADOW_OFFSET = 0x9f000000

- 0x8f000000 if we have 3G userspace / 1G kernelspace with full 1 GB low memory (VMSPLIT_3G_OPT):
  - The kernel address space is 1GB (0x40000000)
  - PAGE_OFFSET is set to 0xb0000000 so the kernel static image uses 0xb0000000 .. 0xffffffff.
  - On top of that we have the MODULES_VADDR which under the worst case (using ARM instructions) is PAGE_OFFSET - 16M (0x01000000) = 0xaf000000 so the modules use addresses 0xaf000000 .. 0xafffffff
  - So the addresses 0xaf000000 .. 0xffffffff need to be covered with shadow memory. That is 0x51000000 bytes of memory.
  - 1/8 of that is needed for its shadow memory, so 0x0a200000 bytes of shadow memory is needed. We "steal" that from the remaining lowmem.
  - The KASAN_SHADOW_START becomes 0xa4e00000, to KASAN_SHADOW_END at 0xaeffffff.
  - Now we can calculate the KASAN_SHADOW_OFFSET for any kernel address as 0xaf000000 needs to map to the first byte of shadow memory and 0xffffffff needs to map to the last byte of shadow memory. Since:
      SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
      0xa4e00000 = (0xaf000000 >> 3) + KASAN_SHADOW_OFFSET
      KASAN_SHADOW_OFFSET = 0xa4e00000 - (0xaf000000 >> 3)
      KASAN_SHADOW_OFFSET = 0xa4e00000 - 0x15e00000
      KASAN_SHADOW_OFFSET = 0x8f000000

- The default value of 0xffffffff for KASAN_SHADOW_OFFSET is an error value. We should always match one of the above shadow offsets.

When we do this, TASK_SIZE will sometimes get a bit odd values that will not fit into immediate mov assembly instructions. To account for this, we need to rewrite some assembly using TASK_SIZE like this:

  -       mov     r1, #TASK_SIZE
  +       ldr     r1, =TASK_SIZE

or

  -       cmp     r4, #TASK_SIZE
  +       ldr     r0, =TASK_SIZE
  +       cmp     r4, r0

this is done to avoid the immediate #TASK_SIZE that needs to fit into a limited number of bits.

Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Alexander Potapenko <glider@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: kasan-dev@googlegroups.com Cc: Mike Rapoport <rppt@linux.ibm.com> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Tested-by: Ard Biesheuvel <ardb@kernel.org> # QEMU/KVM/mach-virt/LPAE/8G Tested-by: Florian Fainelli <f.fainelli@gmail.com> # Brahma SoCs Tested-by: Ahmad Fatoum <a.fatoum@pengutronix.de> # i.MX6Q Reported-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Abbott Liu <liuwenliang@huawei.com> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2020-05-03  ARM: uaccess: consolidate uaccess asm to asm/uaccess-asm.h  [Russell King]
Consolidate the user access assembly code to asm/uaccess-asm.h. This moves the csdb, check_uaccess, uaccess_mask_range_ptr, uaccess_enable, uaccess_disable, uaccess_save, uaccess_restore macros, and creates two new ones for exception entry and exit - uaccess_entry and uaccess_exit. This makes the uaccess_save and uaccess_restore macros private to asm/uaccess-asm.h. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-12-08  sched/rt, ARM: Use CONFIG_PREEMPTION  [Thomas Gleixner]
CONFIG_PREEMPTION is selected by CONFIG_PREEMPT and by CONFIG_PREEMPT_RT. Both PREEMPT and PREEMPT_RT require the same functionality which today depends on CONFIG_PREEMPT. Switch the entry code, cache over to use CONFIG_PREEMPTION and add output in show_stack() for PREEMPT_RT. [bigeasy: +traps.c] Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Russell King <linux@armlinux.org.uk> Cc: linux-arm-kernel@lists.infradead.org Link: https://lore.kernel.org/r/20191015191821.11479-2-bigeasy@linutronix.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-07-15  docs: arm: convert docs to ReST and rename to *.rst  [Mauro Carvalho Chehab]
Converts the ARM text files to ReST, preparing them to be an architecture book. The conversion is actually:

 - add blank lines and indentation in order to identify paragraphs;
 - fix table markups;
 - add some list markups;
 - mark literal blocks;
 - adjust title markups.

At its new index.rst, let's add a :orphan: while this is not linked to the main index.rst file, in order to avoid build warnings. Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org> Reviewed-by: Corentin Labbe <clabbe.montjoie@gmail.com> # For sun4i-ss
2019-06-19  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500  [Thomas Gleixner]
Based on 2 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation # extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 4122 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Enrico Weigelt <info@metux.net> Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org> Reviewed-by: Allison Randal <allison@lohutok.net> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-26  ARM: 8844/1: use unified assembler in assembly files  [Stefan Agner]
Use unified assembler syntax (UAL) in assembly files. Divided syntax is considered deprecated. This will also allow building the kernel using LLVM's integrated assembler. Signed-off-by: Stefan Agner <stefan@agner.ch> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2018-08-03  ARM: Convert to GENERIC_IRQ_MULTI_HANDLER  [Palmer Dabbelt]
Converts the ARM interrupt code to use the recently added GENERIC_IRQ_MULTI_HANDLER, which is essentially just a copy of ARM's existing MULTI_IRQ_HANDLER. The only changes are:

 * handle_arch_irq is now defined in a generic C file instead of an arm-specific assembly file.

 * handle_arch_irq is now marked as __ro_after_init.

Signed-off-by: Palmer Dabbelt <palmer@sifive.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: linux@armlinux.org.uk Cc: catalin.marinas@arm.com Cc: Will Deacon <will.deacon@arm.com> Cc: jonas@southpole.se Cc: stefan.kristiansson@saunalahti.fi Cc: shorne@gmail.com Cc: jason@lakedaemon.net Cc: marc.zyngier@arm.com Cc: Arnd Bergmann <arnd@arndb.de> Cc: nicolas.pitre@linaro.org Cc: vladimir.murzin@arm.com Cc: keescook@chromium.org Cc: jinb.park7@gmail.com Cc: yamada.masahiro@socionext.com Cc: alexandre.belloni@bootlin.com Cc: pombredanne@nexb.com Cc: Greg KH <gregkh@linuxfoundation.org> Cc: kstewart@linuxfoundation.org Cc: jhogan@kernel.org Cc: mark.rutland@arm.com Cc: ard.biesheuvel@linaro.org Cc: james.morse@arm.com Cc: linux-arm-kernel@lists.infradead.org Cc: openrisc@lists.librecores.org Link: https://lkml.kernel.org/r/20180622170126.6308-3-palmer@sifive.com
2018-06-14  Kbuild: rename CC_STACKPROTECTOR[_STRONG] config variables  [Linus Torvalds]
The changes to automatically test for working stack protector compiler support in the Kconfig files removed the special STACKPROTECTOR_AUTO option that picked the strongest stack protector that the compiler supported. That was all a nice cleanup - it makes no sense to have the AUTO case now that the Kconfig phase can just determine the compiler support directly.

HOWEVER. It also meant that doing "make oldconfig" would now _disable_ the strong stackprotector if you had AUTO enabled, because in a legacy config file, the sane stack protector configuration would look like

  CONFIG_HAVE_CC_STACKPROTECTOR=y
  # CONFIG_CC_STACKPROTECTOR_NONE is not set
  # CONFIG_CC_STACKPROTECTOR_REGULAR is not set
  # CONFIG_CC_STACKPROTECTOR_STRONG is not set
  CONFIG_CC_STACKPROTECTOR_AUTO=y

and when you ran this through "make oldconfig" with the Kbuild changes, it would ask you about the regular CONFIG_CC_STACKPROTECTOR (that had been renamed from CONFIG_CC_STACKPROTECTOR_REGULAR to just CONFIG_CC_STACKPROTECTOR), but it would think that the STRONG version used to be disabled (because it was really enabled by AUTO), and would disable it in the new config, resulting in:

  CONFIG_HAVE_CC_STACKPROTECTOR=y
  CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
  CONFIG_CC_STACKPROTECTOR=y
  # CONFIG_CC_STACKPROTECTOR_STRONG is not set
  CONFIG_CC_HAS_SANE_STACKPROTECTOR=y

That's dangerously subtle - people could suddenly find themselves with the weaker stack protector setup without even realizing.

The solution here is to just rename not just the old REGULAR stack protector option, but also the strong one. This does that by just removing the CC_ prefix entirely for the user choices, because it really is not about the compiler support (the compiler support now instead automatically impacts _visibility_ of the options to users).

This results in "make oldconfig" actually asking the user for their choice, so that we don't have any silent subtle security model changes. The end result would generally look like this:

  CONFIG_HAVE_CC_STACKPROTECTOR=y
  CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
  CONFIG_STACKPROTECTOR=y
  CONFIG_STACKPROTECTOR_STRONG=y
  CONFIG_CC_HAS_SANE_STACKPROTECTOR=y

where the "CC_" versions really are about internal compiler infrastructure, not the user selections.

Acked-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-12-17  ARM: probes: avoid adding kprobes to sensitive kernel-entry/exit code  [Russell King]
Avoid adding kprobes to any of the kernel entry/exit or startup assembly code, or code in the identity-mapped region. This code does not conform to the standard C conventions, which means that the expectations of the kprobes code are not fulfilled. Placing kprobes at some of these locations results in the kernel trying to return to userspace addresses while retaining the CPU in kernel mode. Tested-by: Naresh Kamboju <naresh.kamboju@linaro.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2017-09-09  Merge branches 'fixes' and 'misc' into for-linus  [Russell King]
2017-08-14  ARM: align .data section  [Russell King]
Robert Jarzmik reports that his PXA25x system fails to boot with 4.12, failing at __flush_whole_cache in arch/arm/mm/proc-xscale.S:215:

  0xc0019e20 <+0>:  ldr  r1, [pc, #788]
  0xc0019e24 <+4>:  ldr  r0, [r1]         <== here

with r1 containing 0xc06f82cd, which is the address of "clean_addr". Examination of the System.map shows:

  c06f22c8 D user_pmd_table
  c06f22cc d __warned.19178
  c06f22cd d clean_addr

indicating that a .data.unlikely section has appeared just before the .data section from proc-xscale.S. According to objdump -h, it appears that our assembly files default their .data alignment to 2**0, which is bad news if the preceding .data section size is not power-of-2 aligned at link time.

Add the appropriate .align directives to all assembly files in arch/arm that are missing them where we require an appropriate alignment. Reported-by: Robert Jarzmik <robert.jarzmik@free.fr> Tested-by: Robert Jarzmik <robert.jarzmik@free.fr> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2017-06-30  ARM: Prepare for randomized task_struct  [Arnd Bergmann]
With the new task struct randomization, we can run into a build failure for certain random seeds, which will place fields beyond the allowed immediate size in the assembly:

  arch/arm/kernel/entry-armv.S: Assembler messages:
  arch/arm/kernel/entry-armv.S:803: Error: bad immediate value for offset (4096)

Only two constants in asm-offset.h are affected, and I'm changing both of them here to work correctly in all configurations. One more macro has the problem, but is currently unused, so this removes it instead of adding complexity. Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de> [kees: Adjust commit log slightly] Signed-off-by: Kees Cook <keescook@chromium.org>
2016-08-09  ARM: fix address limit restoration for undefined instructions  [Russell King]
During boot, sometimes the kernel will test to see if an instruction causes an undefined instruction exception. Unfortunately, the exit path for these exceptions did not restore the address limit, which causes the rootfs mount code to fail. Fix the missing address limit restoration. Tested-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2016-07-07  ARM: save and reset the address limit when entering an exception  [Russell King]
When we enter an exception, the current address limit should not apply to the exception context: if the exception context wishes to access kernel space via the user accessors (eg, perf code), it must explicitly request such access. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>