path: root/arch/arm/kernel/asm-offsets.c
Age  Commit message  Author
2024-02-20  vdso/ARM: Make union vdso_data_store available for all architectures  [Anna-Maria Behnsen]
The vDSO data page "union vdso_data_store" is defined in an ARM specific header file and also defined in several other places. Move the definition from the ARM header file into the generic vdso datapage header to make it also usable for others and to prevent code duplication. Signed-off-by: Anna-Maria Behnsen <anna-maria@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20240219153939.75719-5-anna-maria@linutronix.de
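A sketch of the union in question (the array bound and page-sized view follow the generic vDSO datapage code and may differ between kernel versions):

    union vdso_data_store {
        struct vdso_data    data[CS_BASES];   /* one entry per clock source base */
        u8                  page[PAGE_SIZE];  /* force the store to span a full page */
    };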
2023-08-14  Merge branch 'devel-stable' into for-next  [Russell King (Oracle)]
2023-05-29  ARM: mm: Make virt_to_pfn() a static inline  [Linus Walleij]
Making virt_to_pfn() a static inline taking a strongly typed (const void *) makes the contract of passing a pointer of that type to the function explicit, and exposes any misuse of the macro virt_to_pfn() acting polymorphically and accepting many types such as (void *), (uintptr_t) or (unsigned long) as arguments without warnings.

Doing this is a bit intrusive: virt_to_pfn() requires PHYS_PFN_OFFSET and PAGE_SHIFT to be defined, and these are defined in <asm/page.h>, so it must be included *before* <asm/memory.h>. The use of macros was obscuring the unclear inclusion order here: the macros would eventually be resolved, but a static inline like this cannot be compiled with unresolved macros.

The naive solution of including <asm/page.h> at the top of <asm/memory.h> does not work, because <asm/memory.h> sometimes includes <asm/page.h> at the end of itself, which would create a confusing inclusion loop. So instead, take the approach of always unconditionally including <asm/page.h> at the end of <asm/memory.h>.

arch/arm uses <asm/memory.h> explicitly in a lot of places; however, it turns out that if we just unconditionally include <asm/memory.h> into <asm/page.h> and switch all inclusions of <asm/memory.h> to <asm/page.h> instead, we enforce the right order and <asm/memory.h> will always have access to the definitions. Put an inclusion guard in place making it impossible to include <asm/memory.h> explicitly.

Link: https://lore.kernel.org/linux-mm/20220701160004.2ffff4e5ab59a55499f4c736@linux-foundation.org/
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
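The resulting helper looks roughly like this (a sketch, assuming the usual ARM definitions of PAGE_OFFSET, PAGE_SHIFT and PHYS_PFN_OFFSET from <asm/page.h>):

    static inline unsigned long virt_to_pfn(const void *p)
    {
        unsigned long kaddr = (unsigned long)p;

        /* Strongly typed: passing uintptr_t or unsigned long now warns. */
        return (((kaddr - PAGE_OFFSET) >> PAGE_SHIFT) + PHYS_PFN_OFFSET);
    }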
2023-05-17  ARM: kernel: Get rid of thread_info::used_cp[] array  [Ard Biesheuvel]
We keep track of which coprocessor triggered a fault in the used_cp[] array in thread_info, but this data is never used anywhere. So let's remove it. Linus did some digging and found out that the last user of this field was removed in commit bb1a773d5b6b ("kill unused dump_fpu() instances"). Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2023-01-11  ARM: 9282/1: vfp: Manipulate task VFP state with softirqs disabled  [Ard Biesheuvel]
In a subsequent patch, we will relax the kernel mode NEON policy, and permit kernel mode NEON to be used not only from task context, as is permitted today, but also from softirq context.

Given that softirqs may trigger over the back of any IRQ unless they are explicitly disabled, we need to address the resulting races in the VFP state handling, by disabling softirq processing in two distinct but related cases:

 - kernel mode NEON will leave the FPU disabled after it completes, so any kernel code sequence that enables the FPU and subsequently accesses its registers needs to disable softirqs until it completes;

 - kernel_neon_begin() will preserve the userland VFP state in memory, and if it interrupts the ordinary VFP state preserve sequence, the latter will resume execution with the VFP registers corrupted, and happily continue saving them to memory.

Given that disabling softirqs also disables preemption, we can replace the existing preempt_disable/enable occurrences in the VFP state handling asm code with new macros that dis/enable softirqs instead. In the VFP state handling C code, add local_bh_disable/enable() calls in those places where the VFP state is preserved.

One thing to keep in mind is that, once we allow NEON use in softirq context, the result of any such interruption is that the FPEXC_EN bit in the FPEXC register will be cleared, and vfp_current_hw_state[cpu] will be NULL. This means that any sequence that [conditionally] clears FPEXC_EN and/or sets vfp_current_hw_state[cpu] to NULL does not need to run with softirqs disabled, as the result will be the same. Furthermore, the handling of THREAD_NOTIFY_SWITCH is guaranteed to run with IRQs disabled, and so it does not need protection from softirq interruptions either.

Tested-by: Martin Willi <martin@strongswan.org> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
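As a sketch of the resulting C-side pattern (helper names follow the ARM VFP code but should be treated as assumptions), preserving the state now looks roughly like:

    void vfp_sync_hwstate(struct thread_info *thread)
    {
        local_bh_disable();    /* also disables preemption */

        if (vfp_state_in_hw(raw_smp_processor_id(), thread)) {
            u32 fpexc = fmrx(FPEXC);

            /* Enable the FPU, dump the registers, then restore FPEXC. */
            fmxr(FPEXC, fpexc | FPEXC_EN);
            vfp_save_state(&thread->vfpstate, fpexc);
            fmxr(FPEXC, fpexc);
        }

        local_bh_enable();
    }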
2021-12-06  ARM: implement THREAD_INFO_IN_TASK for uniprocessor systems  [Ard Biesheuvel]
On UP systems, only a single task can be 'current' at the same time, which means we can use a global variable to track it. This means we can also enable THREAD_INFO_IN_TASK for those systems, as in that case, thread_info is accessed via current rather than the other way around, removing the need to store thread_info at the base of the task stack. This, in turn, permits us to enable IRQ stacks and vmap'ed stacks on UP systems as well.

To partially mitigate the performance overhead of this arrangement, use an ADD/ADD/LDR sequence with the appropriate PC-relative group relocations to load the value of current when needed. This means that accessing current will still only require a single load as before, avoiding the need for a literal to carry the address of the global variable in each function. However, accessing thread_info will now require this load as well.

Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Nicolas Pitre <nico@fluxnic.net> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Tested-by: Marc Zyngier <maz@kernel.org> Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M
2021-09-27  ARM: smp: Enable THREAD_INFO_IN_TASK  [Ard Biesheuvel]
Now that we no longer rely on thread_info living at the base of the task stack to be able to access the 'current' pointer, we can wire up the generic support for moving thread_info into the task struct itself. Note that this requires us to update the cpu field in thread_info explicitly, now that the core code no longer does so. Ideally, we would switch the percpu code to access the cpu field in task_struct instead, but this unleashes #include circular dependency hell. Co-developed-by: Keith Packard <keithpac@amazon.com> Signed-off-by: Keith Packard <keithpac@amazon.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
2021-09-27  gcc-plugins: arm-ssp: Prepare for THREAD_INFO_IN_TASK support  [Ard Biesheuvel]
We will be enabling THREAD_INFO_IN_TASK support for ARM, which means that we can no longer load the stack canary value by masking the stack pointer and taking the copy that lives in thread_info. Instead, we will be able to load it from the task_struct directly, by using the TPIDRURO register which will hold the current task pointer when THREAD_INFO_IN_TASK is in effect. This is much more straight-forward, and allows us to declutter this code a bit while at it. Note that this means that ARMv6 (non-v6K) SMP systems can no longer use this feature, but those are quite rare to begin with, so this is a reasonable trade off. Reviewed-by: Kees Cook <keescook@chromium.org> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
2021-09-09  Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm  [Linus Torvalds]
Pull ARM development updates from Russell King:

 - Rename "mod_init" and "mod_exit" so that initcall debug output is actually useful (Randy Dunlap)

 - Update maintainers entries for linux-arm-kernel to indicate it is moderated for non-subscribers (Randy Dunlap)

 - Move install rules to arch/arm/Makefile (Masahiro Yamada)

 - Drop unnecessary ARCH_NR_GPIOS definition (Linus Walleij)

 - Don't warn about atags_to_fdt() stack size (David Heidelberg)

 - Speed up unaligned copy_{from,to}_kernel_nofault (Arnd Bergmann)

 - Get rid of set_fs() usage (Arnd Bergmann)

 - Remove checks for GCC prior to v4.6 (Geert Uytterhoeven)

* tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm:
  ARM: 9118/1: div64: Remove always-true __div64_const32_is_OK() duplicate
  ARM: 9117/1: asm-generic: div64: Remove always-true __div64_const32_is_OK()
  ARM: 9116/1: unified: Remove check for gcc < 4
  ARM: 9110/1: oabi-compat: fix oabi epoll sparse warning
  ARM: 9113/1: uaccess: remove set_fs() implementation
  ARM: 9112/1: uaccess: add __{get,put}_kernel_nofault
  ARM: 9111/1: oabi-compat: rework fcntl64() emulation
  ARM: 9114/1: oabi-compat: rework sys_semtimedop emulation
  ARM: 9108/1: oabi-compat: rework epoll_wait/epoll_pwait emulation
  ARM: 9107/1: syscall: always store thread_info->abi_syscall
  ARM: 9109/1: oabi-compat: add epoll_pwait handler
  ARM: 9106/1: traps: use get_kernel_nofault instead of set_fs()
  ARM: 9115/1: mm/maccess: fix unaligned copy_{from,to}_kernel_nofault
  ARM: 9105/1: atags_to_fdt: don't warn about stack size
  ARM: 9103/1: Drop ARCH_NR_GPIOS definition
  ARM: 9102/1: move the install rules to arch/arm/Makefile
  ARM: 9100/1: MAINTAINERS: mark all linux-arm-kernel@infradead list as moderated
  ARM: 9099/1: crypto: rename 'mod_init' & 'mod_exit' functions to be module-specific
2021-08-20  ARM: 9113/1: uaccess: remove set_fs() implementation  [Arnd Bergmann]
There are no remaining callers of set_fs(), so just remove it along with all associated code that operates on thread_info->addr_limit. There are still further optimizations that can be done:

 - In get_user(), the address check could be moved entirely into the out of line code, rather than passing a constant as an argument,

 - I assume the DACR handling can be simplified as we now only change it during user access when CONFIG_CPU_SW_DOMAIN_PAN is set, but not during set_fs().

Acked-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2021-08-20  ARM: 9107/1: syscall: always store thread_info->abi_syscall  [Arnd Bergmann]
The system call number is used in a couple of places, in particular ptrace, seccomp and /proc/<pid>/syscall. The last one apparently never worked reliably on ARM for tasks that are not currently getting traced.

Storing the syscall number in the normal entry path makes it work, as well as allowing us to see if the current system call is for OABI compat mode, which is the next thing I want to hook into.

Since the thread_info->syscall field is not just the number any more, it is now renamed to abi_syscall. In kernels that enable both OABI and EABI, the upper bits of this field encode 0x900000 (__NR_OABI_SYSCALL_BASE) for OABI tasks, while normal EABI tasks do not set the upper bits. This makes it possible to implement the in_oabi_syscall() helper later.

All other users of thread_info->syscall go through the syscall_get_nr() helper, which in turn filters out the ABI bits.

Note that the ABI information is lost with PTRACE_SET_SYSCALL, so one cannot set the internal number to a particular version, but this was already the case. We could change it to let gdb encode the ABI type along with the syscall in a CONFIG_OABI_COMPAT-enabled kernel, but that itself would be a (backwards-compatible) ABI change, so I don't do it here.

Acked-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
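The read side then filters the ABI bits, roughly like this (a sketch; __NR_SYSCALL_MASK is assumed here to be the mask covering the real number bits):

    static inline int syscall_get_nr(struct task_struct *task,
                                     struct pt_regs *regs)
    {
        /* EABI-only kernels never set the OABI marker bits. */
        if (IS_ENABLED(CONFIG_AEABI) && !IS_ENABLED(CONFIG_OABI_COMPAT))
            return task_thread_info(task)->abi_syscall;

        /* Strip the 0x900000 OABI base so callers see the plain number. */
        return task_thread_info(task)->abi_syscall & __NR_SYSCALL_MASK;
    }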
2021-08-04  ARM: ep93xx: remove MaverickCrunch support  [Arnd Bergmann]
The MaverickCrunch support for ep93xx never made it into glibc and was removed from gcc in its 4.8 release in 2012. It is now one of the last parts of arch/arm/ that fails to build with the clang integrated assembler, which is unlikely to ever want to support it. The two alternatives are to force the use of binutils/gas when building the crunch support, or to remove it entirely. According to Hartley Sweeten: "Martin Guy did a lot of work trying to get the maverick crunch working but I was never able to successfully use it for anything. It "kind" of works but depending on the EP93xx silicon revision there are still a number of hardware bugs that either give imprecise or garbage results. I have no problem with removing the kernel support for the maverick crunch." Unless someone else comes up with a good reason to keep it around, remove it now. This touches mostly the ep93xx platform, but removes a bit of code from ARM common ptrace and signal frame handling as well. If there are remaining users of MaverickCrunch, they can use LTS kernels for at least another five years before kernel support ends. Link: https://lore.kernel.org/linux-arm-kernel/20210802141245.1146772-1-arnd@kernel.org/ Link: https://lore.kernel.org/linux-arm-kernel/20210226164345.3889993-1-arnd@kernel.org/ Link: https://github.com/ClangBuiltLinux/linux/issues/1272 Link: https://gcc.gnu.org/legacy-ml/gcc/2008-03/msg01063.html Cc: "Martin Guy" <martinwguy@martinwguy@gmail.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2021-04-18  ARM: 9075/1: kernel: Fix interrupted SMC calls  [Manivannan Sadhasivam]
On Qualcomm ARM32 platforms, the SMC call can return before it has completed. If this occurs, the call can be restarted, but it requires using the returned session ID value from the interrupted SMC call. The ARM32 SMCC code already has the provision to add platform specific quirks for things like this. So let's make use of it and add the Qualcomm specific quirk (ARM_SMCCC_QUIRK_QCOM_A6) used by the QCOM_SCM driver. This change is similar to the below one added for ARM64 a while ago: commit 82bcd087029f ("firmware: qcom: scm: Fix interrupted SCM calls") Without this change, the Qualcomm ARM32 platforms like SDX55 will return -EINVAL for SMC calls used for modem firmware loading and validation. Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
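A hedged sketch of how a caller uses the quirk (qcom_smc_call() is a hypothetical wrapper; the constant and field names are taken from the SMCCC/QCOM_SCM code, and return-value handling is simplified):

    static int qcom_smc_call(u32 fn_id, u32 arg0, u32 arg1, u32 arg2)
    {
        struct arm_smccc_res res;
        struct arm_smccc_quirk quirk = { .id = ARM_SMCCC_QUIRK_QCOM_A6 };

        do {
            /* a6 carries the session ID returned by an interrupted call. */
            arm_smccc_smc_quirk(fn_id, arg0, arg1, arg2, 0, 0,
                                quirk.state.a6, 0, &res, &quirk);
        } while ((long)res.a0 == QCOM_SCM_INTERRUPTED);

        return res.a0;
    }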
2021-02-05  ARM: kexec: fix oops after TLB are invalidated  [Russell King]
Giancarlo Ferrari reports the following oops while trying to use kexec:

  Unable to handle kernel paging request at virtual address 80112f38
  pgd = fd7ef03e
  [80112f38] *pgd=0001141e(bad)
  Internal error: Oops: 80d [#1] PREEMPT SMP ARM
  ...

This is caused by machine_kexec() trying to set the kernel text to be read/write, so it can poke values into the relocation code before copying it - and an interrupt occurring which changes the page tables. The subsequent writes then hit read-only sections that trigger a data abort resulting in the above oops.

Fix this by copying the relocation code, and then writing the variables into the destination, thereby avoiding the need to make the kernel text read/write.

Reported-by: Giancarlo Ferrari <giancarlo.ferrari89@gmail.com> Tested-by: Giancarlo Ferrari <giancarlo.ferrari89@gmail.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2020-07-08  Raise gcc version requirement to 4.9  [Linus Torvalds]
I realize that we fairly recently raised it to 4.8, but the fact is, 4.9 is a much better minimum version to target.

We have a number of workarounds for actual bugs in pre-4.9 gcc versions (including things like internal compiler errors on ARM), but we also have some syntactic workarounds for lacking features.

In particular, raising the minimum to 4.9 means that we can now just assume _Generic() exists, which is likely the much better replacement for a lot of very convoluted build-time magic with conditionals on sizeof and/or __builtin_choose_expr() with same_type() etc.

Using _Generic also means that you will need to have a very recent version of 'sparse', but that's easy to build yourself, and much less of a hassle than some old gcc version can be.

The latest (in a long string) of reasons for minimum compiler version upgrades was commit 5435f73d5c4a ("efi/x86: Fix build with gcc 4").

Ard points out that RHEL 7 uses gcc-4.8, but the people who stay back on old RHEL versions presumably also don't build their own kernels anyway. And maybe they should cross-build or just have a little side affair with a newer compiler?

Acked-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
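With 4.9 as the floor, constructs like the following can simply be assumed to compile everywhere (an illustrative example, not code from this commit):

    #include <limits.h>

    /* Pick the maximum value of x's type at compile time via _Generic. */
    #define type_max(x) _Generic((x),      \
            int:           INT_MAX,        \
            long:          LONG_MAX,       \
            unsigned int:  UINT_MAX,       \
            unsigned long: ULONG_MAX)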
2020-03-24  arm: Remove 32bit KVM host support  [Marc Zyngier]
That's it. Remove all references to KVM itself, and document that although it is no more, the ABI between SVC and HYP still exists. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Olof Johansson <olof@lixom.net> Acked-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Will Deacon <will@kernel.org> Acked-by: Vladimir Murzin <vladimir.murzin@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Christoffer Dall <christoffer.dall@arm.com>
2019-06-19  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500  [Thomas Gleixner]
Based on 2 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation # extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 4122 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Enrico Weigelt <info@metux.net> Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org> Reviewed-by: Allison Randal <allison@lohutok.net> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-12-12  ARM: smp: add support for per-task stack canaries  [Ard Biesheuvel]
On ARM, we currently only change the value of the stack canary when switching tasks if the kernel was built for UP. On SMP kernels, this is impossible since the stack canary value is obtained via a global symbol reference, which means
 a) all running tasks on all CPUs must use the same value
 b) we can only modify the value when no kernel stack frames are live on any CPU, which is effectively never.

So instead, use a GCC plugin to add an RTL pass that replaces each reference to the address of the __stack_chk_guard symbol with an expression that produces the address of the 'stack_canary' field that is added to struct thread_info. This way, each task will use its own randomized value.

Cc: Russell King <linux@armlinux.org.uk> Cc: Kees Cook <keescook@chromium.org> Cc: Emese Revfy <re.emese@gmail.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Laura Abbott <labbott@redhat.com> Cc: kernel-hardening@lists.openwall.com Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Kees Cook <keescook@chromium.org>
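Since the per-task value lives in thread_info, its offset has to be exported for non-C users; a sketch of the asm-offsets.c side (the config option and symbol names here are assumptions, chosen to match how other thread_info offsets are exported):

    #ifdef CONFIG_STACKPROTECTOR_PER_TASK
      /* Consumed by the arm-ssp GCC plugin via the generated asm-offsets.h. */
      DEFINE(TI_STACK_CANARY, offsetof(struct thread_info, stack_canary));
    #endif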
2018-08-22  include/linux/compiler*.h: make compiler-*.h mutually exclusive  [Nick Desaulniers]
Commit cafa0010cd51 ("Raise the minimum required gcc version to 4.6") recently exposed a brittle part of the build for supporting non-gcc compilers. Both Clang and ICC define __GNUC__, __GNUC_MINOR__, and __GNUC_PATCHLEVEL__ for quick compatibility with code bases that haven't added compiler specific checks for __clang__ or __INTEL_COMPILER. This is brittle, as they happened to get compatibility by posing as a certain version of GCC. This broke when upgrading the minimal version of GCC required to build the kernel, to a version above what ICC and Clang claim to be. Rather than always including compiler-gcc.h then undefining or redefining macros in compiler-intel.h or compiler-clang.h, let's separate out the compiler specific macro definitions into mutually exclusive headers, do more proper compiler detection, and keep shared definitions in compiler_types.h. Fixes: cafa0010cd51 ("Raise the minimum required gcc version to 4.6") Reported-by: Masahiro Yamada <yamada.masahiro@socionext.com> Suggested-by: Eli Friedman <efriedma@codeaurora.org> Suggested-by: Joe Perches <joe@perches.com> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-06-14  Kbuild: rename CC_STACKPROTECTOR[_STRONG] config variables  [Linus Torvalds]
The changes to automatically test for working stack protector compiler support in the Kconfig files removed the special STACKPROTECTOR_AUTO option that picked the strongest stack protector that the compiler supported.

That was all a nice cleanup - it makes no sense to have the AUTO case now that the Kconfig phase can just determine the compiler support directly.

HOWEVER.

It also meant that doing "make oldconfig" would now _disable_ the strong stackprotector if you had AUTO enabled, because in a legacy config file, the sane stack protector configuration would look like

  CONFIG_HAVE_CC_STACKPROTECTOR=y
  # CONFIG_CC_STACKPROTECTOR_NONE is not set
  # CONFIG_CC_STACKPROTECTOR_REGULAR is not set
  # CONFIG_CC_STACKPROTECTOR_STRONG is not set
  CONFIG_CC_STACKPROTECTOR_AUTO=y

and when you ran this through "make oldconfig" with the Kbuild changes, it would ask you about the regular CONFIG_CC_STACKPROTECTOR (that had been renamed from CONFIG_CC_STACKPROTECTOR_REGULAR to just CONFIG_CC_STACKPROTECTOR), but it would think that the STRONG version used to be disabled (because it was really enabled by AUTO), and would disable it in the new config, resulting in:

  CONFIG_HAVE_CC_STACKPROTECTOR=y
  CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
  CONFIG_CC_STACKPROTECTOR=y
  # CONFIG_CC_STACKPROTECTOR_STRONG is not set
  CONFIG_CC_HAS_SANE_STACKPROTECTOR=y

That's dangerously subtle - people could suddenly find themselves with the weaker stack protector setup without even realizing.

The solution here is to rename not just the old REGULAR stack protector option, but also the strong one. This does that by just removing the CC_ prefix entirely for the user choices, because it really is not about the compiler support (the compiler support now instead automatically impacts _visibility_ of the options to users).

This results in "make oldconfig" actually asking the user for their choice, so that we don't have any silent subtle security model changes. The end result would generally look like this:

  CONFIG_HAVE_CC_STACKPROTECTOR=y
  CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
  CONFIG_STACKPROTECTOR=y
  CONFIG_STACKPROTECTOR_STRONG=y
  CONFIG_CC_HAS_SANE_STACKPROTECTOR=y

where the "CC_" versions really are about internal compiler infrastructure, not the user selections.

Acked-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-05-19  ARM: 8757/1: NOMMU: Support PMSAv8 MPU  [Vladimir Murzin]
The ARMv8R/M architecture defines a new memory protection scheme - PMSAv8 - which is not compatible with PMSAv7.

Key differences to PMSAv7 are:
 - Region geometry is defined by base and limit addresses
 - Addresses need to be either 32 or 64 byte aligned
 - No region priority, since overlapping regions are not allowed
 - It is unified, i.e. no distinction between data/instruction regions
 - Memory attributes are controlled via MAIR

This patch implements support for the PMSAv8 MPU defined by the ARMv8R/M architecture.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2018-05-19  ARM: 8754/1: NOMMU: Move PMSAv7 MPU under its own namespace  [Vladimir Murzin]
We are going to support a different MPU whose programming model is not compatible with PMSAv7, so move the PMSAv7 MPU under its own namespace. Tested-by: Szemző András <sza@esh.hu> Tested-by: Alexandre TORGUE <alexandre.torgue@st.com> Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2017-10-23  ARM: 8708/1: NOMMU: Rework MPU to be mostly done in C  [Vladimir Murzin]
Currently, there are several issues with how the MPU is set up:

 1. We won't boot if the MPU is missing
 2. We won't boot if we use XIP
 3. Further extension of the MPU setup requires asm skills

The 1st point can be relaxed, so we can continue with the boot CPU even if the MPU is missing and fail boot for secondaries only.

To address the 2nd point we could create a region covering CONFIG_XIP_PHYS_ADDR - _end, and that might work for the first stage of MPU enable, but due to the MPU's alignment requirement we could cover too much; IOW we need more flexibility in how we're partitioning memory regions... and that would be hardly possible to achieve because of the 3rd point.

This patch is trying to address the 1st and 3rd issues and paves the path for the 2nd and further improvements.

The most visible change introduced with this patch is that we start using the mpu_rgn_info array (as it was supposed?), so changes in MPU setup done by the boot CPU are recorded there and fed to secondaries. It allows us to keep a minimal region setup for the boot CPU and do the rest in C. Since we start programming MPU regions in C, evaluation of MPU constraints (number of regions supported and minimal region order) can be done once, which in turn opens the possibility to free up the "probe" region early.

Tested-by: Szemző András <sza@esh.hu> Tested-by: Alexandre TORGUE <alexandre.torgue@st.com> Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org> Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2017-09-10  ARM: signal handling support for FDPIC_FUNCPTRS functions  [Nicolas Pitre]
Signal handlers are not direct function pointers but pointers to function descriptors in that case. Therefore we must retrieve the actual function address and load the GOT value into r9 from the descriptor before branching to the actual handler. If a restorer is provided, we also have to load its address and GOT from its descriptor. That descriptor address and the code to load it is pushed onto the stack to be executed as soon as the signal handler returns. However, to be compatible with NX stacks, the FDPIC bounce code is also copied to the signal page along with the other code stubs. Therefore this code must get at the descriptor address whether it executes from the stack or the signal page. To do so we use the stack pointer which points at the signal stack frame where the descriptor address was stored. Because the rt signal frame is different from the simpler frame, two versions of the bounce code are needed, and two variants (ARM and Thumb) as well. The asm-offsets facility is used to determine the actual offset in the signal frame for each version, meaning that struct sigframe and rt_sigframe had to be moved to a separate file. Signed-off-by: Nicolas Pitre <nico@linaro.org> Acked-by: Mickael GUENE <mickael.guene@st.com> Tested-by: Vincent Abriou <vincent.abriou@st.com> Tested-by: Andras Szemzo <szemzo.andras@gmail.com>
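For reference, a sketch of the descriptor layout the signal code has to dereference (field names follow the kernel's ARM FDPIC support, but treat them as assumptions):

    struct fdpic_func_descriptor {
        unsigned long    text;   /* address of the actual handler code      */
        unsigned long    GOT;    /* value to load into r9 before branching  */
    };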
2016-07-07  ARM: save and reset the address limit when entering an exception  [Russell King]
When we enter an exception, the current address limit should not apply to the exception context: if the exception context wishes to access kernel space via the user accessors (eg, perf code), it must explicitly request such access. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2016-06-22  ARM: introduce svc_pt_regs structure  [Russell King]
Since the privileged mode pt_regs are an extended version of the saved userland pt_regs, introduce a new svc_pt_regs structure to describe this layout. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2016-06-22  ARM: rename S_FRAME_SIZE to PT_REGS_SIZE  [Russell King]
S_FRAME_SIZE is no longer the size of the kernel stack frame, so this name is misleading. It is the size of the kernel pt_regs structure. Name it so. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2016-02-29  ARM: KVM: Remove unused hyp_pc field  [Marc Zyngier]
This field was never populated, and the panic code already does something similar. Delete the related code. Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2016-02-29  ARM: KVM: Cleanup asm-offsets.c  [Marc Zyngier]
Since we don't have much assembler left, most of the KVM stuff in asm-offsets.c is now superfluous. Let's get rid of it. Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2016-02-29  ARM: KVM: Move GP registers into the CPU context structure  [Marc Zyngier]
Continuing our rework of the CPU context, we now move the GP registers into the CPU context structure. Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2016-02-29  ARM: KVM: Move CP15 array into the CPU context structure  [Marc Zyngier]
Continuing our rework of the CPU context, we now move the CP15 array into the CPU context structure. As this causes quite a bit of churn, we introduce the vcpu_cp15() macro that abstracts the location of the actual array. This will probably help next time we have to revisit that code. Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
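The accessor ends up as a thin wrapper over the new location, roughly (field names assumed from the KVM/ARM context rework):

    /* Index register r of the CP15 array now held in the vcpu's context. */
    #define vcpu_cp15(vcpu, r)    ((vcpu)->arch.ctxt.cp15[(r)])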
2016-02-29  ARM: KVM: Move VFP registers to a CPU context structure  [Marc Zyngier]
In order to turn the WS code into something that looks a bit more like the arm64 version, move the VFP registers into a CPU context container for both the host and the guest. Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2015-04-15  Merge branch 'exec_domain_rip_v2' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/misc  [Linus Torvalds]
Pull exec domain removal from Richard Weinberger:
 "This series removes execution domain support from Linux.

  The idea behind exec domains was to support different ABIs. The feature was never complete nor stable. Let's rip it out and make the kernel signal handling code less complicated"

* 'exec_domain_rip_v2' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/misc: (27 commits)
  arm64: Removed unused variable
  sparc: Fix execution domain removal
  Remove rest of exec domains.
  arch: Remove exec_domain from remaining archs
  arc: Remove signal translation and exec_domain
  xtensa: Remove signal translation and exec_domain
  xtensa: Autogenerate offsets in struct thread_info
  x86: Remove signal translation and exec_domain
  unicore32: Remove signal translation and exec_domain
  um: Remove signal translation and exec_domain
  tile: Remove signal translation and exec_domain
  sparc: Remove signal translation and exec_domain
  sh: Remove signal translation and exec_domain
  s390: Remove signal translation and exec_domain
  mn10300: Remove signal translation and exec_domain
  microblaze: Remove signal translation and exec_domain
  m68k: Remove signal translation and exec_domain
  m32r: Remove signal translation and exec_domain
  m32r: Autogenerate offsets in struct thread_info
  frv: Remove signal translation and exec_domain
  ...
2015-04-14  Merge branch 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm  [Linus Torvalds]
Pull ARM updates from Russell King:
 "Included in this update are both some long term fixes and some new features.

  Fixes:

   - An integer overflow in the calculation of ELF_ET_DYN_BASE.

   - Avoiding OOMs for high-order IOMMU allocations

   - SMP requires the data cache to be enabled for synchronisation primitives to work, so prevent the CPU_DCACHE_DISABLE option being visible on SMP builds.

   - A bug going back 10+ years in the noMMU ARM94* CPU support code, where it corrupts registers. Found by folk getting Linux running on their cameras.

   - Versatile Express needs an errata workaround enabled for CPU hot-unplug to work.

  Features:

   - Clean up module linker by handling out of range relocations separately from relocation cases we don't handle.

   - Fix a long term bug in the pci_mmap_page_range() code, which we hope won't impact userspace (we hope there's no users of the existing broken interface.)

   - Don't map DMA coherent allocations when we don't have a MMU.

   - Drop experimental status for SMP_ON_UP.

   - Warn when DT doesn't specify ePAPR mandatory cache properties.

   - Add documentation concerning how we find the start of physical memory for AUTO_ZRELADDR kernels, detailing why we have chosen the mask and the implications of changing it.

   - Updates from Ard Biesheuvel to address some issues with large kernels (such as allyesconfig) failing to link.

   - Allow hibernation to work on modern (ARMv7) CPUs - this appears to have never worked in the past on these CPUs.

   - Enable IRQ_SHOW_LEVEL, which changes the /proc/interrupts output format (hopefully without userspace breaking... let's hope that if it causes someone a problem, they tell us.)

   - Fix tegra-ahb DT offsets.

   - Rework ARM errata 643719 code (and ARMv7 flush_cache_louis()/ flush_dcache_all()) code to be more efficient, and enable this errata workaround by default for ARMv7+SMP CPUs. This complements the Versatile Express fix above.

   - Rework ARMv7 context code for errata 430973, so that only Cortex A8 CPUs are impacted by the branch target buffer flush when this errata is enabled. Also update the help text to indicate that all r1p* A8 CPUs are impacted.

   - Switch ARM to the generic show_mem() implementation, it conveys all the information which we were already reporting.

   - Prevent slow timer sources being used for udelay() - timers running at less than 1MHz are not useful for this, and can cause udelay() to return immediately, without any wait. Using such a slow timer is silly.

   - VDSO support for 32-bit ARM, mainly for gettimeofday() using the ARM architected timer.

   - Perf support for Scorpion performance monitoring units"

 vdso semantic conflict fixed up as per linux-next.

* 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm: (52 commits)
  ARM: update errata 430973 documentation to cover Cortex A8 r1p*
  ARM: ensure delay timer has sufficient accuracy for delays
  ARM: switch to use the generic show_mem() implementation
  ARM: proc-v7: avoid errata 430973 workaround for non-Cortex A8 CPUs
  ARM: enable ARM errata 643719 workaround by default
  ARM: cache-v7: optimise test for Cortex A9 r0pX devices
  ARM: cache-v7: optimise branches in v7_flush_cache_louis
  ARM: cache-v7: consolidate initialisation of cache level index
  ARM: cache-v7: shift CLIDR to extract appropriate field before masking
  ARM: cache-v7: use movw/movt instructions
  ARM: allow 16-bit instructions in ALT_UP()
  ARM: proc-arm94*.S: fix setup function
  ARM: vexpress: fix CPU hotplug with CT9x4 tile.
  ARM: 8276/1: Make CPU_DCACHE_DISABLE depend on !SMP
  ARM: 8335/1: Documentation: DT bindings: Tegra AHB: document the legacy base address
  ARM: 8334/1: amba: tegra-ahb: detect and correct bogus base address
  ARM: 8333/1: amba: tegra-ahb: fix register offsets in the macros
  ARM: 8339/1: Enable CONFIG_GENERIC_IRQ_SHOW_LEVEL
  ARM: 8338/1: kexec: Relax SMP validation to improve DT compatibility
  ARM: 8337/1: mm: Do not invoke OOM for higher order IOMMU DMA allocations
  ...
2015-04-12  arm: Remove signal translation and exec_domain  [Richard Weinberger]
As execution domain support is gone we can remove signal translation from the signal code and remove exec_domain from thread_info. Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-27  ARM: 8330/1: add VDSO user-space code  [Nathan Lynch]
Place VDSO-related user-space code in arch/arm/kernel/vdso/. It is almost completely written in C with some assembly helpers to load the data page address, sample the counter, and fall back to system calls when necessary. The VDSO can service gettimeofday and clock_gettime when CONFIG_ARM_ARCH_TIMER is enabled and the architected timer is present (and correctly configured). It reads the CP15-based virtual counter to compute high-resolution timestamps. Of particular note is that a post-processing step ("vdsomunge") is necessary to produce a shared object which is architecturally allowed to be used by both soft- and hard-float EABI programs. The 2012 edition of the ARM ABI defines Tag_ABI_VFP_args = 3 "Code is compatible with both the base and VFP variants; the user did not permit non-variadic functions to pass FP parameters/results." Unfortunately current toolchains do not support this tag, which is ideally what we would use. The best available option is to ensure that both EF_ARM_ABI_FLOAT_SOFT and EF_ARM_ABI_FLOAT_HARD are unset in the ELF header's e_flags, indicating that the shared object is "old" and should be accepted for backward compatibility's sake. While binutils < 2.24 appear to produce a vdso.so with both flags clear, 2.24 always sets EF_ARM_ABI_FLOAT_SOFT, with no way to inhibit this behavior. So we have to fix things up with a custom post-processing step. In fact, the VDSO code in glibc does much less validation (including checking these flags) than the code for handling conventional file-backed shared libraries, so this is a bit moot unless glibc's VDSO code becomes more strict. Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-03-12  arm/arm64: KVM: Kill CONFIG_KVM_ARM_{VGIC,TIMER}  [Christoffer Dall]
We can definitely decide at run-time whether to use the GIC and timers or not, and the extra code and data structures that we allocate space for is really negligable with this config option, so I don't think it's worth the extra complexity of always having to define stub static inlines. The !CONFIG_KVM_ARM_VGIC/TIMER case is pretty much an untested code path anyway, so we're better off just getting rid of it. Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com>
2014-10-19  ARM: Blacklist GCC 4.8.0 to GCC 4.8.2 - PR58854  [Russell King]
These stock GCC versions miscompile the kernel by incorrectly optimising the function epilogue code - by first increasing the stack pointer, and then loading entries from below the stack. This means that an opportune interrupt or exception will corrupt the current function's saved state, which may result in the parent function seeing different register values. As this bug has been known to result in corrupted filesystems, and these buggy compiler versions seem to be frequently used, we have little option but to blacklist these compiler versions. Distributions may have fixed PR58854, but as their compilers are totally indistinguishable from the buggy versions, it is unfortunate that this also results in those also being blacklisted. Given the filesystem corruption potential of the original, this is the lesser evil. People who want to build with their fixed compiler versions will need to adjust the kernel source. (Distros need to think about the implications of fixing such a compiler bug, and consider how to ensure that their fixed compiler versions can be detected if they wish to avoid this.) Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
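The blacklist itself amounts to a preprocessor gate; a sketch, assuming GCC_VERSION as provided by <linux/compiler-gcc.h>:

    #if GCC_VERSION >= 40800 && GCC_VERSION < 40803
    #error Your compiler is too buggy; it is known to miscompile kernels
    #error and result in filesystem corruption and oopses (GCC PR58854)
    #endif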
2014-07-11  KVM: arm/arm64: vgic: move GICv2 registers to their own structure  [Marc Zyngier]
In order to make way for the GICv3 registers, move the v2-specific registers to their own structure. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2014-03-03  ARM: KVM: introduce per-vcpu HYP Configuration Register  [Marc Zyngier]
So far, KVM/ARM used a fixed HCR configuration per guest, except for the VI/VF/VA bits to control the interrupt in absence of VGIC. With the upcoming need to dynamically reconfigure trapping, it becomes necessary to allow the HCR to be changed on a per-vcpu basis. The fix here is to mimic what KVM/arm64 already does: a per vcpu HCR field, initialized at setup time. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com>
2013-06-20  ARM: kernel: implement stack pointer save array through MPIDR hashing  [Lorenzo Pieralisi]
The current implementation of cpu_{suspend}/cpu_{resume} relies on the MPIDR to index the array of pointers where the context is saved and restored. The current approach works as long as the MPIDR can be considered a linear index, so that the pointers array can simply be dereferenced by using the MPIDR[7:0] value. On ARM multi-cluster systems, where the MPIDR may not be a linear index, to properly dereference the stack pointer array, a mapping function should be applied to it so that it can be used for array look-ups.

This patch adds code in the cpu_{suspend}/cpu_{resume} implementation that relies on a shifting and ORing hashing method to map an MPIDR value to a set of buckets precomputed at boot to have a collision-free mapping from MPIDR to context pointers.

The hashing algorithm must be simple, fast, and implementable with few instructions since in the cpu_resume path the mapping is carried out with the MMU off and the I-cache off, hence code and data are fetched from DRAM with no caching available. Simplicity is counterbalanced with a little increase of memory (allocated dynamically) for stack pointer buckets, which should anyway be fairly limited on most systems.

Memory for context pointers is allocated in an early_initcall with size precomputed and stashed previously in kernel data structures. Memory for context pointers is allocated through kmalloc; this guarantees contiguous physical addresses for the allocated memory, which is fundamental to the correct functioning of the resume mechanism that relies on the context pointer array to be a chunk of contiguous physical memory.

Virtual to physical address conversion for the context pointer array base is carried out at boot to avoid fiddling with virt_to_phys conversions in the cpu_resume path, which is quite fragile and should be optimized to execute as few instructions as possible. Virtual and physical context pointer base array addresses are stashed in a struct that is accessible from assembly using values generated through the asm-offsets.c mechanism.

Cc: Will Deacon <will.deacon@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Russell King <linux@arm.linux.org.uk> Cc: Colin Cross <ccross@android.com> Cc: Santosh Shilimkar <santosh.shilimkar@ti.com> Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: Amit Kucheria <amit.kucheria@linaro.org> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Nicolas Pitre <nico@linaro.org> Tested-by: Shawn Guo <shawn.guo@linaro.org> Tested-by: Kevin Hilman <khilman@linaro.org> Tested-by: Stephen Warren <swarren@wwwdotorg.org>
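A hedged C rendering of the shift-and-OR hash (the real computation is done in assembly with the MMU and caches off; struct mpidr_hash with its shift_aff[]/mask fields is the structure exported through asm-offsets, and MPIDR_LEVEL_MASK/MPIDR_LEVEL_BITS are assumed from <asm/cputype.h>):

    static u32 mpidr_hash_index(u32 mpidr)
    {
        u32 aff0, aff1, aff2;

        /* Keep affinity fields at their native bit positions, then shift
         * each down by a precomputed amount so the results OR into a
         * compact, collision-free bucket index. */
        mpidr &= mpidr_hash.mask;
        aff0 = mpidr & MPIDR_LEVEL_MASK;
        aff1 = mpidr & (MPIDR_LEVEL_MASK << MPIDR_LEVEL_BITS);
        aff2 = mpidr & (MPIDR_LEVEL_MASK << (2 * MPIDR_LEVEL_BITS));

        return (aff0 >> mpidr_hash.shift_aff[0]) |
               (aff1 >> mpidr_hash.shift_aff[1]) |
               (aff2 >> mpidr_hash.shift_aff[2]);
    }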
2013-05-05  Merge tag 'kvm-3.10-1' of git://git.kernel.org/pub/scm/virt/kvm/kvm  [Linus Torvalds]
Pull kvm updates from Gleb Natapov:
 "Highlights of the updates are:

  general:
   - new emulated device API
   - legacy device assignment is now optional
   - irqfd interface is more generic and can be shared between arches

  x86:
   - VMCS shadow support and other nested VMX improvements
   - APIC virtualization and Posted Interrupt hardware support
   - Optimize mmio spte zapping

  ppc:
   - BookE: in-kernel MPIC emulation with irqfd support
   - Book3S: in-kernel XICS emulation (incomplete)
   - Book3S: HV: migration fixes
   - BookE: more debug support preparation
   - BookE: e6500 support

  ARM:
   - reworking of Hyp idmaps

  s390:
   - ioeventfd for virtio-ccw

  And many other bug fixes, cleanups and improvements"

* tag 'kvm-3.10-1' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (204 commits)
  kvm: Add compat_ioctl for device control API
  KVM: x86: Account for failing enable_irq_window for NMI window request
  KVM: PPC: Book3S: Add API for in-kernel XICS emulation
  kvm/ppc/mpic: fix missing unlock in set_base_addr()
  kvm/ppc: Hold srcu lock when calling kvm_io_bus_read/write
  kvm/ppc/mpic: remove users
  kvm/ppc/mpic: fix mmio region lists when multiple guests used
  kvm/ppc/mpic: remove default routes from documentation
  kvm: KVM_CAP_IOMMU only available with device assignment
  ARM: KVM: iterate over all CPUs for CPU compatibility check
  KVM: ARM: Fix spelling in error message
  ARM: KVM: define KVM_ARM_MAX_VCPUS unconditionally
  KVM: ARM: Fix API documentation for ONE_REG encoding
  ARM: KVM: promote vfp_host pointer to generic host cpu context
  ARM: KVM: add architecture specific hook for capabilities
  ARM: KVM: perform HYP initilization for hotplugged CPUs
  ARM: KVM: switch to a dual-step HYP init code
  ARM: KVM: rework HYP page table freeing
  ARM: KVM: enforce maximum size for identity mapped code
  ARM: KVM: move to a KVM provided HYP idmap
  ...
2013-04-28  ARM: KVM: promote vfp_host pointer to generic host cpu context  [Marc Zyngier]
We use the vfp_host pointer to store the host VFP context, should the guest start using VFP itself. Actually, we can use this pointer in a more generic way to store CPU specific data, and arm64 is using it to dump the whole host state before switching to the guest. Simply rename the vfp_host field to host_cpu_context, and the corresponding type to kvm_cpu_context_t. No change in functionality. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
2013-04-25  Merge branch 'kvm-arm-cleanup' from git://github.com/columbia/linux-kvm-arm.git  [Gleb Natapov]
2013-04-25  Merge branch 'mcpm' of git://git.linaro.org/people/nico/linux into devel-stable  [Russell King]
2013-04-24  ARM: mcpm_head.S: vlock-based first man election  [Dave Martin]
Instead of requiring the first man to be elected in advance (which can be suboptimal in some situations), this patch uses a per-cluster mutex to co-ordinate selection of the first man. This should also make it more feasible to reuse this code path for asynchronous cluster resume (as in CPUidle scenarios). We must ensure that the vlock data doesn't share a cacheline with anything else, or dirty cache eviction could corrupt it. Signed-off-by: Dave Martin <dave.martin@linaro.org> Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org> Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Reviewed-by: Will Deacon <will.deacon@arm.com>
2013-04-24  ARM: mcpm: introduce helpers for platform coherency exit/setup  [Dave Martin]
This provides helper methods to coordinate between CPUs coming down and CPUs going up, as well as documentation on the used algorithms, so that cluster teardown and setup operations are not done for a cluster simultaneously.

For use in the power_down() implementation:
  * __mcpm_cpu_going_down(unsigned int cluster, unsigned int cpu)
  * __mcpm_outbound_enter_critical(unsigned int cluster)
  * __mcpm_outbound_leave_critical(unsigned int cluster)
  * __mcpm_cpu_down(unsigned int cluster, unsigned int cpu)

The power_up_setup() helper should do platform-specific setup in preparation for turning the CPU on, such as invalidating local caches or entering coherency. It must be assembler for now, since it must run before the MMU can be switched on. It is passed the affinity level for which initialization should be performed.

Because the mcpm_sync_struct content is looked-up and modified with the cache enabled or disabled depending on the code path, it is crucial to always ensure proper cache maintenance to update main memory right away. The sync_cache_*() helpers are used to that end.

Also, in order to prevent a cached writer from interfering with an adjacent non-cached writer, we ensure each state variable is located to a separate cache line.

Thanks to Nicolas Pitre and Achin Gupta for the help with this patch.

Signed-off-by: Dave Martin <dave.martin@linaro.org> Signed-off-by: Nicolas Pitre <nico@linaro.org> Reviewed-by: Will Deacon <will.deacon@arm.com>
2013-03-06  ARM: KVM: abstract fault register accesses  [Marc Zyngier]
Instead of directly accessing the fault registers, use proper accessors so the core code can be shared. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2013-03-03  ARM: 7659/1: mm: make mm->context.id an atomic64_t variable  [Will Deacon]
mm->context.id is updated under asid_lock when a new ASID is allocated to an mm_struct. However, it is also read without the lock when a task is being scheduled and checking whether or not the current ASID generation is up-to-date. If two threads of the same process are being scheduled in parallel and the bottom bits of the generation in their mm->context.id match the current generation (that is, the mm_struct has not been used for ~2^24 rollovers) then the non-atomic, lockless access to mm->context.id may yield the incorrect ASID. This patch fixes this issue by making mm->context.id an atomic64_t, ensuring that the generation is always read consistently. For code that only requires access to the ASID bits (e.g. TLB flushing by mm), the value is accessed directly, which GCC converts to an ldrb. Cc: <stable@vger.kernel.org> # 3.8 Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
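Readers that only need the ASID bits then access the counter directly and mask, e.g. (a sketch; the mask definitions follow the ARM mm code and should be treated as assumptions):

    #define ASID_BITS    8
    #define ASID_MASK    ((~0ULL) << ASID_BITS)
    /* Low 8 bits are the ASID; everything above is the rollover generation. */
    #define ASID(mm)     ((unsigned int)((mm)->context.id.counter & ~ASID_MASK))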
2013-02-11  ARM: KVM: arch_timers: Add timer world switch  [Marc Zyngier]
Do the necessary save/restore dance for the timers in the world switch code. In the process, allow the guest to read the physical counter, which is useful for its own clock_event_device. Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>