path: root/arch/arm64
Age  Commit message  Author
2019-12-06  arm64: entry: refine comment of stack overflow check  (Heyi Guo)
Stack overflow checking can be done by testing sp & (1 << THREAD_SHIFT) only if the stacks are aligned to (2 << THREAD_SHIFT) with a size of (1 << THREAD_SHIFT), and this is the case when CONFIG_VMAP_STACK is set. Fix the code comment to avoid confusion. Cc: Will Deacon <will@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Heyi Guo <guoheyi@huawei.com> [catalin.marinas@arm.com: Updated comment following Mark's suggestion] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
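The alignment trick is easy to see in a standalone sketch (illustrative only; the kernel does this check in entry.S assembly, and the THREAD_SHIFT value below is an assumption for the example). With stacks of (1 << THREAD_SHIFT) bytes allocated at (2 << THREAD_SHIFT) alignment, every valid SP has bit THREAD_SHIFT clear, so a single bit test detects an overflow into the guard region below the stack:

  #include <stdbool.h>
  #include <stdint.h>

  #define THREAD_SHIFT  14                      /* example: 16K stacks */
  #define THREAD_SIZE   (1UL << THREAD_SHIFT)

  /* valid SPs lie in [base, base + THREAD_SIZE), base aligned to 2*THREAD_SIZE */
  static bool sp_overflowed(uintptr_t sp)
  {
          return sp & (1UL << THREAD_SHIFT);    /* set => SP fell below the base */
  }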
2019-12-06  arm64: ftrace: fix ifdeffery  (Mark Rutland)
When I tweaked the ftrace entry assembly in commit: 3b23e4991fb66f6d ("arm64: implement ftrace with regs") ... my ifdeffery tweaks left ftrace_graph_caller undefined for CONFIG_DYNAMIC_FTRACE && CONFIG_FUNCTION_GRAPH_TRACER when ftrace is based on mcount. The kbuild test robot reported that this issue is detected at link time:
| arch/arm64/kernel/entry-ftrace.o: In function `skip_ftrace_call':
| arch/arm64/kernel/entry-ftrace.S:238: undefined reference to `ftrace_graph_caller'
| arch/arm64/kernel/entry-ftrace.S:238:(.text+0x3c): relocation truncated to fit: R_AARCH64_CONDBR19 against undefined symbol `ftrace_graph_caller'
| arch/arm64/kernel/entry-ftrace.S:243: undefined reference to `ftrace_graph_caller'
| arch/arm64/kernel/entry-ftrace.S:243:(.text+0x54): relocation truncated to fit: R_AARCH64_CONDBR19 against undefined symbol `ftrace_graph_caller'
This patch fixes the ifdeffery so that the mcount version of ftrace_graph_caller doesn't depend on CONFIG_DYNAMIC_FTRACE. At the same time, a redundant #else is removed from the ifdeffery for the patchable-function-entry version of ftrace_graph_caller. Fixes: 3b23e4991fb66f6d ("arm64: implement ftrace with regs") Reported-by: kbuild test robot <lkp@intel.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Amit Daniel Kachhap <amit.kachhap@arm.com> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Torsten Duwe <duwe@lst.de> Cc: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-12-06  arm64: KVM: Invoke compute_layout() before alternatives are applied  (Sebastian Andrzej Siewior)
compute_layout() is invoked as part of an alternative fixup under stop_machine(). This function invokes get_random_long(), which on -RT acquires a sleeping lock that cannot be acquired in this context. Rename compute_layout() to kvm_compute_layout() and invoke it before stop_machine() applies the alternatives. Add a __init prefix to kvm_compute_layout() because the caller has it, too (and so the code can be discarded after boot). Reviewed-by: James Morse <james.morse@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-12-06  arm64: Validate tagged addresses in access_ok() called from kernel threads  (Catalin Marinas)
__range_ok(), invoked from access_ok(), clears the tag of the user address only if CONFIG_ARM64_TAGGED_ADDR_ABI is enabled and the thread opted in to the relaxed ABI. The latter sets the TIF_TAGGED_ADDR thread flag. In the case of asynchronous I/O (e.g. io_submit()), access_ok() may be called from a kernel thread. Since kernel threads don't have TIF_TAGGED_ADDR set, access_ok() will fail for valid tagged user addresses. Example from the ffs_user_copy_worker() thread:
|   use_mm(io_data->mm);
|   ret = ffs_copy_to_iter(io_data->buf, ret, &io_data->data);
|   unuse_mm(io_data->mm);
Relax the __range_ok() check to always untag the user address if called in the context of a kernel thread. The user pointers would have already been checked via aio_setup_rw() -> import_{single_range,iovec}() at the time of the asynchronous I/O request. Fixes: 63f0c6037965 ("arm64: Introduce prctl() options to control the tagged user addresses ABI") Cc: <stable@vger.kernel.org> # 5.4.x- Cc: Will Deacon <will@kernel.org> Reported-by: Evgenii Stepanov <eugenis@google.com> Tested-by: Evgenii Stepanov <eugenis@google.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
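The relaxed check presumably treats PF_KTHREAD like an opted-in thread; a minimal sketch of the condition, with the surrounding __range_ok() limit arithmetic omitted:

  /* inside __range_ok(), before the limit comparison */
  if (IS_ENABLED(CONFIG_ARM64_TAGGED_ADDR_ABI) &&
      (current->flags & PF_KTHREAD ||           /* kernel thread: pointers pre-checked */
       test_thread_flag(TIF_TAGGED_ADDR)))      /* task opted in to the tagged ABI */
          addr = untagged_addr(addr);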
2019-12-04  arm64: mm: Fix column alignment for UXN in kernel_page_tables  (Mark Brown)
UXN is the only individual PTE bit, other than the PTE_ATTRINDX_MASK ones, which doesn't have both a set and a clear value provided, meaning that the columns in the table won't all be aligned. The PTE_ATTRINDX_MASK values are mutually exclusive and longer, so they are listed last to form a single final column. Ensure everything is aligned by providing a clear value for UXN. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-12-04  arm64: insn: consistently handle exit text  (Mark Rutland)
A kernel built with KASAN && FTRACE_WITH_REGS && !MODULES produces a boot-time splat in the bowels of ftrace:
| [ 0.000000] ftrace: allocating 32281 entries in 127 pages
| [ 0.000000] ------------[ cut here ]------------
| [ 0.000000] WARNING: CPU: 0 PID: 0 at kernel/trace/ftrace.c:2019 ftrace_bug+0x27c/0x328
| [ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 5.4.0-rc3-00008-g7f08ae53a7e3 #13
| [ 0.000000] Hardware name: linux,dummy-virt (DT)
| [ 0.000000] pstate: 60000085 (nZCv daIf -PAN -UAO)
| [ 0.000000] pc : ftrace_bug+0x27c/0x328
| [ 0.000000] lr : ftrace_init+0x640/0x6cc
| [ 0.000000] sp : ffffa000120e7e00
| [ 0.000000] x29: ffffa000120e7e00 x28: ffff00006ac01b10
| [ 0.000000] x27: ffff00006ac898c0 x26: dfffa00000000000
| [ 0.000000] x25: ffffa000120ef290 x24: ffffa0001216df40
| [ 0.000000] x23: 000000000000018d x22: ffffa0001244c700
| [ 0.000000] x21: ffffa00011bf393c x20: ffff00006ac898c0
| [ 0.000000] x19: 00000000ffffffff x18: 0000000000001584
| [ 0.000000] x17: 0000000000001540 x16: 0000000000000007
| [ 0.000000] x15: 0000000000000000 x14: ffffa00010432770
| [ 0.000000] x13: ffff940002483519 x12: 1ffff40002483518
| [ 0.000000] x11: 1ffff40002483518 x10: ffff940002483518
| [ 0.000000] x9 : dfffa00000000000 x8 : 0000000000000001
| [ 0.000000] x7 : ffff940002483519 x6 : ffffa0001241a8c0
| [ 0.000000] x5 : ffff940002483519 x4 : ffff940002483519
| [ 0.000000] x3 : ffffa00011780870 x2 : 0000000000000001
| [ 0.000000] x1 : 1fffe0000d591318 x0 : 0000000000000000
| [ 0.000000] Call trace:
| [ 0.000000] ftrace_bug+0x27c/0x328
| [ 0.000000] ftrace_init+0x640/0x6cc
| [ 0.000000] start_kernel+0x27c/0x654
| [ 0.000000] random: get_random_bytes called from print_oops_end_marker+0x30/0x60 with crng_init=0
| [ 0.000000] ---[ end trace 0000000000000000 ]---
| [ 0.000000] ftrace faulted on writing
| [ 0.000000] [<ffffa00011bf393c>] _GLOBAL__sub_D_65535_0___tracepoint_initcall_level+0x4/0x28
| [ 0.000000] Initializing ftrace call sites
| [ 0.000000] ftrace record flags: 0
| [ 0.000000] (0)
| [ 0.000000] expected tramp: ffffa000100b3344
This is due to an unfortunate combination of several factors. Building with KASAN results in the compiler generating anonymous functions to register/unregister global variables against the shadow memory. These functions are placed in .text.startup/.text.exit, and given mangled names like _GLOBAL__sub_{I,D}_65535_0_$OTHER_SYMBOL. The kernel linker script places these in .init.text and .exit.text respectively, which are both discarded at runtime as part of initmem. Building with FTRACE_WITH_REGS uses -fpatchable-function-entry=2, which also instruments KASAN's anonymous functions. When these are discarded with the rest of initmem, ftrace removes dangling references to these call sites. Building without MODULES implicitly disables STRICT_MODULE_RWX, and causes arm64's patch_map() function to treat any !core_kernel_text() symbol as something that can be modified in-place. As core_kernel_text() is only true for .text and .init.text, with the latter depending on system_state < SYSTEM_RUNNING, we'll treat .exit.text as something that can be patched in-place. However, .exit.text is mapped read-only. Hence in this configuration the ftrace init code blows up while trying to patch one of the functions generated by KASAN. We could try to filter out the call sites in .exit.text rather than initializing them, but this would be inconsistent with how we handle .init.text, and requires hooking into core bits of ftrace. The behaviour of patch_map() is also inconsistent today, so instead let's clean that up and have it consistently handle .exit.text. This patch teaches patch_map() to handle .exit.text at init time, preventing the boot-time splat above. The flow of patch_map() is reworked to make the logic clearer and minimize redundant conditionality. Fixes: 3b23e4991fb66f6d ("arm64: implement ftrace with regs") Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Amit Daniel Kachhap <amit.kachhap@arm.com> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Torsten Duwe <duwe@suse.de> Cc: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
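A hedged sketch of the reworked flow (the actual arch/arm64/kernel/insn.c code may differ in detail; __exittext_begin/__exittext_end are assumed here to be linker-script markers delimiting .exit.text):

  static bool is_exit_text(unsigned long addr)
  {
          /* .exit.text is discarded with initmem, so it only exists early on */
          return system_state < SYSTEM_RUNNING &&
                 addr >= (unsigned long)__exittext_begin &&
                 addr <  (unsigned long)__exittext_end;
  }

  static bool is_image_text(unsigned long addr)
  {
          return core_kernel_text(addr) || is_exit_text(addr);
  }

  static void __kprobes *patch_map(void *addr, int fixmap)
  {
          unsigned long uintaddr = (uintptr_t)addr;
          struct page *page;

          if (is_image_text(uintaddr))
                  page = phys_to_page(__pa_symbol(addr));   /* kernel image text */
          else if (IS_ENABLED(CONFIG_STRICT_MODULE_RWX))
                  page = vmalloc_to_page(addr);             /* module text */
          else
                  return addr;                              /* writable in place */

          return (void *)set_fixmap_offset(fixmap, page_to_phys(page) +
                                           (uintaddr & ~PAGE_MASK));
  }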
2019-12-04  arm64: mm: Fix initialisation of DMA zones on non-NUMA systems  (Will Deacon)
John reports that the recently merged commit 1a8e1cef7603 ("arm64: use both ZONE_DMA and ZONE_DMA32") breaks the boot on his DB845C board:
| Booting Linux on physical CPU 0x0000000000 [0x517f803c]
| Linux version 5.4.0-mainline-10675-g957a03b9e38f
| Machine model: Thundercomm Dragonboard 845c
| [...]
| Built 1 zonelists, mobility grouping on. Total pages: -188245
| Kernel command line: earlycon firmware_class.path=/vendor/firmware/ androidboot.hardware=db845c init=/init androidboot.boot_devices=soc/1d84000.ufshc printk.devkmsg=on buildvariant=userdebug root=/dev/sda2 androidboot.bootdevice=1d84000.ufshc androidboot.serialno=c4e1189c androidboot.baseband=sda msm_drm.dsi_display0=dsi_lt9611_1080_video_display: androidboot.slot_suffix=_a skip_initramfs rootwait ro init=/init
|
| <hangs indefinitely here>
This is because, when CONFIG_NUMA=n, zone_sizes_init() fails to handle memblocks that fall entirely within the ZONE_DMA region and erroneously ends up trying to add a negatively-sized region into the following ZONE_DMA32, which is later interpreted as a large unsigned region by the core MM code. Rework the non-NUMA implementation of zone_sizes_init() so that the start address of the memblock being processed is adjusted according to the end of the previous zone, which is then range-checked before updating the hole information of subsequent zones. Cc: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> Cc: Christoph Hellwig <hch@lst.de> Cc: Bjorn Andersson <bjorn.andersson@linaro.org> Link: https://lore.kernel.org/lkml/CALAqxLVVcsmFrDKLRGRq7GewcW405yTOxG=KR3csVzQ6bXutkA@mail.gmail.com Fixes: 1a8e1cef7603 ("arm64: use both ZONE_DMA and ZONE_DMA32") Reported-by: John Stultz <john.stultz@linaro.org> Tested-by: John Stultz <john.stultz@linaro.org> Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
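The clamping described above can be modelled with a small helper. This is an illustrative sketch, not the kernel's zone_sizes_init(); zone_limit[] is assumed to hold exclusive per-zone PFN limits such as { dma_limit, dma32_limit, max_pfn }:

  static void account_memblock(unsigned long start, unsigned long end,
                               const unsigned long zone_limit[],
                               unsigned long zhole_size[], int nr_zones)
  {
          int i;

          for (i = 0; i < nr_zones && start < end; i++) {
                  unsigned long zone_end;

                  if (start >= zone_limit[i])
                          continue;                   /* block starts beyond this zone */

                  zone_end = end < zone_limit[i] ? end : zone_limit[i];
                  zhole_size[i] -= zone_end - start;  /* present pages shrink the hole */
                  start = zone_end;                   /* adjust start for the next zone */
          }
  }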
2019-11-14  arm64: Kconfig: add a choice for endianness  (Anders Roxell)
When building allmodconfig, KCONFIG_ALLCONFIG=$(pwd)/arch/arm64/configs/defconfig enables CONFIG_CPU_BIG_ENDIAN, which tends not to be what most people want. Another concern that has come up is that ACPI isn't built for an allmodconfig kernel today since that also depends on !CPU_BIG_ENDIAN. Rework so that we introduce a 'choice' and default it to CPU_LITTLE_ENDIAN. That means that an allmodconfig kernel will default to CPU_LITTLE_ENDIAN, which most people tend to want. Reviewed-by: John Garry <john.garry@huawei.com> Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Anders Roxell <anders.roxell@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-11-11  arm64: Kconfig: make CMDLINE_FORCE depend on CMDLINE  (Anders Roxell)
When building allmodconfig, KCONFIG_ALLCONFIG=$(pwd)/arch/arm64/configs/defconfig enables CONFIG_CMDLINE_FORCE, which forces the user to pass the full cmdline via CONFIG_CMDLINE="...". Rework so that CONFIG_CMDLINE_FORCE can be set only if CONFIG_CMDLINE is set to something other than an empty string. Suggested-by: John Garry <john.garry@huawei.com> Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Anders Roxell <anders.roxell@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-11-08  Merge branches 'for-next/elf-hwcap-docs', 'for-next/smccc-conduit-cleanup', 'for-next/zone-dma', 'for-next/relax-icc_pmr_el1-sync', 'for-next/double-page-fault', 'for-next/misc', 'for-next/kselftest-arm64-signal' and 'for-next/kaslr-diagnostics' into for-next/core  (Catalin Marinas)
* for-next/elf-hwcap-docs:
  : Update the arm64 ELF HWCAP documentation
  docs/arm64: cpu-feature-registers: Rewrite bitfields that don't follow [e, s]
  docs/arm64: cpu-feature-registers: Documents missing visible fields
  docs/arm64: elf_hwcaps: Document HWCAP_SB
  docs/arm64: elf_hwcaps: sort the HWCAP{, 2} documentation by ascending value
* for-next/smccc-conduit-cleanup:
  : SMC calling convention conduit clean-up
  firmware: arm_sdei: use common SMCCC_CONDUIT_*
  firmware/psci: use common SMCCC_CONDUIT_*
  arm: spectre-v2: use arm_smccc_1_1_get_conduit()
  arm64: errata: use arm_smccc_1_1_get_conduit()
  arm/arm64: smccc/psci: add arm_smccc_1_1_get_conduit()
* for-next/zone-dma:
  : Reintroduction of ZONE_DMA for Raspberry Pi 4 support
  arm64: mm: reserve CMA and crashkernel in ZONE_DMA32
  dma/direct: turn ARCH_ZONE_DMA_BITS into a variable
  arm64: Make arm64_dma32_phys_limit static
  arm64: mm: Fix unused variable warning in zone_sizes_init
  mm: refresh ZONE_DMA and ZONE_DMA32 comments in 'enum zone_type'
  arm64: use both ZONE_DMA and ZONE_DMA32
  arm64: rename variables used to calculate ZONE_DMA32's size
  arm64: mm: use arm64_dma_phys_limit instead of calling max_zone_dma_phys()
* for-next/relax-icc_pmr_el1-sync:
  : Relax ICC_PMR_EL1 (GICv3) accesses when ICC_CTLR_EL1.PMHE is clear
  arm64: Document ICC_CTLR_EL3.PMHE setting requirements
  arm64: Relax ICC_PMR_EL1 accesses when ICC_CTLR_EL1.PMHE is clear
* for-next/double-page-fault:
  : Avoid a double page fault in __copy_from_user_inatomic() if hw does not support auto Access Flag
  mm: fix double page fault on arm64 if PTE_AF is cleared
  x86/mm: implement arch_faults_on_old_pte() stub on x86
  arm64: mm: implement arch_faults_on_old_pte() on arm64
  arm64: cpufeature: introduce helper cpu_has_hw_af()
* for-next/misc:
  : Various fixes and clean-ups
  arm64: kpti: Add NVIDIA's Carmel core to the KPTI whitelist
  arm64: mm: Remove MAX_USER_VA_BITS definition
  arm64: mm: simplify the page end calculation in __create_pgd_mapping()
  arm64: print additional fault message when executing non-exec memory
  arm64: psci: Reduce the waiting time for cpu_psci_cpu_kill()
  arm64: pgtable: Correct typo in comment
  arm64: docs: cpu-feature-registers: Document ID_AA64PFR1_EL1
  arm64: cpufeature: Fix typos in comment
  arm64/mm: Poison initmem while freeing with free_reserved_area()
  arm64: use generic free_initrd_mem()
  arm64: simplify syscall wrapper ifdeffery
* for-next/kselftest-arm64-signal:
  : arm64-specific kselftest support with signal-related test-cases
  kselftest: arm64: fake_sigreturn_misaligned_sp
  kselftest: arm64: fake_sigreturn_bad_size
  kselftest: arm64: fake_sigreturn_duplicated_fpsimd
  kselftest: arm64: fake_sigreturn_missing_fpsimd
  kselftest: arm64: fake_sigreturn_bad_size_for_magic0
  kselftest: arm64: fake_sigreturn_bad_magic
  kselftest: arm64: add helper get_current_context
  kselftest: arm64: extend test_init functionalities
  kselftest: arm64: mangle_pstate_invalid_mode_el[123][ht]
  kselftest: arm64: mangle_pstate_invalid_daif_bits
  kselftest: arm64: mangle_pstate_invalid_compat_toggle and common utils
  kselftest: arm64: extend toplevel skeleton Makefile
* for-next/kaslr-diagnostics:
  : Provide diagnostics on boot for KASLR
  arm64: kaslr: Check command line before looking for a seed
  arm64: kaslr: Announce KASLR status on boot
2019-11-08  arm64: kaslr: Check command line before looking for a seed  (Mark Brown)
Now that we print diagnostics at boot, the reason why we do not initialise KASLR matters. Currently we check for a seed before we check if the user has explicitly disabled KASLR on the command line, which results in misleading diagnostics, so reverse the order of those checks. We still parse the seed from the DT early so that, if the user has both provided a seed and disabled KASLR on the command line, we still mask the seed. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-11-08  arm64: kaslr: Announce KASLR status on boot  (Mark Brown)
Currently the KASLR code is silent at boot unless it forces on KPTI, in which case a message is printed for that. This can lead to users incorrectly believing their system has the feature enabled when it in fact does not, and if they notice the problem the lack of any diagnostics makes it harder to understand. Add an initcall which prints a message showing the status of KASLR during boot to make the status clear. This is particularly useful in cases where we don't have a seed. It seems to be a relatively common error for system integrators and administrators to enable KASLR in their configuration but not provide the seed at runtime, often because seed provisioning breaks at some later point after it is initially enabled and verified. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
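A minimal sketch of such an initcall (the real patch presumably records why KASLR was disabled and prints a more specific message; kaslr_offset() is the existing arm64 helper returning the randomized offset):

  static int __init report_kaslr_status(void)
  {
          if (kaslr_offset())
                  pr_info("KASLR enabled\n");
          else
                  pr_info("KASLR disabled\n");    /* e.g. no seed, or nokaslr */
          return 0;
  }
  core_initcall(report_kaslr_status);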
2019-11-08  Merge branch 'for-next/perf' into for-next/core  (Catalin Marinas)
- Support for additional PMU topologies on HiSilicon platforms
- Support for CCN-512 interconnect PMU
- Support for AXI ID filtering in the IMX8 DDR PMU
- Support for the CCPI2 uncore PMU in ThunderX2
- Driver cleanup to use devm_platform_ioremap_resource()
* for-next/perf:
  drivers/perf: hisi: update the sccl_id/ccl_id for certain HiSilicon platform
  perf/imx_ddr: Dump AXI ID filter info to userspace
  docs/perf: Add AXI ID filter capabilities information
  perf/imx_ddr: Add driver for DDR PMU in i.MX8MPlus
  perf/imx_ddr: Add enhanced AXI ID filter support
  bindings: perf: imx-ddr: Add new compatible string
  docs/perf: Add explanation for DDR_CAP_AXI_ID_FILTER_ENHANCED quirk
  arm64: perf: Simplify the ARMv8 PMUv3 event attributes
  drivers/perf: Add CCPI2 PMU support in ThunderX2 UNCORE driver.
  Documentation: perf: Update documentation for ThunderX2 PMU uncore driver
  Documentation: Add documentation for CCN-512 DTS binding
  perf: arm-ccn: Enable stats for CCN-512 interconnect
  perf/smmuv3: use devm_platform_ioremap_resource() to simplify code
  perf/arm-cci: use devm_platform_ioremap_resource() to simplify code
  perf/arm-ccn: use devm_platform_ioremap_resource() to simplify code
  perf: xgene: use devm_platform_ioremap_resource() to simplify code
  perf: hisi: use devm_platform_ioremap_resource() to simplify code
2019-11-07  Merge branch 'arm64/ftrace-with-regs' of git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux into for-next/core  (Catalin Marinas)
FTRACE_WITH_REGS support for arm64.
* 'arm64/ftrace-with-regs' of git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux:
  arm64: ftrace: minimize ifdeffery
  arm64: implement ftrace with regs
  arm64: asm-offsets: add S_FP
  arm64: insn: add encoder for MOV (register)
  arm64: module/ftrace: initialize PLT at load time
  arm64: module: rework special section handling
  module/ftrace: handle patchable-function-entry
  ftrace: add ftrace_init_nop()
2019-11-07  arm64: mm: reserve CMA and crashkernel in ZONE_DMA32  (Nicolas Saenz Julienne)
With the introduction of ZONE_DMA in arm64 we moved the default CMA and crashkernel reservation into that area. This caused a regression on big machines that need big CMA and crashkernel reservations. Note that ZONE_DMA is only 1GB in size. Restore the previous behavior, as the vast majority of devices are OK with reserving these in ZONE_DMA32. The ones that need them in ZONE_DMA will configure it explicitly. Fixes: 1a8e1cef7603 ("arm64: use both ZONE_DMA and ZONE_DMA32") Reported-by: Qian Cai <cai@lca.pw> Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-11-06  arm64: ftrace: minimize ifdeffery  (Mark Rutland)
Now that we no longer refer to mod->arch.ftrace_trampolines in the body of ftrace_make_call(), we can use IS_ENABLED() rather than ifdeffery, and make the code easier to follow. Likewise in ftrace_make_nop(). Let's do so. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reviewed-by: Torsten Duwe <duwe@suse.de> Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Tested-by: Torsten Duwe <duwe@suse.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org>
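For illustration, the IS_ENABLED() pattern looks like this (hypothetical function, not the actual diff). Unlike #ifdef, both branches stay visible to the compiler, so the disabled path is still type-checked and then discarded:

  static int prepare_module_plt(struct module *mod)
  {
          if (!IS_ENABLED(CONFIG_ARM64_MODULE_PLTS))
                  return -EINVAL;         /* compiled out, but still parsed */

          /* PLT-based trampoline setup would go here */
          return 0;
  }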
2019-11-06  arm64: implement ftrace with regs  (Torsten Duwe)
This patch implements FTRACE_WITH_REGS for arm64, which allows a traced function's arguments (and some other registers) to be captured into a struct pt_regs, allowing these to be inspected and/or modified. This is a building block for live-patching, where a function's arguments may be forwarded to another function. This is also necessary to enable ftrace and in-kernel pointer authentication at the same time, as it allows the LR value to be captured and adjusted prior to signing. Using GCC's -fpatchable-function-entry=N option, we can have the compiler insert a configurable number of NOPs between the function entry point and the usual prologue. This also ensures functions are AAPCS compliant (e.g. disabling inter-procedural register allocation). For example, with -fpatchable-function-entry=2, GCC 8.1.0 compiles the following:
| unsigned long bar(void);
|
| unsigned long foo(void)
| {
|     return bar() + 1;
| }
... to:
| <foo>:
| nop
| nop
| stp x29, x30, [sp, #-16]!
| mov x29, sp
| bl 0 <bar>
| add x0, x0, #0x1
| ldp x29, x30, [sp], #16
| ret
This patch builds the kernel with -fpatchable-function-entry=2, prefixing each function with two NOPs. To trace a function, we replace these NOPs with a sequence that saves the LR into a GPR, then calls an ftrace entry assembly function which saves this and other relevant registers:
| mov x9, x30
| bl <ftrace-entry>
Since patchable functions are AAPCS compliant (and the kernel does not use x18 as a platform register), x9-x18 can be safely clobbered in the patched sequence and the ftrace entry code. There are now two ftrace entry functions, ftrace_regs_entry (which saves all GPRs), and ftrace_entry (which saves the bare minimum). A PLT is allocated for each within modules. Signed-off-by: Torsten Duwe <duwe@suse.de> [Mark: rework asm, comments, PLTs, initialization, commit message] Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reviewed-by: Torsten Duwe <duwe@suse.de> Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Tested-by: Torsten Duwe <duwe@suse.de> Cc: AKASHI Takahiro <takahiro.akashi@linaro.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Julien Thierry <jthierry@redhat.com> Cc: Will Deacon <will@kernel.org>
2019-11-06  arm64: asm-offsets: add S_FP  (Mark Rutland)
So that assembly code can more easily manipulate the FP (x29) within a pt_regs, add an S_FP asm-offsets definition. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reviewed-by: Torsten Duwe <duwe@suse.de> Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Tested-by: Torsten Duwe <duwe@suse.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org>
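The addition is presumably a single asm-offsets.c line of this shape (x29 is the AArch64 frame pointer, stored in pt_regs->regs[29]):

  DEFINE(S_FP, offsetof(struct pt_regs, regs[29]));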
2019-11-06  arm64: insn: add encoder for MOV (register)  (Mark Rutland)
For FTRACE_WITH_REGS, we're going to want to generate a MOV (register) instruction as part of the callsite initialization. As MOV (register) is an alias for ORR (shifted register), we can generate this with aarch64_insn_gen_logical_shifted_reg(), but it's somewhat verbose and difficult to read in-context. Add an aarch64_insn_gen_move_reg() wrapper for this case so that we can write callers in a more straightforward way. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reviewed-by: Torsten Duwe <duwe@suse.de> Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Tested-by: Torsten Duwe <duwe@suse.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org>
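A sketch of the wrapper, assuming the existing encoder's signature: since MOV Xd, Xm is an alias for ORR Xd, XZR, Xm with zero shift, the wrapper just fills in the boilerplate:

  u32 aarch64_insn_gen_move_reg(enum aarch64_insn_register dst,
                                enum aarch64_insn_register src,
                                enum aarch64_insn_variant variant)
  {
          /* MOV (register) == ORR dst, zr, src, LSL #0 */
          return aarch64_insn_gen_logical_shifted_reg(dst, AARCH64_INSN_REG_ZR,
                                                      src, 0, variant,
                                                      AARCH64_INSN_LOGIC_ORR);
  }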
2019-11-06  arm64: module/ftrace: initialize PLT at load time  (Mark Rutland)
Currently we lazily-initialize a module's ftrace PLT at runtime when we install the first ftrace call. To do so we have to apply a number of sanity checks, transiently mark the module text as RW, and perform an IPI as part of handling Neoverse-N1 erratum #1542419. We only expect the ftrace trampoline to point at ftrace_caller() (AKA FTRACE_ADDR), so let's simplify all of this by initializing the PLT at module load time, before the module loader marks the module RO and performs the initial I-cache maintenance for the module. Thus we can rely on the module having been correctly initialized, and can simplify the runtime work necessary to install an ftrace call in a module. This will also allow for the removal of module_disable_ro(). Tested by forcing ftrace_make_call() to use the module PLT, and then loading up a module after setting up ftrace with:
| echo ":mod:<module-name>" > set_ftrace_filter;
| echo function > current_tracer;
| modprobe <module-name>
Since FTRACE_ADDR is only defined when CONFIG_DYNAMIC_FTRACE is selected, we wrap its use along with most of module_init_ftrace_plt() with ifdeffery rather than using IS_ENABLED(). Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reviewed-by: Torsten Duwe <duwe@suse.de> Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Tested-by: Torsten Duwe <duwe@suse.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Will Deacon <will@kernel.org>
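A hedged sketch of the load-time initialization (helper names such as find_section() and get_plt_entry() follow this series' descriptions; details may differ from the actual arm64 module code):

  static int module_init_ftrace_plt(const Elf_Ehdr *hdr,
                                    const Elf_Shdr *sechdrs,
                                    struct module *mod)
  {
  #if defined(CONFIG_ARM64_MODULE_PLTS) && defined(CONFIG_DYNAMIC_FTRACE)
          const Elf_Shdr *s;
          struct plt_entry *plt;

          s = find_section(hdr, sechdrs, ".text.ftrace_trampoline");
          if (!s)
                  return -ENOEXEC;        /* correctly built modules have one */

          /* module text is still RW here, so no transient permission changes */
          plt = (void *)s->sh_addr;
          *plt = get_plt_entry(FTRACE_ADDR, plt);
          mod->arch.ftrace_trampoline = plt;
  #endif
          return 0;
  }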
2019-11-06  arm64: module: rework special section handling  (Mark Rutland)
When we load a module, we have to perform some special work for a couple of named sections. To do this, we iterate over all of the module's sections, and perform work for each section we recognize. To make it easier to handle the unexpected absence of a section, and to make the section-specific logic easier to read, let's factor the section search into a helper. Similar is already done in the core module loader, and in other architectures (ideally we'd unify these in future). If we expect a module to have an ftrace trampoline section, but it doesn't have one, we'll now reject loading the module. When ARM64_MODULE_PLTS is selected, any correctly built module should have one (and this is assumed by arm64's ftrace PLT code); the absence of such a section implies something has gone wrong at build time. Subsequent patches will make use of the new helper. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reviewed-by: Torsten Duwe <duwe@suse.de> Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Tested-by: Torsten Duwe <duwe@suse.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org>
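A sketch of such a helper (assumed shape; the in-tree version may differ):

  static const Elf_Shdr *find_section(const Elf_Ehdr *hdr,
                                      const Elf_Shdr *sechdrs,
                                      const char *name)
  {
          const Elf_Shdr *s, *se;
          const char *secstrs = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;

          for (s = sechdrs, se = sechdrs + hdr->e_shnum; s < se; s++) {
                  if (strcmp(name, secstrs + s->sh_name) == 0)
                          return s;       /* caller decides whether absence is fatal */
          }

          return NULL;
  }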
2019-11-06  arm64: kpti: Add NVIDIA's Carmel core to the KPTI whitelist  (Rich Wiley)
NVIDIA Carmel CPUs don't implement ID_AA64PFR0_EL1.CSV3 but aren't susceptible to Meltdown, so add Carmel to kpti_safe_list[]. Signed-off-by: Rich Wiley <rwiley@nvidia.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-11-06  arm64: mm: Remove MAX_USER_VA_BITS definition  (Bhupesh Sharma)
Commit 9b31cf493ffa ("arm64: mm: Introduce MAX_USER_VA_BITS definition") introduced the MAX_USER_VA_BITS definition, which was used to support the arm64 mm use-cases where user-space could use 52-bit virtual addresses whereas kernel-space would still be limited to a maximum of 48-bit virtual addressing. But, now with commit b6d00d47e81a ("arm64: mm: Introduce 52-bit Kernel VAs"), we removed the 52-bit user/48-bit kernel kconfig option, and hence there is no longer any scenario where user VA != kernel VA size (even with CONFIG_ARM64_FORCE_52BIT enabled, the same is true). Hence we can do away with the MAX_USER_VA_BITS macro, as it is equal to VA_BITS (maximum VA space size) in all possible use-cases. Note that even though the 'vabits_actual' value would be 48 for arm64 hardware which doesn't support the ARMv8.2-LVA extension (even when CONFIG_ARM64_VA_BITS_52 is enabled), VA_BITS would still be set to 52. Hence this change is safe in all possible VA address space combinations. Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Steve Capper <steve.capper@arm.com> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: linux-kernel@vger.kernel.org Cc: kexec@lists.infradead.org Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-11-06  arm64: mm: simplify the page end calculation in __create_pgd_mapping()  (Masahiro Yamada)
Calculate the page-aligned end address more simply. The local variable "length" is unneeded. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
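The simplification is presumably along these lines (a before/after sketch, not the exact diff):

  /* before: align the size, then derive the end */
  addr = virt & PAGE_MASK;
  length = PAGE_ALIGN(size + (virt & ~PAGE_MASK));
  end = addr + length;

  /* after: compute the page-aligned end directly; no 'length' needed */
  addr = virt & PAGE_MASK;
  end = PAGE_ALIGN(virt + size);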
2019-11-01  arm64: perf: Simplify the ARMv8 PMUv3 event attributes  (Shaokun Zhang)
For each PMU event, there is an ARMV8_EVENT_ATTR(xx, XX) and an &armv8_event_attr_xx.attr.attr entry. Let's redefine ARMV8_EVENT_ATTR to simplify armv8_pmuv3_event_attrs. Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com> [will: Dropped unnecessary array syntax] Signed-off-by: Will Deacon <will@kernel.org>
2019-11-01  dma/direct: turn ARCH_ZONE_DMA_BITS into a variable  (Nicolas Saenz Julienne)
Some architectures, notably ARM, are interested in tweaking this depending on their runtime DMA addressing limitations. Acked-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
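The change presumably swaps the compile-time constant for a variable that architectures can assign during early boot, roughly:

  /* before: fixed at build time in the arch headers */
  #define ARCH_ZONE_DMA_BITS      24

  /* after (sketch): a default the arch may override before zones are sized */
  unsigned int zone_dma_bits __ro_after_init = 24;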
2019-10-29  arm64: print additional fault message when executing non-exec memory  (Xiang Zheng)
When attempting to execute non-executable memory, the fault message reads:
| Unable to handle kernel read from unreadable memory at virtual address ffff802dac469000
This can be confusing, so add a new fault message for instruction aborts. Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Xiang Zheng <zhengxiang9@huawei.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
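The message selection presumably gains an instruction-abort case; a hedged sketch using a hypothetical helper (the real logic lives inline in arch/arm64/mm/fault.c):

  static const char *kernel_fault_msg(unsigned int esr)
  {
          if (is_el1_instruction_abort(esr))
                  return "execute from non-executable memory";

          return (esr & ESR_ELx_WNR) ? "write to read-only memory"
                                     : "read from unreadable memory";
  }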
2019-10-28  Merge branch 'for-next/entry-s-to-c' into for-next/core  (Catalin Marinas)
Move the synchronous exception paths from entry.S into a C file to improve the code readability.
* for-next/entry-s-to-c:
  arm64: entry-common: don't touch daif before bp-hardening
  arm64: Remove asmlinkage from updated functions
  arm64: entry: convert el0_sync to C
  arm64: entry: convert el1_sync to C
  arm64: add local_daif_inherit()
  arm64: Add prototypes for functions called by entry.S
  arm64: remove __exception annotations
2019-10-28  arm64: Make arm64_dma32_phys_limit static  (Catalin Marinas)
This variable is only used in the arch/arm64/mm/init.c file for ZONE_DMA32 initialisation; there is no need to expose it. Reported-by: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-10-28  Merge branch 'kvm-arm64/erratum-1319367' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms into for-next/core  (Catalin Marinas)
Similarly to erratum 1165522, which affects Cortex-A76, A57 and A72 suffer from errata 1319537 and 1319367 respectively, potentially resulting in TLB corruption if the CPU speculates an AT instruction while switching guests. The fix is slightly more involved since we don't have VHE to help us here, but the idea is the same: when switching a guest in, we must prevent any speculated AT from being able to parse the page tables until S2 is up and running. Only at this stage can we allow AT to take place. For this, we always restore the guest sysregs first, except for its SCTLR and TCR registers, which must be set with SCTLR.M=1 and TCR.EPD{0,1} = {1, 1}, effectively disabling the PTW and TLB allocation. Once S2 is set up, we restore the guest's SCTLR and TCR. Similar things must be done on TLB invalidation...
* 'kvm-arm64/erratum-1319367' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms:
  arm64: Enable and document ARM errata 1319367 and 1319537
  arm64: KVM: Prevent speculative S1 PTW when restoring vcpu context
  arm64: KVM: Disable EL1 PTW when invalidating S2 TLBs
  arm64: KVM: Reorder system register restoration and stage-2 activation
  arm64: Add ARM64_WORKAROUND_1319367 for all A57 and A72 versions
2019-10-28  Merge branch 'for-next/neoverse-n1-stale-instr' into for-next/core  (Catalin Marinas)
Neoverse-N1 cores with the 'COHERENT_ICACHE' feature may fetch stale instructions when software depends on prefetch-speculation-protection instead of explicit synchronization. [0] The workaround is to trap I-cache maintenance and issue an inner-shareable TLBI. The affected cores have a coherent I-cache, so the I-cache maintenance isn't necessary. The core tells user-space it can skip it with CTR_EL0.DIC. We also have to trap this register to hide the bit, forcing DIC-aware user-space to perform the maintenance. To avoid trapping all cache maintenance, this workaround depends on a firmware component that only traps I-cache maintenance from EL0 and performs the workaround. For user-space, the kernel's work is to trap CTR_EL0 to hide DIC, and produce a fake IminLine. EL3 traps the now-necessary I-cache maintenance and performs the inner-shareable TLBI that makes everything better.
[0] https://developer.arm.com/docs/sden885747/latest/arm-neoverse-n1-mp050-software-developer-errata-notice
* for-next/neoverse-n1-stale-instr:
  arm64: Silence clang warning on mismatched value/register sizes
  arm64: compat: Workaround Neoverse-N1 #1542419 for compat user-space
  arm64: Fake the IminLine size on systems affected by Neoverse-N1 #1542419
  arm64: errata: Hide CTR_EL0.DIC on systems affected by Neoverse-N1 #1542419
2019-10-28  Merge remote-tracking branch 'arm64/for-next/fixes' into for-next/core  (Catalin Marinas)
This is required to solve the conflicts with subsequent merges of two more errata workaround branches.
* arm64/for-next/fixes:
  arm64: tags: Preserve tags for addresses translated via TTBR1
  arm64: mm: fix inverted PAR_EL1.F check
  arm64: sysreg: fix incorrect definition of SYS_PAR_EL1_F
  arm64: entry.S: Do not preempt from IRQ before all cpufeatures are enabled
  arm64: hibernate: check pgd table allocation
  arm64: cpufeature: Treat ID_AA64ZFR0_EL1 as RAZ when SVE is not enabled
  arm64: Fix kcore macros after 52-bit virtual addressing fallout
  arm64: Allow CAVIUM_TX2_ERRATUM_219 to be selected
  arm64: Avoid Cavium TX2 erratum 219 when switching TTBR
  arm64: Enable workaround for Cavium TX2 erratum 219 when running SMT
  arm64: KVM: Trap VM ops when ARM64_WORKAROUND_CAVIUM_TX2_219_TVM is set
2019-10-28  arm64: entry-common: don't touch daif before bp-hardening  (James Morse)
The previous patches mechanically transformed the assembly version of entry.S to entry-common.c for synchronous exceptions. The C version of local_daif_restore() doesn't quite do the same thing as the assembly versions if pseudo-NMI is in use. In particular,
| local_daif_restore(DAIF_PROCCTX_NOIRQ)
will still allow pNMI to be delivered. This is not the behaviour do_el0_ia_bp_hardening() and do_sp_pc_abort() want, as it should not be possible for the PMU handler to run as an NMI until the bp-hardening sequence has run. The bp-hardening calls were placed where they are because this was the first C code to run after the relevant exceptions. As we've now moved that point earlier, move the checks and calls earlier too. This makes it clearer that this stuff runs before any kind of exception, and saves modifying PSTATE twice. Signed-off-by: James Morse <james.morse@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Cc: Julien Thierry <julien.thierry.kdev@gmail.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-10-28  arm64: Remove asmlinkage from updated functions  (James Morse)
Now that the callers of these functions have moved into C, they no longer need the asmlinkage annotation. Remove it. Signed-off-by: James Morse <james.morse@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-10-28  arm64: entry: convert el0_sync to C  (Mark Rutland)
This is largely a 1-1 conversion of asm to C, with a couple of caveats. The el0_sync{_compat} switches explicitly handle all the EL0 debug cases, so el0_dbg doesn't have to try to bail out for unexpected EL1 debug ESR values. This also means that an unexpected vector catch from AArch32 is routed to el0_inv. We *could* merge the native and compat switches, which would make the diffstat negative, but I've tried to stay as close to the existing assembly as possible for the moment. Signed-off-by: Mark Rutland <mark.rutland@arm.com> [split out of a bigger series, added nokprobes. removed irq trace calls as the C helpers do this. renamed el0_dbg's use of FAR] Signed-off-by: James Morse <james.morse@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Cc: Julien Thierry <julien.thierry.kdev@gmail.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-10-28  arm64: entry: convert el1_sync to C  (Mark Rutland)
This patch converts the EL1 sync entry assembly logic to C code. Doing this will allow us to make changes in a slightly more readable way. A case in point is supporting kernel-first RAS. do_sea() should be called on the CPU that took the fault. Largely the assembly code is converted to C in a relatively straightforward manner. Since all sync sites share a common asm entry point, the ASM_BUG() instances are no longer required for effective backtraces back to assembly, and we don't need similar BUG() entries. The ESR_ELx.EC codes for all (supported) debug exceptions are now checked in the el1_sync_handler's switch statement, which renders the check in el1_dbg redundant. This both simplifies the el1_dbg handler, and makes the EL1 exception handling more robust to currently-unallocated ESR_ELx.EC encodings. Signed-off-by: Mark Rutland <mark.rutland@arm.com> [split out of a bigger series, added nokprobes, moved prototypes] Signed-off-by: James Morse <james.morse@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Cc: Julien Thierry <julien.thierry.kdev@gmail.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-10-28  arm64: add local_daif_inherit()  (Mark Rutland)
Some synchronous exceptions can be taken from a number of contexts, e.g. where IRQs may or may not be masked. In the entry assembly for these exceptions, we use the inherit_daif assembly macro to ensure that we only mask those exceptions which were masked when the exception was taken. So that we can do the same from C code, this patch adds a new local_daif_inherit() function, following the existing local_daif_*() naming scheme. Signed-off-by: Mark Rutland <mark.rutland@arm.com> [moved away from local_daif_restore()] Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
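A minimal sketch of the new helper, assuming the DAIF bits are taken straight from the saved pstate (the in-tree version may carry extra handling for pseudo-NMI/PMR):

  static inline void local_daif_inherit(struct pt_regs *regs)
  {
          unsigned long flags = regs->pstate & DAIF_MASK;

          /* re-mask exactly the exceptions that were masked at entry */
          write_sysreg(flags, daif);
  }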
2019-10-28  arm64: Add prototypes for functions called by entry.S  (James Morse)
Functions that are only called by assembly don't always have a C header file prototype. Add the prototypes before moving the assembly callers to C. Signed-off-by: James Morse <james.morse@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-10-28  arm64: remove __exception annotations  (James Morse)
Since commit 732674980139 ("arm64: unwind: reference pt_regs via embedded stack frame") arm64 has not used the __exception annotation to dump the pt_regs during stack tracing. in_exception_text() has no callers. This annotation is only used to blacklist kprobes, it means the same as __kprobes. Section annotations like this require the functions to be grouped together between the start/end markers, and placed according to the linker script. For kprobes we also have NOKPROBE_SYMBOL() which logs the symbol address in a section that kprobes parses and blacklists at boot. Using NOKPROBE_SYMBOL() instead lets kprobes publish the list of blacklisted symbols, and saves us from having an arm64 specific spelling of __kprobes. do_debug_exception() already has a NOKPROBE_SYMBOL() annotation. Signed-off-by: James Morse <james.morse@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
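Usage of the replacement annotation looks like this (hypothetical handler shown for illustration):

  #include <linux/kprobes.h>

  asmlinkage void do_sync_exception(struct pt_regs *regs)
  {
          /* handle the exception... */
  }
  /* records the symbol in a table kprobes parses and blacklists at boot */
  NOKPROBE_SYMBOL(do_sync_exception);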
2019-10-28  arm64: Silence clang warning on mismatched value/register sizes  (Catalin Marinas)
Clang reports a warning on the __tlbi(aside1is, 0) macro expansion since the value size does not match the register size specified in the inline asm. Construct the ASID value using the __TLBI_VADDR() macro. Fixes: 222fc0c8503d ("arm64: compat: Workaround Neoverse-N1 #1542419 for compat user-space") Reported-by: Nathan Chancellor <natechancellor@gmail.com> Cc: James Morse <james.morse@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
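The fix is presumably of this shape (sketch):

  /* before: a bare 0, whose 32-bit type mismatches the 64-bit asm operand */
  __tlbi(aside1is, 0);

  /* after: build the 64-bit operand (ASID 0, address 0) explicitly */
  __tlbi(aside1is, __TLBI_VADDR(0, 0));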
2019-10-26  arm64: Enable and document ARM errata 1319367 and 1319537  (Marc Zyngier)
Now that everything is in place, let's get the ball rolling by allowing the corresponding config option to be selected. Also add the required information to silicon_errata.rst. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org>
2019-10-26  arm64: KVM: Prevent speculative S1 PTW when restoring vcpu context  (Marc Zyngier)
When handling erratum 1319367, we must ensure that the page table walker cannot parse the S1 page tables while the guest is in an inconsistent state. This is done as follows:
On guest entry:
- TCR_EL1.EPD{0,1} are set, ensuring that no PTW can occur
- all system registers are restored, except for TCR_EL1 and SCTLR_EL1
- stage-2 is restored
- SCTLR_EL1 and TCR_EL1 are restored
On guest exit:
- SCTLR_EL1.M and TCR_EL1.EPD{0,1} are set, ensuring that no PTW can occur
- stage-2 is disabled
- all host system registers are restored
Reviewed-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org>
2019-10-26  arm64: KVM: Disable EL1 PTW when invalidating S2 TLBs  (Marc Zyngier)
When erratum 1319367 is being worked around, special care must be taken not to allow the page table walker to populate TLBs while we have the stage-2 translation enabled (which would otherwise result in a bizarre mix of the host S1 and the guest S2). We enforce this by setting TCR_EL1.EPD{0,1} before restoring the S2 configuration, and clear the same bits after having disabled S2. Reviewed-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org>
2019-10-26  arm64: KVM: Reorder system register restoration and stage-2 activation  (Marc Zyngier)
In order to prepare for handling erratum 1319367, we need to make sure that all system registers (and most importantly the registers configuring the virtual memory) are set before we enable stage-2 translation. This results in a minor reorganisation of the load sequence, without any functional change. Reviewed-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org>
2019-10-25  arm64: compat: Workaround Neoverse-N1 #1542419 for compat user-space  (James Morse)
Compat user-space is unable to perform ICIMVAU instructions directly; instead it uses a compat syscall. Add the workaround for Neoverse-N1 #1542419 to this code path. Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-10-25  arm64: Fake the IminLine size on systems affected by Neoverse-N1 #1542419  (James Morse)
Systems affected by Neoverse-N1 #1542419 support DIC so do not need to perform icache maintenance once new instructions are cleaned to the PoU. For the errata workaround, the kernel hides DIC from user-space, so that the unnecessary cache maintenance can be trapped by firmware. To reduce the number of traps, produce a fake IminLine value based on PAGE_SIZE. Signed-off-by: James Morse <james.morse@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-10-25  arm64: errata: Hide CTR_EL0.DIC on systems affected by Neoverse-N1 #1542419  (James Morse)
Cores affected by Neoverse-N1 #1542419 could execute a stale instruction when a branch is updated to point to freshly generated instructions. To work around this issue we need user-space to issue unnecessary icache maintenance that we can trap. Start by hiding CTR_EL0.DIC. Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-10-25  arm64: psci: Reduce the waiting time for cpu_psci_cpu_kill()  (Yunfeng Ye)
In cases like suspend-to-disk and suspend-to-ram, a large number of CPU cores need to be shut down. At present, the CPU hotplug operation is serialised, and the CPU cores can only be shut down one by one. In this process, if PSCI affinity_info() does not return LEVEL_OFF quickly, cpu_psci_cpu_kill() needs to wait for 10ms. If hundreds of CPU cores need to be shut down, it will take a long time. Normally, there is no need to wait 10ms in cpu_psci_cpu_kill(). So change the wait interval from 10ms to a maximum of 1ms, and use usleep_range() instead of msleep() for a more accurate timer. In addition, since reducing the interval increases the message output, remove the "Retry ..." message and instead track the elapsed time and report it in the success message. Signed-off-by: Yunfeng Ye <yeyunfeng@huawei.com> Reviewed-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
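A hedged sketch of the polling loop described above (close to, but not necessarily identical to, the resulting cpu_psci_cpu_kill() body):

  unsigned long start = jiffies, end = start + msecs_to_jiffies(100);

  do {
          if (psci_ops.affinity_info(cpu_logical_map(cpu), 0) ==
              PSCI_0_2_AFFINITY_LEVEL_OFF) {
                  pr_info("CPU%d killed (polled %d ms)\n", cpu,
                          jiffies_to_msecs(jiffies - start));
                  return 0;
          }
          usleep_range(100, 1000);        /* sleep ~0.1-1ms instead of 10ms */
  } while (time_before(jiffies, end));

  return -ETIMEDOUT;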
2019-10-25  arm64: pgtable: Correct typo in comment  (Mark Brown)
vmmemmap -> vmemmap Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-10-25  arm64: cpufeature: Fix typos in comment  (Shaokun Zhang)
Fix up one typo: CTR_E0 -> CTR_EL0. Cc: Will Deacon <will@kernel.org> Acked-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>