2019-10-28  Merge branch 'for-next/entry-s-to-c' into for-next/core  (Catalin Marinas)

Move the synchronous exception paths from entry.S into a C file to improve the code readability.

* for-next/entry-s-to-c:
  arm64: entry-common: don't touch daif before bp-hardening
  arm64: Remove asmlinkage from updated functions
  arm64: entry: convert el0_sync to C
  arm64: entry: convert el1_sync to C
  arm64: add local_daif_inherit()
  arm64: Add prototypes for functions called by entry.S
  arm64: remove __exception annotations
2019-10-28  arm64: Make arm64_dma32_phys_limit static  (Catalin Marinas)

This variable is only used in the arch/arm64/mm/init.c file for ZONE_DMA32 initialisation; there is no need to expose it.

Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-10-28  Merge branch 'kvm-arm64/erratum-1319367' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms into for-next/core  (Catalin Marinas)

Similarly to erratum 1165522, which affects Cortex-A76, A57 and A72 respectively suffer from errata 1319537 and 1319367, potentially resulting in TLB corruption if the CPU speculates an AT instruction while switching guests.

The fix is slightly more involved since we don't have VHE to help us here, but the idea is the same: when switching a guest in, we must prevent any speculated AT from being able to parse the page tables until S2 is up and running. Only at this stage can we allow AT to take place.

For this, we always restore the guest sysregs first, except for its SCTLR and TCR registers, which must be set with SCTLR.M=1 and TCR.EPD{0,1} = {1, 1}, effectively disabling the PTW and TLB allocation. Once S2 is set up, we restore the guest's SCTLR and TCR. Similar things must be done on TLB invalidation...

* 'kvm-arm64/erratum-1319367' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms:
  arm64: Enable and document ARM errata 1319367 and 1319537
  arm64: KVM: Prevent speculative S1 PTW when restoring vcpu context
  arm64: KVM: Disable EL1 PTW when invalidating S2 TLBs
  arm64: KVM: Reorder system register restoration and stage-2 activation
  arm64: Add ARM64_WORKAROUND_1319367 for all A57 and A72 versions
2019-10-28  Merge branch 'for-next/neoverse-n1-stale-instr' into for-next/core  (Catalin Marinas)

Neoverse-N1 cores with the 'COHERENT_ICACHE' feature may fetch stale instructions when software depends on prefetch-speculation-protection instead of explicit synchronization. [0]

The workaround is to trap I-Cache maintenance and issue an inner-shareable TLBI. The affected cores have a coherent I-Cache, so the I-Cache maintenance isn't actually necessary; the core tells user-space it can skip it via CTR_EL0.DIC. We therefore also have to trap this register to hide the bit, forcing DIC-aware user-space to perform the maintenance.

To avoid trapping all cache maintenance, this workaround depends on a firmware component that traps only I-Cache maintenance from EL0 and performs the workaround.

For user-space, the kernel's work is to trap CTR_EL0 to hide DIC and produce a fake IminLine. EL3 traps the now-necessary I-Cache maintenance and performs the inner-shareable TLBI that makes everything better.

[0] https://developer.arm.com/docs/sden885747/latest/arm-neoverse-n1-mp050-software-developer-errata-notice

* for-next/neoverse-n1-stale-instr:
  arm64: Silence clang warning on mismatched value/register sizes
  arm64: compat: Workaround Neoverse-N1 #1542419 for compat user-space
  arm64: Fake the IminLine size on systems affected by Neoverse-N1 #1542419
  arm64: errata: Hide CTR_EL0.DIC on systems affected by Neoverse-N1 #1542419
2019-10-28  Documentation: Add documentation for CCN-512 DTS binding  (Marek Bykowski)

Indicate that the arm-ccn perf back-end now supports the CCN-512 interconnect.

Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Marek Bykowski <marek.bykowski@gmail.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-10-28  perf: arm-ccn: Enable stats for CCN-512 interconnect  (Marek Bykowski)

Add a compatible string for the ARM CCN-512 interconnect.

Acked-by: Pawel Moll <pawel.moll@arm.com>
Signed-off-by: Marek Bykowski <marek.bykowski@gmail.com>
Signed-off-by: Boleslaw Malecki <boleslaw.malecki@tieto.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-10-28  Merge remote-tracking branch 'arm64/for-next/fixes' into for-next/core  (Catalin Marinas)

This is required to solve the conflicts with subsequent merges of two more errata workaround branches.

* arm64/for-next/fixes:
  arm64: tags: Preserve tags for addresses translated via TTBR1
  arm64: mm: fix inverted PAR_EL1.F check
  arm64: sysreg: fix incorrect definition of SYS_PAR_EL1_F
  arm64: entry.S: Do not preempt from IRQ before all cpufeatures are enabled
  arm64: hibernate: check pgd table allocation
  arm64: cpufeature: Treat ID_AA64ZFR0_EL1 as RAZ when SVE is not enabled
  arm64: Fix kcore macros after 52-bit virtual addressing fallout
  arm64: Allow CAVIUM_TX2_ERRATUM_219 to be selected
  arm64: Avoid Cavium TX2 erratum 219 when switching TTBR
  arm64: Enable workaround for Cavium TX2 erratum 219 when running SMT
  arm64: KVM: Trap VM ops when ARM64_WORKAROUND_CAVIUM_TX2_219_TVM is set
2019-10-28  arm64: entry-common: don't touch daif before bp-hardening  (James Morse)

The previous patches mechanically transformed the assembly version of entry.S to entry-common.c for synchronous exceptions.

The C version of local_daif_restore() doesn't quite do the same thing as the assembly versions if pseudo-NMI is in use. In particular,

  local_daif_restore(DAIF_PROCCTX_NOIRQ)

will still allow pNMI to be delivered. This is not the behaviour do_el0_ia_bp_hardening() and do_sp_pc_abort() want, as it should not be possible for the PMU handler to run as an NMI until the bp-hardening sequence has run.

The bp-hardening calls were placed where they are because this was the first C code to run after the relevant exceptions. As we've now moved that point earlier, move the checks and calls earlier too.

This makes it clearer that this stuff runs before any kind of exception, and saves modifying PSTATE twice.

Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
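A rough sketch of the reordering in C (simplified, not the verbatim kernel source; is_ttbr0_addr() and arm64_apply_bp_hardening() are the helpers the text refers to): the bp-hardening callback runs before local_daif_restore() touches PSTATE, so a pNMI cannot land first.

  static void notrace el0_ia(struct pt_regs *regs, unsigned long esr)
  {
          unsigned long far = read_sysreg(far_el1);

          /*
           * We've taken an instruction abort from userspace and IRQs
           * are still masked. If the address is a kernel address,
           * apply BP hardening before unmasking anything.
           */
          if (!is_ttbr0_addr(far))
                  arm64_apply_bp_hardening();

          local_daif_restore(DAIF_PROCCTX);
          do_mem_abort(far, esr, regs);
  }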
2019-10-28  arm64: Remove asmlinkage from updated functions  (James Morse)

Now that the callers of these functions have moved into C, they no longer need the asmlinkage annotation. Remove it.

Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-10-28  arm64: entry: convert el0_sync to C  (Mark Rutland)

This is largely a 1-1 conversion of asm to C, with a couple of caveats.

The el0_sync{_compat} switches explicitly handle all the EL0 debug cases, so el0_dbg doesn't have to try to bail out for unexpected EL1 debug ESR values. This also means that an unexpected vector catch from AArch32 is routed to el0_inv.

We *could* merge the native and compat switches, which would make the diffstat negative, but I've tried to stay as close to the existing assembly as possible for the moment.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[split out of a bigger series, added nokprobes. removed irq trace calls as the C helpers do this. renamed el0_dbg's use of FAR]
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-10-28  arm64: entry: convert el1_sync to C  (Mark Rutland)

This patch converts the EL1 sync entry assembly logic to C code. Doing this will allow us to make changes in a slightly more readable way. A case in point is supporting kernel-first RAS. do_sea() should be called on the CPU that took the fault.

Largely the assembly code is converted to C in a relatively straightforward manner. Since all sync sites share a common asm entry point, the ASM_BUG() instances are no longer required for effective backtraces back to assembly, and we don't need similar BUG() entries.

The ESR_ELx.EC codes for all (supported) debug exceptions are now checked in the el1_sync_handler's switch statement, which renders the check in el1_dbg redundant. This both simplifies the el1_dbg handler, and makes the EL1 exception handling more robust to currently-unallocated ESR_ELx.EC encodings.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[split out of a bigger series, added nokprobes, moved prototypes]
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
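The dispatch pattern described above, sketched (a simplified subset of the EC cases; the real handler in entry-common.c covers more):

  asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
  {
          unsigned long esr = read_sysreg(esr_el1);

          switch (ESR_ELx_EC(esr)) {
          case ESR_ELx_EC_DABT_CUR:       /* data abort from EL1 */
          case ESR_ELx_EC_IABT_CUR:       /* instruction abort from EL1 */
                  el1_abort(regs, esr);
                  break;
          case ESR_ELx_EC_BREAKPT_CUR:    /* all EL1 debug ECs are */
          case ESR_ELx_EC_SOFTSTP_CUR:    /* checked here, so el1_dbg */
          case ESR_ELx_EC_WATCHPT_CUR:    /* needs no EC check of its own */
          case ESR_ELx_EC_BRK64:
                  el1_dbg(regs, esr);
                  break;
          default:
                  el1_inv(regs, esr);     /* unallocated EC: report and die */
          }
  }
  NOKPROBE_SYMBOL(el1_sync_handler);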
2019-10-28  arm64: add local_daif_inherit()  (Mark Rutland)

Some synchronous exceptions can be taken from a number of contexts, e.g. where IRQs may or may not be masked. In the entry assembly for these exceptions, we use the inherit_daif assembly macro to ensure that we only mask those exceptions which were masked when the exception was taken.

So that we can do the same from C code, this patch adds a new local_daif_inherit() function, following the existing local_daif_*() naming scheme.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[moved away from local_daif_restore()]
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
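A sketch of what such a helper boils down to (simplified; DAIF_MASK covers the D/A/I/F bits of the saved PSTATE):

  static inline void local_daif_inherit(struct pt_regs *regs)
  {
          unsigned long flags = regs->pstate & DAIF_MASK;

          /* Restore only the exception masks the interrupted context had. */
          write_sysreg(flags, daif);
  }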
2019-10-28  arm64: Add prototypes for functions called by entry.S  (James Morse)

Functions that are only called by assembly don't always have a C header file prototype. Add the prototypes before moving the assembly callers to C.

Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-10-28  arm64: remove __exception annotations  (James Morse)

Since commit 732674980139 ("arm64: unwind: reference pt_regs via embedded stack frame") arm64 has not used the __exception annotation to dump the pt_regs during stack tracing, and in_exception_text() has no callers.

This annotation is now only used to blacklist kprobes; it means the same as __kprobes. Section annotations like this require the functions to be grouped together between the start/end markers, and placed according to the linker script. For kprobes we also have NOKPROBE_SYMBOL(), which logs the symbol address in a section that kprobes parses and blacklists at boot.

Using NOKPROBE_SYMBOL() instead lets kprobes publish the list of blacklisted symbols, and saves us from having an arm64-specific spelling of __kprobes. do_debug_exception() already has a NOKPROBE_SYMBOL() annotation.

Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
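The replacement pattern, for illustration (do_example_exception() is a hypothetical function, not one added by the patch):

  #include <linux/kprobes.h>

  void do_example_exception(struct pt_regs *regs)
  {
          /* ... handle the exception ... */
  }
  /*
   * Record the symbol in the kprobe blacklist instead of tagging the
   * function __exception/__kprobes and relying on section placement.
   */
  NOKPROBE_SYMBOL(do_example_exception);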
2019-10-28  arm64: Silence clang warning on mismatched value/register sizes  (Catalin Marinas)

Clang reports a warning on the __tlbi(aside1is, 0) macro expansion since the value size does not match the register size specified in the inline asm. Construct the ASID value using the __TLBI_VADDR() macro.

Fixes: 222fc0c8503d ("arm64: compat: Workaround Neoverse-N1 #1542419 for compat user-space")
Reported-by: Nathan Chancellor <natechancellor@gmail.com>
Cc: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
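The shape of the fix, sketched: build the operand with __TLBI_VADDR() so the (zero) ASID and address are packed into a 64-bit value of the width the inline asm expects:

  __tlbi(aside1is, __TLBI_VADDR(0, 0));   /* was: __tlbi(aside1is, 0) */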
2019-10-26  arm64: Enable and document ARM errata 1319367 and 1319537  (Marc Zyngier)

Now that everything is in place, let's get the ball rolling by allowing the corresponding config option to be selected. Also add the required information to silicon_errata.rst.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
2019-10-26  arm64: KVM: Prevent speculative S1 PTW when restoring vcpu context  (Marc Zyngier)

When handling erratum 1319367, we must ensure that the page table walker cannot parse the S1 page tables while the guest is in an inconsistent state. This is done as follows:

On guest entry:
- TCR_EL1.EPD{0,1} are set, ensuring that no PTW can occur
- all system registers are restored, except for TCR_EL1 and SCTLR_EL1
- stage-2 is restored
- SCTLR_EL1 and TCR_EL1 are restored

On guest exit:
- SCTLR_EL1.M and TCR_EL1.EPD{0,1} are set, ensuring that no PTW can occur
- stage-2 is disabled
- all host system registers are restored

Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
2019-10-26  arm64: KVM: Disable EL1 PTW when invalidating S2 TLBs  (Marc Zyngier)

When erratum 1319367 is being worked around, special care must be taken not to allow the page table walker to populate TLBs while we have the stage-2 translation enabled (which would otherwise result in a bizarre mix of the host S1 and the guest S2).

We enforce this by setting TCR_EL1.EPD{0,1} before restoring the S2 configuration, and clearing the same bits after having disabled S2.

Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
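A sketch of the entry-side sequence under stated assumptions (the accessors, TCR_EPD{0,1}_MASK and the saved-context struct cxt are taken to match the KVM hyp code of that era; treat the exact names as assumptions):

  if (cpus_have_const_cap(ARM64_WORKAROUND_1319367)) {
          u64 val = read_sysreg_el1(SYS_TCR);

          cxt->tcr = val;                         /* saved for the exit path */
          val |= TCR_EPD1_MASK | TCR_EPD0_MASK;   /* block host S1 walks */
          write_sysreg_el1(val, SYS_TCR);
          isb();
  }

  __load_guest_stage2(kvm);                       /* now safe to enable S2 */
  isb();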
2019-10-26  arm64: KVM: Reorder system register restoration and stage-2 activation  (Marc Zyngier)

In order to prepare for handling erratum 1319367, we need to make sure that all system registers (and most importantly the registers configuring the virtual memory) are set before we enable stage-2 translation.

This results in a minor reorganisation of the load sequence, without any functional change.

Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
2019-10-25  arm64: compat: Workaround Neoverse-N1 #1542419 for compat user-space  (James Morse)

Compat user-space is unable to perform ICIMVAU instructions directly; instead it uses a compat syscall. Add the workaround for Neoverse-N1 #1542419 to this code path.

Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-10-25  arm64: Fake the IminLine size on systems affected by Neoverse-N1 #1542419  (James Morse)

Systems affected by Neoverse-N1 #1542419 support DIC, so they do not need to perform icache maintenance once new instructions are cleaned to the PoU. For the errata workaround, the kernel hides DIC from user-space, so that the unnecessary cache maintenance can be trapped by firmware.

To reduce the number of traps, produce a fake IminLine value based on PAGE_SIZE.

Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
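A sketch of the trapped CTR_EL0 read with both parts of the workaround applied (field names assumed from asm/cache.h; IminLine is encoded as log2 of the line size in words, hence PAGE_SHIFT - 2):

  u64 val = read_sanitised_ftr_reg(SYS_CTR_EL0);

  if (cpus_have_const_cap(ARM64_WORKAROUND_1542419)) {
          /* Hide DIC so user-space issues the (trappable) maintenance... */
          val &= ~BIT(CTR_DIC_SHIFT);
          /* ... and fake IminLine to reduce the number of traps. */
          val &= ~CTR_IMINLINE_MASK;
          val |= (PAGE_SHIFT - 2) & CTR_IMINLINE_MASK;
  }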
2019-10-25  arm64: errata: Hide CTR_EL0.DIC on systems affected by Neoverse-N1 #1542419  (James Morse)

Cores affected by Neoverse-N1 #1542419 could execute a stale instruction when a branch is updated to point to freshly generated instructions.

To work around this issue we need user-space to issue unnecessary icache maintenance that we can trap. Start by hiding CTR_EL0.DIC.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-10-25  arm64: psci: Reduce the waiting time for cpu_psci_cpu_kill()  (Yunfeng Ye)

In cases like suspend-to-disk and suspend-to-ram, a large number of CPU cores need to be shut down. At present, the CPU hotplug operation is serialised, and the CPU cores can only be shut down one by one. In this process, if PSCI affinity_info() does not return LEVEL_OFF quickly, cpu_psci_cpu_kill() needs to wait for 10ms. If hundreds of CPU cores need to be shut down, it will take a long time.

Normally, there is no need to wait 10ms in cpu_psci_cpu_kill(). So change the wait interval from 10ms to at most 1ms, and use usleep_range() instead of msleep() for a more accurate timer.

In addition, reducing the polling interval increases the number of messages output, so remove the "Retry ..." message; instead, track the elapsed time and report it in the success message.

Signed-off-by: Yunfeng Ye <yeyunfeng@huawei.com>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
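A sketch of the tightened loop (the loop bound and message wording here are illustrative, not the exact patch):

  unsigned long start = jiffies, end = start + msecs_to_jiffies(100);

  do {
          if (psci_ops.affinity_info(cpu_logical_map(cpu), 0) ==
                          PSCI_0_2_AFFINITY_LEVEL_OFF) {
                  pr_info("CPU%d killed (polled %u ms)\n", cpu,
                          jiffies_to_msecs(jiffies - start));
                  return 0;
          }
          /* Sleep ~100us-1ms between polls instead of a fixed 10ms. */
          usleep_range(100, 1000);
  } while (time_before(jiffies, end));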
2019-10-25  arm64: pgtable: Correct typo in comment  (Mark Brown)

vmmemmap -> vmemmap

Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-10-25  arm64: docs: cpu-feature-registers: Document ID_AA64PFR1_EL1  (Dave Martin)

Commit d71be2b6c0e1 ("arm64: cpufeature: Detect SSBS and advertise to userspace") exposes ID_AA64PFR1_EL1 to userspace, but didn't update the documentation to match. Add it.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-10-25  arm64: cpufeature: Fix typos in comment  (Shaokun Zhang)

Fix up one typo: CTR_E0 -> CTR_EL0.

Cc: Will Deacon <will@kernel.org>
Acked-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-10-18  arm64: Add ARM64_WORKAROUND_1319367 for all A57 and A72 versions  (Marc Zyngier)

Rework the EL2 vector hardening that is only selected for A57 and A72 so that the table can also be used for ARM64_WORKAROUND_1319367.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
2019-10-18  mm: fix double page fault on arm64 if PTE_AF is cleared  (Jia He)

When we tested pmdk unit test [1] vmmalloc_fork TEST3 on an arm64 guest, we hit a double page fault in __copy_from_user_inatomic() of cow_user_page().

To reproduce the bug, the cmd is as follows after you have deployed everything:

  make -C src/test/vmmalloc_fork/ TEST_TIME=60m check

Below call trace is from arm64 do_page_fault for debugging purposes:

  [  110.016195] Call trace:
  [  110.016826]  do_page_fault+0x5a4/0x690
  [  110.017812]  do_mem_abort+0x50/0xb0
  [  110.018726]  el1_da+0x20/0xc4
  [  110.019492]  __arch_copy_from_user+0x180/0x280
  [  110.020646]  do_wp_page+0xb0/0x860
  [  110.021517]  __handle_mm_fault+0x994/0x1338
  [  110.022606]  handle_mm_fault+0xe8/0x180
  [  110.023584]  do_page_fault+0x240/0x690
  [  110.024535]  do_mem_abort+0x50/0xb0
  [  110.025423]  el0_da+0x20/0x24

The pte info before __copy_from_user_inatomic() is (PTE_AF is cleared):

  [ffff9b007000] pgd=000000023d4f8003, pud=000000023da9b003, pmd=000000023d4b3003, pte=360000298607bd3

As told by Catalin: "On arm64 without hardware Access Flag, copying from user will fail because the pte is old and cannot be marked young. So we always end up with a zeroed page after fork() + CoW for pfn mappings. We don't always have a hardware-managed access flag on arm64."

This patch fixes it by calling pte_mkyoung(). Also, the parameters are changed so that vmf is passed to cow_user_page().

Add a WARN_ON_ONCE() for when __copy_from_user_inatomic() returns an error, in case there is some obscure use-case (suggested by Kirill).

[1] https://github.com/pmem/pmdk/tree/master/src/test/vmmalloc_fork

Signed-off-by: Jia He <justin.he@arm.com>
Reported-by: Yibo Cai <Yibo.Cai@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
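A simplified sketch of the fix in cow_user_page() (locking and the retry path are elided):

  if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
          pte_t entry = pte_mkyoung(vmf->orig_pte);

          /* Mark the pte young so the kernel copy cannot re-fault. */
          if (ptep_set_access_flags(vmf->vma, addr, vmf->pte, entry, 0))
                  update_mmu_cache(vmf->vma, addr, vmf->pte);
  }

  if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE)) {
          /* Obscure case: warn once and hand back a zeroed page. */
          WARN_ON_ONCE(1);
          clear_page(kaddr);
  }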
2019-10-18  x86/mm: implement arch_faults_on_old_pte() stub on x86  (Jia He)

arch_faults_on_old_pte() is a helper to indicate that accessing an old pte might cause a page fault. On x86, the hardware sets the pte access flag itself, so implement an overriding stub which always returns false.

Signed-off-by: Jia He <justin.he@arm.com>
Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
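The stub is tiny; a sketch of what it looks like:

  #define arch_faults_on_old_pte arch_faults_on_old_pte
  static inline bool arch_faults_on_old_pte(void)
  {
          /* x86 hardware sets the accessed bit itself: never faults. */
          return false;
  }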
2019-10-18  arm64: mm: implement arch_faults_on_old_pte() on arm64  (Jia He)

On arm64 without hardware Access Flag, copying from user will fail because the pte is old and cannot be marked young. So we always end up with a zeroed page after fork() + CoW for pfn mappings. We don't always have a hardware-managed Access Flag on arm64.

Hence implement arch_faults_on_old_pte() on arm64 to indicate that accessing an old pte might cause a page fault.

Signed-off-by: Jia He <justin.he@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
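A sketch of the arm64 counterpart (the preemption check reflects that the answer is per-CPU, so the caller must not migrate between the check and the access):

  static inline bool arch_faults_on_old_pte(void)
  {
          WARN_ON(preemptible());

          /* Without hardware AF, an access to an old pte faults. */
          return !cpu_has_hw_af();
  }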
2019-10-18  arm64: cpufeature: introduce helper cpu_has_hw_af()  (Jia He)

We unconditionally set the HW_AFDBM capability and only enable it on CPUs which really have the feature. But sometimes we need to know whether this cpu actually has the capability of hardware AF. So decouple AF from DBM with a new helper, cpu_has_hw_af().

If we later notice a potential performance issue on this path, we can turn it into a static key, as with other CPU features.

Signed-off-by: Jia He <justin.he@arm.com>
Suggested-by: Suzuki Poulose <Suzuki.Poulose@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
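A sketch of the helper: read the HADBS field of ID_AA64MMFR1_EL1 directly rather than consulting the (unconditionally set) HW_AFDBM capability:

  static inline bool cpu_has_hw_af(void)
  {
          u64 mmfr1;

          if (!IS_ENABLED(CONFIG_ARM64_HW_AFDBM))
                  return false;

          mmfr1 = read_cpuid(ID_AA64MMFR1_EL1);
          return cpuid_feature_extract_unsigned_field(mmfr1,
                                          ID_AA64MMFR1_HADBS_SHIFT);
  }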
2019-10-17  Merge branch 'errata/tx2-219' into for-next/fixes  (Will Deacon)

Workaround for Cavium/Marvell ThunderX2 erratum #219.

* errata/tx2-219:
  arm64: Allow CAVIUM_TX2_ERRATUM_219 to be selected
  arm64: Avoid Cavium TX2 erratum 219 when switching TTBR
  arm64: Enable workaround for Cavium TX2 erratum 219 when running SMT
  arm64: KVM: Trap VM ops when ARM64_WORKAROUND_CAVIUM_TX2_219_TVM is set
2019-10-16  arm64: tags: Preserve tags for addresses translated via TTBR1  (Will Deacon)

Sign-extending TTBR1 addresses when converting to an untagged address breaks the documented POSIX semantics for mlock() in some obscure error cases where we end up returning -EINVAL instead of -ENOMEM as a direct result of rewriting the upper address bits.

Rework the untagged_addr() macro to preserve the upper address bits for TTBR1 addresses and only clear the tag bits for user addresses. This matches the behaviour of the 'clear_address_tag' assembly macro, so rename that and align the implementations at the same time so that they use the same instruction sequences for the tag manipulation.

Link: https://lore.kernel.org/stable/20191014162651.GF19200@arrakis.emea.arm.com/
Reported-by: Jan Stancek <jstancek@redhat.com>
Tested-by: Jan Stancek <jstancek@redhat.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
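The crux of the rework, sketched: AND the address with its own bit-55 sign extension, which clears bits 63:56 for TTBR0 (user) addresses and leaves them untouched for TTBR1 (kernel) addresses:

  #define untagged_addr(addr)     ({                              \
          u64 __addr = (__force u64)(addr);                       \
          __addr &= (u64)sign_extend64(__addr, 55);               \
          (__force __typeof__(addr))__addr;                       \
  })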
2019-10-16  arm64: mm: fix inverted PAR_EL1.F check  (Mark Rutland)

When detecting a spurious EL1 translation fault, we have the CPU retry the translation using an AT S1E1R instruction, and inspect PAR_EL1 to determine if the fault was spurious.

When PAR_EL1.F == 0, the AT instruction successfully translated the address without a fault, which implies the original fault was spurious. However, in this case we return false and treat the original fault as if it was not spurious. Invert the return value so that we treat such a case as spurious.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Fixes: 42f91093b043 ("arm64: mm: Ignore spurious translation faults taken from the kernel")
Tested-by: James Morse <james.morse@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
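The corrected logic, sketched: PAR_EL1.F == 0 after the AT retry means the walk now succeeds, so the original fault was spurious and the check should return true:

  u64 par;

  asm volatile("at s1e1r, %0" :: "r" (addr));
  isb();
  par = read_sysreg(par_el1);

  if (!(par & SYS_PAR_EL1_F))
          return true;    /* translated cleanly: original fault spurious */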
2019-10-16  arm64: sysreg: fix incorrect definition of SYS_PAR_EL1_F  (Yang Yingliang)

The 'F' field of the PAR_EL1 register lives in bit 0, not bit 1. Fix the broken definition in 'sysreg.h'.

Fixes: e8620cff9994 ("arm64: sysreg: Add some field definitions for PAR_EL1")
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-10-16  arm64: entry.S: Do not preempt from IRQ before all cpufeatures are enabled  (Julien Thierry)

Preempting from IRQ-return means that the task has its PSTATE saved on the stack, which will get restored when the task is resumed and does the actual IRQ return.

However, enabling some CPU features requires modifying the PSTATE. This means that, if a task was scheduled out during an IRQ-return before all CPU features are enabled, the task might restore a PSTATE that does not include the feature enablement changes once scheduled back in.

* Task 1:

  PAN == 0 ---|                          |---------------
              |                          |<- return from IRQ, PSTATE.PAN = 0
              | <- IRQ                   |
              +--------+ <- preempt()  +--
                       ^
                       |
                       reschedule Task 1, PSTATE.PAN == 1

* Init:
        --------------------+------------------------
                            ^
                            |
                            enable_cpu_features
                            set PSTATE.PAN on all CPUs

Worse than this, since PSTATE is untouched when task switching is done, a task missing the new bits in PSTATE might affect another task, if both do direct calls to schedule() (outside of IRQ/exception contexts).

Fix this by preventing preemption on IRQ-return until features are enabled on all CPUs. This way the only PSTATE values that are saved on the stack are from synchronous exceptions. These are expected to be fatal this early; the exception is BRK for WARN_ON(), but as this uses do_debug_exception(), which keeps IRQs masked, it shouldn't call schedule().

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
[james: Replaced a really cool hack, with an even simpler static key in C. expanded commit message with Julien's cover-letter ascii art]
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
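A sketch of the C-side check (the static key name is the finalised-cpufeatures key of that era; treat it as an assumption):

  asmlinkage void __sched arm64_preempt_schedule_irq(void)
  {
          lockdep_assert_irqs_disabled();

          /*
           * Preempting here would leave a stale copy of PSTATE on the
           * stack; only allow it once cpufeatures are enabled on all
           * CPUs and no further PSTATE changes are coming.
           */
          if (static_branch_likely(&arm64_const_caps_ready))
                  preempt_schedule_irq();
  }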
2019-10-16  arm64: mm: Fix unused variable warning in zone_sizes_init  (Nathan Chancellor)

When building arm64 allnoconfig, CONFIG_ZONE_DMA and CONFIG_ZONE_DMA32 get disabled, so there is a warning about max_dma being unused:

  ../arch/arm64/mm/init.c:215:16: warning: unused variable 'max_dma' [-Wunused-variable]
          unsigned long max_dma = min;
                        ^
  1 warning generated.

Add __maybe_unused to make this clear to the compiler.

Fixes: 1a8e1cef7603 ("arm64: use both ZONE_DMA and ZONE_DMA32")
Reviewed-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-10-16  arm64/mm: Poison initmem while freeing with free_reserved_area()  (Anshuman Khandual)

The platform implementation of free_initmem() should poison the memory while freeing it up. Hence pass POISON_FREE_INITMEM when calling into free_reserved_area(). The same is done in the generic fallback for free_initmem() and on some other platforms overriding it.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Steven Price <steven.price@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
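The gist of the change, sketched against arm64's free_initmem() (other details of the function are elided):

  void free_initmem(void)
  {
          /* Fill the freed init memory with a poison pattern, not 0. */
          free_reserved_area(lm_alias(__init_begin), lm_alias(__init_end),
                             POISON_FREE_INITMEM, "unused kernel");
  }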
2019-10-16  arm64: use generic free_initrd_mem()  (Mike Rapoport)

arm64 calls memblock_free() for the initrd area in its implementation of free_initrd_mem(), but this call has no actual effect that late in the boot process. By the time the initrd is freed, all the reserved memory is managed by the page allocator and memblock.reserved is unused, so the only purpose of the memblock_free() call is to keep track of initrd memory for debugging and accounting.

Without the memblock_free() call the only difference between the arm64 and the generic versions of free_initrd_mem() is the memory poisoning.

Move the memblock_free() call to the generic code, enable it there for the architectures that define ARCH_KEEP_MEMBLOCK, and use the generic implementation of free_initrd_mem() on arm64.

Tested-by: Anshuman Khandual <anshuman.khandual@arm.com> #arm64
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-10-15  arm64: Document ICC_CTLR_EL3.PMHE setting requirements  (Marc Zyngier)

It goes without saying, but better saying it: the kernel expects ICC_CTLR_EL3.PMHE to have the same value across all CPUs, and for that setting not to change during the lifetime of the kernel.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-10-15  arm64: Relax ICC_PMR_EL1 accesses when ICC_CTLR_EL1.PMHE is clear  (Marc Zyngier)

The GICv3 architecture specification is incredibly misleading when it comes to PMR and the requirement for a DSB. It turns out that this DSB is only required if the CPU interface sends an Upstream Control message to the redistributor in order to update the RD's view of PMR.

This message is only sent when ICC_CTLR_EL1.PMHE is set, which isn't the case in Linux. It can still be set from EL3, so some special care is required. But the upshot is that in the (hopefully large) majority of the cases, we can drop the DSB altogether.

This relies on a new static key being set if the boot CPU has PMHE set. The drawback is that this static key has to be exported to modules.

Cc: Will Deacon <will@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
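A sketch of the relaxation: gate the DSB behind a static key that is only enabled when the boot CPU has PMHE set:

  DECLARE_STATIC_KEY_FALSE(gic_pmr_sync);

  static inline void pmr_sync(void)
  {
          /* Only needed when PMHE is set and the RD must see the PMR. */
          if (static_branch_unlikely(&gic_pmr_sync))
                  dsb(sy);
  }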
2019-10-14  arm64: hibernate: check pgd table allocation  (Pavel Tatashin)

There is a bug in create_safe_exec_page(): when the page table is allocated, it is not checked that the allocation succeeded, yet the result is dereferenced in pgd_none(READ_ONCE(*pgdp)). Check that the allocation was successful.

Fixes: 82869ac57b5d ("arm64: kernel: Add support for hibernate/suspend-to-disk")
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Signed-off-by: Will Deacon <will@kernel.org>
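The shape of the fix, sketched (variable names assumed):

  pgd_t *trans_pgd = (void *)get_safe_page(GFP_ATOMIC);

  if (!trans_pgd) {
          rc = -ENOMEM;
          goto out;
  }

  pgdp = pgd_offset_raw(trans_pgd, dst_addr);
  if (pgd_none(READ_ONCE(*pgdp))) {
          /* ... populate the next level, as before ... */
  }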
2019-10-14  arm64: cpufeature: Treat ID_AA64ZFR0_EL1 as RAZ when SVE is not enabled  (Julien Grall)

If CONFIG_ARM64_SVE=n then we fail to report ID_AA64ZFR0_EL1 as 0 when read by userspace, despite being required by the architecture. Although this is theoretically a change in ABI, userspace will first check for the presence of SVE via the HWCAP or the ID_AA64PFR0_EL1.SVE field before probing the ID_AA64ZFR0_EL1 register. Given that these are reported correctly for this configuration, we can safely tighten up the current behaviour.

Ensure ID_AA64ZFR0_EL1 is treated as RAZ when CONFIG_ARM64_SVE=n.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Fixes: 06a916feca2b ("arm64: Expose SVE2 features for userspace")
Signed-off-by: Will Deacon <will@kernel.org>
2019-10-14  mm: refresh ZONE_DMA and ZONE_DMA32 comments in 'enum zone_type'  (Nicolas Saenz Julienne)

These zones' usage has evolved over time and the comments were outdated. This joins the ZONE_DMA and ZONE_DMA32 explanations and gives up-to-date examples of how they are used on different architectures.

Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-10-14  arm64: use both ZONE_DMA and ZONE_DMA32  (Nicolas Saenz Julienne)

So far all arm64 devices have supported 32-bit DMA masks for their peripherals. This is not true anymore for the Raspberry Pi 4, as most of its peripherals can only address the first GB of memory out of a total of up to 4 GB. This goes against ZONE_DMA32's intent, as ZONE_DMA32 is expected to be addressable with a 32-bit mask. So it was decided to re-introduce ZONE_DMA in arm64.

ZONE_DMA will contain the lower 1 GB of memory, which is currently the memory area addressable by any peripheral on an arm64 device. ZONE_DMA32 will contain the rest of the 32-bit addressable memory.

Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-10-14  arm64: rename variables used to calculate ZONE_DMA32's size  (Nicolas Saenz Julienne)

Let the names indicate that they are used to calculate ZONE_DMA32's size, as opposed to ZONE_DMA.

Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-10-14  arm64: mm: use arm64_dma_phys_limit instead of calling max_zone_dma_phys()  (Nicolas Saenz Julienne)

By the time we call zone_sizes_init(), arm64_dma_phys_limit already contains the result of max_zone_dma_phys(). Use the variable instead of calling the function directly to save some precious cpu time.

Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
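A sketch of zone_sizes_init() once this series is applied: both DMA zone limits come from precomputed variables rather than repeated max_zone_dma_phys() calls:

  static void __init zone_sizes_init(unsigned long min, unsigned long max)
  {
          unsigned long max_zone_pfns[MAX_NR_ZONES] = { 0 };

  #ifdef CONFIG_ZONE_DMA
          max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
  #endif
  #ifdef CONFIG_ZONE_DMA32
          max_zone_pfns[ZONE_DMA32] = PFN_DOWN(arm64_dma32_phys_limit);
  #endif
          max_zone_pfns[ZONE_NORMAL] = max;

          free_area_init_nodes(max_zone_pfns);
  }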
2019-10-14  firmware: arm_sdei: use common SMCCC_CONDUIT_*  (Mark Rutland)

Now that we have common definitions for SMCCC conduits, move the SDEI code over to them, and remove the SDEI-specific definitions.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: James Morse <james.morse@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-10-14  firmware/psci: use common SMCCC_CONDUIT_*  (Mark Rutland)

Now that we have common SMCCC_CONDUIT_* definitions, migrate the PSCI code over to them, and kill off the old PSCI_CONDUIT_* definitions.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-10-14  arm: spectre-v2: use arm_smccc_1_1_get_conduit()  (Mark Rutland)

Now that we have arm_smccc_1_1_get_conduit(), we can hide the PSCI implementation details from the arm spectre-v2 code, so let's do so.

As arm_smccc_1_1_get_conduit() implicitly checks that the SMCCC version is at least SMCCC_VERSION_1_1, we no longer need to check this explicitly where switch statements have a default case.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
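The call-site pattern after the change, sketched (using the Spectre-v2 workaround probe as the example):

  struct arm_smccc_res res;

  switch (arm_smccc_1_1_get_conduit()) {
  case SMCCC_CONDUIT_HVC:
          arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
                            ARM_SMCCC_ARCH_WORKAROUND_1, &res);
          break;
  case SMCCC_CONDUIT_SMC:
          arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
                            ARM_SMCCC_ARCH_WORKAROUND_1, &res);
          break;
  default:
          return -ENXIO;  /* SMCCC_CONDUIT_NONE: nothing to probe */
  }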