path: root/arch/arm64/include/asm/assembler.h
2021-05-01  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds)

Pull kvm updates from Paolo Bonzini:
"This is a large update by KVM standards, including AMD PSP (Platform Security Processor, aka "AMD Secure Technology") and ARM CoreSight (debug and trace) changes.

ARM:
 - CoreSight: Add support for ETE and TRBE
 - Stage-2 isolation for the host kernel when running in protected mode
 - Guest SVE support when running in nVHE mode
 - Force W^X hypervisor mappings in nVHE mode
 - ITS save/restore for guests using direct injection with GICv4.1
 - nVHE panics now produce readable backtraces
 - Guest support for PTP using the ptp_kvm driver
 - Performance improvements in the S2 fault handler

x86:
 - AMD PSP driver changes
 - Optimizations and cleanup of nested SVM code
 - AMD: Support for virtual SPEC_CTRL
 - Optimizations of the new MMU code: fast invalidation, zap under read lock, enable/disable dirty page logging under read lock
 - /dev/kvm API for AMD SEV live migration (guest API coming soon)
 - support SEV virtual machines sharing the same encryption context
 - support SGX in virtual machines
 - add a few more statistics
 - improved directed yield heuristics
 - Lots and lots of cleanups

Generic:
 - Rework of MMU notifier interface, simplifying and optimizing the architecture-specific code
 - a handful of "Get rid of oprofile leftovers" patches
 - Some selftests improvements"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (379 commits)
  KVM: selftests: Speed up set_memory_region_test
  selftests: kvm: Fix the check of return value
  KVM: x86: Take advantage of kvm_arch_dy_has_pending_interrupt()
  KVM: SVM: Skip SEV cache flush if no ASIDs have been used
  KVM: SVM: Remove an unnecessary prototype declaration of sev_flush_asids()
  KVM: SVM: Drop redundant svm_sev_enabled() helper
  KVM: SVM: Move SEV VMCB tracking allocation to sev.c
  KVM: SVM: Explicitly check max SEV ASID during sev_hardware_setup()
  KVM: SVM: Unconditionally invoke sev_hardware_teardown()
  KVM: SVM: Enable SEV/SEV-ES functionality by default (when supported)
  KVM: SVM: Condition sev_enabled and sev_es_enabled on CONFIG_KVM_AMD_SEV=y
  KVM: SVM: Append "_enabled" to module-scoped SEV/SEV-ES control variables
  KVM: SEV: Mask CPUID[0x8000001F].eax according to supported features
  KVM: SVM: Move SEV module params/variables to sev.c
  KVM: SVM: Disable SEV/SEV-ES if NPT is disabled
  KVM: SVM: Free sev_asid_bitmap during init if SEV setup fails
  KVM: SVM: Zero out the VMCB array used to track SEV ASID association
  x86/sev: Drop redundant and potentially misleading 'sev_enabled'
  KVM: x86: Move reverse CPUID helpers to separate header file
  KVM: x86: Rename GPR accessors to make mode-aware variants the defaults
  ...
2021-04-15  Merge branches 'for-next/misc', 'for-next/kselftest', 'for-next/xntable', 'for-next/vdso', 'for-next/fiq', 'for-next/epan', 'for-next/kasan-vmalloc', 'for-next/fgt-boot-init', 'for-next/vhe-only' and 'for-next/neon-softirqs-disabled', remote-tracking branch 'arm64/for-next/perf' into for-next/core  (Catalin Marinas)

* for-next/misc:
  : Miscellaneous patches
  arm64/sve: Add compile time checks for SVE hooks in generic functions
  arm64/kernel/probes: Use BUG_ON instead of if condition followed by BUG.
  arm64/sve: Remove redundant system_supports_sve() tests
  arm64: mte: Remove unused mte_assign_mem_tag_range()
  arm64: Add __init section marker to some functions
  arm64/sve: Rework SVE access trap to convert state in registers
  docs: arm64: Fix a grammar error
  arm64: smp: Add missing prototype for some smp.c functions
  arm64: setup: name `tcr` register
  arm64: setup: name `mair` register
  arm64: stacktrace: Move start_backtrace() out of the header
  arm64: barrier: Remove spec_bar() macro
  arm64: entry: remove test_irqs_unmasked macro
  ARM64: enable GENERIC_FIND_FIRST_BIT
  arm64: defconfig: Use DEBUG_INFO_REDUCED

* for-next/kselftest:
  : Various kselftests for arm64
  kselftest: arm64: Add BTI tests
  kselftest/arm64: mte: Report filename on failing temp file creation
  kselftest/arm64: mte: Fix clang warning
  kselftest/arm64: mte: Makefile: Fix clang compilation
  kselftest/arm64: mte: Output warning about failing compiler
  kselftest/arm64: mte: Use cross-compiler if specified
  kselftest/arm64: mte: Fix MTE feature detection
  kselftest/arm64: mte: common: Fix write() warnings
  kselftest/arm64: mte: user_mem: Fix write() warning
  kselftest/arm64: mte: ksm_options: Fix fscanf warning
  kselftest/arm64: mte: Fix pthread linking
  kselftest/arm64: mte: Fix compilation with native compiler

* for-next/xntable:
  : Add hierarchical XN permissions for all page tables
  arm64: mm: use XN table mapping attributes for user/kernel mappings
  arm64: mm: use XN table mapping attributes for the linear region
  arm64: mm: add missing P4D definitions and use them consistently

* for-next/vdso:
  : Minor improvements to the compat vdso and sigpage
  arm64: compat: Poison the compat sigpage
  arm64: vdso: Avoid ISB after reading from cntvct_el0
  arm64: compat: Allow signal page to be remapped
  arm64: vdso: Remove redundant calls to flush_dcache_page()
  arm64: vdso: Use GFP_KERNEL for allocating compat vdso and signal pages

* for-next/fiq:
  : Support arm64 FIQ controller registration
  arm64: irq: allow FIQs to be handled
  arm64: Always keep DAIF.[IF] in sync
  arm64: entry: factor irq triage logic into macros
  arm64: irq: rework root IRQ handler registration
  arm64: don't use GENERIC_IRQ_MULTI_HANDLER
  genirq: Allow architectures to override set_handle_irq() fallback

* for-next/epan:
  : Support for Enhanced PAN (execute-only permissions)
  arm64: Support execute-only permissions with Enhanced PAN

* for-next/kasan-vmalloc:
  : Support CONFIG_KASAN_VMALLOC on arm64
  arm64: Kconfig: select KASAN_VMALLOC if KANSAN_GENERIC is enabled
  arm64: kaslr: support randomized module area with KASAN_VMALLOC
  arm64: Kconfig: support CONFIG_KASAN_VMALLOC
  arm64: kasan: abstract _text and _end to KERNEL_START/END
  arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC

* for-next/fgt-boot-init:
  : Booting clarifications and fine grained traps setup
  arm64: Require that system registers at all visible ELs be initialized
  arm64: Disable fine grained traps on boot
  arm64: Document requirements for fine grained traps at boot

* for-next/vhe-only:
  : Dealing with VHE-only CPUs (a.k.a. M1)
  arm64: Get rid of CONFIG_ARM64_VHE
  arm64: Cope with CPUs stuck in VHE mode
  arm64: cpufeature: Allow early filtering of feature override

* arm64/for-next/perf:
  arm64: perf: Remove redundant initialization in perf_event.c
  perf/arm_pmu_platform: Clean up with dev_printk
  perf/arm_pmu_platform: Fix error handling
  perf/arm_pmu_platform: Use dev_err_probe() for IRQ errors
  docs: perf: Address some html build warnings
  docs: perf: Add new description on HiSilicon uncore PMU v2
  drivers/perf: hisi: Add support for HiSilicon PA PMU driver
  drivers/perf: hisi: Add support for HiSilicon SLLC PMU driver
  drivers/perf: hisi: Update DDRC PMU for programmable counter
  drivers/perf: hisi: Add new functions for HHA PMU
  drivers/perf: hisi: Add new functions for L3C PMU
  drivers/perf: hisi: Add PMU version for uncore PMU drivers.
  drivers/perf: hisi: Refactor code for more uncore PMUs
  drivers/perf: hisi: Remove unnecessary check of counter index
  drivers/perf: Simplify the SMMUv3 PMU event attributes
  drivers/perf: convert sysfs sprintf family to sysfs_emit
  drivers/perf: convert sysfs scnprintf family to sysfs_emit_at() and sysfs_emit()
  drivers/perf: convert sysfs snprintf family to sysfs_emit

* for-next/neon-softirqs-disabled:
  : Run kernel mode SIMD with softirqs disabled
  arm64: fpsimd: run kernel mode NEON with softirqs disabled
  arm64: assembler: introduce wxN aliases for wN registers
  arm64: assembler: remove conditional NEON yield macros
2021-04-13  Merge remote-tracking branch 'arm64/for-next/neon-softirqs-disabled' into kvmarm-master/next  (Marc Zyngier)

Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-04-12  arm64: fpsimd: run kernel mode NEON with softirqs disabled  (Ard Biesheuvel)

Kernel mode NEON can be used in task or softirq context, but only in a non-nesting manner, i.e., softirq context is only permitted if the interrupt was not taken at a point where the kernel was using the NEON in task context.

This means all users of kernel mode NEON have to be aware of this limitation, and either need to provide scalar fallbacks that may be much slower (up to 20x for AES instructions) and potentially less safe, or use an asynchronous interface that defers processing to a later time when the NEON is guaranteed to be available.

Given that grabbing and releasing the NEON is cheap, we can relax this restriction, by increasing the granularity of kernel mode NEON code, and always disabling softirq processing while the NEON is being used in task context.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210302090118.30666-4-ardb@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-12  arm64: assembler: introduce wxN aliases for wN registers  (Ard Biesheuvel)

The AArch64 asm syntax has this slightly tedious property that the names used in mnemonics to refer to registers depend on whether the opcode in question targets the entire 64-bits (xN), or only the least significant 8, 16 or 32 bits (wN).

When writing parameterized code such as macros, this can be annoying, as macro arguments don't lend themselves to indexed lookups, and so generating a reference to wN in a macro that receives xN as an argument is problematic.

For instance, an upcoming patch that modifies the implementation of the cond_yield macro to be able to refer to 32-bit registers would need to modify invocations such as

    cond_yield 3f, x8

to

    cond_yield 3f, 8

so that the second argument can be token pasted after x or w to emit the correct register reference. Unfortunately, this interferes with the self documenting nature of the first example, where the second argument is obviously a register, whereas in the second example, one would need to go and look at the code to find out what '8' means.

So let's fix this by defining wxN aliases for all xN registers, which resolve to the 32-bit alias of each respective 64-bit register. This allows the macro implementation to paste the xN reference after a w to obtain the correct register name.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210302090118.30666-3-ardb@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
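A minimal sketch of the aliasing trick (illustrative, not the verbatim header text; the halve32 macro is a hypothetical example): each wxN symbol is declared as a .req alias for the matching wN register, so a macro that receives an xN name can simply prepend a 'w' to get the 32-bit view.

    /* define wxN as an alias for wN, for N = 0..30 */
    .irp    n, 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30
    wx\n    .req    w\n
    .endr

    /* a macro that takes an xN name but needs the 32-bit register: */
    .macro  halve32, reg:req        /* \reg is e.g. x8 */
    lsr     w\reg, w\reg, #1        /* w\reg -> "wx8" -> resolves to w8 */
    .endm

    halve32 x8                      /* assembles as: lsr w8, w8, #1 */

Without the aliases, "wx8" would be an undefined symbol; with them, the invocation stays self-documenting (the caller still passes x8, not a bare index).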
2021-04-12  arm64: assembler: remove conditional NEON yield macros  (Ard Biesheuvel)

The users of the conditional NEON yield macros have all been switched to the simplified cond_yield macro, and so the NEON specific ones can be removed.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210302090118.30666-2-ardb@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-03-25  KVM: arm64: Generate final CTR_EL0 value when running in Protected mode  (Marc Zyngier)

In protected mode, late CPUs are not allowed to boot (enforced by the PSCI relay). We can thus specialise the read_ctr macro to always return a pre-computed, sanitised value. Special care is taken to prevent the use of this custom version outside of protected mode.

Reviewed-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
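A hedged sketch of the shape this takes (callback and capability names assumed from the description above, details simplified): the kernel-proper variant keeps reading CTR_EL0 (or the sanitised copy on mismatched systems), while the nVHE hyp variant is a movz/movk sequence patched at boot with the final value.

    .macro  read_ctr, reg
    #ifndef __KVM_NVHE_HYPERVISOR__
    alternative_if_not ARM64_MISMATCHED_CACHE_TYPE
        mrs     \reg, ctr_el0           // read CTR_EL0 directly
        nop
    alternative_else
        ldr_l   \reg, arm64_ftr_reg_ctrel0 + ARM64_FTR_SYSVAL
    alternative_endif
    #else
    /* nVHE hyp: patched with the pre-computed, sanitised CTR_EL0;
     * only valid in protected mode */
    alternative_cb kvm_compute_final_ctr_el0
        movz    \reg, #0
        movk    \reg, #0, lsl #16
        movk    \reg, #0, lsl #32
        movk    \reg, #0, lsl #48
    alternative_cb_end
    #endif
    .endm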
2021-03-24  arm64: Always keep DAIF.[IF] in sync  (Hector Martin)

Apple SoCs (A11 and newer) have some interrupt sources hardwired to the FIQ line. We implement support for this by simply treating IRQs and FIQs the same way in the interrupt vectors.

To support these systems, the FIQ mask bit needs to be kept in sync with the IRQ mask bit, so both kinds of exceptions are masked together.

No other platforms should be delivering FIQ exceptions right now, and we already unmask FIQ in normal process context, so this should not have an effect on other systems - if spurious FIQs were arriving, they would already panic the kernel.

Signed-off-by: Hector Martin <marcan@marcan.st>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Hector Martin <marcan@marcan.st>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210315115629.57191-6-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
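Concretely, in the DAIF{set,clr} immediate the bits are D=8, A=4, I=2, F=1, so keeping the two bits in sync means the masking macros move from #2 to #3 (an illustrative before/after, not the full patch):

    /* before: IRQ only */
    msr     daifset, #2             // mask IRQ
    msr     daifclr, #2             // unmask IRQ

    /* after: IRQ and FIQ always move together */
    msr     daifset, #3             // mask IRQ + FIQ
    msr     daifclr, #3             // unmask IRQ + FIQ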
2021-03-19  arm64: asm: Provide set_sctlr_el2 macro  (Quentin Perret)

We will soon need to turn the EL2 stage 1 MMU on and off in nVHE protected mode, so refactor the set_sctlr_el1 macro to make it usable for that purpose.

Acked-by: Will Deacon <will@kernel.org>
Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210319100146.1149909-17-qperret@google.com
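The refactoring can be sketched as a generic helper parameterised by the system register name, with thin per-EL wrappers (close to, but not necessarily identical to, the final code):

    .macro  set_sctlr, sreg, reg
    msr     \sreg, \reg
    isb
    ic      iallu               // discard speculatively fetched insns
    dsb     nsh
    isb
    .endm

    .macro  set_sctlr_el1, reg
    set_sctlr sctlr_el1, \reg
    .endm

    .macro  set_sctlr_el2, reg
    set_sctlr sctlr_el2, \reg
    .endm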
2021-02-12  Merge branch 'for-next/crypto' into for-next/core  (Will Deacon)

Introduce a new macro to allow yielding the vector unit if preemption is required. The initial users of this are being merged via the crypto tree for 5.12.

* for-next/crypto:
  arm64: assembler: add cond_yield macro
2021-02-08  arm64: Turn the MMU-on sequence into a macro  (Marc Zyngier)

Turning the MMU on is a popular sport in the arm64 kernel, and we do it more than once, or even twice. As we are about to add even more, let's turn it into a macro.

No expected functional change.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-4-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
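The open-coded sequence being folded into a macro looks roughly like this (a sketch; the macro form also appears under the 2021-03-19 set_sctlr_el2 entry above):

    /* before: repeated at every MMU-enable site */
    msr     sctlr_el1, x0           // enable MMU and caches
    isb
    ic      iallu                   // drop insns speculated from the PoC
    dsb     nsh
    isb

    /* after: one macro, used everywhere the MMU is turned on */
    set_sctlr_el1 x0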
2021-02-03  arm64: assembler: add cond_yield macro  (Ard Biesheuvel)

Add a macro cond_yield that branches to a specified label when called if the TIF_NEED_RESCHED flag is set and decreasing the preempt count would make the task preemptible again, so that a reschedule can occur.

This can be used by kernel mode SIMD code that keeps a lot of state in SIMD registers, which would make chunking the input in order to perform the cond_resched() check from C code disproportionately costly.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210203113626.220151-2-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
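A hedged sketch of cond_yield and a typical call site (the real macro may differ in detail): it relies on the combined 64-bit preempt count + need-resched word described under the 2018-12-11 entry below, so a single subtract-and-test decides whether yielding is both requested and possible.

    .macro  cond_yield, lbl:req, tmp:req
    #ifdef CONFIG_PREEMPTION
    get_current_task \tmp
    ldr     \tmp, [\tmp, #TSK_TI_PREEMPT]   // count + inverted resched flag
    sub     \tmp, \tmp, #PREEMPT_DISABLE_OFFSET
    cbz     \tmp, \lbl                      // preemptible and resched pending
    #endif
    .endm

    /* typical use in a long-running SIMD loop: */
    0:  /* ... process one block ... */
        cond_yield 3f, x8
        b       0b
    3:  /* save state, yield, then resume at 0b */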
2020-12-22  kasan, arm64: expand CONFIG_KASAN checks  (Andrey Konovalov)

Some #ifdef CONFIG_KASAN checks are only relevant for software KASAN modes (either related to shadow memory or compiler instrumentation). Expand those into CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS.

Link: https://lkml.kernel.org/r/e6971e432dbd72bb897ff14134ebb7e169bdcf0c.1606161801.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Branislav Rankov <Branislav.Rankov@arm.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Evgenii Stepanov <eugenis@google.com>
Cc: Kevin Brodsky <kevin.brodsky@arm.com>
Cc: Marco Elver <elver@google.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
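In preprocessor terms the expansion is simply (illustrative):

    /* before: also matches hardware tag-based KASAN, which needs no shadow */
    #ifdef CONFIG_KASAN

    /* after: software modes only */
    #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)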
2020-09-30  kvm: arm64: Remove hyp_adr/ldr_this_cpu  (David Brazdil)

The hyp_adr/ldr_this_cpu helpers were introduced for use in hyp code because they always needed to use TPIDR_EL2 for base, while adr/ldr_this_cpu from kernel proper would select between TPIDR_EL2 and _EL1 based on VHE/nVHE.

Simplify this now that the hyp mode case can be handled using the __KVM_VHE/NVHE_HYPERVISOR__ macros.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Andrew Scull <ascull@google.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20200922204910.7265-6-dbrazdil@google.com
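The simplification can be sketched as a single base-register selector shared by kernel and hyp code (shape as described above, details hedged):

    .macro  this_cpu_offset, dst
    #if defined(__KVM_NVHE_HYPERVISOR__) || defined(__KVM_VHE_HYPERVISOR__)
    mrs     \dst, tpidr_el2             // hyp code always uses TPIDR_EL2
    #else
    alternative_if_not ARM64_HAS_VIRT_HOST_EXTN
    mrs     \dst, tpidr_el1             // nVHE host kernel
    alternative_else
    mrs     \dst, tpidr_el2             // VHE host kernel
    alternative_endif
    #endif
    .endm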
2020-05-07  arm64: asm: Provide a mechanism for generating ELF note for BTI  (Mark Brown)

ELF files built for BTI should have a program property note section which identifies them as such. The linker expects to find this note in all object files it is linking into a BTI annotated output; the compiler will ensure that this happens for C files, but for assembler files we need to do this in the source, so provide a macro which can be used for this purpose.

To support likely future requirements for additional notes we split the definition of the flags to set for BTI code from the macro that creates the note itself.

This is mainly for use in the vDSO, which should be a normal ELF shared library and should therefore include BTI annotations when built for BTI.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20200506195138.22086-9-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
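The note is a standard GNU program property; a hedged sketch of what such a macro emits (constants per the GNU ELF ABI, shown literally here for illustration):

    .macro emit_aarch64_feature_1_and, feat
    .pushsection .note.gnu.property, "a"
    .align  3
    .long   4                       // namesz: strlen("GNU") + 1
    .long   0x10                    // descsz: one 16-byte property
    .long   5                       // type: NT_GNU_PROPERTY_TYPE_0
    .asciz  "GNU"
    .long   0xc0000000              // GNU_PROPERTY_AARCH64_FEATURE_1_AND
    .long   4                       // pr_datasz
    .long   \feat                   // e.g. BTI (0x1) | PAC (0x2)
    .align  3                       // pad descriptor to 8 bytes
    .popsection
    .endm

Splitting the \feat flags from the note emission is what allows future properties to be added without another copy of this boilerplate.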
2020-03-25  Merge branches 'for-next/memory-hotremove', 'for-next/arm_sdei', 'for-next/amu', 'for-next/final-cap-helper', 'for-next/cpu_ops-cleanup', 'for-next/misc' and 'for-next/perf' into for-next/core  (Catalin Marinas)

* for-next/memory-hotremove:
  : Memory hot-remove support for arm64
  arm64/mm: Enable memory hot remove
  arm64/mm: Hold memory hotplug lock while walking for kernel page table dump

* for-next/arm_sdei:
  : SDEI: fix double locking on return from hibernate and clean-up
  firmware: arm_sdei: clean up sdei_event_create()
  firmware: arm_sdei: Use cpus_read_lock() to avoid races with cpuhp
  firmware: arm_sdei: fix possible double-lock on hibernate error path
  firmware: arm_sdei: fix double-lock on hibernate with shared events

* for-next/amu:
  : ARMv8.4 Activity Monitors support
  clocksource/drivers/arm_arch_timer: validate arch_timer_rate
  arm64: use activity monitors for frequency invariance
  cpufreq: add function to get the hardware max frequency
  Documentation: arm64: document support for the AMU extension
  arm64/kvm: disable access to AMU registers from kvm guests
  arm64: trap to EL1 accesses to AMU counters from EL0
  arm64: add support for the AMU extension v1

* for-next/final-cap-helper:
  : Introduce cpus_have_final_cap_helper(), migrate arm64 KVM to it
  arm64: kvm: hyp: use cpus_have_final_cap()
  arm64: cpufeature: add cpus_have_final_cap()

* for-next/cpu_ops-cleanup:
  : cpu_ops[] access code clean-up
  arm64: Introduce get_cpu_ops() helper function
  arm64: Rename cpu_read_ops() to init_cpu_ops()
  arm64: Declare ACPI parking protocol CPU operation if needed

* for-next/misc:
  : Various fixes and clean-ups
  arm64: define __alloc_zeroed_user_highpage
  arm64/kernel: Simplify __cpu_up() by bailing out early
  arm64: remove redundant blank for '=' operator
  arm64: kexec_file: Fixed code style.
  arm64: add blank after 'if'
  arm64: fix spelling mistake "ca not" -> "cannot"
  arm64: entry: unmask IRQ in el0_sp()
  arm64: efi: add efi-entry.o to targets instead of extra-$(CONFIG_EFI)
  arm64: csum: Optimise IPv6 header checksum
  arch/arm64: fix typo in a comment
  arm64: remove gratuitious/stray .ltorg stanzas
  arm64: Update comment for ASID() macro
  arm64: mm: convert cpu_do_switch_mm() to C
  arm64: fix NUMA Kconfig typos

* for-next/perf:
  : arm64 perf updates
  arm64: perf: Add support for ARMv8.5-PMU 64-bit counters
  KVM: arm64: limit PMU version to PMUv3 for ARMv8.1
  arm64: cpufeature: Extract capped perfmon fields
  arm64: perf: Clean up enable/disable calls
  perf: arm-ccn: Use scnprintf() for robustness
  arm64: perf: Support new DT compatibles
  arm64: perf: Refactor PMU init callbacks
  perf: arm_spe: Remove unnecessary zero check on 'nr_pages'
2020-03-06  arm64: trap to EL1 accesses to AMU counters from EL0  (Ionela Voinescu)

The activity monitors extension is an optional extension introduced by the ARMv8.4 CPU architecture. In order to access the activity monitors counters safely, if desired, the kernel should detect the presence of the extension through the feature register, and mediate the access.

Therefore, disable direct accesses to activity monitors counters from EL0 (userspace) and trap them to EL1 (kernel).

Note that the ARM64_AMU_EXTN kernel config option has no effect on this code. Given that AMUSERENR_EL0 resets to an UNKNOWN value, setting the trap of EL0 accesses to EL1 is always attempted for safety and security considerations. Therefore firmware should still ensure accesses to AMU registers are not trapped in EL2/EL3, as this code cannot be bypassed if the CPU implements the Activity Monitors Unit.

Signed-off-by: Ionela Voinescu <ionela.voinescu@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Steve Capper <steve.capper@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
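This is the kind of reset done from early CPU setup; a hedged sketch of the helper shape (label and macro names illustrative):

    .macro  reset_amuserenr_el0, tmpreg
    mrs     \tmpreg, id_aa64pfr0_el1        // check for AMUv1
    ubfx    \tmpreg, \tmpreg, #ID_AA64PFR0_AMU_SHIFT, #4
    cbz     \tmpreg, .Lskip_\@              // no AMU, nothing to do
    msr_s   SYS_AMUSERENR_EL0, xzr          // trap EL0 counter accesses to EL1
    .Lskip_\@:
    .endm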
2020-02-27  arm64: mm: convert cpu_do_switch_mm() to C  (Mark Rutland)

There's no reason that cpu_do_switch_mm() needs to be written as an assembly function, and having it as a C function would make it easier to maintain.

This patch converts cpu_do_switch_mm() to C, removing code that this change makes redundant (e.g. the mmid macro). Since the header comment was stale and the prototype now implies all the necessary information, this comment is removed. The 'pgd_phys' argument is made a phys_addr_t to match the return type of virt_to_phys().

At the same time, post_ttbr_update_workaround() is updated to use IS_ENABLED(), which allows the compiler to figure out it can elide calls for !CONFIG_CAVIUM_ERRATUM_27456 builds.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
[catalin.marinas@arm.com: change comments from asm-style to C-style]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-01-28  Merge branch 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)

Pull scheduler updates from Ingo Molnar:
"These were the main changes in this cycle:

 - More -rt motivated separation of CONFIG_PREEMPT and CONFIG_PREEMPTION.
 - Add more low level scheduling topology sanity checks and warnings to filter out nonsensical topologies that break scheduling.
 - Extend uclamp constraints to influence wakeup CPU placement
 - Make the RT scheduler more aware of asymmetric topologies and CPU capacities, via uclamp metrics, if CONFIG_UCLAMP_TASK=y
 - Make idle CPU selection more consistent
 - Various fixes, smaller cleanups, updates and enhancements - please see the git log for details"

* 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (58 commits)
  sched/fair: Define sched_idle_cpu() only for SMP configurations
  sched/topology: Assert non-NUMA topology masks don't (partially) overlap
  idle: fix spelling mistake "iterrupts" -> "interrupts"
  sched/fair: Remove redundant call to cpufreq_update_util()
  sched/psi: create /proc/pressure and /proc/pressure/{io|memory|cpu} only when psi enabled
  sched/fair: Fix sgc->{min,max}_capacity calculation for SD_OVERLAP
  sched/fair: calculate delta runnable load only when it's needed
  sched/cputime: move rq parameter in irqtime_account_process_tick
  stop_machine: Make stop_cpus() static
  sched/debug: Reset watchdog on all CPUs while processing sysrq-t
  sched/core: Fix size of rq::uclamp initialization
  sched/uclamp: Fix a bug in propagating uclamp value in new cgroups
  sched/fair: Load balance aggressively for SCHED_IDLE CPUs
  sched/fair : Improve update_sd_pick_busiest for spare capacity case
  watchdog: Remove soft_lockup_hrtimer_cnt and related code
  sched/rt: Make RT capacity-aware
  sched/fair: Make EAS wakeup placement consider uclamp restrictions
  sched/fair: Make task_fits_capacity() consider uclamp restrictions
  sched/uclamp: Rename uclamp_util_with() into uclamp_rq_util_with()
  sched/uclamp: Make uclamp util helpers use and return UL values
  ...
2020-01-22  Merge branch 'for-next/asm-annotations' into for-next/core  (Will Deacon)

* for-next/asm-annotations: (6 commits)
  arm64: kernel: Correct annotation of end of el0_sync
  ...
2020-01-17  arm64: assembler: remove smp_dmb macro  (Mark Rutland)

These days arm64 kernels are always SMP, and thus smp_dmb is an overly-long way of writing dmb. Naturally, no-one uses it.

Remove the unused macro.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
2020-01-17  arm64: assembler: remove inherit_daif macro  (Mark Rutland)

We haven't needed the inherit_daif macro since commit ed3768db588291dd ("arm64: entry: convert el1_sync to C"), which converted all callers to C and the local_daif_inherit function.

Remove the unused macro.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
2020-01-08  arm64: asm: Remove ENDPIPROC()  (Mark Brown)

Now that all the users have been removed, delete the definition of ENDPIPROC() to ensure we don't acquire any new users.

Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
2020-01-08  arm64: asm: Add new-style position independent function annotations  (Mark Brown)

As part of an effort to make the annotations in assembly code clearer and more consistent, new macros have been introduced, including replacements for ENTRY() and ENDPROC().

On arm64 we have ENDPIPROC(), a custom version of ENDPROC() used for code that needs to run in position independent environments like EFI. It creates an alias for the function with the prefix __pi_ and then emits the standard ENDPROC.

Add new-style macros to replace this which expand to the standard SYM_FUNC_*() and SYM_FUNC_ALIAS_*(), resulting in the same object code. These are added in linkage.h for consistency with where the generic assembler code has its macros.

Signed-off-by: Mark Brown <broonie@kernel.org>
[will: Rename 'WEAK' macro, use ';' instead of ASM_NL, deprecate ENDPIPROC]
Signed-off-by: Will Deacon <will@kernel.org>
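The replacement macros are thin wrappers that emit a __pi_-prefixed alias around the standard annotations; roughly (a sketch of the idea, not necessarily the exact header text):

    #define SYM_FUNC_START_PI(x)                    \
            SYM_FUNC_START_ALIAS(__pi_##x)          \
            SYM_FUNC_START(x)

    #define SYM_FUNC_END_PI(x)                      \
            SYM_FUNC_END(x)                         \
            SYM_FUNC_END_ALIAS(__pi_##x)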
2019-12-08  sched/rt, arm64: Use CONFIG_PREEMPTION  (Thomas Gleixner)

CONFIG_PREEMPTION is selected by CONFIG_PREEMPT and by CONFIG_PREEMPT_RT. Both PREEMPT and PREEMPT_RT require the same functionality which today depends on CONFIG_PREEMPT.

Switch the Kconfig dependency, entry code and preemption handling over to use CONFIG_PREEMPTION. Add PREEMPT_RT output in show_stack().

[bigeasy: +traps.c, Kconfig]
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Link: https://lore.kernel.org/r/20191015191821.11479-3-bigeasy@linutronix.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-08-30  Merge branches 'for-next/52-bit-kva', 'for-next/cpu-topology', 'for-next/error-injection', 'for-next/perf', 'for-next/psci-cpuidle', 'for-next/rng', 'for-next/smpboot', 'for-next/tbi' and 'for-next/tlbi' into for-next/core  (Will Deacon)

* for-next/52-bit-kva: (25 commits)
  Support for 52-bit virtual addressing in kernel space
* for-next/cpu-topology: (9 commits)
  Move CPU topology parsing into core code and add support for ACPI 6.3
* for-next/error-injection: (2 commits)
  Support for function error injection via kprobes
* for-next/perf: (8 commits)
  Support for i.MX8 DDR PMU and proper SMMUv3 group validation
* for-next/psci-cpuidle: (7 commits)
  Move PSCI idle code into a new CPUidle driver
* for-next/rng: (4 commits)
  Support for 'rng-seed' property being passed in the devicetree
* for-next/smpboot: (3 commits)
  Reduce fragility of secondary CPU bringup in debug configurations
* for-next/tbi: (10 commits)
  Introduce new syscall ABI with relaxed requirements for pointer tags
* for-next/tlbi: (6 commits)
  Handle spurious page faults arising from kernel space
2019-08-09  arm64: mm: Introduce 52-bit Kernel VAs  (Steve Capper)

Most of the machinery is now in place to enable 52-bit kernel VAs that are detectable at boot time.

This patch adds a Kconfig option for 52-bit user and kernel addresses and plumbs in the requisite CONFIG_ macros as well as sets TCR.T1SZ, physvirt_offset and vmemmap at early boot.

To simplify things this patch also removes the 52-bit user/48-bit kernel kconfig option.

Signed-off-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-08-09  arm64: mm: Logic to make offset_ttbr1 conditional  (Steve Capper)

When running with a 52-bit userspace VA and a 48-bit kernel VA we offset ttbr1_el1 to allow the kernel pagetables with a 52-bit PTRS_PER_PGD to be used for both userspace and kernel.

Moving on to a 52-bit kernel VA we no longer require this offset to ttbr1_el1 should we be running on a system with HW support for 52-bit VAs.

This patch introduces conditional logic to offset_ttbr1 to query SYS_ID_AA64MMFR2_EL1 whenever 52-bit VAs are selected. If there is HW support for 52-bit VAs then the ttbr1 offset is skipped.

We choose to read a system register rather than vabits_actual because offset_ttbr1 can be called in places where the kernel data is not actually mapped. Calls to offset_ttbr1 appear to be made from rarely called code paths so this extra logic is not expected to adversely affect performance.

Signed-off-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
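A hedged sketch of the conditional logic (field and label names assumed for illustration; the LVA field of ID_AA64MMFR2_EL1 reports hardware 52-bit VA support):

    .macro  offset_ttbr1, ttbr, tmp
    #ifdef CONFIG_ARM64_VA_BITS_52
    mrs_s   \tmp, SYS_ID_AA64MMFR2_EL1
    and     \tmp, \tmp, #(0xf << ID_AA64MMFR2_LVA_SHIFT)
    cbnz    \tmp, .Lskipoffs_\@             // HW does 52-bit VAs: no offset
    orr     \ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET
    .Lskipoffs_\@:
    #endif
    .endm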
2019-08-05  arm64: Remove unused assembly macro  (Julien Thierry)

As of commit 4141c857fd09dbed480f021b3eece4f46c653161 ("arm64: convert raw syscall invocation to C"), which moved syscall handling from assembly to C, the macro mask_nospec64 is no longer referenced.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-07-05  arm64: assembler: Switch ESB-instruction with a vanilla nop if !ARM64_HAS_RAS  (James Morse)

The ESB-instruction is a nop on CPUs that don't implement the RAS extensions. This lets us use it in places like the vectors without having to use alternatives.

If someone disables CONFIG_ARM64_RAS_EXTN, this instruction still has its RAS extensions behaviour, but we no longer read DISR_EL1, as reading that register depends on alternatives. This could go wrong if we want to synchronize an SError from a KVM guest: on a CPU that has the RAS extensions but with the Kconfig option disabled, we would consume the pending SError with no chance of ever reading it.

Hide the ESB-instruction behind the CONFIG_ARM64_RAS_EXTN option, outputting a regular nop if the feature has been disabled.

Reported-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
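ESB is allocated from the hint space (hint #16), so the macro can degrade to a plain NOP at build time; a sketch of the shape described:

    .macro  esb
    #ifdef CONFIG_ARM64_RAS_EXTN
    hint    #16                     // ESB: error synchronization barrier
    #else
    nop                             // RAS support compiled out
    #endif
    .endm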
2019-06-19  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 234  (Thomas Gleixner)

Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not see http www gnu org licenses

extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-only

has been chosen to replace the boilerplate/reference in 503 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Enrico Weigelt <info@metux.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190602204653.811534538@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-14  arm64: assembler: Update comment above cond_yield_neon() macro  (Hillf Danton)

Since commit 7faa313f05ca ("arm64: preempt: Fix big-endian when checking preempt count in assembly") both the preempt count and the 'need_resched' flag are checked as part of a single 64-bit load in cond_yield_neon(), so update the stale comment to reflect reality.

Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Dave Martin <Dave.Martin@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Hillf Danton <hdanton@sina.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2019-04-16  arm64: add CVADP support to the cache maintenance helper  (Andrew Murray)

Allow users of dcache_by_line_op to specify cvadp as an op.

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
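Usage stays the same as for the other ops; a hedged sketch of a call site and of the fallback patching inside the macro (capability name ARM64_HAS_DCPODP assumed):

    /* clean D-cache by VA to the point of deep persistence */
    dcache_by_line_op cvadp, sy, x0, x1, x2, x3

    /* inside the macro, CVADP is patched in only where supported,
     * falling back to CVAP otherwise: */
    alternative_insn "dc cvap, \kaddr", "dc cvadp, \kaddr", ARM64_HAS_DCPODP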
2019-04-05  arm64: Use defines instead of magic numbers  (Alexandru Elisei)

The following assembly code is not trivial; make it slightly easier to read by replacing some of the magic numbers with the defines which are already present in sysreg.h.

Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2019-02-28  arm64: Add workaround for Fujitsu A64FX erratum 010001  (Zhang Lei)

On Fujitsu-A64FX cores (versions 1.0 and 1.1), memory accesses may cause an undefined fault (Data abort, DFSC=0b111111). This fault occurs under a specific hardware condition when a load/store instruction performs an address translation. Any load/store instruction, except for non-fault accesses (both Armv8 and SVE), might cause this undefined fault.

The TCR_ELx.NFD1 bit is used by the kernel when CONFIG_RANDOMIZE_BASE is enabled to mitigate timing attacks against KASLR, where the kernel address space could be probed using the FFR and suppressed fault on SVE loads.

Since this erratum causes spurious exceptions, which may corrupt the exception registers, we clear the TCR_ELx.NFDx bits when booting on an affected CPU.

Signed-off-by: Zhang Lei <zhang.lei@jp.fujitsu.com>
[Generated MIDR value/mask for __cpu_setup(), removed spurious-fault handler and always disabled the NFDx bits on affected CPUs]
Signed-off-by: James Morse <james.morse@arm.com>
Tested-by: zhang.lei <zhang.lei@jp.fujitsu.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-02-26  arm64: Rename get_thread_info()  (Julien Thierry)

The assembly macro get_thread_info() actually returns a task_struct and is analogous to the current/get_current macro/function.

While it could be argued that thread_info sits at the start of task_struct and the intention could have been to return a thread_info, instances of loads from/stores to the address obtained from get_thread_info() use offsets that are generated with offsetof(struct task_struct, [...]).

Rename get_thread_info() to state it returns a task_struct.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
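On arm64 the current task_struct pointer lives in sp_el0 while in the kernel, so the renamed macro is tiny (sketch; TSK_TI_FLAGS is a real asm-offset generated with offsetof on task_struct):

    .macro  get_current_task, rd
    mrs     \rd, sp_el0             // current task_struct, not thread_info
    .endm

    /* callers then use task_struct-relative offsets: */
    get_current_task x0
    ldr     x1, [x0, #TSK_TI_FLAGS]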
2019-02-06  arm64: Remove unused daif related functions/macros  (Julien Thierry)

There are some helpers to modify PSR.[DAIF] bits that are not referenced anywhere. The less these bits are available outside of local_irq_* functions the better.

Get rid of those unused helpers.

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-12-11  arm64: preempt: Fix big-endian when checking preempt count in assembly  (Will Deacon)

Commit 396244692232 ("arm64: preempt: Provide our own implementation of asm/preempt.h") extended the preempt count field in struct thread_info to 64 bits, so that it consists of a 32-bit count plus a 32-bit flag indicating whether or not the current task needs rescheduling.

Whilst the asm-offsets definition of TSK_TI_PREEMPT was updated to point to this new field, the assembly usage was left untouched meaning that a 32-bit load from TSK_TI_PREEMPT on a big-endian machine actually returns the reschedule flag instead of the count.

Whilst we could fix this by pointing TSK_TI_PREEMPT at the count field, we're actually better off reworking the two assembly users so that they operate on the whole 64-bit value in favour of inspecting the thread flags separately in order to determine whether a reschedule is needed.

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reported-by: "kernelci.org bot" <bot@kernelci.org>
Tested-by: Kevin Hilman <khilman@baylibre.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
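The difference, sketched (register choice and label illustrative): on big-endian, a 32-bit load from the 64-bit {count, need-resched} pair fetches the wrong half, so the fix operates on the whole doubleword and tests both parts at once.

    /* broken on BE: the w-load reads the flag word, not the count */
    ldr     w24, [tsk, #TSK_TI_PREEMPT]

    /* fixed: load and test count + flag together */
    ldr     x24, [tsk, #TSK_TI_PREEMPT]
    sub     x24, x24, #PREEMPT_DISABLE_OFFSET
    cbz     x24, 1f                 // preemptible and resched pending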
2018-12-10  arm64: Kconfig: Re-jig CONFIG options for 52-bit VA  (Will Deacon)

Enabling 52-bit VAs for userspace is pretty confusing, since it requires you to select "48-bit" virtual addressing in the Kconfig.

Rework the logic so that 52-bit user virtual addressing is advertised in the "Virtual address space size" choice, along with some help text to describe its interaction with Pointer Authentication. The EXPERT-only option to force all user mappings to the 52-bit range is then made available immediately below the VA size selection.

Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-12-10  arm64: mm: introduce 52-bit userspace support  (Steve Capper)

On arm64 there is optional support for a 52-bit virtual address space. To exploit this, one has to be running with a 64KB page size on hardware that supports it.

For an arm64 kernel supporting a 48-bit VA with a 64KB page size, some changes are needed to support a 52-bit userspace:
 * TCR_EL1.T0SZ needs to be 12 instead of 16,
 * TASK_SIZE needs to reflect the new size.

This patch implements the above when the support for 52-bit VAs is detected at early boot time.

On arm64, userspace address translation is controlled by TTBR0_EL1. As well as userspace, TTBR0_EL1 controls:
 * The identity mapping,
 * EFI runtime code.

It is possible to run a kernel with an identity mapping that has a larger VA size than userspace (and for this case __cpu_set_tcr_t0sz() would set TCR_EL1.T0SZ as appropriate). However, when the conditions for 52-bit userspace are met, it is possible to keep TCR_EL1.T0SZ fixed at 12. Thus in this patch, the TCR_EL1.T0SZ size changing logic is disabled.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-12-10  arm64: mm: Offset TTBR1 to allow 52-bit PTRS_PER_PGD  (Steve Capper)

Enabling 52-bit VAs on arm64 requires that the PGD table expands from 64 entries (for the 48-bit case) to 1024 entries. This quantity, PTRS_PER_PGD, is used as follows to compute which PGD entry corresponds to a given virtual address, addr:

    pgd_index(addr) -> (addr >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1)

Userspace addresses are prefixed by 0's, so for a 48-bit userspace address, uva, the following is true:

    (uva >> PGDIR_SHIFT) & (1024 - 1) == (uva >> PGDIR_SHIFT) & (64 - 1)

In other words, a 48-bit userspace address will have the same pgd_index when using PTRS_PER_PGD = 64 and 1024.

Kernel addresses are prefixed by 1's so, given a 48-bit kernel address, kva, we have the following inequality:

    (kva >> PGDIR_SHIFT) & (1024 - 1) != (kva >> PGDIR_SHIFT) & (64 - 1)

In other words a 48-bit kernel virtual address will have a different pgd_index when using PTRS_PER_PGD = 64 and 1024.

If, however, we note that:

    kva = 0xFFFF << 48 + lower   (where lower[63:48] == 0b)

and, PGDIR_SHIFT = 42 (as we are dealing with 64KB PAGE_SIZE), we can consider:

    (kva >> PGDIR_SHIFT) & (1024 - 1) - (kva >> PGDIR_SHIFT) & (64 - 1)
    = (0xFFFF << 6) & 0x3FF - (0xFFFF << 6) & 0x3F   // "lower" cancels out
    = 0x3C0

In other words, one can switch PTRS_PER_PGD to the 52-bit value globally provided that they increment ttbr1_el1 by 0x3C0 * 8 = 0x1E00 bytes when running with 48-bit kernel VAs (TCR_EL1.T1SZ = 16).

For kernel configuration where 52-bit userspace VAs are possible, this patch offsets ttbr1_el1 and sets PTRS_PER_PGD corresponding to the 52-bit value.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Steve Capper <steve.capper@arm.com>
[will: added comment to TTBR1_BADDR_4852_OFFSET calculation]
Signed-off-by: Will Deacon <will.deacon@arm.com>
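At switch time the adjustment is a single OR before the TTBR1 write; a hedged sketch (the config symbol is assumed from this era's Kconfig, and the constant is the 0x1E00 derived above):

    .macro  offset_ttbr1, ttbr
    #ifdef CONFIG_ARM64_USER_VA_BITS_52
    orr     \ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET  // 0x3C0 entries * 8 bytes
    #endif
    .endm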
2018-12-10  arm64: Fix minor issues with the dcache_by_line_op macro  (Will Deacon)

The dcache_by_line_op macro suffers from a couple of small problems:

First, the GAS directives that are currently being used rely on assembler behavior that is not documented, and probably not guaranteed to produce the correct behavior going forward. As a result, we end up with some undefined symbols in cache.o:

    $ nm arch/arm64/mm/cache.o
    ...
    U civac
    ...
    U cvac
    U cvap
    U cvau

This is due to the fact that the comparisons used to select the operation type in the dcache_by_line_op macro are comparing symbols not strings, and even though it seems that GAS is doing the right thing here (undefined symbols by the same name are equal to each other), it seems unwise to rely on this.

Second, when patching in a DC CVAP instruction on CPUs that support it, the fallback path consists of a DC CVAU instruction which may be affected by CPU errata that require ARM64_WORKAROUND_CLEAN_CACHE.

Solve these issues by unrolling the various maintenance routines and using the conditional directives that are documented as operating on strings. To avoid the complexity of nested alternatives, we move the DC CVAP patching to __clean_dcache_area_pop, falling back to a branch to __clean_dcache_area_poc if DCPOP is not supported by the CPU.

Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Suggested-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
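The documented way to compare macro arguments is as strings, e.g. with .ifc; an illustrative before/after of the selection logic (helper name as described, details hedged):

    /* before: symbol comparison; relies on undocumented behaviour and
     * leaves 'cvau' etc. behind as undefined symbols */
    .if     (\op == cvau || \op == cvac)
    dc      \op, \kaddr
    .endif

    /* after: string comparison, documented to work on macro args */
    .ifc    \op, cvau
    __dcache_op_workaround_clean_cache \op, \kaddr
    .endif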
2018-12-10  arm64: add EXPORT_SYMBOL_NOKASAN()  (Mark Rutland)

So that we can export symbols directly from assembly files, let's make use of the generic <asm/export.h>. We have a few symbols that we'll want to conditionally export for !KASAN kernel builds, so we add a helper for that in <asm/assembler.h>.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
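The helper is a one-line gate around the generic export macro (a sketch of the idea):

    #ifdef CONFIG_KASAN
    #define EXPORT_SYMBOL_NOKASAN(name)
    #else
    #define EXPORT_SYMBOL_NOKASAN(name)     EXPORT_SYMBOL(name)
    #endif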
2018-12-06  arm64: Add support for SB barrier and patch in over DSB; ISB sequences  (Will Deacon)

We currently use a DSB; ISB sequence to inhibit speculation in set_fs(). Whilst this works for current CPUs, future CPUs may implement a new SB barrier instruction which acts as an architected speculation barrier.

On CPUs that support it, patch in an SB; NOP sequence over the DSB; ISB sequence and advertise the presence of the new instruction to userspace.

Signed-off-by: Will Deacon <will.deacon@arm.com>
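A sketch of the patching described above (SB is a hint-space instruction, so it can be emitted as a raw encoding on assemblers that don't know it yet; the NOP keeps both sequences the same size for the alternatives framework):

    .macro  sb
    alternative_if_not ARM64_HAS_SB
    dsb     nsh
    isb
    alternative_else
    SB_BARRIER_INSN                 // the SB instruction, hint-encoded
    nop
    alternative_endif
    .endm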
2018-09-19  arm64: cpu_errata: Remove ARM64_MISMATCHED_CACHE_LINE_SIZE  (Will Deacon)

There's no need to treat mismatched cache-line sizes reported by CTR_EL0 differently to any other mismatched fields that we treat as "STRICT" in the cpufeature code. In both cases we need to trap and emulate EL0 accesses to the register, so drop ARM64_MISMATCHED_CACHE_LINE_SIZE and rely on ARM64_MISMATCHED_CACHE_TYPE instead.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
[catalin.marinas@arm.com: move ARM64_HAS_CNP in the empty cpucaps.h slot]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-04-11  arm64: assembler: add macros to conditionally yield the NEON under PREEMPT  (Ard Biesheuvel)

Add support macros to conditionally yield the NEON (and thus the CPU) that may be called from the assembler code.

In some cases, yielding the NEON involves saving and restoring a non-trivial amount of context (especially in the CRC folding algorithms), and so the macro is split into three, and the code in between is only executed when the yield path is taken, allowing the context to be preserved. The third macro takes an optional label argument that marks the resume path after a yield has been performed.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
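The three-part structure lets callers bracket only the expensive save/restore with the yield check; a hedged sketch of a call site (register choices and labels illustrative):

    if_will_cond_yield_neon
    st1     {v8.16b-v11.16b}, [x19]     // spill state only if yielding
    do_cond_yield_neon                  // release and re-acquire the NEON
    ld1     {v8.16b-v11.16b}, [x19]     // reload state after resume
    endif_yield_neon .Lresume           // optional resume-path label
    .Lresume: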
2018-04-11  arm64: assembler: add utility macros to push/pop stack frames  (Ard Biesheuvel)

We are going to add code to all the NEON crypto routines that will turn them into non-leaf functions, so we need to manage the stack frames. To make this less tedious and error prone, add some macros that take the number of callee saved registers to preserve and the extra size to allocate in the stack frame (for locals) and emit the ldp/stp sequences.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
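Typical use in a NEON routine that is about to become a non-leaf function (a sketch; neon_transform is a hypothetical routine, argument meanings as described above):

    ENTRY(neon_transform)
        frame_push  4, 16       // save 4 callee-saved regs (x19..x22)
                                // plus fp/lr, 16 bytes of locals
        mov     x19, x0         // state now survives nested calls
        /* ... */
        frame_pop               // restore in reverse order
        ret
    ENDPROC(neon_transform)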
2018-03-19  arm64: asm: drop special versions of adr_l/ldr_l/str_l for modules  (Ard Biesheuvel)

Now that we started keeping modules within 4 GB of the core kernel in all cases, we no longer need to special case the adr_l/ldr_l/str_l macros for modules to deal with them being loaded farther away.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
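With modules guaranteed to sit within 4 GB of the kernel, a single adrp/add pair is always in range, so one definition serves everyone (a sketch of the unified macro):

    .macro  adr_l, dst, sym
    adrp    \dst, \sym                      // +/- 4 GB, page-granular
    add     \dst, \dst, :lo12:\sym          // add the low 12 bits
    .endm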
2018-02-10  Merge tag 'kvm-4.16-1' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds)

Pull KVM updates from Radim Krčmář:
"ARM:
 - icache invalidation optimizations, improving VM startup time
 - support for forwarded level-triggered interrupts, improving performance for timers and passthrough platform devices
 - a small fix for power-management notifiers, and some cosmetic changes

PPC:
 - add MMIO emulation for vector loads and stores
 - allow HPT guests to run on a radix host on POWER9 v2.2 CPUs without requiring the complex thread synchronization of older CPU versions
 - improve the handling of escalation interrupts with the XIVE interrupt controller
 - support decrement register migration
 - various cleanups and bugfixes.

s390:
 - Cornelia Huck passed maintainership to Janosch Frank
 - exitless interrupts for emulated devices
 - cleanup of cpuflag handling
 - kvm_stat counter improvements
 - VSIE improvements
 - mm cleanup

x86:
 - hypervisor part of SEV
 - UMIP, RDPID, and MSR_SMI_COUNT emulation
 - paravirtualized TLB shootdown using the new KVM_VCPU_PREEMPTED bit
 - allow guests to see TOPOEXT, GFNI, VAES, VPCLMULQDQ, and more AVX512 features
 - show vcpu id in its anonymous inode name
 - many fixes and cleanups
 - per-VCPU MSR bitmaps (already merged through x86/pti branch)
 - stable KVM clock when nesting on Hyper-V (merged through x86/hyperv)"

* tag 'kvm-4.16-1' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (197 commits)
  KVM: PPC: Book3S: Add MMIO emulation for VMX instructions
  KVM: PPC: Book3S HV: Branch inside feature section
  KVM: PPC: Book3S HV: Make HPT resizing work on POWER9
  KVM: PPC: Book3S HV: Fix handling of secondary HPTEG in HPT resizing code
  KVM: PPC: Book3S PR: Fix broken select due to misspelling
  KVM: x86: don't forget vcpu_put() in kvm_arch_vcpu_ioctl_set_sregs()
  KVM: PPC: Book3S PR: Fix svcpu copying with preemption enabled
  KVM: PPC: Book3S HV: Drop locks before reading guest memory
  kvm: x86: remove efer_reload entry in kvm_vcpu_stat
  KVM: x86: AMD Processor Topology Information
  x86/kvm/vmx: do not use vm-exit instruction length for fast MMIO when running nested
  kvm: embed vcpu id to dentry of vcpu anon inode
  kvm: Map PFN-type memory regions as writable (if possible)
  x86/kvm: Make it compile on 32bit and with HYPYERVISOR_GUEST=n
  KVM: arm/arm64: Fixup userspace irqchip static key optimization
  KVM: arm/arm64: Fix userspace_irqchip_in_use counting
  KVM: arm/arm64: Fix incorrect timer_is_pending logic
  MAINTAINERS: update KVM/s390 maintainers
  MAINTAINERS: add Halil as additional vfio-ccw maintainer
  MAINTAINERS: add David as a reviewer for KVM/s390
  ...
2018-02-06  arm64: entry: Ensure branch through syscall table is bounded under speculation  (Will Deacon)

In a similar manner to array_index_mask_nospec, this patch introduces an assembly macro (mask_nospec64) which can be used to bound a value under speculation. This macro is then used to ensure that the indirect branch through the syscall table is bounded under speculation, with out-of-range addresses speculating as calls to sys_io_setup (0).

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
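A hedged sketch of the macro (same data-dependence idea as array_index_mask_nospec in C): the index is forced to zero under speculation whenever it is out of range, and CSDB prevents speculation past the mask computation.

    .macro  mask_nospec64, idx, limit, tmp
    sub     \tmp, \idx, \limit              // negative iff idx < limit
    bic     \tmp, \tmp, \idx                // guard against huge idx values
    and     \idx, \idx, \tmp, asr #63       // all-ones mask iff in range
    csdb                                    // speculation barrier for the mask
    .endm

Out-of-range indices thus become 0, which is why out-of-range syscalls speculate as calls to sys_io_setup (syscall 0).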