2021-02-09  ACPI: OSL: Rework acpi_check_resource_conflict()  (Rafael J. Wysocki)
Rearrange the code in acpi_check_resource_conflict() so as to drop redundant checks and unneeded local variables from there, and modify the messages printed by that function to be more concise and hopefully easier to understand. While at it, replace direct printk() usage with pr_*(). Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2021-02-09  Merge tag 'devfreq-next-for-5.12' of git://git.kernel.org/pub/scm/linux/kernel/git/chanwoo/linux  (Rafael J. Wysocki)
Pull devfreq changes for v5.12 from Chanwoo Choi:
"- Correct the spelling of a comment in the devfreq core
 - Replace devfreq->dev.parent with dev in the devfreq core
 - Remove an unneeded semicolon in rk3399_dmc.c"
* tag 'devfreq-next-for-5.12' of git://git.kernel.org/pub/scm/linux/kernel/git/chanwoo/linux:
  PM / devfreq: rk3399_dmc: Remove unneeded semicolon
  PM / devfreq: Replace devfreq->dev.parent as dev in devfreq_add_device
  PM / devfreq: Correct spelling in a comment
2021-02-09  arm64: Make CPU_BIG_ENDIAN depend on ld.bfd or ld.lld 13.0.0+  (Nathan Chancellor)
Similar to commit 28187dc8ebd9 ("ARM: 9025/1: Kconfig: CPU_BIG_ENDIAN depends on !LD_IS_LLD"), ld.lld prior to 13.0.0 does not properly support aarch64 big endian, leading to the following build error when CONFIG_CPU_BIG_ENDIAN is selected:

  ld.lld: error: unknown emulation: aarch64linuxb

This has been resolved in LLVM 13. To avoid errors like this, only allow CONFIG_CPU_BIG_ENDIAN to be selected when using ld.bfd or ld.lld 13.0.0 and newer. While we are here, note that the indentation of this symbol has used spaces since its introduction in commit a872013d6d03 ("arm64: kconfig: allow CPU_BIG_ENDIAN to be selected"); change it to tabs to be consistent with kernel coding style. Link: https://github.com/ClangBuiltLinux/linux/issues/380 Link: https://github.com/ClangBuiltLinux/linux/issues/1288 Link: https://github.com/llvm/llvm-project/commit/7605a9a009b5fa3bdac07e3131c8d82f6d08feb7 Link: https://github.com/llvm/llvm-project/commit/eea34aae2e74e9b6fbdd5b95f479bc7f397bf387 Reported-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Nathan Chancellor <nathan@kernel.org> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Link: https://lore.kernel.org/r/20210209005719.803608-1-nathan@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2021-02-09  arm64: cpufeatures: Allow disabling of Pointer Auth from the command-line  (Marc Zyngier)
In order to be able to disable Pointer Authentication at runtime, whether it is for testing purposes, or to work around HW issues, let's add support for overriding the ID_AA64ISAR1_EL1.{GPI,GPA,API,APA} fields. This is further mapped on the arm64.nopauth command-line alias. Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: David Brazdil <dbrazdil@google.com> Tested-by: Srinivas Ramana <sramana@codeaurora.org> Link: https://lore.kernel.org/r/20210208095732.3267263-23-maz@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2021-02-09  arm64: Defer enabling pointer authentication on boot core  (Srinivas Ramana)
Defer enabling pointer authentication on the boot core until after it is required to be enabled by the cpufeature framework. This will help in controlling the feature dynamically with a boot parameter. Signed-off-by: Ajay Patil <pajay@qti.qualcomm.com> Signed-off-by: Prasad Sodagudi <psodagud@codeaurora.org> Signed-off-by: Srinivas Ramana <sramana@codeaurora.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/1610152163-16554-2-git-send-email-sramana@codeaurora.org Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: David Brazdil <dbrazdil@google.com> Link: https://lore.kernel.org/r/20210208095732.3267263-22-maz@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2021-02-09  arm64: cpufeatures: Allow disabling of BTI from the command-line  (Marc Zyngier)
In order to be able to disable BTI at runtime, whether it is for testing purposes, or to work around HW issues, let's add support for overriding the ID_AA64PFR1_EL1.BTI field. This is further mapped on the arm64.nobti command-line alias. Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: David Brazdil <dbrazdil@google.com> Tested-by: Srinivas Ramana <sramana@codeaurora.org> Link: https://lore.kernel.org/r/20210208095732.3267263-21-maz@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2021-02-09arm64: Move "nokaslr" over to the early cpufeature infrastructureMarc Zyngier
Given that the early cpufeature infrastructure has borrowed quite a lot of code from the kaslr implementation, let's reimplement the matching of the "nokaslr" option with it. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: David Brazdil <dbrazdil@google.com> Link: https://lore.kernel.org/r/20210208095732.3267263-20-maz@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2021-02-09  KVM: arm64: Document HVC_VHE_RESTART stub hypercall  (Marc Zyngier)
For completeness, let's document the HVC_VHE_RESTART stub. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: David Brazdil <dbrazdil@google.com> Link: https://lore.kernel.org/r/20210208095732.3267263-19-maz@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2021-02-09  arm64: Make kvm-arm.mode={nvhe, protected} an alias of id_aa64mmfr1.vh=0  (Marc Zyngier)
Admittedly, passing id_aa64mmfr1.vh=0 on the command-line isn't that easy to understand, and it is likely that users would much prefer to write "kvm-arm.mode=nvhe", or "...=protected". So here you go. This has the added advantage that we can now always honor the "kvm-arm.mode=protected" option, even when booting on a VHE system. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: David Brazdil <dbrazdil@google.com> Link: https://lore.kernel.org/r/20210208095732.3267263-18-maz@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2021-02-09  arm64: Add an aliasing facility for the idreg override  (Marc Zyngier)
In order to map the override of idregs to options that a user can easily understand, let's introduce yet another option array, which maps an option to the corresponding idreg options. Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: David Brazdil <dbrazdil@google.com> Link: https://lore.kernel.org/r/20210208095732.3267263-17-maz@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
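For illustration, a minimal C sketch of what such an alias table can look like, with entries drawn from the surrounding commits; the struct layout and identifier names are approximations, not the kernel's exact code:

  struct ftr_alias {
          const char *alias;    /* what the user writes on the command line */
          const char *feature;  /* idreg override(s) it expands to          */
  };

  static const struct ftr_alias aliases[] = {
          { "kvm-arm.mode=nvhe", "id_aa64mmfr1.vh=0" },
          { "arm64.nobti",       "id_aa64pfr1.bt=0" },
          { "arm64.nopauth",
            "id_aa64isar1.gpi=0 id_aa64isar1.gpa=0 "
            "id_aa64isar1.api=0 id_aa64isar1.apa=0" },
  };

Matching an alias then reduces to a string substitution performed before the regular "cpureg.feature=value" parser runs.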
2021-02-09  arm64: Honor VHE being disabled from the command-line  (Marc Zyngier)
Finally we can check whether VHE is disabled on the command line, and not enable it if that's the user's wish. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: David Brazdil <dbrazdil@google.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210208095732.3267263-16-maz@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2021-02-09  arm64: Allow ID_AA64MMFR1_EL1.VH to be overridden from the command line  (Marc Zyngier)
As we want to be able to disable VHE at runtime, let's match "id_aa64mmfr1.vh=" from the command line as an override. This doesn't have much effect yet as our boot code doesn't look at the cpufeature, but only at the HW registers. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: David Brazdil <dbrazdil@google.com> Acked-by: Suzuki K Poulose <suzuki.poulose@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210208095732.3267263-15-maz@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2021-02-09  arm64: cpufeature: Add an early command-line cpufeature override facility  (Marc Zyngier)
In order to be able to override CPU features at boot time, let's add a command line parser that matches options of the form "cpureg.feature=value", and store the corresponding value into the override val/mask pair. No features are currently defined, so no expected change in functionality. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: David Brazdil <dbrazdil@google.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210208095732.3267263-14-maz@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
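A minimal sketch of the val/mask bookkeeping such a parser can feed, assuming the usual 4-bit idreg fields; names here are illustrative rather than the kernel's exact ones:

  #include <linux/bits.h>
  #include <linux/types.h>

  struct ftr_override_sketch {
          u64 val;   /* values to force             */
          u64 mask;  /* which bits are overridden   */
  };

  /* Record one matched "reg.field=value" token for a field at 'shift'. */
  static void record_override(struct ftr_override_sketch *o,
                              unsigned int shift, u64 value)
  {
          u64 mask = GENMASK_ULL(shift + 3, shift);  /* 4-bit field */

          o->val  &= ~mask;
          o->val  |= (value << shift) & mask;
          o->mask |= mask;
  }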
2021-02-09  arm64: Extract early FDT mapping from kaslr_early_init()  (Marc Zyngier)
As we want to parse more options very early in the kernel lifetime, let's always map the FDT early. This is achieved by moving that code out of kaslr_early_init(). No functional change expected. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: David Brazdil <dbrazdil@google.com> Link: https://lore.kernel.org/r/20210208095732.3267263-13-maz@kernel.org [will: Ensure KASAN is enabled before running C code] Signed-off-by: Will Deacon <will@kernel.org>
2021-02-09  arm64: cpufeature: Use IDreg override in __read_sysreg_by_encoding()  (Marc Zyngier)
__read_sysreg_by_encoding() is used by a bunch of cpufeature helpers, which should take the feature override into account. Let's do that. For good measure (and because we are likely to need it further down the line), make this helper available to the rest of the non-modular kernel. Code that needs to know the *real* features of a CPU can still use read_sysreg_s(), and find the bare, ugly truth. Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Acked-by: David Brazdil <dbrazdil@google.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210208095732.3267263-12-maz@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2021-02-09  arm64: cpufeature: Add global feature override facility  (Marc Zyngier)
Add a facility to globally override a feature, no matter what the HW says. Yes, this sounds dangerous, but we do respect the "safe" value for a given feature. This doesn't mean the user doesn't need to know what they are doing. Nothing uses this yet, so we are pretty safe. For now. Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Acked-by: David Brazdil <dbrazdil@google.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210208095732.3267263-11-maz@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
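A sketch of how a reader can merge such an override with the HW-derived value; the per-field "safe value" policing mentioned above is omitted for brevity, and the helper name is an approximation:

  #include <linux/types.h>

  struct arm64_ftr_override {
          u64 val;
          u64 mask;
  };

  static inline u64 apply_override(u64 regval,
                                   const struct arm64_ftr_override *o)
  {
          /* Overridden bits come from o->val, everything else from HW. */
          return (regval & ~o->mask) | (o->val & o->mask);
  }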
2021-02-09  arm64: Move SCTLR_EL1 initialisation to EL-agnostic code  (Marc Zyngier)
We can now move the initial SCTLR_EL1 setup to be used for both EL1 and EL2 setup. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: David Brazdil <dbrazdil@google.com> Link: https://lore.kernel.org/r/20210208095732.3267263-10-maz@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2021-02-09  arm64: Simplify init_el2_state to be non-VHE only  (Marc Zyngier)
As init_el2_state is now nVHE only, let's simplify it and drop the VHE setup. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: David Brazdil <dbrazdil@google.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210208095732.3267263-9-maz@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2021-02-09  arm64: Move VHE-specific SPE setup to mutate_to_vhe()  (Marc Zyngier)
There isn't much that a VHE kernel needs on top of whatever has been done for nVHE, so let's move the little we need to the VHE stub (the SPE setup), and drop the init_el2_state macro. No expected functional change. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: David Brazdil <dbrazdil@google.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210208095732.3267263-8-maz@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2021-02-09  arm64: Drop early setting of MDCR_EL2.TPMS  (Marc Zyngier)
When running VHE, we set MDCR_EL2.TPMS very early on to force the trapping of EL1 SPE accesses to EL2. However:
- we are running with HCR_EL2.{E2H,TGE}={1,1}, meaning that there is no EL1 to trap from
- before entering a guest, we call kvm_arm_setup_debug(), which sets MDCR_EL2_TPMS in the per-vcpu shadow mdcr_el2, which gets applied on entry by __activate_traps_common().
The early setting of MDCR_EL2.TPMS is therefore useless and can be dropped. Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210208095732.3267263-7-maz@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2021-02-09  arm64: Initialise as nVHE before switching to VHE  (Marc Zyngier)
As we are aiming to be able to control whether we enable VHE or not, let's always drop down to EL1 first, and only then upgrade to VHE if at all possible. This means that if the kernel is booted at EL2, we always start with a nVHE init, drop to EL1 to initialise the kernel, and only then upgrade the kernel EL to EL2 if possible (the process is obviously shortened for secondary CPUs). The resume path is handled similarly to a secondary CPU boot. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: David Brazdil <dbrazdil@google.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210208095732.3267263-6-maz@kernel.org [will: Avoid calling switch_to_vhe twice on kaslr path] Signed-off-by: Will Deacon <will@kernel.org>
2021-02-09  Documentation: kvm: fix warning  (Paolo Bonzini)
  Documentation/virt/kvm/api.rst:4927: WARNING: Title underline too short.

  4.130 KVM_XEN_VCPU_GET_ATTR
  --------------------------
Fixes: e1f68169a4f8 ("KVM: Add documentation for Xen hypercall and shared_info updates") Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-02-09  KVM: x86/xen: Allow reset of Xen attributes  (David Woodhouse)
In order to support Xen SHUTDOWN_soft_reset (for guest kexec, etc.) the VMM needs to be able to tear everything down and return the Xen features to a clean slate. Signed-off-by: David Woodhouse <dwmw@amazon.co.uk> Message-Id: <20210208232326.1830370-1-dwmw2@infradead.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-02-09  KVM: x86/mmu: Make HVA handler retpoline-friendly  (Maciej S. Szmigiero)
When retpolines are enabled they have high overhead in the inner loop inside kvm_handle_hva_range() that iterates over the provided memory area. Let's mark this function and its TDP MMU equivalent __always_inline so the compiler will be able to change the call to the actual handler function inside each of them into a direct one. This significantly improves performance on the unmap test on the existing kernel memslot code (tested on a Xeon 8167M machine):

  30 slots in use:
  Test       Before     After      Improvement
  Unmap      0.0353s    0.0334s    5%
  Unmap 2M   0.00104s   0.000407s  61%

  509 slots in use:
  Test       Before     After      Improvement
  Unmap      0.0742s    0.0740s    None
  Unmap 2M   0.00221s   0.00159s   28%

Looks like having an indirect call in these functions (and, so, a retpoline) might have interfered with unrolling of the whole loop in the CPU. Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com> Message-Id: <732d3fe9eb68aa08402a638ab0309199fa89ae56.1612810129.git.maciej.szmigiero@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
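A sketch of the pattern (illustrative, not the actual KVM walker): once the range walker is force-inlined into each caller, the compiler sees the concrete handler and can emit a direct call, so no retpoline fires on every iteration:

  struct kvm;  /* opaque in this sketch */

  typedef int (*hva_handler_t)(struct kvm *kvm, unsigned long hva);

  static __always_inline int handle_hva_range(struct kvm *kvm,
                                              unsigned long start,
                                              unsigned long end,
                                              hva_handler_t handler)
  {
          unsigned long hva;
          int ret = 0;

          for (hva = start; hva < end; hva += PAGE_SIZE)
                  ret |= handler(kvm, hva);  /* direct call once inlined */

          return ret;
  }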
2021-02-09  KVM: x86: hyper-v: Drop hv_vcpu_to_vcpu() helper  (Vitaly Kuznetsov)
The hv_vcpu_to_vcpu() helper is only used by other helpers and is not very complex; we can drop it without much regret. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20210126134816.1880136-16-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-02-09  KVM: x86: hyper-v: Allocate Hyper-V context lazily  (Vitaly Kuznetsov)
Hyper-V context is only needed for guests which use Hyper-V emulation in KVM (e.g. Windows/Hyper-V guests) so we don't actually need to allocate it in kvm_arch_vcpu_create(), we can postpone the action until Hyper-V specific MSRs are accessed or SynIC is enabled. Once allocated, let's keep the context alive for the lifetime of the vCPU as an attempt to free it would require additional synchronization with other vCPUs and normally it is not supposed to happen. Note, Hyper-V style hypercall enablement is done by writing to HV_X64_MSR_GUEST_OS_ID so we don't need to worry about allocating Hyper-V context from kvm_hv_hypercall(). Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20210126134816.1880136-15-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
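A sketch of the lazy-allocation pattern this describes; the function and field names are approximations of KVM's, not verbatim:

  static int kvm_hv_vcpu_init_sketch(struct kvm_vcpu *vcpu)
  {
          struct kvm_vcpu_hv *hv_vcpu;

          if (to_hv_vcpu(vcpu))  /* already allocated */
                  return 0;

          hv_vcpu = kzalloc(sizeof(*hv_vcpu), GFP_KERNEL_ACCOUNT);
          if (!hv_vcpu)
                  return -ENOMEM;

          /* Kept for the vCPU's lifetime; freeing it early would need
           * extra synchronization with other vCPUs. */
          vcpu->arch.hyperv = hv_vcpu;
          return 0;
  }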
2021-02-09  KVM: x86: hyper-v: Make Hyper-V emulation enablement conditional  (Vitaly Kuznetsov)
Hyper-V emulation is enabled in KVM unconditionally. This is bad at least from a security standpoint as it is an extra attack surface. Ideally, there should be a per-VM capability explicitly enabled by the VMM, but currently it is not the case and we can't mandate one without breaking backwards compatibility. We can, however, check guest-visible CPUIDs and only enable Hyper-V emulation when the "Hv#1" interface was exposed in HYPERV_CPUID_INTERFACE. Note, VMMs are free to act in any sequence they like, e.g. they can try to set MSRs first and CPUIDs later, so we still need to allow the host to read/write Hyper-V specific MSRs unconditionally. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20210126134816.1880136-14-vkuznets@redhat.com> [Add selftest vcpu_set_hv_cpuid API to avoid breaking xen_vmcall_test. - Paolo] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-02-09  KVM: x86: hyper-v: Allocate 'struct kvm_vcpu_hv' dynamically  (Vitaly Kuznetsov)
Hyper-V context is only needed for guests which use Hyper-V emulation in KVM (e.g. Windows/Hyper-V guests). 'struct kvm_vcpu_hv' is, however, quite big: it accounts for more than 1/4 of the total 'struct kvm_vcpu_arch', which is also quite big already. This all looks like a waste. Allocate 'struct kvm_vcpu_hv' dynamically. This patch does not bring any (intentional) functional change as we still allocate the context unconditionally, but it paves the way to doing that only when needed. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20210126134816.1880136-13-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-02-09  KVM: x86: hyper-v: Prepare to meet unallocated Hyper-V context  (Vitaly Kuznetsov)
Currently, Hyper-V context is part of 'struct kvm_vcpu_arch' and is always available. As a preparation to allocating it dynamically, check that it is not NULL at call sites which can normally proceed without it, i.e. the behavior is identical to the situation when Hyper-V emulation is not being used by the guest. When Hyper-V context for a particular vCPU is not allocated, we may still need to get 'vp_index' from there. E.g. in a hypothetical situation when Hyper-V emulation was enabled on one CPU and wasn't on another, a Hyper-V style send-IPI hypercall may still be used. Luckily, vp_index is always initialized to kvm_vcpu_get_idx() and can only be changed when Hyper-V context is present. Introduce kvm_hv_get_vpindex() helper for simplification. No functional change intended. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20210126134816.1880136-12-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
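Given the description above (an unallocated context implies vp_index still equals the vCPU index), the helper plausibly reduces to this sketch:

  static inline u32 kvm_hv_get_vpindex(struct kvm_vcpu *vcpu)
  {
          struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);

          return hv_vcpu ? hv_vcpu->vp_index : kvm_vcpu_get_idx(vcpu);
  }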
2021-02-09  KVM: x86: hyper-v: Always use to_hv_vcpu() accessor to get to 'struct kvm_vcpu_hv'  (Vitaly Kuznetsov)
As a preparation to allocating Hyper-V context dynamically, make it clear who's the user of said context. No functional change intended. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20210126134816.1880136-11-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-02-09  KVM: x86: hyper-v: Stop shadowing global 'current_vcpu' variable  (Vitaly Kuznetsov)
The 'current_vcpu' variable in KVM is a per-cpu pointer to the currently scheduled vcpu. The kvm_hv_flush_tlb()/kvm_hv_send_ipi() functions used to have a local 'vcpu' variable to iterate over vCPUs, but it's gone now and there's no need to use anything but the standard 'vcpu' as an argument. Suggested-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20210126134816.1880136-10-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-02-09  KVM: x86: hyper-v: Introduce to_kvm_hv() helper  (Vitaly Kuznetsov)
Spelling '&kvm->arch.hyperv' correctly is hard. Also, this makes the code more consistent with vmx/svm where to_kvm_vmx()/to_kvm_svm() are already being used. Opportunistically change kvm_hv_msr_{get,set}_crash_{data,ctl}() and kvm_hv_msr_set_crash_data() to take 'kvm' instead of 'vcpu' as these MSRs are partition wide. No functional change intended. Suggested-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20210126134816.1880136-9-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-02-09  KVM: x86: hyper-v: Rename vcpu_to_hv_syndbg() to to_hv_syndbg()  (Vitaly Kuznetsov)
vcpu_to_hv_syndbg()'s argument is always 'vcpu' so there's no need to have an additional prefix. Also, this makes the code more consistent with vmx/svm where to_vmx()/to_svm() are being used. No functional change intended. Suggested-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20210126134816.1880136-8-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-02-09  KVM: x86: hyper-v: Rename vcpu_to_stimer()/stimer_to_vcpu()  (Vitaly Kuznetsov)
vcpu_to_stimer()'s argument is almost always 'vcpu' so there's no need to have an additional prefix. Also, this makes the naming more consistent with to_hv_vcpu()/to_hv_synic(). Rename stimer_to_vcpu() to hv_stimer_to_vcpu() for consistency. No functional change intended. Suggested-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20210126134816.1880136-7-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-02-09  KVM: x86: hyper-v: Rename vcpu_to_synic()/synic_to_vcpu()  (Vitaly Kuznetsov)
vcpu_to_synic()'s argument is almost always 'vcpu' so there's no need to have an additional prefix. Also, as this is used outside of hyper-v emulation code, add a '_hv_' part to make it clear what this is. This makes the naming more consistent with to_hv_vcpu(). Rename synic_to_vcpu() to hv_synic_to_vcpu() for consistency. No functional change intended. Suggested-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20210126134816.1880136-6-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-02-09  KVM: x86: hyper-v: Rename vcpu_to_hv_vcpu() to to_hv_vcpu()  (Vitaly Kuznetsov)
vcpu_to_hv_vcpu()'s argument is almost always 'vcpu' so there's no need to have an additional prefix. Also, this makes the code more consistent with vmx/svm where to_vmx()/to_svm() are being used. No functional change intended. Suggested-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20210126134816.1880136-5-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-02-09  KVM: x86: hyper-v: Drop unused kvm_hv_vapic_assist_page_enabled()  (Vitaly Kuznetsov)
kvm_hv_vapic_assist_page_enabled() seems to be unused since its introduction in commit 10388a07164c1 ("KVM: Add HYPER-V apic access MSRs"), drop it. Reported-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20210126134816.1880136-4-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-02-09  selftests: kvm: Properly set Hyper-V CPUIDs in evmcs_test  (Vitaly Kuznetsov)
Generally, when Hyper-V emulation is enabled, the VMM is supposed to set Hyper-V CPUID identifications so the guest knows that Hyper-V features are available. evmcs_test doesn't currently do that, but so far Hyper-V emulation in KVM has been enabled unconditionally. As we are about to change that, proper Hyper-V CPUID identification should be set in selftests as well. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20210126134816.1880136-3-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-02-09  selftests: kvm: Move kvm_get_supported_hv_cpuid() to common code  (Vitaly Kuznetsov)
kvm_get_supported_hv_cpuid() may come in handy in all Hyper-V related tests. Split it out of the hyperv_cpuid test and create system-wide and vcpu versions. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20210126134816.1880136-2-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-02-09  KVM: Raise the maximum number of user memslots  (Vitaly Kuznetsov)
Current KVM_USER_MEM_SLOTS limits are arch specific (512 on Power, 509 on x86, 32 on s390, 16 on MIPS) but they don't really need to be. Memory slots are allocated dynamically in KVM when added so the only real limitation is 'id_to_index' array which is 'short'. We don't have any other KVM_MEM_SLOTS_NUM/KVM_USER_MEM_SLOTS-sized statically defined structures. Low KVM_USER_MEM_SLOTS can be a limiting factor for some configurations. In particular, when QEMU tries to start a Windows guest with Hyper-V SynIC enabled and e.g. 256 vCPUs the limit is hit as SynIC requires two pages per vCPU and the guest is free to pick any GFN for each of them, this fragments memslots as QEMU wants to have a separate memslot for each of these pages (which are supposed to act as 'overlay' pages). Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20210127175731.2020089-3-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
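A sketch of why 'short' is the ceiling; the constants here are illustrative, not the values the commit settles on:

  #include <limits.h>

  #define KVM_MEM_SLOTS_NUM     SHRT_MAX
  #define KVM_PRIVATE_MEM_SLOTS 3  /* illustrative */
  #define KVM_USER_MEM_SLOTS    (KVM_MEM_SLOTS_NUM - KVM_PRIVATE_MEM_SLOTS)

  struct kvm_memslots_sketch {
          /* Slot ids index this array, so its 'short' element type
           * bounds the id space at SHRT_MAX. */
          short id_to_index[KVM_MEM_SLOTS_NUM];
  };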
2021-02-09  selftests: kvm: Raise the default timeout to 120 seconds  (Vitaly Kuznetsov)
With the updated maximum number of user memslots (32) set_memory_region_test sometimes takes longer than the default 45 seconds to finish. Raise the value to an arbitrary 120 seconds. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20210127175731.2020089-6-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-02-09  KVM: x86: move kvm_inject_gp up from kvm_set_dr to callers  (Paolo Bonzini)
Push the injection of #GP up to the callers, so that they can just use kvm_complete_insn_gp. __kvm_set_dr is pretty much what the callers can use together with kvm_complete_insn_gp, so rename it to kvm_set_dr and drop the old kvm_set_dr wrapper. This also allows nested VMX code, which really wanted to use __kvm_set_dr, to use the right function. While at it, remove the kvm_require_dr() check from the SVM interception. The APM states:

  All normal exception checks take precedence over the SVM intercepts.

which includes the CR4.DE=1 #UD. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
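A sketch of the resulting caller-side pattern (the handler name is illustrative): kvm_set_dr() reports the fault, and the interception handler converts that into a #GP via kvm_complete_insn_gp():

  static int dr_write_interception_sketch(struct kvm_vcpu *vcpu, int dr,
                                          unsigned long val)
  {
          int err = kvm_set_dr(vcpu, dr, val);

          /* Injects #GP if err is nonzero, otherwise completes the
           * emulated instruction. */
          return kvm_complete_insn_gp(vcpu, err);
  }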
2021-02-09  KVM: x86: reading DR cannot fail  (Paolo Bonzini)
kvm_get_dr and emulator_get_dr expect an in-range value for the register number, so they cannot fail. Change the return type to void. Suggested-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-02-09  KVM: SVM: Remove an unnecessary forward declaration  (Sean Christopherson)
Drop a defunct forward declaration of svm_complete_interrupts(). No functional change intended. Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210205005750.3841462-3-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-02-09  KVM: SVM: Move AVIC vCPU kicking snippet to helper function  (Sean Christopherson)
Add a helper function to handle kicking non-running vCPUs when sending virtual IPIs. A future patch will change SVM's interception functions to take @vcpu instead of @svm, at which point declaring and modifying 'vcpu' in a case statement is confusing, and potentially dangerous. No functional change intended. Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210205005750.3841462-2-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-02-09  KVM: x86: Restore all 64 bits of DR6 and DR7 during RSM on x86-64  (Sean Christopherson)
Restore the full 64-bit values of DR6 and DR7 when emulating RSM on x86-64, as defined by both Intel's SDM and AMD's APM. Note, bits 63:32 of DR6 and DR7 are reserved, so this is a glorified nop unless the SMM handler is poking into SMRAM, which it most definitely shouldn't be doing since both Intel and AMD list the DR6 and DR7 fields as read-only. Fixes: 660a5d517aaa ("KVM: x86: save/load state on SMM switch") Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210205012458.3872687-3-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-02-09  KVM: x86: Remove misleading DR6/DR7 adjustments from RSM emulation  (Sean Christopherson)
Drop the DR6/7 volatile+fixed bits adjustments in RSM emulation, which are redundant and misleading. The necessary adjustments are made by kvm_set_dr(), which properly sets the fixed bits that are conditional on the vCPU model. Note, KVM incorrectly reads only bits 31:0 of the DR6/7 fields when emulating RSM on x86-64. On the plus side for this change, that bug makes removing "& DRx_VOLATILE" a nop. No functional change intended. Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210205012458.3872687-2-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-02-09  KVM: x86/xen: Use hva_t for holding hypercall page address  (Sean Christopherson)
Use hva_t, a.k.a. unsigned long, for the local variable that holds the hypercall page address. On 32-bit KVM, gcc complains about using a u64 due to the implicit cast from a 64-bit value to a 32-bit pointer:

  arch/x86/kvm/xen.c: In function ‘kvm_xen_write_hypercall_page’:
  arch/x86/kvm/xen.c:300:22: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
    300 |  page = memdup_user((u8 __user *)blob_addr, PAGE_SIZE);

Cc: Joao Martins <joao.m.martins@oracle.com> Cc: David Woodhouse <dwmw@amazon.co.uk> Fixes: 23200b7a30de ("KVM: x86/xen: intercept xen hypercalls if enabled") Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210208201502.1239867-1-seanjc@google.com> Acked-by: David Woodhouse <dwmw@amazon.co.uk> Reviewed-by: Joao Martins <joao.m.martins@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
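A sketch of the fix's shape, with a hypothetical function and variable name; funneling the 64-bit value through hva_t (an unsigned long) keeps the cast pointer-sized on both 32-bit and 64-bit builds:

  static u8 *copy_hypercall_page_sketch(u64 wide_addr)
  {
          hva_t blob_addr = wide_addr;  /* hva_t == unsigned long */

          return memdup_user((u8 __user *)blob_addr, PAGE_SIZE);
  }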
2021-02-09  KVM: x86/xen: Remove extra unlock in kvm_xen_hvm_set_attr()  (David Woodhouse)
This accidentally ended up locking and then immediately unlocking kvm->lock at the beginning of the function. Fix it. Fixes: a76b9641ad1c ("KVM: x86/xen: add KVM_XEN_HVM_SET_ATTR/KVM_XEN_HVM_GET_ATTR") Signed-off-by: David Woodhouse <dwmw@amazon.co.uk> Message-Id: <20210208232326.1830370-2-dwmw2@infradead.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
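A sketch of the bug's shape with illustrative names; the fix is simply to delete the stray pair, since each attribute case takes kvm->lock around its own critical section:

  static int set_attr_sketch(struct kvm *kvm, u32 attr_type)
  {
          int r = -ENOENT;

          /* BUG (what the fix removes): taken and dropped immediately,
           * protecting nothing:
           *      mutex_lock(&kvm->lock);
           *      mutex_unlock(&kvm->lock);
           */

          switch (attr_type) {
          case 0:
                  mutex_lock(&kvm->lock);  /* the real critical section */
                  r = 0;                   /* ... update state ... */
                  mutex_unlock(&kvm->lock);
                  break;
          }
          return r;
  }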
2021-02-09  MIPS: crash_dump.c: Simplify copy_oldmem_page()  (Youling Tang)
Replace kmap_atomic_pfn() with kmap_local_pfn(), which is preemptible and can take page faults. Remove the indirection of the dump page and the related cruft which is no longer required. Remove unused or redundant header files. Signed-off-by: Youling Tang <tangyouling@loongson.cn> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
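A plausible shape of the simplified function after this change (a sketch, not the exact MIPS diff); because kmap_local_pfn() can take page faults, no intermediate dump page is needed:

  #include <linux/crash_dump.h>
  #include <linux/highmem.h>
  #include <linux/uaccess.h>

  ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
                           unsigned long offset, int userbuf)
  {
          void *vaddr;

          if (!csize)
                  return 0;

          vaddr = kmap_local_pfn(pfn);

          if (!userbuf) {
                  memcpy(buf, vaddr + offset, csize);
          } else if (copy_to_user((char __user *)buf, vaddr + offset,
                                  csize)) {
                  kunmap_local(vaddr);
                  return -EFAULT;
          }

          kunmap_local(vaddr);
          return csize;
  }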