path: root/arch/x86/kvm
Age | Commit message | Author
2017-10-12 | KVM: nSVM: fix SMI injection in guest mode | Ladi Prosek
Entering SMM while running in guest mode wasn't working very well because several pieces of the vcpu state were left set up for nested operation. Some of the issues observed:

* L1 was getting unexpected VM exits (using L1 interception controls but running in SMM execution environment)
* MMU was confused (walk_mmu was still set to nested_mmu)
* INTERCEPT_SMI was not emulated for L1 (KVM never injected SVM_EXIT_SMI)

The Intel SDM actually prescribes the logical processor to "leave VMX operation" upon entering SMM in 34.14.1 Default Treatment of SMI Delivery. AMD doesn't seem to document this, but they provide fields in the SMM state-save area to stash the current state of SVM.

What we need to do is basically get out of guest mode for the duration of SMM. All of this is completely transparent to L1, i.e. L1 is not given control and no L1-observable state changes.

To avoid code duplication this commit takes advantage of the existing nested vmexit and run functionality, perhaps at the cost of efficiency. To get out of guest mode, nested_svm_vmexit is called, unchanged. Re-entering is performed using enter_svm_guest_mode.

This commit fixes running Windows Server 2016 with Hyper-V enabled in a VM with OVMF firmware (OVMF_CODE-need-smm.fd).

Signed-off-by: Ladi Prosek <lprosek@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
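A minimal sketch of the SVM side of this approach; the state-save offsets and the put_smstate() usage are illustrative rather than the exact upstream code:

    /* On SMM entry with the vCPU in guest mode: note that fact in the
     * SMM state-save area, then take a synthetic nested vmexit so the
     * SMM handler runs with L1's state. */
    static int svm_pre_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
    {
            struct vcpu_svm *svm = to_svm(vcpu);

            if (is_guest_mode(vcpu)) {
                    /* offsets below are illustrative */
                    put_smstate(u64, smstate, 0x7ed8, 1);   /* "was in guest mode" */
                    put_smstate(u64, smstate, 0x7ee0, svm->nested.vmcb);
                    nested_svm_vmexit(svm);                 /* leave guest mode */
            }
            return 0;
    }

On RSM the saved fields are read back and enter_svm_guest_mode() re-enters L2, so the round trip stays invisible to L1.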
2017-10-12 | KVM: nSVM: refactor nested_svm_vmrun | Ladi Prosek
Analogous to 858e25c06fb0 ("kvm: nVMX: Refactor nested_vmx_run()"), this commit splits nested_svm_vmrun into two parts. The newly introduced enter_svm_guest_mode modifies the vcpu state to transition from L1 to L2, while the code left in nested_svm_vmrun handles the VMRUN instruction. Signed-off-by: Ladi Prosek <lprosek@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-10-12 | KVM: nVMX: fix SMI injection in guest mode | Ladi Prosek
Entering SMM while running in guest mode wasn't working very well because several pieces of the vcpu state were left set up for nested operation. Some of the issues observed:

* L1 was getting unexpected VM exits (using L1 interception controls but running in SMM execution environment)
* the SMM handler couldn't change CR4 because vmx_set_cr4 applied validity checks predicated on nested.vmxon
* MMU was confused (walk_mmu was still set to nested_mmu)

The Intel SDM actually prescribes the logical processor to "leave VMX operation" upon entering SMM in 34.14.1 Default Treatment of SMI Delivery. What we need to do is basically get out of guest mode and set nested.vmxon to false for the duration of SMM. All of this is completely transparent to L1, i.e. L1 is not given control and no L1-observable state changes.

To avoid code duplication this commit takes advantage of the existing nested vmexit and run functionality, perhaps at the cost of efficiency. To get out of guest mode, nested_vmx_vmexit with exit_reason == -1 is called, a trick already used in vmx_leave_nested. Re-entering is cleaner, using enter_vmx_non_root_mode.

This commit fixes running Windows Server 2016 with Hyper-V enabled in a VM with OVMF firmware (OVMF_CODE-need-smm.fd).

Signed-off-by: Ladi Prosek <lprosek@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
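A sketch of the symmetric VMX side, using the exit_reason == -1 trick mentioned above (the nested.smm.* field names are assumptions for illustration):

    static int vmx_pre_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
    {
            struct vcpu_vmx *vmx = to_vmx(vcpu);

            /* remember whether we were in guest mode, then leave it */
            vmx->nested.smm.guest_mode = is_guest_mode(vcpu);
            if (vmx->nested.smm.guest_mode)
                    nested_vmx_vmexit(vcpu, -1, 0, 0);

            /* the SMM handler must not appear to run in VMX operation */
            vmx->nested.smm.vmxon = vmx->nested.vmxon;
            vmx->nested.vmxon = false;
            return 0;
    }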
2017-10-12 | KVM: nVMX: set IDTR and GDTR limits when loading L1 host state | Ladi Prosek
Intel SDM 27.5.2 Loading Host Segment and Descriptor-Table Registers: "The GDTR and IDTR limits are each set to FFFFH." Signed-off-by: Ladi Prosek <lprosek@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
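The change itself is tiny; a sketch of the two writes in load_vmcs12_host_state(), assuming the standard VMCS field accessors:

    /* SDM 27.5.2: host GDTR/IDTR limits are set to FFFFH on VM exit */
    vmcs_write32(GUEST_GDTR_LIMIT, 0xFFFF);
    vmcs_write32(GUEST_IDTR_LIMIT, 0xFFFF);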
2017-10-12 | KVM: x86: introduce ISA specific smi_allowed callback | Ladi Prosek
Similar to NMI, there may be ISA specific reasons why an SMI cannot be injected into the guest. This commit adds a new smi_allowed callback to be implemented in following commits. Signed-off-by: Ladi Prosek <lprosek@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-10-12 | KVM: x86: introduce ISA specific SMM entry/exit callbacks | Ladi Prosek
Entering and exiting SMM may require ISA specific handling under certain circumstances. This commit adds two new callbacks with empty implementations. Actual functionality will be added in following commits.

* pre_enter_smm() is to be called when injecting an SMM, before any SMM related vcpu state has been changed
* pre_leave_smm() is to be called when emulating the RSM instruction, when the vcpu is in real mode and before any SMM related vcpu state has been restored

Signed-off-by: Ladi Prosek <lprosek@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
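Taken together with smi_allowed from the previous commit, the kvm_x86_ops extension looks roughly like this (a sketch; the exact signatures are assumptions based on the descriptions above):

    struct kvm_x86_ops {
            /* ... existing callbacks elided ... */
            int (*smi_allowed)(struct kvm_vcpu *vcpu);
            int (*pre_enter_smm)(struct kvm_vcpu *vcpu, char *smstate);
            int (*pre_leave_smm)(struct kvm_vcpu *vcpu, u64 smbase);
    };

Generic x86 code would call pre_enter_smm() before building the SMM state-save area and pre_leave_smm() from the RSM emulation path, giving the vendor modules a hook at exactly the points described above.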
2017-10-12 | KVM: SVM: limit kvm_handle_page_fault to #PF handling | Paolo Bonzini
It has always annoyed me a bit how SVM_EXIT_NPF is handled by pf_interception. This is also the only reason behind the under-documented need_unprotect argument to kvm_handle_page_fault. Let NPF go straight to kvm_mmu_page_fault, just like VMX does in handle_ept_violation and handle_ept_misconfig. Reviewed-by: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
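A sketch of the resulting SVM handler, mirroring what VMX does for EPT violations (the exit_info field usage follows the SVM convention; treat the details as illustrative):

    static int npf_interception(struct vcpu_svm *svm)
    {
            u64 fault_address = svm->vmcb->control.exit_info_2;
            u64 error_code = svm->vmcb->control.exit_info_1;

            /* go straight to the MMU; no #PF-specific pre-processing */
            return kvm_mmu_page_fault(&svm->vcpu, fault_address, error_code,
                                      NULL, 0);
    }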
2017-10-12 | KVM: SVM: unconditionally wake up VCPU on IOMMU interrupt | Paolo Bonzini
Checking the mode is unnecessary, and is done without a memory barrier separating the LAPIC write from the vcpu->mode read; in addition, kvm_vcpu_wake_up is already doing a check for waiters on the wait queue that has the same effect. In practice it's safe because spin_lock has full-barrier semantics on x86, but don't be too clever. Reviewed-by: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
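An illustrative before/after of the wakeup path (the surrounding handler is elided):

    /* before: racy without a barrier between the LAPIC write and this read */
    if (READ_ONCE(vcpu->mode) == OUTSIDE_GUEST_MODE)
            kvm_vcpu_wake_up(vcpu);

    /* after: unconditional; kvm_vcpu_wake_up() already checks for waiters */
    kvm_vcpu_wake_up(vcpu);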
2017-10-12 | arch/x86: remove redundant null checks before kmem_cache_destroy | Tim Hansen
Remove redundant null checks before calling kmem_cache_destroy. Found with make coccicheck M=arch/x86/kvm on linux-next tag next-20170929. Signed-off-by: Tim Hansen <devtimhansen@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
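The pattern being removed, shown with a hypothetical cache name:

    /* before: redundant guard */
    if (example_cache)
            kmem_cache_destroy(example_cache);

    /* after: kmem_cache_destroy() is already a no-op on NULL */
    kmem_cache_destroy(example_cache);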
2017-10-12 | KVM: VMX: Don't expose unrestricted_guest as enabled if ept is disabled | Wanpeng Li
The SDM says: "If either the 'unrestricted guest' VM-execution control or the 'mode-based execute control for EPT' VM-execution control is 1, the 'enable EPT' VM-execution control must also be 1." However, we can still observe unrestricted_guest as Y after loading kvm-intel.ko with ept=N. The module parameter and the exec control fields are only amended once a guest is later started, because only then does vmx_compute_secondary_exec_control() run. This patch fixes it by amending the module parameter immediately during vmcs data setup.

Reviewed-by: Jim Mattson <jmattson@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
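A sketch of the eager clamping in hardware_setup(), where the VMCS capabilities are first probed (exact placement is illustrative):

    if (!cpu_has_vmx_ept()) {
            enable_ept = 0;
            /* SDM: "unrestricted guest" requires "enable EPT" */
            enable_unrestricted_guest = 0;
    } else if (!cpu_has_vmx_unrestricted_guest()) {
            enable_unrestricted_guest = 0;
    }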
2017-10-12 | KVM: X86: Processor States following Reset or INIT | Wanpeng Li
- XCR0 is reset to 1 by RESET but not INIT
- XSS is zeroed by both RESET and INIT
- BNDCFGU, BND0-BND3, BNDCFGS, BNDSTATUS are zeroed by both RESET and INIT

This patch implements these semantics according to the SDM.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Jim Mattson <jmattson@google.com>
Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
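A sketch of the corresponding kvm_vcpu_reset() logic; the BND-clearing helper is assumed for illustration:

    /* in kvm_vcpu_reset(); init_event is true for INIT, false for RESET */
    vcpu->arch.ia32_xss = 0;                     /* zeroed by RESET and INIT */
    kvm_reset_bnd_state(vcpu);                   /* assumed helper: clears BND0-3,
                                                  * BNDCFGU, BNDCFGS, BNDSTATUS */
    if (!init_event)
            vcpu->arch.xcr0 = XFEATURE_MASK_FP;  /* XCR0 = 1 on RESET only */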
2017-10-12 | KVM: x86: thoroughly disarm LAPIC timer around TSC deadline switch | Radim Krčmář
Our routines look at tscdeadline and period when deciding the state of a timer. The timer is disarmed when switching between TSC deadline and other modes, so we should reset everything to the disarmed state.

Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Reviewed-by: Wanpeng Li <wanpeng.li@hotmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-10-12 | KVM: x86: really disarm lapic timer when clearing TMICT | Radim Krčmář
The preemption timer only looks at tscdeadline and could inject an already disarmed timer.

Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Reviewed-by: Wanpeng Li <wanpeng.li@hotmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-10-12 | KVM: x86: handle 0 write to TSC_DEADLINE MSR | Radim Krčmář
A write of 0 should disable the timer, but start_hv_timer would recognize it as an expired timer instead.

Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Reviewed-by: Wanpeng Li <wanpeng.li@hotmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
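An illustrative guard in the MSR-write path; the idea is simply that a zero deadline leaves the timer disarmed rather than being treated as "deadline already passed" (exact placement and helpers are assumptions):

    hrtimer_cancel(&apic->lapic_timer.timer);
    apic->lapic_timer.tscdeadline = data;
    if (!data)
            return;         /* 0 disarms the timer; don't treat it as expired */
    start_apic_timer(apic);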
2017-10-12 | kvm, mm: account kvm related kmem slabs to kmemcg | Shakeel Butt
The kvm slabs can consume a significant amount of system memory; indeed, in our production environment we have observed that a lot of machines spend a significant amount of memory on them that cannot be accounted for as system memory overhead. Also, allocations from these slabs can be triggered directly by user space applications which have access to kvm, so a buggy application can leak such memory. These caches should therefore be accounted to kmemcg.

Signed-off-by: Shakeel Butt <shakeelb@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
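The mechanics are the standard memcg accounting flags; a sketch with an illustrative cache name:

    /* slab allocations become kmemcg-accounted via SLAB_ACCOUNT ... */
    example_cache = kmem_cache_create("kvm_example",
                                      sizeof(struct kvm_example),
                                      0, SLAB_ACCOUNT, NULL);

    /* ... and page allocations via GFP_KERNEL_ACCOUNT */
    page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);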
2017-10-12 | KVM: VMX: rename RDSEED and RDRAND vmx ctrls to reflect exiting | David Hildenbrand
Let's just name these according to the SDM. This should make it clearer that they are used to enable exiting and not the feature itself.

Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2017-10-12 | KVM: x86: allow setting identity map addr with no vcpus only | David Hildenbrand
Changing it afterwards doesn't make too much sense and will only result in inconsistencies. Reviewed-by: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2017-10-12 | KVM: VMX: cleanup init_rmode_identity_map() | David Hildenbrand
No need for another enable_ept check. kvm->arch.ept_identity_map_addr only has to be initialized once. Having a dedicated alloc_identity_pagetable() is overkill, and dropping BUG_ONs is always nice.

Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2017-10-12 | KVM: nVMX: no need to set ept/vpid caps to 0 | David Hildenbrand
They are initially 0, so there is no need to reset them to 0.

Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2017-10-12 | KVM: nVMX: no need to set vcpu->cpu when switching vmcs | David Hildenbrand
vcpu->cpu is not cleared when doing a vmx_vcpu_put/load, so this can be dropped. Reviewed-by: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2017-10-12 | KVM: VMX: drop unnecessary function declarations | David Hildenbrand
Reviewed-by: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2017-10-12 | KVM: VMX: require INVEPT GLOBAL for EPT | David Hildenbrand
Without this, we won't be able to do any flushes, so let's just require it. It should be absent only in very strange configurations.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2017-10-12 | KVM: VMX: call ept_sync_global() with enable_ept only | David Hildenbrand
ept_* functions should only be called with enable_ept set.

Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2017-10-12 | KVM: VMX: drop enable_ept check from ept_sync_context() | David Hildenbrand
This function is only called with enable_ept. Reviewed-by: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2017-10-12 | KVM: x86: no need to initialize vcpu members to 0 | David Hildenbrand
vmx and svm use zalloc, so this is not necessary. Reviewed-by: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2017-10-12 | KVM: VMX: vmx_vcpu_setup() cannot fail | David Hildenbrand
Make it return void and drop the error handling code.

Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2017-10-12 | KVM: x86: drop BUG_ON(vcpu->kvm) | David Hildenbrand
And also get rid of that superfluous local variable "kvm". Reviewed-by: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2017-10-12 | KVM: x86: mmu: free_page can handle NULL | David Hildenbrand
Reviewed-by: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2017-10-12 | KVM: x86: mmu: returning void in a void function is strange | David Hildenbrand
Let's just drop the return. Reviewed-by: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2017-10-12 | KVM: LAPIC: Apply change to TDCR right away to the timer | Wanpeng Li
The Intel SDM describes how the divide configuration register is used: "The APIC timer frequency will be the processor's bus clock or core crystal clock frequency divided by the value specified in the divide configuration register."

Observation on bare metal showed that when the TDCR is changed, the TMCCT does not change or make a big jump in value, but the rate at which it counts down changes. This patch updates the APIC timer emulation so that a change to the divide configuration is reflected in the value of the counter and in when the next interrupt is triggered.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
[Fixed some whitespace and added a check for negative delta and running timer. - Radim]
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
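A sketch of the rescaling on a TDCR write (helper names are assumptions): the remaining fraction of the period is preserved and only the countdown rate changes, so TMCCT stays continuous:

    u32 old_divisor = apic->divide_count;

    update_divide_count(apic);          /* recompute from the new TDCR */
    if (apic->lapic_timer.period && apic->divide_count != old_divisor) {
            /* scale the armed period by new/old divisor so the counter
             * neither jumps nor restarts, only ticks at the new rate */
            apic->lapic_timer.period = apic->lapic_timer.period /
                                       old_divisor * apic->divide_count;
            restart_apic_timer(apic);
    }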
2017-10-12 | KVM: LAPIC: Keep timer running when switching between one-shot and periodic mode | Wanpeng Li
If we take the TSC-deadline mode timer out of the picture, the Intel SDM does not say that the timer is disarmed when the timer mode is changed, either from one-shot to periodic or vice versa. After this patch, the timer is no longer disarmed on a change of mode, so the counter (TMCCT) keeps counting down.

So what does a write to LVTT change? On bare metal, the change of mode is probably taken into account only when the counter reaches 0. When this happens, LVTT is used to figure out whether the counter should restart counting down from TMICT (periodic mode) or stop counting (one-shot mode).

This patch is based on observation of the behavior of the APIC timer on bare metal, as well as a check that it does not go against the description in the Intel SDM.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
[Fixed rate limiting of periodic timer.]
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2017-10-12 | KVM: LAPIC: Introduce limit_periodic_timer_frequency | Wanpeng Li
Extract the logic that limits the lapic periodic timer frequency into a new function; this function will be used by later patches.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2017-10-12 | KVM: LAPIC: Fix lapic timer mode transition | Wanpeng Li
SDM 10.5.4.1 TSC-Deadline Mode mentions that "Transitioning between TSC-Deadline mode and other timer modes also disarms the timer". So the APIC Timer Initial Count Register for one-shot/periodic mode should be reset. This patch does that.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
[Removed unnecessary definition of APIC_LVT_TIMER_MASK.]
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2017-10-12 | KVM: VMX: Don't expose PLE enable if there is no hardware support | Wanpeng Li
KVM doesn't expose the PLE capability to the L1 hypervisor; however, ple_window still shows the default value on the L1 hypervisor. This patch fixes it by clearing all the PLE related module parameters if there is no PLE capability.

Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2017-10-12 | KVM: nVMX: fix guest CR4 loading when emulating L2 to L1 exit | Haozhong Zhang
When KVM emulates an exit from L2 to L1, it loads L1 CR4 into the guest CR4. Before this CR4 loading, the guest CR4 refers to L2 CR4. Because these two CR4's are in different levels of guest, we should use vmx_set_cr4() rather than kvm_set_cr4() here. The latter, which is used to handle guest writes to its CR4, checks the guest change to CR4 and may fail if the change is invalid.

The failure may cause trouble. Consider that we start an L1 guest with a non-zero L1 PCID in use (i.e. L1 CR4.PCIDE == 1 && L1 CR3.PCID != 0) and an L2 guest with L2 PCID disabled (i.e. L2 CR4.PCIDE == 0); then the following events may happen:

1. If kvm_set_cr4() is used in load_vmcs12_host_state() to load L1 CR4 into guest CR4 (in VMCS01) for the L2 to L1 exit, it will fail because of the PCID check. As a result, the guest CR4 recorded in L0 KVM (i.e. vcpu->arch.cr4) is left set to the value of L2 CR4.

2. Later, if L1 attempts to change its CR4, e.g., clearing the VMXE bit, kvm_set_cr4() in L0 KVM will think L1 also wants to enable PCID, because the wrong L2 CR4 is used by L0 KVM as L1 CR4. As L1 CR3.PCID != 0, L0 KVM will inject a GP to the L1 guest.

Fixes: 4704d0befb072 ("KVM: nVMX: Exiting from L2 to L1")
Cc: qemu-stable@nongnu.org
Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
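Per the description, the fix boils down to swapping one call in load_vmcs12_host_state(); a sketch of the substitution:

    /* before: guest-write semantics, may reject L1's CR4 */
    kvm_set_cr4(vcpu, vmcs12->host_cr4);

    /* after: architectural load of host CR4 on VM exit */
    vmx_set_cr4(vcpu, vmcs12->host_cr4);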
2017-10-10 | KVM: MMU: always terminate page walks at level 1 | Ladi Prosek
is_last_gpte() is not equivalent to the pseudo-code given in commit 6bb69c9b69c31 ("KVM: MMU: simplify last_pte_bitmap") because an incorrect value of last_nonleaf_level may override the result even if level == 1. It is critical for is_last_gpte() to return true on level == 1 to terminate page walks. Otherwise memory corruption may occur as level is used as an index to various data structures throughout the page walking code. Even though the actual bug would be wherever the MMU is initialized (as in the previous patch), be defensive and ensure here that is_last_gpte() returns the correct value. This patch is also enough to fix CVE-2017-12188. Fixes: 6bb69c9b69c315200ddc2bc79aee14c0184cf5b2 Cc: stable@vger.kernel.org Cc: Andy Honig <ahonig@google.com> Signed-off-by: Ladi Prosek <lprosek@redhat.com> [Panic if walk_addr_generic gets an incorrect level; this is a serious bug and it's not worth a WARN_ON where the recovery path might hide further exploitable issues; suggested by Andrew Honig. - Paolo] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
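A simplified sketch of the invariant being enforced; the kernel's actual implementation is a branchless bit trick, and the large-page condition shown is an assumption for illustration:

    static bool is_last_gpte(struct kvm_mmu *mmu, unsigned level, u64 gpte)
    {
            /* level 1 entries are always leaves: terminate the walk here
             * regardless of what last_nonleaf_level says */
            if (level == PT_PAGE_TABLE_LEVEL)
                    return true;

            /* above level 1, a gpte is a leaf only if it is a large page
             * at a level that permits large pages */
            return level < mmu->last_nonleaf_level &&
                   (gpte & PT_PAGE_SIZE_MASK);
    }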
2017-10-10 | KVM: nVMX: update last_nonleaf_level when initializing nested EPT | Ladi Prosek
The function updates context->root_level but didn't call update_last_nonleaf_level so the previous and potentially wrong value was used for page walks. For example, a zero value of last_nonleaf_level would allow a potential out-of-bounds access in arch/x86/mmu/paging_tmpl.h's walk_addr_generic function (CVE-2017-12188). Fixes: 155a97a3d7c78b46cef6f1a973c831bc5a4f82bb Signed-off-by: Ladi Prosek <lprosek@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
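The fix is to refresh the cached value right where nested EPT sets up root_level; a sketch of the relevant spot in kvm_init_shadow_ept_mmu():

    context->root_level = context->shadow_root_level;
    update_last_nonleaf_level(vcpu, context);   /* previously missing */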
2017-10-05 | KVM: add X86_LOCAL_APIC dependency | Arnd Bergmann
The rework of the posted interrupt handling broke building without support for the local APIC:

    ERROR: "boot_cpu_physical_apicid" [arch/x86/kvm/kvm-intel.ko] undefined!

That configuration is probably not particularly useful anyway, so we can avoid the randconfig failures by adding a Kconfig dependency.

Fixes: 8b306e2f3c41 ("KVM: VMX: avoid double list add with VT-d posted interrupts")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2017-10-05 | x86/kvm: Move kvm_fastop_exception to .fixup section | Josh Poimboeuf
When compiling the kernel with the '-frecord-gcc-switches' flag, objtool complains:

    arch/x86/kvm/emulate.o: warning: objtool: .GCC.command.line+0x0: special: can't find new instruction

and the kernel also fails to link. The problem is that the 'kvm_fastop_exception' code gets placed into the throwaway '.GCC.command.line' section instead of '.text'. Exception fixup code is conventionally placed in the '.fixup' section, so put it there where it belongs.

Reported-and-tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
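A sketch of the resulting placement as file-level inline asm in emulate.c; the xor/ret body of the stub is an assumption about the pre-existing fixup code:

    asm(".pushsection .fixup, \"ax\"\n"
        ".global kvm_fastop_exception\n"
        "kvm_fastop_exception: xor %esi, %esi; ret\n"
        ".popsection");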
2017-10-04 | kvm/x86: Avoid async PF preempting the kernel incorrectly | Boqun Feng
Currently, in a PREEMPT_COUNT=n kernel, kvm_async_pf_task_wait() could call schedule() to reschedule in some cases. This could result in accidentally ending the current RCU read-side critical section early, causing random memory corruption in the guest, or otherwise preempting the currently running task between preempt_disable and preempt_enable.

This is hard to handle well because, with PREEMPT_COUNT=n, we don't know whether an async PF was delivered in a preemptible section or in an RCU read-side critical section, since preempt_disable()/enable() and rcu_read_lock/unlock() are both no-ops in that case.

To cure this, we treat any async PF interrupting a kernel context as one that cannot be preempted, preventing kvm_async_pf_task_wait() from choosing the schedule() path in that case. To do so, a second parameter for kvm_async_pf_task_wait() is introduced, so that we know whether it's called from a context interrupting the kernel, and the parameter is set properly in all the callsites.

Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
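A sketch of a call site in the async #PF handler; per the description above, the new argument says whether the fault interrupted the kernel (names beyond that are assumptions):

    case KVM_PV_REASON_PAGE_NOT_PRESENT:
            /* if we interrupted the kernel, the wait path must not
             * schedule(): with PREEMPT_COUNT=n we may be inside an
             * undetectable RCU read-side critical section */
            kvm_async_pf_task_wait((u32)read_cr2(), !user_mode(regs));
            break;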
2017-09-29 | Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm | Linus Torvalds
Pull kvm fixes from Paolo Bonzini:
 "Mixed bugfixes. Perhaps the most interesting one is a latent bug that was finally triggered by PCID support"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  kvm/x86: Handle async PF in RCU read-side critical sections
  KVM: nVMX: Fix nested #PF intends to break L1's vmlauch/vmresume
  KVM: VMX: use cmpxchg64
  KVM: VMX: simplify and fix vmx_vcpu_pi_load
  KVM: VMX: avoid double list add with VT-d posted interrupts
  KVM: VMX: extract __pi_post_block
  KVM: PPC: Book3S HV: Check for updated HDSISR on P9 HDSI exception
  KVM: nVMX: fix HOST_CR3/HOST_CR4 cache
2017-09-29 | KVM: nVMX: Fix nested #PF intended to break L1's vmlaunch/vmresume | Wanpeng Li
------------[ cut here ]------------
WARNING: CPU: 4 PID: 5280 at /home/kernel/linux/arch/x86/kvm//vmx.c:11394 nested_vmx_vmexit+0xc2b/0xd70 [kvm_intel]
CPU: 4 PID: 5280 Comm: qemu-system-x86 Tainted: G W OE 4.13.0+ #17
RIP: 0010:nested_vmx_vmexit+0xc2b/0xd70 [kvm_intel]
Call Trace:
 ? emulator_read_emulated+0x15/0x20 [kvm]
 ? segmented_read+0xae/0xf0 [kvm]
 vmx_inject_page_fault_nested+0x60/0x70 [kvm_intel]
 ? vmx_inject_page_fault_nested+0x60/0x70 [kvm_intel]
 x86_emulate_instruction+0x733/0x810 [kvm]
 vmx_handle_exit+0x2f4/0xda0 [kvm_intel]
 ? kvm_arch_vcpu_ioctl_run+0xd2f/0x1c60 [kvm]
 kvm_arch_vcpu_ioctl_run+0xdab/0x1c60 [kvm]
 ? kvm_arch_vcpu_load+0x62/0x230 [kvm]
 kvm_vcpu_ioctl+0x340/0x700 [kvm]
 ? kvm_vcpu_ioctl+0x340/0x700 [kvm]
 ? __fget+0xfc/0x210
 do_vfs_ioctl+0xa4/0x6a0
 ? __fget+0x11d/0x210
 SyS_ioctl+0x79/0x90
 entry_SYSCALL_64_fastpath+0x23/0xc2

A nested #PF is triggered while L0 is emulating an instruction for L2; however, the code doesn't consider that it must not break L1's vmlaunch/vmresume. This patch fixes it by queuing the #PF exception instead, requesting an immediate VM exit from L2 and keeping the exception for L1 pending for a subsequent nested VM exit.

This should actually work all the time, making vmx_inject_page_fault_nested totally unnecessary. However, that's not working yet, so this patch can work around the issue in the meanwhile.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-09-28 | KVM: VMX: use cmpxchg64 | Paolo Bonzini
This fixes a compilation failure on 32-bit systems. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-09-27 | KVM: VMX: simplify and fix vmx_vcpu_pi_load | Paolo Bonzini
The simplify part: do not touch pi_desc.nv, we can set it when the VCPU is first created. Likewise, pi_desc.sn is only handled by vmx_vcpu_pi_load, do not touch it in __pi_post_block. The fix part: do not check kvm_arch_has_assigned_device, instead check the SN bit to figure out whether vmx_vcpu_pi_put ran before. This matches what the previous patch did in pi_post_block. Cc: Huangweidong <weidong.huang@huawei.com> Cc: Gonglei <arei.gonglei@huawei.com> Cc: wangxin <wangxinxin.wang@huawei.com> Cc: Radim Krčmář <rkrcmar@redhat.com> Tested-by: Longpeng (Mike) <longpeng2@huawei.com> Cc: stable@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-09-27 | KVM: VMX: avoid double list add with VT-d posted interrupts | Paolo Bonzini
In some cases, for example involving hot-unplug of assigned devices, pi_post_block can forget to remove the vCPU from the blocked_vcpu_list. When this happens, the next call to pi_pre_block corrupts the list. Fix this in two ways. First, check vcpu->pre_pcpu in pi_pre_block and WARN instead of adding the element twice in the list. Second, always do the list removal in pi_post_block if vcpu->pre_pcpu is set (not -1). The new code keeps interrupts disabled for the whole duration of pi_pre_block/pi_post_block. This is not strictly necessary, but easier to follow. For the same reason, PI.ON is checked only after the cmpxchg, and to handle it we just call the post-block code. This removes duplication of the list removal code. Cc: Huangweidong <weidong.huang@huawei.com> Cc: Gonglei <arei.gonglei@huawei.com> Cc: wangxin <wangxinxin.wang@huawei.com> Cc: Radim Krčmář <rkrcmar@redhat.com> Tested-by: Longpeng (Mike) <longpeng2@huawei.com> Cc: stable@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
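A sketch of the defensive add in pi_pre_block; the per-cpu list and lock names follow the existing blocked-vCPU structures, but treat the details as illustrative:

    if (!WARN_ON_ONCE(vcpu->pre_pcpu != -1)) {
            vcpu->pre_pcpu = vcpu->cpu;
            spin_lock(&per_cpu(blocked_vcpu_on_cpu_lock, vcpu->pre_pcpu));
            list_add_tail(&vcpu->blocked_vcpu_list,
                          &per_cpu(blocked_vcpu_on_cpu, vcpu->pre_pcpu));
            spin_unlock(&per_cpu(blocked_vcpu_on_cpu_lock, vcpu->pre_pcpu));
    }
    /* pi_post_block then removes the entry whenever pre_pcpu != -1,
     * so the vCPU can never be left (or added twice) on the list */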
2017-09-27 | KVM: VMX: extract __pi_post_block | Paolo Bonzini
Simple code movement patch, preparing for the next one. Cc: Huangweidong <weidong.huang@huawei.com> Cc: Gonglei <arei.gonglei@huawei.com> Cc: wangxin <wangxinxin.wang@huawei.com> Cc: Radim Krčmář <rkrcmar@redhat.com> Tested-by: Longpeng (Mike) <longpeng2@huawei.com> Cc: stable@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-09-26 | Merge branch 'WIP.x86/fpu' into x86/fpu, because it's ready | Ingo Molnar
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-09-26 | x86/fpu: Rename fpu__activate_curr() to fpu__initialize() | Ingo Molnar
Rename this function to better express that it's all about initializing the FPU state of a task which goes hand in hand with the fpu::initialized field. Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Eric Biggers <ebiggers3@gmail.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rik van Riel <riel@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Yu-cheng Yu <yu-cheng.yu@intel.com> Link: http://lkml.kernel.org/r/20170923130016.21448-33-mingo@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-09-24 | Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds
Pull x86 fixes from Ingo Molnar:
 "Another round of CR3/PCID related fixes (I think this addresses all but one of the known problems with PCID support), an objtool fix plus a Clang fix that (finally) solves all Clang quirks to build a bootable x86 kernel as-is"

* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/asm: Fix inline asm call constraints for Clang
  objtool: Handle another GCC stack pointer adjustment bug
  x86/mm/32: Load a sane CR3 before cpu_init() on secondary CPUs
  x86/mm/32: Move setup_clear_cpu_cap(X86_FEATURE_PCID) earlier
  x86/mm/64: Stop using CR3.PCID == 0 in ASID-aware code
  x86/mm: Factor out CR3-building code
2017-09-23 | x86/asm: Fix inline asm call constraints for Clang | Josh Poimboeuf
For inline asm statements which have a CALL instruction, we list the stack pointer as a constraint to convince GCC to ensure the frame pointer is set up first:

    static inline void foo(void)
    {
            register void *__sp asm(_ASM_SP);
            asm("call bar" : "+r" (__sp));
    }

Unfortunately, that pattern causes Clang to corrupt the stack pointer. The fix is easy: convert the stack pointer register variable to a global variable.

It should be noted that the end result is different based on the GCC version. With GCC 6.4, this patch has exactly the same result as before:

            defconfig   defconfig-nofp  distro      distro-nofp
    before  9820389     9491555         8816046     8516940
    after   9820389     9491555         8816046     8516940

With GCC 7.2, however, GCC's behavior has changed. It now changes its behavior based on the conversion of the register variable to a global. That somehow convinces it to *always* set up the frame pointer before inserting *any* inline asm. (Therefore, listing the variable as an output constraint is a no-op and is no longer necessary.) It's a bit overkill, but the performance impact should be negligible. And in fact, there's a nice improvement with frame pointers disabled:

            defconfig   defconfig-nofp  distro      distro-nofp
    before  9796316     9468236         9076191     8790305
    after   9796957     9464267         9076381     8785949

So in summary, while listing the stack pointer as an output constraint is no longer necessary for newer versions of GCC, it's still needed for older versions.

Suggested-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Reported-by: Matthias Kaehlcke <mka@chromium.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Dmitriy Vyukov <dvyukov@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Miguel Bernal Marin <miguel.bernal.marin@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/3db862e970c432ae823cf515c52b54fec8270e0e.1505942196.git.jpoimboe@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
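A sketch of the conversion the commit describes, with the constraint wrapped in a reusable macro (the macro and variable names are assumptions for illustration):

    /* global register variable: safe for Clang, still pins the stack
     * pointer as an operand for GCC */
    register unsigned long current_stack_pointer asm(_ASM_SP);
    #define ASM_CALL_CONSTRAINT "+r" (current_stack_pointer)

    static inline void foo(void)
    {
            asm("call bar" : ASM_CALL_CONSTRAINT);
    }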