Age | Commit message | Author
2020-03-23tools/kvm_stat: switch to argparseStefan Raspl
optparse has been deprecated for a while, hence switch over to argparse (which also works with python2). As a consequence, the help output changes in some subtle ways, the most significant one being that the options are all listed explicitly instead of a universal '[options]' indicator. Also, some of the error messages are phrased slightly differently. While at it, squash a number of minor PEP8 issues. Signed-off-by: Stefan Raspl <raspl@linux.ibm.com> Message-Id: <20200306114250.57585-3-raspl@linux.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-23tools/kvm_stat: rework command line sequence and message textsStefan Raspl
Make sure command line arguments are sorted alphabetically everywhere, and adjust the existing texts for the interactive command 's' to be consistent with the long form --set-delay. Throw in some PEP8 fixes (all cosmetic) for good measure. Signed-off-by: Stefan Raspl <raspl@linux.ibm.com> Message-Id: <20200306114250.57585-2-raspl@linux.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-23KVM: s390: mark sie block as 512 byte alignedChristian Borntraeger
The sie block must be aligned to 512 bytes. Mark it as such. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: David Hildenbrand <david@redhat.com>
2020-03-23KVM: s390: Use fallthrough;Joe Perches
Convert the various uses of fallthrough comments to fallthrough; done via script. Link: https://lore.kernel.org/lkml/b56602fcf79f849e733e7b521bb0e17895d390fa.1582230379.git.joe@perches.com Signed-off-by: Joe Perches <joe@perches.com> Link: https://lore.kernel.org/r/d63c86429f3e5aa806aa3e185c97d213904924a5.1583896348.git.joe@perches.com [borntraeger@de.ibm.com: Fix link to tool and subject] Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
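As an illustration of the conversion, a minimal sketch of trading a fall-through comment for the fallthrough pseudo-keyword (defined in <linux/compiler_attributes.h>); the case labels and helpers here are made-up names, not the s390 sources:

    static void handle_intercept(struct kvm_vcpu *vcpu, int code)
    {
            switch (code) {
            case ICPT_REQUEST_A:
                    prepare_request(vcpu);  /* used to be followed by a "fall through" comment */
                    fallthrough;            /* now a real statement the compiler can verify */
            case ICPT_REQUEST_B:
                    handle_request(vcpu);
                    break;
            }
    }

Unlike the comment form, the explicit statement lets -Wimplicit-fallthrough flag any case that silently falls through without an annotation.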
2020-03-20irqchip/gic-v4.1: Map the ITS SGIR register pageMarc Zyngier
One of the new features of GICv4.1 is to allow virtual SGIs to be directly signaled to a VPE. For that, the ITS has grown a new 64kB page containing only a single register that is used to signal an SGI to a given VPE. Add a second mapping covering this new 64kB range, and take this opportunity to limit the original mapping to 64kB, which is enough to cover the span of the ITS registers. Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Zenghui Yu <yuzenghui@huawei.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Link: https://lore.kernel.org/r/20200304203330.4967-8-maz@kernel.org
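A rough sketch of what the two mappings might look like; the sgir_base field, the SZ_128K offset and the error handling are assumptions for illustration, not the actual driver code:

    /* main ITS register frame: 64kB is enough for the architectural registers */
    its->base = ioremap(res->start, SZ_64K);
    /* separate 64kB frame holding the single SGI doorbell register */
    its->sgir_base = ioremap(res->start + SZ_128K, SZ_64K);
    if (!its->base || !its->sgir_base)
            return -ENOMEM;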
2020-03-20irqchip/gic-v4.1: Advertise support v4.1 to KVMMarc Zyngier
Tell KVM that we support v4.1. Nothing uses this information so far. Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Zenghui Yu <yuzenghui@huawei.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Link: https://lore.kernel.org/r/20200304203330.4967-7-maz@kernel.org
2020-03-20irqchip/gic-v4.1: Ensure mutual exclusion between invalidations on the same RDMarc Zyngier
The GICv4.1 spec says that it is CONSTRAINED UNPREDICTABLE to write to any of the GICR_INV{LPI,ALL}R registers while GICR_SYNCR.Busy == 1. To deal with this, we must ensure that only a single invalidation can happen at a time for a given redistributor. Add a per-RD lock to that effect and take it around the invalidation and the subsequent GICR_SYNCR read. Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Zenghui Yu <yuzenghui@huawei.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Link: https://lore.kernel.org/r/20200304203330.4967-6-maz@kernel.org
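A simplified sketch of the locking described above; GICR_INVLPIR and GICR_SYNCR are the architectural registers from the text, while rdist, rd_lock and GICR_SYNCR_BUSY are illustrative names:

    unsigned long flags;

    raw_spin_lock_irqsave(&rdist->rd_lock, flags);

    /* issue the invalidation on this redistributor */
    writeq_relaxed(val, rdist->rd_base + GICR_INVLPIR);

    /* GICR_SYNCR.Busy must drop to 0 before anyone touches INV{LPI,ALL}R again */
    while (readl_relaxed(rdist->rd_base + GICR_SYNCR) & GICR_SYNCR_BUSY)
            cpu_relax();

    raw_spin_unlock_irqrestore(&rdist->rd_lock, flags);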
2020-03-20irqchip/gic-v4.1: Wait for completion of redistributor's INVALL operationZenghui Yu
In GICv4.1, we emulate a guest-issued INVALL command by a direct write to GICR_INVALLR. Before we finish the emulation and go back to the guest, let's make sure the physical invalidate operation has actually completed and no stale data is left in the redistributor. Per the specification, this can be achieved by polling the GICR_SYNCR.Busy bit until it reads zero. Signed-off-by: Zenghui Yu <yuzenghui@huawei.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Link: https://lore.kernel.org/r/20200302092145.899-1-yuzenghui@huawei.com Link: https://lore.kernel.org/r/20200304203330.4967-5-maz@kernel.org
2020-03-19irqchip/gic-v4.1: Ensure mutual exclusion between vPE affinity change and RD ↵Marc Zyngier
access Before GICv4.1, all operations would be serialized with the affinity changes by virtue of using the same ITS command queue. With v4.1, things change, as invalidations (and a number of other operations) are issued using the redistributor MMIO frame. We must thus make sure that these redistributor accesses cannot race against the affinity change, or we may end up talking to the wrong redistributor. To ensure this, we expand the irq_to_cpuid() helper to take a spinlock (a new per-VPE lock) when the LPI is mapped to a vLPI, on each operation that requires mutual exclusion. Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Zenghui Yu <yuzenghui@huawei.com> Link: https://lore.kernel.org/r/20200304203330.4967-4-maz@kernel.org
2020-03-19irqchip/gic-v4.1: Skip absent CPUs while iterating over redistributorsMarc Zyngier
In a system that is only sparsely populated with CPUs, we can end up with redistributor structures that are not initialized. Let's make sure we don't try to access those when iterating over them (in this case when checking we have an L2 VPE table). Fixes: 4e6437f12d6e ("irqchip/gic-v4.1: Ensure L2 vPE table is allocated at RD level") Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Zenghui Yu <yuzenghui@huawei.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Link: https://lore.kernel.org/r/20200304203330.4967-3-maz@kernel.org
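The shape of the fix, sketched with names approximating the driver's style; the table-check helper is a paraphrase, not the actual function:

    for_each_possible_cpu(cpu) {
            void __iomem *base = gic_data_rdist_cpu(cpu)->rd_base;

            /* sparse topology: this CPU was never brought up, its RD is uninitialized */
            if (!base)
                    continue;

            if (!rd_has_l2_vpe_table(base))   /* illustrative helper */
                    return false;
    }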
2020-03-19irqchip/gic-v3: Use SGIs without active state if offeredMarc Zyngier
To allow the direct injection of SGIs into a guest, the GICv4.1 architecture has to sacrifice the Active state so that SGIs look a lot like LPIs (they are injected by the same mechanism). In order not to break existing software, the architecture offers guest OSs the choice: SGIs with or without an active state. It is the hypervisor's duty to honor the guest's choice. For this, the architecture offers a discovery bit indicating whether the GIC supports GICv4.1 SGIs (GICD_TYPER2.nASSGIcap), and another bit indicating whether the guest wants Active-less SGIs or not (controlled by GICD_CTLR.nASSGIreq). A hypervisor not supporting GICv4.1 SGIs would leave nASSGIcap clear, and a guest not knowing about GICv4.1 SGIs (or definitely wanting an Active state) would leave nASSGIreq clear (both being thankfully backward compatible with older revisions of the GIC). Since Linux is perfectly happy without an active state on SGIs, inform the hypervisor that we'll use that if offered. Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Zenghui Yu <yuzenghui@huawei.com> Link: https://lore.kernel.org/r/20200304203330.4967-2-maz@kernel.org
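Conceptually, the guest side of the handshake looks like the sketch below; the register and bit names follow the text above, but treat the exact masks and sequencing as assumptions:

    u32 typer2 = readl_relaxed(dist_base + GICD_TYPER2);

    if (typer2 & GICD_TYPER2_nASSGIcap) {
            /* the GIC offers active-less SGIs: ask for them via GICD_CTLR */
            u32 ctlr = readl_relaxed(dist_base + GICD_CTLR);

            writel_relaxed(ctlr | GICD_CTLR_nASSGIreq, dist_base + GICD_CTLR);
    }
    /* otherwise keep the classic SGIs with an Active state */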
2020-03-19KVM: PPC: Kill kvmppc_ops::mmu_destroy() and kvmppc_mmu_destroy()Greg Kurz
These are only used by HV KVM and BookE, and in both cases they are nops. Signed-off-by: Greg Kurz <groug@kaod.org> Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2020-03-19KVM: PPC: Book3S PR: Move kvmppc_mmu_init() into PR KVMGreg Kurz
This is only relevant to PR KVM. Make it obvious by moving the function declaration to the Book3s header and rename it with a _pr suffix. Signed-off-by: Greg Kurz <groug@kaod.org> Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2020-03-19KVM: PPC: Book3S PR: Fix kernel crash with PR KVMGreg Kurz
With PR KVM, shutting down a VM causes the host kernel to crash: [ 314.219284] BUG: Unable to handle kernel data access on read at 0xc00800000176c638 [ 314.219299] Faulting instruction address: 0xc008000000d4ddb0 cpu 0x0: Vector: 300 (Data Access) at [c00000036da077a0] pc: c008000000d4ddb0: kvmppc_mmu_pte_flush_all+0x68/0xd0 [kvm_pr] lr: c008000000d4dd94: kvmppc_mmu_pte_flush_all+0x4c/0xd0 [kvm_pr] sp: c00000036da07a30 msr: 900000010280b033 dar: c00800000176c638 dsisr: 40000000 current = 0xc00000036d4c0000 paca = 0xc000000001a00000 irqmask: 0x03 irq_happened: 0x01 pid = 1992, comm = qemu-system-ppc Linux version 5.6.0-master-gku+ (greg@palmb) (gcc version 7.5.0 (Ubuntu 7.5.0-3ubuntu1~18.04)) #17 SMP Wed Mar 18 13:49:29 CET 2020 enter ? for help [c00000036da07ab0] c008000000d4fbe0 kvmppc_mmu_destroy_pr+0x28/0x60 [kvm_pr] [c00000036da07ae0] c0080000009eab8c kvmppc_mmu_destroy+0x34/0x50 [kvm] [c00000036da07b00] c0080000009e50c0 kvm_arch_vcpu_destroy+0x108/0x140 [kvm] [c00000036da07b30] c0080000009d1b50 kvm_vcpu_destroy+0x28/0x80 [kvm] [c00000036da07b60] c0080000009e4434 kvm_arch_destroy_vm+0xbc/0x190 [kvm] [c00000036da07ba0] c0080000009d9c2c kvm_put_kvm+0x1d4/0x3f0 [kvm] [c00000036da07c00] c0080000009da760 kvm_vm_release+0x38/0x60 [kvm] [c00000036da07c30] c000000000420be0 __fput+0xe0/0x310 [c00000036da07c90] c0000000001747a0 task_work_run+0x150/0x1c0 [c00000036da07cf0] c00000000014896c do_exit+0x44c/0xd00 [c00000036da07dc0] c0000000001492f4 do_group_exit+0x64/0xd0 [c00000036da07e00] c000000000149384 sys_exit_group+0x24/0x30 [c00000036da07e20] c00000000000b9d0 system_call+0x5c/0x68 This is caused by a use-after-free in kvmppc_mmu_pte_flush_all() which dereferences vcpu->arch.book3s which was previously freed by kvmppc_core_vcpu_free_pr(). This happens because kvmppc_mmu_destroy() is called after kvmppc_core_vcpu_free() since commit ff030fdf5573 ("KVM: PPC: Move kvm_vcpu_init() invocation to common code"). The kvmppc_mmu_destroy() helper calls one of the following depending on the KVM backend: - kvmppc_mmu_destroy_hv() which does nothing (Book3s HV) - kvmppc_mmu_destroy_pr() which undoes the effects of kvmppc_mmu_init() (Book3s PR 32-bit) - kvmppc_mmu_destroy_pr() which undoes the effects of kvmppc_mmu_init() (Book3s PR 64-bit) - kvmppc_mmu_destroy_e500() which does nothing (BookE e500/e500mc) It turns out that this is only relevant to PR KVM actually. And both 32 and 64 backends need vcpu->arch.book3s to be valid when calling kvmppc_mmu_destroy_pr(). So instead of calling kvmppc_mmu_destroy() from kvm_arch_vcpu_destroy(), call kvmppc_mmu_destroy_pr() at the beginning of kvmppc_core_vcpu_free_pr(). This is consistent with kvmppc_mmu_init() being the last call in kvmppc_core_vcpu_create_pr(). For the same reason, if kvmppc_core_vcpu_create_pr() returns an error then this means that kvmppc_mmu_init() was either not called or failed, in which case kvmppc_mmu_destroy() should not be called. Drop the line in the error path of kvm_arch_vcpu_create(). Fixes: ff030fdf5573 ("KVM: PPC: Move kvm_vcpu_init() invocation to common code") Signed-off-by: Greg Kurz <groug@kaod.org> Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
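In short, the teardown is moved so that it runs while vcpu->arch.book3s is still valid; a minimal sketch of the new ordering (the freeing helper is a paraphrased placeholder, not the exact PR code):

    static void kvmppc_core_vcpu_free_pr(struct kvm_vcpu *vcpu)
    {
            /* undo kvmppc_mmu_init() first, while vcpu->arch.book3s still exists */
            kvmppc_mmu_destroy_pr(vcpu);

            /* ...then release the PR-specific state, including arch.book3s */
            free_pr_vcpu_state(vcpu);    /* illustrative placeholder */
    }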
2020-03-19KVM: PPC: Use fallthrough;Joe Perches
Convert the various uses of fallthrough comments to fallthrough; done via script. Link: https://lore.kernel.org/lkml/b56602fcf79f849e733e7b521bb0e17895d390fa.1582230379.git.joe@perches.com Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2020-03-19KVM: PPC: Book3S HV: Fix H_CEDE return code for nested guestsMichael Roth
The h_cede_tm kvm-unit-test currently fails when run inside an L1 guest via the guest/nested hypervisor. ./run-tests.sh -v ... TESTNAME=h_cede_tm TIMEOUT=90s ACCEL= ./powerpc/run powerpc/tm.elf -smp 2,threads=2 -machine cap-htm=on -append "h_cede_tm" FAIL h_cede_tm (2 tests, 1 unexpected failures) While the test relates to transactional memory instructions, the actual failure is due to the return code of the H_CEDE hypercall, which is reported as 224 instead of 0. This happens even when no TM instructions are issued. 224 is the value placed in r3 to execute a hypercall for H_CEDE, and r3 is where the caller expects the return code to be placed upon return. In the case of a guest running under a nested hypervisor, issuing H_CEDE causes a return from H_ENTER_NESTED. In this case H_CEDE is specially handled immediately rather than later in kvmppc_pseries_do_hcall() as with most other hcalls, but we forget to set the return code for the caller, which is why the kvm-unit-test sees the 224 return code and reports an error. Guest kernels generally don't check the return value of H_CEDE, so that likely explains why this hasn't caused issues outside of kvm-unit-tests so far. Fix this by setting r3 to 0 after we finish processing the H_CEDE. RHBZ: 1778556 Fixes: 4bad77799fed ("KVM: PPC: Book3S HV: Handle hypercalls correctly when nested") Cc: linuxppc-dev@ozlabs.org Cc: David Gibson <david@gibson.dropbear.id.au> Cc: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
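The fix itself is a one-liner in spirit; a sketch of the idea, where kvmppc_set_gpr() is KVM's existing guest-GPR accessor and the surrounding flow is paraphrased:

    /* H_CEDE from a nested guest is handled right away on return from
     * H_ENTER_NESTED; make sure the guest also sees H_SUCCESS (0) in r3 */
    if (req == H_CEDE) {
            kvmppc_cede(vcpu);
            kvmppc_set_gpr(vcpu, 3, 0);
    }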
2020-03-19KVM: PPC: Book3S HV: Treat TM-related invalid form instructions on P9 like ↵Gustavo Romero
the valid ones On P9 DD2.2, due to a CPU defect, some TM instructions need to be emulated by KVM. This is handled at first by the hardware raising a softpatch interrupt when certain TM instructions that need KVM assistance are executed in the guest. Although some TM instructions are invalid forms per the Power ISA, they can raise a softpatch interrupt too. For instance, the 'tresume.' instruction as defined in the ISA must have bit 31 set (1), but an instruction that matches the 'tresume.' PO and XO opcode fields but has bit 31 not set (0), like 0x7cfe9ddc, also raises a softpatch interrupt. Similarly for 'treclaim.' and 'trechkpt.' instructions with bit 31 = 0, i.e. 0x7c00075c and 0x7c0007dc, respectively. Hence, if code like the following is executed in the guest it will raise a softpatch interrupt just like a 'tresume.' when the TM facility is enabled ('tabort. 0' in the example is used only to enable the TM facility): int main() { asm("tabort. 0; .long 0x7cfe9ddc;"); } Currently in such a case KVM throws a complete trace like: [345523.705984] WARNING: CPU: 24 PID: 64413 at arch/powerpc/kvm/book3s_hv_tm.c:211 kvmhv_p9_tm_emulation+0x68/0x620 [kvm_hv] [345523.705985] Modules linked in: kvm_hv(E) xt_conntrack ipt_REJECT nf_reject_ipv4 xt_tcpudp ip6table_mangle ip6table_nat iptable_mangle iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter bridge stp llc sch_fq_codel ipmi_powernv at24 vmx_crypto ipmi_devintf ipmi_msghandler ibmpowernv uio_pdrv_genirq kvm opal_prd uio leds_powernv ib_iser rdma_cm iw_cm ib_cm ib_core iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi ip_tables x_tables autofs4 btrfs blake2b_generic zstd_compress raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx libcrc32c xor raid6_pq raid1 raid0 multipath linear tg3 crct10dif_vpmsum crc32c_vpmsum ipr [last unloaded: kvm_hv] [345523.706030] CPU: 24 PID: 64413 Comm: CPU 0/KVM Tainted: G W E 5.5.0+ #1 [345523.706031] NIP: c0080000072cb9c0 LR: c0080000072b5e80 CTR: c0080000085c7850 [345523.706034] REGS: c000000399467680 TRAP: 0700 Tainted: G W E (5.5.0+) [345523.706034] MSR: 900000010282b033 <SF,HV,VEC,VSX,EE,FP,ME,IR,DR,RI,LE,TM[E]> CR: 24022428 XER: 00000000 [345523.706042] CFAR: c0080000072b5e7c IRQMASK: 0 GPR00: c0080000072b5e80 c000000399467910 c0080000072db500 c000000375ccc720 GPR04: c000000375ccc720 00000003fbec0000 0000a10395dda5a6 0000000000000000 GPR08: 000000007cfe9ddc 7cfe9ddc000005dc 7cfe9ddc7c0005dc c0080000072cd530 GPR12: c0080000085c7850 c0000003fffeb800 0000000000000001 00007dfb737f0000 GPR16: c0002001edcca558 0000000000000000 0000000000000000 0000000000000001 GPR20: c000000001b21258 c0002001edcca558 0000000000000018 0000000000000000 GPR24: 0000000001000000 ffffffffffffffff 0000000000000001 0000000000001500 GPR28: c0002001edcc4278 c00000037dd80000 800000050280f033 c000000375ccc720 [345523.706062] NIP [c0080000072cb9c0] kvmhv_p9_tm_emulation+0x68/0x620 [kvm_hv] [345523.706065] LR [c0080000072b5e80] kvmppc_handle_exit_hv.isra.53+0x3e8/0x798 [kvm_hv] [345523.706066] Call Trace: [345523.706069] [c000000399467910] [c000000399467940] 0xc000000399467940 (unreliable) [345523.706071] [c000000399467950] [c000000399467980] 0xc000000399467980 [345523.706075] [c0000003994679f0] [c0080000072bd1c4] kvmhv_run_single_vcpu+0xa1c/0xb80 [kvm_hv] [345523.706079] [c000000399467ac0] [c0080000072bd8e0] kvmppc_vcpu_run_hv+0x5b8/0xb00 [kvm_hv] [345523.706087] [c000000399467b90] [c0080000085c93cc] kvmppc_vcpu_run+0x34/0x48 [kvm] [345523.706095] 
[c000000399467bb0] [c0080000085c582c] kvm_arch_vcpu_ioctl_run+0x244/0x420 [kvm] [345523.706101] [c000000399467c40] [c0080000085b7498] kvm_vcpu_ioctl+0x3d0/0x7b0 [kvm] [345523.706105] [c000000399467db0] [c0000000004adf9c] ksys_ioctl+0x13c/0x170 [345523.706107] [c000000399467e00] [c0000000004adff8] sys_ioctl+0x28/0x80 [345523.706111] [c000000399467e20] [c00000000000b278] system_call+0x5c/0x68 [345523.706112] Instruction dump: [345523.706114] 419e0390 7f8a4840 409d0048 6d497c00 2f89075d 419e021c 6d497c00 2f8907dd [345523.706119] 419e01c0 6d497c00 2f8905dd 419e00a4 <0fe00000> 38210040 38600000 ebc1fff0 and then treats the executed instruction as a 'nop'. However, the POWER9 User's Manual, in section "4.6.10 Book II Invalid Forms", states that for TM instructions bit 31 is in fact ignored, thus ignoring bit 31 and handling the TM-related invalid forms like the valid forms is an acceptable way to handle them. POWER8 behaves the same way too. This commit changes the handling of the cases described here by treating the TM-related invalid forms that can generate a softpatch interrupt just like their valid forms (w/ bit 31 = 1) instead of as a 'nop', and by gently reporting any other unrecognized case to the host and treating it as an illegal instruction instead of throwing a trace and treating it as a 'nop'. Signed-off-by: Gustavo Romero <gromero@linux.ibm.com> Reviewed-by: Segher Boessenkool <segher@kernel.crashing.org> Acked-By: Michael Neuling <mikey@neuling.org> Reviewed-by: Leonardo Bras <leonardo@linux.ibm.com> Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2020-03-19KVM: PPC: Book3S HV: Use RADIX_PTE_INDEX_SIZE in Radix MMU codeMichael Ellerman
In kvmppc_unmap_free_pte() in book3s_64_mmu_radix.c, we use the non-constant value PTE_INDEX_SIZE to clear a PTE page. We can instead use the constant RADIX_PTE_INDEX_SIZE, because we know this code will only be running when the Radix MMU is active. Note that we already use RADIX_PTE_INDEX_SIZE for the allocation of kvm_pte_cache. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Leonardo Bras <leonardo@linux.ibm.com> Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2020-03-19KVM: PPC: Book3S HV: Use __gfn_to_pfn_memslot in HPT page fault handlerPaul Mackerras
This makes the same changes in the page fault handler for HPT guests that commits 31c8b0d0694a ("KVM: PPC: Book3S HV: Use __gfn_to_pfn_memslot() in page fault handler", 2018-03-01), 71d29f43b633 ("KVM: PPC: Book3S HV: Don't use compound_order to determine host mapping size", 2018-09-11) and 6579804c4317 ("KVM: PPC: Book3S HV: Avoid crash from THP collapse during radix page fault", 2018-10-04) made for the page fault handler for radix guests. In summary, where we used to call get_user_pages_fast() and then do special handling for VM_PFNMAP vmas, we now call __get_user_pages_fast() and then __gfn_to_pfn_memslot() if that fails, followed by reading the Linux PTE to get the host PFN, host page size and mapping attributes. This also brings in the change from SetPageDirty() to set_page_dirty_lock() which was done for the radix page fault handler in commit c3856aeb2940 ("KVM: PPC: Book3S HV: Fix handling of large pages in radix page fault handler", 2018-02-23). Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2020-03-18KVM: selftests: Rework timespec functions and usageAndrew Jones
The steal_time test's timespec stop condition was wrong; it should have used the timespec helper functions instead, but timespec_diff had an awkward interface. Rework the whole timespec API and its uses. Signed-off-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
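For flavor, a minimal self-contained sketch of what such timespec helpers can look like; the names and signatures are assumptions, not necessarily the ones the selftest framework ended up with:

    #include <time.h>

    /* a - b, assuming a >= b; normalizes the nanosecond field */
    static struct timespec timespec_sub(struct timespec a, struct timespec b)
    {
            struct timespec r = {
                    .tv_sec  = a.tv_sec - b.tv_sec,
                    .tv_nsec = a.tv_nsec - b.tv_nsec,
            };

            if (r.tv_nsec < 0) {
                    r.tv_sec--;
                    r.tv_nsec += 1000000000L;
            }
            return r;
    }

    /* time elapsed since 'start', read from a monotonic clock */
    static struct timespec timespec_elapsed(struct timespec start)
    {
            struct timespec now;

            clock_gettime(CLOCK_MONOTONIC, &now);
            return timespec_sub(now, start);
    }

A stop condition then becomes a comparison of timespec_elapsed(start) against the configured test duration rather than ad-hoc arithmetic on tv_sec/tv_nsec.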
2020-03-18KVM: x86: Code style cleanup in kvm_arch_dev_ioctl()Xiaoyao Li
In kvm_arch_dev_ioctl(), the brackets of case KVM_X86_GET_MCE_CAP_SUPPORTED accidentally encapsulate case KVM_GET_MSR_FEATURE_INDEX_LIST and case KVM_GET_MSRS. It doesn't affect functionality but it's misleading. Remove the unnecessary brackets and opportunistically add a "break" in the default path. Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-18KVM: x86: Add blurb to CPUID tracepoint when using max basic leaf valuesSean Christopherson
Tack on "used max basic" at the end of the CPUID tracepoint when the output values correspond to the max basic leaf, i.e. when emulating Intel's out-of-range CPUID behavior. Observing "cpuid entry not found" in the tracepoint with non-zero output values is confusing for users that aren't familiar with the out-of-range semantics, and qualifying the "not found" case hopefully makes it clear that "found" means "found the exact entry". Suggested-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-18KVM: x86: Add requested index to the CPUID tracepointSean Christopherson
Output the requested index when tracing CPUID emulation; it's basically mandatory for leafs where the index is meaningful, and is helpful for verifying KVM correctness even when the index isn't meaningful, e.g. the trace for a Linux guest's hypervisor_cpuid_base() probing appears to be broken (returns all zeroes) at first glance, but is correct because the index is non-zero, i.e. the output values correspond to a random index in the maximum basic leaf. Suggested-by: Xiaoyao Li <xiaoyao.li@intel.com> Cc: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-18KVM: nSVM: check for EFER.SVME=1 before entering guestPaolo Bonzini
EFER is set for L2 using svm_set_efer, which hardcodes EFER_SVME to 1 and hides an incorrect value for EFER.SVME in the L1 VMCB. Perform the check manually to detect invalid guest state. Reported-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
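A hedged sketch of the added consistency check on nested VMRUN; the function and field names paraphrase the SVM nested code rather than quote it:

    static bool nested_vmcb_checks(struct vmcb *vmcb12)
    {
            /* EFER.SVME must be set in the L1-provided state for VMRUN to be legal */
            if (!(vmcb12->save.efer & EFER_SVME))
                    return false;

            /* ... other consistency checks ... */
            return true;
    }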
2020-03-18KVM: x86: Expose AVX512 VP2INTERSECT in cpuid for TGLZhenyu Wang
On Tigerlake, the new AVX512 VP2INTERSECT feature is available. Expose it via KVM_GET_SUPPORTED_CPUID. Cc: "Zhong, Yang" <yang.zhong@intel.com> Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-18KVM: nVMX: remove side effects from nested_vmx_exit_reflectedPaolo Bonzini
The name of nested_vmx_exit_reflected suggests that it's purely a test, but it actually marks VMCS12 pages as dirty. Move this to vmx_handle_exit, observing that the initial nested_run_pending check in nested_vmx_exit_reflected is pointless---nested_run_pending has just been cleared in vmx_vcpu_run and won't be set until handle_vmlaunch or handle_vmresume. Suggested-by: Vitaly Kuznetsov <vkuznets@redhat.com> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-17KVM: VMX: access regs array in vmenter.S in its natural orderUros Bizjak
Registers in "regs" array are indexed as rax/rcx/rdx/.../rsi/rdi/r8/... Reorder access to "regs" array in vmenter.S to follow its natural order. Signed-off-by: Uros Bizjak <ubizjak@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16Merge tag 'kvm-s390-next-5.7-1' of ↵Paolo Bonzini
git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into HEAD KVM: s390: Features and Enhancements for 5.7 part1 1. Allow disabling the GISA 2. Protected virtual machines Protected VMs (PVM) are KVM VMs where KVM can't access the VM's state, like guest memory and guest registers, anymore. Instead the PVMs are mostly managed by a new entity called Ultravisor (UV), which provides an API, so KVM and the PV can request management actions. PVMs are encrypted at rest and protected from hypervisor access while running. They switch from normal operation into protected mode, so we can still use the standard boot process to load an encrypted blob and then move it into protected mode. Rebooting is only possible by passing through the unprotected/normal mode and switching to protected again. One mm-related patch will go via Andrew's mm tree (mm/gup/writeback: add callbacks for inaccessible pages)
2020-03-16KVM: selftests: enlightened VMPTRLD with an incorrect GPAVitaly Kuznetsov
Check that the guest doesn't hang when an invalid eVMCS GPA is specified. Testing that #UD is injected would probably be better, but selftests currently lack the infrastructure for that. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16KVM: selftests: test enlightened vmenter with wrong eVMCS versionVitaly Kuznetsov
Check that VMfailInvalid happens when the eVMCS revision is invalid. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16KVM: selftests: define and use EVMCS_VERSIONVitaly Kuznetsov
KVM allows using the revision_id from MSR_IA32_VMX_BASIC as the eVMCS revision_id to work around a bug in genuine Hyper-V (see the comment in nested_vmx_handle_enlightened_vmptrld()), but this shouldn't be used by default. Switch to using KVM_EVMCS_VERSION (1). Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16KVM: nVMX: properly handle errors in nested_vmx_handle_enlightened_vmptrld()Vitaly Kuznetsov
nested_vmx_handle_enlightened_vmptrld() fails in two cases: - when we fail to kvm_vcpu_map() the supplied GPA - when revision_id is incorrect. Genuine Hyper-V raises #UD in the former case (at least with *some* incorrect GPAs) and does VMfailInvalid() in the latter. KVM doesn't do anything, so L1 just gets stuck retrying the same faulty VMLAUNCH. nested_vmx_handle_enlightened_vmptrld() has two call sites: nested_vmx_run() and nested_get_vmcs12_pages(). For the former, the error can be reported back to L1; for the latter there is not much to be done: the failure there happens after migration when L2 was running (and L1 did something weird like wrote to the VP assist page from a different vCPU), so just kill L1 with KVM_EXIT_INTERNAL_ERROR. Reported-by: Miaohe Lin <linmiaohe@huawei.com> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> [Squash kbuild autopatch. - Paolo] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16KVM: nVMX: stop abusing need_vmcs12_to_shadow_sync for eVMCS mappingVitaly Kuznetsov
When vmx_set_nested_state() happens, we may not have all the required data to map the enlightened VMCS: e.g. the HV_X64_MSR_VP_ASSIST_PAGE MSR may not yet be restored, so we need a postponed action. Currently, we (ab)use need_vmcs12_to_shadow_sync/nested_sync_vmcs12_to_shadow() for that, but this is not ideal: - We may not need to sync anything if L2 is running - It is hard to propagate errors from nested_sync_vmcs12_to_shadow(), as we call it from vmx_prepare_switch_to_guest(), which happens just before we do VMLAUNCH; the code is not ready to handle errors there. Move the eVMCS mapping to nested_get_vmcs12_pages() and request KVM_REQ_GET_VMCS12_PAGES; this seems less abusive in nature. It would probably be possible to introduce a specialized KVM_REQ_EVMCS_MAP, but it is undesirable to propagate eVMCS specifics all the way up to x86.c. Note, we don't need to request KVM_REQ_GET_VMCS12_PAGES from vmx_set_nested_state() directly, as nested_vmx_enter_non_root_mode() already does that. Requesting KVM_REQ_GET_VMCS12_PAGES is done to document the (non-obvious) side effect and to be future proof. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
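The deferral amounts to requesting the mapping instead of doing it inline; roughly (a simplified sketch, not the exact call sequence or signatures):

    /* in vmx_set_nested_state(): too early to map the eVMCS, defer the work */
    kvm_make_request(KVM_REQ_GET_VMCS12_PAGES, vcpu);

    /* later, in nested_get_vmcs12_pages(), once the VP assist MSR is restored */
    if (!nested_vmx_handle_enlightened_vmptrld(vcpu, false))
            return false;   /* the error can now be handled by the caller */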
2020-03-16Merge branch 'kvm-null-pointer-fix' into HEADPaolo Bonzini
2020-03-16selftests: kvm: Uses TEST_FAIL in tests/utilitiesWainer dos Santos Moschetta
Change all tests and utilities to use the TEST_FAIL macro instead of TEST_ASSERT(false, ...). Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16selftests: kvm: Introduce the TEST_FAIL macroWainer dos Santos Moschetta
Some tests/utilities use the TEST_ASSERT(false, ...) pattern to indicate a failure and stop execution. This change introduces the TEST_FAIL macro, which is a wrapper around TEST_ASSERT(false, ...) and so provides a direct way to fail a test. Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
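The macro is essentially a thin wrapper over the existing assert; something along these lines (the exact definition may differ):

    /* fail the test unconditionally, reusing the TEST_ASSERT reporting */
    #define TEST_FAIL(fmt, ...) \
            TEST_ASSERT(false, fmt, ##__VA_ARGS__)

Call sites then read e.g. TEST_FAIL("unexpected exit reason: %u", reason); instead of asserting on a literal false.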
2020-03-16selftests: KVM: s390: check for registers to NOT change on resetChristian Borntraeger
Normal reset and initial CPU reset do not clear all registers. Add a test that those registers are NOT changed. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16selftests: KVM: s390: test more register variants for the reset ioctlChristian Borntraeger
We should not only test the oneregs or the get_(x)regs interfaces but also the sync_regs. Those are usually the canonical place for register content. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16selftests: KVM: s390: fix early guest crashChristian Borntraeger
The guest crashes very early due to changes in the control registers used by dynamic address translation. Let us use different registers that will not crash the guest. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16KVM: selftests: Introduce steal-time testAndrew Jones
The steal-time test confirms that what is reported to the guest as stolen time is consistent with the run_delay reported for the VCPU thread on the host. Both x86_64 and AArch64 have the concept of steal/stolen time, so this test is introduced for both architectures. While adding the test we ensure .gitignore has all tests listed (it was missing s390x/resets) and that the Makefile has all tests listed in alphabetical order (not really necessary, but it almost was already...). We also extend the common API with a new num-guest-pages call and a new timespec call. Signed-off-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16KVM: selftests: virt_map should take npages, not sizeAndrew Jones
Also correct the comment and prototype for vm_create_default(), as it takes a number of pages, not a size. Signed-off-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16KVM: selftests: Use consistent message for test skippingAndrew Jones
Signed-off-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16KVM: selftests: Enable printf format warnings for TEST_ASSERTAndrew Jones
Use the format attribute to enable printf format warnings, and then fix them all. Signed-off-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
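The warnings come from annotating the assert helper with the printf format attribute so the compiler checks format strings against their arguments; a sketch with assumed parameter positions:

    /* the 5th parameter is the format string, variadic arguments start at 6 */
    void test_assert(bool exp, const char *exp_str,
                     const char *file, unsigned int line, const char *fmt, ...)
            __attribute__((format(printf, 5, 6)));

With this in place, mismatches such as passing a u64 to "%s" (fixed in the entries below) become compile-time warnings instead of runtime surprises.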
2020-03-16selftests: KVM: s390: fix format strings for access reg testChristian Borntraeger
acrs are 32 bit and not 64 bit. Reported-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16selftests: KVM: s390: fixup fprintf format error in reset.cChristian Borntraeger
The value is a u64, not a string. Reported-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
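For reference, the kind of fix involved, shown as a small standalone sketch rather than the exact selftest line: print a u64 with a 64-bit conversion specifier instead of "%s".

    #include <inttypes.h>
    #include <stdio.h>

    static void report_value(uint64_t value)
    {
            /* "%s" would wrongly treat the integer as a string pointer;
             * PRIx64 prints it as 64-bit hex regardless of platform */
            fprintf(stderr, "value: 0x%" PRIx64 "\n", value);
    }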
2020-03-16KVM: selftests: Share common API documentationAndrew Jones
Move function documentation comment blocks to the header files in order to avoid duplicating them for each architecture. While at it clean up and fix up the comment blocks. Signed-off-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16selftests: KVM: SVM: Add vmcall test to gitignoreAndrew Jones
Add svm_vmcall_test to gitignore list, and realphabetize it. Signed-off-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16KVM: nSVM: Remove an obsolete comment.Miaohe Lin
The function does not return bool anymore. Signed-off-by: Miaohe Lin <linmiaohe@huawei.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16KVM: X86: correct meaningless kvm_apicv_activated() checkPaolo Bonzini
After test_and_set_bit() for kvm->arch.apicv_inhibit_reasons, we will always get false when calling kvm_apicv_activated(), because apicv_inhibit_reasons is then guaranteed to be non-zero. What the code wants to do is check whether APICv was *already* active and, if so, skip the costly request; we can do this using cmpxchg. Reported-by: Miaohe Lin <linmiaohe@huawei.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
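A paraphrased sketch of the cmpxchg-based update; the field and request names come from the text above, but the exact loop structure is an assumption:

    unsigned long old, new, expected;

    old = READ_ONCE(kvm->arch.apicv_inhibit_reasons);
    do {
            expected = new = old;
            if (activate)
                    __clear_bit(bit, &new);
            else
                    __set_bit(bit, &new);
            if (new == old)
                    break;          /* bit already in the desired state, nothing to do */
            old = cmpxchg(&kvm->arch.apicv_inhibit_reasons, expected, new);
    } while (old != expected);

    /* only pay for the update request on a 0 <-> non-0 transition */
    if (!!old != !!new)
            kvm_make_all_cpus_request(kvm, KVM_REQ_APICV_UPDATE);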
2020-03-16KVM: nVMX: Consolidate nested MTF checks to helper functionOliver Upton
commit 5ef8acbdd687 ("KVM: nVMX: Emulate MTF when performing instruction emulation") introduced a helper to check the MTF VM-execution control in vmcs12. Change the pre-existing check in nested_vmx_exit_reflected() to use the helper instead. Signed-off-by: Oliver Upton <oupton@google.com> Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Reviewed-by: Miaohe Lin <linmiaohe@huawei.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>