path: root/arch
Age  Commit message  Author
2021-04-21  KVM: SVM: Add KVM_SEV_RECEIVE_FINISH command  (Brijesh Singh)
The command finalizes the guest receiving process and makes the SEV guest ready for execution. Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Joerg Roedel <joro@8bytes.org> Cc: Borislav Petkov <bp@suse.de> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: x86@kernel.org Cc: kvm@vger.kernel.org Cc: linux-kernel@vger.kernel.org Reviewed-by: Steve Rutherford <srutherford@google.com> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com> Message-Id: <d08914dc259644de94e29b51c3b68a13286fc5a3.1618498113.git.ashish.kalra@amd.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-04-21  KVM: SVM: Add KVM_SEV_RECEIVE_UPDATE_DATA command  (Brijesh Singh)
The command is used for copying the incoming buffer into the SEV guest memory space. Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Joerg Roedel <joro@8bytes.org> Cc: Borislav Petkov <bp@suse.de> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: x86@kernel.org Cc: kvm@vger.kernel.org Cc: linux-kernel@vger.kernel.org Reviewed-by: Steve Rutherford <srutherford@google.com> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com> Message-Id: <c5d0e3e719db7bb37ea85d79ed4db52e9da06257.1618498113.git.ashish.kalra@amd.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-04-21  KVM: SVM: Add support for KVM_SEV_RECEIVE_START command  (Brijesh Singh)
The command is used to create the encryption context for an incoming SEV guest. The encryption context can be later used by the hypervisor to import the incoming data into the SEV guest memory space. Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Joerg Roedel <joro@8bytes.org> Cc: Borislav Petkov <bp@suse.de> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: x86@kernel.org Cc: kvm@vger.kernel.org Cc: linux-kernel@vger.kernel.org Reviewed-by: Steve Rutherford <srutherford@google.com> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com> Message-Id: <c7400111ed7458eee01007c4d8d57cdf2cbb0fc2.1618498113.git.ashish.kalra@amd.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-04-21  KVM: SVM: Add support for KVM_SEV_SEND_CANCEL command  (Steve Rutherford)
After completion of SEND_START, but before SEND_FINISH, the source VMM can issue the SEND_CANCEL command to stop a migration. This is necessary so that a cancelled migration can restart with a new target later. Reviewed-by: Nathan Tempelman <natet@google.com> Reviewed-by: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Steve Rutherford <srutherford@google.com> Message-Id: <20210412194408.2458827-1-srutherford@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-04-21  KVM: SVM: Add KVM_SEV_SEND_FINISH command  (Brijesh Singh)
The command is used to finalize the encryption context created with the KVM_SEV_SEND_START command. Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Joerg Roedel <joro@8bytes.org> Cc: Borislav Petkov <bp@suse.de> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: x86@kernel.org Cc: kvm@vger.kernel.org Cc: linux-kernel@vger.kernel.org Reviewed-by: Steve Rutherford <srutherford@google.com> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com> Message-Id: <5082bd6a8539d24bc55a1dd63a1b341245bb168f.1618498113.git.ashish.kalra@amd.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-04-21  KVM: SVM: Add KVM_SEND_UPDATE_DATA command  (Brijesh Singh)
The command is used for encrypting the guest memory region using the encryption context created with KVM_SEV_SEND_START. Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Joerg Roedel <joro@8bytes.org> Cc: Borislav Petkov <bp@suse.de> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: x86@kernel.org Cc: kvm@vger.kernel.org Cc: linux-kernel@vger.kernel.org Reviewed-by: Steve Rutherford <srutherford@google.com> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com> Message-Id: <d6a6ea740b0c668b30905ae31eac5ad7da048bb3.1618498113.git.ashish.kalra@amd.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-04-21  KVM: SVM: Add KVM_SEV SEND_START command  (Brijesh Singh)
The command is used to create an outgoing SEV guest encryption context. Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Joerg Roedel <joro@8bytes.org> Cc: Borislav Petkov <bp@suse.de> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: x86@kernel.org Cc: kvm@vger.kernel.org Cc: linux-kernel@vger.kernel.org Reviewed-by: Steve Rutherford <srutherford@google.com> Reviewed-by: Venu Busireddy <venu.busireddy@oracle.com> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com> Message-Id: <2f1686d0164e0f1b3d6a41d620408393e0a48376.1618498113.git.ashish.kalra@amd.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-04-21  KVM: Boost vCPU candidate in user mode which is delivering interrupt  (Wanpeng Li)
Both the lock holder vCPU and a halted IPI receiver are candidates for boosting. However, the PLE handler was originally designed to deal with the lock holder preemption problem, and Intel PLE occurs when the spinlock waiter is in kernel mode. This assumption doesn't hold for IPI receivers: they can be in either kernel or user mode, so a vCPU candidate in user mode will not be boosted even if it should respond to IPIs. Some benchmarks like pbzip2 and swaptions do the TLB shootdown in kernel mode but run in user mode most of the time. This can lead to a large number of continuous PLE events, because the IPI sender causes PLE events repeatedly until the receiver is scheduled, while the receiver is not a candidate for a boost. This patch boosts the vCPU candidate in user mode which is delivering an interrupt. We can observe the speed of pbzip2 improving by 10% in a 96-vCPU VM in an over-subscribed scenario (the host machine is a 2-socket, 48-core, 96-HT Intel CLX box). There is no performance regression for other benchmarks like Unixbench spawn (most of the time contending a read/write lock in kernel mode) or ebizzy (most of the time contending a read/write semaphore and TLB shootdown in kernel mode). Signed-off-by: Wanpeng Li <wanpengli@tencent.com> Message-Id: <1618542490-14756-1-git-send-email-wanpengli@tencent.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
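A minimal sketch of the boost-candidate logic described above (illustrative only; the helper names approximate KVM's arch hooks and this is not the exact kvm_vcpu_on_spin() code):

    /* A vCPU preempted in user mode is now still a boost candidate if it
     * has an interrupt waiting to be delivered; otherwise the original
     * "kernel mode only" PLE heuristic is kept. */
    static bool vcpu_is_boost_candidate(struct kvm_vcpu *vcpu, bool yield_to_kernel_mode)
    {
        if (kvm_arch_dy_has_pending_interrupt(vcpu))
            return true;
        return !yield_to_kernel_mode || kvm_arch_vcpu_in_kernel(vcpu);
    }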
2021-04-21  KVM: x86: Support KVM VMs sharing SEV context  (Nathan Tempelman)
Add a capability for userspace to mirror SEV encryption context from one vm to another. On our side, this is intended to support a Migration Helper vCPU, but it can also be used generically to support other in-guest workloads scheduled by the host. The intention is for the primary guest and the mirror to have nearly identical memslots. The primary benefits of this are that: 1) The VMs do not share KVM contexts (think APIC/MSRs/etc), so they can't accidentally clobber each other. 2) The VMs can have different memory-views, which is necessary for post-copy migration (the migration vCPUs on the target need to read and write to pages, when the primary guest would VMEXIT). This does not change the threat model for AMD SEV. Any memory involved is still owned by the primary guest and its initial state is still attested to through the normal SEV_LAUNCH_* flows. If userspace wanted to circumvent SEV, they could achieve the same effect by simply attaching a vCPU to the primary VM. This patch deliberately leaves userspace in charge of the memslots for the mirror, as it already has the power to mess with them in the primary guest. This patch does not support SEV-ES (much less SNP), as it does not handle handing off attested VMSAs to the mirror. For additional context, we need a Migration Helper because SEV PSP migration is far too slow for our live migration on its own. Using an in-guest migrator lets us speed this up significantly. Signed-off-by: Nathan Tempelman <natet@google.com> Message-Id: <20210408223214.2582277-1-natet@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-04-21  nSVM: Check addresses of MSR and IO permission maps  (Krish Sadhukhan)
According to section "Canonicalization and Consistency Checks" in APM vol 2, the following guest state is illegal: "The MSR or IOIO intercept tables extend to a physical address that is greater than or equal to the maximum supported physical address." Suggested-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Message-Id: <20210412215611.110095-5-krish.sadhukhan@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
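A hedged sketch of the consistency check described above (the helper name and exact rounding are assumptions, not necessarily the merged code):

    /* The MSRPM/IOPM base plus the table size must stay below the guest's
     * maximum supported physical address. */
    static bool nested_svm_check_bitmap_pa(struct kvm_vcpu *vcpu, u64 pa, u32 size)
    {
        u64 addr = PAGE_ALIGN(pa);

        return kvm_vcpu_is_legal_gpa(vcpu, addr) &&
               kvm_vcpu_is_legal_gpa(vcpu, addr + size - 1);
    }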
2021-04-21  powerpc: Move copy_inst_from_kernel_nofault()  (Christophe Leroy)
When probe_kernel_read_inst() was created, there was no good place to put it, so a file called lib/inst.c was dedicated for it. Since then, probe_kernel_read_inst() has been renamed copy_inst_from_kernel_nofault(). And mm/maccess.h didn't exist at that time. Today, mm/maccess.h is related to copy_from_kernel_nofault(). Move copy_inst_from_kernel_nofault() into mm/maccess.c Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/9655d8957313906b77b8db5700a0e33ce06f45e5.1618405715.git.christophe.leroy@csgroup.eu
2021-04-21  powerpc: Rename probe_kernel_read_inst()  (Christophe Leroy)
When probe_kernel_read_inst() was created, it was to mimic probe_kernel_read() function. Since then, probe_kernel_read() has been renamed copy_from_kernel_nofault(). Rename probe_kernel_read_inst() into copy_inst_from_kernel_nofault(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/b783d1f7cdb8914992384a669a2af57051b6bdcf.1618405715.git.christophe.leroy@csgroup.eu
2021-04-21  powerpc: Make probe_kernel_read_inst() common to PPC32 and PPC64  (Christophe Leroy)
We have two independent versions of probe_kernel_read_inst(), one for PPC32 and one for PPC64. The PPC32 one is identical to the first part of the PPC64 version. The remaining part of the PPC64 version is not relevant for PPC32, but not contradictory, so we can easily have a common function with the PPC64 part opted out via an IS_ENABLED(CONFIG_PPC64) check. The only thing needed is to add a version of ppc_inst_prefix() for PPC32. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/f7b9dfddef3b3760182c7e5466356c121a293dc9.1618405715.git.christophe.leroy@csgroup.eu
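A rough sketch of the unified helper this describes (assumed shape; the PPC64-only prefixed-instruction handling compiles out on PPC32):

    int probe_kernel_read_inst(struct ppc_inst *inst, struct ppc_inst *src)
    {
        unsigned int val, suffix;
        int err;

        err = copy_from_kernel_nofault(&val, src, sizeof(val));
        if (err)
            return err;

        if (IS_ENABLED(CONFIG_PPC64) && get_op(val) == OP_PREFIX) {
            /* Prefixed instructions only exist on PPC64. */
            err = copy_from_kernel_nofault(&suffix, (void *)src + 4, sizeof(suffix));
            *inst = ppc_inst_prefix(val, suffix);
        } else {
            *inst = ppc_inst(val);
        }
        return err;
    }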
2021-04-21  powerpc: Remove probe_user_read_inst()  (Christophe Leroy)
Its name comes from the former probe_user_read() function. That function is now called copy_from_user_nofault(). probe_user_read_inst() uses copy_from_user_nofault() to read only a few bytes, which is suboptimal: it does the same as get_user_inst() but in addition disables page faults. On the other hand, it is not used for the time being, so remove it for now. If one day it is really needed, we can give it a new name more in line with today's naming, and implement it using get_user_inst(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/5f6f82572242a59bfee1e19a71194d8f7ef5fca4.1618405715.git.christophe.leroy@csgroup.eu
2021-04-21  powerpc/ebpf32: Use standard function call for functions within 32M distance  (Christophe Leroy)
If the target of a function call is within 32 Mbytes distance, use a standard function call with 'bl' instead of the 'lis/ori/mtlr/blrl' sequence. In the first pass, no memory has been allocated yet and the code position is not known yet (image pointer is NULL). This pass is there to calculate the amount of memory to allocate for the EBPF code, so assume the 4 instructions sequence is required, so that enough memory is allocated. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/74944a1e3e5cfecc141e440a6ccd37920e186b70.1618227846.git.christophe.leroy@csgroup.eu
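A sketch of the call-emission choice described above (macro and register names are approximations of the ppc32 BPF JIT helpers, not the exact patch):

    /* 'bl' carries a signed 26-bit offset, i.e. +/-32 MB of reach. */
    s32 rel = (s32)func - (s32)(image + ctx->idx);

    if (image && rel < 0x02000000 && rel >= -0x02000000) {
        /* Target reachable: a single relative branch-and-link. */
        EMIT(PPC_INST_BL | (rel & 0x03fffffc));
    } else {
        /* First pass (image == NULL) or distant target: reserve the full
         * 4-instruction indirect sequence so enough memory is allocated. */
        EMIT(PPC_RAW_LIS(_R0, IMM_H(func)));
        EMIT(PPC_RAW_ORI(_R0, _R0, IMM_L(func)));
        EMIT(PPC_RAW_MTLR(_R0));
        EMIT(PPC_RAW_BLRL());
    }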
2021-04-21  powerpc/ebpf32: Rework 64 bits shifts to avoid tests and branches  (Christophe Leroy)
Re-implement BPF_ALU64 | BPF_{LSH/RSH/ARSH} | BPF_X with branchless implementation copied from misc_32.S. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/03167350b05b2fe8b741e53363ee37709d0f878d.1618227846.git.christophe.leroy@csgroup.eu
2021-04-21  powerpc/ebpf32: Fix comment on BPF_ALU{64} | BPF_LSH | BPF_K  (Christophe Leroy)
Replace <<== by <<= Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/34d12a4f75cb8b53a925fada5e7ddddd3b145203.1618227846.git.christophe.leroy@csgroup.eu
2021-04-21  powerpc/32: Use r2 in wrtspr() instead of r0  (Christophe Leroy)
wrtspr() is a function to write an arbitrary value in a special register. It is used on 8xx to write to SPRN_NRI, SPRN_EID and SPRN_EIE. Writing any value to one of those will play with MSR EE and MSR RI regardless of that value. r0 is used many places in the generated code and using r0 for that creates an unnecessary dependency of this instruction with preceding ones using r0 in a few places in vmlinux. r2 is most likely the most stable register as it contains the pointer to 'current'. Using r2 instead of r0 avoids that unnecessary dependency. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/69f9968f4b592fefda55227f0f7430ea612cc950.1611299687.git.christophe.leroy@csgroup.eu
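An illustrative form of the macro change described above (assumed shape of wrtspr(); the essential change is the ",2" source operand replacing the former ",0"):

    /* Write an arbitrary (ignored) value to an SPR; r2 ('current') is used
     * as the source register to avoid a dependency on recent r0 writers. */
    #define wrtspr(rn)  asm volatile("mtspr " __stringify(rn) ",2" : : : "memory")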
2021-04-21  powerpc/mce: save ignore_event flag unconditionally for UE  (Ganesh Goudar)
When we hit a UE while using machine check safe copy routines, the ignore_event flag is set and the event is ignored by the MCE handler. The flag is also saved for deferred handling and printing of MCE event information. But as of now, saving of this flag is only done when the effective address is provided or the physical address is calculated, which is not right. Save the ignore_event flag regardless of whether the effective address is provided or the physical address is calculated. Without this change the following log is seen when the event is to be ignored: [ 512.971365] MCE: CPU1: machine check (Severe) UE Load/Store [Recovered] [ 512.971509] MCE: CPU1: NIP: [c0000000000b67c0] memcpy+0x40/0x90 [ 512.971655] MCE: CPU1: Initiator CPU [ 512.971739] MCE: CPU1: Unknown [ 512.972209] MCE: CPU1: machine check (Severe) UE Load/Store [Recovered] [ 512.972334] MCE: CPU1: NIP: [c0000000000b6808] memcpy+0x88/0x90 [ 512.972456] MCE: CPU1: Initiator CPU [ 512.972534] MCE: CPU1: Unknown Signed-off-by: Ganesh Goudar <ganeshgr@linux.ibm.com> Reviewed-by: Santosh Sivaraj <santosh@fossix.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210407045816.352276-1-ganeshgr@linux.ibm.com
2021-04-21  powerpc: Enable OPTPROBES on PPC32  (Christophe Leroy)
For that, create a 32-bit version of patch_imm64_load_insns() and a patch_imm_load_insns() which calls patch_imm32_load_insns() on PPC32 and patch_imm64_load_insns() on PPC64. Adapt optprobes_head.S for PPC32: use PPC_LL/PPC_STL macros instead of raw ld/std, opt out things linked to the paca and use stmw/lmw to save/restore registers. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/bad58c66859b2a475c0ad516b53164ae3b4853cd.1618927318.git.christophe.leroy@csgroup.eu
2021-04-21  powerpc/inst: ppc_inst_as_u64() becomes ppc_inst_as_ulong()  (Christophe Leroy)
In order to simplify use on PPC32, change ppc_inst_as_u64() into ppc_inst_as_ulong(), which returns the 32-bit instruction on PPC32. This will be used when porting OPTPROBES to PPC32. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/22cadf29620664b600b82026d2a72b8b23351777.1618927318.git.christophe.leroy@csgroup.eu
2021-04-21  powerpc/irq: Enhance readability of trap types  (Christophe Leroy)
This patch makes use of trap types in irq.c Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/f7f8c9f98c33eaea316755c7fef150d1d77e047d.1618847273.git.christophe.leroy@csgroup.eu
2021-04-21  powerpc/32s: Enhance readability of trap types  (Christophe Leroy)
This patch makes use of trap types in head_book3s_32.S Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/bd80ace67757f489fc4ecdb76dd1a71511daba94.1618847273.git.christophe.leroy@csgroup.eu
2021-04-21  powerpc/8xx: Enhance readability of trap types  (Christophe Leroy)
This patch makes use of trap types in head_8xx.S Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/e1147287bf6f2fb0693048fe8db0298c7870e419.1618847273.git.christophe.leroy@csgroup.eu
2021-04-21  powerpc/pseries/iommu: Fix window size for direct mapping with pmem  (Leonardo Bras)
As of today, if the DDW is big enough to fit (1 << MAX_PHYSMEM_BITS) it's possible to use direct DMA mapping even with pmem region. But, if that happens, the window size (len) is set to (MAX_PHYSMEM_BITS - page_shift) instead of MAX_PHYSMEM_BITS, causing a pagesize times smaller DDW to be created, being insufficient for correct usage. Fix this so the correct window size is used in this case. Fixes: bf6e2d562bbc4 ("powerpc/dma: Fallback to dma_ops when persistent memory present") Signed-off-by: Leonardo Bras <leobras.c@gmail.com> Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210420045404.438735-1-leobras.c@gmail.com
2021-04-21  perf/x86/intel/uncore: Remove uncore extra PCI dev HSWEP_PCI_PCU_3  (Kan Liang)
There may be a kernel panic on the Haswell server and the Broadwell server if snbep_pci2phy_map_init() returns an error. The uncore_extra_pci_dev[HSWEP_PCI_PCU_3] is used in the cpu_init() to detect the existence of the SBOX, which is an MSR type of PMON unit. The uncore_extra_pci_dev is allocated in the uncore_pci_init(). If snbep_pci2phy_map_init() returns an error, perf doesn't initialize the PCI type of the PMON units, so the uncore_extra_pci_dev will not be allocated. But perf may continue initializing the MSR type of PMON units. A NULL dereference kernel panic will be triggered. The sockets in a Haswell server or a Broadwell server are identical, so we only need to detect the existence of the SBOX once. Currently, perf probes all available PCU devices and stores them into the uncore_extra_pci_dev, which is unnecessary. Use pci_get_device() to replace the uncore_extra_pci_dev, and only detect the existence of the SBOX on the first available PCU device. Factor out hswep_has_limit_sbox(), since the Haswell server and the Broadwell server use the same way to detect the existence of the SBOX. Add some macros to replace the magic number. Fixes: 5306c31c5733 ("perf/x86/uncore/hsw-ep: Handle systems with only two SBOXes") Reported-by: Steve Wahl <steve.wahl@hpe.com> Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Steve Wahl <steve.wahl@hpe.com> Link: https://lkml.kernel.org/r/1618521764-100923-1-git-send-email-kan.liang@linux.intel.com
2021-04-21  mips/bootinfo: correct some comments of fw_arg  (xiaochuan mao)
From arch/mips/kernel/head.S we know that a0~a3 are used for fw_arg0~fw_arg3. Here is the relevant code from head.S: LONG_S a0, fw_arg0 # firmware arguments LONG_S a1, fw_arg1 LONG_S a2, fw_arg2 LONG_S a3, fw_arg3 Signed-off-by: xiaochuan mao <maoxiaochuan@loongson.cn> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2021-04-21  MIPS: Avoid DIVU in `__div64_32' if result would be zero  (Maciej W. Rozycki)
We already check the high part of the dividend against zero to avoid the costly DIVU instruction in that case, needed to reduce the high part of the dividend, so we may as well check against the divisor instead and set the high part of the quotient to zero right away. We need to treat the high part of the dividend in that case though as the remainder that would have been calculated by the DIVU instruction we avoided. This has passed correctness verification with test_div64 and reduced the module's average execution time down to 1.0445s and 0.2619s from 1.0668s and 0.2629s respectively for an R3400 CPU @40MHz and a 5Kc CPU @160MHz. Signed-off-by: Maciej W. Rozycki <macro@orcam.me.uk> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
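A C-level sketch of the shortcut described above (the real handler is inline asm in asm/div64.h; this is only an illustration of the logic, not the kernel code):

    static inline unsigned int div64_32_sketch(unsigned long long *n, unsigned int base)
    {
        unsigned int hi = *n >> 32, lo = (unsigned int)*n;
        unsigned int q_hi, rem;
        unsigned long long tmp;

        if (hi < base) {
            /* High word smaller than the divisor: the high part of the
             * quotient is zero and 'hi' already is the partial remainder,
             * so the first (costly) DIVU can be skipped entirely. */
            q_hi = 0;
            rem = hi;
        } else {
            q_hi = hi / base;
            rem = hi % base;
        }

        /* Reduce the low word together with the carried remainder. */
        tmp = ((unsigned long long)rem << 32) | lo;
        *n = ((unsigned long long)q_hi << 32) | (unsigned int)(tmp / base);
        return (unsigned int)(tmp % base);
    }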
2021-04-21  MIPS: Reinstate platform `__div64_32' handler  (Maciej W. Rozycki)
Our current MIPS platform `__div64_32' handler is inactive, because it is incorrectly only enabled for 64-bit configurations, for which the generic `do_div' code does not call it anyway. The handler is not suitable for being called from there though as it only calculates 32 bits of the quotient under the assumption the 64-bit dividend has been suitably reduced. Code for such reduction used to be there, however it has been incorrectly removed with commit c21004cd5b4c ("MIPS: Rewrite <asm/div64.h> to work with gcc 4.4.0."), which should have only updated an obsoleted constraint for an inline asm involving $hi and $lo register outputs, while possibly wiring the original MIPS variant of the `do_div' macro as the `__div64_32' handler for the generic `do_div' implementation. Correct the handler as follows then: - Revert most of the commit referred, however retaining the current formatting, except for the final two instructions of the inline asm sequence, which the original commit missed. Omit the original 64-bit parts though. - Rename the original `do_div' macro to `__div64_32'. Use the combined `x' constraint referring to the MD accumulator as a whole, replacing the original individual `h' and `l' constraints used for $hi and $lo registers respectively, of which `h' has been obsoleted with GCC 4.4. Update surrounding code accordingly. We have since removed support for GCC versions before 4.9, so there is no need for a special arrangement here; GCC has supported the `x' constraint since forever anyway, or at least going back to 1991. - Rename the `__base' local variable in `__div64_32' to `__radix' to avoid a conflict with a local variable in `do_div'. - Actually enable this code for 32-bit rather than 64-bit configurations by qualifying it with BITS_PER_LONG being 32 instead of 64. Include <asm/bitsperlong.h> for this macro rather than <linux/types.h> as we don't need anything else. - Finally include <asm-generic/div64.h> last rather than first. This has passed correctness verification with test_div64 and reduced the module's average execution time down to 1.0668s and 0.2629s from 2.1529s and 0.5647s respectively for an R3400 CPU @40MHz and a 5Kc CPU @160MHz. For reference, 64-bit `do_div' code where we have the DDIVU instruction available to do the whole calculation right away averages at 0.0660s for the latter CPU. Fixes: c21004cd5b4c ("MIPS: Rewrite <asm/div64.h> to work with gcc 4.4.0.") Reported-by: Huacai Chen <chenhuacai@kernel.org> Signed-off-by: Maciej W. Rozycki <macro@orcam.me.uk> Cc: stable@vger.kernel.org # v2.6.30+ Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2021-04-21  MIPS: Makefile: Replace -pg with CC_FLAGS_FTRACE  (zhaoxiao)
This patch replaces the "open-coded" -pg compile flag with a CC_FLAGS_FTRACE makefile variable which architectures can override if a different option should be used for code generation. Signed-off-by: zhaoxiao <zhaoxiao@uniontech.com> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2021-04-21  MIPS: pci-legacy: revert "use generic pci_enable_resources"  (Ilya Lipnitskiy)
This mostly reverts commit 99bca615d895 ("MIPS: pci-legacy: use generic pci_enable_resources"). Fixes regressions such as: ata_piix 0000:00:0a.1: can't enable device: BAR 0 [io 0x01f0-0x01f7] not claimed ata_piix: probe of 0000:00:0a.1 failed with error -22 The only changes from the strict revert are to fix checkpatch errors: ERROR: spaces required around that '=' (ctx:VxV) #33: FILE: arch/mips/pci/pci-legacy.c:252: + for (idx=0; idx < PCI_NUM_RESOURCES; idx++) { ^ ERROR: do not use assignment in if condition #67: FILE: arch/mips/pci/pci-legacy.c:284: + if ((err = pcibios_enable_resources(dev, mask)) < 0) Reported-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Ilya Lipnitskiy <ilya.lipnitskiy@gmail.com> Tested-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2021-04-21  s390/disassembler: increase ebpf disasm buffer size  (Vasily Gorbik)
Current ebpf disassembly buffer size of 64 is too small. E.g. this line takes 65 bytes: 01fffff8005822e: ec8100ed8065\tclgrj\t%r8,%r1,8,001fffff80058408\n\0 Double the buffer size like it is done for the kernel disassembly buffer. Fixes the following KASAN finding: UG: KASAN: stack-out-of-bounds in print_fn_code+0x34c/0x380 Write of size 1 at addr 001fff800ad5f970 by task test_progs/853 CPU: 53 PID: 853 Comm: test_progs Not tainted 5.12.0-rc7-23786-g23457d86b1f0-dirty #19 Hardware name: IBM 3906 M04 704 (LPAR) Call Trace: [<0000000cd8e0538a>] show_stack+0x17a/0x1668 [<0000000cd8e2a5d8>] dump_stack+0x140/0x1b8 [<0000000cd8e16e74>] print_address_description.constprop.0+0x54/0x260 [<0000000cd75a8698>] kasan_report+0xc8/0x130 [<0000000cd6e26da4>] print_fn_code+0x34c/0x380 [<0000000cd6ea0f4e>] bpf_int_jit_compile+0xe3e/0xe58 [<0000000cd72c4c88>] bpf_prog_select_runtime+0x5b8/0x9c0 [<0000000cd72d1bf8>] bpf_prog_load+0xa78/0x19c0 [<0000000cd72d7ad6>] __do_sys_bpf.part.0+0x18e/0x768 [<0000000cd6e0f392>] do_syscall+0x12a/0x220 [<0000000cd8e333f8>] __do_syscall+0x98/0xc8 [<0000000cd8e54834>] system_call+0x6c/0x94 1 lock held by test_progs/853: #0: 0000000cd9bf7460 (report_lock){....}-{2:2}, at: kasan_report+0x96/0x130 addr 001fff800ad5f970 is located in stack of task test_progs/853 at offset 96 in frame: print_fn_code+0x0/0x380 this frame has 1 object: [32, 96) 'buffer' Memory state around the buggy address: 001fff800ad5f800: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 001fff800ad5f880: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 >001fff800ad5f900: 00 00 f1 f1 f1 f1 00 00 00 00 00 00 00 00 f3 f3 ^ 001fff800ad5f980: f3 f3 00 00 00 00 00 00 00 00 00 00 00 00 00 00 001fff800ad5fa00: 00 00 00 00 00 00 00 f1 f1 f1 f1 00 00 00 00 00 Cc: <stable@vger.kernel.org> Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-04-21  s390/archrandom: add parameter check for s390_arch_random_generate  (Harald Freudenberger)
A review of the code showed that this function, which is exposed within the whole kernel, should do a parameter check for the number of bytes requested. If the requested number of bytes is too high, an unsigned int overflow could happen, causing this function to try to memcpy a really big memory chunk. This is not a security issue, as there are only two invocations of this function from arch/s390/include/asm/archrandom.h and both are not exposed to userland. Reported-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
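A hedged sketch of the added bounds check (the exact limit and the surrounding body are assumptions, not quoted from the patch):

    bool s390_arch_random_generate(u8 *buf, unsigned int nbytes)
    {
        /* Reject absurd sizes before they can overflow the accounting or
         * trigger a huge copy; the precise limit used upstream may differ. */
        if (!buf || nbytes > INT_MAX)
            return false;
        cpacf_trng(NULL, 0, buf, nbytes);
        atomic64_add(nbytes, &s390_arch_random_counter);
        return true;
    }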
2021-04-21  drivers: hv: Create a consistent pattern for checking Hyper-V hypercall status  (Joseph Salisbury)
There is not a consistent pattern for checking Hyper-V hypercall status. Existing code uses a number of variants. The variants work, but a consistent pattern would improve the readability of the code, and be more conformant to what the Hyper-V TLFS says about hypercall status. Implemented new helper functions hv_result(), hv_result_success(), and hv_repcomp(). Changed the places where hv_do_hypercall() and related variants are used to use the helper functions. Signed-off-by: Joseph Salisbury <joseph.salisbury@microsoft.com> Reviewed-by: Michael Kelley <mikelley@microsoft.com> Link: https://lore.kernel.org/r/1618620183-9967-2-git-send-email-joseph.salisbury@linux.microsoft.com Signed-off-by: Wei Liu <wei.liu@kernel.org>
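A plausible shape for the helpers named above, based on the TLFS layout where the low 16 bits of the hypercall return value carry the status and bits 43:32 the reps-completed count (the mask/offset macro names are assumed from hyperv-tlfs.h):

    static inline u32 hv_result(u64 status)
    {
        return status & HV_HYPERCALL_RESULT_MASK;
    }

    static inline bool hv_result_success(u64 status)
    {
        return hv_result(status) == HV_STATUS_SUCCESS;
    }

    static inline unsigned int hv_repcomp(u64 status)
    {
        /* Number of elements completed by a rep hypercall. */
        return (status & HV_HYPERCALL_REP_COMP_MASK) >> HV_HYPERCALL_REP_COMP_OFFSET;
    }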
2021-04-21  x86/hyperv: Move hv_do_rep_hypercall to asm-generic  (Joseph Salisbury)
This patch makes no functional changes. It simply moves hv_do_rep_hypercall() out of arch/x86/include/asm/mshyperv.h and into asm-generic/mshyperv.h. hv_do_rep_hypercall() is architecture independent, so it makes sense that it should be in the architecture-independent mshyperv.h, not in the x86-specific mshyperv.h. This is done in preparation for a follow-up patch which creates a consistent pattern for checking Hyper-V hypercall status. Signed-off-by: Joseph Salisbury <joseph.salisbury@microsoft.com> Reviewed-by: Michael Kelley <mikelley@microsoft.com> Link: https://lore.kernel.org/r/1618620183-9967-1-git-send-email-joseph.salisbury@linux.microsoft.com Signed-off-by: Wei Liu <wei.liu@kernel.org>
2021-04-20  x86/crash: Fix crash_setup_memmap_entries() out-of-bounds access  (Mike Galbraith)
Commit in Fixes: added support for kexec-ing a kernel on panic using a new system call. As part of it, it does prepare a memory map for the new kernel. However, while doing so, it wrongly accesses memory it has not allocated: it accesses the first element of the cmem->ranges[] array in memmap_exclude_ranges() but it has not allocated the memory for it in crash_setup_memmap_entries(). As KASAN reports: BUG: KASAN: vmalloc-out-of-bounds in crash_setup_memmap_entries+0x17e/0x3a0 Write of size 8 at addr ffffc90000426008 by task kexec/1187 (gdb) list *crash_setup_memmap_entries+0x17e 0xffffffff8107cafe is in crash_setup_memmap_entries (arch/x86/kernel/crash.c:322). 317 unsigned long long mend) 318 { 319 unsigned long start, end; 320 321 cmem->ranges[0].start = mstart; 322 cmem->ranges[0].end = mend; 323 cmem->nr_ranges = 1; 324 325 /* Exclude elf header region */ 326 start = image->arch.elf_load_addr; (gdb) Make sure the ranges array is allocated with space for at least a single element. [ bp: Write a proper commit message. ] Fixes: dd5f726076cc ("kexec: support for kexec on panic using new system call") Signed-off-by: Mike Galbraith <efault@gmx.de> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Dave Young <dyoung@redhat.com> Cc: <stable@vger.kernel.org> Link: https://lkml.kernel.org/r/725fa3dc1da2737f0f6188a1a9701bead257ea9d.camel@gmx.de
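A sketch of the allocation fix described above (assuming the usual struct_size() idiom; not quoted from the patch itself):

    struct crash_mem *cmem;

    /* Allocate room for the struct plus one ranges[] element, instead of
     * the bare struct that left ranges[0] out of bounds. */
    cmem = vzalloc(struct_size(cmem, ranges, 1));
    if (!cmem)
        return -ENOMEM;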
2021-04-20  floppy: remove redundant assignment to variable st  (Colin Ian King)
The variable st is being assigned a value that is never read and it is being updated later with a new value. The initialization is redundant and can be removed. Addresses-Coverity: ("Unused value") Signed-off-by: Colin Ian King <colin.king@canonical.com> Reviewed-by: Denis Efremov <efremov@linux.com> Acked-by: Willy Tarreau <w@1wt.eu> Link: https://lore.kernel.org/r/20210415130020.1959951-1-colin.king@canonical.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-20  s390/pci: expose a PCI device's UID as its index  (Niklas Schnelle)
On s390 each PCI device has a user-defined ID (UID) exposed under /sys/bus/pci/devices/<dev>/uid. This ID was designed to serve as the PCI device's primary index and to match the device within Linux to the device configured in the hypervisor. To serve as a primary identifier the UID must be unique within the Linux instance, this is guaranteed by the platform if and only if the UID Uniqueness Checking flag is set within the CLP List PCI Functions response. In this sense the UID serves an analogous function as the SMBIOS instance number or ACPI index exposed as the "index" respectively "acpi_index" device attributes and used by e.g. systemd to set interface names. As s390 does not use and will likely never use ACPI nor SMBIOS there is no conflict and we can just expose the UID under the "index" attribute whenever UID Uniqueness Checking is active and get systemd's interface naming support for free. Link: https://lore.kernel.org/lkml/20210412135905.1434249-1-schnelle@linux.ibm.com/ Acked-by: Viktor Mihajlovski <mihajlov@linux.ibm.com> Acked-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Narendra K <narendra_k@dell.com> Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-04-20  Merge tag 'bmc-5.13-devicetree-2' of git://git.kernel.org/pub/scm/linux/kernel/git/joel/bmc into arm/dt  (Arnd Bergmann)
BMC device tree updates for 5.13, round two - Fixes to the first pull request for Rainier - Small changes for Rainier, EthanolX and Tiogapass * tag 'bmc-5.13-devicetree-2' of git://git.kernel.org/pub/scm/linux/kernel/git/joel/bmc: ARM: dts: aspeed: tiogapass: add hotplug controller ARM: dts: aspeed: amd-ethanolx: Enable all used I2C busses ARM: dts: aspeed: Rainier: Update to pass 2 hardware ARM: dts: aspeed: Rainier 1S4U: Fix fan nodes ARM: dts: aspeed: Rainier: Fix humidity sensor bus address ARM: dts: aspeed: Rainier: Fix PCA9552 on bus 8 Link: https://lore.kernel.org/r/CACPK8XeJdHmxyJn43Ju5hmxJ7+rJgHmx=ANFaL17YUmp+gOtJg@mail.gmail.com Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2021-04-20  KVM: SVM: Define actual size of IOPM and MSRPM tables  (Krish Sadhukhan)
Define the actual size of the IOPM and MSRPM tables so that the actual size can be used when initializing them and when checking the consistency of their physical address. These #defines are placed in svm.h so that they can be shared. Suggested-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Message-Id: <20210412215611.110095-2-krish.sadhukhan@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
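A plausible form of the shared #defines this refers to (the exact values are assumptions: the IO permission map spans three pages and the MSR permission map two):

    /* svm.h -- shared between vmcb setup and the nested consistency checks. */
    #define IOPM_SIZE   PAGE_SIZE * 3
    #define MSRPM_SIZE  PAGE_SIZE * 2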
2021-04-20  KVM: x86: Add capability to grant VM access to privileged SGX attribute  (Sean Christopherson)
Add a capability, KVM_CAP_SGX_ATTRIBUTE, that can be used by userspace to grant a VM access to a privileged attribute, with args[0] holding a file handle to a valid SGX attribute file. The SGX subsystem restricts access to a subset of enclave attributes to provide additional security for an uncompromised kernel, e.g. to prevent malware from using the PROVISIONKEY to ensure its nodes are running inside a genuine SGX enclave and/or to obtain a stable fingerprint. To prevent userspace from circumventing such restrictions by running an enclave in a VM, KVM restricts guest access to privileged attributes by default. Cc: Andy Lutomirski <luto@amacapital.net> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Kai Huang <kai.huang@intel.com> Message-Id: <0b099d65e933e068e3ea934b0523bab070cb8cea.1618196135.git.kai.huang@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
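A hedged userspace usage sketch for the capability (the attribute-file path is an assumption used only for illustration):

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Grant the VM the privileged SGX attribute by passing a file handle to
     * the corresponding SGX attribute file, as described above. */
    int grant_sgx_provision(int vm_fd)
    {
        struct kvm_enable_cap cap = { .cap = KVM_CAP_SGX_ATTRIBUTE };
        int fd = open("/dev/sgx_provision", O_RDONLY);   /* assumed path */

        if (fd < 0)
            return -1;
        cap.args[0] = fd;
        return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
    }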
2021-04-20  KVM: VMX: Enable SGX virtualization for SGX1, SGX2 and LC  (Sean Christopherson)
Enable SGX virtualization now that KVM has the VM-Exit handlers needed to trap-and-execute ENCLS to ensure correctness and/or enforce the CPU model exposed to the guest. Add a KVM module param, "sgx", to allow an admin to disable SGX virtualization independent of the kernel. When supported in hardware and the kernel, advertise SGX1, SGX2 and SGX LC to userspace via CPUID and wire up the ENCLS_EXITING bitmap based on the guest's SGX capabilities, i.e. to allow ENCLS to be executed in an SGX-enabled guest. With the exception of the provision key, all SGX attribute bits may be exposed to the guest. Guest access to the provision key, which is controlled via securityfs, will be added in a future patch. Note, KVM does not yet support exposing ENCLS_C leafs or ENCLV leafs. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Kai Huang <kai.huang@intel.com> Message-Id: <a99e9c23310c79f2f4175c1af4c4cbcef913c3e5.1618196135.git.kai.huang@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-04-20  KVM: VMX: Add ENCLS[EINIT] handler to support SGX Launch Control (LC)  (Sean Christopherson)
Add a VM-Exit handler to trap-and-execute EINIT when SGX LC is enabled in the host. When SGX LC is enabled, the host kernel may rewrite the hardware values at will, e.g. to launch enclaves with different signers, thus KVM needs to intercept EINIT to ensure it is executed with the correct LE hash (even if the guest sees a hardwired hash). Switching the LE hash MSRs on VM-Enter/VM-Exit is not a viable option as writing the MSRs is prohibitively expensive, e.g. on SKL hardware each WRMSR is ~400 cycles. And because EINIT takes tens of thousands of cycles to execute, the ~1500 cycle overhead to trap-and-execute EINIT is unlikely to be noticed by the guest, let alone impact its overall SGX performance. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Kai Huang <kai.huang@intel.com> Message-Id: <57c92fa4d2083eb3be9e6355e3882fc90cffea87.1618196135.git.kai.huang@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-04-20  KVM: VMX: Add emulation of SGX Launch Control LE hash MSRs  (Sean Christopherson)
Emulate the four Launch Enclave public key hash MSRs (LE hash MSRs) that exist on CPUs that support SGX Launch Control (LC). SGX LC modifies the behavior of ENCLS[EINIT] to use the LE hash MSRs when verifying the key used to sign an enclave. On CPUs without LC support, the LE hash is hardwired into the CPU to an Intel controlled key (the Intel key is also the reset value of the LE hash MSRs). Track the guest's desired hash so that a future patch can stuff the hash into the hardware MSRs when executing EINIT on behalf of the guest, when those MSRs are writable in host. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Co-developed-by: Kai Huang <kai.huang@intel.com> Signed-off-by: Kai Huang <kai.huang@intel.com> Message-Id: <c58ef601ddf88f3a113add837969533099b1364a.1618196135.git.kai.huang@intel.com> [Add a comment regarding the MSRs being available until SGX is locked. - Paolo] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-04-20  KVM: VMX: Add SGX ENCLS[ECREATE] handler to enforce CPUID restrictions  (Sean Christopherson)
Add an ECREATE handler that will be used to intercept ECREATE for the purpose of enforcing an enclave's MISCSELECT, ATTRIBUTES and XFRM, i.e. to allow userspace to restrict SGX features via CPUID. ECREATE will be intercepted when any of the aforementioned masks diverges from hardware in order to enforce the desired CPUID model, i.e. inject #GP if the guest attempts to set a bit that hasn't been enumerated as allowed-1 in CPUID. Note, access to the PROVISIONKEY is not yet supported. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Co-developed-by: Kai Huang <kai.huang@intel.com> Signed-off-by: Kai Huang <kai.huang@intel.com> Message-Id: <c3a97684f1b71b4f4626a1fc3879472a95651725.1618196135.git.kai.huang@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-04-20  KVM: VMX: Frame in ENCLS handler for SGX virtualization  (Sean Christopherson)
Introduce sgx.c and sgx.h, along with the framework for handling ENCLS VM-Exits. Add a bool, enable_sgx, that will eventually be wired up to a module param to control whether or not SGX virtualization is enabled at runtime. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Kai Huang <kai.huang@intel.com> Message-Id: <1c782269608b2f5e1034be450f375a8432fb705d.1618196135.git.kai.huang@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-04-20  KVM: VMX: Add basic handling of VM-Exit from SGX enclave  (Sean Christopherson)
Add support for handling VM-Exits that originate from a guest SGX enclave. In SGX, an "enclave" is a new CPL3-only execution environment, wherein the CPU and memory state is protected by hardware to make the state inaccessible to code running outside of the enclave. When exiting an enclave due to an asynchronous event (from the perspective of the enclave), e.g. exceptions, interrupts, and VM-Exits, the enclave's state is automatically saved and scrubbed (the CPU loads synthetic state), and then reloaded when re-entering the enclave. E.g. after an instruction based VM-Exit from an enclave, vmcs.GUEST_RIP will not contain the RIP of the enclave instruction that triggered VM-Exit, but will instead point to a RIP in the enclave's untrusted runtime (the guest userspace code that coordinates entry/exit to/from the enclave). To help a VMM recognize and handle exits from enclaves, SGX adds bits to existing VMCS fields, VM_EXIT_REASON.VMX_EXIT_REASON_FROM_ENCLAVE and GUEST_INTERRUPTIBILITY_INFO.GUEST_INTR_STATE_ENCLAVE_INTR. Define the new architectural bits, and add a boolean to struct vcpu_vmx to cache VMX_EXIT_REASON_FROM_ENCLAVE. Clear the bit in exit_reason so that checks against exit_reason do not need to account for SGX, e.g. "if (exit_reason == EXIT_REASON_EXCEPTION_NMI)" continues to work. KVM is largely a passive observer of the new bits, e.g. KVM needs to account for the bits when propagating information to a nested VMM, but otherwise doesn't need to act differently for the majority of VM-Exits from enclaves. The one scenario that is directly impacted is emulation, which is for all intents and purposes impossible[1] since KVM does not have access to the RIP or instruction stream that triggered the VM-Exit. The inability to emulate is a non-issue for KVM, as most instructions that might trigger VM-Exit unconditionally #UD in an enclave (before the VM-Exit check). For the few instructions that conditionally #UD, KVM either never sets the exiting control, e.g. PAUSE_EXITING[2], or sets it if and only if the feature is not exposed to the guest in order to inject a #UD, e.g. RDRAND_EXITING. But, because it is still possible for a guest to trigger emulation, e.g. MMIO, inject a #UD if KVM ever attempts emulation after a VM-Exit from an enclave. This is architecturally accurate for instruction VM-Exits, and for MMIO it's the least bad choice, e.g. it's preferable to killing the VM. In practice, only broken or particularly stupid guests should ever encounter this behavior. Add a WARN in skip_emulated_instruction to detect any attempt to modify the guest's RIP during an SGX enclave VM-Exit as all such flows should either be unreachable or must handle exits from enclaves before getting to skip_emulated_instruction. [1] Impossible for all practical purposes. Not truly impossible since KVM could implement some form of para-virtualization scheme. [2] PAUSE_LOOP_EXITING only affects CPL0 and enclaves exist only at CPL3, so we also don't need to worry about that interaction. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Kai Huang <kai.huang@intel.com> Message-Id: <315f54a8507d09c292463ef29104e1d4c62e9090.1618196135.git.kai.huang@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-04-20  KVM: x86: Add reverse-CPUID lookup support for scattered SGX features  (Sean Christopherson)
Define a new KVM-only feature word for advertising and querying SGX sub-features in CPUID.0x12.0x0.EAX. Because SGX1 and SGX2 are scattered in the kernel's feature word, they need to be translated so that the bit numbers match those of hardware. Signed-off-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Kai Huang <kai.huang@intel.com> Message-Id: <e797c533f4c71ae89265bbb15a02aef86b67cbec.1618196135.git.kai.huang@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-04-20  KVM: x86: Add support for reverse CPUID lookup of scattered features  (Sean Christopherson)
Introduce a scheme that allows KVM's CPUID magic to support features that are scattered in the kernel's feature words. To advertise and/or query guest support for CPUID-based features, KVM requires the bit number of an X86_FEATURE_* to match the bit number in its associated CPUID entry. For scattered features, this does not hold true. Add a framework to allow defining KVM-only words, stored in kvm_cpu_caps after the shared kernel caps, that can be used to gather the scattered feature bits by translating X86_FEATURE_* flags into their KVM-defined feature. Note, because reverse_cpuid_check() effectively forces kvm_cpu_caps lookups to be resolved at compile time, there is no runtime cost for translating from kernel-defined to kvm-defined features. More details here: https://lkml.kernel.org/r/X/jxCOLG+HUO4QlZ@google.com Signed-off-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Kai Huang <kai.huang@intel.com> Message-Id: <16cad8d00475f67867fb36701fc7fb7c1ec86ce1.1618196135.git.kai.huang@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-04-20  KVM: x86: Define new #PF SGX error code bit  (Sean Christopherson)
Page faults that are signaled by the SGX Enclave Page Cache Map (EPCM), as opposed to the traditional IA32/EPT page tables, set an SGX bit in the error code to indicate that the #PF was induced by SGX. KVM will need to emulate this behavior as part of its trap-and-execute scheme for virtualizing SGX Launch Control, e.g. to inject SGX-induced #PFs if EINIT faults in the host, and to support live migration. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Kai Huang <kai.huang@intel.com> Message-Id: <e170c5175cb9f35f53218a7512c9e3db972b97a2.1618196135.git.kai.huang@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
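A sketch of the new error-code bit (the SGX flag occupies bit 15 of the #PF error code per the SDM; the macro names mirror the existing PFERR_* definitions and are shown for illustration):

    #define PFERR_SGX_BIT   15
    #define PFERR_SGX_MASK  (1U << PFERR_SGX_BIT)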