2024-12-23KVM: x86/tdp_mmu: Take root types for kvm_tdp_mmu_invalidate_all_roots()Isaku Yamahata
Rename kvm_tdp_mmu_invalidate_all_roots() to kvm_tdp_mmu_invalidate_roots(), and have it take an enum kvm_tdp_mmu_root_types argument. kvm_tdp_mmu_invalidate_roots() is called with different root types: for kvm_mmu_zap_all_fast() it only operates on shared roots, but when tearing down a VM it needs to invalidate all roots. Have the callers invalidate only the required roots instead of all roots. Within kvm_tdp_mmu_invalidate_roots(), respect the root type that was passed by checking it in the root iterator. Suggested-by: Chao Gao <chao.gao@intel.com> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Message-ID: <20240718211230.1492011-17-rick.p.edgecombe@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
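A minimal sketch of the reshaped helper, assuming the tdp_mmu_root_match() filter introduced earlier in this series; the upstream body (locking, invalid-root handling) is more involved:

  void kvm_tdp_mmu_invalidate_roots(struct kvm *kvm,
                                    enum kvm_tdp_mmu_root_types root_types)
  {
          struct kvm_mmu_page *root;

          /* Walk all roots, but only invalidate those of the requested types. */
          list_for_each_entry(root, &kvm->arch.tdp_mmu_roots, link) {
                  if (!tdp_mmu_root_match(root, root_types))
                          continue;

                  root->role.invalid = true;
          }
  }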
2024-12-23KVM: x86/tdp_mmu: Propagate tearing down mirror page tablesIsaku Yamahata
Integrate hooks for mirroring page table operations for cases where TDX will zap PTEs or free page tables. Like other Coco technologies, TDX has the concept of private and shared memory. For TDX the private and shared mappings are managed on separate EPT roots. The private half is managed indirectly through calls into a protected runtime environment called the TDX module, while the shared half is managed within KVM in normal page tables. Since calls into the TDX module are relatively slow, walking private page tables by making calls into the TDX module would not be efficient. Because of this, previous changes have taught the TDP MMU to keep a mirror root, which is a separate, unmapped TDP root that private operations can be directed to. Currently this root is disconnected from the guest. Now add plumbing to propagate changes to the "external" page tables being mirrored. Just create the x86_ops for now, leave plumbing the operations into the TDX module for future patches. Add two operations for tearing down page tables, one for freeing page tables (free_external_spt) and one for zapping PTEs (remove_external_spte). Define them such that remove_external_spte will perform a TLB flush as well (in TDX terms, "ensure there are no active translations"). TDX MMU support will exclude certain MMU operations, so only plug in the mirroring x86 ops where they will be needed. For zapping/freeing, only hook tdp_mmu_iter_set_spte(), which is used for zapping PTEs and freeing page tables with the mmu_lock held for write. Don't bother hooking tdp_mmu_set_spte_atomic() as it is only used for zapping PTEs in operations unsupported by TDX: zapping collapsible PTEs and kvm_mmu_zap_all_fast(). In previous changes to address races around concurrent populating using tdp_mmu_set_spte_atomic(), a solution was introduced to temporarily set FROZEN_SPTE in the mirrored page tables while performing the external operations. Such a solution is not needed for the tear down paths in TDX as these will always be performed with the mmu_lock held for write. Sprinkle some KVM_BUG_ON()s to reflect this. Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com> Co-developed-by: Kai Huang <kai.huang@intel.com> Signed-off-by: Kai Huang <kai.huang@intel.com> Co-developed-by: Yan Zhao <yan.y.zhao@intel.com> Signed-off-by: Yan Zhao <yan.y.zhao@intel.com> Co-developed-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Message-ID: <20240718211230.1492011-16-rick.p.edgecombe@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
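A sketch of how the two hooks could look in struct kvm_x86_ops; the parameter lists below are inferred from the description (GFN, level, and the page or PFN being torn down), not copied from the tree:

  struct kvm_x86_ops {
          /* ... existing ops elided ... */

          /* Free a page backing an external (S-EPT) page table. */
          int (*free_external_spt)(struct kvm *kvm, gfn_t gfn,
                                   enum pg_level level, void *external_spt);

          /*
           * Zap a leaf PTE in the external page table; implementations must
           * also flush the TLB ("ensure there are no active translations").
           */
          int (*remove_external_spte)(struct kvm *kvm, gfn_t gfn,
                                      enum pg_level level, kvm_pfn_t pfn);
  };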
2024-12-23KVM: x86/tdp_mmu: Propagate building mirror page tablesIsaku Yamahata
Integrate hooks for mirroring page table operations for cases where TDX will set PTEs or link page tables. Like other Coco technologies, TDX has the concept of private and shared memory. For TDX the private and shared mappings are managed on separate EPT roots. The private half is managed indirectly through calls into a protected runtime environment called the TDX module, while the shared half is managed within KVM in normal page tables. Since calls into the TDX module are relatively slow, walking private page tables by making calls into the TDX module would not be efficient. Because of this, previous changes have taught the TDP MMU to keep a mirror root, which is a separate, unmapped TDP root that private operations can be directed to. Currently this root is disconnected from any actual guest mapping. Now add plumbing to propagate changes to the "external" page tables being mirrored. Just create the x86_ops for now, leave plumbing the operations into the TDX module for future patches.

Add two operations for setting up external page tables, one for linking new page tables and one for setting leaf PTEs. Don't add any op for configuring the root PFN, as TDX handles this itself. Also don't provide a way to set permissions on the PTEs, as TDX doesn't support it. This results in MMU "mirroring" support that is very targeted towards TDX. Since it is likely there will be no other user, the main benefit of making the support generic is to keep TDX specific *looking* code outside of the MMU. As a generic feature it will make enough sense from TDX's perspective, and for developers unfamiliar with the TDX arch it can express the general concepts such that they can continue to work in the code.

TDX MMU support will exclude certain MMU operations, so only plug in the mirroring x86 ops where they will be needed. For setting/linking, only hook tdp_mmu_set_spte_atomic(), which is used for mapping and linking PTs. Don't bother hooking tdp_mmu_iter_set_spte() as it is only used for setting PTEs in operations unsupported by TDX: splitting huge pages and write protecting. Sprinkle KVM_BUG_ON()s to document as code that these paths are not supported for mirrored page tables. For zapping operations, leave those for near future changes.

Many operations in the TDP MMU depend on atomicity of the PTE update. While the mirror PTE on KVM's side can be updated atomically, the update that happens inside the external operations (S-EPT updates via TDX module call) can't happen atomically with the mirror update. The following race could result during two vCPUs populating private memory:

* vcpu 1: atomically update 2M level mirror EPT entry to be present
* vcpu 2: read 2M level EPT entry that is present
* vcpu 2: walk down into 4K level EPT
* vcpu 2: atomically update 4K level mirror EPT entry to be present
* vcpu 2: set_external_spte() to update 4K secure EPT entry => error
  because 2M secure EPT entry is not populated yet
* vcpu 1: link_external_spt() to update 2M secure EPT entry

Prevent this by setting the mirror PTE to FROZEN_SPTE while the external operations are performed. Only write the actual mirror PTE value once the external operations have completed. When trying to set a PTE to present and encountering a frozen SPTE, retry the fault.
By doing this the race is prevented as follows:

* vcpu 1: atomically update 2M level EPT entry to be FROZEN_SPTE
* vcpu 2: read 2M level EPT entry that is FROZEN_SPTE
* vcpu 2: find that the EPT entry is frozen, abandon the page table walk
  and resume guest execution
* vcpu 1: link_external_spt() to update 2M secure EPT entry
* vcpu 1: atomically update 2M level EPT entry to be present (unfreeze)
* vcpu 2: resume guest execution; depending on vcpu 1's progress, vcpu 2
  may take an EPT violation again or make progress

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com> Co-developed-by: Kai Huang <kai.huang@intel.com> Signed-off-by: Kai Huang <kai.huang@intel.com> Co-developed-by: Yan Zhao <yan.y.zhao@intel.com> Signed-off-by: Yan Zhao <yan.y.zhao@intel.com> Co-developed-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Message-ID: <20240718211230.1492011-15-rick.p.edgecombe@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
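A condensed sketch of the freeze protocol for the populate path, assuming the link_external_spt()/set_external_spte() ops named above; the real tdp_mmu_set_spte_atomic() path handles more cases:

  static int mirror_set_spte_atomic(struct kvm *kvm, struct tdp_iter *iter,
                                    u64 new_spte, void *external_spt)
  {
          u64 old_spte = iter->old_spte;
          int ret;

          /* 1) Freeze the mirror PTE so concurrent faulters back off and retry. */
          if (!try_cmpxchg64(rcu_dereference(iter->sptep), &old_spte, FROZEN_SPTE))
                  return -EBUSY;

          /* 2) Propagate the change to the external (S-EPT) page tables. */
          if (!is_last_spte(new_spte, iter->level))
                  ret = kvm_x86_call(link_external_spt)(kvm, iter->gfn,
                                                        iter->level, external_spt);
          else
                  ret = kvm_x86_call(set_external_spte)(kvm, iter->gfn, iter->level,
                                                        spte_to_pfn(new_spte));

          /* 3) Unfreeze: publish the final mirror PTE value only after step 2. */
          __kvm_tdp_mmu_write_spte(iter->sptep, ret ? old_spte : new_spte);
          return ret;
  }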
2024-12-23KVM: x86/tdp_mmu: Propagate attr_filter to MMU notifier callbacksPaolo Bonzini
Teach the MMU notifier callbacks how to check kvm_gfn_range.attr_filter to decide which KVM MMU root types to operate on. The private GPAs are backed by guest memfd. Such memory is not subjected to MMU notifier callbacks because it can't be mapped into the host user address space. Now kvm_gfn_range conveys info about which root to operate on. Enhance the callbacks to filter on the root page table type. The KVM MMU notifier comes down to two functions, kvm_tdp_mmu_unmap_gfn_range() and __kvm_tdp_mmu_age_gfn_range():

- invalidate_range_start() calls kvm_tdp_mmu_unmap_gfn_range()
- invalidate_range_end() doesn't call into arch code
- the other callbacks call __kvm_tdp_mmu_age_gfn_range()

For VMs without a private/shared split in the EPT, all operations should target the normal (direct) root. With the switch from for_each_tdp_mmu_root() to __for_each_tdp_mmu_root() in kvm_tdp_mmu_handle_gfn(), there are no longer any users of for_each_tdp_mmu_root(). Remove it. Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Message-ID: <20240718211230.1492011-14-rick.p.edgecombe@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
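The filter-to-root-type translation can be pictured as below; a sketch, with the KVM_FILTER_* names taken from the kvm_gfn_range patch earlier in this series and kvm_has_mirrored_tdp() from the kvm_mmu_page patch:

  static enum kvm_tdp_mmu_root_types
  kvm_gfn_range_filter_to_root_types(struct kvm *kvm,
                                     enum kvm_gfn_range_filter process)
  {
          enum kvm_tdp_mmu_root_types ret = 0;

          /* VMs without a private/shared split only have direct roots. */
          if (!kvm_has_mirrored_tdp(kvm))
                  return KVM_DIRECT_ROOTS;

          if (process & KVM_FILTER_SHARED)
                  ret |= KVM_DIRECT_ROOTS;
          if (process & KVM_FILTER_PRIVATE)
                  ret |= KVM_MIRROR_ROOTS;

          return ret;
  }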
2024-12-23KVM: x86/tdp_mmu: Support mirror root for TDP MMUIsaku Yamahata
Add the ability for the TDP MMU to maintain a mirror of a separate mapping.

Like other Coco technologies, TDX has the concept of private and shared memory. For TDX the private and shared mappings are managed on separate EPT roots. The private half is managed indirectly through calls into a protected runtime environment called the TDX module, while the shared half is managed within KVM in normal page tables.

In order to handle both shared and private memory, KVM needs to learn to handle faults and other operations on the correct root for the operation. KVM could learn the concept of private roots, and operate on them by calling out to operations that call into the TDX module. But there are two problems with that:

1. Calls into the TDX module are relatively slow compared to the simple accesses required to read a PTE managed directly by KVM.
2. Other Coco technologies deal with private memory completely differently and it will make the code confusing when being read from their perspective. Special operations added for TDX that set private or zap private memory will have nothing to do with these other private memory technologies (SEV, etc).

To handle these, instead teach the TDP MMU about a new concept, "mirror roots". Such roots maintain page tables that are not actually mapped, and are just used to traverse quickly to determine if the mid level page tables need to be installed. When the memory being mirrored needs to actually be changed, calls can be made via x86_ops.

  private KVM page fault   |
          |                |
          V                |
     private GPA           |      CPU protected EPTP
          |                |              |
          V                |              V
    mirror PT root         |       external PT root
          |                |              |
          V                |              V
      mirror PT --hook to propagate--> external PT
          |                |              |
          \----------------+------\       |
                           |      |       |
                           |      V       V
                           |     private guest page
                           |
   non-encrypted memory    |      encrypted memory
                           |

Leave calling out to actually update the private page tables that are being mirrored for later changes. Just implement the handling of MMU operations on the mirrored roots.

In order to direct operations to the correct root, add root types KVM_DIRECT_ROOTS and KVM_MIRROR_ROOTS. Tie the usage of mirrored/direct roots to private/shared with conditionals. It could also be implemented by making the kvm_tdp_mmu_root_types and kvm_gfn_range_filter enum bits line up such that conversion could be a direct assignment with a cast. Don't do this, because the mapping of private to mirrored is confusing enough; it is worth not hiding the logic in type casting.

Cleanup the mirror root in kvm_mmu_destroy() instead of the normal place in kvm_mmu_free_roots(), because the private root that is being mirrored cannot be rebuilt like a normal root. It needs to persist for the lifetime of the VM.

The TDX module will also need to be provided with page tables to use for the actual mapping being mirrored by the mirrored page tables. Allocate these in the mapping path using the recently added kvm_mmu_alloc_external_spt().

Don't support 2M pages for now. This is avoided by forcing 4K pages in the fault path. Add a KVM_BUG_ON() to verify.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com> Co-developed-by: Kai Huang <kai.huang@intel.com> Signed-off-by: Kai Huang <kai.huang@intel.com> Co-developed-by: Yan Zhao <yan.y.zhao@intel.com> Signed-off-by: Yan Zhao <yan.y.zhao@intel.com> Co-developed-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Message-ID: <20240718211230.1492011-13-rick.p.edgecombe@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
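A sketch of how a fault could be routed to the right root, assuming a kvm_is_addr_direct() helper that tests the GPA against the shared bit and a mirror_root_hpa field alongside the normal root:

  static struct kvm_mmu_page *
  tdp_mmu_get_root_for_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
  {
          /* Private GPAs are handled on the mirror root... */
          if (unlikely(!kvm_is_addr_direct(vcpu->kvm, fault->addr)))
                  return root_to_sp(vcpu->arch.mmu->mirror_root_hpa);

          /* ...everything else on the normal, direct root. */
          return root_to_sp(vcpu->arch.mmu->root.hpa);
  }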
2024-12-23KVM: x86/tdp_mmu: Take root in tdp_mmu_for_each_pte()Isaku Yamahata
Take the root as an argument of tdp_mmu_for_each_pte() instead of looking it up in the mmu. With no other purpose for passing the mmu, drop it. Future changes will want to change which root is used based on the context of the MMU operation. So change the callers to pass in the root currently used, mmu->root.hpa, in a preparatory patch to make the later one smaller and easier to review. Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Message-ID: <20240718211230.1492011-12-rick.p.edgecombe@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-12-23KVM: x86/tdp_mmu: Introduce KVM MMU root types to specify page table typeIsaku Yamahata
Define an enum kvm_tdp_mmu_root_types to specify the KVM MMU root type [1] so that the iterator on the root page table can consistently filter the root page table type instead of only_valid. TDX KVM will operate on KVM page tables with specified types: shared page table, private page table, or both. Introduce an enum instead of bool only_valid so that we can easily enhance the page table types applicable to shared, private, or both, in addition to valid or not. Replace only_valid=false with KVM_ANY_ROOTS and only_valid=true with KVM_ANY_VALID_ROOTS. Use KVM_ANY_ROOTS and KVM_ANY_VALID_ROOTS to wrap KVM_VALID_ROOTS to avoid further code churn when direct vs mirror root concepts are introduced in future patches. Link: https://lore.kernel.org/kvm/ZivazWQw1oCU8VBC@google.com/ [1] Suggested-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Message-ID: <20240718211230.1492011-11-rick.p.edgecombe@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
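One plausible shape for the enum, with the KVM_ANY_* values wrapping KVM_VALID_ROOTS as described; a sketch, not verbatim upstream:

  enum kvm_tdp_mmu_root_types {
          KVM_INVALID_ROOTS   = BIT(0),
          KVM_VALID_ROOTS     = BIT(1),

          /* Wrappers so later patches can add direct/mirror bits cheaply. */
          KVM_ANY_VALID_ROOTS = KVM_VALID_ROOTS,
          KVM_ANY_ROOTS       = KVM_VALID_ROOTS | KVM_INVALID_ROOTS,
  };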
2024-12-23KVM: x86/tdp_mmu: Extract root invalid check from tdp_mmu_next_root()Isaku Yamahata
Extract tdp_mmu_root_match() to check if the root has the given types and use it for the root page table iterator. It checks only_valid now. TDX KVM operates on a shared page table only (Shared-EPT), a mirrored page table only (Secure-EPT), or both, based on the operation. KVM MMU notifier operations act only on the shared page table, KVM guest_memfd invalidation operations only on the mirrored page table, and so on. Introduce a centralized matching function instead of open coding matching logic in the iterator. The next step is to extend the function to check whether the page is shared or private. Link: https://lore.kernel.org/kvm/ZivazWQw1oCU8VBC@google.com/ Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Message-ID: <20240718211230.1492011-10-rick.p.edgecombe@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
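At this point in the series the extracted check still takes only_valid; a sketch:

  static bool tdp_mmu_root_match(struct kvm_mmu_page *root, bool only_valid)
  {
          /* The only criterion so far: skip invalid roots when asked to. */
          if (only_valid && root->role.invalid)
                  return false;

          return true;
  }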
2024-12-23KVM: x86/mmu: Support GFN direct bitsIsaku Yamahata
Teach the MMU to map guest GFNs at a massaged position on the TDP, to aid in implementing TDX shared memory. Like other Coco technologies, TDX has the concept of private and shared memory. For TDX the private and shared mappings are managed on separate EPT roots. The private half is managed indirectly through calls into a protected runtime environment called the TDX module, while the shared half is managed within KVM in normal page tables. For TDX, the shared half will be mapped in the higher alias, with a "shared bit" set in the GPA. However, KVM will still manage it with the same memslots as the private half. This means memslot lookups and zapping operations will be provided with a GFN without the shared bit set. So KVM will either need to apply or strip the shared bit before mapping or zapping the shared EPT. Having GFNs sometimes have the shared bit and sometimes not would make the code confusing. So instead arrange the code such that GFNs never have the shared bit set. Create a concept of "direct bits", that is stripped from the fault address when setting fault->gfn, and applied within the TDP MMU iterator. Calling code will behave as if it is operating on the PTE mapping the GFN (without shared bits) but within the iterator, the actual mappings will be shifted using bits specific for the root. SPs will have the GFN set without the shared bit. In the end the TDP MMU will behave like it is mapping things at the GFN without the shared bit but with a strange page table format where everything is offset by the shared bit. Since TDX only needs to shift the mapping like this for the shared bit, which is mapped as the normal TDP root, add a "gfn_direct_bits" field to the kvm_arch structure for each VM with a default value of 0. It will have the bit set at the position of the GPA shared bit in the GFN through TD specific initialization code. Keep TDX specific concepts out of the MMU code by not naming it "shared". Ranged TLB flushes (i.e. flush_remote_tlbs_range()) target specific GFN ranges. In the convention established above, these would need to target the shifted GFN range. It won't matter functionally, since the actual implementation will always result in a full flush for the only planned user (TDX). For correctness reasons, future changes can provide a TDX x86_ops.flush_remote_tlbs_range implementation to return -EOPNOTSUPP and force the full flush for TDs. This leaves one problem. Some operations use a concept of max GFN (i.e. kvm_mmu_max_gfn()) to iterate over the whole TDP range. When applying the direct mask to the start of the range, the iterator would end up skipping the part of the range not covered by the direct mask bit. For safety, make sure the __tdp_mmu_zap_root() operation iterates over the full GFN range supported by the underlying TDP format. Add a new iterator helper, for_each_tdp_pte_min_level_all(), that iterates the entire TDP GFN range, regardless of root. Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com> Co-developed-by: Yan Zhao <yan.y.zhao@intel.com> Signed-off-by: Yan Zhao <yan.y.zhao@intel.com> Co-developed-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Message-ID: <20240718211230.1492011-9-rick.p.edgecombe@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
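A sketch of the two lookup helpers implied by the text; is_mirror_sp() comes from the mirror-root patches in this series, and the field name matches the gfn_direct_bits described above:

  static inline gfn_t kvm_gfn_direct_bits(const struct kvm *kvm)
  {
          return kvm->arch.gfn_direct_bits;  /* 0 unless set by TD init code */
  }

  /* Only the direct (shared) root is mapped at the shifted position. */
  static inline gfn_t kvm_gfn_root_bits(const struct kvm *kvm,
                                        const struct kvm_mmu_page *root)
  {
          return is_mirror_sp(root) ? 0 : kvm_gfn_direct_bits(kvm);
  }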
2024-12-23KVM: x86/tdp_mmu: Take struct kvm in iter loopsIsaku Yamahata
Add a struct kvm argument to the TDP MMU iterators. Future changes will want to change how the iterator behaves based on a member of struct kvm. Change the signature and callers of the iterator loop helpers in a separate patch to make the future one easier to review. Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Message-ID: <20240718211230.1492011-8-rick.p.edgecombe@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-12-23KVM: x86/mmu: Make kvm_tdp_mmu_alloc_root() return voidRick Edgecombe
The kvm_tdp_mmu_alloc_root() function currently always returns 0. This allows the caller, mmu_alloc_direct_roots(), to call kvm_tdp_mmu_alloc_root() and also return 0 in one line: return kvm_tdp_mmu_alloc_root(vcpu); So it is useful even though the return value of kvm_tdp_mmu_alloc_root() is always the same. However, in future changes, kvm_tdp_mmu_alloc_root() will be called twice in mmu_alloc_direct_roots(). This will force the first call to either awkwardly handle the return value that will always be zero or ignore it. So change kvm_tdp_mmu_alloc_root() to return void. Do it in a separate change so the future change will be cleaner. Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Message-ID: <20240718211230.1492011-7-rick.p.edgecombe@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-12-23KVM: x86/mmu: Add an is_mirror member for union kvm_mmu_page_roleIsaku Yamahata
Introduce a "is_mirror" member to the kvm_mmu_page_role union to identify SPTEs associated with the mirrored EPT. The TDX module maintains the private half of the EPT mapped in the TD in its protected memory. KVM keeps a copy of the private GPAs in a mirrored EPT tree within host memory. This "is_mirror" attribute enables vCPUs to find and get the root page of mirrored EPT from the MMU root list for a guest TD. This also allows KVM MMU code to detect changes in mirrored EPT according to the "is_mirror" mmu page role and propagate the changes to the private EPT managed by TDX module. Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Message-ID: <20240718211230.1492011-6-rick.p.edgecombe@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-12-23KVM: x86/mmu: Add an external pointer to struct kvm_mmu_pageIsaku Yamahata
Add an external pointer to struct kvm_mmu_page for TDX's private page table and add helper functions to allocate/initialize/free a private page table page. TDX will only be supported with the TDP MMU. Because the KVM TDP MMU doesn't use unsync_children and write_flooding_count, pack them to make room for a pointer and use a union to avoid memory overhead.

For a private GPA, the CPU refers to a private page table whose contents are encrypted. The dedicated APIs to operate on it (e.g. updating/reading its PTE entry) are used, and they are expensive. When KVM resolves the KVM page fault, it walks the page tables. To reuse the existing KVM MMU code and mitigate the heavy cost of directly walking the private page table, allocate two sets of page tables for the private half of the GPA space. For the page tables that KVM will walk, allocate them like normal and refer to them as mirror page tables. Additionally allocate one more page for the page tables the CPU will walk, and call them external page tables. Resolve the KVM page fault with the existing code, and do additional operations necessary for modifying the external page table in future patches. The relationship of the types of page tables in this scheme is depicted below:

            KVM page fault                       |
                  |                              |
                  V                              |
         ---------+---------                     |
         |                  |                    |
         V                  V                    |
     shared GPA        private GPA               |
         |                  |                    |
         V                  V                    |
   shared PT root     mirror PT root             |   private PT root
         |                  |                    |         |
         V                  V                    |         V
     shared PT         mirror PT --propagate-----+--> external PT
         |                  |                    |         |
         |                  \--------------------+------\  |
         |                                       |      |  |
         V                                       |      V  V
  shared guest page                              |  private guest page
                                                 |
         non-encrypted memory                    |   encrypted memory
                                                 |

  PT          - Page table
  Shared PT   - Visible to KVM, and the CPU uses it for shared mappings.
  External PT - The CPU uses it, but it is invisible to KVM. The TDX
                module updates this table to map private guest pages.
  Mirror PT   - Visible to KVM, but the CPU doesn't use it. KVM uses it
                to propagate PT changes to the actual private PT.

Add a helper kvm_has_mirrored_tdp() to trigger this behavior and wire it to the TDX VM type.

Co-developed-by: Yan Zhao <yan.y.zhao@intel.com> Signed-off-by: Yan Zhao <yan.y.zhao@intel.com> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Reviewed-by: Binbin Wu <binbin.wu@linux.intel.com> Message-ID: <20240718211230.1492011-5-rick.p.edgecombe@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
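A sketch of the space-saving union described above, abridged from struct kvm_mmu_page:

  struct kvm_mmu_page {
          /* ... */
          union {
                  /* These two members aren't used by the TDP MMU. */
                  struct {
                          unsigned int unsync_children;
                          atomic_t write_flooding_count;
                  };
                  /*
                   * Page table page of the external PT: handed to the TDX
                   * module, never dereferenced by KVM itself.
                   */
                  void *external_spt;
          };
          /* ... */
  };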
2024-12-23KVM: x86: Add a VM type define for TDXRick Edgecombe
Add a VM type define for TDX. Future changes will need to lay the groundwork for TDX support by making some behavior conditional on the VM being a TDX guest. Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Message-ID: <20240718211230.1492011-4-rick.p.edgecombe@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-12-23KVM: Add member to struct kvm_gfn_range to indicate private/sharedIsaku Yamahata
Add new members to struct kvm_gfn_range to indicate which mapping (private-vs-shared) to operate on: enum kvm_gfn_range_filter attr_filter. Update the core zapping operations to set them appropriately. TDX utilizes two GPA aliases for the same memslots, one for private memory and one for shared. For private memory, KVM cannot always perform the same operations it does on memory for default VMs, such as zapping pages and having them be faulted back in, as this requires guest coordination. However, some operations, such as guest driven conversion of memory between private and shared, should zap private memory. Internally to the MMU, private and shared mappings are tracked on separate roots. Mapping and zapping operations will operate on the respective GFN alias for each root (private or shared), so zapping operations will by default zap both aliases. Add fields in struct kvm_gfn_range to allow callers to specify which aliases to target, so they can only target the aliases appropriate for their specific operation.

There was feedback that target aliases should be specified such that the default value (0) is to operate on both aliases. Several options were considered: several variations of separate bools, defined such that the default behavior was to process both aliases, either allowed nonsensical configurations or were confusing for the caller; a simple enum was also explored and was close, but was hard to process in the caller. Instead, use an enum with the default value (0) reserved as a disallowed value, and catch ranges that didn't have the target aliases specified by looking for that specific value.

Set the target alias with the enum appropriately for these MMU operations:

- For KVM's mmu notifier callbacks, zap shared pages only, because private pages won't have a userspace mapping.
- For setting memory attributes, kvm_arch_pre_set_memory_attributes() chooses the aliases based on the attribute.
- For guest_memfd invalidations, zap private only.

Link: https://lore.kernel.org/kvm/ZivIF9vjKcuGie3s@google.com/ Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com> Co-developed-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Message-ID: <20240718211230.1492011-3-rick.p.edgecombe@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
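A sketch of the resulting types; note that 0 is deliberately not a valid filter value, so forgotten callers are easy to catch:

  enum kvm_gfn_range_filter {
          KVM_FILTER_SHARED  = BIT(0),
          KVM_FILTER_PRIVATE = BIT(1),
  };

  struct kvm_gfn_range {
          struct kvm_memory_slot *slot;
          gfn_t start;
          gfn_t end;
          union kvm_mmu_notifier_arg arg;
          /* 0 is disallowed; callers must pick at least one alias. */
          enum kvm_gfn_range_filter attr_filter;
          bool may_block;
  };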
2024-12-23KVM: x86/mmu: Zap invalid roots with mmu_lock held for write at uninitRick Edgecombe
Prepare for a future TDX patch which asserts that atomic zapping (i.e. zapping with mmu_lock taken for read) doesn't operate on mirror roots. When tearing down a VM, all roots have to be zapped (including mirror roots once they're in place), so do that with the mmu_lock taken for write. kvm_mmu_uninit_tdp_mmu() is invoked either before or after executing any atomic operations on SPTEs by vCPU threads. Therefore, it will not impact the performance of vCPU threads if kvm_tdp_mmu_zap_invalidated_roots() acquires mmu_lock for write to zap invalid roots. Co-developed-by: Yan Zhao <yan.y.zhao@intel.com> Signed-off-by: Yan Zhao <yan.y.zhao@intel.com> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Message-ID: <20240718211230.1492011-2-rick.p.edgecombe@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-12-23KVM: guest_memfd: Remove RCU-protected attribute from slot->gmem.fileYan Zhao
Remove the RCU-protected attribute from slot->gmem.file. There is no need to use the RCU primitives rcu_assign_pointer()/synchronize_rcu() to update this pointer.

- slot->gmem.file is updated in 3 places: kvm_gmem_bind(), kvm_gmem_unbind(), kvm_gmem_release(). All of them are protected by kvm->slots_lock.
- slot->gmem.file is read in 2 paths:
  (1) kvm_gmem_populate
        kvm_gmem_get_file
          __kvm_gmem_get_pfn
  (2) kvm_gmem_get_pfn
        kvm_gmem_get_file
          __kvm_gmem_get_pfn

Path (1) kvm_gmem_populate() requires holding kvm->slots_lock, so slot->gmem.file is protected by kvm->slots_lock in this path.

Path (2) kvm_gmem_get_pfn() does not require holding kvm->slots_lock. However, it's also not guarded by rcu_read_lock() and rcu_read_unlock(), so synchronize_rcu() in kvm_gmem_unbind()/kvm_gmem_release() will not actually wait for the readers in kvm_gmem_get_pfn() due to the lack of an RCU read-side critical section. Path (2) kvm_gmem_get_pfn() is safe without RCU protection because:

a) kvm_gmem_bind() is called on a new memslot, before the memslot is visible to kvm_gmem_get_pfn().
b) kvm->srcu ensures that kvm_gmem_unbind() and freeing of a memslot occur after the memslot is no longer visible to kvm_gmem_get_pfn().
c) get_file_active() ensures that kvm_gmem_get_pfn() will not access the stale file if kvm_gmem_release() sets it to NULL. This is because if kvm_gmem_release() occurs before kvm_gmem_get_pfn(), get_file_active() will return NULL; if get_file_active() does not return NULL, kvm_gmem_release() should not occur until after kvm_gmem_get_pfn() releases the file reference.

Signed-off-by: Yan Zhao <yan.y.zhao@intel.com> Message-ID: <20241104084303.29909-1-yan.y.zhao@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-12-22KVM: x86: Refactor __kvm_emulate_hypercall() into a macroPaolo Bonzini
Rework __kvm_emulate_hypercall() into a macro so that completion of hypercalls that don't exit to userspace uses direct function calls to the completion helper, i.e. doesn't trigger a retpoline when RETPOLINE=y. Opportunistically take the names of the input registers, as opposed to taking the input values, to preemptively dedup more of the calling code (TDX needs to use different registers). Use the direct GPR accessors to read values to avoid the pointless marking of the registers as available (KVM requires GPRs to always be available). Signed-off-by: Sean Christopherson <seanjc@google.com> Reviewed-by: Binbin Wu <binbin.wu@linux.intel.com> Reviewed-by: Kai Huang <kai.huang@intel.com> Message-ID: <20241128004344.4072099-7-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-12-22KVM: x86: Always complete hypercall via function callbackSean Christopherson
Finish "emulation" of KVM hypercalls by function callback, even when the hypercall is handled entirely within KVM, i.e. doesn't require an exit to userspace, and refactor __kvm_emulate_hypercall()'s return value to *only* communicate whether or not KVM should exit to userspace or resume the guest. (Ab)Use vcpu->run->hypercall.ret to propagate the return value to the callback, purely to avoid having to add a trampoline for every completion callback. Using the function return value for KVM's control flow eliminates the multiplexed return value, where '0' for KVM_HC_MAP_GPA_RANGE (and only that hypercall) means "exit to userspace". Note, the unnecessary extra indirect call and thus potential retpoline will be eliminated in the near future by converting the intermediate layer to a macro. Suggested-by: Binbin Wu <binbin.wu@linux.intel.com> Suggested-by: Kai Huang <kai.huang@intel.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Reviewed-by: Kai Huang <kai.huang@intel.com> Message-ID: <20241128004344.4072099-6-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-12-22KVM: x86: Bump hypercall stat prior to fully completing hypercallSean Christopherson
Increment the "hypercalls" stat for KVM hypercalls as soon as KVM knows it will skip the guest instruction, i.e. once KVM is committed to emulating the hypercall. Waiting until completion adds no known value, and creates a discrepancy where the stat will be bumped if KVM exits to userspace as a result of trying to skip the instruction, but not if the hypercall itself exits. Handling the stat in common code will also avoid the need for another helper to dedup code when TDX comes along (TDX needs a separate completion path due to GPR usage differences). Signed-off-by: Sean Christopherson <seanjc@google.com> Reviewed-by: Binbin Wu <binbin.wu@linux.intel.com> Reviewed-by: Kai Huang <kai.huang@intel.com> Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com> Message-ID: <20241128004344.4072099-5-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-12-22KVM: x86: Move "emulate hypercall" function declarations to x86.hSean Christopherson
Move the declarations for the hypercall emulation APIs to x86.h. While the helpers are exported, they are intended to be consumed only by KVM vendor modules, i.e. don't need to be exposed to the kernel at-large. No functional change intended. Signed-off-by: Sean Christopherson <seanjc@google.com> Reviewed-by: Binbin Wu <binbin.wu@linux.intel.com> Reviewed-by: Kai Huang <kai.huang@intel.com> Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com> Message-ID: <20241128004344.4072099-4-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-12-22KVM: x86: Add a helper to check for user interception of KVM hypercallsBinbin Wu
Add and use user_exit_on_hypercall() to check if userspace wants to handle a KVM hypercall instead of open-coding the logic everywhere. No functional change intended. Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com> Reviewed-by: Isaku Yamahata <isaku.yamahata@intel.com> Reviewed-by: Kai Huang <kai.huang@intel.com> Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com> [sean: squash into one patch, keep explicit KVM_HC_MAP_GPA_RANGE check] Signed-off-by: Sean Christopherson <seanjc@google.com> Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Message-ID: <20241128004344.4072099-3-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
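The helper is likely just a bit test against the hypercall-exit mask that userspace configures via KVM_CAP_EXIT_HYPERCALL; a sketch:

  static inline bool user_exit_on_hypercall(struct kvm *kvm, unsigned long hc_nr)
  {
          /* Userspace opted in to handling this hypercall number itself. */
          return kvm->arch.hypercall_exit_enabled & BIT(hc_nr);
  }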
2024-12-22KVM: x86: clear vcpu->run->hypercall.ret before exiting for KVM_EXIT_HYPERCALLPaolo Bonzini
QEMU up to 9.2.0 assumes that vcpu->run->hypercall.ret is 0 on exit, and never modifies it when processing KVM_EXIT_HYPERCALL. Make this explicit in the code to avoid breakage when KVM starts modifying that field. This in principle is not a good idea... It would have been much better if KVM had set the field to -KVM_ENOSYS from the beginning, so that a dumb userspace that does nothing on KVM_EXIT_HYPERCALL would tell the guest it does not support KVM_HC_MAP_GPA_RANGE. However, breaking userspace is a Very Bad Thing, as everybody should know. Reported-by: Binbin Wu <binbin.wu@linux.intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-12-22Merge tag 'kvm-x86-fixes-6.13-rcN' of https://github.com/kvm-x86/linux into HEADPaolo Bonzini
KVM x86 fixes for 6.13:

- Disable AVIC on SNP-enabled systems that don't allow writes to the virtual APIC page, as such hosts will hit unexpected RMP #PFs in the host when running VMs of any flavor.

- Fix a WARN in the hypercall completion path due to KVM trying to determine if a guest with protected register state is in 64-bit mode (KVM's ABI is to assume such guests only make hypercalls in 64-bit mode).

- Allow the guest to write to supported bits in MSR_AMD64_DE_CFG to fix a regression with Windows guests, and because KVM's read-only behavior appears to be entirely made up.

- Treat TDP MMU faults as spurious if the faulting access is allowed given the existing SPTE. This fixes a benign WARN (other than the WARN itself) due to unexpectedly replacing a writable SPTE with a read-only SPTE.
2024-12-19KVM: x86/mmu: Treat TDP MMU faults as spurious if access is already allowedSean Christopherson
Treat slow-path TDP MMU faults as spurious if the access is allowed given the existing SPTE to fix a benign warning (other than the WARN itself) due to replacing a writable SPTE with a read-only SPTE, and to avoid the unnecessary LOCK CMPXCHG and subsequent TLB flush. If a read fault races with a write fault, fast GUP fails for any reason when trying to "promote" the read fault to a writable mapping, and KVM resolves the write fault first, then KVM will end up trying to install a read-only SPTE (for a !map_writable fault) overtop a writable SPTE. Note, it's not entirely clear why fast GUP fails, or if that's even how KVM ends up with a !map_writable fault with a writable SPTE. If something else is going awry, e.g. due to a bug in mmu_notifiers, then treating read faults as spurious in this scenario could effectively mask the underlying problem. However, retrying the faulting access instead of overwriting an existing SPTE is functionally correct and desirable irrespective of the WARN, and fast GUP _can_ legitimately fail with a writable VMA, e.g. if the Accessed bit in the primary MMU's PTE is toggled and causes a PTE value mismatch. The WARN was also recently added, specifically to track down scenarios where KVM unnecessarily overwrites SPTEs, i.e. treating the fault as spurious doesn't regress KVM's bug-finding capabilities in any way. In short, letting the WARN linger because there's a tiny chance it's due to a bug elsewhere would be excessively paranoid. Fixes: 1a175082b190 ("KVM: x86/mmu: WARN and flush if resolving a TDP MMU fault clears MMU-writable") Reported-by: Lei Yang <leiyang@redhat.com> Closes: https://bugzilla.kernel.org/show_bug.cgi?id=219588 Tested-by: Lei Yang <leiyang@redhat.com> Link: https://lore.kernel.org/r/20241218213611.3181643-1-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
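The gist of the spurious-fault check, as a sketch: if the existing SPTE already allows the faulting access, bail before the LOCK CMPXCHG.

  static inline bool is_access_allowed(struct kvm_page_fault *fault, u64 spte)
  {
          if (fault->exec)
                  return is_executable_pte(spte);

          if (fault->write)
                  return is_writable_pte(spte);

          /* Fault was on a read; any present SPTE satisfies it. */
          return spte & PT_PRESENT_MASK;
  }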
2024-12-19KVM: SVM: Allow guest writes to set MSR_AMD64_DE_CFG bitsSean Christopherson
Drop KVM's arbitrary behavior of making DE_CFG.LFENCE_SERIALIZE read-only for the guest, as rejecting writes can lead to guest crashes, e.g. Windows in particular doesn't gracefully handle unexpected #GPs on the WRMSR, and nothing in the AMD manuals suggests that LFENCE_SERIALIZE is read-only _if it exists_. KVM only allows LFENCE_SERIALIZE to be set, by the guest or host, if the underlying CPU has X86_FEATURE_LFENCE_RDTSC, i.e. if LFENCE is guaranteed to be serializing. So if the guest sets LFENCE_SERIALIZE, KVM will provide the desired/correct behavior without any additional action (the guest's value is never stuffed into hardware). And having LFENCE be serializing even when it's not _required_ to be is a-ok from a functional perspective. Fixes: 74a0e79df68a ("KVM: SVM: Disallow guest from changing userspace's MSR_AMD64_DE_CFG value") Fixes: d1d93fa90f1a ("KVM: SVM: Add MSR-based feature support for serializing LFENCE") Reported-by: Simon Pilkington <simonp.git@mailbox.org> Closes: https://lore.kernel.org/all/52914da7-a97b-45ad-86a0-affdf8266c61@mailbox.org Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: stable@vger.kernel.org Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Link: https://lore.kernel.org/r/20241211172952.1477605-1-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-19KVM: x86: Play nice with protected guests in complete_hypercall_exit()Sean Christopherson
Use is_64_bit_hypercall() instead of is_64_bit_mode() to detect a 64-bit hypercall when completing said hypercall. For guests with protected state, e.g. SEV-ES and SEV-SNP, KVM must assume the hypercall was made in 64-bit mode as the vCPU state needed to detect 64-bit mode is unavailable. Hacking the sev_smoke_test selftest to generate a KVM_HC_MAP_GPA_RANGE hypercall via VMGEXIT trips the WARN:

  ------------[ cut here ]------------
  WARNING: CPU: 273 PID: 326626 at arch/x86/kvm/x86.h:180 complete_hypercall_exit+0x44/0xe0 [kvm]
  Modules linked in: kvm_amd kvm ... [last unloaded: kvm]
  CPU: 273 UID: 0 PID: 326626 Comm: sev_smoke_test Not tainted 6.12.0-smp--392e932fa0f3-feat #470
  Hardware name: Google Astoria/astoria, BIOS 0.20240617.0-0 06/17/2024
  RIP: 0010:complete_hypercall_exit+0x44/0xe0 [kvm]
  Call Trace:
   <TASK>
   kvm_arch_vcpu_ioctl_run+0x2400/0x2720 [kvm]
   kvm_vcpu_ioctl+0x54f/0x630 [kvm]
   __se_sys_ioctl+0x6b/0xc0
   do_syscall_64+0x83/0x160
   entry_SYSCALL_64_after_hwframe+0x76/0x7e
   </TASK>
  ---[ end trace 0000000000000000 ]---

Fixes: b5aead0064f3 ("KVM: x86: Assume a 64-bit hypercall for guests with protected state") Cc: stable@vger.kernel.org Cc: Tom Lendacky <thomas.lendacky@amd.com> Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com> Reviewed-by: Nikunj A Dadhania <nikunj@amd.com> Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Reviewed-by: Binbin Wu <binbin.wu@linux.intel.com> Reviewed-by: Kai Huang <kai.huang@intel.com> Link: https://lore.kernel.org/r/20241128004344.4072099-2-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
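The fix boils down to a mode check that tolerates protected state; a sketch of the helper being switched to:

  static inline bool is_64_bit_hypercall(struct kvm_vcpu *vcpu)
  {
          /*
           * Protected-state guests (e.g. SEV-ES/SNP) hide the register and
           * segment state needed by is_64_bit_mode(); per KVM's ABI, assume
           * such guests only make hypercalls in 64-bit mode.
           */
          return vcpu->arch.guest_state_protected || is_64_bit_mode(vcpu);
  }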
2024-12-19KVM: SVM: Disable AVIC on SNP-enabled system without HvInUseWrAllowed featureSuravee Suthikulpanit
On SNP-enabled systems, VMRUN marks the AVIC Backing Page as in-use while the guest is running, for both secure and non-secure guests. Any hypervisor write to the in-use vCPU's AVIC backing page (e.g. to inject an interrupt) will generate an unexpected #PF in the host. Currently, an attempt to run an AVIC guest results in the following error:

  BUG: unable to handle page fault for address: ff3a442e549cc270
  #PF: supervisor write access in kernel mode
  #PF: error_code(0x80000003) - RMP violation
  PGD b6ee01067 P4D b6ee02067 PUD 10096d063 PMD 11c540063 PTE 80000001149cc163
  SEV-SNP: PFN 0x1149cc unassigned, dumping non-zero entries in 2M PFN region:
  [0x114800 - 0x114a00]
  ...

Newer AMD systems are enhanced to allow the hypervisor to modify the backing page for non-secure guests on SNP-enabled systems. This enhancement is available when the CPUID Fn8000_001F_EAX bit 30 is set (HvInUseWrAllowed). This table describes the AVIC support matrix w.r.t. SNP enablement:

                | Non-SNP system | SNP system
  -----------------------------------------------------
  Non-SNP guest | AVIC Activate  | AVIC Activate iff
                |                | HvInuseWrAllowed=1
  -----------------------------------------------------
  SNP guest     | N/A            | Secure AVIC

Therefore, check and disable AVIC in the kvm_amd driver when the feature is not available on an SNP-enabled system. See the AMD64 Architecture Programmer's Manual (APM) Volume 2 for details. (https://www.amd.com/content/dam/amd/en/documents/processor-tech-docs/programmer-references/40332.pdf)

Fixes: 216d106c7ff7 ("x86/sev: Add SEV-SNP host initialization support") Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com> Link: https://lore.kernel.org/r/20241104075845.7583-1-suravee.suthikulpanit@amd.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-19Merge tag 'kvm-selftests-treewide-6.14' of https://github.com/kvm-x86/linux into HEADPaolo Bonzini
KVM selftests "tree"-wide changes for 6.14:

- Rework vcpu_get_reg() to return a value instead of using an out-param, and update all affected arch code accordingly.

- Convert the max_guest_memory_test into a more generic mmu_stress_test. The basic gist of the "conversion" is to have the test do mprotect() on guest memory while vCPUs are accessing said memory, e.g. to verify KVM and mmu_notifiers are working as intended.

- Play nice with treewide builds of unsupported architectures, e.g. arm (32-bit), as KVM selftests' Makefile doesn't do anything to ensure the target architecture is actually one KVM selftests supports.

- Use the kernel's $(ARCH) definition instead of the target triple for arch specific directories, e.g. arm64 instead of aarch64, mainly so as not to be different from the rest of the kernel.
2024-12-18KVM: selftests: Override ARCH for x86_64 instead of using ARCH_DIRSean Christopherson
Now that KVM selftests uses the kernel's canonical arch paths, directly override ARCH to 'x86' when targeting x86_64 instead of defining ARCH_DIR to redirect to appropriate paths. ARCH_DIR was originally added to deal with KVM selftests using the target triple ARCH for directories, e.g. s390x and aarch64; keeping it around just to deal with the one-off alias from x86_64=>x86 is unnecessary and confusing. Note, even when selftests are built from the top-level Makefile, ARCH is scoped to KVM's makefiles, i.e. overriding ARCH won't trip up some other selftests that (somehow) expect x86_64 and can't work with x86. Reviewed-by: Muhammad Usama Anjum <usama.anjum@collabora.com> Link: https://lore.kernel.org/r/20241128005547.4077116-17-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-18KVM: selftests: Use canonical $(ARCH) paths for KVM selftests directoriesSean Christopherson
Use the kernel's canonical $(ARCH) paths instead of the raw target triple for KVM selftests directories. KVM selftests are quite nearly the only place in the entire kernel that uses the target triple for directories, tools/testing/selftests/drivers/s390x being the lone holdout. Using the kernel's preferred nomenclature eliminates the minor, but annoying, friction of having to translate to KVM's selftests directories, e.g. for pattern matching, opening files, running selftests, etc. Opportunistically delete file comments that reference the full path of the file, as they are obviously prone to becoming stale, and serve no known purpose. Reviewed-by: Muhammad Usama Anjum <usama.anjum@collabora.com> Acked-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Acked-by: Andrew Jones <ajones@ventanamicro.com> Link: https://lore.kernel.org/r/20241128005547.4077116-16-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-18KVM: selftests: Provide empty 'all' and 'clean' targets for unsupported ARCHsSean Christopherson
Provide empty targets for KVM selftests if the target architecture is unsupported to make it obvious which architectures are supported, and so that various side effects don't fail and/or do weird things, e.g. as is, "mkdir -p $(sort $(dir $(TEST_GEN_PROGS)))" fails due to a missing operand, and conversely, "$(shell mkdir -p $(sort $(OUTPUT)/$(ARCH_DIR) ..." will create an empty, useless directory for the unsupported architecture. Move the guts of the Makefile to Makefile.kvm so that it's easier to see that the if-statement effectively guards all of KVM selftests. Reported-by: Muhammad Usama Anjum <usama.anjum@collabora.com> Acked-by: Muhammad Usama Anjum <usama.anjum@collabora.com> Acked-by: Andrew Jones <ajones@ventanamicro.com> Link: https://lore.kernel.org/r/20241128005547.4077116-15-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-18KVM: selftests: Verify KVM correctly handles mprotect(PROT_READ)Sean Christopherson
Add two phases to mmu_stress_test to verify that KVM correctly handles guest memory that was writable, and then made read-only in the primary MMU, and then made writable again. Add bonus coverage for x86 and arm64 to verify that all of guest memory was marked read-only. Making forward progress (without making memory writable) requires arch specific code to skip over the faulting instruction, but the test can at least verify each vCPU's starting page was made read-only for other architectures. Link: https://lore.kernel.org/r/20241128005547.4077116-14-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-18KVM: selftests: Add a read-only mprotect() phase to mmu_stress_testSean Christopherson
Add a third phase of mmu_stress_test to verify that mprotect()ing guest memory to make it read-only doesn't cause explosions, e.g. to verify KVM correctly handles the resulting mmu_notifier invalidations. Reviewed-by: James Houghton <jthoughton@google.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Link: https://lore.kernel.org/r/20241128005547.4077116-13-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-18KVM: selftests: Precisely limit the number of guest loops in mmu_stress_testSean Christopherson
Run the exact number of guest loops required in mmu_stress_test instead of looping indefinitely in anticipation of adding more stages that run different code (e.g. reads instead of writes). Reviewed-by: James Houghton <jthoughton@google.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Link: https://lore.kernel.org/r/20241128005547.4077116-12-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-18KVM: selftests: Use vcpu_arch_put_guest() in mmu_stress_testSean Christopherson
Use vcpu_arch_put_guest() to write memory from the guest in mmu_stress_test as an easy way to provide a bit of extra coverage. Reviewed-by: James Houghton <jthoughton@google.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Link: https://lore.kernel.org/r/20241128005547.4077116-11-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-18KVM: selftests: Enable mmu_stress_test on arm64Sean Christopherson
Enable the mmu_stress_test on arm64. The intent was to enable the test across all architectures when it was first added, but a few goofs made it unrunnable on !x86. Now that those goofs are fixed, at least for arm64, enable the test. Cc: Oliver Upton <oliver.upton@linux.dev> Cc: Marc Zyngier <maz@kernel.org> Reviewed-by: James Houghton <jthoughton@google.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Link: https://lore.kernel.org/r/20241128005547.4077116-10-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-18KVM: selftests: Explicitly include ucall_common.h in mmu_stress_test.cSean Christopherson
Explicitly include ucall_common.h in the MMU stress test, as unlike arm64 and x86-64, RISC-V doesn't include ucall_common.h in its processor.h, i.e. this will allow enabling the test on RISC-V. Reported-by: Andrew Jones <ajones@ventanamicro.com> Link: https://lore.kernel.org/r/20241128005547.4077116-9-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-18KVM: selftests: Compute number of extra pages needed in mmu_stress_testSean Christopherson
Create mmu_stress_test's VM with the correct number of extra pages needed to map all of memory in the guest. The bug hasn't been noticed before as the test currently runs only on x86, which maps guest memory with 1GiB pages, i.e. doesn't need much memory in the guest for page tables. Reviewed-by: James Houghton <jthoughton@google.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Link: https://lore.kernel.org/r/20241128005547.4077116-8-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-18KVM: selftests: Only muck with SREGS on x86 in mmu_stress_testSean Christopherson
Try to get/set SREGS in mmu_stress_test only when running on x86, as the ioctls are supported only by x86 and PPC, and the latter doesn't yet support KVM selftests. Reviewed-by: James Houghton <jthoughton@google.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Link: https://lore.kernel.org/r/20241128005547.4077116-7-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-18KVM: selftests: Rename max_guest_memory_test to mmu_stress_testSean Christopherson
Rename max_guest_memory_test to mmu_stress_test so that the name isn't horribly misleading when future changes extend the test to verify things like mprotect() interactions, and because the test is useful even when it's configured to populate far less than the maximum amount of guest memory. Reviewed-by: James Houghton <jthoughton@google.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Link: https://lore.kernel.org/r/20241128005547.4077116-6-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-18KVM: selftests: Check for a potential unhandled exception iff KVM_RUN succeededSean Christopherson
Don't check for an unhandled exception if KVM_RUN failed, e.g. if it returned errno=EFAULT, as reporting unhandled exceptions is done via a ucall, i.e. requires KVM_RUN to exit cleanly. Theoretically, checking for a ucall on a failed KVM_RUN could get a false positive, e.g. if there were stale data in vcpu->run from a previous exit. Reviewed-by: James Houghton <jthoughton@google.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Link: https://lore.kernel.org/r/20241128005547.4077116-5-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-18KVM: selftests: Assert that vcpu_{g,s}et_reg() won't truncateSean Christopherson
Assert that the register being read/written by vcpu_{g,s}et_reg() is no larger than a uint64_t, i.e. that a selftest isn't unintentionally truncating the value being read/written. Ideally, the assert would be done at compile-time, but that would limit the checks to hardcoded accesses and/or require fancier compile-time assertion infrastructure to filter out dynamic usage. Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Link: https://lore.kernel.org/r/20241128005547.4077116-4-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-18KVM: selftests: Return a value from vcpu_get_reg() instead of using an out-paramSean Christopherson
Return a uint64_t from vcpu_get_reg() instead of having the caller provide a pointer to storage, as none of the vcpu_get_reg() usage in KVM selftests accesses a register larger than 64 bits, and vcpu_set_reg() only accepts a 64-bit value. If a use case comes along that needs to get a register that is larger than 64 bits, then a utility can be added to assert success and take a void pointer, but until then, forcing an out param yields ugly code and prevents feeding the output of vcpu_get_reg() into vcpu_set_reg(). Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Acked-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Link: https://lore.kernel.org/r/20241128005547.4077116-3-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
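A sketch of the reworked accessor, including the truncation assert from the previous entry; the body is this editor's reading, not verbatim upstream:

  uint64_t vcpu_get_reg(struct kvm_vcpu *vcpu, uint64_t id)
  {
          uint64_t val;
          struct kvm_one_reg reg = { .id = id, .addr = (uintptr_t)&val };

          /* Refuse registers that wouldn't fit in the 64-bit return value. */
          TEST_ASSERT(KVM_REG_SIZE(id) <= sizeof(val),
                      "Reg 0x%lx too big", (unsigned long)id);

          vcpu_ioctl(vcpu, KVM_GET_ONE_REG, &reg);
          return val;
  }

  /* Feeding get into set now works, e.g.:
   *      vcpu_set_reg(vcpu, reg_id, vcpu_get_reg(vcpu, reg_id));
   */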
2024-12-17KVM: Move KVM_REG_SIZE() definition to common uAPI headerSean Christopherson
Define KVM_REG_SIZE() in the common kvm.h header, and delete the arm64 and RISC-V versions. As evidenced by the surrounding definitions, all aspects of the register size encoding are generic, i.e. RISC-V should have moved arm64's definition to common code instead of copy+pasting. Acked-by: Anup Patel <anup@brainfault.org> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Reviewed-by: Muhammad Usama Anjum <usama.anjum@collabora.com> Link: https://lore.kernel.org/r/20241128005547.4077116-2-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
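For reference, the encoding is generic because the register size is a power-of-two field inside the register ID; a sketch of the common definition:

  /* Register size in bytes, encoded as log2 in the KVM_REG_SIZE_MASK bits. */
  #define KVM_REG_SIZE(id)                                                \
          (1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT))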
2024-12-15Linux 6.13-rc3v6.13-rc3Linus Torvalds
2024-12-15Merge tag 'arc-6.13-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arcLinus Torvalds
Pull ARC fixes from Vineet Gupta:

- Sundry build and misc fixes

* tag 'arc-6.13-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc:
  ARC: build: Try to guess GCC variant of cross compiler
  ARC: bpf: Correct conditional check in 'check_jmp_32'
  ARC: dts: Replace deprecated snps,nr-gpios property for snps,dw-apb-gpio-port devices
  ARC: build: Use __force to suppress per-CPU cmpxchg warnings
  ARC: fix reference of dependency for PAE40 config
  ARC: build: disallow invalid PAE40 + 4K page config
  arc: rename aux.h to arc_aux.h
2024-12-15Merge tag 'efi-fixes-for-v6.13-1' of git://git.kernel.org/pub/scm/linux/kernel/git/efi/efiLinus Torvalds
Pull EFI fixes from Ard Biesheuvel:

- Limit EFI zboot to GZIP and ZSTD before it comes in wider use

- Fix inconsistent error when looking up a non-existent file in efivarfs with a name that does not adhere to the NAME-GUID format

- Drop some unused code

* tag 'efi-fixes-for-v6.13-1' of git://git.kernel.org/pub/scm/linux/kernel/git/efi/efi:
  efi/esrt: remove esre_attribute::store()
  efivarfs: Fix error on non-existent file
  efi/zboot: Limit compression options to GZIP and ZSTD
2024-12-15Merge tag 'i2c-for-6.13-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linuxLinus Torvalds
Pull i2c fixes from Wolfram Sang:
 "i2c host fixes: PNX used the wrong unit for timeouts, Nomadik was missing a sentinel, and RIIC was missing rounding up"

* tag 'i2c-for-6.13-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux:
  i2c: riic: Always round-up when calculating bus period
  i2c: nomadik: Add missing sentinel to match table
  i2c: pnx: Fix timeout in wait functions
2024-12-15Merge tag 'edac_urgent_for_v6.13_rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/ras/rasLinus Torvalds
Pull EDAC fix from Borislav Petkov:

- Make sure amd64_edac loads successfully on certain Zen4 memory configurations

* tag 'edac_urgent_for_v6.13_rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras:
  EDAC/amd64: Simplify ECC check on unified memory controllers