path: root/arch/x86
2021-08-11  x86/resctrl: Add resctrl_arch_get_num_closid()  (James Morse)

To initialise struct resctrl_schema's num_closid, schemata_list_create() reaches into the architecture's private structure to retrieve num_closid from the struct rdt_hw_resource. The 'half the closids' behaviour should be part of the filesystem parts of resctrl that are the same on any architecture. struct resctrl_schema's num_closid should include any correction for CDP.

Having two properties called num_closid is likely to be confusing when they have different values.

Add a helper to read the resource's num_closid from the arch code. This should return the number of closids that the resource supports, regardless of whether CDP is in use. Once the CDP resources are merged, schemata_list_create() can apply the correction itself.

Using a type with an obvious size for the arch helper means changing the type of num_closid to u32, which matches the type already used by struct rdtgroup.

reset_all_ctrls() does not use resctrl_arch_get_num_closid(), even though it sets up a structure for modifying the hardware. This function will be part of the architecture code, so the maximum closid should be the maximum value the hardware has, regardless of the way resctrl is using it. All the uses of num_closid in core.c are naturally part of the architecture-specific code.

Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Jamie Iles <jamie@nuviainc.com>
Reviewed-by: Reinette Chatre <reinette.chatre@intel.com>
Tested-by: Babu Moger <babu.moger@amd.com>
Link: https://lkml.kernel.org/r/20210728170637.25610-9-james.morse@arm.com
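The helper plausibly amounts to the minimal sketch below, assuming the resctrl_to_arch_res() container_of() conversion introduced by the 'Split struct rdt_resource' patch later in this series (illustrative, not necessarily the exact code in the patch):

    u32 resctrl_arch_get_num_closid(struct rdt_resource *r)
    {
            struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);

            /* The hardware limit, regardless of whether CDP is in use */
            return hw_res->num_closid;
    }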
2021-08-11  x86/resctrl: Store the effective num_closid in the schema  (James Morse)

Struct resctrl_schema holds properties that vary with the style of configuration that resctrl applies to a resource. There are already two values for the hardware's num_closid, depending on whether the architecture presents the L3 or L3CODE/L3DATA resources.

As the way CDP changes the number of control groups that resctrl can create is part of the user-space interface, it should be managed by the filesystem parts of resctrl. This allows the architecture code to only describe the value the hardware supports.

Add num_closid to resctrl_schema. This is the value seen by the filesystem, which may be different to the maximum value described by the arch code when CDP is enabled.

These functions operate on the num_closid value that is exposed to user-space:
* rdtgroup_parse_resource()
* rdtgroup_schemata_show()
* rdt_num_closids_show()
* closid_init()

Change them to use the schema value instead. schemata_list_create() sets this value, and reaches into the architecture-specific structure to get the value. This will eventually be replaced with a helper.

Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Jamie Iles <jamie@nuviainc.com>
Reviewed-by: Reinette Chatre <reinette.chatre@intel.com>
Tested-by: Babu Moger <babu.moger@amd.com>
Link: https://lkml.kernel.org/r/20210728170637.25610-8-james.morse@arm.com
2021-08-11  x86/resctrl: Walk the resctrl schema list instead of an arch list  (James Morse)

When parsing a schema configuration value from user-space, resctrl walks the architecture's rdt_resources_all[] array to find a matching struct rdt_resource. Once the CDP resources are merged there will be one resource in use by two schemata. Anything walking rdt_resources_all[] on behalf of a user-space request should walk the list of struct resctrl_schema instead.

Change the users of for_each_alloc_enabled_rdt_resource() to walk the schema instead. Schemata were only created for alloc_enabled resources, so these two lists are currently equivalent.

schemata_list_create() and rdt_kill_sb() are ignored. The first creates the schema list, and will eventually loop over the resource indexes using an arch helper to retrieve the resource. rdt_kill_sb() will eventually make use of an arch 'reset everything' helper.

After the filesystem code is moved, rdtgroup_pseudo_locked_in_hierarchy() remains part of the x86-specific hooks to support pseudo lock. This code walks each domain, and still does this after the separate resources are merged.

Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Jamie Iles <jamie@nuviainc.com>
Reviewed-by: Reinette Chatre <reinette.chatre@intel.com>
Tested-by: Babu Moger <babu.moger@amd.com>
Link: https://lkml.kernel.org/r/20210728170637.25610-7-james.morse@arm.com
2021-08-11  x86/resctrl: Label the resources with their configuration type  (James Morse)

The names of resources are used for the schema name presented to user-space. The name used is rooted in a structure provided by the architecture code because the names are different when CDP is enabled. x86 implements this by swapping between two sets of resource structures based on their alloc_enabled flag. The type of configuration in use is encoded in the name (and cbm_idx_offset).

Once the CDP behaviour is moved into the parts of resctrl that will move to /fs/, there will be two struct resctrl_schema for one struct rdt_resource. The schema describes the type of configuration being applied to the resource. The name of the schema should be generated by resctrl, based on the type of configuration. To do this, struct resctrl_schema needs to store the type of configuration in use for a schema.

Create an enum resctrl_conf_type describing the options, and add it to struct resctrl_schema. The underlying resources are still separate, as cbm_idx_offset is still in use.

Temporarily label all the entries in rdt_resources_all[] and copy that value to struct resctrl_schema. Copying the value ensures there is no mismatch while the filesystem parts of resctrl are modified to use the schema. Once the resources are merged, the filesystem code can assign this value based on the schema being created.

Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Jamie Iles <jamie@nuviainc.com>
Reviewed-by: Reinette Chatre <reinette.chatre@intel.com>
Tested-by: Babu Moger <babu.moger@amd.com>
Link: https://lkml.kernel.org/r/20210728170637.25610-6-james.morse@arm.com
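A sketch of such an enum, assuming one enumerator per configuration style (the exact names are illustrative):

    enum resctrl_conf_type {
            CDP_NONE,       /* resource is not using CDP */
            CDP_CODE,       /* CDP: cache portion used for code */
            CDP_DATA,       /* CDP: cache portion used for data */
    };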
2021-08-11  x86/resctrl: Pass the schema in info dir's private pointer  (James Morse)

Many of resctrl's per-schema files return a value from struct rdt_resource, which they take as their 'priv' pointer.

Moving properties that resctrl exposes to user-space into the core 'fs' code (e.g. the name of the schema) means some of the functions that back the filesystem need the schema struct (where the properties are moved to), but currently take struct rdt_resource. For example, once the CDP resources are merged, struct rdt_resource no longer reflects all the properties of the schema.

For the info dirs that represent a control, the information needed will be accessed via struct resctrl_schema, as this is how the resource is being used. For the monitors, it is still struct rdt_resource, as the monitors aren't described as schemata. This difference means the type of the private pointer varies between control and monitor info dirs.

Change the 'priv' pointer to point to struct resctrl_schema for the per-schema files that represent a control. The type can be determined from the fflags field: if the flags are RF_MON_INFO, it is a struct rdt_resource; if the flags are RF_CTRL_INFO, it is a struct resctrl_schema. No entry in res_common_files[] has both flags.

Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Jamie Iles <jamie@nuviainc.com>
Reviewed-by: Reinette Chatre <reinette.chatre@intel.com>
Tested-by: Babu Moger <babu.moger@amd.com>
Link: https://lkml.kernel.org/r/20210728170637.25610-5-james.morse@arm.com
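A sketch of how a control file's callback can then recover the schema from 'priv'; the kernfs plumbing shown (of->kn->parent->priv) is an assumption about how the info dir files are wired up, not a claim about the patch's exact code:

    /* RF_CTRL_INFO file: 'priv' carries a struct resctrl_schema */
    static int rdt_num_closids_show(struct kernfs_open_file *of,
                                    struct seq_file *seq, void *v)
    {
            struct resctrl_schema *s = of->kn->parent->priv;

            seq_printf(seq, "%u\n", s->num_closid);
            return 0;
    }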
2021-08-11  x86/resctrl: Add a separate schema list for resctrl  (James Morse)

Resctrl exposes schemata to user-space, which allow the control values to be specified for a group of tasks.

User-visible properties of the interface (such as the schemata names and how the values are parsed) are rooted in a struct provided by the architecture code (struct rdt_hw_resource). Once a second architecture uses resctrl, this would allow user-visible properties to diverge between architectures. These properties should come from the resctrl code that will be common to all architectures.

Resctrl has no per-schema structure, only struct rdt_{hw_,}resource. Create a struct resctrl_schema to hold the rdt_resource. Before a second architecture can be supported, this structure will also need to hold the schema name visible to user-space and the type of configuration values for resctrl.

Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Jamie Iles <jamie@nuviainc.com>
Reviewed-by: Reinette Chatre <reinette.chatre@intel.com>
Tested-by: Babu Moger <babu.moger@amd.com>
Link: https://lkml.kernel.org/r/20210728170637.25610-4-james.morse@arm.com
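A minimal sketch of the new structure; the comment flags fields that, per the commit message, later patches in the series add:

    struct resctrl_schema {
            struct list_head        list;   /* position in the schema list */
            struct rdt_resource     *res;   /* the backing resource */
            /*
             * Later patches add: the configuration type, the effective
             * num_closid, and the user-visible schema name.
             */
    };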
2021-08-11  x86/resctrl: Split struct rdt_domain  (James Morse)

resctrl is the de facto Linux ABI for SoC resource partitioning features. To support it on another architecture, it needs to be abstracted from the features provided by Intel RDT and AMD PQoS, and moved to /fs/.

struct rdt_domain contains a mix of architecture-private details and properties of the filesystem interface that user-space uses.

Continue by splitting struct rdt_domain into an architecture-private 'hw' struct, which contains the common resctrl structure that would be used by any architecture. The hardware values in ctrl_val and mbps_val need to be accessed via helpers to allow another architecture to convert these into a different format if necessary. After this split, filesystem code paths touching a 'hw' struct indicate where an abstraction is needed.

Splitting this structure only moves types around, and should not lead to any change in behaviour.

Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Jamie Iles <jamie@nuviainc.com>
Reviewed-by: Reinette Chatre <reinette.chatre@intel.com>
Tested-by: Babu Moger <babu.moger@amd.com>
Link: https://lkml.kernel.org/r/20210728170637.25610-3-james.morse@arm.com
2021-08-11  x86/resctrl: Split struct rdt_resource  (James Morse)

resctrl is the de facto Linux ABI for SoC resource partitioning features. To support it on another architecture, it needs to be abstracted from the features provided by Intel RDT and AMD PQoS, and moved to /fs/.

struct rdt_resource contains a mix of architecture-private details and properties of the filesystem interface that user-space uses.

Start by splitting struct rdt_resource into an architecture-private 'hw' struct, which contains the common resctrl structure that would be used by any architecture. The foreach helpers are most commonly used by the filesystem code, and should return the common resctrl structure. for_each_rdt_resource() is changed to walk the common structure in its parent arch-private structure.

Move as much of the structure as possible into the common structure in the core code's header file. The x86 hardware accessors remain part of the architecture-private code, as do num_closid, mon_scale and mbm_width.

mon_scale and mbm_width are used to detect overflow of the hardware counters, and convert them from their native size to bytes. Any cross-architecture abstraction should be in terms of bytes, making these properties private.

The hardware's num_closid is kept in the private structure to force the filesystem code to use a helper to access it. MPAM would return a single value for the system, regardless of the resource. Using the helper prevents this field from being confused with the version of num_closid that is exposed to user-space (added in a later patch).

After this split, filesystem code touching a 'hw' struct indicates where an abstraction is needed.

Splitting this structure only moves types around, and should not lead to any change in behaviour.

Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Jamie Iles <jamie@nuviainc.com>
Reviewed-by: Reinette Chatre <reinette.chatre@intel.com>
Tested-by: Babu Moger <babu.moger@amd.com>
Link: https://lkml.kernel.org/r/20210728170637.25610-2-james.morse@arm.com
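A sketch of the resulting split, assuming the common struct is embedded in the arch-private one so that a container_of() conversion recovers the 'hw' struct (field list abridged and illustrative):

    struct rdt_hw_resource {
            struct rdt_resource     r_resctrl;  /* common part, used by /fs/ code */
            u32                     num_closid; /* hardware value, arch-private */
            unsigned int            mon_scale;  /* counter-to-bytes conversion */
            unsigned int            mbm_width;  /* native counter width */
    };

    static inline struct rdt_hw_resource *resctrl_to_arch_res(struct rdt_resource *r)
    {
            return container_of(r, struct rdt_hw_resource, r_resctrl);
    }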
2021-08-10  x86: Fix typo s/ECLR/ELCR/ for the PIC register  (Maciej W. Rozycki)

The proper spelling for the acronym referring to the Edge/Level Control Register is ELCR rather than ECLR. Adjust references accordingly. No functional change.

Signed-off-by: Maciej W. Rozycki <macro@orcam.me.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/alpine.DEB.2.21.2107200251080.9461@angie.orcam.me.uk
2021-08-10  x86: Avoid magic number with ELCR register accesses  (Maciej W. Rozycki)

Define PIC_ELCR1 and PIC_ELCR2 macros for accesses to the ELCR registers implemented by many chipsets in their embedded 8259A PIC cores, avoiding magic numbers that are difficult to handle, and complementing the macros we already have for registers originally defined with discrete 8259A PIC implementations. No functional change.

Signed-off-by: Maciej W. Rozycki <macro@orcam.me.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/alpine.DEB.2.21.2107200237300.9461@angie.orcam.me.uk
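On these chipsets the ELCR register pair sits at I/O ports 0x4d0 and 0x4d1, so the macros plausibly amount to the following (usage line illustrative):

    #define PIC_ELCR1       0x4d0   /* Edge/Level Control Register, IRQ 0-7  */
    #define PIC_ELCR2       0x4d1   /* Edge/Level Control Register, IRQ 8-15 */

    /* e.g. marking IRQ 5 level-triggered, instead of a bare 0x4d0: */
    outb(inb(PIC_ELCR1) | (1 << 5), PIC_ELCR1);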
2021-08-10  x86/PCI: Add support for the Intel 82426EX PIRQ router  (Maciej W. Rozycki)

The Intel 82426EX ISA Bridge (IB), a part of the Intel 82420EX PCIset, implements PCI interrupt steering with a PIRQ router in the form of two PIRQ Route Control registers, available in the PCI configuration space at locations 0x66 and 0x67 for the PIRQ0# and PIRQ1# lines respectively.

The semantics are the same as with the PIIX router; however, it is not clear if BIOSes use register indices or line numbers as the cookie to identify PCI interrupts in their routing tables, and therefore support either scheme.

The IB is directly attached to the Intel 82425EX PCI System Controller (PSC) component of the chipset via a dedicated PSC/IB Link interface rather than the host bus or PCI. Therefore it does not itself appear in the PCI configuration space even though it responds to configuration cycles addressing registers it implements. Use the 82425EX's identification then for determining the presence of the IB.

References:

[1] "82420EX PCIset Data Sheet, 82425EX PCI System Controller (PSC) and 82426EX ISA Bridge (IB)", Intel Corporation, Order Number: 290488-004, December 1995, Section 3.3.18 "PIRQ1RC/PIRQ0RC--PIRQ Route Control Registers", p. 61

Signed-off-by: Maciej W. Rozycki <macro@orcam.me.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/alpine.DEB.2.21.2107200213490.9461@angie.orcam.me.uk
2021-08-10  x86/PCI: Add support for the Intel 82374EB/82374SB (ESC) PIRQ router  (Maciej W. Rozycki)

The Intel 82374EB/82374SB EISA System Component (ESC) devices implement PCI interrupt steering with a PIRQ router[1] in the form of four PIRQ Route Control registers, available in the port I/O space accessible indirectly via the index/data register pair at 0x22/0x23, located at indices 0x60/0x61/0x62/0x63 for the PIRQ0/1/2/3# lines respectively.

The semantics are the same as with the PIIX router; however, it is not clear if BIOSes use register indices or line numbers as the cookie to identify PCI interrupts in their routing tables, and therefore support either scheme.

Accesses to the port I/O space concerned here need to be unlocked by writing the value of 0x0f to the ESC ID Register at index 0x02 beforehand[2]. Do so then, and lock access again after use for safety.

This locking could possibly interfere with accesses to the Intel MP spec IMCR register, implemented by the 82374SB variant of the ESC only as the PCI/APIC Control Register at index 0x70[3], for which leaving access to the configuration space concerned unlocked may have been a requirement for the BIOS to remain compliant with the MP spec. However, we only poke at the IMCR register if the APIC mode is used, in which case the PIRQ router is not, so this arrangement is not going to interfere with the IMCR access code.

The ESC is implemented as a part of the combined southbridge also made of the 82375EB/82375SB PCI-EISA Bridge (PCEB) and does itself appear in the PCI configuration space. Use the PCEB's device identification then for determining the presence of the ESC.

References:

[1] "82374EB/82374SB EISA System Component (ESC)", Intel Corporation, Order Number: 290476-004, March 1996, Section 3.1.12 "PIRQ[0:3]#--PIRQ Route Control Registers", pp. 44-45
[2] same, Section 3.1.1 "ESCID--ESC ID Register", p. 36
[3] same, Section 3.1.17 "PAC--PCI/APIC Control Register", p. 47

Signed-off-by: Maciej W. Rozycki <macro@orcam.me.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/alpine.DEB.2.21.2107192023450.9461@angie.orcam.me.uk
2021-08-10  x86/PCI: Add support for the ALi M1487 (IBC) PIRQ router  (Maciej W. Rozycki)

The ALi M1487 ISA Bus Controller (IBC), a part of the ALi FinALi 486 chipset, implements PCI interrupt steering with a PIRQ router[1] in the form of four 4-bit mappings, spread across two PCI INTx Routing Table Mapping Registers, available in the port I/O space accessible indirectly via the index/data register pair at 0x22/0x23, located at indices 0x42 and 0x43 for the INT1/INT2 and INT3/INT4 lines respectively.

Additionally there is a separate PCI INTx Sensitivity Register at index 0x44 in the same port I/O space, whose bits 3:0 select the trigger mode for the INT[4:1] lines respectively[2]. The manufacturer's documentation says that this register has to be set consistently with the relevant ELCR register[3]. Add a router-specific hook then and use it to handle this register.

Accesses to the port I/O space concerned here need to be unlocked by writing the value of 0xc5 to the Lock Register at index 0x03 beforehand[4]. Do so then, and lock access again after use for safety.

The IBC is implemented as a peer bridge on the host bus rather than a southbridge on PCI, and therefore it does not itself appear in the PCI configuration space. It is complemented by the M1489 Cache-Memory PCI Controller (CMP) host-to-PCI bridge, so use that device's identification for determining the presence of the IBC.

References:

[1] "M1489/M1487: 486 PCI Chip Set", Version 1.2, Acer Laboratories Inc., July 1997, Section 4: "Configuration Registers", pp. 76-77
[2] same, p. 77
[3] same, Section 5: "M1489/M1487 Software Programming Guide", pp. 99-100
[4] same, Section 4: "Configuration Registers", p. 37

Signed-off-by: Maciej W. Rozycki <macro@orcam.me.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/alpine.DEB.2.21.2107191702020.9461@angie.orcam.me.uk
2021-08-10  x86: Add support for 0x22/0x23 port I/O configuration space  (Maciej W. Rozycki)

Define macros and accessors for the configuration space addressed indirectly with an index register and a data register at the port I/O locations of 0x22 and 0x23 respectively. This space is defined by the Intel MultiProcessor Specification for the IMCR register used to switch between the PIC and the APIC mode[1], by Cyrix processors for their configuration[2][3], and also by some chipsets.

Given the lack of atomicity with the indirect addressing, a spinlock is required to protect accesses, although for Cyrix processors it is enough if accesses are executed with interrupts locally disabled, because the registers are local to the accessing CPU, and IMCR is only ever poked at by the BSP, and early enough for interrupts not to have been configured yet. Therefore existing code does not have to change or use the new spinlock, and it does not.

Put the spinlock in a library file then, so that it does not get pulled in unnecessarily for configurations that do not refer to it.

Convert the Cyrix accessors to wrappers so as to retain the brevity and clarity of the `getCx86' and `setCx86' calls.

References:

[1] "MultiProcessor Specification", Version 1.4, Intel Corporation, Order Number: 242016-006, May 1997, Section 3.6.2.1 "PIC Mode", pp. 3-7, 3-8
[2] "5x86 Microprocessor", Cyrix Corporation, Order Number: 94192-00, July 1995, Section 2.3.2.4 "Configuration Registers", p. 2-23
[3] "6x86 Processor", Cyrix Corporation, Order Number: 94175-01, March 1996, Section 2.4.4 "6x86 Configuration Registers", p. 2-23

Signed-off-by: Maciej W. Rozycki <macro@orcam.me.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/alpine.DEB.2.21.2107182353140.9461@angie.orcam.me.uk
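A sketch of what the macros and accessors could look like; the identifiers are illustrative, and callers are assumed to hold the spinlock (or, in the Cyrix case, run with local interrupts disabled) around paired calls:

    #define PC_CONF_INDEX   0x22
    #define PC_CONF_DATA    0x23

    extern raw_spinlock_t pc_conf_lock;     /* lives in a library file */

    static inline u8 pc_conf_get(u8 reg)
    {
            outb(reg, PC_CONF_INDEX);
            return inb(PC_CONF_DATA);
    }

    static inline void pc_conf_set(u8 reg, u8 data)
    {
            outb(reg, PC_CONF_INDEX);
            outb(data, PC_CONF_DATA);
    }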
2021-08-10  Merge branch 'kvm-vmx-secctl' into HEAD  (Paolo Bonzini)

Merge common topic branch for 5.14-rc6 and the 5.15 merge window.
2021-08-10  KVM: VMX: Use current VMCS to query WAITPKG support for MSR emulation  (Sean Christopherson)

Use the secondary_exec_controls_get() accessor in vmx_has_waitpkg() to effectively get the controls for the current VMCS, as opposed to using vmx->secondary_exec_controls, which is the cached value of KVM's desired controls for vmcs01 and truly not reflective of any particular VMCS.

While the waitpkg control is not dynamic, i.e. vmcs01 will always hold the same waitpkg configuration as vmx->secondary_exec_controls, the same does not hold true for vmcs02 if the L1 VMM hides the feature from L2. If L1 hides the feature _and_ does not intercept MSR_IA32_UMWAIT_CONTROL, L2 could incorrectly read/write L1's virtual MSR instead of taking a #GP.

Fixes: 6e3ba4abcea5 ("KVM: vmx: Emulate MSR IA32_UMWAIT_CONTROL")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210810171952.2758100-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
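The fix then amounts to reading the control from the currently loaded VMCS; roughly:

    static inline bool vmx_has_waitpkg(struct vcpu_vmx *vmx)
    {
            /* Query the live VMCS, not the cached vmcs01 value */
            return secondary_exec_controls_get(vmx) &
                   SECONDARY_EXEC_ENABLE_USR_WAIT_PAUSE;
    }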
2021-08-10  x86/mce/inject: Replace deprecated CPU-hotplug functions.  (Sebastian Andrzej Siewior)

The functions get_online_cpus() and put_online_cpus() have been deprecated during the CPU hotplug rework. They map directly to cpus_read_lock() and cpus_read_unlock().

Replace deprecated CPU-hotplug functions with the official version. The behavior remains unchanged. See the sketch after this entry; the same pattern applies to the microcode, MTRR and mmiotrace patches below.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20210803141621.780504-10-bigeasy@linutronix.de
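The transformation is mechanical (the work done under the lock is a placeholder):

    /* Before: deprecated since the CPU-hotplug rework */
    get_online_cpus();
    /* ... work that must not race with CPU hotplug ... */
    put_online_cpus();

    /* After: identical semantics, current names */
    cpus_read_lock();
    /* ... same work ... */
    cpus_read_unlock();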
2021-08-10  x86/microcode: Replace deprecated CPU-hotplug functions.  (Sebastian Andrzej Siewior)

The functions get_online_cpus() and put_online_cpus() have been deprecated during the CPU hotplug rework. They map directly to cpus_read_lock() and cpus_read_unlock().

Replace deprecated CPU-hotplug functions with the official version. The behavior remains unchanged.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20210803141621.780504-9-bigeasy@linutronix.de
2021-08-10  x86/mtrr: Replace deprecated CPU-hotplug functions.  (Sebastian Andrzej Siewior)

The functions get_online_cpus() and put_online_cpus() have been deprecated during the CPU hotplug rework. They map directly to cpus_read_lock() and cpus_read_unlock().

Replace deprecated CPU-hotplug functions with the official version. The behavior remains unchanged.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20210803141621.780504-8-bigeasy@linutronix.de
2021-08-10  x86/mmiotrace: Replace deprecated CPU-hotplug functions.  (Sebastian Andrzej Siewior)

The functions get_online_cpus() and put_online_cpus() have been deprecated during the CPU hotplug rework. They map directly to cpus_read_lock() and cpus_read_unlock().

Replace deprecated CPU-hotplug functions with the official version. The behavior remains unchanged.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Link: https://lore.kernel.org/r/20210803141621.780504-7-bigeasy@linutronix.de
2021-08-10  x86/msi: Force affinity setup before startup  (Thomas Gleixner)

The X86 MSI mechanism cannot handle interrupt affinity changes safely after startup other than from an interrupt handler, unless interrupt remapping is enabled. The startup sequence in the generic interrupt code violates that assumption.

Mark the irq chips with the new IRQCHIP_AFFINITY_PRE_STARTUP flag so that the default interrupt setting happens before the interrupt is started up for the first time.

While the interrupt remapping MSI chip does not require this, there is no point in treating it differently, as this might spare an interrupt to a CPU which is not in the default affinity mask.

For the non-remapping case go to the direct write path when the interrupt is not yet started, similar to the not-yet-activated case.

Fixes: 18404756765c ("genirq: Expose default irq affinity mask (take 3)")
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20210729222542.886722080@linutronix.de
2021-08-10  x86/ioapic: Force affinity setup before startup  (Thomas Gleixner)

The IO/APIC cannot handle interrupt affinity changes safely after startup other than from an interrupt handler. The startup sequence in the generic interrupt code violates that assumption.

Mark the irq chip with the new IRQCHIP_AFFINITY_PRE_STARTUP flag so that the default interrupt setting happens before the interrupt is started up for the first time.

Fixes: 18404756765c ("genirq: Expose default irq affinity mask (take 3)")
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20210729222542.832143400@linutronix.de
2021-08-10  kbuild: do not require sub-make for separate output tree builds  (Masahiro Yamada)

As explained in commit 3204a7fb98a3 ("kbuild: prefix $(srctree)/ to some included Makefiles"), I want to stop using --include-dir some day.

I already fixed up the top Makefile, but some arch Makefiles (mips, um, x86) still include check-in Makefiles without $(srctree)/.

Fix them up so 'need-sub-make := 1' can go away for this case.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2021-08-09  x86/amd_gart: don't set failed sg dma_address to DMA_MAPPING_ERROR  (Logan Gunthorpe)

Setting the ->dma_address to DMA_MAPPING_ERROR is not part of the ->map_sg calling convention, so remove it.

Link: https://lore.kernel.org/linux-mips/20210716063241.GC13345@lst.de/
Suggested-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Niklas Schnelle <schnelle@linux.ibm.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2021-08-09  x86/amd_gart: return error code from gart_map_sg()  (Martin Oliveira)

The .map_sg() op now expects an error code instead of zero on failure. So make __dma_map_cont() return a valid errno (which is then propagated to gart_map_sg() via dma_map_cont()) and return it in case of failure.

Also, return -EINVAL in case of invalid nents.

Signed-off-by: Martin Oliveira <martin.oliveira@eideticom.com>
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Niklas Schnelle <schnelle@linux.ibm.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2021-08-08  Merge tag 'perf-urgent-2021-08-08' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)

Pull perf fixes from Thomas Gleixner:
"A set of perf fixes:

 - Correct the permission checks for perf event which send SIGTRAP to a different process and clean up that code to be more readable.

 - Prevent an out of bound MSR access in the x86 perf code which happened due to an incomplete limiting to the actually available hardware counters.

 - Prevent access to the AMD64_EVENTSEL_HOSTONLY bit when running inside a guest.

 - Handle small core counter re-enabling correctly by issuing an ACK right before reenabling it to prevent a stale PEBS record being kept around"

* tag 'perf-urgent-2021-08-08' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  perf/x86/intel: Apply mid ACK for small core
  perf/x86/amd: Don't touch the AMD64_EVENTSEL_HOSTONLY bit inside the guest
  perf/x86: Fix out of bound MSR access
  perf: Refactor permissions check into perf_check_permission()
  perf: Fix required permissions if sigtrap is requested
2021-08-07  Merge tag 'kbuild-fixes-v5.14-2' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild  (Linus Torvalds)

Pull Kbuild fixes from Masahiro Yamada:

 - Correct the Extended Regular Expressions in tools

 - Adjust scripts/checkversion.pl for the current Kbuild

 - Unset sub_make_done for 'make install' to make DKMS work again

* tag 'kbuild-fixes-v5.14-2' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
  kbuild: cancel sub_make_done for the install target to fix DKMS
  scripts: checkversion: modernize linux/version.h search strings
  mips: Fix non-POSIX regexp
  x86/tools/relocs: Fix non-POSIX regexp
2021-08-06  perf/x86/intel: Apply mid ACK for small core  (Kan Liang)

A warning as below may occasionally be triggered on an ADL machine when these conditions occur:

- Two perf record commands run one by one. Both record a PEBS event.
- Both run on small cores.
- They have different adaptive PEBS configurations (PEBS_DATA_CFG).

[ ] WARNING: CPU: 4 PID: 9874 at arch/x86/events/intel/ds.c:1743 setup_pebs_adaptive_sample_data+0x55e/0x5b0
[ ] RIP: 0010:setup_pebs_adaptive_sample_data+0x55e/0x5b0
[ ] Call Trace:
[ ] <NMI>
[ ] intel_pmu_drain_pebs_icl+0x48b/0x810
[ ] perf_event_nmi_handler+0x41/0x80
[ ] </NMI>
[ ] __perf_event_task_sched_in+0x2c2/0x3a0

Different from the big core, the small core requires the ACK right before re-enabling counters in the NMI handler; otherwise a stale PEBS record may be dumped into the later NMI handler, which triggers the warning.

Add a new mid_ack flag to track the case. Add all PMI handler bits in the struct x86_hybrid_pmu to track the bits for different types of PMUs. Apply mid ACK for the small cores on an Alder Lake machine.

The existing hybrid() macro has a compile error when taking the address of a bit-field variable. Add a new macro hybrid_bit() to get the bit-field value of a given PMU.

Fixes: f83d2f91d259 ("perf/x86/intel: Add Alder Lake Hybrid support")
Reported-by: Ammy Yi <ammy.yi@intel.com>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Tested-by: Ammy Yi <ammy.yi@intel.com>
Link: https://lkml.kernel.org/r/1627997128-57891-1-git-send-email-kan.liang@linux.intel.com
2021-08-06  KVM: x86/mmu: Rename __gfn_to_rmap to gfn_to_rmap  (David Matlack)

gfn_to_rmap was removed in the previous patch, so there is no need to retain the double underscore on __gfn_to_rmap.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: David Matlack <dmatlack@google.com>
Message-Id: <20210804222844.1419481-7-dmatlack@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-08-06  KVM: x86/mmu: Leverage vcpu->last_used_slot for rmap_add and rmap_recycle  (David Matlack)

rmap_add() and rmap_recycle() both run in the context of the vCPU and thus we can use kvm_vcpu_gfn_to_memslot() to look up the memslot. This enables rmap_add() and rmap_recycle() to take advantage of vcpu->last_used_slot and avoid expensive memslot searching.

This change improves the performance of "Populate memory time" in dirty_log_perf_test with tdp_mmu=N. In addition to improving the performance, "Populate memory time" no longer scales with the number of memslots in the VM.

Command                         | Before           | After
------------------------------- | ---------------- | -------------
./dirty_log_perf_test -v64 -x1  | 15.18001570s     | 14.99469366s
./dirty_log_perf_test -v64 -x64 | 18.71336392s     | 14.98675076s

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: David Matlack <dmatlack@google.com>
Message-Id: <20210804222844.1419481-6-dmatlack@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-08-06  KVM: x86/mmu: Leverage vcpu->last_used_slot in tdp_mmu_map_handle_target_level  (David Matlack)

The existing TDP MMU methods to handle dirty logging are vcpu-agnostic since they can be driven by MMU notifiers and other non-vcpu-specific events in addition to page faults. However this means that the TDP MMU is not benefiting from the new vcpu->last_used_slot. Fix that by introducing a tdp_mmu_map_set_spte_atomic() which is only called during a TDP page fault and has access to the kvm_vcpu for fast slot lookups.

This improves "Populate memory time" in dirty_log_perf_test by 5%:

Command                         | Before           | After
------------------------------- | ---------------- | -------------
./dirty_log_perf_test -v64 -x64 | 5.472321072s     | 5.169832886s

Signed-off-by: David Matlack <dmatlack@google.com>
Message-Id: <20210804222844.1419481-5-dmatlack@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-08-05  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski)

Build failure in drivers/net/wwan/mhi_wwan_mbim.c: add missing parameter (0, assuming we don't want buffer pre-alloc).

Conflict in drivers/net/dsa/sja1105/sja1105_main.c between:

  589918df9322 ("net: dsa: sja1105: be stateless with FDB entries on SJA1105P/Q/R/S/SJA1110 too")
  0fac6aa098ed ("net: dsa: sja1105: delete the best_effort_vlan_filtering mode")

Follow the instructions from the commit message of the former commit - removed the if conditions. When looking at commit 589918df9322 ("net: dsa: sja1105: be stateless with FDB entries on SJA1105P/Q/R/S/SJA1110 too") note that the mask_iotag fields get removed by the following patch.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-08-05  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds)

Pull kvm fixes from Paolo Bonzini:
"Mostly bugfixes; plus, support for XMM arguments to Hyper-V hypercalls now obeys KVM_CAP_HYPERV_ENFORCE_CPUID. Both the XMM arguments feature and KVM_CAP_HYPERV_ENFORCE_CPUID are new in 5.14, and each did not know of the other"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  KVM: x86/mmu: Fix per-cpu counter corruption on 32-bit builds
  KVM: selftests: fix hyperv_clock test
  KVM: SVM: improve the code readability for ASID management
  KVM: SVM: Fix off-by-one indexing when nullifying last used SEV VMCB
  KVM: Do not leak memory for duplicate debugfs directories
  KVM: selftests: Test access to XMM fast hypercalls
  KVM: x86: hyper-v: Check if guest is allowed to use XMM registers for hypercall input
  KVM: x86: Introduce trace_kvm_hv_hypercall_done()
  KVM: x86: hyper-v: Check access to hypercall before reading XMM registers
  KVM: x86: accept userspace interrupt only if no event is injected
2021-08-05  x86/tools/relocs: Fix non-POSIX regexp  (H. Nikolaus Schaller)

Trying to run a cross-compiled x86 relocs tool on a BSD based HOSTCC leads to errors like:

  VOFFSET arch/x86/boot/compressed/../voffset.h - due to: vmlinux
  CC      arch/x86/boot/compressed/misc.o - due to: arch/x86/boot/compressed/../voffset.h
  OBJCOPY arch/x86/boot/compressed/vmlinux.bin - due to: vmlinux
  RELOCS  arch/x86/boot/compressed/vmlinux.relocs - due to: vmlinux
  empty (sub)expression
  arch/x86/boot/compressed/Makefile:118: recipe for target 'arch/x86/boot/compressed/vmlinux.relocs' failed
  make[3]: *** [arch/x86/boot/compressed/vmlinux.relocs] Error 1

It turns out that relocs.c uses patterns like "something(|_end)". This is not valid syntax or gives undefined results according to POSIX 9.5.3 ERE Grammar:

  https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap09.html

It seems to be silently accepted by the Linux regexp() implementation while a BSD host complains. Such patterns can be replaced by a transformation like "(|p1|p2)" -> "(p1|p2)?".

Fixes: fd952815307f ("x86-32, relocs: Whitelist more symbols for ld bug workaround")
Signed-off-by: H. Nikolaus Schaller <hns@goldelico.com>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
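The fix is a pure regexp rewrite, for example (illustrative pattern; the real tables in relocs.c are longer):

    "_end(|_text)"      /* empty alternative: undefined per POSIX ERE */
    "_end(_text)?"      /* optional group: same matches, portable    */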
2021-08-05  KVM: x86/mmu: Fix per-cpu counter corruption on 32-bit builds  (Sean Christopherson)

Take a signed 'long' instead of an 'unsigned long' for the number of pages to add/subtract to the total number of pages used by the MMU. This fixes a zero-extension bug on 32-bit kernels that effectively corrupts the per-cpu counter used by the shrinker.

Per-cpu counters take a signed 64-bit value on both 32-bit and 64-bit kernels, whereas kvm_mod_used_mmu_pages() takes an unsigned long and thus an unsigned 32-bit value on 32-bit kernels. As a result, the value used to adjust the per-cpu counter is zero-extended (unsigned -> signed), not sign-extended (signed -> signed), and so KVM's intended -1 gets morphed to 4294967295 and effectively corrupts the counter.

This was found by a staggering amount of sheer dumb luck when running kvm-unit-tests on a 32-bit KVM build. The shrinker just happened to kick in while running tests and do_shrink_slab() logged an error about trying to free a negative number of objects. The truly lucky part is that the kernel just happened to be a slightly stale build, as the shrinker no longer yells about negative objects as of commit 18bb473e5031 ("mm: vmscan: shrink deferred objects proportional to priority").

  vmscan: shrink_slab: mmu_shrink_scan+0x0/0x210 [kvm] negative objects to delete nr=-858993460

Fixes: bc8a3d8925a8 ("kvm: mmu: Fix overflow on kvm mmu page limit calculation")
Cc: stable@vger.kernel.org
Cc: Ben Gardon <bgardon@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210804214609.1096003-1-seanjc@google.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
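The conversion chain on a 32-bit build goes unsigned long (32 bits) to s64, which zero-extends; a minimal user-space analogue (hypothetical, for illustration only):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            /* Mirrors a 32-bit kernel, where unsigned long is 32 bits wide */
            uint32_t nr_pages = (uint32_t)-1;       /* intent: subtract one page */

            int64_t bad  = nr_pages;                /* zero-extended: 4294967295 */
            int64_t good = (int32_t)nr_pages;       /* sign-extended: -1 */

            printf("bad=%lld good=%lld\n", (long long)bad, (long long)good);
            return 0;
    }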
2021-08-05  KVM: xen: do not use struct gfn_to_hva_cache  (Paolo Bonzini)

gfn_to_hva_cache is not thread-safe, so it is usually used only within a vCPU (whose code is protected by vcpu->mutex). The Xen interface implementation has such a cache in kvm->arch, but it is not really used except to store the location of the shared info page. Replace shinfo_set and shinfo_cache with just the value that is passed via KVM_XEN_ATTR_TYPE_SHARED_INFO; the only complication is that the initialization value is not zero anymore and therefore kvm_xen_init_vm needs to be introduced.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-08-04  KVM: SVM: improve the code readability for ASID management  (Mingwei Zhang)

KVM SEV code uses bitmaps to manage ASID states. ASID 0 is always skipped because it is never used by a VM. Thus, in the existing code, an ASID value and its bitmap position always have an 'offset-by-1' relationship.

Both SEV and SEV-ES share the ASID space, so KVM uses a dynamic range, [min_asid, max_asid], to handle SEV and SEV-ES ASIDs separately.

Existing code mixes the usage of ASID value and its bitmap position by using the same variable called 'min_asid'.

Fix the min_asid usage: ensure that its usage is consistent with its name; allocate extra size for ASID 0 to ensure that each ASID has the same value as its bitmap position. Add comments on ASID bitmap allocation to clarify the size change.

Signed-off-by: Mingwei Zhang <mizhang@google.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Marc Orr <marcorr@google.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Alper Gun <alpergun@google.com>
Cc: Dionna Glaze <dionnaglaze@google.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Vipin Sharma <vipinsh@google.com>
Cc: Peter Gonda <pgonda@google.com>
Cc: Joerg Roedel <joro@8bytes.org>
Message-Id: <20210802180903.159381-1-mizhang@google.com>
[Fix up sev_asid_free to also index by ASID, as suggested by Sean Christopherson, and use nr_asids in sev_cpu_init. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-08-04  perf/x86/amd: Don't touch the AMD64_EVENTSEL_HOSTONLY bit inside the guest  (Like Xu)

If we use "perf record" in an AMD Milan guest, dmesg reports a #GP warning from an unchecked MSR access error on MSR_F15H_PERF_CTLx:

[] unchecked MSR access error: WRMSR to 0xc0010200 (tried to write 0x0000020000110076) at rIP: 0xffffffff8106ddb4 (native_write_msr+0x4/0x20)
[] Call Trace:
[]  amd_pmu_disable_event+0x22/0x90
[]  x86_pmu_stop+0x4c/0xa0
[]  x86_pmu_del+0x3a/0x140

The AMD64_EVENTSEL_HOSTONLY bit is defined and used on the host, while the guest perf driver should avoid such use.

Fixes: 1018faa6cf23 ("perf/x86/kvm: Fix Host-Only/Guest-Only counting with SVM disabled")
Signed-off-by: Like Xu <likexu@tencent.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Liam Merwick <liam.merwick@oracle.com>
Tested-by: Kim Phillips <kim.phillips@amd.com>
Tested-by: Liam Merwick <liam.merwick@oracle.com>
Link: https://lkml.kernel.org/r/20210802070850.35295-1-likexu@tencent.com
2021-08-04  perf/x86: Fix out of bound MSR access  (Peter Zijlstra)

On Wed, Jul 28, 2021 at 12:49:43PM -0400, Vince Weaver wrote:

> [32694.087403] unchecked MSR access error: WRMSR to 0x318 (tried to write 0x0000000000000000) at rIP: 0xffffffff8106f854 (native_write_msr+0x4/0x20)
> [32694.101374] Call Trace:
> [32694.103974]  perf_clear_dirty_counters+0x86/0x100

The problem being that it doesn't filter out all fake counters; specifically, the above (erroneously) tries to use FIXED_BTS. Limit the fixed counter indexes to the hardware-supplied number.

Reported-by: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Vince Weaver <vincent.weaver@maine.edu>
Tested-by: Like Xu <likexu@tencent.com>
Link: https://lkml.kernel.org/r/YQJxka3dxgdIdebG@hirez.programming.kicks-ass.net
2021-08-04  x86/hyperv: fix root partition faults when writing to VP assist page MSR  (Praveen Kumar)

For the root partition the VP assist pages are pre-determined by the hypervisor. The root kernel is not allowed to change them to different locations. Thus, in the current implementation, we get the stack below because root is trying to perform a write to the specific MSR:

[ 2.778197] unchecked MSR access error: WRMSR to 0x40000073 (tried to write 0x0000000145ac5001) at rIP: 0xffffffff810c1084 (native_write_msr+0x4/0x30)
[ 2.784867] Call Trace:
[ 2.791507]  hv_cpu_init+0xf1/0x1c0
[ 2.798144]  ? hyperv_report_panic+0xd0/0xd0
[ 2.804806]  cpuhp_invoke_callback+0x11a/0x440
[ 2.811465]  ? hv_resume+0x90/0x90
[ 2.818137]  cpuhp_issue_call+0x126/0x130
[ 2.824782]  __cpuhp_setup_state_cpuslocked+0x102/0x2b0
[ 2.831427]  ? hyperv_report_panic+0xd0/0xd0
[ 2.838075]  ? hyperv_report_panic+0xd0/0xd0
[ 2.844723]  ? hv_resume+0x90/0x90
[ 2.851375]  __cpuhp_setup_state+0x3d/0x90
[ 2.858030]  hyperv_init+0x14e/0x410
[ 2.864689]  ? enable_IR_x2apic+0x190/0x1a0
[ 2.871349]  apic_intr_mode_init+0x8b/0x100
[ 2.878017]  x86_late_time_init+0x20/0x30
[ 2.884675]  start_kernel+0x459/0x4fb
[ 2.891329]  secondary_startup_64_no_verify+0xb0/0xbb

Since the hypervisor already provides the VP assist pages for the root partition, we need to memremap the memory from the hypervisor for the root kernel to use. The mapping is done in hv_cpu_init during bringup and is unmapped in hv_cpu_die during teardown.

Signed-off-by: Praveen Kumar <kumarpraveen@linux.microsoft.com>
Reviewed-by: Sunil Muthuswamy <sunilmut@microsoft.com>
Link: https://lore.kernel.org/r/20210731120519.17154-1-kumarpraveen@linux.microsoft.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>
2021-08-04  KVM: SVM: Fix off-by-one indexing when nullifying last used SEV VMCB  (Sean Christopherson)

Use the raw ASID, not ASID-1, when nullifying the last used VMCB when freeing an SEV ASID. The consumer, pre_sev_run(), indexes the array by the raw ASID, thus KVM could get a false negative when checking for a different VMCB if KVM manages to reallocate the same ASID+VMCB combo for a new VM.

Note, this cannot cause a functional issue _in the current code_, as pre_sev_run() also checks which pCPU last did VMRUN for the vCPU, and last_vmentry_cpu is initialized to -1 during vCPU creation, i.e. is guaranteed to mismatch on the first VMRUN. However, prior to commit 8a14fe4f0c54 ("kvm: x86: Move last_cpu into kvm_vcpu_arch as last_vmentry_cpu"), SVM tracked pCPU on its own and zero-initialized the last_cpu variable. Thus it's theoretically possible that older versions of KVM could miss a TLB flush if the first VMRUN is on pCPU0 and the ASID and VMCB exactly match those of a prior VM.

Fixes: 70cd94e60c73 ("KVM: SVM: VMRUN should use associated ASID when SEV is enabled")
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-08-04  KVM: x86/pmu: Introduce pmc->is_paused to reduce the call time of perf interfaces  (Like Xu)

Based on our observations, after any vm-exit associated with vPMU, there are at least two or more perf interfaces to be called for guest counter emulation, such as perf_event_{pause, read_value, period}(), and each one will {lock, unlock} the same perf_event_ctx. The frequency of calls becomes more severe when the guest uses counters in a multiplexed manner.

Holding a lock once and completing the KVM request operations in the perf context would introduce a set of impractical new interfaces. So we can further optimize the vPMU implementation by avoiding repeated calls to these interfaces in the KVM context for at least one pattern:

After we call perf_event_pause() once, the event will be disabled and its internal count will be reset to 0. So there is no need to pause it again or read its value. Once the event is paused, the event period will not be updated until the next time it is resumed or reprogrammed. And there is also no need to call perf_event_period() twice for a non-running counter, considering the perf_event for a running counter is never paused.

Based on this implementation, for the following common usage of sampling 4 events using perf on a 4u8g guest:

  echo 0 > /proc/sys/kernel/watchdog
  echo 25 > /proc/sys/kernel/perf_cpu_time_max_percent
  echo 10000 > /proc/sys/kernel/perf_event_max_sample_rate
  echo 0 > /proc/sys/kernel/perf_cpu_time_max_percent
  for i in `seq 1 1 10`
  do
  taskset -c 0 perf record \
  -e cpu-cycles -e instructions -e branch-instructions -e cache-misses \
  /root/br_instr a
  done

the average latency of the guest NMI handler is reduced from 37646.7 ns to 32929.3 ns (~1.14x speed up) on the Intel ICX server. Also, in addition to collecting more samples, no loss of sampling accuracy was observed compared to before the optimization.

Signed-off-by: Like Xu <likexu@tencent.com>
Message-Id: <20210728120705.6855-1-likexu@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
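A sketch of the pattern, assuming a pmc->is_paused flag set by pmc_pause_counter() and checked before the locking perf calls (simplified, not necessarily the exact patch code):

    static u64 pmc_read_counter(struct kvm_pmc *pmc)
    {
            u64 counter, enabled, running;

            counter = pmc->counter;
            /* A paused event was reset to 0: skip the perf_event_ctx lock */
            if (pmc->perf_event && !pmc->is_paused)
                    counter += perf_event_read_value(pmc->perf_event,
                                                     &enabled, &running);

            return counter & pmc_bitmask(pmc);
    }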
2021-08-04  KVM: X86: Optimize zapping rmap  (Peter Xu)

Using rmap_get_first() and rmap_remove() for zapping a huge rmap list could be slow. The easy way is to traverse the rmap list, collecting the a/d bits and freeing the slots along the way.

Provide a pte_list_destroy() and do exactly that.

Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20210730220605.26377-1-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-08-04  KVM: X86: Optimize pte_list_desc with per-array counter  (Peter Xu)

Add a counter field into pte_list_desc, so as to simplify the add/remove/loop logic. E.g., we don't need to loop over the array any more for most operations.

This will make more sense after we've switched the array size to be larger; otherwise the counter would be a waste.

Initially I wanted to store a tail pointer at the head of the array list so we don't need to traverse the list, at least for pushing new ones (without the counter we traverse both the list and the array). However that would need slightly more change without a huge benefit, e.g., after we grow entry numbers per array the list traversing is not so expensive.

So let's be simple but still try to get as much benefit as we can with just these extra few lines of changes (not to mention the code looks easier too without looping over arrays).

I used the same test case to fork 500 children and recycle them ("./rmap_fork 500" [1]); this patch further speeds up the total fork time by about 4%, which is a total of 33% over the vanilla kernel:

  Vanilla:      473.90 (+-5.93%)
  3->15 slots:  366.10 (+-4.94%)
  Add counter:  351.00 (+-3.70%)

[1] https://github.com/xzpeter/clibs/commit/825436f825453de2ea5aaee4bdb1c92281efe5b3

Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20210730220602.26327-1-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
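A sketch of the structure with the counter; the array size shown assumes the enlargement done by the 'Tune PTE_LIST_EXT' patch below and is illustrative rather than the exact merged value:

    #define PTE_LIST_EXT 14         /* grown from 3 by the related patch */

    struct pte_list_desc {
            struct pte_list_desc *more;     /* next array in the list */
            u64 spte_count;                 /* valid entries in sptes[] */
            u64 *sptes[PTE_LIST_EXT];
    };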
2021-08-04  KVM: X86: MMU: Tune PTE_LIST_EXT to be bigger  (Peter Xu)

Currently an rmap array element only contains 3 entries. However for EPT=N there can be a lot of guest pages that have tens or even hundreds of rmap entries.

A normal distribution of a 6G guest (even if idle) shows this with rmap count statistics:

  Rmap_Count:  0        1      2-3    4-7   8-15  16-31  32-63  64-127  128-255  256-511  512-1023
  Level=4K:    3089171  49005  14016  1363  235   212    15     7       0        0        0
  Level=2M:    5951     227    0      0     0     0      0      0       0        0        0
  Level=1G:    32       0      0      0     0     0      0      0       0        0        0

If we do some more forks, some pages will grow even larger rmap counts.

This patch makes PTE_LIST_EXT bigger so it'll be more efficient for the general use case of EPT=N, as we do list references less often and the loops over PTE_LIST_EXT will be slightly more efficient; but still not too large, so there is less waste when the array is not full.

It should not affect EPT=Y, since EPT normally only has zero or one rmap entry for each page, so no array is even allocated.

With a test case to fork 500 children and recycle them ("./rmap_fork 500" [1]), this patch speeds up fork time by about 29%.

  Before: 473.90 (+-5.93%)
  After:  366.10 (+-4.94%)

[1] https://github.com/xzpeter/clibs/commit/825436f825453de2ea5aaee4bdb1c92281efe5b3

Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20210730220455.26054-6-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-08-03  KVM: x86: hyper-v: Check if guest is allowed to use XMM registers for hypercall input  (Vitaly Kuznetsov)

TLFS states that "Availability of the XMM fast hypercall interface is indicated via the “Hypervisor Feature Identification” CPUID Leaf (0x40000003, see section 2.4.4) ... Any attempt to use this interface when the hypervisor does not indicate availability will result in a #UD fault."

Implement the check for 'strict' mode (KVM_CAP_HYPERV_ENFORCE_CPUID).

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Siddharth Chandrasekaran <sidcha@amazon.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210730122625.112848-4-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-08-03  KVM: x86: Introduce trace_kvm_hv_hypercall_done()  (Vitaly Kuznetsov)

Hypercall failures are unusual, with potentially far-reaching consequences, so it would be useful to see their results when tracing.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Siddharth Chandrasekaran <sidcha@amazon.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210730122625.112848-3-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-08-03  KVM: x86: hyper-v: Check access to hypercall before reading XMM registers  (Vitaly Kuznetsov)

In case the guest doesn't have access to the particular hypercall, we can avoid reading the XMM registers.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Siddharth Chandrasekaran <sidcha@amazon.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210730122625.112848-2-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-08-03  KVM: const-ify all relevant uses of struct kvm_memory_slot  (Hamza Mahfooz)

As alluded to in commit f36f3f2846b5 ("KVM: add "new" argument to kvm_arch_commit_memory_region"), a bunch of other places where struct kvm_memory_slot is used need to be refactored to preserve the "const"ness of struct kvm_memory_slot across the board.

Signed-off-by: Hamza Mahfooz <someguy@effective-light.com>
Message-Id: <20210713023338.57108-1-someguy@effective-light.com>
[Do not touch body of slot_rmap_walk_init. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-08-02  KVM: nSVM: remove useless kvm_clear_*_queue  (Paolo Bonzini)

For an event to be in the injected state when nested_svm_vmrun executes, it must have come from exitintinfo when svm_complete_interrupts ran:

  vcpu_enter_guest
    static_call(kvm_x86_run) -> svm_vcpu_run
    svm_complete_interrupts
    // now the event went from "exitintinfo" to "injected"
    static_call(kvm_x86_handle_exit) -> handle_exit
      svm_invoke_exit_handler
        vmrun_interception
          nested_svm_vmrun

However, no event could have been in exitintinfo before a VMRUN vmexit. The code in svm.c is a bit more permissive than the one in vmx.c:

  if (is_external_interrupt(svm->vmcb->control.exit_int_info) &&
      exit_code != SVM_EXIT_EXCP_BASE + PF_VECTOR &&
      exit_code != SVM_EXIT_NPF && exit_code != SVM_EXIT_TASK_SWITCH &&
      exit_code != SVM_EXIT_INTR && exit_code != SVM_EXIT_NMI)

but in any case, a VMRUN instruction would not even start to execute during an attempted event delivery.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>