2024-10-31  KVM: arm64: nv: Add missing EL2->EL1 mappings in get_el2_to_el1_mapping()  (Marc Zyngier)
As KVM has grown a bunch of new system registers for NV, it appears that we are missing them in the get_el2_to_el1_mapping() list. Most of them are not crucial, as they don't tend to be accessed via vcpu_read_sys_reg() and vcpu_write_sys_reg(). Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20241023145345.1613824-6-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2024-10-31  KVM: arm64: Drop useless struct s2_mmu in __kvm_at_s1e2()  (Marc Zyngier)
__kvm_at_s1e2() contains the definition of an s2_mmu for the current context, but doesn't make any use of it. Drop it. Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20241023145345.1613824-5-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2024-10-31  arm64: Add encoding for PIRE0_EL2  (Marc Zyngier)
PIRE0_EL2 is the equivalent of PIRE0_EL1 for the EL2&0 translation regime, and it is sorely missing from the sysreg file. Add the sucker. Reviewed-by: Joey Gouly <joey.gouly@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20241023145345.1613824-4-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2024-10-31  arm64: Remove VNCR definition for PIRE0_EL2  (Marc Zyngier)
As of the ARM ARM Known Issues document 102105_K.a_04_en, D22677 fixes a problem with the PIRE0_EL2 register, resulting in its removal from the VNCR page (it had no purpose being there in the first place). Follow the architecture update by removing this offset. Reviewed-by: Joey Gouly <joey.gouly@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20241023145345.1613824-3-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2024-10-31  arm64: Drop SKL0/SKL1 from TCR2_EL2  (Marc Zyngier)
Despite what the documentation says, TCR2_EL2.{SKL0,SKL1} do not exist, and the corresponding information is in the respective TTBRx_EL2. This is a leftover from a development version of the architecture. This change makes TCR2_EL2 similar to TCR2_EL1 in that respect. Reviewed-by: Joey Gouly <joey.gouly@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20241023145345.1613824-2-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2024-10-30  net: ethernet: mtk_wed: fix path of MT7988 WO firmware  (Daniel Golle)
linux-firmware commit 808cba84 ("mtk_wed: add firmware for mt7988 Wireless Ethernet Dispatcher") added mt7988_wo_{0,1}.bin in the 'mediatek/mt7988' directory, while the driver currently expects the files in the 'mediatek' directory. Change the path in the driver header now that the firmware has been added. Fixes: e2f64db13aa1 ("net: ethernet: mtk_wed: introduce WED support for MT7988") Signed-off-by: Daniel Golle <daniel@makrotopia.org> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Link: https://patch.msgid.link/Zxz0GWTR5X5LdWPe@pidgin.makrotopia.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
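The fix amounts to pointing the firmware path macros at the new subdirectory; a hedged sketch of the shape of the change (macro names assumed, not copied from the driver):

    #define MT7988_FIRMWARE_WO0	"mediatek/mt7988/mt7988_wo_0.bin"
    #define MT7988_FIRMWARE_WO1	"mediatek/mt7988/mt7988_wo_1.bin"

    /* keeps modinfo/firmware tooling aware of the new paths */
    MODULE_FIRMWARE(MT7988_FIRMWARE_WO0);
    MODULE_FIRMWARE(MT7988_FIRMWARE_WO1);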
2024-10-30  Merge branch 'mlxsw-fixes'  (Jakub Kicinski)
Petr Machata says:
====================
mlxsw: Fixes

In this patchset:

- Tx header should be pushed for each packet which is transmitted via Spectrum ASICs. Patch #1 adds a missing call to skb_cow_head() to make sure that there is both enough room to push the Tx header and that the SKB header is not cloned and can be modified.

- Commit b5b60bb491b2 ("mlxsw: pci: Use page pool for Rx buffers allocation") converted mlxsw to use page pool for Rx buffers allocation. Sync for CPU and for device should be done for Rx pages. Patches #2 and #3 add the missing calls to sync pages for, respectively, the CPU and the device.

- Patch #4 then fixes a bug in IPv6 GRE forwarding offload. Patch #5 adds a generic forwarding test that fails with mlxsw ports prior to the fix.
====================
Link: https://patch.msgid.link/cover.1729866134.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-30  selftests: forwarding: Add IPv6 GRE remote change tests  (Ido Schimmel)
Test that after changing the remote address of an ip6gre net device, traffic is forwarded as expected. Test with both flat and hierarchical topologies, and with and without input/output keys. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Link: https://patch.msgid.link/02b05246d2cdada0cf2fccffc0faa8a424d0f51b.1729866134.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-30  mlxsw: spectrum_ipip: Fix memory leak when changing remote IPv6 address  (Ido Schimmel)
The device stores IPv6 addresses that are used for encapsulation in linear memory that is managed by the driver. Changing the remote address of an ip6gre net device never worked properly, but since the cited commit the following reproducer [1] would result in a warning [2] and a memory leak [3]. The problem is that the new remote address is never added by the driver to its hash table (and therefore the device) and the old address is never removed from it. Fix by programming the new address when the configuration of the ip6gre net device changes and removing the old one. If the address did not change, then the above would result in increasing the reference count of the address and then decreasing it.

[1]
 # ip link add name bla up type ip6gre local 2001:db8:1::1 remote 2001:db8:2::1 tos inherit ttl inherit
 # ip link set dev bla type ip6gre remote 2001:db8:3::1
 # ip link del dev bla
 # devlink dev reload pci/0000:01:00.0

[2]
WARNING: CPU: 0 PID: 1682 at drivers/net/ethernet/mellanox/mlxsw/spectrum.c:3002 mlxsw_sp_ipv6_addr_put+0x140/0x1d0
Modules linked in:
CPU: 0 UID: 0 PID: 1682 Comm: ip Not tainted 6.12.0-rc3-custom-g86b5b55bc835 #151
Hardware name: Nvidia SN5600/VMOD0013, BIOS 5.13 05/31/2023
RIP: 0010:mlxsw_sp_ipv6_addr_put+0x140/0x1d0
[...]
Call Trace:
 <TASK>
 mlxsw_sp_router_netdevice_event+0x55f/0x1240
 notifier_call_chain+0x5a/0xd0
 call_netdevice_notifiers_info+0x39/0x90
 unregister_netdevice_many_notify+0x63e/0x9d0
 rtnl_dellink+0x16b/0x3a0
 rtnetlink_rcv_msg+0x142/0x3f0
 netlink_rcv_skb+0x50/0x100
 netlink_unicast+0x242/0x390
 netlink_sendmsg+0x1de/0x420
 ____sys_sendmsg+0x2bd/0x320
 ___sys_sendmsg+0x9a/0xe0
 __sys_sendmsg+0x7a/0xd0
 do_syscall_64+0x9e/0x1a0
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

[3]
unreferenced object 0xffff898081f597a0 (size 32):
  comm "ip", pid 1626, jiffies 4294719324
  hex dump (first 32 bytes):
    20 01 0d b8 00 02 00 00 00 00 00 00 00 00 00 01  ...............
    21 49 61 83 80 89 ff ff 00 00 00 00 01 00 00 00  !Ia.............
  backtrace (crc fd9be911):
    [<00000000df89c55d>] __kmalloc_cache_noprof+0x1da/0x260
    [<00000000ff2a1ddb>] mlxsw_sp_ipv6_addr_kvdl_index_get+0x281/0x340
    [<000000009ddd445d>] mlxsw_sp_router_netdevice_event+0x47b/0x1240
    [<00000000743e7757>] notifier_call_chain+0x5a/0xd0
    [<000000007c7b9e13>] call_netdevice_notifiers_info+0x39/0x90
    [<000000002509645d>] register_netdevice+0x5f7/0x7a0
    [<00000000c2e7d2a9>] ip6gre_newlink_common.isra.0+0x65/0x130
    [<0000000087cd6d8d>] ip6gre_newlink+0x72/0x120
    [<000000004df7c7cc>] rtnl_newlink+0x471/0xa20
    [<0000000057ed632a>] rtnetlink_rcv_msg+0x142/0x3f0
    [<0000000032e0d5b5>] netlink_rcv_skb+0x50/0x100
    [<00000000908bca63>] netlink_unicast+0x242/0x390
    [<00000000cdbe1c87>] netlink_sendmsg+0x1de/0x420
    [<0000000011db153e>] ____sys_sendmsg+0x2bd/0x320
    [<000000003b6d53eb>] ___sys_sendmsg+0x9a/0xe0
    [<00000000cae27c62>] __sys_sendmsg+0x7a/0xd0

Fixes: cf42911523e0 ("mlxsw: spectrum_ipip: Use common hash table for IPv6 address mapping") Reported-by: Maksym Yaremchuk <maksymy@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Link: https://patch.msgid.link/e91012edc5a6cb9df37b78fd377f669381facfcb.1729866134.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-30  mlxsw: pci: Sync Rx buffers for device  (Amit Cohen)
Non-coherent architectures, like ARM, may require invalidating caches before the device can use the DMA-mapped memory, which means that before posting pages to the device, drivers should sync the memory for the device. Sync for device can be configured as a page pool responsibility. Set the relevant flag and define max_len for the sync. Cc: Jiri Pirko <jiri@resnulli.us> Fixes: b5b60bb491b2 ("mlxsw: pci: Use page pool for Rx buffers allocation") Signed-off-by: Amit Cohen <amcohen@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Link: https://patch.msgid.link/92e01f05c4f506a4f0a9b39c10175dcc01994910.1729866134.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
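In page pool terms, this means setting PP_FLAG_DMA_SYNC_DEV and a max_len in the pool parameters; a hedged sketch of the pattern (field values illustrative, not the driver's exact configuration):

    struct page_pool_params pp_params = {
            .flags   = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
            .max_len = PAGE_SIZE,        /* sync up to this length before reposting to HW */
            .dma_dir = DMA_FROM_DEVICE,
            /* ... order, pool_size, nid, dev ... */
    };
    struct page_pool *pool = page_pool_create(&pp_params);

With PP_FLAG_DMA_SYNC_DEV set, the pool itself syncs each recycled page for the device, so the driver does not have to remember to do it on every refill path.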
2024-10-30  mlxsw: pci: Sync Rx buffers for CPU  (Amit Cohen)
When an Rx packet is received, drivers should sync the pages for the CPU, to ensure the CPU reads the data written by the device and not stale data from its cache. Add the missing sync call in the Rx path, syncing the actual data length for each fragment. Cc: Jiri Pirko <jiri@resnulli.us> Fixes: b5b60bb491b2 ("mlxsw: pci: Use page pool for Rx buffers allocation") Signed-off-by: Amit Cohen <amcohen@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Link: https://patch.msgid.link/461486fac91755ca4e04c2068c102250026dcd0b.1729866134.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
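A hedged sketch of the usual sync-for-CPU call in an Rx completion handler (variable names assumed, not taken from the driver):

    dma_addr_t dma = page_pool_get_dma_addr(page);

    /* sync only the bytes the device actually wrote for this fragment */
    dma_sync_single_range_for_cpu(dev, dma, frag_offset, frag_len,
                                  DMA_FROM_DEVICE);

Syncing the actual fragment length rather than the whole page keeps the cache invalidation cost proportional to the received data.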
2024-10-30  mlxsw: spectrum_ptp: Add missing verification before pushing Tx header  (Amit Cohen)
A Tx header should be pushed for each packet which is transmitted via Spectrum ASICs. The cited commit moved the call to skb_cow_head() from mlxsw_sp_port_xmit() to the functions which handle the Tx header. In case mlxsw_sp->ptp_ops->txhdr_construct() is used to handle the Tx header, and txhdr_construct() is mlxsw_sp_ptp_txhdr_construct(), there is no call to skb_cow_head() before pushing the Tx header onto the SKB. This flow is relevant for Spectrum-1 and Spectrum-4, for PTP packets. Add the missing call to skb_cow_head() to make sure that there is both enough room to push the Tx header and that the SKB header is not cloned and can be modified. An additional set will be sent to net-next to centralize the handling of the Tx header by pushing it to every packet just before transmission. Cc: Richard Cochran <richardcochran@gmail.com> Fixes: 24157bc69f45 ("mlxsw: Send PTP packets as data packets to overcome a limitation") Signed-off-by: Amit Cohen <amcohen@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Link: https://patch.msgid.link/5145780b07ebbb5d3b3570f311254a3a2d554a44.1729866134.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
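The missing guard follows the common pattern below; a hedged sketch (MLXSW_TXHDR_LEN names the driver's Tx header size, error handling assumed):

    /* make sure there is headroom and the header is private to us */
    if (skb_cow_head(skb, MLXSW_TXHDR_LEN)) {
            dev_kfree_skb_any(skb);
            return -ENOMEM;
    }
    /* only now is it safe to skb_push() the Tx header */

skb_cow_head() both reallocates headroom when it is short and un-clones a shared header, which is why it must precede any skb_push() that writes into the header.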
2024-10-30  net: skip offload for NETIF_F_IPV6_CSUM if ipv6 header contains extension  (Benoît Monin)
As documented in skbuff.h, devices with the NETIF_F_IPV6_CSUM capability can only checksum TCP and UDP over IPv6 if the IP header does not contain extension headers. This is enforced for UDP packets emitted from user-space to an IPv6 address, as they go through ip6_make_skb(), which calls __ip6_append_data(), where a check is done on the header size before setting CHECKSUM_PARTIAL. But the introduction of UDP encapsulation with fou6 added a code-path where it is possible to get an skb with a partial UDP checksum and an IPv6 header with an extension:

* fou6 adds a UDP header with a partial checksum if the inner packet does not contain a valid checksum.
* ip6_tunnel adds an IPv6 header with a destination option extension header if encap_limit is non-zero (the default value is 4).

The thread linked below describes in more detail how to reproduce the problem with a GRE-in-UDP tunnel. Add a check on the network header size in skb_csum_hwoffload_help() to make sure no IPv6 packet with an extension header is handed to a network device with the NETIF_F_IPV6_CSUM capability. Link: https://lore.kernel.org/netdev/26548921.1r3eYUQgxm@benoit.monin/T/#u Fixes: aa3463d65e7b ("fou: Add encap ops for IPv6 tunnels") Signed-off-by: Benoît Monin <benoit.monin@gmx.fr> Reviewed-by: Willem de Bruijn <willemb@google.com> Link: https://patch.msgid.link/5fbeecfc311ea182aa1d1c771725ab8b4cac515e.1729778144.git.benoit.monin@gmx.fr Signed-off-by: Jakub Kicinski <kuba@kernel.org>
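A hedged sketch of the kind of check described, in skb_csum_hwoffload_help() terms (exact placement and feature handling assumed, not copied from the patch):

    /* a basic IPv6 header is exactly 40 bytes; anything longer means
     * extension headers, which NETIF_F_IPV6_CSUM devices can't handle */
    if (skb->protocol == htons(ETH_P_IPV6) &&
        skb_network_header_len(skb) != sizeof(struct ipv6hdr))
            features &= ~NETIF_F_IPV6_CSUM;

Clearing the feature bit forces the stack to fall back to a software checksum for exactly these packets, while the fast path for plain IPv6 headers is unchanged.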
2024-10-31  cfi: tweak llvm version for HAVE_CFI_ICALL_NORMALIZE_INTEGERS  (Alice Ryhl)
The llvm fix [1] did not make it into 19.0.0, but ended up getting backported to llvm 19.1.3 [2]. Thus, fix the version requirement to correctly specify which versions have the bug. Link: https://github.com/llvm/llvm-project/pull/104826 [1] Link: https://github.com/llvm/llvm-project/pull/113938 [2] Reported-by: kernel test robot <oliver.sang@intel.com> Closes: https://lore.kernel.org/oe-lkp/202410281414.c351044e-oliver.sang@intel.com Fixes: 8b8ca9c25fe6 ("cfi: fix conditions for HAVE_CFI_ICALL_NORMALIZE_INTEGERS") Signed-off-by: Alice Ryhl <aliceryhl@google.com> Reviewed-by: Sami Tolvanen <samitolvanen@google.com> Link: https://lore.kernel.org/r/20241030-cfi-icall-1913-v1-1-ab8a26e13733@google.com Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
2024-10-30  KVM: x86/mmu: Batch TLB flushes when zapping collapsible TDP MMU SPTEs  (David Matlack)
Set SPTEs directly to SHADOW_NONPRESENT_VALUE and batch up TLB flushes when zapping collapsible SPTEs, rather than freezing them first. Freezing the SPTE first is not required. It is fine for another thread holding mmu_lock for read to immediately install a present entry before TLBs are flushed because the underlying mapping is not changing. vCPUs that translate through the stale 4K mappings or a new huge page mapping will still observe the same GPA->HPA translations. KVM must only flush TLBs before dropping RCU (to avoid use-after-free of the zapped page tables) and before dropping mmu_lock (to synchronize with mmu_notifiers invalidating mappings). In VMs backed with 2MiB pages, batching TLB flushes improves the time it takes to zap collapsible SPTEs to disable dirty logging:

 $ ./dirty_log_perf_test -s anonymous_hugetlb_2mb -v 64 -e -b 4g

 Before: Disabling dirty logging time: 14.334453428s (131072 flushes)
 After:  Disabling dirty logging time: 4.794969689s (76 flushes)

Skipping freezing SPTEs also avoids stalling vCPU threads on the frozen SPTE for the time it takes to perform a remote TLB flush. vCPUs faulting on the zapped mapping can now immediately install a new huge mapping and proceed with guest execution. Signed-off-by: David Matlack <dmatlack@google.com> Link: https://lore.kernel.org/r/20240823235648.3236880-3-dmatlack@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-10-30  KVM: x86/mmu: Drop @max_level from kvm_mmu_max_mapping_level()  (David Matlack)
Drop the @max_level parameter from kvm_mmu_max_mapping_level(). All callers pass in PG_LEVEL_NUM, so @max_level can be replaced with PG_LEVEL_NUM in the function body. No functional change intended. Signed-off-by: David Matlack <dmatlack@google.com> Link: https://lore.kernel.org/r/20240823235648.3236880-2-dmatlack@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-10-30  KVM: x86: Don't emit TLB flushes when aging SPTEs for mmu_notifiers  (Sean Christopherson)
Follow x86's primary MMU, which hasn't flushed TLBs when clearing Accessed bits for 10+ years, and skip all TLB flushes when aging SPTEs in response to a clear_flush_young() mmu_notifier event. As documented in x86's ptep_clear_flush_young(), the probability and impact of "bad" reclaim due to stale A-bit information is relatively low, whereas the performance cost of TLB flushes is relatively high. I.e. the cost of flushing TLBs outweighs the benefits. On KVM x86, the cost of TLB flushes is even higher, as KVM doesn't batch TLB flushes for mmu_notifier events (KVM's mmu_notifier contract with MM makes it all but impossible), and sending IPIs forces all running vCPUs to go through a VM-Exit => VM-Enter roundtrip. Furthermore, MGLRU aging of secondary MMUs is expected to use flush-less mmu_notifiers, i.e. flushing for the !MGLRU case will make even less sense, and will be actively confusing as it wouldn't be clear why KVM "needs" to flush TLBs for legacy LRU aging, but not for MGLRU aging. Cc: James Houghton <jthoughton@google.com> Cc: Yan Zhao <yan.y.zhao@intel.com> Link: https://lore.kernel.org/all/20240926013506.860253-18-jthoughton@google.com Link: https://lore.kernel.org/r/20241011021051.1557902-19-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-10-30  KVM: Allow arch code to elide TLB flushes when aging a young page  (Sean Christopherson)
Add a Kconfig to allow architectures to opt out of a TLB flush when a young page is aged, as invalidating TLB entries is not functionally required on most KVM-supported architectures. Stale TLB entries can result in false negatives and theoretically lead to suboptimal reclaim, but in practice all observations have been that the performance gained by skipping TLB flushes outweighs any performance lost by reclaiming hot pages. E.g. the primary MMUs for x86, RISC-V, s390, and PPC Book3S elide the TLB flush for ptep_clear_flush_young(), and arm64's MMU skips the trailing DSB that's required for ordering (presumably because there are optimizations related to eliding other TLB flushes when doing make-before-break). Link: https://lore.kernel.org/r/20241011021051.1557902-18-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-10-30  KVM: x86/mmu: Set Dirty bit for new SPTEs, even if _hardware_ A/D bits are disabled  (Sean Christopherson)
When making a SPTE, set the Dirty bit in the SPTE as appropriate, even if hardware A/D bits are disabled. Only EPT allows A/D bits to be disabled, and for EPT, the bits are software-available (ignored by hardware) when A/D bits are disabled, i.e. it is perfectly legal for KVM to use the Dirty bit to track dirty pages in software. Link: https://lore.kernel.org/r/20241011021051.1557902-17-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-10-30  KVM: x86/mmu: Dedup logic for detecting TLB flushes on leaf SPTE changes  (Sean Christopherson)
Now that the shadow MMU and TDP MMU have identical logic for detecting required TLB flushes when updating SPTEs, move said logic to a helper so that the TDP MMU code can benefit from the comments that are currently exclusive to the shadow MMU. No functional change intended. Link: https://lore.kernel.org/r/20241011021051.1557902-16-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-10-30  KVM: x86/mmu: Stop processing TDP MMU roots for test_age if young SPTE found  (Sean Christopherson)
Return immediately if a young SPTE is found when testing, but not updating, SPTEs. The return value is a boolean, i.e. whether there is one young SPTE or fifty is irrelevant (ignoring the fact that it's impossible for there to be fifty SPTEs, as KVM has a hard limit on the number of valid TDP MMU roots). Link: https://lore.kernel.org/r/20241011021051.1557902-15-seanjc@google.com [sean: use guard(rcu)(), as suggested by Paolo] Signed-off-by: Sean Christopherson <seanjc@google.com>
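A hedged sketch of the early-return shape, using the guard(rcu)() cleanup helper mentioned above (iterator macro names assumed, not copied from the patch):

    for_each_valid_tdp_mmu_root(kvm, root, range->slot->as_id) {
            guard(rcu)();
            tdp_root_for_each_leaf_pte(iter, root, range->start, range->end)
                    if (is_accessed_spte(iter.old_spte))
                            return true;    /* one young SPTE answers the query */
    }
    return false;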
2024-10-30  KVM: x86/mmu: Process only valid TDP MMU roots when aging a gfn range  (Sean Christopherson)
Skip invalid TDP MMU roots when aging a gfn range. There is zero reason to process invalid roots, as they by definition hold stale information. E.g. if a root is invalid because it's from a previous memslot generation, in the unlikely event the root has a SPTE for the gfn, then odds are good that the gfn=>hva mapping is different, i.e. doesn't map to the hva that is being aged by the primary MMU. Link: https://lore.kernel.org/r/20241011021051.1557902-14-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-10-30  KVM: x86/mmu: Use Accessed bit even when _hardware_ A/D bits are disabled  (Sean Christopherson)
Use the Accessed bit in SPTEs even when A/D bits are disabled in hardware, i.e. propagate accessed information to SPTE.Accessed even when KVM is doing manual tracking by making SPTEs not-present. In addition to eliminating a small amount of code in is_accessed_spte(), this also paves the way for preserving Accessed information when a SPTE is zapped in response to an mmu_notifier PROTECTION event, e.g. if a SPTE is zapped because NUMA balancing kicks in. Note, EPT is the only flavor of paging in which A/D bits are conditionally enabled, and the Accessed (and Dirty) bit is software-available when A/D bits are disabled. Note #2, there are currently no concrete plans to preserve Accessed information. Explorations on that front were the initial catalyst, but the cleanup is the motivation for the actual commit. Link: https://lore.kernel.org/r/20241011021051.1557902-13-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-10-30  KVM: x86/mmu: Set shadow_dirty_mask for EPT even if A/D bits disabled  (Sean Christopherson)
Set shadow_dirty_mask to the architectural EPT Dirty bit value even if A/D bits are disabled at the module level, i.e. even if KVM will never enable A/D bits in hardware. Doing so provides consistent behavior for Accessed and Dirty bits, i.e. doesn't leave KVM in a state where it sets shadow_accessed_mask but not shadow_dirty_mask. Functionally, this should be one big nop, as consumption of shadow_dirty_mask is always guarded by a check that hardware A/D bits are enabled. Link: https://lore.kernel.org/r/20241011021051.1557902-12-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-10-30  KVM: x86/mmu: Set shadow_accessed_mask for EPT even if A/D bits disabled  (Sean Christopherson)
Now that KVM doesn't use shadow_accessed_mask to detect if hardware A/D bits are enabled, set shadow_accessed_mask for EPT even when A/D bits are disabled in hardware. This will allow using shadow_accessed_mask for software purposes, e.g. to preserve accessed status in a non-present SPTE across NUMA balancing, if something like that is ever desirable. Link: https://lore.kernel.org/r/20241011021051.1557902-11-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-10-30  KVM: x86/mmu: Add a dedicated flag to track if A/D bits are globally enabled  (Sean Christopherson)
Add a dedicated flag to track if KVM has enabled A/D bits at the module level, instead of inferring the state based on whether or not the MMU's shadow_accessed_mask is non-zero. This will allow defining and using shadow_accessed_mask even when A/D bits aren't used by hardware. Link: https://lore.kernel.org/r/20241011021051.1557902-10-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-10-30  KVM: x86/mmu: WARN and flush if resolving a TDP MMU fault clears MMU-writable  (Sean Christopherson)
Do a remote TLB flush if installing a leaf SPTE overwrites an existing leaf SPTE (with the same target pfn, which is enforced by a BUG() in handle_changed_spte()) and clears the MMU-Writable bit. Since the TDP MMU passes ACC_ALL to make_spte(), i.e. always requests a Writable SPTE, the only scenario in which make_spte() should create a !MMU-Writable SPTE is if the gfn is write-tracked or if KVM is prefetching a SPTE. When write-protecting for write-tracking, KVM must hold mmu_lock for write, i.e. can't race with a vCPU faulting in the SPTE. And when prefetching a SPTE, the TDP MMU takes care to avoid clobbering a shadow-present SPTE, i.e. it should be impossible to replace a MMU-writable SPTE with a !MMU-writable SPTE when handling a TDP MMU fault. Cc: David Matlack <dmatlack@google.com> Cc: Yan Zhao <yan.y.zhao@intel.com> Link: https://lore.kernel.org/r/20241011021051.1557902-9-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-10-30  KVM: x86/mmu: Fold mmu_spte_update_no_track() into mmu_spte_update()  (Sean Christopherson)
Fold the guts of mmu_spte_update_no_track() into mmu_spte_update() now that the latter doesn't flush when clearing A/D bits, i.e. now that there is no need to explicitly avoid TLB flushes when aging SPTEs. Opportunistically WARN if mmu_spte_update() requests a TLB flush when aging SPTEs, as aging should never modify a SPTE in such a way that KVM thinks a TLB flush is needed. Link: https://lore.kernel.org/r/20241011021051.1557902-8-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-10-30  KVM: x86/mmu: Drop ignored return value from kvm_tdp_mmu_clear_dirty_slot()  (Sean Christopherson)
Drop the return value from kvm_tdp_mmu_clear_dirty_slot() as its sole caller ignores the result (KVM flushes after clearing dirty logs based on the logs themselves, not based on SPTEs). Cc: David Matlack <dmatlack@google.com> Link: https://lore.kernel.org/r/20241011021051.1557902-7-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-10-30  KVM: x86/mmu: Don't flush TLBs when clearing Dirty bit in shadow MMU  (Sean Christopherson)
Don't force a TLB flush when an SPTE update in the shadow MMU happens to clear the Dirty bit, as KVM unconditionally flushes TLBs when enabling dirty logging, and when clearing dirty logs, KVM flushes based on its software structures, not the SPTEs. I.e. the flows that care about accurate Dirty bit information already ensure there are no stale TLB entries. Opportunistically drop is_dirty_spte() as mmu_spte_update() was the sole caller. Link: https://lore.kernel.org/r/20241011021051.1557902-6-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-10-30  KVM: x86/mmu: Don't force flush if SPTE update clears Accessed bit  (Sean Christopherson)
Don't force a TLB flush if mmu_spte_update() clears the Accessed bit, as access tracking tolerates false negatives, as evidenced by the mmu_notifier hooks that explicitly test and age SPTEs without doing a TLB flush. In practice, this is very nearly a nop. spte_write_protect() and spte_clear_dirty() never clear the Accessed bit. make_spte() always sets the Accessed bit for !prefetch scenarios. FNAME(sync_spte) only sets SPTE if the protection bits are changing, i.e. if a flush will be needed regardless of the Accessed bits. And FNAME(pte_prefetch) sets SPTE if and only if the old SPTE is !PRESENT. That leaves kvm_arch_async_page_ready() as the one path that will generate a !ACCESSED SPTE *and* overwrite a PRESENT SPTE. And that's very arguably a bug, as clobbering a valid SPTE in that case is nonsensical. Tested-by: Alex Bennée <alex.bennee@linaro.org> Link: https://lore.kernel.org/r/20241011021051.1557902-5-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-10-30  KVM: x86/mmu: Fold all of make_spte()'s writable handling into one if-else  (Sean Christopherson)
Now that make_spte() no longer uses a funky goto to bail out for a special case of its unsync handling, combine all of the unsync vs. writable logic into a single if-else statement. No functional change intended. Link: https://lore.kernel.org/r/20241011021051.1557902-4-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-10-30  KVM: x86/mmu: Always set SPTE's dirty bit if it's created as writable  (Sean Christopherson)
When creating a SPTE, always set the Dirty bit if the Writable bit is set, i.e. if KVM is creating a writable mapping. If two (or more) vCPUs are racing to install a writable SPTE on a !PRESENT fault, only the "winning" vCPU will create a SPTE with W=1 and D=1, all "losers" will generate a SPTE with W=1 && D=0. As a result, tdp_mmu_map_handle_target_level() will fail to detect that the losing faults are effectively spurious, and will overwrite the D=1 SPTE with a D=0 SPTE. For normal VMs, overwriting a present SPTE is a small performance blip; KVM blasts a remote TLB flush, but otherwise life goes on. For upcoming TDX VMs, overwriting a present SPTE is much more costly, and can even lead to the VM being terminated if KVM isn't careful, e.g. if KVM attempts TDH.MEM.PAGE.AUG because the TDX code doesn't detect that the new SPTE is actually the same as the old SPTE (which would be a bug in its own right). Suggested-by: Sagi Shahar <sagis@google.com> Cc: Yan Zhao <yan.y.zhao@intel.com> Link: https://lore.kernel.org/r/20241011021051.1557902-3-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-10-30  KVM: x86/mmu: Flush remote TLBs iff MMU-writable flag is cleared from RO SPTE  (Sean Christopherson)
Don't force a remote TLB flush if KVM happens to effectively "refresh" a read-only SPTE that is still MMU-Writable, as KVM allows MMU-Writable SPTEs to have Writable TLB entries, even if the SPTE is !Writable. Remote TLBs need to be flushed only when creating a read-only SPTE for write-tracking, i.e. when installing a !MMU-Writable SPTE. In practice, especially now that KVM doesn't overwrite existing SPTEs when prefetching, KVM will rarely "refresh" a read-only, MMU-Writable SPTE, i.e. this is unlikely to eliminate many, if any, TLB flushes. But, more precisely flushing makes it easier to understand exactly when KVM does and doesn't need to flush. Note, x86 architecturally requires relevant TLB entries to be invalidated on a page fault, i.e. there is no risk of putting a vCPU into an infinite loop of read-only page faults. Cc: Yan Zhao <yan.y.zhao@intel.com> Link: https://lore.kernel.org/r/20241011021051.1557902-2-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-10-30  sched/ext: Fix scx vs sched_delayed  (Peter Zijlstra)
Commit 98442f0ccd82 ("sched: Fix delayed_dequeue vs switched_from_fair()") forgot about scx :/ Fixes: 98442f0ccd82 ("sched: Fix delayed_dequeue vs switched_from_fair()") Reported-by: Tejun Heo <tj@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Tejun Heo <tj@kernel.org> Link: https://lkml.kernel.org/r/20241030104934.GK14555@noisy.programming.kicks-ass.net
2024-10-30  KVM: Protect vCPU's "last run PID" with rwlock, not RCU  (Sean Christopherson)
To avoid jitter on KVM_RUN due to synchronize_rcu(), use a rwlock instead of RCU to protect vcpu->pid, a.k.a. the pid of the task last used to run a vCPU. When userspace is doing M:N scheduling of tasks to vCPUs, e.g. to run SEV migration helper vCPUs during post-copy, the synchronize_rcu() needed to change the PID associated with the vCPU can stall for hundreds of milliseconds, which is problematic for latency sensitive post-copy operations. In the directed yield path, do not acquire the lock if it's contended, i.e. if the associated PID is changing, as that means the vCPU's task is already running. Reported-by: Steve Rutherford <srutherford@google.com> Reviewed-by: Steve Rutherford <srutherford@google.com> Acked-by: Oliver Upton <oliver.upton@linux.dev> Link: https://lore.kernel.org/r/20240802200136.329973-3-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
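A hedged sketch of the two access patterns described (the field and lock names are assumptions):

    /* reader that can afford to wait for a PID change to finish */
    read_lock(&vcpu->pid_lock);
    pid = vcpu->pid;
    read_unlock(&vcpu->pid_lock);

    /* directed yield: a contended lock means the PID is being changed,
     * i.e. the vCPU's task is already running, so just skip it */
    if (!read_trylock(&vcpu->pid_lock))
            return;

Unlike synchronize_rcu(), releasing a write lock after updating the PID does not have to wait out a grace period, which is where the hundreds of milliseconds of jitter came from.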
2024-10-30  x86/uaccess: Avoid barrier_nospec() in 64-bit copy_from_user()  (Linus Torvalds)
The barrier_nospec() in 64-bit copy_from_user() is slow. Instead use pointer masking to force the user pointer to all 1's for an invalid address. The kernel test robot reports a 2.6% improvement in the per_thread_ops benchmark [1]. This is a variation on a patch originally by Josh Poimboeuf [2]. Link: https://lore.kernel.org/202410281344.d02c72a2-oliver.sang@intel.com [1] Link: https://lore.kernel.org/5b887fe4c580214900e21f6c61095adf9a142735.1730166635.git.jpoimboe@kernel.org [2] Tested-and-reviewed-by: Josh Poimboeuf <jpoimboe@kernel.org> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
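The idea is to replace the speculation barrier with branchless pointer masking; a hedged plain-C sketch of the concept (the kernel does this in inline asm with cmp/sbb so the compiler cannot reintroduce a branch):

    static inline unsigned long mask_user_ptr(unsigned long ptr,
                                              unsigned long user_ptr_max)
    {
            /* all-ones when ptr is out of range, zero otherwise */
            unsigned long mask = 0UL - (ptr > user_ptr_max);

            /* invalid pointers become ~0UL and fault harmlessly, with
             * no speculation window that depends on the comparison */
            return ptr | mask;
    }

Because the masked pointer is computed with pure data dependencies, the CPU never speculatively dereferences a user-controlled kernel address, so the expensive barrier_nospec() can be dropped.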
2024-10-30  KVM: Return '0' directly when there's no task to yield to  (Sean Christopherson)
Do "return 0" instead of initializing and returning a local variable in kvm_vcpu_yield_to(), e.g. so that it's more obvious what the function returns if there is no task. No functional change intended. Acked-by: Oliver Upton <oliver.upton@linux.dev> Link: https://lore.kernel.org/r/20240802200136.329973-2-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-10-30  KVM: Rework core loop of kvm_vcpu_on_spin() to use a single for-loop  (Sean Christopherson)
Rework kvm_vcpu_on_spin() to use a single for-loop instead of making "two" passes over all vCPUs. Given N=kvm->last_boosted_vcpu, the logic is to iterate from vCPU[N+1]..vcpu[N-1], i.e. using two loops is just a kludgy way of handling the wrap from the last vCPU to vCPU0 when a boostable vCPU isn't found in vcpu[N+1]..vcpu[MAX]. Open code the xa_load() instead of using kvm_get_vcpu() to avoid reading online_vcpus in every loop, as well as the accompanying smp_rmb(), i.e. make it a custom kvm_for_each_vcpu(), for all intents and purposes. Opportunistically clean up the comment explaining the logic. Link: https://lore.kernel.org/r/20240802202121.341348-1-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
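A hedged sketch of the single-loop wrap-around iteration (the boost check is elided; vcpu_is_boostable() is a hypothetical placeholder for it):

    int nr_vcpus = atomic_read(&kvm->online_vcpus);
    int start = READ_ONCE(kvm->last_boosted_vcpu);
    int i;

    for (i = 0; i < nr_vcpus; i++) {
            /* walk from vCPU[N+1], wrapping back to vCPU[0] via modulo */
            int idx = (start + i + 1) % nr_vcpus;
            struct kvm_vcpu *vcpu = xa_load(&kvm->vcpu_array, idx);

            if (!vcpu || !vcpu_is_boostable(vcpu))  /* hypothetical predicate */
                    continue;
            /* ... yield to vcpu, update last_boosted_vcpu, break ... */
    }

The modulo arithmetic folds the old "second pass" (the wrap from vcpu[MAX] back to vcpu[0]) into the same loop body.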
2024-10-30  Merge tag 'perf-tools-fixes-for-v6.12-2-2024-10-30' of git://git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools  (Linus Torvalds)
Pull perf tools fixes from Arnaldo Carvalho de Melo:

- Update more header copies with the kernel sources, including const.h, msr-index.h, arm64's cputype.h, kvm's, bits.h and unaligned.h

- The return from 'write' isn't a pid, fix cut'n'paste error in 'perf trace'

- Fix up the python binding build on architectures without HAVE_KVM_STAT_SUPPORT

- Add some more bounds checks to augmented_raw_syscalls.bpf.c (used to collect syscall pointer arguments in 'perf trace') to make the resulting bytecode pass the kernel BPF verifier, allowing us to go back to accepting clang 12.0.1 as the minimum version required for compiling BPF sources

- Add __NR_capget for x86 to fix a regression on running perf + Intel PT (hw tracing) as non-root, setting up the capabilities as described in https://www.kernel.org/doc/html/latest/admin-guide/perf-security.html

- Fix missing syscalltbl in non-explicitly listed architectures, noticed on ARM 32-bit, which still needs a .tbl generator for the syscall id<->name tables; this should be added for v6.13

- Handle 'perf test' failure when handling broken DWARF for ASM files

* tag 'perf-tools-fixes-for-v6.12-2-2024-10-30' of git://git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools:
  perf cap: Add __NR_capget to arch/x86 unistd
  tools headers: Update the linux/unaligned.h copy with the kernel sources
  tools headers arm64: Sync arm64's cputype.h with the kernel sources
  tools headers: Synchronize {uapi/}linux/bits.h with the kernel sources
  tools arch x86: Sync the msr-index.h copy with the kernel sources
  perf python: Fix up the build on architectures without HAVE_KVM_STAT_SUPPORT
  perf test: Handle perftool-testsuite_probe failure due to broken DWARF
  tools headers UAPI: Sync kvm headers with the kernel sources
  perf trace: Fix non-listed archs in the syscalltbl routines
  perf build: Change the clang check back to 12.0.1
  perf trace augmented_raw_syscalls: Add more checks to pass the verifier
  perf trace augmented_raw_syscalls: Add extra array index bounds checking to satisfy some BPF verifiers
  perf trace: The return from 'write' isn't a pid
  tools headers UAPI: Sync linux/const.h with the kernel headers
2024-10-30  KVM: selftests: Use ARRAY_SIZE for array length  (Jiapeng Chong)
Using the ARRAY_SIZE macro to calculate the array size minimizes redundant code and improves code reusability. ./tools/testing/selftests/kvm/x86_64/debug_regs.c:169:32-33: WARNING: Use ARRAY_SIZE. Reported-by: Abaci Robot <abaci@linux.alibaba.com> Closes: https://bugzilla.openanolis.cn/show_bug.cgi?id=10847 Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com> Link: https://lore.kernel.org/r/20240913054315.130832-1-jiapeng.chong@linux.alibaba.com Signed-off-by: Sean Christopherson <seanjc@google.com>
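For reference, ARRAY_SIZE reduces to the classic sizeof ratio and keeps loop bounds in sync with the array definition; a minimal illustration (consume() is a placeholder):

    /* simplified; the kernel version adds a same-type check */
    #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))

    int vals[] = { 1, 2, 4, 8 };
    size_t i;

    for (i = 0; i < ARRAY_SIZE(vals); i++)  /* no hand-counted bound */
            consume(vals[i]);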
2024-10-30  KVM: selftests: Remove unused macro in the hardware disable test  (Ba Jing)
The macro GUEST_CODE_PIO_PORT is never referenced in the code; just remove it. Signed-off-by: Ba Jing <bajing@cmss.chinamobile.com> Link: https://lore.kernel.org/r/20240903043135.11087-1-bajing@cmss.chinamobile.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-10-30  rpcrdma: Always release the rpcrdma_device's xa_array  (Chuck Lever)
Dai pointed out that the xa_init_flags() in rpcrdma_add_one() needs to have a matching xa_destroy() in rpcrdma_remove_one() to release underlying memory that the xarray might have accrued during operation. Reported-by: Dai Ngo <dai.ngo@oracle.com> Fixes: 7e86845a0346 ("rpcrdma: Implement generic device removal") Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
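The pairing described is the standard xarray lifecycle; a hedged sketch (the struct field name rd_xa is an assumption):

    /* rpcrdma_add_one() */
    xa_init_flags(&rd->rd_xa, XA_FLAGS_ALLOC);

    /* rpcrdma_remove_one(): free any internal nodes the xarray
     * allocated while entries were stored and erased */
    xa_destroy(&rd->rd_xa);

Erasing individual entries does not release the xarray's internal node memory, which is why a final xa_destroy() is required even for an "empty" xarray.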
2024-10-30  KVM: VMX: Remove the unused variable "gpa" in __invept()  (Yan Zhao)
Remove the unused variable "gpa" in __invept(). The INVEPT instruction only supports two types: VMX_EPT_EXTENT_CONTEXT (1) and VMX_EPT_EXTENT_GLOBAL (2). Neither of these types requires a third variable "gpa". The "gpa" variable for __invept() is always set to 0 and was originally introduced for the old non-existent type VMX_EPT_EXTENT_INDIVIDUAL_ADDR (0). This type was removed by commit 2b3c5cbc0d81 ("kvm: don't use bit24 for detecting address-specific invalidation capability") and commit 63f3ac48133a ("KVM: VMX: clean up declaration of VPID/EPT invalidation types"). Since this variable is not useful for error handling either, remove it to avoid confusion. No functional changes expected. Cc: Yuan Yao <yuan.yao@intel.com> Signed-off-by: Yan Zhao <yan.y.zhao@intel.com> Link: https://lore.kernel.org/r/20241014045931.1061-1-yan.y.zhao@intel.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-10-30  Merge branch 'fixes-for-bits-iterator'  (Alexei Starovoitov)
Hou Tao says:
====================
The patch set fixes several issues in the bits iterator. Patch #1 fixes the kmemleak problem of the bits iterator. Patches #2~#3 fix the overflow problem of nr_bits. Patch #4 fixes the potential stack corruption when the bits iterator is used on a 32-bit host. Patch #5 adds more test cases for the bits iterator. Please see the individual patches for more details. And comments are always welcome.

---
v4:
* patch #1: add ack from Yafang
* patch #3: revert code-churn like changes: (1) compute nr_bytes and nr_bits before the check of nr_words. (2) use nr_bits == 64 to check for single u64, preventing build warning on 32-bit hosts.
* patch #4: use "BITS_PER_LONG == 32" instead of "!defined(CONFIG_64BIT)"

v3: https://lore.kernel.org/bpf/20241025013233.804027-1-houtao@huaweicloud.com/T/#t
* split the bits-iterator related patches from the "Misc fixes for bpf" patch set
* patch #1: use "!nr_bits || bits >= nr_bits" to stop the iteration
* patch #2: add a new helper for the overflow problem
* patch #3: decrease the limitation from 512 to 511 and check whether nr_bytes is too large for the bpf memory allocator explicitly
* patch #5: add two more test cases for the bits iterator

v2: http://lore.kernel.org/bpf/d49fa2f4-f743-c763-7579-c3cab4dd88cb@huaweicloud.com
====================
Link: https://lore.kernel.org/r/20241030100516.3633640-1-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-10-30  selftests/bpf: Add three test cases for bits_iter  (Hou Tao)
Add more test cases for the bits iterator:

(1) huge word test: verify the multiplication overflow of nr_bits in bits_iter. Without the overflow check, when nr_words is 67108865, nr_bits becomes 64, causing bpf_probe_read_kernel_common() to corrupt the stack.
(2) max word test: verify correct handling of the maximum nr_words value (511).
(3) bad word test: verify early termination of the bits iteration when the bits iterator initialization fails.

Also rename bits_nomem to bits_too_big to better reflect its purpose. Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20241030100516.3633640-6-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-10-30  bpf: Use __u64 to save the bits in bits iterator  (Hou Tao)
On 32-bit hosts (e.g., arm32), when a bpf program passes a u64 to bpf_iter_bits_new(), bpf_iter_bits_new() will use bits_copy to store the content of the u64. However, bits_copy is only 4 bytes, leading to stack corruption. The straightforward solution would be to replace u64 with unsigned long in bpf_iter_bits_new(). However, this introduces confusion and problems for 32-bit hosts because the size of ulong in a bpf program is 8 bytes, but it is treated as 4 bytes after being passed to bpf_iter_bits_new(). Fix it by changing the type of both bits and bit_count from unsigned long to u64. However, the change is not enough. The main reason is that bpf_iter_bits_next() uses find_next_bit() to find the next bit, and the pointer passed to find_next_bit() is an unsigned long pointer instead of a u64 pointer. For a 32-bit little-endian host this is fine, but it is not the case for a 32-bit big-endian host. Under a 32-bit big-endian host, the first iterated unsigned long will be bits 32-63 of the u64 instead of the expected bits 0-31. Therefore, in addition to changing the type, swap the two unsigned longs within the u64 for 32-bit big-endian hosts. Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20241030100516.3633640-5-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
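A hedged sketch of the endianness fix (the field name bits_copy follows the commit text; the surrounding code is assumed):

    u64 word = kit->bits_copy;

    #if BITS_PER_LONG == 32 && defined(__BIG_ENDIAN)
    /* find_next_bit() walks unsigned longs; on 32-bit big-endian the
     * first ulong holds bits 32-63 of the u64, so swap the halves */
    word = (word >> 32) | (word << 32);
    #endif

After the swap, find_next_bit() sees bits 0-31 in the first unsigned long on every host, so iteration order matches the little-endian and 64-bit cases.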
2024-10-30  bpf: Check the validity of nr_words in bpf_iter_bits_new()  (Hou Tao)
Check the validity of nr_words in bpf_iter_bits_new(). Without this check, when multiplication overflow occurs for nr_bits (e.g., when nr_words = 0x0400-0001, nr_bits becomes 64), stack corruption may occur due to bpf_probe_read_kernel_common(..., nr_bytes = 0x2000-0008). Fix it by limiting the maximum value of nr_words to 511. The value is derived from the current implementation of the BPF memory allocator. To ensure compatibility if the BPF memory allocator's size limitation changes in the future, use the helper bpf_mem_alloc_check_size() to check whether nr_bytes is too large. And return -E2BIG instead of -ENOMEM for an oversized nr_bytes. Fixes: 4665415975b0 ("bpf: Add bits iterator") Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20241030100516.3633640-4-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
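A hedged sketch of the bound-then-multiply ordering (the cap of 511 comes from the BPF memory allocator, as the commit notes; the macro name is an assumption):

    #define BITS_ITER_NR_WORDS_MAX 511      /* assumed name for the cap */

    if (!nr_words || nr_words > BITS_ITER_NR_WORDS_MAX)
            return -EINVAL;

    /* safe: 511 * 8 cannot overflow, unlike the unchecked product */
    nr_bytes = nr_words * sizeof(u64);
    nr_bits  = nr_bytes * 8;

Rejecting nr_words before the multiplication is what prevents the truncated nr_bits = 64 case described above.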
2024-10-30  bpf: Add bpf_mem_alloc_check_size() helper  (Hou Tao)
Introduce bpf_mem_alloc_check_size() to check whether the allocation size exceeds the limit of the kmalloc-equivalent allocator. The upper limit for percpu allocation is LLIST_NODE_SZ bytes larger than for non-percpu allocation, so a percpu argument is added to the helper. The helper will be used in the following patch to check whether the size parameter passed to bpf_mem_alloc() is too big. Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20241030100516.3633640-3-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
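Based on the description above, the helper plausibly looks like this; a hedged sketch (the limit macro name is an assumption):

    int bpf_mem_alloc_check_size(bool percpu, size_t size)
    {
            /* non-percpu allocations carry LLIST_NODE_SZ of overhead,
             * so percpu requests may be LLIST_NODE_SZ bytes larger */
            size_t adj_size = percpu ? size : size + LLIST_NODE_SZ;

            if (adj_size > BPF_MEM_ALLOC_SIZE_MAX)  /* assumed macro name */
                    return -E2BIG;

            return 0;
    }

Centralizing the check means callers such as bpf_iter_bits_new() stay correct even if the allocator's size limit changes later.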
2024-10-30  bpf: Free dynamically allocated bits in bpf_iter_bits_destroy()  (Hou Tao)
bpf_iter_bits_destroy() uses "kit->nr_bits <= 64" to check whether the bits are dynamically allocated. However, the check is incorrect and may cause a memory leak, as shown below:

unreferenced object 0xffff88812628c8c0 (size 32):
  comm "swapper/0", pid 1, jiffies 4294727320
  hex dump (first 32 bytes):
    b0 c1 55 f5 81 88 ff ff f0 f0 f0 f0 f0 f0 f0 f0  ..U...........
    f0 f0 f0 f0 f0 f0 f0 f0 00 00 00 00 00 00 00 00  ..............
  backtrace (crc 781e32cc):
    [<00000000c452b4ab>] kmemleak_alloc+0x4b/0x80
    [<0000000004e09f80>] __kmalloc_node_noprof+0x480/0x5c0
    [<00000000597124d6>] __alloc.isra.0+0x89/0xb0
    [<000000004ebfffcd>] alloc_bulk+0x2af/0x720
    [<00000000d9c10145>] prefill_mem_cache+0x7f/0xb0
    [<00000000ff9738ff>] bpf_mem_alloc_init+0x3e2/0x610
    [<000000008b616eac>] bpf_global_ma_init+0x19/0x30
    [<00000000fc473efc>] do_one_initcall+0xd3/0x3c0
    [<00000000ec81498c>] kernel_init_freeable+0x66a/0x940
    [<00000000b119f72f>] kernel_init+0x20/0x160
    [<00000000f11ac9a7>] ret_from_fork+0x3c/0x70
    [<0000000004671da4>] ret_from_fork_asm+0x1a/0x30

That is because nr_bits will be set to zero in bpf_iter_bits_next() after all bits have been iterated. Fix the issue by setting kit->bit to kit->nr_bits instead of setting kit->nr_bits to zero when the iteration completes in bpf_iter_bits_next(). In addition, use "!nr_bits || bits >= nr_bits" to check whether the iteration is complete, and still use "nr_bits > 64" to indicate whether the bits are dynamically allocated. The "!nr_bits" check is necessary because bpf_iter_bits_new() may fail before setting kit->nr_bits, and this condition will stop the iteration early instead of accessing the zeroed or freed kit->bits. Considering the initial value of kit->bits is -1 and the type of kit->nr_bits is unsigned int, change the type of kit->nr_bits to int. The potential overflow problem will be handled in the following patch. Fixes: 4665415975b0 ("bpf: Add bits iterator") Acked-by: Yafang Shao <laoar.shao@gmail.com> Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20241030100516.3633640-2-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
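A hedged sketch of the corrected termination and teardown logic (field names follow the commit text; the surrounding structure is assumed):

    /* bpf_iter_bits_next(): stop without clobbering nr_bits */
    if (!kit->nr_bits || kit->bit >= kit->nr_bits)
            return NULL;            /* init failed, or iteration done */
    /* ... find next set bit ... */
    kit->bit = kit->nr_bits;        /* mark completion; nr_bits stays valid */

    /* bpf_iter_bits_destroy(): nr_bits still reflects the allocation */
    if (kit->nr_bits > 64)
            bpf_mem_free(&bpf_global_ma, kit->bits);

Keeping nr_bits intact across the whole iterator lifetime is what lets destroy() reliably distinguish the inline 64-bit case from a dynamically allocated bitmap.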