path: root/arch
Age         Commit message                                                        Author
2024-12-20  arm64: dts: st: add csi & dcmipp node in stm32mp25  (Alain Volmat)
Add nodes describing the csi and dcmipp controllers handling the camera pipeline on the stm32mp25x. Signed-off-by: Alain Volmat <alain.volmat@foss.st.com> Signed-off-by: Alexandre Torgue <alexandre.torgue@foss.st.com>
2024-12-20  ARM: dts: stm32: Swap USART3 and UART8 alias on STM32MP15xx DHCOM SoM  (Marek Vasut)
Swap USART3 and UART8 aliases on STM32MP15xx DHCOM SoM, make sure UART8 is listed first, USART3 second, because the UART8 is labeled as UART2 on the SoM pinout, while USART3 is labeled as UART3 on the SoM pinout. Fixes: 34e0c7847dcf ("ARM: dts: stm32: Add DH Electronics DHCOM STM32MP1 SoM and PDK2 board") Signed-off-by: Marek Vasut <marex@denx.de> Reviewed-by: Christoph Niedermaier <cniedermaier@dh-electronics.com> Signed-off-by: Alexandre Torgue <alexandre.torgue@foss.st.com>
2024-12-20  ARM: dts: stm32: add counter subnodes on stm32mp157 dk boards  (Fabrice Gasnier)
Enable the counter nodes without dedicated pins. With such a configuration, the counter interface can be used on the internal clock to generate events. Signed-off-by: Fabrice Gasnier <fabrice.gasnier@foss.st.com> Signed-off-by: Alexandre Torgue <alexandre.torgue@foss.st.com>
2024-12-20  ARM: dts: stm32: add counter subnodes on stm32mp157c-ev1  (Fabrice Gasnier)
Enable the counter nodes without dedicated pins. With such a configuration, the counter interface can be used on the internal clock to generate events. Signed-off-by: Fabrice Gasnier <fabrice.gasnier@foss.st.com> Signed-off-by: Alexandre Torgue <alexandre.torgue@foss.st.com>
2024-12-20  ARM: dts: stm32: add counter subnodes on stm32mp135f-dk  (Fabrice Gasnier)
Enable the counter nodes without dedicated pins. With such a configuration, the counter interface can be used on the internal clock to generate events. Signed-off-by: Fabrice Gasnier <fabrice.gasnier@foss.st.com> Signed-off-by: Alexandre Torgue <alexandre.torgue@foss.st.com>
2024-12-20  ARM: dts: stm32: populate all timer counter nodes on stm32mp15  (Fabrice Gasnier)
The counter driver originally had support limited to the quadrature interface and simple counter. It has been improved[1], so add the remaining stm32 timer counter nodes. [1] https://lore.kernel.org/linux-arm-kernel/20240307133306.383045-1-fabrice.gasnier@foss.st.com/ Signed-off-by: Fabrice Gasnier <fabrice.gasnier@foss.st.com> Signed-off-by: Alexandre Torgue <alexandre.torgue@foss.st.com>
2024-12-20  ARM: dts: stm32: populate all timer counter nodes on stm32mp13  (Fabrice Gasnier)
The counter driver originally had support limited to the quadrature interface and simple counter. It has been improved[1], so add the remaining stm32 timer counter nodes. [1] https://lore.kernel.org/linux-arm-kernel/20240307133306.383045-1-fabrice.gasnier@foss.st.com/ Signed-off-by: Fabrice Gasnier <fabrice.gasnier@foss.st.com> Signed-off-by: Alexandre Torgue <alexandre.torgue@foss.st.com>
2024-12-19  KVM: x86/mmu: Treat TDP MMU faults as spurious if access is already allowed  (Sean Christopherson)
Treat slow-path TDP MMU faults as spurious if the access is allowed given the existing SPTE to fix a benign warning (other than the WARN itself) due to replacing a writable SPTE with a read-only SPTE, and to avoid the unnecessary LOCK CMPXCHG and subsequent TLB flush. If a read fault races with a write fault, fast GUP fails for any reason when trying to "promote" the read fault to a writable mapping, and KVM resolves the write fault first, then KVM will end up trying to install a read-only SPTE (for a !map_writable fault) overtop a writable SPTE. Note, it's not entirely clear why fast GUP fails, or if that's even how KVM ends up with a !map_writable fault with a writable SPTE. If something else is going awry, e.g. due to a bug in mmu_notifiers, then treating read faults as spurious in this scenario could effectively mask the underlying problem. However, retrying the faulting access instead of overwriting an existing SPTE is functionally correct and desirable irrespective of the WARN, and fast GUP _can_ legitimately fail with a writable VMA, e.g. if the Accessed bit in the primary MMU's PTE is toggled and causes a PTE value mismatch. The WARN was also recently added, specifically to track down scenarios where KVM unnecessarily overwrites SPTEs, i.e. treating the fault as spurious doesn't regress KVM's bug-finding capabilities in any way. In short, letting the WARN linger because there's a tiny chance it's due to a bug elsewhere would be excessively paranoid. Fixes: 1a175082b190 ("KVM: x86/mmu: WARN and flush if resolving a TDP MMU fault clears MMU-writable") Reported-by: Lei Yang <leiyang@redhat.com> Closes: https://bugzilla.kernel.org/show_bug.cgi?id=219588 Tested-by: Lei Yang <leiyang@redhat.com> Link: https://lore.kernel.org/r/20241218213611.3181643-1-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
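An illustrative, self-contained C sketch of the "spurious if already allowed" idea from the entry above (invented names and bit layout, not the actual KVM code): if the existing PTE already grants every permission the fault needs, retry the access and keep the PTE instead of overwriting it.

    /* Illustrative model only, not KVM's TDP MMU code. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PTE_PRESENT  (1ull << 0)
    #define PTE_WRITABLE (1ull << 1)
    #define PTE_EXEC     (1ull << 2)

    /* Does the existing PTE already allow everything the fault requires? */
    static bool access_already_allowed(uint64_t pte, bool write, bool exec)
    {
            if (!(pte & PTE_PRESENT))
                    return false;
            if (write && !(pte & PTE_WRITABLE))
                    return false;
            if (exec && !(pte & PTE_EXEC))
                    return false;
            return true;
    }

    int main(void)
    {
            uint64_t spte = PTE_PRESENT | PTE_WRITABLE;  /* existing, writable */
            bool fault_is_write = false;                 /* a racing read fault */

            if (access_already_allowed(spte, fault_is_write, false))
                    printf("spurious: retry the access, keep the writable PTE\n");
            else
                    printf("install a new PTE\n");
            return 0;
    }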
2024-12-19  KVM: SVM: Allow guest writes to set MSR_AMD64_DE_CFG bits  (Sean Christopherson)
Drop KVM's arbitrary behavior of making DE_CFG.LFENCE_SERIALIZE read-only for the guest, as rejecting writes can lead to guest crashes, e.g. Windows in particular doesn't gracefully handle unexpected #GPs on the WRMSR, and nothing in the AMD manuals suggests that LFENCE_SERIALIZE is read-only _if it exists_. KVM only allows LFENCE_SERIALIZE to be set, by the guest or host, if the underlying CPU has X86_FEATURE_LFENCE_RDTSC, i.e. if LFENCE is guaranteed to be serializing. So if the guest sets LFENCE_SERIALIZE, KVM will provide the desired/correct behavior without any additional action (the guest's value is never stuffed into hardware). And having LFENCE be serializing even when it's not _required_ to be is a-ok from a functional perspective. Fixes: 74a0e79df68a ("KVM: SVM: Disallow guest from changing userspace's MSR_AMD64_DE_CFG value") Fixes: d1d93fa90f1a ("KVM: SVM: Add MSR-based feature support for serializing LFENCE") Reported-by: Simon Pilkington <simonp.git@mailbox.org> Closes: https://lore.kernel.org/all/52914da7-a97b-45ad-86a0-affdf8266c61@mailbox.org Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: stable@vger.kernel.org Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Link: https://lore.kernel.org/r/20241211172952.1477605-1-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
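A minimal C sketch of the MSR-write policy described above, with invented names and a simplified bit layout (not KVM's svm_set_msr()): LFENCE_SERIALIZE is settable when the CPU guarantees a serializing LFENCE, and a write is no longer rejected merely because it changes the bit. The guest-visible value is only tracked, never pushed into hardware, matching the reasoning in the entry.

    /* Illustrative model only, not the KVM implementation. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define DE_CFG_LFENCE_SERIALIZE (1ull << 1)

    struct vcpu_model {
            uint64_t de_cfg;              /* guest-visible value, never written to hardware here */
            bool cpu_has_lfence_rdtsc;    /* LFENCE is guaranteed to be serializing */
    };

    /* Return 0 on success, -1 to inject #GP into the guest. */
    static int wrmsr_de_cfg(struct vcpu_model *v, uint64_t data)
    {
            uint64_t settable = 0;

            if (v->cpu_has_lfence_rdtsc)
                    settable |= DE_CFG_LFENCE_SERIALIZE;

            if (data & ~settable)         /* reserved/unsupported bits still #GP */
                    return -1;

            v->de_cfg = data;             /* setting or clearing LFENCE_SERIALIZE is allowed */
            return 0;
    }

    int main(void)
    {
            struct vcpu_model v = { .de_cfg = DE_CFG_LFENCE_SERIALIZE,
                                    .cpu_has_lfence_rdtsc = true };

            printf("write 0x0 -> %s\n", wrmsr_de_cfg(&v, 0) ? "#GP" : "ok");
            printf("write 0x2 -> %s\n", wrmsr_de_cfg(&v, DE_CFG_LFENCE_SERIALIZE) ? "#GP" : "ok");
            return 0;
    }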
2024-12-19  KVM: x86: Play nice with protected guests in complete_hypercall_exit()  (Sean Christopherson)
Use is_64_bit_hypercall() instead of is_64_bit_mode() to detect a 64-bit hypercall when completing said hypercall. For guests with protected state, e.g. SEV-ES and SEV-SNP, KVM must assume the hypercall was made in 64-bit mode as the vCPU state needed to detect 64-bit mode is unavailable. Hacking the sev_smoke_test selftest to generate a KVM_HC_MAP_GPA_RANGE hypercall via VMGEXIT trips the WARN:

  ------------[ cut here ]------------
  WARNING: CPU: 273 PID: 326626 at arch/x86/kvm/x86.h:180 complete_hypercall_exit+0x44/0xe0 [kvm]
  Modules linked in: kvm_amd kvm ... [last unloaded: kvm]
  CPU: 273 UID: 0 PID: 326626 Comm: sev_smoke_test Not tainted 6.12.0-smp--392e932fa0f3-feat #470
  Hardware name: Google Astoria/astoria, BIOS 0.20240617.0-0 06/17/2024
  RIP: 0010:complete_hypercall_exit+0x44/0xe0 [kvm]
  Call Trace:
   <TASK>
   kvm_arch_vcpu_ioctl_run+0x2400/0x2720 [kvm]
   kvm_vcpu_ioctl+0x54f/0x630 [kvm]
   __se_sys_ioctl+0x6b/0xc0
   do_syscall_64+0x83/0x160
   entry_SYSCALL_64_after_hwframe+0x76/0x7e
   </TASK>
  ---[ end trace 0000000000000000 ]---

Fixes: b5aead0064f3 ("KVM: x86: Assume a 64-bit hypercall for guests with protected state") Cc: stable@vger.kernel.org Cc: Tom Lendacky <thomas.lendacky@amd.com> Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com> Reviewed-by: Nikunj A Dadhania <nikunj@amd.com> Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Reviewed-by: Binbin Wu <binbin.wu@linux.intel.com> Reviewed-by: Kai Huang <kai.huang@intel.com> Link: https://lore.kernel.org/r/20241128004344.4072099-2-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
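A small C model of why the completion path should use an "is 64-bit hypercall" helper rather than inspecting guest mode directly (hypothetical, simplified signatures; not KVM's actual helpers): for protected-state guests the mode is simply assumed to be 64-bit, because the real CS/EFER state cannot be read.

    /* Illustrative model only, not KVM code. */
    #include <stdbool.h>
    #include <stdio.h>

    struct vcpu_model {
            bool guest_state_protected;   /* e.g. SEV-ES / SEV-SNP */
            bool long_mode_64;            /* only meaningful when state is visible */
    };

    static bool is_64_bit_mode(const struct vcpu_model *v)
    {
            /* Would require reading guest CS/EFER, unavailable for protected state. */
            return v->long_mode_64;
    }

    static bool is_64_bit_hypercall(const struct vcpu_model *v)
    {
            /* Protected guests are assumed to make 64-bit hypercalls. */
            return v->guest_state_protected || is_64_bit_mode(v);
    }

    int main(void)
    {
            struct vcpu_model sev_es = { .guest_state_protected = true };

            printf("64-bit hypercall: %s\n", is_64_bit_hypercall(&sev_es) ? "yes" : "no");
            return 0;
    }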
2024-12-19  KVM: SVM: Disable AVIC on SNP-enabled system without HvInUseWrAllowed feature  (Suravee Suthikulpanit)
On an SNP-enabled system, VMRUN marks the AVIC backing page as in-use while the guest is running, for both secure and non-secure guests. Any hypervisor write to the in-use vCPU's AVIC backing page (e.g. to inject an interrupt) will generate an unexpected #PF in the host. Currently, an attempt to run an AVIC guest results in the following error:

  BUG: unable to handle page fault for address: ff3a442e549cc270
  #PF: supervisor write access in kernel mode
  #PF: error_code(0x80000003) - RMP violation
  PGD b6ee01067 P4D b6ee02067 PUD 10096d063 PMD 11c540063 PTE 80000001149cc163
  SEV-SNP: PFN 0x1149cc unassigned, dumping non-zero entries in 2M PFN region: [0x114800 - 0x114a00]
  ...

Newer AMD systems are enhanced to allow the hypervisor to modify the backing page for non-secure guests on SNP-enabled systems. This enhancement is available when CPUID Fn8000_001F_EAX bit 30 (HvInUseWrAllowed) is set. This table describes the AVIC support matrix w.r.t. SNP enablement:

                 | Non-SNP system  | SNP system
  ---------------------------------------------------
  Non-SNP guest  | AVIC Activate   | AVIC Activate iff
                 |                 | HvInUseWrAllowed=1
  ---------------------------------------------------
  SNP guest      | N/A             | Secure AVIC

Therefore, check and disable AVIC in the kvm_amd driver when the feature is not available on an SNP-enabled system. See the AMD64 Architecture Programmer's Manual (APM) Volume 2 for details (https://www.amd.com/content/dam/amd/en/documents/processor-tech-docs/programmer-references/40332.pdf). Fixes: 216d106c7ff7 ("x86/sev: Add SEV-SNP host initialization support") Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com> Link: https://lore.kernel.org/r/20241104075845.7583-1-suravee.suthikulpanit@amd.com Signed-off-by: Sean Christopherson <seanjc@google.com>
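The decision above boils down to one CPUID bit. A portable C sketch of the check (invented names; the CPUID leaf value is passed in rather than executed, so the example is not tied to x86):

    /* Illustrative model only, not the kvm_amd code. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define HV_INUSE_WR_ALLOWED (1u << 30)   /* CPUID Fn8000_001F_EAX[30] */

    static bool avic_supported(bool snp_enabled, uint32_t fn8000_001f_eax)
    {
            /* Without SNP, legacy AVIC behaviour is unchanged. */
            if (!snp_enabled)
                    return true;

            /* On SNP hosts, hypervisor writes to an in-use AVIC backing page
             * fault unless the CPU advertises HvInUseWrAllowed. */
            return fn8000_001f_eax & HV_INUSE_WR_ALLOWED;
    }

    int main(void)
    {
            printf("SNP host, no HvInUseWrAllowed: AVIC %s\n",
                   avic_supported(true, 0) ? "enabled" : "disabled");
            printf("SNP host, HvInUseWrAllowed:    AVIC %s\n",
                   avic_supported(true, HV_INUSE_WR_ALLOWED) ? "enabled" : "disabled");
            return 0;
    }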
2024-12-19  arm64: dts: qcom: qcs8300: Add watchdog node  (Xin Liu)
Add the watchdog node for QCS8300 SoC. Signed-off-by: Xin Liu <quic_liuxin@quicinc.com>
2024-12-20  Merge tag 'drm-misc-next-2024-12-19' of https://gitlab.freedesktop.org/drm/misc/kernel into drm-next  (Dave Airlie)
drm-misc-next for 6.14:

UAPI Changes:

Cross-subsystem Changes:

Core Changes:
- connector: Add a mutex to protect ELD access, Add a helper to create a connector in two steps

Driver Changes:
- amdxdna: Add RyzenAI-npu6 Support, various improvements
- rcar-du: Add r8a779h0 Support
- rockchip: various improvements
- zynqmp: Add DP audio support
- bridges:
  - ti-sn65dsi83: Add ti,lvds-vod-swing optional properties
- panels:
  - new panels: Tianma TM070JDHG34-00, Multi-Inno Technology MI1010Z1T-1CP11

Signed-off-by: Dave Airlie <airlied@redhat.com> From: Maxime Ripard <mripard@redhat.com> Link: https://patchwork.freedesktop.org/patch/msgid/20241219-truthful-demonic-hound-598f63@houat
2024-12-20  Merge tag 'drm-misc-fixes-2024-12-19' of https://gitlab.freedesktop.org/drm/misc/kernel into drm-fixes  (Dave Airlie)
drm-misc-fixes for v6.13-rc4:
- udma-buf fixes related to sealing.
- dma-buf build warning fix when debugfs is not enabled.
- Assorted drm/panel fixes.
- Correct error return in drm_dp_tunnel_mgr_create.
- Fix even more divide by zero in drm_mode_vrefresh.
- Fix FBDEV dependencies in Kconfig.
- Documentation fix for drm_sched_fini.
- IVPU NULL pointer, memory leak and WARN fix.

Signed-off-by: Dave Airlie <airlied@redhat.com> From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/d0763051-87b7-483e-89e0-a9f993383450@linux.intel.com
2024-12-19  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski)
Cross-merge networking fixes after downstream PR (net-6.13-rc4). No conflicts.

Adjacent changes:
  drivers/net/ethernet/renesas/rswitch.h
    32fd46f5b69e ("net: renesas: rswitch: remove speed from gwca structure")
    922b4b955a03 ("net: renesas: rswitch: rework ts tags management")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-12-19Revert "arm64: dts: qcom: x1e80100: enable OTG on USB-C controllers"Johan Hovold
This reverts commit f042bc234c2e00764b8aa2c9e2f8177cdc63f664. A recent change enabling role switching for the x1e80100 USB-C controllers breaks UCSI and DisplayPort Alternate Mode when the controllers are in host mode: ucsi_glink.pmic_glink_ucsi pmic_glink.ucsi.0: PPM init failed, stop trying As enabling OTG mode currently breaks SuperSpeed hotplug and suspend, and with retimer (and orientation detection) support not even merged yet, let's revert at least until we have stable host mode in mainline. Fixes: f042bc234c2e ("arm64: dts: qcom: x1e80100: enable OTG on USB-C controllers") Reported-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Link: https://lore.kernel.org/all/hw2pdof4ajadjsjrb44f2q4cz4yh5qcqz5d3l7gjt2koycqs3k@xx5xvd26uyef Link: https://lore.kernel.org/lkml/Z1gbyXk-SktGjL6-@hovoldconsulting.com/ Cc: Jonathan Marek <jonathan@marek.ca> Signed-off-by: Johan Hovold <johan+linaro@kernel.org> Reviewed-by: Abel Vesa <abel.vesa@linaro.org> Link: https://lore.kernel.org/r/20241210111444.26240-4-johan+linaro@kernel.org Signed-off-by: Bjorn Andersson <andersson@kernel.org>
2024-12-19Revert "arm64: dts: qcom: x1e80100-crd: enable otg on usb ports"Johan Hovold
This reverts commit 2dd3250191bcfe93b0c9da46624af830310400a7. A recent change enabling OTG mode on the x1e81000 CRD breaks suspend. Specifically, the device hard resets during resume if suspended with all controllers in device mode (i.e. no USB device connected). The corresponding change on the T14s also led to SuperSpeed hotplugs not being detected. With retimer (and orientation detection) support not even merged yet, let's revert at least until we have stable host mode in mainline. Fixes: 2dd3250191bc ("arm64: dts: qcom: x1e80100-crd: enable otg on usb ports") Reported-by: Abel Vesa <abel.vesa@linaro.org> Cc: Jonathan Marek <jonathan@marek.ca> Signed-off-by: Johan Hovold <johan+linaro@kernel.org> Reviewed-by: Abel Vesa <abel.vesa@linaro.org> Link: https://lore.kernel.org/r/20241210111444.26240-3-johan+linaro@kernel.org Signed-off-by: Bjorn Andersson <andersson@kernel.org>
2024-12-19  arm64/sysreg: Get rid of CPACR_ELx SysregFields  (Marc Zyngier)
There is no such thing as CPACR_ELx in the architecture. What we have is CPACR_EL1, for which CPTR_EL12 is an accessor. Rename CPACR_ELx_* to CPACR_EL1_*, and fix the bit of code using these names. Reviewed-by: Mark Brown <broonie@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20241219173351.1123087-5-maz@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2024-12-19  arm64/sysreg: Convert *_EL12 accessors to Mapping  (Marc Zyngier)
Perform a bulk convert of the remaining EL12 accessors to use the Mapping qualifier, which makes things a bit clearer. Reviewed-by: Mark Brown <broonie@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20241219173351.1123087-4-maz@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2024-12-19  arm64/sysreg: Get rid of the TCR2_EL1x SysregFields  (Marc Zyngier)
TCR2_EL1x is a pretty bizarre construct, as it is shared between TCR2_EL1 and TCR2_EL12. But the latter is obviously only an accessor to the former. In order to make things more consistent, upgrade TCR2_EL1x to a full-blown sysreg definition for TCR2_EL1, and describe TCR2_EL12 as a mapping to TCR2_EL1. This results in a couple of minor changes to the actual code. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20241219173351.1123087-3-maz@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2024-12-19  arm64/sysreg: Allow a 'Mapping' descriptor for system registers  (Marc Zyngier)
*_EL02 and *_EL12 system registers are actually only accessors for EL0 and EL1 registers accessed from EL2 when HCR_EL2.E2H==1. They do not have fields of their own. To that effect, introduce a 'Mapping' entry, describing which system register an _EL12 register maps to. Implementation-wise, this is handled the same way as Fields, which is only a comment. Acked-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Mark Brown <broonie@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20241219173351.1123087-2-maz@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2024-12-19  arm64: Kconfig: force ARM64_PAN=y when enabling TTBR0 sw PAN  (Ard Biesheuvel)
There are a couple of instances of Kconfig constraints where PAN must be enabled too if TTBR0 sw PAN is enabled, primarily to avoid dealing with the modified TTBR0_EL1 sysreg format that is used when 52-bit physical addressing and/or CnP are enabled (support for either implies support for hardware PAN as well, which will supersede PAN emulation if both are available). Let's simplify this, and always enable ARM64_PAN when enabling TTBR0 sw PAN. This decouples the PAN configuration from the VA size selection, permitting us to simplify the latter in subsequent patches. (Note that PAN and TTBR0 sw PAN can still be disabled after this patch, but not independently.) To avoid a convoluted circular Kconfig dependency involving KCSAN, make ARM64_MTE select ARM64_PAN too, instead of depending on it. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20241212081841.2168124-13-ardb+git@google.com Signed-off-by: Will Deacon <will@kernel.org>
2024-12-19  arm64/kvm: Avoid invalid physical addresses to signal owner updates  (Ard Biesheuvel)
The pKVM stage2 mapping code relies on an invalid physical address to signal to the internal API that only the annotations of descriptors should be updated, and these are stored in the high bits of invalid descriptors covering memory that has been donated to protected guests, and is therefore unmapped from the host stage-2 page tables. Given that these invalid PAs are never stored into the descriptors, it is better to rely on an explicit flag, to clarify the API and to avoid confusion regarding whether or not the output address of a descriptor can ever be invalid to begin with (which is not the case with LPA2). That removes a dependency on the logic that reasons about the maximum PA range, which differs on LPA2 capable CPUs based on whether LPA2 is enabled or not, and will be further clarified in subsequent patches. Cc: Quentin Perret <qperret@google.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Quentin Perret <qperret@google.com> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20241212081841.2168124-12-ardb+git@google.com Signed-off-by: Will Deacon <will@kernel.org>
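A tiny C illustration of the API pattern being adopted here, i.e. replacing an "invalid physical address" sentinel with an explicit flag (hypothetical names, not the pKVM code): callers now state their intent directly instead of encoding it in a magic address.

    /* Illustrative model only, not the arm64/pKVM code. */
    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t phys_addr_t;

    /* Old style: passing INVALID_PA meant "only update the owner annotation". */
    #define INVALID_PA ((phys_addr_t)-1)

    enum map_flags {
            MAP_SET_OWNER_ONLY = 1 << 0,  /* update annotation, do not map an output address */
    };

    static void stage2_update(phys_addr_t pa, unsigned int flags)
    {
            if (flags & MAP_SET_OWNER_ONLY)
                    printf("annotate owner, leave the descriptor invalid\n");
            else
                    printf("map pa 0x%llx\n", (unsigned long long)pa);
    }

    int main(void)
    {
            stage2_update(0, MAP_SET_OWNER_ONLY);   /* was: stage2_update(INVALID_PA, 0) */
            stage2_update(0x80000000ULL, 0);
            return 0;
    }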
2024-12-19  arm64/kvm: Configure HYP TCR.PS/DS based on host stage1  (Ard Biesheuvel)
When the host stage1 is configured for LPA2, the value currently being programmed into TCR_EL2.T0SZ may be invalid unless LPA2 is configured at HYP as well. This means kvm_lpa2_is_enabled() is not the right condition to test when setting TCR_EL2.DS, as it will return false if LPA2 is only available for stage 1 but not for stage 2. Similarly, programming TCR_EL2.PS based on a limited IPA range due to lack of stage2 LPA2 support could potentially result in problems. So use lpa2_is_enabled() instead, and set the PS field according to the host's IPS, which is capped at 48 bits if LPA2 support is absent or disabled. Whether or not we can make meaningful use of such a configuration is a different question. Cc: stable@vger.kernel.org Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20241212081841.2168124-11-ardb+git@google.com Signed-off-by: Will Deacon <will@kernel.org>
2024-12-19  arm64/mm: Override PARange for !LPA2 and use it consistently  (Ard Biesheuvel)
When FEAT_LPA{,2} are not implemented, the ID_AA64MMFR0_EL1.PARange and TCR.IPS values corresponding with 52-bit physical addressing are reserved. Setting the TCR.IPS field to 0b110 (52-bit physical addressing) has side effects, such as how the TTBRn_ELx.BADDR fields are interpreted, and so it is important that disabling FEAT_LPA2 (by overriding the ID_AA64MMFR0.TGran fields) also presents a PARange field consistent with that. So limit the field to 48 bits unless LPA2 is enabled, and update existing references to use the override consistently. Fixes: 352b0395b505 ("arm64: Enable 52-bit virtual addressing for 4k and 16k granule configs") Cc: stable@vger.kernel.org Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20241212081841.2168124-10-ardb+git@google.com Signed-off-by: Will Deacon <will@kernel.org>
2024-12-19  arm64/mm: Reduce PA space to 48 bits when LPA2 is not enabled  (Ard Biesheuvel)
Currently, LPA2 kernel support implies support for up to 52 bits of physical addressing, and this is reflected in global definitions such as PHYS_MASK_SHIFT and MAX_PHYSMEM_BITS. This is potentially problematic, given that LPA2 hardware support is modeled as a CPU feature which can be overridden, and with LPA2 hardware support turned off, attempting to map physical regions with address bits [51:48] set (which may exist on LPA2 capable systems booting with arm64.nolva) will result in corrupted mappings with a truncated output address and bogus shareability attributes. This means that the accepted physical address range in the mapping routines should be at most 48 bits wide when LPA2 support is configured but not enabled at runtime. Fixes: 352b0395b505 ("arm64: Enable 52-bit virtual addressing for 4k and 16k granule configs") Cc: stable@vger.kernel.org Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20241212081841.2168124-9-ardb+git@google.com Signed-off-by: Will Deacon <will@kernel.org>
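A C sketch of the resulting rule, with invented names (not the arm64 code): the accepted physical-address width follows the runtime LPA2 state rather than the compile-time capability, so addresses needing bits [51:48] are rejected instead of silently truncated.

    /* Illustrative model only, not the arm64 mapping routines. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static unsigned int phys_mask_shift(bool lpa2_enabled_at_runtime)
    {
            return lpa2_enabled_at_runtime ? 52 : 48;
    }

    static bool pa_is_mappable(uint64_t pa, bool lpa2_enabled_at_runtime)
    {
            uint64_t mask = (1ULL << phys_mask_shift(lpa2_enabled_at_runtime)) - 1;

            /* Address bits above the runtime limit would be truncated by the
             * descriptor format, so reject such addresses up front. */
            return (pa & ~mask) == 0;
    }

    int main(void)
    {
            uint64_t pa = 1ULL << 50;   /* needs bits [51:48] */

            printf("LPA2 off: %s\n", pa_is_mappable(pa, false) ? "map" : "reject");
            printf("LPA2 on:  %s\n", pa_is_mappable(pa, true) ? "map" : "reject");
            return 0;
    }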
2024-12-19  KVM: x86: Remove hwapic_irr_update() from kvm_x86_ops  (Chao Gao)
Remove the redundant .hwapic_irr_update() ops. If a vCPU has APICv enabled, KVM updates its RVI before VM-enter to L1 in vmx_sync_pir_to_irr(). This guarantees RVI is up-to-date and aligned with the vIRR in the virtual APIC. So, no need to update RVI every time the vIRR changes. Note that KVM never updates vmcs02 RVI in .hwapic_irr_update() or vmx_sync_pir_to_irr(). So, removing .hwapic_irr_update() has no impact on the nested case. Signed-off-by: Chao Gao <chao.gao@intel.com> Link: https://lore.kernel.org/r/20241111085947.432645-1-chao.gao@intel.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-19  KVM: nVMX: Honor event priority when emulating PI delivery during VM-Enter  (Sean Christopherson)
Move the handling of a nested posted interrupt notification that is unblocked by nested VM-Enter (unblocks L1 IRQs when ack-on-exit is enabled by L1) from VM-Enter emulation to vmx_check_nested_events(). To avoid a pointless forced immediate exit, i.e. to not regress IRQ delivery latency when a nested posted interrupt is pending at VM-Enter, block processing of the notification IRQ if and only if KVM must block _all_ events. Unlike injected events, KVM doesn't need to actually enter L2 before updating the vIRR and vmcs02.GUEST_INTR_STATUS, as the resulting L2 IRQ will be blocked by hardware itself, until VM-Enter to L2 completes. Note, very strictly speaking, moving the IRQ from L2's PIR to IRR before entering L2 is still technically wrong. But, practically speaking, only an L1 hypervisor or an L0 userspace that is deliberately checking event priority against PIR=>IRR processing can even notice; L2 will see architecturally correct behavior, as KVM ensures the VM-Enter is finished before doing anything that would effectively preempt the PIR=>IRR movement. Reported-by: Chao Gao <chao.gao@intel.com> Link: https://lore.kernel.org/r/20241101191447.1807602-6-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-19  KVM: nVMX: Use vmcs01's controls shadow to check for IRQ/NMI windows at VM-Enter  (Sean Christopherson)
Use vmcs01's execution controls shadow to check for IRQ/NMI windows after a successful nested VM-Enter, instead of snapshotting the information prior to emulating VM-Enter. It's quite difficult to see that the entire reason controls are snapshotted prior to nested VM-Enter is to read them from vmcs01 (vmcs02 is loaded if nested VM-Enter is successful). That could be solved with a comment, but explicitly using vmcs01's shadow makes the code self-documenting to a certain extent. No functional change intended (vmcs01's execution controls must not be modified during emulation of nested VM-Enter). Link: https://lore.kernel.org/r/20241101191447.1807602-5-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-19  KVM: nVMX: Drop manual vmcs01.GUEST_INTERRUPT_STATUS.RVI check at VM-Enter  (Sean Christopherson)
Drop the manual check for a pending IRQ in vmcs01's RVI field during nested VM-Enter, as the recently added call to kvm_apic_has_interrupt() when checking for pending events after successful VM-Enter is a superset of the RVI check (IRQs that are pending in RVI are also pending in L1's IRR). Link: https://lore.kernel.org/r/20241101191447.1807602-4-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-19  KVM: nVMX: Check for pending INIT/SIPI after entering non-root mode  (Sean Christopherson)
Explicitly check for a pending INIT or SIPI after entering non-root mode during nested VM-Enter emulation, as no VMCS information is queried as part of the check, i.e. there is no need to check for INIT/SIPI while vmcs01 is still loaded. Link: https://lore.kernel.org/r/20241101191447.1807602-3-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-19  KVM: nVMX: Explicitly update vPPR on successful nested VM-Enter  (Sean Christopherson)
Always request pending event evaluation after successful nested VM-Enter if L1 has a pending IRQ, as KVM will effectively do so anyways when APICv is enabled, by way of vmx_has_apicv_interrupt(). This will allow dropping the aforementioned APICv check, and will also allow handling nested Posted Interrupt processing entirely within vmx_check_nested_events(), which is necessary to honor priority between concurrent events. Note, checking for pending IRQs has a subtle side effect, as it results in a PPR update for L1's vAPIC (PPR virtualization does happen at VM-Enter, but for nested VM-Enter that affects L2's vAPIC, not L1's vAPIC). However, KVM updates PPR _constantly_, even when PPR technically shouldn't be refreshed, e.g. kvm_vcpu_has_events() re-evaluates PPR if IRQs are unblocked, by way of the same kvm_apic_has_interrupt() check. Ditto for nested VM-Enter itself, when nested posted interrupts are enabled. Thus, trying to avoid a PPR update on VM-Enter just to be pedantically accurate is ridiculous, given the behavior elsewhere in KVM. Link: https://lore.kernel.org/kvm/20230312180048.1778187-1-jason.cj.chen@intel.com Closes: https://lore.kernel.org/all/20240920080012.74405-1-mankku@gmail.com Signed-off-by: Chao Gao <chao.gao@intel.com> Link: https://lore.kernel.org/r/20241101191447.1807602-2-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-19  KVM: VMX: Allow toggling bits in MSR_IA32_RTIT_CTL when enable bit is cleared  (Adrian Hunter)
Allow toggling other bits in MSR_IA32_RTIT_CTL if the enable bit is being cleared, the existing logic simply ignores the enable bit. E.g. KVM will incorrectly reject a write of '0' to stop tracing. Fixes: bf8c55d8dc09 ("KVM: x86: Implement Intel PT MSRs read/write emulation") Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> [sean: rework changelog, drop stable@] Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com> Link: https://lore.kernel.org/r/20241101185031.1799556-3-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
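A compact C model of the corrected write check (invented names and a simplified register layout, not the KVM code): a write that clears the enable bit is accepted even if it also flips other bits, while changing other bits with tracing left enabled is still rejected.

    /* Illustrative model only, not KVM's Intel PT MSR emulation. */
    #include <stdint.h>
    #include <stdio.h>

    #define RTIT_CTL_TRACEEN (1ull << 0)

    /* Return 0 to accept the write, -1 to inject #GP. */
    static int validate_rtit_ctl_write(uint64_t old, uint64_t new)
    {
            /* If tracing stays enabled across the write, no other bit may change. */
            if ((old & RTIT_CTL_TRACEEN) && (new & RTIT_CTL_TRACEEN) &&
                ((old ^ new) & ~RTIT_CTL_TRACEEN))
                    return -1;

            return 0;
    }

    int main(void)
    {
            uint64_t old = RTIT_CTL_TRACEEN | (1ull << 2);

            /* Writing 0 stops tracing and clears other bits: must be allowed. */
            printf("write 0x0: %s\n", validate_rtit_ctl_write(old, 0) ? "#GP" : "ok");
            /* Changing other bits while tracing stays on is still rejected. */
            printf("write 0x1: %s\n",
                   validate_rtit_ctl_write(old, RTIT_CTL_TRACEEN) ? "#GP" : "ok");
            return 0;
    }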
2024-12-19  KVM: nVMX: Defer SVI update to vmcs01 on EOI when L2 is active w/o VID  (Chao Gao)
If KVM emulates an EOI for L1's virtual APIC while L2 is active, defer updating GUEST_INTERRUPT_STATUS.SVI, i.e. the VMCS's cache of the highest in-service IRQ, until L1 is active, as vmcs01, not vmcs02, needs to track vISR. The missed SVI update for vmcs01 can result in L1 interrupts being incorrectly blocked, e.g. if there is a pending interrupt with lower priority than the interrupt that was EOI'd. This bug only affects use cases where L1's vAPIC is effectively passed through to L2, e.g. in a pKVM scenario where L2 is L1's deprivileged host, as KVM will only emulate an EOI for L1's vAPIC if Virtual Interrupt Delivery (VID) is disabled in vmcs12, and L1 isn't intercepting L2 accesses to its (virtual) APIC page (or if x2APIC is enabled, the EOI MSR). WARN() if KVM updates L1's ISR while L2 is active with VID enabled, as an EOI from L2 is supposed to affect L2's vAPIC, but still defer the update, to try to keep L1 alive. Specifically, KVM forwards all APICv-related VM-Exits to L1 via nested_vmx_l1_wants_exit():

  case EXIT_REASON_APIC_ACCESS:
  case EXIT_REASON_APIC_WRITE:
  case EXIT_REASON_EOI_INDUCED:
          /*
           * The controls for "virtualize APIC accesses," "APIC-
           * register virtualization," and "virtual-interrupt
           * delivery" only come from vmcs12.
           */
          return true;

Fixes: c7c9c56ca26f ("x86, apicv: add virtual interrupt delivery support") Cc: stable@vger.kernel.org Link: https://lore.kernel.org/kvm/20230312180048.1778187-1-jason.cj.chen@intel.com Reported-by: Markku Ahvenjärvi <mankku@gmail.com> Closes: https://lore.kernel.org/all/20240920080012.74405-1-mankku@gmail.com Cc: Janne Karhunen <janne.karhunen@gmail.com> Signed-off-by: Chao Gao <chao.gao@intel.com> [sean: drop request, handle in VMX, write changelog] Tested-by: Chao Gao <chao.gao@intel.com> Link: https://lore.kernel.org/r/20241128000010.4051275-3-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-19  Merge tag 'kvm-selftests-treewide-6.14' of https://github.com/kvm-x86/linux into HEAD  (Paolo Bonzini)
KVM selftests "tree"-wide changes for 6.14:

- Rework vcpu_get_reg() to return a value instead of using an out-param, and update all affected arch code accordingly.

- Convert the max_guest_memory_test into a more generic mmu_stress_test. The basic gist of the "conversion" is to have the test do mprotect() on guest memory while vCPUs are accessing said memory, e.g. to verify KVM and mmu_notifiers are working as intended.

- Play nice with treewide builds of unsupported architectures, e.g. arm (32-bit), as KVM selftests' Makefile doesn't do anything to ensure the target architecture is actually one KVM selftests supports.

- Use the kernel's $(ARCH) definition instead of the target triple for arch specific directories, e.g. arm64 instead of aarch64, mainly so as not to be different from the rest of the kernel.
2024-12-19  arm64: dts: mediatek: mt8195: Remove suspend-breaking reset from pcie1  (Nícolas F. R. A. Prado)
The MAC reset for PCIe port 1 on MT8195, when asserted during suspend, causes the system to hang during resume with the following error (with no_console_suspend enabled):

  mtk-pcie-gen3 112f8000.pcie: PCIe link down, current LTSSM state: detect.quiet (0x0)
  mtk-pcie-gen3 112f8000.pcie: PM: dpm_run_callback(): genpd_resume_noirq+0x0/0x24 returns -110
  mtk-pcie-gen3 112f8000.pcie: PM: failed to resume noirq: error -110

This issue is specific to MT8195. On MT8192, with the PCIe reset, MT8192_INFRA_RST4_PCIE_TOP_SWRST, added to the DT node, the issue is not observed. Since without the reset the PCIe controller and the WiFi card connected to it work just as well, remove the reset to allow the system to suspend and resume properly. Fixes: ecc0af6a3fe6 ("arm64: dts: mt8195: Add pcie and pcie phy nodes") Signed-off-by: Nícolas F. R. A. Prado <nfraprado@collabora.com> Link: https://lore.kernel.org/r/20241218-mt8195-pcie1-reset-suspend-fix-v1-1-1c021dda42a6@collabora.com Signed-off-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
2024-12-19  arm64: dts: mt7986: add overlay for SATA power socket on BPI-R3  (Frank Wunderlich)
Bananapi R3 has a power socket intended for using external SATA drives. This socket is off by default but can be switched with GPIO 8. Add an overlay to activate it. Signed-off-by: Frank Wunderlich <frank-w@public-files.de> Link: https://lore.kernel.org/r/20241206132401.70259-1-linux@fw-web.de Signed-off-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
2024-12-19  arm64: dts: mediatek: mt8188: Add GPU speed bin NVMEM cells  (Hsin-Te Yuan)
On the MT8188, the chip is binned for different GPU voltages at the highest OPPs. The binning value is stored in the efuse. Add the NVMEM cell, and tie it to the GPU. Signed-off-by: Hsin-Te Yuan <yuanhsinte@chromium.org> Link: https://lore.kernel.org/r/20241213-speedbin-v1-1-a0053ead9477@chromium.org Signed-off-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
2024-12-19  arm64: dts: mediatek: mt8183: willow: Support second source touchscreen  (Hsin-Te Yuan)
Some willow devices use a second-source touchscreen. Fixes: f006bcf1c972 ("arm64: dts: mt8183: Add kukui-jacuzzi-willow board") Signed-off-by: Hsin-Te Yuan <yuanhsinte@chromium.org> Link: https://lore.kernel.org/r/20241213-touchscreen-v3-2-7c1f670913f9@chromium.org Signed-off-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
2024-12-19  arm64: dts: mediatek: mt8183: kenzo: Support second source touchscreen  (Hsin-Te Yuan)
Some kenzo devices use a second-source touchscreen. Fixes: 0a9cefe21aec ("arm64: dts: mt8183: Add kukui-jacuzzi-kenzo board") Signed-off-by: Hsin-Te Yuan <yuanhsinte@chromium.org> Link: https://lore.kernel.org/r/20241213-touchscreen-v3-1-7c1f670913f9@chromium.org Signed-off-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
2024-12-19  powerpc: Large user copy aware of full:rt:lazy preemption  (Shrikanth Hegde)
Large user copy_to/from (more than 16 bytes) uses vmx instructions to speed things up. Once the copy is done, it makes sense to try to schedule as soon as possible for preemptible kernels. So do this for preempt=full/lazy and rt kernels. Not checking for the lazy bit here, since it could lead to unnecessary context switches. Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20241116192306.88217-3-sshegde@linux.ibm.com
2024-12-19  powerpc: Add preempt lazy support  (Shrikanth Hegde)
Define the preempt lazy bit for Powerpc. Use bit 9, which is free and within the 16-bit range of NEED_RESCHED, so the compiler can issue a single andi. Since Powerpc doesn't use the generic entry/exit code, add the lazy check at exit to user. CONFIG_PREEMPTION is defined for lazy/full/rt, so use it for return to kernel. Ran a few benchmarks and a db workload on Power10; performance is close to preempt=none/voluntary. Since Powerpc systems can have large core counts and large memory, preempt lazy is going to be helpful in avoiding soft lockup issues. Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Reviewed-by: Ankur Arora <ankur.a.arora@oracle.com> Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20241116192306.88217-2-sshegde@linux.ibm.com
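A generic C sketch of the "single andi." point above (the bit numbers are illustrative, not the real powerpc TIF layout): keeping both reschedule flags within the low 16 bits lets one AND against a 16-bit immediate test them together.

    /* Illustrative model only, not the powerpc entry code. */
    #include <stdbool.h>
    #include <stdio.h>

    #define TIF_NEED_RESCHED        (1u << 1)   /* illustrative bit number */
    #define TIF_NEED_RESCHED_LAZY   (1u << 9)   /* bit 9: still within 16-bit immediate range */

    static bool need_resched_any(unsigned long ti_flags)
    {
            /* One mask, one AND: maps to a single andi. on powerpc. */
            return ti_flags & (TIF_NEED_RESCHED | TIF_NEED_RESCHED_LAZY);
    }

    int main(void)
    {
            printf("%d\n", need_resched_any(TIF_NEED_RESCHED_LAZY));
            printf("%d\n", need_resched_any(0));
            return 0;
    }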
2024-12-19  arm64: dts: zynqmp: Add DMA for DP audio  (Tomi Valkeinen)
Add the two DMA channels used for the DisplayPort audio to the zynqmp_dpsub node. Acked-by: Michal Simek <michal.simek@amd.com> Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ideasonboard.com> Link: https://patchwork.freedesktop.org/patch/msgid/20241023-xilinx-dp-audio-v4-2-5128881457be@ideasonboard.com
2024-12-18  mm: introduce cpu_icache_is_aliasing() across all architectures  (Zi Yan)
In commit eacd0e950dc2 ("ARC: [mm] Lazy D-cache flush (non aliasing VIPT)"), arc adds the need to flush dcache to make icache see the code page change. This also requires special handling for clear_user_(high)page(). Introduce cpu_icache_is_aliasing() to make MM code query special clear_user_(high)page() easier. This will be used by the following commit. Link: https://lkml.kernel.org/r/20241209182326.2955963-1-ziy@nvidia.com Fixes: 5708d96da20b ("mm: avoid zeroing user movable page twice with init_on_alloc=1") Signed-off-by: Zi Yan <ziy@nvidia.com> Suggested-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Acked-by: Vlastimil Babka <vbabka@suse.cz> Cc: Alexander Potapenko <glider@google.com> Cc: David Hildenbrand <david@redhat.com> Cc: Geert Uytterhoeven <geert+renesas@glider.be> Cc: John Hubbard <jhubbard@nvidia.com> Cc: Kees Cook <keescook@chromium.org> Cc: Kefeng Wang <wangkefeng.wang@huawei.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Miaohe Lin <linmiaohe@huawei.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Vineet Gupta <vgupta@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
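A plain C sketch of the override-with-default pattern such a helper follows (simplified, not the actual kernel header): an architecture that needs the extra D-cache flush defines cpu_icache_is_aliasing() itself; everyone else inherits a default tied to cpu_dcache_is_aliasing().

    /* Illustrative model only, not <linux/cacheinfo.h>. */
    #include <stdio.h>

    /* An arc-like architecture header would do: #define cpu_icache_is_aliasing() 1 */

    #ifndef cpu_dcache_is_aliasing
    #define cpu_dcache_is_aliasing() 0
    #endif

    #ifndef cpu_icache_is_aliasing
    #define cpu_icache_is_aliasing() cpu_dcache_is_aliasing()
    #endif

    static void clear_user_page_model(void)
    {
            if (cpu_icache_is_aliasing())
                    printf("zero the page, then flush the dcache so the icache sees it\n");
            else
                    printf("zero the page, no extra cache maintenance\n");
    }

    int main(void)
    {
            clear_user_page_model();
            return 0;
    }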
2024-12-18  KVM: x86: Add information about pending requests to kvm_exit tracepoint  (Maxim Levitsky)
Print pending requests in the kvm_exit tracepoint, which allows userspace to gather information on how often KVM interrupts vCPUs due to specific requests. Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com> Link: https://lore.kernel.org/r/20240910200350.264245-3-mlevitsk@redhat.com [sean: massage changelog] Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-18  KVM: x86: Add interrupt injection information to the kvm_entry tracepoint  (Maxim Levitsky)
Add VMX/SVM specific interrupt injection info to the kvm_entry tracepoint. As is done with kvm_exit, gather the information via a kvm_x86_ops hook to avoid the moderately costly VMREADs on VMX when the tracepoint isn't enabled. Opportunistically rename the parameters in the get_exit_info() declaration to match the names used by both SVM and VMX. Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com> Link: https://lore.kernel.org/r/20240910200350.264245-2-mlevitsk@redhat.com [sean: drop is_guest_mode() change, use intr_info/error_code for names] Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-18  KVM: SVM: Handle event vectoring error in check_emulate_instruction()  (Ivan Orlov)
Detect unhandleable vectoring in check_emulate_instruction() to prevent infinite retry loops on SVM, and to eliminate the main differences in how VM-Exits during event vectoring are handled on SVM versus VMX. E.g. if the vCPU puts its IDT in emulated MMIO memory and generates an event, without the check_emulate_instruction() change, SVM will re-inject the event and resume the guest, and effectively put the vCPU into an infinite loop. Signed-off-by: Ivan Orlov <iorlov@amazon.com> Link: https://lore.kernel.org/r/20241217181458.68690-6-iorlov@amazon.com [sean: grab "svm" locally, massage changelog] Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-18  KVM: VMX: Handle event vectoring error in check_emulate_instruction()  (Ivan Orlov)
Move handling of emulation during event vectoring, which KVM doesn't support, into VMX's check_emulate_instruction(), so that KVM detects all unsupported emulation, not just cached emulated MMIO (EPT misconfig). E.g. on emulated MMIO that isn't cached (EPT Violation) or occurs with legacy shadow paging (#PF). Rejecting emulation on other sources of emulation also fixes a largely theoretical flaw (thanks to the "unprotect and retry" logic), where KVM could incorrectly inject a #DF: 1. CPU executes an instruction and hits a #GP 2. While vectoring the #GP, a shadow #PF occurs 3. On the #PF VM-Exit, KVM re-injects #GP 4. KVM emulates because of the write-protected page 5. KVM "successfully" emulates and also detects the #GP 6. KVM synthesizes a #GP, and since #GP has already been injected, incorrectly escalates to a #DF. Fix the comment about EMULTYPE_PF as this flag doesn't necessarily mean MMIO anymore: it can also be set due to the write protection violation. Note, handle_ept_misconfig() checks vmx_check_emulate_instruction() before attempting emulation of any kind. Suggested-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Ivan Orlov <iorlov@amazon.com> Link: https://lore.kernel.org/r/20241217181458.68690-5-iorlov@amazon.com [sean: massage changelog] Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-18  KVM: x86: Try to unprotect and retry on unhandleable emulation failure  (Ivan Orlov)
If emulation is "rejected" by check_emulate_instruction(), try to unprotect and retry instruction execution before reporting the error to userspace. Currently, check_emulate_instruction() never signals failure when "unprotect and retry" is possible, but that will change in the future as both VMX and SVM will reject emulation due to coincident exception vectoring. E.g. if there is a write to a shadowed page table when vectoring an event, then unprotecting the gfn and retrying the instruction will allow the guest to make forward progress in most cases, i.e. will allow the vCPU to keep running instead of returning an error to userspace. This ensures that the subsequent patches won't make KVM exit to userspace when handling an intercepted #PF during vectoring without checking whether unprotect and retry is possible. Suggested-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Ivan Orlov <iorlov@amazon.com> Link: https://lore.kernel.org/r/20241217181458.68690-4-iorlov@amazon.com [sean: massage changelog to clarify this is a nop for the current code] Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-18  KVM: x86: Add emulation status for unhandleable exception vectoring  (Ivan Orlov)
Add an emulation status for unhandleable vectoring, i.e. when KVM can't emulate an instruction because emulation was triggered on an exit that occurred while the CPU was vectoring an event. Such a situation can occur if the guest sets the IDT descriptor base to point to an MMIO region, and triggers an exception after that. Exit to userspace with an event delivery error when KVM can't emulate an instruction while vectoring an event. Signed-off-by: Ivan Orlov <iorlov@amazon.com> Link: https://lore.kernel.org/r/20241217181458.68690-3-iorlov@amazon.com [sean: massage changelog and X86EMUL_UNHANDLEABLE_VECTORING comment] Signed-off-by: Sean Christopherson <seanjc@google.com>
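Tying the four vectoring entries above together, a self-contained C sketch of the overall control flow (hypothetical names and statuses, not KVM's emulation code): vectoring-time emulation gets its own status, unprotect-and-retry is attempted first, and only then does KVM exit to userspace with an event-delivery error.

    /* Illustrative model only, not KVM's x86 emulation paths. */
    #include <stdbool.h>
    #include <stdio.h>

    enum emul_status {
            EMUL_CONTINUE,
            EMUL_UNHANDLEABLE,
            EMUL_UNHANDLEABLE_VECTORING,  /* exit occurred during event delivery */
    };

    static enum emul_status check_emulate_instruction(bool vectoring_event)
    {
            if (vectoring_event)
                    return EMUL_UNHANDLEABLE_VECTORING;
            return EMUL_CONTINUE;
    }

    static const char *handle_emulation(bool vectoring_event, bool can_unprotect_gfn)
    {
            enum emul_status st = check_emulate_instruction(vectoring_event);

            if (st == EMUL_CONTINUE)
                    return "emulate the instruction";
            /* e.g. the vectoring write hit a write-protected shadowed page table */
            if (can_unprotect_gfn)
                    return "unprotect the gfn and retry the instruction";
            if (st == EMUL_UNHANDLEABLE_VECTORING)
                    return "exit to userspace with an event-delivery error";
            return "exit to userspace with an emulation error";
    }

    int main(void)
    {
            printf("%s\n", handle_emulation(true, true));
            printf("%s\n", handle_emulation(true, false));
            return 0;
    }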