path: root/arch
Age    Commit message    Author
2023-08-18  powerpc/83xx: Split usb.c  (Christophe Leroy)
usb.c contains three independent parts with no common part. Split it. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> [mpe: Drop usb.o from Makefile to fix build] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/75712b54bf9cb85ab10e47cd2772cd2a098ca895.1692199324.git.christophe.leroy@csgroup.eu
2023-08-18  powerpc/83xx: Fix style problems in usb.c and remove unnecessary includes from mpc83xx.h  (Christophe Leroy)
Replace printk(KERN_WARNING ...) with pr_warn(). Remove a couple of blank lines and re-align multi-line code. Replace asm/io.h with linux/io.h. mpc83xx.h doesn't need linux/device.h or asm/pci-bridge.h. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/2cb498f637e082a4af8032311fad3cae84d6aa5d.1692199324.git.christophe.leroy@csgroup.eu
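As a hedged illustration of the style change described above (not the actual usb.c hunk; the function and message are made up):

    #include <linux/io.h>      /* was <asm/io.h> */
    #include <linux/printk.h>

    static void report_missing_phy(void)
    {
        /* before: printk(KERN_WARNING "no PHY found\n"); */
        pr_warn("no PHY found\n");
    }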
2023-08-18  powerpc/fsl_pci: Make fsl_add_bridge() static  (Christophe Leroy)
Since commit 905e75c46dba ("powerpc/fsl-pci: Unify pci/pcie initialization code") fsl_add_bridge() is not used anymore outside of fsl_pci.c Make it static. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/2115e3597d81e72a865820af54f0e290d0fd2b3a.1692199186.git.christophe.leroy@csgroup.eu
2023-08-18  powerpc/512x: Make mpc512x_select_reset_compat() static  (Christophe Leroy)
mpc512x_select_reset_compat() is only used in the file it is defined. Make it static. Move mpc512x_restart_init() after mpc512x_select_reset_compat(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/36a19e13025dbf17e92e832dd24150642b0e9bad.1692341499.git.christophe.leroy@csgroup.eu
2023-08-17  ARC: entry: EV_MachineCheck: don't re-read ECR  (Vineet Gupta)
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2023-08-17  ARC: entry: ARcompact EV_ProtV to use r10 directly  (Vineet Gupta)
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2023-08-17  ARC: entry: rework (non-functional)  (Vineet Gupta)
- comments update
- rename syscall_trace_entry
- use PT_xxx in entry code
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2023-08-17  ARC: __switch_to: move ksp to thread_info from thread_struct  (Vineet Gupta)
A task's arch specific bits are carried in 2 places:
- the embedded thread_struct in task_struct
- the associated thread_info (hoisted in the task's stack page), syntactically (thread_info *)(task_struct->stack)

ksp (the dynamic kernel stack top) currently lives in thread_struct, but given its deep location in task_struct it is likely to cache miss when accessed from __switch_to(). Moving it to thread_info is more efficient given its proximity to frequently accessed items such as preempt_count, so it is very likely to be in cache, especially in scheduler code. Note however that tsk.thread.ksp currently takes 1 memory access (off the tsk pointer) while the new tsk->stack.ksp takes 2, though the second is likely to be in cache. Moreover, if the task is current, the 2nd reference can be elided and the base instead derived from SP as (SP & ~(THREAD_SIZE - 1)), sketched below. All of this also makes the __switch_to() code simpler, and both ways of retrieving ksp (described above) can be seen in the new code. Signed-off-by: Vineet Gupta <vgupta@kernel.org>
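A minimal sketch of the SP-derived lookup mentioned above; the helper name is illustrative and the usual kernel headers (sched.h, thread_info.h) are assumed:

    /* The kernel stack is THREAD_SIZE aligned, so masking SP yields the stack
     * page base where thread_info (and now ksp) lives. */
    static inline struct thread_info *thread_info_from_sp(unsigned long sp)
    {
        return (struct thread_info *)(sp & ~(THREAD_SIZE - 1));
    }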
2023-08-17  ARC: __switch_to: asm with dwarf ops (vs. inline asm)  (Vineet Gupta)
__switch_to() is the final step of a context switch, swapping the kernel mode stack (and callee regs) of the outgoing task with the next task. It is also the starting point of stack unwinding of a sleeping task and captures SP, FP, BLINK and the corresponding dwarf info. Back when dinosaurs still roamed around, ARC gas didn't support CFI pseudo ops and gcc was responsible for generating dwarf info. Thus it had to be written in "C" with inline asm to do the hand crafting of the stack. The function prologue (and the crucial saving of blink etc.) was still gcc generated but not visible in the code; likewise the dwarf info was missing. Now with modern tools, we can make things more obvious by writing the code in asm and adding appropriate dwarf CFI pseudo ops. This is a mostly non-functional change, except for slight changes to the asm: ARCompact doesn't support MOV_S fp, sp, so we use MOV. Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2023-08-17  ARC: kernel stack: INIT_THREAD need not setup @init_stack in @ksp  (Vineet Gupta)
There are 2 pointers to the kernel mode stack of a task:
- task_struct.stack: base address of stack page (max possible stack top)
- thread_info.ksp: runtime stack top in __switch_to

INIT_THREAD was setting up ksp to the stack base, which was not really needed:
- it would get overwritten with the dynamic value on the first call to __switch_to when init is switched out for the very first time.
- generic code already does init_task.stack = init_stack and ARC code uses that to retrieve the task's stack base.
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2023-08-17  ARC: entry: use gp to cache task pointer (vs. r25)  (Vineet Gupta)
The motivation is eventual ABI considerations for ARCv3, but even without it this change is worthwhile, as the diffstat drops 100 net lines.

r25 is a callee-saved register, normally not saved by entry code in pt_regs. However, because of its usage in CONFIG_ARC_CURR_IN_REG, it needs to be. This in turn requires a whole bunch of special casing when we need to access r25. Then there is the distinction between user mode r25 vs. kernel mode r25, hence the distinct SAVE_CALLEE_SAVED_{USER,KERNEL}.

Instead use gp, which is a scratch register and thus already saved in entry code. This cleans things up significantly and is much nicer on the eyes:
- SAVE_CALLEE_SAVED_{USER,KERNEL} are now exactly the same
- no special user_r25 slot in pt_regs

Note that typical global asm registers are callee-saved (like r25), but gp is not callee-saved and thus needs the additional -ffixed-<reg> toggle (see the sketch below). Signed-off-by: Vineet Gupta <vgupta@kernel.org>
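A hedged sketch of pinning the task pointer in a fixed register; the variable name is illustrative and not the ARC source:

    /* Assumed illustration: reserve gp for the current task pointer. */
    register struct task_struct *curr_task asm("gp");

    /* built with the register withheld from the allocator, e.g.:
     *   KBUILD_CFLAGS += -ffixed-gp
     */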
2023-08-17  ARC: boot log: eliminate struct cpuinfo_arc #4: boot log per ISA  (Vineet Gupta)
- boot log is now clearly per ISA
- global struct cpuinfo_arc[] eliminated
- local struct arcinfo kept for passing info between functions
Tested-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202308162101.Ve5jBg80-lkp@intel.com Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2023-08-17  ARC: boot log: eliminate struct cpuinfo_arc #3: don't export  (Vineet Gupta)
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2023-08-17  ARC: boot log: eliminate struct cpuinfo_arc #2: cache  (Vineet Gupta)
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2023-08-17  ARC: boot log: eliminate struct cpuinfo_arc #1: mm  (Vineet Gupta)
This is the first step in eliminating struct cpuinfo_arc[NR_CPUS].

Back when we had just the ARCompact ISA, the idea was to read/bit-fiddle the BCRs once and cache the decoded information in a global struct, ready to use. With ARCv2 it was modified to contain abstract / ISA-agnostic information. However, with ARCv3 there's too much disparity to abstract into common structures, so drop the entire decode-once-and-store paradigm. After all, there are only 2 users of this machinery anyway: boot printing and cat /proc/cpuinfo. Neither is performance critical enough to warrant locking away resident memory per cpu.

This patch is the first step in that direction:
- decouples struct cpuinfo_arc_mmu from the global struct cpuinfo_arc
- mmu code still has a trimmed down static version of struct cpuinfo_arc_mmu to cache information needed in performance critical code such as tlb flush routines
- folds read_decode_mmu_bcr() into arc_mmu_mumbojumbo()
- setup_processor() directly calls arc_mmu_init() and not via arc_cpu_init()
Tested-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202308151213.qKZPMiyz-lkp@intel.com/ Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2023-08-17  arm64: dts: qcom: sdm845-db845c: Mark cont splash memory region as reserved  (Amit Pundir)
Add a reserved memory region for the framebuffer memory (the splash memory region set up by the bootloader). This fixes a kernel panic (arm-smmu: Unhandled context fault at this particular memory region) reported on DB845c running v5.10.y. Cc: stable@vger.kernel.org # v5.10+ Reviewed-by: Caleb Connolly <caleb.connolly@linaro.org> Signed-off-by: Amit Pundir <amit.pundir@linaro.org> Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Link: https://lore.kernel.org/r/20230726132719.2117369-2-amit.pundir@linaro.org Signed-off-by: Bjorn Andersson <andersson@kernel.org>
2023-08-17  ARM: dts: qcom: apq8064: add support to gsbi4 uart  (David Heidelberg)
Add support for the gsbi4 uart, which is used in the LG Mako. Signed-off-by: David Heidelberg <david@ixit.cz> Link: https://lore.kernel.org/r/20230814150040.64133-1-david@ixit.cz Signed-off-by: Bjorn Andersson <andersson@kernel.org>
2023-08-18  powerpc/ps3: refactor strncpy usage  (Justin Stitt)
`strncpy` is deprecated for use on NUL-terminated destination strings [1]. `make_first_field()` should use a similar implementation to `make_field()`, since memcpy has more obvious behavior here. The end result yields the same behavior as the previous `strncpy`-based implementation, including the NUL-padding. Link: https://www.kernel.org/doc/html/latest/process/deprecated.html#strncpy-on-nul-terminated-strings [1] Link: https://github.com/KSPP/linux/issues/90 Signed-off-by: Justin Stitt <justinstitt@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Tested-by: Geoff Levand <geoff@infradead.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230816-strncpy-arch-powerpc-platforms-ps3-repository-v1-1-88283b02fb09@google.com
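A hedged sketch of the memcpy-based construction; the name, the 8-byte field width and the return expression are assumptions for illustration, not the ps3 repository code:

    #include <linux/string.h>
    #include <linux/minmax.h>

    /* Copy at most 8 bytes of text into a zeroed u64; the untouched tail
     * provides the same NUL padding the old strncpy() gave. */
    static u64 make_first_field_sketch(const char *text, u64 index)
    {
        u64 n = 0;

        memcpy(&n, text, min(strlen(text), sizeof(n)));
        return n + index;
    }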
2023-08-17  ARCv2: memset: don't prefetch for len == 0, which happens a lot  (Vineet Gupta)
This avoids potential "bleeding" when size == 0, as the cache line would otherwise be dirtied (and possibly fetched from other cores); for the same reason it is also more optimal. Signed-off-by: Vineet Gupta <vgupta@kernel.org>
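A hedged C-level sketch of the idea (the real fix is in the ARCv2 assembly memset): exit before any prefetch when the length is zero.

    #include <linux/string.h>
    #include <linux/prefetch.h>

    static void *memset_sketch(void *s, int c, size_t n)
    {
        /* bail out before prefetchw() so a zero-length call never touches
         * (and dirties) the destination cache line */
        if (unlikely(!n))
            return s;

        prefetchw(s);
        return memset(s, c, n);
    }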
2023-08-17  ARC: uaccess: elide unaligned handling if hardware supports it  (Vineet Gupta)
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2023-08-17  ARC: uaccess: use optimized generic __strnlen_user/__strncpy_from_user  (Vineet Gupta)
The existing ARC variants have 2 issues:
- they use ZOL, which may not be present in a forthcoming architecture
- they are byte-loop based vs. the generic version, which is word-loop based
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2023-08-17  ARC: uaccess: remove arc specific out-of-line handlers for -Os  (Vineet Gupta)
Everything is now out-of-line in lib/usercopy.c Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2023-08-17  KVM: x86: Disallow guest CPUID lookups when IRQs are disabled  (Sean Christopherson)
Now that KVM has a framework for caching guest CPUID feature flags, add a "rule" that IRQs must be enabled when doing guest CPUID lookups, and enforce the rule via a lockdep assertion. CPUID lookups are slow, and within KVM, IRQs are only ever disabled in hot paths, e.g. the core run loop, fast page fault handling, etc. I.e. querying guest CPUID with IRQs disabled, especially in the run loop, should be avoided. Link: https://lore.kernel.org/r/20230815203653.519297-16-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
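A hedged sketch of enforcing the rule; the wrapper name is illustrative, while lockdep_assert_irqs_enabled() is the standard lockdep API and kvm_find_cpuid_entry_index() is KVM's existing CPUID lookup helper:

    static struct kvm_cpuid_entry2 *guest_cpuid_lookup_sketch(struct kvm_vcpu *vcpu,
                                                               u32 function, u32 index)
    {
        /* CPUID walks are slow; catch anyone doing them with IRQs off */
        lockdep_assert_irqs_enabled();

        return kvm_find_cpuid_entry_index(vcpu, function, index);
    }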
2023-08-17KVM: nSVM: Use KVM-governed feature framework to track "vNMI enabled"Sean Christopherson
Track "virtual NMI exposed to L1" via a governed feature flag instead of using a dedicated bit/flag in vcpu_svm. Note, checking KVM's capabilities instead of the "vnmi" param means that the code isn't strictly equivalent, as vnmi_enabled could have been set if nested=false where as that the governed feature cannot. But that's a glorified nop as the feature/flag is consumed only by paths that are gated by nSVM being enabled. Link: https://lore.kernel.org/r/20230815203653.519297-15-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2023-08-17KVM: nSVM: Use KVM-governed feature framework to track "vGIF enabled"Sean Christopherson
Track "virtual GIF exposed to L1" via a governed feature flag instead of using a dedicated bit/flag in vcpu_svm. Note, checking KVM's capabilities instead of the "vgif" param means that the code isn't strictly equivalent, as vgif_enabled could have been set if nested=false where as that the governed feature cannot. But that's a glorified nop as the feature/flag is consumed only by paths that are Link: https://lore.kernel.org/r/20230815203653.519297-14-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2023-08-17KVM: nSVM: Use KVM-governed feature framework to track "Pause Filter enabled"Sean Christopherson
Track "Pause Filtering is exposed to L1" via governed feature flags instead of using dedicated bits/flags in vcpu_svm. No functional change intended. Reviewed-by: Yuan Yao <yuan.yao@intel.com> Link: https://lore.kernel.org/r/20230815203653.519297-13-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2023-08-17KVM: nSVM: Use KVM-governed feature framework to track "LBRv enabled"Sean Christopherson
Track "LBR virtualization exposed to L1" via a governed feature flag instead of using a dedicated bit/flag in vcpu_svm. Note, checking KVM's capabilities instead of the "lbrv" param means that the code isn't strictly equivalent, as lbrv_enabled could have been set if nested=false where as that the governed feature cannot. But that's a glorified nop as the feature/flag is consumed only by paths that are gated by nSVM being enabled. Link: https://lore.kernel.org/r/20230815203653.519297-12-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2023-08-17KVM: nSVM: Use KVM-governed feature framework to track "vVM{SAVE,LOAD} enabled"Sean Christopherson
Track "virtual VMSAVE/VMLOAD exposed to L1" via a governed feature flag instead of using a dedicated bit/flag in vcpu_svm. Opportunistically add a comment explaining why KVM disallows virtual VMLOAD/VMSAVE when the vCPU model is Intel. No functional change intended. Link: https://lore.kernel.org/r/20230815203653.519297-11-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2023-08-17KVM: nSVM: Use KVM-governed feature framework to track "TSC scaling enabled"Sean Christopherson
Track "TSC scaling exposed to L1" via a governed feature flag instead of using a dedicated bit/flag in vcpu_svm. Note, this fixes a benign bug where KVM would mark TSC scaling as exposed to L1 even if overall nested SVM supported is disabled, i.e. KVM would let L1 write MSR_AMD64_TSC_RATIO even when KVM didn't advertise TSCRATEMSR support to userspace. Reviewed-by: Yuan Yao <yuan.yao@intel.com> Link: https://lore.kernel.org/r/20230815203653.519297-10-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2023-08-17KVM: nSVM: Use KVM-governed feature framework to track "NRIPS enabled"Sean Christopherson
Track "NRIPS exposed to L1" via a governed feature flag instead of using a dedicated bit/flag in vcpu_svm. No functional change intended. Reviewed-by: Yuan Yao <yuan.yao@intel.com> Link: https://lore.kernel.org/r/20230815203653.519297-9-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2023-08-17  KVM: nVMX: Use KVM-governed feature framework to track "nested VMX enabled"  (Sean Christopherson)
Track "VMX exposed to L1" via a governed feature flag instead of using a dedicated helper to provide the same functionality. The main goal is to drive convergence between VMX and SVM with respect to querying features that are controllable via module param (SVM likes to cache nested features); avoiding the guest CPUID lookups at runtime is just a bonus and unlikely to provide any meaningful performance benefits. Note, X86_FEATURE_VMX is set in kvm_cpu_caps if and only if "nested" is true, and the CPU obviously supports VMX if KVM+VMX is running. I.e. the check on "nested" is now implicitly done by the kvm_cpu_cap_has() check in kvm_governed_feature_check_and_set(). No functional change intended. Reviewed-by: Yuan Yao <yuan.yao@intel.com> Reviewed-by: Kai Huang <kai.huang@intel.com> Link: https://lore.kernel.org/r/20230815203653.519297-8-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2023-08-17KVM: x86: Use KVM-governed feature framework to track "XSAVES enabled"Sean Christopherson
Use the governed feature framework to track if XSAVES is "enabled", i.e. if XSAVES can be used by the guest. Add a comment in the SVM code to explain the very unintuitive logic of deliberately NOT checking if XSAVES is enumerated in the guest CPUID model. No functional change intended. Reviewed-by: Yuan Yao <yuan.yao@intel.com> Link: https://lore.kernel.org/r/20230815203653.519297-7-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2023-08-17  KVM: VMX: Rename XSAVES control to follow KVM's preferred "ENABLE_XYZ"  (Sean Christopherson)
Rename the XSAVES secondary execution control to follow KVM's preferred style so that XSAVES related logic can use common macros that depend on KVM's preferred style. No functional change intended. Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Link: https://lore.kernel.org/r/20230815203653.519297-6-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2023-08-17  KVM: VMX: Check KVM CPU caps, not just VMX MSR support, for XSAVE enabling  (Sean Christopherson)
Check KVM CPU capabilities instead of raw VMX support for XSAVES when determining whether or not XSAVES can/should be exposed to the guest. Practically speaking, it's nonsensical/impossible for a CPU to support "enable XSAVES" without XSAVES being supported natively. The real motivation for checking kvm_cpu_cap_has() is to allow using the governed feature's standard check-and-set logic. Reviewed-by: Yuan Yao <yuan.yao@intel.com> Link: https://lore.kernel.org/r/20230815203653.519297-5-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2023-08-17KVM: VMX: Recompute "XSAVES enabled" only after CPUID updateSean Christopherson
Recompute whether or not XSAVES is enabled for the guest only if the guest's CPUID model changes instead of redoing the computation every time KVM generates vmcs01's secondary execution controls. The boot_cpu_has() and cpu_has_vmx_xsaves() checks should never change after KVM is loaded, and if they do the kernel/KVM is hosed. Opportunistically add a comment explaining _why_ XSAVES is effectively exposed to the guest if and only if XSAVE is also exposed to the guest. Practically speaking, no functional change intended (KVM will do fewer computations, but should still see the same xsaves_enabled value whenever KVM looks at it). Reviewed-by: Yuan Yao <yuan.yao@intel.com> Link: https://lore.kernel.org/r/20230815203653.519297-4-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2023-08-17  KVM: x86/mmu: Use KVM-governed feature framework to track "GBPAGES enabled"  (Sean Christopherson)
Use the governed feature framework to track whether or not the guest can use 1GiB pages, and drop the one-off helper that wraps the surprisingly non-trivial logic surrounding 1GiB page usage in the guest. No functional change intended. Reviewed-by: Yuan Yao <yuan.yao@intel.com> Link: https://lore.kernel.org/r/20230815203653.519297-3-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2023-08-17  KVM: x86: Add a framework for enabling KVM-governed x86 features  (Sean Christopherson)
Introduce yet another X86_FEATURE flag framework to manage and cache KVM governed features (for lack of a better name). "Governed" in this case means that KVM has some level of involvement and/or vested interest in whether or not an X86_FEATURE can be used by the guest. The intent of the framework is twofold: to simplify caching of guest CPUID flags that KVM needs to frequently query, and to add clarity to such caching, e.g. it isn't immediately obvious that SVM's bundle of flags for "optional nested SVM features" tracks whether or not a flag is exposed to L1. Begrudgingly define KVM_MAX_NR_GOVERNED_FEATURES for the size of the bitmap to avoid exposing governed_features.h in arch/x86/include/asm/, but add a FIXME to call out that it can and should be cleaned up once "struct kvm_vcpu_arch" is no longer exposed to the kernel at large. Cc: Zeng Guang <guang.zeng@intel.com> Reviewed-by: Binbin Wu <binbin.wu@linux.intel.com> Reviewed-by: Kai Huang <kai.huang@intel.com> Reviewed-by: Yuan Yao <yuan.yao@intel.com> Link: https://lore.kernel.org/r/20230815203653.519297-2-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
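A hedged, self-contained sketch of the caching idea (a small per-vCPU bitmap); the names mirror the changelog but are not copied from the KVM sources:

    #include <linux/bitmap.h>

    #define KVM_MAX_NR_GOVERNED_FEATURES   16

    struct governed_features_sketch {
        DECLARE_BITMAP(enabled, KVM_MAX_NR_GOVERNED_FEATURES);
    };

    static inline void governed_feature_set(struct governed_features_sketch *gf, int f)
    {
        __set_bit(f, gf->enabled);        /* feature is usable by the guest */
    }

    static inline bool governed_feature_enabled(struct governed_features_sketch *gf, int f)
    {
        return test_bit(f, gf->enabled);  /* cheap check in hot paths */
    }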
2023-08-17  KVM: SVM: correct the size of spec_ctrl field in VMCB save area  (Manali Shukla)
Correct the spec_ctrl field in the VMCB save area based on the AMD Programmer's manual. Originally, the spec_ctrl was listed as u32 with 4 bytes of reserved area. The AMD Programmer's Manual now lists the spec_ctrl as 8 bytes in VMCB save area. The Public Processor Programming reference for Genoa, shows SPEC_CTRL as 64b register, but the AMD Programmer's Manual lists SPEC_CTRL as 32b register. This discrepancy will be cleaned up in next revision of the AMD Programmer's Manual. Since remaining bits above bit 7 are reserved bits in SPEC_CTRL MSR and thus, not being used, the spec_ctrl added as u32 in the VMCB save area is currently not an issue. Fixes: 3dd2775b74c9 ("KVM: SVM: Create a separate mapping for the SEV-ES save area") Suggested-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Manali Shukla <manali.shukla@amd.com> Link: https://lore.kernel.org/r/20230717041903.85480-1-manali.shukla@amd.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2023-08-17  x86: kvm: x86: Remove unnecessary initial values of variables  (Li zeming)
bitmap and khz are assigned before they are read, so their initial values are unnecessary. Signed-off-by: Li zeming <zeming@nfschina.com> Link: https://lore.kernel.org/r/20230817002631.2885-1-zeming@nfschina.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2023-08-17  KVM: VMX: Rename vmx_get_max_tdp_level() to vmx_get_max_ept_level()  (Shiyuan Gao)
In VMX, ept_level looks better than tdp_level and is consistent with SVM's get_npt_level(). Signed-off-by: Shiyuan Gao <gaoshiyuan@baidu.com> Link: https://lore.kernel.org/r/20230810113853.98114-1-gaoshiyuan@baidu.com [sean: massage changelog] Signed-off-by: Sean Christopherson <seanjc@google.com>
2023-08-17  KVM: SVM: Set target pCPU during IRTE update if target vCPU is running  (Sean Christopherson)
Update the target pCPU for IOMMU doorbells when updating IRTE routing if KVM is actively running the associated vCPU. KVM currently only updates the pCPU when loading the vCPU (via avic_vcpu_load()), and so doorbell events will be delayed until the vCPU goes through a put+load cycle (which might very well "never" happen for the lifetime of the VM). To avoid inserting a stale pCPU, e.g. due to racing between updating IRTE routing and vCPU load/put, get the pCPU information from the vCPU's Physical APIC ID table entry (a.k.a. avic_physical_id_cache in KVM) and update the IRTE while holding ir_list_lock. Add comments with --verbose enabled to explain exactly what is and isn't protected by ir_list_lock. Fixes: 411b44ba80ab ("svm: Implements update_pi_irte hook to setup posted interrupt") Reported-by: dengqiao.joey <dengqiao.joey@bytedance.com> Cc: stable@vger.kernel.org Cc: Alejandro Jimenez <alejandro.j.jimenez@oracle.com> Cc: Joao Martins <joao.m.martins@oracle.com> Cc: Maxim Levitsky <mlevitsk@redhat.com> Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com> Tested-by: Alejandro Jimenez <alejandro.j.jimenez@oracle.com> Reviewed-by: Joao Martins <joao.m.martins@oracle.com> Link: https://lore.kernel.org/r/20230808233132.2499764-3-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2023-08-17  KVM: SVM: Take and hold ir_list_lock when updating vCPU's Physical ID entry  (Sean Christopherson)
Hoist the acquisition of ir_list_lock from avic_update_iommu_vcpu_affinity() to its two callers, avic_vcpu_load() and avic_vcpu_put(), specifically to encapsulate the write to the vCPU's entry in the AVIC Physical ID table. This will allow a future fix to pull information from the Physical ID entry when updating the IRTE, without potentially consuming stale information, i.e. without racing with the vCPU being (un)loaded. Add a comment to call out that ir_list_lock does NOT protect against multiple writers, specifically that reading the Physical ID entry in avic_vcpu_put() outside of the lock is safe. To preserve some semblance of independence from ir_list_lock, keep the READ_ONCE() in avic_vcpu_load() even though acquiring the spinlock effectively ensures the load(s) will be generated after acquiring the lock. Cc: stable@vger.kernel.org Tested-by: Alejandro Jimenez <alejandro.j.jimenez@oracle.com> Reviewed-by: Joao Martins <joao.m.martins@oracle.com> Link: https://lore.kernel.org/r/20230808233132.2499764-2-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2023-08-17  KVM: x86: Remove WARN sanity check on hypervisor timer vs. UNINITIALIZED vCPU  (Sean Christopherson)
Drop the WARN in KVM_RUN that asserts that KVM isn't using the hypervisor timer, a.k.a. the VMX preemption timer, for a vCPU that is in the UNINITIALIZED activity state. The intent of the WARN is to sanity check that KVM won't drop a timer interrupt due to an unexpected transition to UNINITIALIZED, but unfortunately userspace can use various ioctl()s to force the unexpected state. Drop the sanity check instead of switching from the hypervisor timer to a software based timer, as the only reason to switch to a software timer when a vCPU is blocking is to ensure the timer interrupt wakes the vCPU, but said interrupt isn't a valid wake event for vCPUs in UNINITIALIZED state *and* the interrupt will be dropped in the end. Reported-by: Yikebaer Aizezi <yikebaer61@gmail.com> Closes: https://lore.kernel.org/all/CALcu4rbFrU4go8sBHk3FreP+qjgtZCGcYNpSiEXOLm==qFv7iQ@mail.gmail.com Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Link: https://lore.kernel.org/r/20230808232057.2498287-1-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2023-08-17  KVM: x86: Remove break statements that will never be executed  (Like Xu)
Fix compiler warnings when compiling KVM with [-Wunreachable-code-break]. No functional change intended. Cc: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Like Xu <likexu@tencent.com> Link: https://lore.kernel.org/r/20230807094243.32516-1-likexu@tencent.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2023-08-17  KVM: Wrap kvm_{gfn,hva}_range.pte in a per-action union  (Sean Christopherson)
Wrap kvm_{gfn,hva}_range.pte in a union so that future notifier events can pass event specific information up and down the stack without needing to constantly expand and churn the APIs. Lockless aging of SPTEs will pass around a bitmap, and support for memory attributes will pass around the new attributes for the range. Add a "KVM_NO_ARG" placeholder to simplify handling events without an argument (creating a dummy union variable is mildly annoying). Opportunistically drop explicit zero-initialization of the "pte" field, as omitting the field (now a union) has the same effect. Cc: Yu Zhao <yuzhao@google.com> Link: https://lore.kernel.org/all/CAOUHufagkd2Jk3_HrVoFFptRXM=hX2CV8f+M-dka-hJU4bP8kw@mail.gmail.com Reviewed-by: Oliver Upton <oliver.upton@linux.dev> Acked-by: Yu Zhao <yuzhao@google.com> Link: https://lore.kernel.org/r/20230729004144.1054885-1-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
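A hedged sketch of the shape of the change; the struct, union and field names are illustrative rather than the final KVM definitions:

    /* per-event payload: each notifier event interprets only its own member */
    union kvm_range_arg_sketch {
        pte_t pte;                  /* existing change-pte style users */
        unsigned long attributes;   /* future: memory attributes for the range */
    };

    struct kvm_gfn_range_sketch {
        gfn_t start;
        gfn_t end;
        union kvm_range_arg_sketch arg;   /* replaces the bare "pte" field */
        bool may_block;
    };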
2023-08-17  arm64/ptrace: Ensure that the task sees ZT writes on first use  (Mark Brown)
When the value of ZT is set via ptrace we don't disable traps for SME. This means that when the task has never used SME before, the value set via ptrace will never be seen by the target task, since it will trigger an SME access trap which will flush the register state. Disable SME traps when setting ZT; this means we also need to allocate storage for SVE if it is not already allocated, for the benefit of streaming SVE. Fixes: f90b529bcbe5 ("arm64/sme: Implement ZT0 ptrace support") Signed-off-by: Mark Brown <broonie@kernel.org> Cc: <stable@vger.kernel.org> # 6.3.x Link: https://lore.kernel.org/r/20230816-arm64-zt-ptrace-first-use-v2-1-00aa82847e28@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-08-17  arm64/ptrace: Ensure that SME is set up for target when writing SSVE state  (Mark Brown)
When we use NT_ARM_SSVE to either enable streaming mode or change the vector length for a process we do not currently do anything to ensure that there is storage allocated for the SME specific register state. If the task had not previously used SME or we changed the vector length then the task will not have had TIF_SME set or backing storage for ZA/ZT allocated, resulting in inconsistent register sizes when saving state and spurious traps which flush the newly set register state. We should set TIF_SME to disable traps and ensure that storage is allocated for ZA and ZT if it is not already allocated. This requires modifying sme_alloc() to make the flush of any existing register state optional so we don't disturb existing state for ZA and ZT. Fixes: e12310a0d30f ("arm64/sme: Implement ptrace support for streaming mode SVE registers") Reported-by: David Spickett <David.Spickett@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Cc: <stable@vger.kernel.org> # 5.19.x Link: https://lore.kernel.org/r/20230810-arm64-fix-ptrace-race-v1-1-a5361fad2bd6@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
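A hedged sketch of making the flush optional, per the changelog; the real arm64 sme_alloc() and its surrounding state handling may differ:

    void sme_alloc_sketch(struct task_struct *task, bool flush)
    {
        if (task->thread.sme_state) {
            /* existing ZA/ZT storage: only wipe it when the caller asks */
            if (flush)
                memset(task->thread.sme_state, 0, sme_state_size(task));
            return;
        }

        task->thread.sme_state = kzalloc(sme_state_size(task), GFP_KERNEL);
    }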
2023-08-17  x86/ibt: Convert IBT selftest to asm  (Josh Poimboeuf)
The following warning is reported when frame pointers and kernel IBT are enabled: vmlinux.o: warning: objtool: ibt_selftest+0x11: sibling call from callable instruction with modified stack frame The problem is that objtool interprets the indirect branch in ibt_selftest() as a sibling call, and GCC inserts a (partial) frame pointer prologue before it: 0000 000000000003f550 <ibt_selftest>: 0000 3f550: f3 0f 1e fa endbr64 0004 3f554: e8 00 00 00 00 call 3f559 <ibt_selftest+0x9> 3f555: R_X86_64_PLT32 __fentry__-0x4 0009 3f559: 55 push %rbp 000a 3f55a: 48 8d 05 02 00 00 00 lea 0x2(%rip),%rax # 3f563 <ibt_selftest_ip> 0011 3f561: ff e0 jmp *%rax Note the inline asm is missing ASM_CALL_CONSTRAINT, so the 'push %rbp' happens before the indirect branch and the 'mov %rsp, %rbp' happens afterwards. Simplify the generated code and make it easier to understand for both tools and humans by moving the selftest to proper asm. Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/99a7e16b97bda97bf0a04aa141d6241cd8a839a2.1680912949.git.jpoimboe@kernel.org
2023-08-17  s390/paes: fix PKEY_TYPE_EP11_AES handling for secure keyblobs  (Holger Dengler)
Commit fa6999e326fe ("s390/pkey: support CCA and EP11 secure ECC private keys") introduced PKEY_TYPE_EP11_AES securekey blobs as a supplement to PKEY_TYPE_EP11 (which won't work in environments with session-bound keys). These new keyblobs have a different maximum size, so fix the paes crypto module to also accept these larger keyblobs. Fixes: fa6999e326fe ("s390/pkey: support CCA and EP11 secure ECC private keys") Signed-off-by: Holger Dengler <dengler@linux.ibm.com> Reviewed-by: Ingo Franzki <ifranzki@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2023-08-17  s390/pkey: fix PKEY_TYPE_EP11_AES handling for sysfs attributes  (Holger Dengler)
Commit fa6999e326fe ("s390/pkey: support CCA and EP11 secure ECC private keys") introduced a new PKEY_TYPE_EP11_AES securekey type as a supplement to the existing PKEY_TYPE_EP11 (which won't work in environments with session-bound keys). The pkey EP11 securekey attributes now use PKEY_TYPE_EP11_AES (instead of PKEY_TYPE_EP11) keyblobs, to make the generated keyblobs also usable in environments where session-bound keys are required. There should be no negative impact to userspace because the internal structure of the keyblobs is opaque. The increased size of the generated keyblobs is reflected by the changed size of the attributes. Fixes: fa6999e326fe ("s390/pkey: support CCA and EP11 secure ECC private keys") Signed-off-by: Holger Dengler <dengler@linux.ibm.com> Reviewed-by: Ingo Franzki <ifranzki@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>